AWS Public Sector Blog
Streamlining naturalization applications with Amazon Bedrock
Public sector organizations worldwide face a common challenge: processing an ever-growing volume of document-heavy applications across various services. From naturalization procedures to asylum applications and university admissions, many crucial processes still rely on manual or partially manual methods, leading to significant backlogs, extended processing times, and increased costs.
This post explores how Amazon Bedrock can be used to address these challenges, focusing on streamlining naturalization applications. While we focus on naturalization as our primary example, the solution discussed can be applied to any public sector use case involving large-scale document processing.
Take naturalization applications as an example. These typically require multiple documents to verify an applicant's eligibility, including proof of identity, proof of residency, and tax documents. The impact of inefficient processing is evident globally:
- In 2023, Ireland received 20,650 naturalization applications while grappling with a backlog of 15,000 applications from previous years, resulting in an average processing time of 19 months.
- In the United States, the US Citizenship and Immigration Services (USCIS) received approximately 781,000 naturalization applications in fiscal year (FY) 2022 and completed nearly 1,076,000 applications—a 20 percent increase from FY 2021 and the highest in nearly 15 years.
- The UK faced similar pressures, with 210,465 citizenship applications in the year ending June 2023.
These challenges aren’t unique to naturalization. Asylum applications often involve complex documentation from various sources and university admissions require processing transcripts, recommendation letters, and other supporting materials from a large pool of applicants.
Many agencies still rely on outdated methods to process these applications:
- Manual review – Human agents physically examining each document
- Basic digital tools – Simple document management systems with limited automation
- Siloed information – Lack of integration between different stages of the application process
These limitations result in systems struggling to keep pace with demand, frustrating applicants and creating inefficiencies for government agencies across various services.
Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon using a single API. It provides a broad set of capabilities needed to build generative AI applications with security, privacy, and responsible AI. For this solution, we cover the technical implementation using Anthropic’s Claude 3.5 Sonnet large language model (LLM) on Amazon Bedrock.
For naturalization applications, LLMs offer key advantages. They enable rapid document classification and information extraction, which means easier application filing for the applicant and more efficient application reviewing for the immigration officer. LLMs also promote consistency in application evaluation, reducing potential bias, and provide scalability to handle large volumes of applications, even during surges like those seen during the COVID-19 pandemic. By using LLMs through Amazon Bedrock, government agencies can significantly reduce processing times, improve accuracy in application assessment, and allocate human resources more effectively, transforming their document processing workflows.
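To make the classification step concrete, the following is a minimal sketch of invoking Claude 3.5 Sonnet through the Amazon Bedrock Converse API with boto3. The category list and prompt wording are illustrative assumptions, not the production prompt used in the demo.

```python
# Hypothetical document categories for a naturalization application
CATEGORIES = ["passport", "employment_detail_summary", "utility_bill", "other"]

def build_classification_prompt(document_text: str) -> str:
    """Builds a prompt asking the model to classify a document into one category."""
    return (
        "Classify the following document into exactly one of these categories: "
        + ", ".join(CATEGORIES)
        + ".\nRespond with only the category name.\n\nDocument:\n"
        + document_text
    )

def classify_document(document_text: str, region: str = "us-east-1") -> str:
    """Sends the classification prompt to Claude 3.5 Sonnet via the Converse API."""
    import boto3  # imported here so the prompt builder stays dependency-free

    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.converse(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
        messages=[{
            "role": "user",
            "content": [{"text": build_classification_prompt(document_text)}],
        }],
        inferenceConfig={"maxTokens": 64, "temperature": 0},
    )
    return response["output"]["message"]["content"][0]["text"].strip()
```

Setting `temperature` to 0 keeps classification output deterministic, which matters when the category name feeds downstream logic.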
Solution process flow
To demonstrate this solution, we use Ireland’s naturalization process as an example and walk you through the step-by-step flow. The simplified criteria for this process include the following requirements for naturalization:
- A valid passport from the applicant’s home country as proof of identity.
- Proof of residency for three years, evaluated using a scoring system. Each applicant must score 150 points for each of the three years by providing one document from each of the following categories:
  - Type A: Examples include Employment Detail Summaries or Department of Social Protection Annual Contributions, which grant 100 points.
  - Type B: Examples include utility bills, which grant 50 points.
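The scoring rule above is simple enough to sketch directly; the following helper encodes the 150-points-per-year threshold. The function names and the year-keyed input shape are illustrative assumptions.

```python
# Points granted per document category, per the simplified criteria above
POINTS = {"A": 100, "B": 50}
REQUIRED_PER_YEAR = 150

def year_score(doc_types) -> int:
    """Total points for one year, given the document categories provided."""
    return sum(POINTS.get(t, 0) for t in doc_types)

def residency_satisfied(docs_by_year) -> bool:
    """True if each of the three years reaches the 150-point threshold.

    docs_by_year maps a year to the list of document categories supplied
    for that year, for example {2021: ["A", "B"], ...}.
    """
    return len(docs_by_year) == 3 and all(
        year_score(types) >= REQUIRED_PER_YEAR for types in docs_by_year.values()
    )
```

For example, one Type A document plus one Type B document yields exactly 150 points for a year, while a single utility bill falls short.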
The following demo highlights the solution in action, providing an end-to-end walkthrough of how naturalization applications are processed. It demonstrates the entire workflow, including document upload, information extraction, residency proof scoring, summary generation, and the immigration officer’s review.
The process follows these steps:
1. The applicant uploads all required documents without needing to fill in any fields. In this step we use an LLM for classification and data extraction from the documents. This helps the applicant save time because they only upload documents instead of filling out long forms. The following screenshot shows the Upload documents page of the developed demo.
2. The LLM processes each document, extracting necessary information based on the prompt, and provides a summary of the processed documents. The summary section gives an immediate overview to the applicant based on the documents they provided. This allows the applicant to add any missing documents to their application without waiting for the officer’s review. This summary offers one of two possibilities:
a. Confirm all documents are present and complete, as shown in the following screenshot.
b. Identify any missing documents, as shown in the following screenshot.
3. The immigration officer reviews the application, where they will be presented with the applicant details, a list of the documents provided by the applicant, and a recommendation on the application status, which could be one of two possibilities:
a. Complete, as shown in the following screenshot.
b. Missing some information, as shown in the following screenshot.
Solution walkthrough
This section provides a detailed walkthrough of the solution and its two primary applications of Anthropic’s Claude 3.5 Sonnet LLM: document processing for data extraction and summarization of the extracted information.
The solution uses the multimodal capabilities of Claude 3.5 Sonnet alongside prompt engineering techniques to refine outputs and meet specific requirements with precision.
Techniques such as few-shot prompting, where relevant context is provided through examples, and chain-of-thought prompting, which guides the model to reason step by step toward more thoughtful and accurate responses, play a critical role in enhancing the reliability of results. These strategies are integral to both the extraction and summarization processes, helping the solution consistently deliver high-quality outputs tailored to the task at hand.
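The two techniques can be combined in a single prompt: a chain-of-thought instruction followed by worked examples. The sketch below builds such an extraction prompt; the example documents and JSON schema are hypothetical stand-ins for real annotated samples.

```python
# Hypothetical few-shot examples; real ones would come from annotated documents
FEW_SHOT_EXAMPLES = [
    {
        "document": "ESB Electricity bill, 12 May 2022, John Murphy, 4 Main St, Dublin",
        "extraction": '{"type": "utility_bill", "category": "B", "year": 2022, "name": "John Murphy"}',
    },
    {
        "document": "Employment Detail Summary 2022, Revenue, Jane Doe, PPSN 1234567A",
        "extraction": '{"type": "employment_detail_summary", "category": "A", "year": 2022, "name": "Jane Doe"}',
    },
]

def build_extraction_prompt(document_text: str) -> str:
    """Combines a chain-of-thought instruction with few-shot examples."""
    parts = [
        "You extract structured data from naturalization documents.",
        "Think step by step: first identify the document type, then the year "
        "it covers, then the applicant name. Finally, output a single JSON object.",
    ]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(f"Document:\n{ex['document']}\nExtraction:\n{ex['extraction']}")
    parts.append(f"Document:\n{document_text}\nExtraction:")
    return "\n\n".join(parts)
```

Ending the prompt with a bare `Extraction:` nudges the model to continue the pattern established by the examples.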
Figure 7 illustrates the architectural design of the solution.

Figure 7. Architectural diagram of the solution described in this post. The major components are an Amazon Simple Storage Service (Amazon S3) bucket, Amazon CloudFront, Amazon Cognito, AWS Lambda, Amazon DynamoDB, AWS AppSync, and Amazon Simple Queue Service (Amazon SQS).
Steps 1 to 4:
a. The applicant authenticates to the immigration portal using Amazon Cognito. After successful authentication, the applicant is provided with a pre-signed URL to allow them to upload documents securely to the Applicant Documents Amazon Simple Storage Service (Amazon S3) bucket.
b. The file upload process invokes the Process Documents AWS Lambda function.
Steps 5 to 6:
a. The Process Documents Lambda function invokes Amazon Bedrock, passing the document to the Claude model. Using a carefully crafted prompt, Anthropic’s Claude extracts the necessary information from the documents. The extracted data is then saved to an Amazon DynamoDB table.
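A minimal sketch of this Lambda's core logic is shown below: the raw document bytes are attached as a `document` content block in the Converse API request, exercising the model's multimodal input, and the parsed result is written to DynamoDB. The table name, prompt, and extracted-field schema are assumptions for illustration.

```python
import json

# Hypothetical model ID and table name
MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"
TABLE_NAME = "ApplicantDocuments"

EXTRACTION_PROMPT = (
    "Extract the document type, the year it covers, and the applicant name "
    "from the attached document. Respond with a single JSON object."
)

def build_converse_request(document_bytes: bytes, doc_format: str = "pdf") -> dict:
    """Builds a Converse API request attaching the raw document for
    multimodal processing by Claude 3.5 Sonnet."""
    return {
        "modelId": MODEL_ID,
        "messages": [{
            "role": "user",
            "content": [
                {"document": {"format": doc_format, "name": "applicant-document",
                              "source": {"bytes": document_bytes}}},
                {"text": EXTRACTION_PROMPT},
            ],
        }],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0},
    }

def process_document(application_id: str, document_bytes: bytes) -> dict:
    """Invokes Bedrock, then persists the extracted fields to DynamoDB."""
    import boto3  # imported here so the request builder stays dependency-free

    bedrock = boto3.client("bedrock-runtime")
    response = bedrock.converse(**build_converse_request(document_bytes))
    extracted = json.loads(response["output"]["message"]["content"][0]["text"])
    item = {"applicationId": application_id, **extracted}
    boto3.resource("dynamodb").Table(TABLE_NAME).put_item(Item=item)
    return item
```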
Steps 7 to 8:
a. A GraphQL mutation triggers an AWS AppSync request, which invokes the Initiate Application Brief Lambda function.
b. This Lambda function processes the event, retrieves the user's application ID, and sends two messages to Amazon Simple Queue Service (Amazon SQS).
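The fan-out in step 8 can be sketched as one message per downstream consumer: one for the applicant summary and one for the officer brief. The queue URLs and message body shape are illustrative assumptions.

```python
import json

# Hypothetical queue URLs for the two downstream consumers
APPLICANT_BRIEF_QUEUE = "https://sqs.eu-west-1.amazonaws.com/111122223333/applicant-brief"
OFFICER_BRIEF_QUEUE = "https://sqs.eu-west-1.amazonaws.com/111122223333/officer-brief"

def build_brief_messages(application_id: str) -> list:
    """One (queue URL, body) pair per consumer, sharing the same payload."""
    body = json.dumps({"applicationId": application_id})
    return [(APPLICANT_BRIEF_QUEUE, body), (OFFICER_BRIEF_QUEUE, body)]

def initiate_application_brief(application_id: str) -> None:
    """Sends both messages so the two briefs are generated independently."""
    import boto3  # lazy import keeps the message builder testable offline

    sqs = boto3.client("sqs")
    for queue_url, body in build_brief_messages(application_id):
        sqs.send_message(QueueUrl=queue_url, MessageBody=body)
```

Using two queues decouples the applicant-facing summary from the officer-facing recommendation, so either consumer can retry or scale without affecting the other.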
Steps 9 to 11:
a. The Generate Applicant Brief Lambda function retrieves the message from the SQS queue.
b. This Lambda function then queries the Amazon DynamoDB table to retrieve the information associated with the application reference extracted from the SQS message.
c. The Lambda function then passes the applicant information to the LLM in Amazon Bedrock. Using a carefully crafted prompt, the LLM evaluates the documents presented by the applicant and provides a summary.
d. The case summary is then saved to Amazon DynamoDB along with the applicant information.
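The summarization prompt in step 11 might look like the sketch below, which folds the scoring criteria and the applicant's extracted documents into a single request. The record shape mirrors what the extraction step could plausibly store in DynamoDB; field names are assumptions.

```python
def build_summary_prompt(applicant_record: dict) -> str:
    """Asks the model to check the residency criteria and flag missing documents."""
    lines = [
        "You are assisting with an Irish naturalization application.",
        "Review the extracted documents below. For each of the last three years, "
        "check whether one Type A (100 points) and one Type B (50 points) document "
        "is present, reaching 150 points per year. Reason step by step, then write "
        "a short summary for the applicant listing any missing documents.",
        "",
        f"Applicant: {applicant_record.get('name', 'unknown')}",
        "Documents:",
    ]
    for doc in applicant_record.get("documents", []):
        lines.append(f"- {doc['type']} (category {doc['category']}, year {doc['year']})")
    return "\n".join(lines)
```

The same builder pattern, with a prompt oriented toward an eligibility recommendation rather than an applicant summary, serves the officer brief in steps 12 to 14.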
Steps 12 to 14:
These steps focus on the immigration officer’s side of the process:
a. The Generate Officer Brief Lambda function retrieves a message from the SQS queue.
b. This function then queries Amazon DynamoDB to retrieve the applicant’s information based on the reference extracted from the SQS message.
c. The Lambda function passes the applicant information to the LLM in Amazon Bedrock. Using a specialized prompt, the LLM evaluates the applicant’s eligibility for naturalization and provides a recommendation.
d. The eligibility assessment and recommendation are saved back to Amazon DynamoDB, associated with the applicant’s record.
Conclusion
This use case demonstrates the transformative potential of Amazon Bedrock and Anthropic’s Claude 3.5 Sonnet in streamlining the naturalization application process. By using AI capabilities, you can:
- Reduce manual document review time.
- Provide instant feedback to applicants on their application status.
- Scale to handle large volumes of applications without compromising accuracy.
Although this use case focuses on naturalization applications, the same principles and technologies can be applied to a wide range of document-intensive processes across the public sector.
Are you ready to transform your document processing workflows with AI? Explore Amazon Bedrock and see how it can bring efficiency and intelligence to your public sector operations. Contact your AWS account team to discuss how this solution can be tailored to your specific needs.