AWS Public Sector Blog
Harnessing the power of generative AI in AWS GovCloud
In an era of increasing regulatory scrutiny and data privacy requirements, organizations working in the public sector need robust, secure, and compliant solutions. Amazon Web Services (AWS) GovCloud (US) offers a specialized environment tailored for organizations that need to meet strict regulatory and compliance requirements. AWS GovCloud (US) adheres to the Federal Risk and Authorization Management Program (FedRAMP) High baseline, the Department of Justice (DOJ) Criminal Justice Information Services (CJIS) Security Policy, US International Traffic in Arms Regulations (ITAR), and other compliance mandates. A full list can be found at Compliance in the AWS GovCloud (US) User Guide. The introduction of generative artificial intelligence (AI) in this space is transformative, providing unparalleled opportunities for automation, decision-making, and content generation, all while adhering to stringent security and compliance standards.
In this post, we explore how generative AI, powered by services such as Amazon Bedrock and Amazon SageMaker, can be harnessed to meet the unique challenges of public sector organizations in AWS GovCloud (US). In AWS GovCloud (US) Regions, AWS does not use or store AI content processed by Amazon Bedrock and SageMaker; the opt-out for AI data collection is applied by default, so your content is not used to improve the base models and is not shared with any model providers. We highlight use cases that demonstrate the potential of generative AI to enhance efficiency, automate workflows, and extract insights, all within a secure, compliant framework. At the time of this post's publication, the following models are available in the AWS GovCloud (US-West) Region:
- Amazon Titan in Amazon Bedrock (Amazon Titan Text G1 – Express and Amazon Titan Text Embeddings V2)
- Anthropic’s Claude in Amazon Bedrock (Anthropic’s Claude 3 Haiku and Claude 3.5 Sonnet)
- Meta Llama in Amazon Bedrock (Meta Llama 3 8B Instruct and Meta Llama 3 70B Instruct)
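Model availability in AWS GovCloud (US) evolves over time, so it is worth confirming what is offered in your Region before you build. As a minimal sketch, assuming the AWS SDK for Python (Boto3) is configured with credentials for an AWS GovCloud (US) account, the following call lists the foundation models currently available in the us-gov-west-1 Region:

```python
import boto3

# Minimal sketch: list the foundation models offered in the AWS GovCloud (US-West) Region.
# Assumes Boto3 is configured with credentials for an AWS GovCloud (US) account.
bedrock = boto3.client("bedrock", region_name="us-gov-west-1")

for model in bedrock.list_foundation_models()["modelSummaries"]:
    print(f'{model["providerName"]}: {model["modelId"]}')
```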
Automating legal document review
In heavily regulated sectors such as government and defense, legal compliance is critical. Organizations often need to review massive volumes of legal documents to confirm that contracts, policies, and agreements comply with regulations. This process can be time-consuming and prone to human error.
Solution: By using Amazon Bedrock large language models (LLMs), public sector organizations can automate legal document review within AWS GovCloud (US). Major challenges in this space include document diversity, selective redaction, and scaling to larger volumes and higher throughput. These models can extract key clauses, flag compliance issues, and even summarize legal documents. Paired with AWS Lambda, the solution can trigger document parsing, analysis, and reporting in real time, which not only accelerates the review process but also reduces the risk of human error.
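To make this pattern concrete, the following is a minimal sketch, not a complete implementation: an AWS Lambda handler triggered by an Amazon S3 upload that sends the contract text to Anthropic's Claude 3 Haiku through the Amazon Bedrock Converse API and stores the analysis back in S3. The bucket layout, prompt wording, and model ID are assumptions to adapt for your environment, and model availability should be verified in your Region.

```python
import json
import boto3

# Sketch of an S3-triggered legal document review step (assumes plain-text documents).
s3 = boto3.client("s3")
bedrock = boto3.client("bedrock-runtime", region_name="us-gov-west-1")

MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # verify availability in your Region

def handler(event, context):
    # Read the uploaded contract referenced by the S3 PUT event.
    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]
    document = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

    prompt = (
        "Review the following contract. Extract the key clauses, flag any potential "
        "compliance issues, and provide a short summary.\n\n" + document
    )

    # Ask the model to analyze the document.
    response = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 1024, "temperature": 0.2},
    )
    analysis = response["output"]["message"]["content"][0]["text"]

    # Store the analysis alongside the source document for downstream reporting.
    s3.put_object(Bucket=bucket, Key=f"reviews/{key}.analysis.txt", Body=analysis.encode("utf-8"))
    return {"statusCode": 200, "body": json.dumps({"reviewed": key})}
```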
Read OpenText Uses AWS to Help Law Firms Manage Large Document Volumes, Onboard Clients Faster for more information.
Data summarization for agencies
Agencies often need to process large datasets to extract meaningful insights from unstructured data such as text, audio, or video. Traditional data processing methods can be slow and resource-intensive, especially when handling sensitive information that must remain secure and compliant.
Solution: Generative AI models within Amazon Bedrock can automatically summarize and categorize intelligence data. For example, agencies can now use the Claude 3.5 Sonnet model to analyze large datasets and generate concise reports that highlight critical information. These reports can then be stored in Amazon Simple Storage Service (Amazon S3), a cloud object storage service with high availability and security, and, combined with Lambda, the summarization process can be automated end to end. Agencies often employ prompt engineering techniques to maximize the effectiveness of these AI models.
Prompt engineering is the process of designing and refining text inputs to guide generative AI systems in producing desired and high-quality outputs.
The following is an example of a prompt for data summarization.
Analyze the provided legal document and extract the following key insights:
- Identify the main contractual terms, such as payment schedules, delivery dates, and performance requirements.
- Extract any important obligations or responsibilities of the parties involved.
- Identify any references to applicable laws, regulations, or industry standards, and provide details about compliance requirements or potential legal/regulatory risks.
- Identify any references to intellectual property, such as patents, trademarks, or copyrights, and extract details about ownership, licensing, or other IP-related terms.
- Identify key financial information, such as pricing, fees, payment terms, or revenue-sharing arrangements, as well as any commercial considerations, such as exclusivity, territory, or distribution agreements.
- Identify any references to potential risks, challenges, or areas of concern, and extract details about any known or anticipated problems that may need to be addressed.
Provide a concise summary of the key insights extracted from the document, organized by the categories above. Highlight any particularly important or noteworthy information that may be relevant to the stakeholders.
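As a hedged sketch of how such a prompt might be applied, the following sends the instructions above as a system prompt to Anthropic's Claude 3.5 Sonnet through the Amazon Bedrock Converse API and writes the generated report to Amazon S3. The bucket name, object keys, and model ID are placeholder assumptions.

```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-gov-west-1")
s3 = boto3.client("s3")

MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"  # verify availability in your Region
INSTRUCTIONS = "Analyze the provided legal document and extract the following key insights: ..."  # the prompt above

# Pull the raw document from S3 (placeholder bucket and key).
document_text = s3.get_object(Bucket="agency-data", Key="raw/report-001.txt")["Body"].read().decode("utf-8")

# Send the instructions as the system prompt and the document as the user message.
response = bedrock.converse(
    modelId=MODEL_ID,
    system=[{"text": INSTRUCTIONS}],
    messages=[{"role": "user", "content": [{"text": document_text}]}],
    inferenceConfig={"maxTokens": 2048, "temperature": 0.1},
)
summary = response["output"]["message"]["content"][0]["text"]

# Store the summary so a Lambda function can automate downstream distribution.
s3.put_object(Bucket="agency-data", Key="summaries/report-001.md", Body=summary.encode("utf-8"))
```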
Read Introducing ‘Get started with generative AI on AWS: A guide for public sector organizations’ for more information.
Enhancing government chat-based assistant services
Public-facing government agencies need to offer fast, accurate, and compliant customer service. With citizens interacting more frequently through digital platforms, there is an increasing demand for responsive and efficient chat assistant solutions. State and local governments are already experiencing success by automating Department of Motor Vehicles (DMV) workflows through chat assistants to reduce wait times.
Solution: Using Amazon Bedrock, public sector organizations can build generative AI–powered chat assistants that enhance citizen interactions. Retrieval-Augmented Generation (RAG) with databases such as Amazon Relational Database Service (Amazon RDS) or Amazon OpenSearch Service can be used to develop FedRAMP-compliant chat assistants. These chat assistants can handle a wide range of requests, from providing information on public services to guiding citizens through complex processes. In a typical workflow, the knowledge dataset (including PDFs and other documents) is stored in Amazon S3, an Amazon database service stores the embedded vectors, and Amazon Bedrock is used to query that database using natural language. This gives users a complete, step-by-step chat assistant that answers questions based only on information stored in the provided dataset.
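To illustrate the retrieval flow, here is a minimal RAG sketch under stated assumptions: the knowledge dataset has already been chunked, embedded with Amazon Titan Text Embeddings V2, and loaded into a vector store; `search_vector_store` is a hypothetical helper standing in for a query against Amazon RDS (for example, with pgvector) or Amazon OpenSearch Service; and the model IDs should be verified in your Region.

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-gov-west-1")

EMBED_MODEL_ID = "amazon.titan-embed-text-v2:0"               # Titan Text Embeddings V2
CHAT_MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"   # verify availability in your Region

def embed(text: str) -> list[float]:
    # Embed the citizen's question with Titan Text Embeddings V2.
    response = bedrock.invoke_model(modelId=EMBED_MODEL_ID, body=json.dumps({"inputText": text}))
    return json.loads(response["body"].read())["embedding"]

def search_vector_store(embedding: list[float], k: int = 5) -> list[str]:
    # Hypothetical helper: nearest-neighbor lookup against the vector database
    # (Amazon RDS with pgvector or Amazon OpenSearch Service) holding the embedded dataset.
    raise NotImplementedError("Wire this to your vector database")

def answer(question: str) -> str:
    # Retrieve relevant chunks, then ground the model's answer in that context only.
    context_chunks = search_vector_store(embed(question))
    prompt = (
        "Answer the question using only the provided context. If the answer is not in "
        "the context, say you do not know.\n\nContext:\n"
        + "\n---\n".join(context_chunks)
        + f"\n\nQuestion: {question}"
    )
    response = bedrock.converse(
        modelId=CHAT_MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 1024, "temperature": 0.2},
    )
    return response["output"]["message"]["content"][0]["text"]
```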
Read Build a FedRAMP compliant generative AI–powered chatbot using Amazon Aurora Machine Learning and Amazon Bedrock for more information.