Integration & Automation

Build an automated deployment of generative AI with agent lifecycle changes using Terraform

Generative AI technology holds immense potential for driving innovation, enhancing content creation, improving customer experiences, streamlining operations, and boosting productivity across various industries. To unlock this transformative power, Amazon Bedrock offers a fully managed service that integrates high-performance foundation models from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, Mistral AI, and Amazon. By providing a single API to access these advanced models and supporting the development of generative AI applications with a strong focus on security, privacy, and responsible AI practices, Amazon Bedrock empowers organizations to explore new avenues for innovation and elevate their overall offerings through the power of AI.

In this blog post, we’ll guide you through the process of creating a fast, reusable, and repeatable Retrieval-Augmented Generation (RAG)-based generative AI infrastructure as code (IaC) solution using Terraform, following best practices and using a sample chatbot use case as a blueprint. This solution provides a foundational framework that you can customize and expand to develop robust, scalable, and efficient chatbot applications on the AWS Cloud using Amazon Bedrock. You’ll learn how to automate the provisioning of the essential components of a generative AI chatbot, such as Amazon Bedrock Agents, Amazon Bedrock Knowledge Bases, Amazon OpenSearch Serverless, AWS Lambda, Amazon Simple Storage Service (Amazon S3) buckets for Amazon Bedrock Knowledge Bases, Amazon Bedrock Guardrails, and the AWS Identity and Access Management (IAM) roles required to permit and link these services.

We also dive deep into automated custom lifecycle logic that prepares the agent and creates and updates an agent alias whenever there are changes to the agent’s action group Lambda code, the agent instructions, or the knowledge base.

Solution infrastructure diagram

The following figure shows the solution infrastructure.

This architecture diagram illustrates the integration of Amazon Bedrock with various AWS services to create an intelligent, secure, and automated workflow. The Amazon Bedrock agent is at the core, using Amazon Bedrock Knowledge Bases to handle queries and Amazon Bedrock agent action groups to invoke the necessary actions through an OpenAPI schema, Lambda, and Amazon S3. The solution incorporates VPC (virtual private cloud) endpoints, which securely connect to resources within a private subnet. Inside the private subnet, a Lambda function performs automated tasks triggered by the Bedrock agent. This function interacts with various AWS resources while helping to maintain security and compliance. This setup makes sure that user requests are processed efficiently, securely, and in a scalable manner within the AWS environment.

Architecture diagram

Solution setup

The code for the solution is available on the GitHub repository in aws-samples. Clone the code on your machine using the following command:

git clone "git@github.com:aws-samples/intelligent-rag-bedrockagent-iac.git"

Package structure

After the repository is cloned, you will find the code structure shown in the following figure. Each service required to build the solution has its own module, conforming to best practices for writing repeatable IaC code. The code can be integrated into a DevOps tool or lifecycle process.

Package structure

The root module calls the submodules in the sequence required to set up the solution stack.

Each module consumes the outputs of other modules as its inputs. The following list describes the purpose of each module (a minimal sketch of this wiring follows the list):

  • The knowledge_base_bucket module is used to build the knowledge base data source S3 bucket with encryption, access logging, versioning, object lifecycle configuration, and (optional) Amazon Simple Notification Service (Amazon SNS) alerting for object uploads. The resources in the module can be configured to fit your use case.
  • The roles module builds the IAM roles for the agent, knowledge bases, and Lambda functions following security best practices, including AWS KMS encryption and decryption permissions.
  • The bedrock_knowledge_base module creates a knowledge base and the knowledge base data source, and it creates a policy to allow an OpenSearch Serverless collection to be integrated with the knowledge base through the aoss module. This module also creates AWS Systems Manager parameters to store the values of the knowledge base ID and knowledge base data source ID.
  • The aoss module builds the OpenSearch Serverless collection with a FAISS index and the three OpenSearch Serverless policies: encryption, data access, and network. This module uses the roles created in the roles module, referencing values such as the knowledge base role Amazon Resource Name (ARN).
  • The bedrock_agent module creates an Amazon Bedrock agent, a knowledge base association with the agent, an action group Lambda function that references the OpenAPI schema provided for the sample application, and finally Systems Manager parameters to store the agent IDs, agent alias IDs, agent instructions, agent action group instructions, and the Lambda function’s last update status, automating Lambda function code updates. This module references roles created by the roles module (such as the Lambda role and agent role), knowledge base bucket names, knowledge base ID, and the ARNs to build the agent.
  • The bedrock_guardrail module creates Amazon Bedrock guardrails with the user’s chosen settings to safeguard user inputs and responses and to detect sensitive information such as personally identifiable information (PII). Amazon Bedrock Guardrails assesses user-provided information and foundation model outputs according to scenario-specific guidelines, offering an extra level of protection irrespective of the base foundation model (FM) used. The module blocks content based on the following policies, which are configurable through user input in variables.tf (a hedged example of such a guardrail configuration follows this list):
    • Content policy configuration for the Amazon Bedrock guardrails
    • Sensitive information policy configuration for the Amazon Bedrock guardrails
    • Topic policy configuration for the Amazon Bedrock guardrails
    • Word policy configuration for the Amazon Bedrock guardrails
  • The vpc_endpoints module creates the interface and gateway endpoints required for private communication between the services and the infrastructure.
  • The agent_update_lifecycle module builds custom logic that automatically helps developers by creating a new agent alias. This alias points to the existing agent configuration. See the following section for the detailed workflow of this module.
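
To make these module relationships concrete, the following is a minimal sketch of how a root module might wire a few of the submodules together. The module paths, variable names, and outputs shown here are illustrative assumptions and will differ from the actual code in the repository.

# main.tf (root module): each module consumes outputs from the modules it depends on.
# Names and variables are assumed for illustration only.

module "roles" {
  source      = "./modules/roles"
  app_name    = var.app_name
  kms_key_arn = var.kms_key_arn
}

module "knowledge_base_bucket" {
  source        = "./modules/knowledge_base_bucket"
  bucket_prefix = "${var.app_name}-kb-source"
  kms_key_arn   = var.kms_key_arn
}

module "aoss" {
  source                  = "./modules/aoss"
  collection_name         = "${var.app_name}-vectors"
  knowledge_base_role_arn = module.roles.knowledge_base_role_arn
}

module "bedrock_knowledge_base" {
  source              = "./modules/bedrock_knowledge_base"
  kb_role_arn         = module.roles.knowledge_base_role_arn
  data_source_bucket  = module.knowledge_base_bucket.bucket_arn
  collection_arn      = module.aoss.collection_arn
  embedding_model_arn = var.embedding_model_arn
}

module "bedrock_agent" {
  source             = "./modules/bedrock_agent"
  agent_role_arn     = module.roles.agent_role_arn
  lambda_role_arn    = module.roles.lambda_role_arn
  knowledge_base_id  = module.bedrock_knowledge_base.knowledge_base_id
  foundation_model   = var.agent_model_id
  agent_instructions = var.agent_instructions
}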
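
As a companion to the bedrock_guardrail module description above, here is a hedged sketch of what a guardrail resource covering the four policy types might look like in Terraform. The filter types, PII entities, topics, and messages are example values rather than the repository's defaults, and the block names should be verified against the AWS provider version you use.

# Illustrative guardrail covering content, sensitive information, topic, and word policies
resource "aws_bedrock_guardrail" "chatbot" {
  name                      = "fitness-chatbot-guardrail"
  blocked_input_messaging   = "Sorry, I can't respond to that request."
  blocked_outputs_messaging = "Sorry, I can't share that response."

  # Content policy configuration: filter harmful content on input and output
  content_policy_config {
    filters_config {
      type            = "HATE"
      input_strength  = "HIGH"
      output_strength = "HIGH"
    }
  }

  # Sensitive information policy configuration: detect and anonymize PII
  sensitive_information_policy_config {
    pii_entities_config {
      type   = "EMAIL"
      action = "ANONYMIZE"
    }
  }

  # Topic policy configuration: deny out-of-scope topics
  topic_policy_config {
    topics_config {
      name       = "medical-diagnosis"
      type       = "DENY"
      definition = "Requests for medical diagnoses or treatment plans."
    }
  }

  # Word policy configuration: block a managed profanity list
  word_policy_config {
    managed_word_lists_config {
      type = "PROFANITY"
    }
  }
}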

Prerequisites

  1. Access management: The operations team must carefully delegate access for the services created by the solution. Proper access control measures should be in place to help secure and control access; for example, using Systems Manager, model access, an IAM role for deployment, and service-based roles; disabling public access to S3 buckets; and setting up a dead-letter queue.
  2. AWS CLI version: Make sure that the AWS Command Line Interface (AWS CLI) v2 is installed and configured correctly on the deployment environment.
  3. Provider versions: Before proceeding with the deployment, update the provider versions to the latest available versions to benefit from the latest features and security updates (a version-pinning sketch follows this list).
  4. API security: Review the OpenAPI schema and implement appropriate authentication mechanisms to help protect against unauthorized access and maintain data integrity.
  5. Model dependencies: Verify that the required Amazon Titan Text v2 and Claude 3 Haiku models are enabled in your AWS account. These models are essential for the solution’s functionality and can be changed in the variables. See Add or remove access to Amazon Bedrock foundation models for how to enable model access for the AWS Region where you are deploying the solution.
  6. Regional compatibility: Confirm that the target deployment region supports the required models. Cross-check the model access and availability in the chosen region to avoid deployment failures.
  7. Network architecture: Provision a regulated virtual private cloud (VPC) following AWS Well-Architected Framework best practices. This VPC will host the solution and support a secure and high-performing network architecture.
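
For the provider-version prerequisite above, a typical version-pinning block is shown below as a reference. Treat the constraints as placeholders; the exact minimum versions required by the repository may differ.

# versions.tf: pin Terraform and AWS provider versions (illustrative constraints)
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source = "hashicorp/aws"
      # Amazon Bedrock agent and guardrail resources were added in recent 5.x releases;
      # check the repository for its exact version constraint.
      version = ">= 5.0"
    }
  }
}

provider "aws" {
  region = var.aws_region
}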

Deployment instructions

Use the following steps to deploy the solution, which contains a sample application.

  1. Clone the repository.

git clone "git@github.com:aws-samples/intelligent-rag-bedrockagent-iac.git"

  2. Navigate to the project base directory.

cd intelligent-rag-bedrockagent-iac

  3. Initialize Terraform.

terraform init

  4. Run terraform plan to view the list of resources that this solution will create.

terraform plan

  5. Run terraform apply. The --auto-approve flag is optional. If you don’t use it, you will be prompted to manually enter a confirmation.

terraform apply --auto-approve

Note: Ensure that User Input is enabled for the agent after deployment.
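
If you want to override the defaults before running terraform apply, you can supply a terraform.tfvars file. The variable names below are hypothetical and may not match the repository’s variables.tf; the model identifiers are the standard Amazon Bedrock IDs for the models mentioned in the prerequisites.

# terraform.tfvars: example inputs (variable names are illustrative)
app_name   = "fitness-chatbot"
aws_region = "us-east-1"

# Agent foundation model and knowledge base embedding model; both must be enabled in the target Region
agent_model_id      = "anthropic.claude-3-haiku-20240307-v1:0"
embedding_model_arn = "arn:aws:bedrock:us-east-1::foundation-model/amazon.titan-embed-text-v2:0"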

Solution workflow

The solution workflow is shown in the following figure.

Architecture diagram

The knowledge base is a repository of information that the agent can query to retrieve additional context and data to augment its responses. The agent orchestrates the workflow, which includes:

  1. Preprocessing the user input to contextualize and categorize it.
  2. Invoking the appropriate action group Lambda function or querying the knowledge base based on the user’s request.
  3. Generating an observation (output) from the action group Lambda function or knowledge base query.
  4. Formatting the final response to return to the user.

The action group Lambda function is a custom function that the agent can invoke to perform specific tasks. During the orchestration step, the agent determines which action group Lambda function to invoke based on the user’s input.

Ultimately, the workflow of the Amazon Bedrock agent and knowledge base integration is as follows:

  1. The agent interprets the user’s input and generates a rationale for the next steps to take.
  2. Based on the rationale, the agent predicts which action group Lambda function to invoke or which knowledge base to query.
  3. If the agent needs more information to invoke the action, it will query the knowledge base to retrieve additional context.
  4. The agent then generates an observation (output) from the action group Lambda function or knowledge base query and uses it to augment the base prompt.
  5. This process continues in a loop until the agent can provide a final response to the user.

Agent update lifecycle workflow

The agent update lifecycle workflow is shown in the following figure.

Architecture diagram

Developers or DevOps administrators frequently test knowledge base and agent responses for their code changes. To do this, they need to prepare the agent and create a new agent alias pointing to the latest version whenever there are changes to the knowledge bases, the action group, or the agent metadata.

The custom code provided includes logic (the agent_update_lifecycle module) to do this automatically for developers. The logic creates a new alias when any of the following occur (see the sketch after this list):

  1. Developers update the Amazon Bedrock agent instructions using the AWS Management Console or the AWS CLI.
  2. Developers update the action group Lambda function.
  3. The knowledge base instructions for the agent are changed.
  4. (Optional) Any additional logic you add initiates the creation of a new agent alias.
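
A minimal sketch of how this lifecycle could be expressed in Terraform is shown below. It illustrates the pattern (re-preparing the agent and creating a fresh alias whenever tracked inputs change) rather than the exact implementation of the agent_update_lifecycle module, and the variable names are assumptions.

# Illustrative lifecycle trigger: re-prepare the agent and create a new alias whenever
# the agent instructions, knowledge base instructions, or action group Lambda code change.
resource "terraform_data" "agent_update_lifecycle" {
  triggers_replace = {
    agent_instructions = sha256(var.agent_instructions)
    kb_instructions    = sha256(var.knowledge_base_instructions)
    lambda_code_hash   = var.action_group_lambda_source_code_hash
  }

  provisioner "local-exec" {
    # In practice, wait for the agent to reach the PREPARED state before creating the alias.
    command = <<-EOT
      aws bedrock-agent prepare-agent --agent-id ${var.agent_id}
      aws bedrock-agent create-agent-alias \
        --agent-id ${var.agent_id} \
        --agent-alias-name "auto-$(date +%Y%m%d%H%M%S)"
    EOT
  }
}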

Chatbot application workflow

In this blog post, we’ll explore the creation of a simple fitness chatbot. Here’s a quick overview of its key features and the technology behind them:

  • Natural language understanding: Using Claude 3 Haiku, our chatbot generates human-like responses to user queries, providing a smooth and intuitive conversation experience.
  • Knowledge base integration: We’ve implemented a sophisticated knowledge retrieval system. Documents are uploaded to an S3 bucket, converted into embeddings using the Amazon Titan Text v2 model, and stored in OpenSearch Serverless as a vector database (see the sketch after this list). This allows the chatbot to quickly access and deliver relevant fitness information.
  • Flexible LLM capabilities: While we’re currently using Claude 3 Haiku, the code base is designed with configurability in mind, allowing simple switching between different large language models (LLMs) as an IaC variable.
  • Comprehensive fitness knowledge: The chatbot is equipped with a broad understanding of fitness-related topics, making it a valuable resource for users seeking general health and exercise information.
  • BMI calculation feature: As a practical application, we’ve integrated a body mass index (BMI) calculator. This feature is implemented as a Lambda function and is connected to the agent through an action group, demonstrating how to extend the chatbot’s capabilities with custom actions.
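
The following is a hedged sketch of how the knowledge base and its S3 data source could be declared for this pipeline; the ARNs, index name, and field mappings are placeholders rather than the repository’s actual values.

# Illustrative vector knowledge base backed by an OpenSearch Serverless collection
resource "aws_bedrockagent_knowledge_base" "fitness" {
  name     = "fitness-kb"
  role_arn = var.knowledge_base_role_arn

  knowledge_base_configuration {
    type = "VECTOR"
    vector_knowledge_base_configuration {
      # The embedding model (Titan Text v2) converts uploaded documents into vectors
      embedding_model_arn = var.embedding_model_arn
    }
  }

  storage_configuration {
    type = "OPENSEARCH_SERVERLESS"
    opensearch_serverless_configuration {
      collection_arn    = var.aoss_collection_arn
      vector_index_name = "fitness-kb-index"
      field_mapping {
        vector_field   = "vector"
        text_field     = "text"
        metadata_field = "metadata"
      }
    }
  }
}

# S3 data source holding the fitness documents that are ingested into the knowledge base
resource "aws_bedrockagent_data_source" "docs" {
  knowledge_base_id = aws_bedrockagent_knowledge_base.fitness.id
  name              = "fitness-docs"

  data_source_configuration {
    type = "S3"
    s3_configuration {
      bucket_arn = var.knowledge_base_bucket_arn
    }
  }
}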

Agent configuration

Amazon Bedrock Agents streamline workflows and automate repetitive tasks. They securely connect to your company’s data sources and augment user requests with the right information to generate an accurate response. The following instructions illustrate the configuration of the Amazon Bedrock agent.

Agent instructions

You are a helpful fitness assistant. You have general knowledge about sports. You can answer questions related to fitness, diet plans. Use only the tools or knowledge base provided to answer user questions. Choose between the tools or the knowledge base. Do not use both. Do not respond without using a tool or knowledge base.

When a user asks to calculate their BMI:
  1. Ask for their weight in kilograms.
  2. Ask for their height in meters.
  3. If the user provides values in any other unit, convert it into kilograms for weight and meters for height. Do not make any comments about health status.

The key to creating an effective chatbot lies in the prompt, the instructions given to the AI to shape its behavior and responses. For our fitness chatbot, we developed a prompt that defines its role, capabilities, and limitations. Here’s a breakdown of the main components (a sketch of how these instructions flow into the agent resource follows the list):

  • Role definition: The chatbot is introduced as a helpful fitness assistant with general knowledge about sports and the ability to answer fitness and diet-related questions.
  • Knowledge base: The AI is instructed to use only the provided tools or knowledge base to answer questions, maintaining accuracy and consistency in its responses.
  • Specific functionality: We included a detailed process for calculating BMI, which demonstrates how to guide the AI through multi-step interactions.
  • Unit conversion: To enhance user-friendliness, the chatbot is programmed to convert units if users provide weight or height in non-standard measurements.
  • Ethical considerations: The AI is explicitly instructed not to make comments about health status, maintaining a neutral and non-judgmental stance.
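
To show how this prompt and the model choice flow into the IaC, here is a simplified agent resource; the values are assumptions for illustration, not the repository’s exact configuration.

# Illustrative Bedrock agent definition; the instruction text and model ID are variables,
# so the prompt or the underlying LLM can be changed without modifying the module code.
resource "aws_bedrockagent_agent" "fitness_assistant" {
  agent_name                  = "fitness-assistant"
  agent_resource_role_arn     = var.agent_role_arn
  foundation_model            = var.agent_model_id     # for example, Claude 3 Haiku
  instruction                 = var.agent_instructions # the fitness assistant prompt shown above
  idle_session_ttl_in_seconds = 600
}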

The BMI calculation feature

One of the standout features of this chatbot is its ability to calculate BMI. Here’s how it works:

  1. When a user asks the chatbot to calculate their BMI, it initiates a structured dialogue.
  2. It first asks for the user’s weight in kilograms.
  3. Then, it requests the user’s height in meters.
  4. If the user provides values in different units (for example, pounds or feet and inches), the chatbot automatically converts these to kilograms and meters.
  5. After obtaining the necessary information, it calculates the BMI without offering unsolicited health advice.

The AI understands that it must calculate BMI. It understands that it needs to ask the user for height and weight. However, after that information is received, it cannot immediately compute the BMI. To perform the computation, it first needs to perform an action, which is defined in the agent’s action group.
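
Because the computation runs in a Lambda function, the infrastructure for that function can be sketched as follows; the source path, handler name, and runtime are assumptions for illustration.

# Package and deploy the (assumed) BMI calculator Lambda source
data "archive_file" "bmi_lambda" {
  type        = "zip"
  source_dir  = "${path.module}/lambda/bmi_calculator"  # hypothetical source directory
  output_path = "${path.module}/lambda/bmi_calculator.zip"
}

resource "aws_lambda_function" "bmi_calculator" {
  function_name    = "bmi-calculator"
  role             = var.lambda_role_arn
  runtime          = "python3.12"
  handler          = "app.lambda_handler"               # hypothetical handler name
  filename         = data.archive_file.bmi_lambda.output_path
  source_code_hash = data.archive_file.bmi_lambda.output_base64sha256
  timeout          = 30
}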

Action group

First, let’s clarify what an action group is. In the context of Amazon Bedrock agents, an action group is a configuration that enables the agent to perform specific actions. These actions are defined in Lambda functions, allowing for custom functionality beyond the agent’s built-in capabilities.

How does the agent know when to call a particular Lambda function? The answer lies in the OpenAPI Schema definition file.

The decision-making process works as follows (a Terraform sketch of the action group wiring follows this list):

  • Knowledge base check: When a user asks a question, the agent first consults its knowledge base. This is where it determines whether it can answer directly or if it needs to perform an action.
  • Action determination: If the agent decides an action is necessary, it turns to the OpenAPI schema for guidance.
  • Schema consultation: The OpenAPI schema acts as a roadmap, defining available actions and their corresponding Lambda functions.
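
A hedged Terraform sketch of that wiring, registering the OpenAPI schema and the Lambda executor as an action group on the draft agent, might look like the following. The resource references point to the agent and Lambda sketches shown earlier, and the schema path is illustrative.

# Illustrative action group linking the draft agent, the OpenAPI schema, and the Lambda executor
resource "aws_bedrockagent_agent_action_group" "bmi" {
  action_group_name          = "bmi-calculator"
  agent_id                   = aws_bedrockagent_agent.fitness_assistant.agent_id
  agent_version              = "DRAFT"
  skip_resource_in_use_check = true

  action_group_executor {
    lambda = aws_lambda_function.bmi_calculator.arn
  }

  api_schema {
    payload = file("${path.module}/schemas/bmi_openapi.json")  # hypothetical schema path
  }
}

# Allow the Bedrock agent to invoke the action group Lambda function
resource "aws_lambda_permission" "allow_bedrock_agent" {
  statement_id  = "AllowBedrockAgentInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.bmi_calculator.function_name
  principal     = "bedrock.amazonaws.com"
  source_arn    = aws_bedrockagent_agent.fitness_assistant.agent_arn
}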

A real-world example: BMI calculation

Let’s illustrate this with our fitness chatbot’s BMI calculation feature:

  • User query: The user asks the chatbot to calculate their BMI.
  • Agent analysis: The agent recognizes this request requires an action rather than a simple response based on the knowledge base.
  • Schema reference: The agent consults the OpenAPI schema, which includes a definition for BMI calculation.
  • Function identification: Within the schema, the agent finds the specific Lambda function designated for BMI calculation.
  • Invocation: With this information, the agent invokes the correct Lambda function to perform the BMI calculation.

The OpenAPI schema: A closer look

In our BMI calculation example, the OpenAPI schema looks something like the following.

OpenAPI schema explanation:

{
  "openapi": "3.0.0",
  "info": {
      "title": "BMI Calculator",
      "description": "Lambda function to calculate Body Mass Index (BMI)",
      "version": "1.0.0"
  },
  "paths": {
    // API path for the BMI operation; the action group maps this path to the configured Lambda function
      "/calculate_bmi": {
          "post": {
              "summary": "Calculate BMI",
              "description": "Calculates the Body Mass Index (BMI) based on weight and height",
              "operationId": "calculate_bmi",
              "requestBody": {
                  "required": true,
                  "content": {
                    // Input definition for the agent that accepts weight and height
                      "application/json": {
                          "schema": {
                              "type": "object",
                              "required": [
                                  "weight",
                                  "height"
                              ],
                              "properties": {
                                  "weight": {
                                      "type": "number",
                                      "description": "Weight in kilograms",
                                      "example": 70
                                  },
                                  "height": {
                                      "type": "number",
                                      "description": "Height in meters",
                                      "example": 1.75
                                  }
                              }
                          }
                      }
                  }
              },
              "responses": {
                // This section contains response codes.
                  "200": {
                      "description": "Successful BMI calculation",
                      "content": {
                          "application/json": {
                              "schema": {
                                  "type": "object",
                                  "properties": {
                                      "statusCode": {
                                          "type": "integer",
                                          "example": 200
                                      },
                                      "body": {
                                          "type": "string",
                                          "description": "JSON string containing the BMI result",
                                          "example": "{\"bmi\": 22.86}"
                                      }
                                  }
                              }
                          }
                      }
                  },
                  "400": {
                      "description": "Bad request - missing or invalid parameters",
                      "content": {
                          "application/json": {
                              "schema": {
                                  "type": "object",
                                  "properties": {
                                      "statusCode": {
                                          "type": "integer",
                                          "example": 400
                                      },
                                      "body": {
                                          "type": "string",
                                          "description": "Error message",
                                          "example": "\"Missing parameter: weight\""
                                      }
                                  }
                              }
                          }
                      }
                  },
                  "500": {
                      "description": "Internal server error",
                      "content": {
                          "application/json": {
                              "schema": {
                                  "type": "object",
                                  "properties": {
                                      "statusCode": {
                                          "type": "integer",
                                          "example": 500
                                      },
                                      "body": {
                                          "type": "string",
                                          "description": "Error message",
                                          "example": "\"An unexpected error occurred\""
                                      }
                                  }
                              }
                          }
                      }
                  }
              }
          }
      }
  }
}

Test the chatbot

In this section, we test the chatbot by asking it a series of questions and evaluating the answers. The figures show the questions and answers. All but the first figure also show the trace window, which shows the steps taken by the chatbot to return an answer.

Please note that because the chatbot uses an LLM, the wording of the responses can vary. However, the sentiment and essence of each response should remain the same. Also, this is a sample application that calculates BMI using a generic formula and does not account for individual health scenarios.

Scenario 1: Ask a generic question to understand the capabilities of the chatbot.

Question: What can you help me with?

Answer: As a fitness agent, I can help you with questions related to fitness, diet plans, and calculating your Body Mass Index(BMI). I have access to set of functions that can assist you with these tasks. Let me know if you have any specific questions and I'll do my best to help.

Screenshot of Amazon Bedrock Fitness Bot

Scenario 2: In the data source file, there is some data that contains information about how IT professionals can improve their fitness. The purpose of this question is to demonstrate that we can ask the chatbot to search the knowledge base and provide a corresponding answer.

Question: what are the fitness tips for IT professionals?

Answer: 1. Combat sedentary lifestyle by standing up and stretching every hour, using a standing or treadmill desk, and taking short walks during breaks or phone call. 2. Maintain eye health by following the 20-20-20 rule (look at something 20 feet away for 20 seconds every 20 mins, using blue light filtering glasses, and adjusting monitor brightness and contrast. 3. Incorporate desk exercises like desk push-ups, chair squats, seated leg raises, and wrist finger stretches to stay active throughout the workday. 4. Prioritize stress management through mindfulness, meditation and taking regular breaks to reset mentally. 5. Maintain good nutrition and hydration habits, such as eating brain boosting foods and avoiding excessive caffeine and sugar, and staying hydrated.

Screenshot of Amazon Bedrock Fitness Bot

Scenario 3: Let’s ask a generic question which has some overlap with fitness. We want to know if the LLM can answer questions from its pre-trained LLM data.

Question: What is table tennis?

Answer: Table tennis, also known as ping-pong, is a popular indoor sport where two or four players hit a lightweight ball back and forth across a net on a table. The objective is to hit the ball in such a way that the opponent cannot return it successfully. Table tennis is played on a table with a net in the center, and players use small paddles to hit the ball. It is an Olympic sport and is enjoyed by people of all ages as a recreational activity or competitive sport.

Screenshot of Amazon Bedrock Fitness Bot

Scenario 4: Now, we ask a question for which the agent must perform an action. The action in this example is to calculate BMI.

Question: Can you calculate BMI

Answer: <Q&A flow, as displayed in the following screenshot>

Screenshot of Amazon Bedrock Fitness Bot

Clean up

Use the following steps to clean up the infrastructure created for the solution. The solution can be built, updated, or modified using a variety of DevOps tools or procedures.

To clean up the infrastructure:

  1. Empty the data source S3 bucket.
  2. Navigate into the project base directory.

cd intelligent-rag-bedrockagent-iac

  3. Run terraform destroy. The --auto-approve flag is optional. If you don’t use it, you will be prompted to enter a confirmation manually.

terraform destroy --auto-approve

Conclusion

This blog post guides you through implementing a generative AI intelligent chatbot using Amazon Bedrock and AWS services, including developer lifecycle changes. It covers creating a scalable chatbot infrastructure with Terraform, using services like Amazon Bedrock Agents, Amazon Bedrock Knowledge Bases, Amazon OpenSearch Serverless, Lambda, Amazon S3, and IAM. The focus on IaC and automation brings key DevOps benefits: version control, consistent deployments, reduced errors, faster provisioning, and improved collaboration.

This guide serves as a blueprint for organizations to develop code based on reusable, modular, and security best practices to deploy a generative AI-based infrastructure into multiple environments with minimal changes (using DevOps tooling if required).

  • Review the Amazon Responsible AI policy before implementing this solution.
  • This solution does not include a chat history feature and the chat is not stored.
  • This solution deploys the default guardrail policies.

If you have feedback about this blog post, use the Comments section on this page.


About the authors

Anand Krishna Varanasi is a seasoned AWS builder and architect who began his career over 16 years ago. He guides customers with cutting-edge cloud technology migration strategies (the 7 Rs) and modernization. He is very passionate about the role that technology plays in bridging the present with all the possibilities for our future. Anand is also a meditation coach who teaches the benefits of meditation to hundreds of villages, towns, colleges and corporations in India and beyond.

Anand Bukkapatnam Tirumala is a seasoned AWS builder and architect who began his career 11 years ago. Anand has vast experience working with customers of different scales and magnitude ranging from startups to MNCs. He enjoys streamlining cluttered work streams through automation that drives business success. Anand is an avid aquascaper and mentors others interested in the hobby.