AWS for Industries

Elevating customer experience with generative AI and Telecom APIs using Agents for Amazon Bedrock

As customers increasingly demand personalized and efficient experiences, companies are turning to artificial intelligence (AI) to enhance their customer support capabilities. By combining generative AI (GenAI), Agents for Amazon Bedrock, and Telecom Network APIs (under GSMA’s Open Gateway and the CAMARA initiative), organizations can create intelligent chatbots and agents that deliver personalized, context-aware interactions across various communication channels.

Imagine a chatbot agent that can understand and respond to customer queries with human-like fluency, while also accessing and using customer-specific and “network-aware” data offered by Telco operator APIs across the world. This potent combination empowers companies to offer unparalleled customer service and deliver personalized experiences tailored to individual customer needs.

A previous AWS blog illustrated the power of opening telecom networks to AWS developers, exploring capabilities that developers can add to their applications using APIs in the Linux Foundation’s CAMARA API Repositories. In this post, we explore creating powerful chatbot agents using the capabilities of Agents for Amazon Bedrock, Telco CAMARA APIs, and GenAI’s advanced natural language processing (NLP) capabilities. We walk through the process of creating an AI-based customer chatbot: defining an Agent in Amazon Bedrock, showing how to integrate it with the Telecom Network APIs, and interacting with other AWS services such as Amazon Location Service.

We use an example scenario with one specific Telco API and Amazon Location Service to show the building blocks of the solution (a GenAI agent interacting with both a Telco API and AWS services). Although this is just one scenario chosen to illustrate the solution, the combination of Telecom APIs, AWS services, and GenAI techniques powered by Amazon Bedrock presents fertile ground for innovation. It enables the creation of sophisticated customer care applications tailored to diverse industries and businesses, offering endless possibilities for innovation, customization, and personalization. The general high-level concept and architecture can be modeled as illustrated in Figure 1.

Figure 1 – High-level architecture

In Figure 1, we show a high-level diagram of the functional architecture we demonstrate in this blog. The core of the solution is the digital assistant powered by an Agent for Amazon Bedrock, which translates natural language conversations with humans into actions orchestrated by an LLM.

The actions are logically grouped into three main areas: the enterprise Knowledge Bases, the Telco APIs, and the AWS service APIs.

The enterprise Knowledge Base gives the LLM and the agent contextual information from the company’s private data sources, so that they can deliver relevant, accurate, and customized responses and generate actions relevant to the company’s business.

The Telco APIs are the focus of this blog, and we show how the Agent can create value using the Telco network APIs. They are delivered through the GSMA Open Gateway initiative, whose comprehensive API portfolio is segmented into three core families:

  1. The Anti-Fraud API family equips enterprises with subscriber identity verification, device status monitoring, and secure authentication mechanisms such as one-time passwords, strengthening fraud prevention and security posture.
  2. The Mobile Connectivity API family powers location-based services, real-time connectivity insights, and network quality optimization, enabling enriched mobile app experiences and enhanced service delivery.
  3. The Cloud, Edge and Fixed Connectivity API family facilitates the adoption of edge computing architectures, home network quality optimization tools, and efficient traffic routing strategies, ensuring a good connectivity experience across these diverse domains.

These are unique capabilities of the Telco industry, made available as flexible APIs to solve business problems and enable new use cases.

Last but not least, the Telco APIs and the enterprise Knowledge Base are complemented by the AWS service APIs, which offer full access to the broad set of AWS services that help power the use cases we describe in the next section.

Versatility and a broader use case footprint

In this blog we illustrate a specific use case in the logistics sector, with a step-by-step guide on how to implement it. However, we also want to emphasize that by combining the orchestration and reasoning capabilities of GenAI agents, CAMARA Telco APIs, and AWS services, it is possible to drive innovation and solve customer problems across many industries.

Help bank customers avoid fraud: a fraudster can activate call forwarding on a bank customer’s number so that assistance calls generated by the bank are forwarded to the fraudster. A bank employee can ask the Bedrock agent for support in calling the customer. Before placing the call through the Amazon Connect API (the AI-powered contact center from AWS), the Agent looks up the customer’s phone number in the bank customer database, then invokes the CAMARA CallForwardingSignal API to verify whether call forwarding is active on that number. If it is not, the Agent proceeds with the call; otherwise, it alerts the bank employee of a possible fraud attempt.

Support the deployment of latency-sensitive applications at the edge: thanks to the service APIs for edge cloud, an Amazon Bedrock Agent has the tools to discover the edge cloud zone closest to a given device, pull the appropriate application images through the Amazon ECR API, deploy them on resources reserved by the network operator, and influence traffic routing from the user device to the edge instance of the deployed application. With these tools, a company can serve its customer base by delivering the preferred application, stored as a container in Amazon ECR, close to the customer on the Telco edge for a better user experience, simply by asking the Amazon Bedrock Agent in natural language.

Provide hyper-personalized recommendations: a company that wants to recommend a product or service can ask an Amazon Bedrock Agent to send personalized recommendations to mobile customers who may be interested in it. The Agent can use the Amazon Personalize API to identify relevant users, based on the information the company has loaded into Amazon Personalize. Before sending a recommendation, the Agent invokes the CAMARA “KnowYourCustomer” API to receive additional information about the customer and refine the personalized recommendation message. This workflow can be further enriched with the CAMARA Location Insights API, which adds knowledge about the customer’s likely home location and recently visited locations (for example, shopping areas) to narrow the set of recommendation recipients.

Build a telemedicine support system: in the healthcare industry, it is possible to build a telemedicine support system that leverages the GenAI agent’s ability to analyze patient data and provide diagnostic and treatment recommendations, acting as a virtual medical assistant that engages in natural language conversations with patients, while the CAMARA Device Location API locates patients accurately for emergency services. AWS services such as AWS HealthLake, Amazon Comprehend Medical, and AWS IoT Core enable storage, analysis, and connectivity of medical data.

Create a multi-purpose connected vehicle service in the automotive industry: a company can employ the GenAI agent to adjust vehicle settings, schedule maintenance, and provide driving recommendations based on vehicle performance, driver behavior, and environmental data, serving as an in-vehicle virtual assistant that enables natural language interactions. The CAMARA Quality on Demand and Device Location APIs ensure reliable in-car connectivity and accurate vehicle location, while AWS services such as AWS IoT Greengrass and Amazon Kinesis enable edge computing, real-time data processing, and conversational interfaces.

These ideas illustrate the versatility and increased problem-solving potential that can be achieved by orchestrating GenAI agents, CAMARA Telco APIs, and AWS services to drive innovation across various industries.

Example scenario and solution overview

We use a Logistics company use case to build the customer service bot.

A typical logistics company aims to revolutionize the customer experience by developing a GenAI-powered customer agent application. This innovative solution provides real-time updates on the status and location of ordered parcels, ensuring transparency and convenience for their customers.

By understanding the current stage of the delivery process, the system determines whether the parcel is in the “last-mile” status or is still traversing the “long-mile” path. If the parcel is detected to be in the last-mile delivery stage, the logistics company integrates with a Telecom Camara Location Retrieval API. This API allows the application to localize the courier’s position by using a mobile SIM number, for example, one associated with the driver or the vehicle. By leveraging this technology, the system can pinpoint the precise location of the parcel in the last leg of the journey.

Additionally, the logistics company incorporates powerful location services to provide customers with a comprehensive view of their parcel’s current whereabouts. The application displays the address and/or an interactive map to locate the package and allow customers to organize for the delivery.

The following diagram illustrates a typical process flow for this sample last-mile logistics use case. Notice that different workflows can be generated automatically by the Amazon Bedrock Agent to respond appropriately to customer needs.

Figure 2 – The process flow

The following diagram depicts a high-level architecture of this solution.

Figure 3 – Solution architecture

Walkthrough

Using Agents for Amazon Bedrock, AWS developers can create fully managed GenAI agents and customer service assistants in just a few clicks. Agents extend foundation models (FMs) to run business tasks with a low-code approach and without provisioning and managing GenAI infrastructure. Agents for Amazon Bedrock uses the reasoning skills of a large language model (LLM) to break down user and customer requests into a logical sequence of multi-step tasks, which the Agent orchestrates and executes on behalf of dedicated personnel across company systems, data sources, external APIs, and customer interactions (refer to this Amazon machine learning (ML) post to better understand the components and processes of Agents for Amazon Bedrock).

The following steps describe the solution workflow:

a) Create the tools (APIs) available to the Agent to perform its tasks using the following:

  • An OpenAPI schema in JSON format that describes the API, such as how to invoke it, and a detailed description of the provided functionalities so the Agent can select the right API for the required action.
  • A Python-based AWS Lambda function that contains the business logic needed to perform the API calls.

As described in the “Example scenario and solution overview” section, we implement three APIs and Lambda functions:

  1. A Lambda function that can simulate an interaction with an Internal IT system to determine the parcel status and to provide the courier SIM number.
  2. A Lambda function that can simulate a Camara Location Retrieval API toward a Telco operator to localize the courier’s position (GPS coordinates) by using the previously indicated SIM number.
  3. A Lambda function that can call Amazon Location Service in AWS to obtain the current street address and locate the parcel.

b) Create the agent with Amazon Bedrock.

c) Configure the Agent with the following:

  • The preferred LLM from the broad Amazon Bedrock model choice.
  • A high-level description of the expected Agent functionality (how to interact with the end users and what tasks to perform to help them).
  • The available action groups that the Agent can use to break down the customer request into tasks. This is the “toolbox” that the Agent can use to fulfill customer requests (this step is realized by associating with the Agent the API calls defined in step a).

d) Test and deploy your agent created with Amazon Bedrock.

Prerequisites

To implement the solution provided in this post, you should have an AWS account and access to Amazon Bedrock with Agents enabled (see this documentation for the supported AWS Regions and models). A basic knowledge about GenAI, Amazon Bedrock, Lambda functions in Python, Amazon Simple Storage Service (S3) and Amazon Location Service is required.

In your AWS account, you need an Amazon Location Service “place index” resource to search against. A place index is a geospatial search resource that you specify in the API request. See the Amazon Location Service guide to review Amazon Location Service concepts and features. In our solution we use the Amazon Location Service “Places search” feature to convert a latitude/longitude coordinate pair into a street address (reverse geocoding).
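
To illustrate the reverse geocoding call that the /revGeocode Lambda function performs, the following is a minimal sketch using the boto3 Amazon Location Service client; the place index name is a placeholder for the resource you created as a prerequisite:

```python
import boto3

location = boto3.client("location")

def reverse_geocode(latitude: float, longitude: float, index_name: str = "MyPlaceIndex") -> str:
    """Convert a latitude/longitude pair into a street address (reverse geocoding)."""
    # Amazon Location expects the position as [longitude, latitude]
    response = location.search_place_index_for_position(
        IndexName=index_name,          # placeholder: the place index created as a prerequisite
        Position=[longitude, latitude],
        MaxResults=1,
    )
    results = response.get("Results", [])
    return results[0]["Place"]["Label"] if results else "Address not found"

# Example usage (coordinates are illustrative)
# print(reverse_geocode(45.4642, 9.1900))
```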

Another prerequisite is the creation of an S3 bucket in your AWS account in the Region where you want to create your Amazon Bedrock Agent.

A detailed guide to implementing the prerequisites is available in the reference GitHub repository for this blog (see the Prerequisites section of the README).

Step-by-step section

An Amazon Bedrock Agent consists of the following key components:

  • Foundation model: You choose an FM that the agent invokes to interpret user input and subsequent prompts in its orchestration process. The agent also invokes the FM to generate responses and follow-up steps in its process.
  • Instructions: You write instructions that describe what the agent is designed to do. You can customize instructions for the agent at every step of orchestration.
  • Action groups: An action is a task that the agent can perform automatically by making API calls. A set of actions is defined in an action group. You define the actions that the agent should use to perform the task indicated in the Instructions by providing the following resources:
    • An OpenAPI schema to define the API operations, such as a detailed description of the Action Group, the available APIs in the action group, and the provided output. This description is pivotal to let the Agent understand what “tools” to use to accomplish the customer request.
    • A Lambda function with the following components (a minimal handler skeleton is sketched after this list):
      • Input Analysis: The API operation and parameters passed by the agent during orchestration.
      • Execution code: This is where the actual action is performed, such as invoking the Telco API, making a query to a database, and so on.
      • Output: The result of the API invocation according to the OpenAPI definition.
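
To make these components concrete, here is a minimal skeleton of such a Lambda function for an API-schema-based action group. The event and response shapes follow the Agents for Amazon Bedrock Lambda contract described in the user guide; the routing and returned values are illustrative placeholders, not the repository code:

```python
import json

def lambda_handler(event, context):
    # Input analysis: the agent passes the API operation and its parameters
    api_path = event.get("apiPath")
    http_method = event.get("httpMethod")
    parameters = {p["name"]: p["value"] for p in event.get("parameters", [])}

    # Execution code: route to the actual action (Telco API call, database query, and so on)
    if api_path == "/parcelStatus":
        result = {"parcelStatus": "last-mile", "courierSim": "+39xxxxxxxxxx"}  # illustrative values
    else:
        result = {"error": f"Unknown operation {http_method} {api_path}"}

    # Output: the result returned to the agent, matching the OpenAPI definition
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event.get("actionGroup"),
            "apiPath": api_path,
            "httpMethod": http_method,
            "httpStatusCode": 200,
            "responseBody": {"application/json": {"body": json.dumps(result)}},
        },
    }
```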

Step a) Create the tools (APIs) available to the Agent to perform its tasks

For the implementation of step a), the preparation of the tools and APIs available to the Agent to perform its tasks, we provide two options. The first option uses the AWS Management Console, and the second uses an Infrastructure as Code (IaC) approach with AWS CloudFormation.

In this blog, we describe the first option (via the AWS Management Console), as we consider it more suitable for understanding the logical flow and the steps needed to implement the solution.

If you prefer the second option (via AWS CloudFormation), follow the instructions in the GitHub repository starting from the “Preparation of the tools using AWS CloudFormation” section.

In this section we explain how to create (via the AWS Management Console) the APIs that the Agent can then use through its action groups:

  1. /LocationRetrieval API – simulates a CAMARA Location Retrieval API toward a Telco operator to localize the courier’s position (GPS coordinates) using the SIM number.
  2. /parcelStatus API – simulates an interaction with an internal IT system to determine the parcel status and provide the courier SIM number.
  3. /revGeocode API – calls Amazon Location Service in AWS to obtain the address and locate the parcel.

As an example, we illustrate how to create the /LocationRetrieval and /parcelStatus APIs. For the /revGeocode API you can follow a similar approach (at this GitHub link you can find the related OpenAPI specifications and Python-based Lambda functions). You can validate your OpenAPI specification file using https://editor.swagger.io/.

Before proceeding with the next steps, ensure that you have successfully deployed the resources and files listed as prerequisites, following the steps outlined in the “Prerequisites” section (and its subsections) of the referenced GitHub repository.

Location retrieval API Lambda function: “location-retrieval”

The location retrieval Action Group consists of a single API, as defined by the Camara Device Location API.

Lambda function:

  • Go to the Lambda console and create a Lambda function with a Python runtime. Choose the “Upload from” button on the right, select “Upload a file from Amazon S3”, and insert the path of the zip file created in the prerequisites phase (“s3://customer-agent-with-camara-api-${AWS_ACCOUNT_ID}/my_lambda_layer.zip”, where ${AWS_ACCOUNT_ID} is your AWS account ID, for example s3://customer-agent-with-camara-api-12345678902/my_lambda_layer.zip).
  • To allow the Amazon Bedrock Agent to invoke this Lambda function through the defined API, configure a resource-based policy statement that grants Agents for Amazon Bedrock permission to invoke the Lambda function:
    • On the Lambda function page select the Configuration tab, then in the left list select Permissions, go to the “Resource-based policy statements” section, choose the “Add permissions” button, select “AWS service” and choose “Other” in the Service field, then fill in the other fields with the following information (a programmatic equivalent is sketched after this list):
      • Statement ID: insert a unique statement ID to differentiate this statement within the policy (such as agentsforbedrock-telcoapi-agent-ResourcePolicy-statement)
      • Principal: bedrock.amazonaws.com
      • Source ARN: “arn:aws:bedrock:<REGION-NAME>:<Account-ID>:agent/*”
      • Action: lambda:InvokeFunction
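
If you prefer to grant this permission programmatically rather than through the console, a sketch of the equivalent boto3 call (with the same placeholder Region, account ID, and statement ID) could look like this:

```python
import boto3

lambda_client = boto3.client("lambda")

# Placeholders: replace the Region and account ID with your own values
lambda_client.add_permission(
    FunctionName="location-retrieval",
    StatementId="agentsforbedrock-telcoapi-agent-ResourcePolicy-statement",
    Action="lambda:InvokeFunction",
    Principal="bedrock.amazonaws.com",
    SourceArn="arn:aws:bedrock:<REGION-NAME>:<Account-ID>:agent/*",
)
```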

The Lambda function is used to invoke the external Location Retrieval API provided by the Telco company. If you don’t have access to an actual API as defined by CAMARA, you can simulate it using an Amazon API Gateway and a serving Lambda function, as illustrated in Figure 4. To set it up, follow the instructions available through this GitHub link. As indicated in those instructions, remember to insert the created API Gateway endpoint in the “API_URL” environment variable of the “location-retrieval” Lambda function (a sketch of this call follows Figure 4).

Figure 4 – Camara API simulation
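
Inside the “location-retrieval” Lambda function, the call to the real or simulated Camara endpoint can be sketched as follows. The request and response fields are assumptions based on the Camara Location Retrieval style, the requests library is assumed to be provided by the Lambda layer uploaded earlier, and the actual implementation is the one in the GitHub repository:

```python
import os

import requests  # assumed to be available through the Lambda layer uploaded as a prerequisite

# Endpoint of the Camara Location Retrieval API, or of the API Gateway that simulates it
API_URL = os.environ["API_URL"]

def retrieve_location(phone_number: str) -> dict:
    """Ask the Telco API for the last known location of the device with this SIM/phone number."""
    # Assumed request shape in the Camara Location Retrieval style
    payload = {"device": {"phoneNumber": phone_number}, "maxAge": 60}
    response = requests.post(API_URL, json=payload, timeout=10)
    response.raise_for_status()
    body = response.json()
    # Assumed response shape: a circular area with a center point
    center = body["area"]["center"]
    return {"latitude": center["latitude"], "longitude": center["longitude"]}
```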

To optionally test your Lambda function: select the Test tab near the top of the page. Configure a test event that matches how the Agent sends a request, using “Lambda_TEST_EVENT_Locationretrieval.json”. Choose the Test button, review the result in the “Executing function” section, expand “Details”, and analyze the logs if needed.

In case of a timeout error, increase the default Lambda timeout settings (Configuration tab, General configuration, Edit, Timeout).

Parcel Status API Lambda function: “Parcel_Status_api”

Now we can create the “Parcel_Status_api” Lambda function.

Consider this an example Lambda function simulating an interaction with an internal IT system to determine the parcel status and provide the courier SIM number.

Our Lambda code is backed by an in-memory SQLite database. You can use similar constructs to write to a persistent data store. The example SQLite database file (file.sqli) was uploaded to your S3 bucket in the prerequisites section.
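
A minimal sketch of this lookup logic is shown below; the bucket name, table, and column names are illustrative assumptions, and the complete implementation is the one referenced in the GitHub repository:

```python
import os
import sqlite3

import boto3

S3_BUCKET_NAME = "customer-agent-with-camara-api-<Account-ID>"  # placeholder: your prerequisite bucket
DB_KEY = "file.sqli"            # SQLite database file uploaded in the prerequisites phase
LOCAL_DB_PATH = "/tmp/file.sqli"

def get_parcel_status(parcel_id: str) -> dict:
    # Download the example database from S3 into the Lambda ephemeral storage
    if not os.path.exists(LOCAL_DB_PATH):
        boto3.client("s3").download_file(S3_BUCKET_NAME, DB_KEY, LOCAL_DB_PATH)

    # Illustrative table and column names; the real schema is defined in the repository
    conn = sqlite3.connect(LOCAL_DB_PATH)
    try:
        row = conn.execute(
            "SELECT status, courier_sim FROM parcels WHERE parcel_id = ?", (parcel_id,)
        ).fetchone()
    finally:
        conn.close()

    if row is None:
        return {"error": f"Parcel {parcel_id} not found"}
    return {"parcelStatus": row[0], "courierSim": row[1]}
```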

Lambda function:

  • Go to the Lambda console and create a Lambda function with a Python runtime. Insert the example code from this GitHub link, which simulates an API response. In the code, replace the S3_BUCKET_NAME value with the name of the S3 bucket you created in the prerequisites phase.
  • Add the “s3-read-policy-agent-camara-api.json” permission policy to the Lambda role to allow reading of the database file loaded in the S3 bucket, as explained in the reference GitHub repository.
  • To optionally test your Lambda function: select the Test tab near the top of the page. Configure a test event that matches how the Agent sends a request, using Lambda_TEST_EVENT_ParcelStatus.json. Choose the Test button, review the result in the “Executing function” section, expand “Details”, and analyze the logs if needed.
  • In case of a timeout error, increase the default Lambda timeout settings (Configuration tab, General configuration, Edit, Timeout).
  • To allow the Amazon Bedrock Agent to invoke this Lambda function through the defined API, configure a resource-based policy statement that grants Agents for Amazon Bedrock permission to invoke the Lambda function (as for the other Lambda function configuration).

revGeocode API Lambda function: “Place-Search-AWS-Location”

Repeat the previous operations: create and test the related Lambda function “Place-Search-AWS-Location” for the /revGeocode API, as explained in the specific section of the reference GitHub repository (the section named “/revGeocode API integration in the Agent”).

Step b) Create an agent for Amazon Bedrock

To create an agent, open the Amazon Bedrock console and choose Agents in the left navigation pane. Then select Create Agent.

This starts the agent creation workflow.

  1. Provide the agent details: Give the agent a name (such as AnyLogistic-Agent) and description (optional), and then select Create.
  2. On the Agent Builder page, for Agent resource role choose Create and use a new service role. In Select model choose “Anthropic” and “Claude v2.1”. Leave the other settings as default and select Save and Exit.
  3. Now in the Amazon Bedrock console, choose Agents in the left navigation pane, and you should see the created Agent in the list with a Not prepared status (a programmatic equivalent using the bedrock-agent API is sketched after Figure 5).

Figure 5 – Amazon Bedrock Agent status
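
If you prefer to script this step, the agent can also be created with the bedrock-agent API. The following is a minimal sketch: the service role ARN is a placeholder, and the instruction text is abbreviated here:

```python
import boto3

bedrock_agent = boto3.client("bedrock-agent")

response = bedrock_agent.create_agent(
    agentName="AnyLogistic-Agent",
    foundationModel="anthropic.claude-v2:1",  # Claude v2.1 from Anthropic
    agentResourceRoleArn="arn:aws:iam::<Account-ID>:role/<AgentServiceRole>",  # placeholder role
    instruction=(
        "You are an assistant of a logistic company that interact with the final user "
        "to provide the street address where the parcel is currently."
    ),
)
agent_id = response["agent"]["agentId"]
print(f"Created agent {agent_id} with status {response['agent']['agentStatus']}")
```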

Step c) Configure the Agent

  1. Go to the Amazon Bedrock console and choose Agents in the left navigation pane. Select the Agent created in the first step. Select Edit in Agent Builder.
  2. In Select model, verify that Claude v2.1 from Anthropic is selected.
  3. In Instructions for the Agent, provide an instruction for your agent in natural language. The instruction tells the agent what task it’s supposed to perform and the persona it’s supposed to assume. In our case we provided the following instructions: “You are an assistant of a logistic company that interact with the final user to provide the street address where the parcel is currently. Customer is supposed to provide parcel ID, while you as an assistant are provided the tools to retrieve courier phone number, mobile phone coordinates and street address”. Copy and paste it into the relevant area. Different formulations of the instructions are possible, and tuning them is part of the “prompt engineering” process.

Figure 6 – Amazon Bedrock Agent configuration

Create Action Group

For each of our three APIs (/parcelStatus, /LocationRetrieval, /revGeocode), do the following:

1. In Action groups select the Add button to create a new Action Group for the agent.

2. Provide the Action Group Name and Description (Optional). You can use Action Group name and description from the following figure.

  • Select the Action group type “Define with API schemas”, as we describe the interface between the Agent and the Lambda function through an OpenAPI schema.
  • In Action group invocation, use the “Select an existing Lambda function” option and choose the specific Lambda function (for example, location-retrieval).
    • In the “Action group schema” section, select the “Select an existing API schema” option, choose the “Browse S3” button, select the “s3://customer-agent-with-camara-api-${AWS_ACCOUNT_ID}” S3 bucket, select the file with the OpenAPI specification (for example, location-retrieval.yaml), and then click “Choose”.

3. Select Create at the bottom of the Action Group page.

Repeat the same procedure for the remaining APIs, using the specified values for the Lambda function name (Parcel_Status_api, Place-Search-AWS-Location) and the API schema file in the S3 bucket (ParcelStatus-API.json and Place-Search-AWS-Location-API.json).

Finally, select the Save and exit button on the Agent Builder page to save the new Agent’s configuration.

Figure 7 – Action Groups list

Verify that “Instructions for the Agent” in the “Agent Details” section is filled with the instructions previously configured.

Select the Prepare button in the rightmost console section to prepare the Agent (the Agent status must be Prepared to allow testing).
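
The same action group association and Prepare step can also be scripted with the bedrock-agent API. The following is a minimal sketch in which the agent ID, Lambda ARN, bucket name, and schema key are placeholders:

```python
import boto3

bedrock_agent = boto3.client("bedrock-agent")
agent_id = "<your-agent-id>"  # placeholder

# Associate the location-retrieval tool with the working draft of the agent
bedrock_agent.create_agent_action_group(
    agentId=agent_id,
    agentVersion="DRAFT",
    actionGroupName="location-retrieval",
    actionGroupExecutor={
        "lambda": "arn:aws:lambda:<REGION-NAME>:<Account-ID>:function:location-retrieval"
    },
    apiSchema={
        "s3": {
            "s3BucketName": "customer-agent-with-camara-api-<Account-ID>",
            "s3ObjectKey": "location-retrieval.yaml",
        }
    },
)

# Repeat create_agent_action_group for /parcelStatus and /revGeocode, then prepare the agent
bedrock_agent.prepare_agent(agentId=agent_id)
```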

Step d) Test your Agent created with Amazon Bedrock

1. Check that the Agent is in the Prepared status (otherwise, in the Agent go to Working draft and select Prepare).
2. Test the Agent by entering your message in the Test session (choose the Run button to submit it).
3. Try out different prompts in Working draft – Instructions for the Agent (prepare the Agent after any prompt or instruction update).

To understand how to customize the Agent orchestration prompt and the model inference parameters, see the Amazon Bedrock user guide.

Deploy your Agent

After successful testing, you can deploy your Agent. To deploy an Agent in your application, you must create an alias. Amazon Bedrock then automatically creates a version for that alias.
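
Once an alias exists, your application can converse with the deployed Agent through the bedrock-agent-runtime API. The following minimal sketch uses placeholder agent and alias IDs:

```python
import uuid

import boto3

runtime = boto3.client("bedrock-agent-runtime")

response = runtime.invoke_agent(
    agentId="<your-agent-id>",        # placeholder
    agentAliasId="<your-alias-id>",   # placeholder
    sessionId=str(uuid.uuid4()),      # keeps conversation context across turns
    inputText="Where is my parcel? The parcel ID is 12345.",
)

# The answer is streamed back as chunks of bytes
answer = "".join(
    event["chunk"]["bytes"].decode("utf-8")
    for event in response["completion"]
    if "chunk" in event
)
print(answer)
```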

Understanding the reasoning capabilities with the trace functionality

Here we show an example of one Amazon Bedrock Agent Version, focusing on the reasoning capabilities that you can inspect in the Agent trace functionality. After selecting the alias you want to use, you can start a conversation either from the API or from the Amazon Bedrock Agent console. For simplicity, in this example we engage with the Agent directly through the console.

We start by saying Hello! The first step in the Agent is the pre-processing, where the invoked LLM is prompted to classify the user input according to the following definitions:

1. Category A: Malicious and/or harmful inputs.

2. Category B: Getting information about APIs that the calling agent has been provided.

3. Category C: Questions the agent cannot answer given the provided APIs.

4. Category D: Questions that can be answered by the agent from the history, knowledge base, or APIs. This is the category into which we hope the user answer falls.

5. Category E: Answers to Agent questions. The Agent can ask questions when relevant information is missing.

6. Any other type of user request.

Here, the model recognizes that it is simply a greeting, and categorizes it as “Any other type” of request.

Figure 8 – Pre-Processing Trace of the first input

The pre-processing triggers the Orchestration and Knowledge Base steps. Here one step is enough to create the response. Note that the LLM is asked to create a rationale and then the final response that is sent back to the user. The LLM answers as a parcel delivery assistant, as instructed in the Instructions for the Agent section.

Figure 9 – “Orchestration and Knowledge Base” step of the first input

In the second step, we ask for support to find out where the parcel we are waiting for is. The pre-processing step categorizes this as Category D (a question that can be answered from the provided APIs).

Figure 10 – Pre-Processing Trace of the second input

The LLM knows the available APIs and what it can do with them, as the APIs are described in the OpenAPI definition file. Thanks to that, it reasons about the request and comes up with the following rationale and final response, asking for the parcel ID that is required to locate the parcel.

Figure 11 – “Orchestration and Knowledge Base” step of the second input

As we provide the parcel ID, we get the address of the parcel! This actually requires LLM reasoning and multiple steps. Let’s review them:

First, the pre-processing step (Trace Step 1) categorizes the user input as the answer to a question:

Figure 12 – Pre-Processing step of the third input

The orchestration generates the following four steps (Trace Steps 2–5). Initially, the LLM uses the parcel ID to create the JSON required to invoke the Lambda function, according to the OpenAPI definition.

Figure 13 – “Orchestration and knowledge base” step of the third input

Then, the LLM uses the received phone number to create the JSON to fetch the device location as geo-coordinates, as provided by the Telco Location Retrieval API.

Figure 14 – Agent’s Camara API Invocation

After getting the latitude and longitude of the device, the LLM creates the JSON to invoke the Amazon Location API, again according to the OpenAPI file that we defined in the previous steps.

Figure 15 – Agent’s Amazon Location Service API Invocation

Finally, after getting the street address, the LLM recognizes that it completed its tasks, and it is ready to create the conclusive answer for the user.

Figure 16 – Task completed by the Amazon Bedrock Agent

Cleaning up

To avoid incurring future charges, delete the created resources.

Conclusion

In this post, we introduced a solution that leverages GenAI and Telco APIs to generate customer agents. We encourage you to explore this solution and the accompanying code in the related GitHub repository. This GitHub repository and blog serve as an example starter project, designed to provide a demonstration and foundation for builders to create their own customized solutions.

By integrating the reasoning capabilities of Agents for Amazon Bedrock and orchestrating CAMARA-based Telecom APIs and AWS services, it is possible to create an effective expert assistant for an unparalleled customer experience. This allows enterprises across industries to cost-effectively deliver personalized and dynamic online customer care applications that captivate their end users. Although we demonstrated this in the context of a logistics company, using Location Retrieval as one of the Telco APIs and Amazon Location Service as an example AWS service (with Amazon API Gateway used to simulate the Telco API), the principles apply across all markets and extend to the full catalogs of Telco CAMARA APIs and AWS services.

Overall, this post showcased how AWS serverless building blocks can be combined with the unique capabilities of GenAI, available on Amazon Bedrock, and the power of Telco APIs to transform a customer’s digital assistants for an advanced and personalized customer experience. We encourage you to experiment with these technologies in your industry. As an additional component, you can also integrate your Agent with a Knowledge Base containing FAQs, troubleshooting guides, and other useful documentation that your agent can automatically query to further assist your customers.

Massimo Sassi

Massimo Sassi is a Solutions Architect in the Telco Industry Business Unit at AWS. With over 25 years of experience in the telecom industry, Massimo helps customers leverage the power of AWS services to build innovative solutions that address the unique challenges facing telecommunications providers. In his free time, Massimo enjoys music, playing tennis, and spending time with his family.

Luca Vignali

Luca Vignali is a Solutions Architect in the Telco IBU at AWS. With 25 years of professional experience, Luca is passionate about technologies that can innovate and simplify customers’ lives, including ML, AI, GenAI, IoT, and quantum computing. An avid learner eager to spread AWS service knowledge, Luca loves traveling with his family and enjoys biking.