AWS Database Blog
Category: Amazon Bedrock
Analyzing PL/SQL and T-SQL code using Amazon Bedrock
In this post, we use the Anthropic Claude 3 Sonnet large language model (LLM) on Amazon Bedrock to provide a detailed breakdown of complex PL/SQL and T-SQL code, making it easier for developers who are new to a code base or working with unfamiliar code to understand the logic and flow of that code.
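As a rough illustration of the pattern the post describes, the sketch below (the prompt wording, Region, and sample procedure are our own assumptions, not code from the post) sends a PL/SQL snippet to Claude 3 Sonnet through the Amazon Bedrock Converse API with boto3 and prints the model's explanation.

import boto3

# Bedrock Runtime client in a Region where Claude 3 Sonnet is enabled (assumption: us-east-1)
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

plsql_snippet = """
CREATE OR REPLACE PROCEDURE raise_salary(p_emp_id NUMBER, p_pct NUMBER) IS
BEGIN
  UPDATE employees SET salary = salary * (1 + p_pct / 100) WHERE employee_id = p_emp_id;
  COMMIT;
END;
"""

prompt = (
    "Explain the following PL/SQL code step by step: what it does, "
    "its inputs and outputs, and any side effects.\n\n" + plsql_snippet
)

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": [{"text": prompt}]}],
    inferenceConfig={"maxTokens": 1024, "temperature": 0},
)

print(response["output"]["message"]["content"][0]["text"])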
Improve speed and reduce cost for generative AI workloads with a persistent semantic cache in Amazon MemoryDB
In this post, we present the concepts needed to use a persistent semantic cache in MemoryDB with Knowledge Bases for Amazon Bedrock, and the steps to create a chatbot application that uses the cache. We use MemoryDB as the caching layer for this use case because it delivers the fastest vector search performance at the highest recall rates among popular vector databases on AWS. We use Knowledge Bases for Amazon Bedrock as a vector database because it implements and maintains the RAG functionality for our application without the need to write additional code.
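To make the caching pattern concrete, here is a minimal sketch of the read path only. The cache object stands in for the MemoryDB vector-search logic the post builds (its get and put helpers are hypothetical placeholders), and the knowledge base ID and model ARN are assumptions.

import boto3

# Bedrock Agent Runtime client for Knowledge Bases for Amazon Bedrock
bedrock_agent = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

KB_ID = "EXAMPLEKBID"  # assumption: your knowledge base ID
MODEL_ARN = "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0"

def answer(question: str, cache) -> str:
    # 1. Look for a semantically similar question already answered in MemoryDB.
    #    cache.get is a hypothetical helper that embeds the question and runs a
    #    vector search against the MemoryDB index.
    cached = cache.get(question)
    if cached is not None:
        return cached  # cache hit: skip retrieval and generation entirely

    # 2. Cache miss: let Knowledge Bases for Amazon Bedrock retrieve context and generate an answer.
    resp = bedrock_agent.retrieve_and_generate(
        input={"text": question},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": KB_ID,
                "modelArn": MODEL_ARN,
            },
        },
    )
    text = resp["output"]["text"]

    # 3. Persist the question/answer pair so similar future questions are served from the cache.
    cache.put(question, text)
    return text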
Analyze blockchain data with natural language using Amazon Bedrock
Data within public blockchain networks such as Bitcoin and Ethereum can be accessed by anyone. However, accessing and making sense of this information has traditionally been a complex and technical undertaking. Much of the data is encoded and stored as bytes, rather than in a human-readable format. In this post, we introduce a solution that demonstrates how you can chat with blockchain data using Amazon Bedrock and the AWS Public Blockchain datasets. We discuss Amazon Bedrock, review the solution architecture, provide example prompts, share interesting findings, and go over how you can extend the solution to integrate with different data sources.
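One way to picture the flow is the sketch below, which is our own simplification rather than the post's code: a Claude model on Amazon Bedrock translates a natural-language question into SQL, and Amazon Athena runs it against the AWS Public Blockchain datasets. The database name, table layout, and S3 output location are placeholder assumptions.

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
athena = boto3.client("athena", region_name="us-east-1")

question = "How many Bitcoin blocks were mined on 2024-01-01?"

# Ask the model for a single SQL statement over an assumed Athena table layout.
gen = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{
        "role": "user",
        "content": [{"text": "Write one Athena SQL query (no explanation) for the table "
                             "btc.blocks(hash, number, timestamp, date) that answers: " + question}],
    }],
    inferenceConfig={"maxTokens": 512, "temperature": 0},
)
sql = gen["output"]["message"]["content"][0]["text"].strip().strip("`")

# Run the generated SQL against the public blockchain data registered in the Glue Data Catalog.
run = athena.start_query_execution(
    QueryString=sql,
    QueryExecutionContext={"Database": "btc"},                           # assumption
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},   # assumption
)
print("Athena query started:", run["QueryExecutionId"])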
Using knowledge graphs to build GraphRAG applications with Amazon Bedrock and Amazon Neptune
Retrieval Augmented Generation (RAG) is an innovative approach that combines the power of large language models with external knowledge sources, enabling more accurate and informative generation of content. Using knowledge graphs as sources for RAG (GraphRAG) yields numerous advantages: these graphs encapsulate a vast wealth of curated and interconnected information, enabling the generation of responses that are grounded in factual knowledge. In this post, we show you how to build GraphRAG applications using Amazon Bedrock and Amazon Neptune with the LlamaIndex framework.
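As a rough sketch of how these pieces can fit together: the class and argument names below follow the LlamaIndex Bedrock and Neptune integrations as we understand them and may differ by version, and the Neptune endpoint, model IDs, and document folder are assumptions rather than details from the post.

from llama_index.core import KnowledgeGraphIndex, SimpleDirectoryReader, StorageContext, Settings
from llama_index.llms.bedrock import Bedrock
from llama_index.embeddings.bedrock import BedrockEmbedding
from llama_index.graph_stores.neptune import NeptuneDatabaseGraphStore

# Use Amazon Bedrock models for generation and embeddings.
Settings.llm = Bedrock(model="anthropic.claude-3-sonnet-20240229-v1:0")
Settings.embed_model = BedrockEmbedding(model_name="amazon.titan-embed-text-v1")

# Point the graph store at an Amazon Neptune cluster (endpoint is an assumption).
graph_store = NeptuneDatabaseGraphStore(
    host="my-neptune-cluster.cluster-xxxx.us-east-1.neptune.amazonaws.com", port=8182
)
storage_context = StorageContext.from_defaults(graph_store=graph_store)

# Extract triples from source documents into the knowledge graph, then query it.
documents = SimpleDirectoryReader("./docs").load_data()
index = KnowledgeGraphIndex.from_documents(
    documents, storage_context=storage_context, max_triplets_per_chunk=5
)
print(index.as_query_engine().query("How are product A and supplier B related?"))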
Schneider Electric automates Salesforce account hierarchy management with generative artificial intelligence (AI) using Amazon Aurora and Amazon Bedrock
Schneider Electric is a leader in digital transformation in energy management and industrial automation. To manage customer account hierarchies in its CRM effectively at scale, Schneider Electric began applying generative artificial intelligence (AI) large language models (LLMs) in April 2023. The team created a solution that makes timely updates to customer account hierarchies in the CRM by linking customer account information to the correct parent company, based on the latest information retrieved from the internet and proprietary datasets. In this post, we explore further iterations of this project and how the team applied what they learned to the Salesforce CRM system using Amazon Aurora and Amazon Bedrock.
Build a FedRAMP compliant generative AI-powered chatbot using Amazon Aurora Machine Learning and Amazon Bedrock
In this post, we explore how to use Amazon Aurora PostgreSQL and Amazon Bedrock to build Federal Risk and Authorization Management Program (FedRAMP) compliant generative artificial intelligence (AI) applications using Retrieval Augmented Generation (RAG).
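For a flavor of the retrieval side of such a RAG application, the sketch below is a simplification under our own assumptions about table layout, embedding model, and credentials, not the post's implementation: it embeds a question with Amazon Titan on Bedrock and finds the closest document chunks in Aurora PostgreSQL using the pgvector extension.

import json
import boto3
import psycopg2

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def embed(text: str) -> list[float]:
    # Amazon Titan text embeddings (1,536 dimensions).
    resp = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v1",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(resp["body"].read())["embedding"]

question = "What encryption is required for data at rest?"
query_vector = embed(question)

# Assumed table: documents(id, content, embedding vector(1536)) with the pgvector extension enabled.
conn = psycopg2.connect(host="my-aurora-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com",
                        dbname="ragdb", user="app", password="***")
with conn.cursor() as cur:
    cur.execute(
        "SELECT content FROM documents ORDER BY embedding <=> %s::vector LIMIT 4",
        (json.dumps(query_vector),),
    )
    context = "\n".join(row[0] for row in cur.fetchall())

# The retrieved context would then be passed to a Bedrock model to generate the grounded answer.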
Executive Conversations: Putting generative AI to work in omnichannel customer service with Prashant Singh, Chief Operating Officer at LeadSquared
Prashant Singh, Chief Operating Officer at LeadSquared, joins Pravin Mittal, Director of Engineering for Amazon Aurora, for a discussion on using generative artificial intelligence (AI) to scale their omnichannel customer service application while controlling costs. LeadSquared helps customers build truly connected, empowered, and self-reliant sales and service organizations with the power of automation. This Executive […]
How LeadSquared accelerated chatbot deployments with generative AI using Amazon Bedrock and Amazon Aurora PostgreSQL
LeadSquared is a new-age software as a service (SaaS) customer relationship management (CRM) platform that provides end-to-end sales, marketing, and onboarding solutions. Tailored for sectors like BFSI (banking, financial services, and insurance), healthcare, education, real estate, and more, LeadSquared provides a personalized approach for businesses of every scale. LeadSquared Service CRM goes beyond basic ticketing, […]
A generative AI use case using Amazon RDS for SQL Server as a vector data store
Generative artificial intelligence (AI) has reached a turning point, capturing everyone's imagination. Integrating generative capabilities into customer-facing services and solutions has become critical. Current generative AI offerings are the culmination of a gradual evolution from machine learning and deep learning models. The leap from deep learning to generative AI is enabled by foundation models. Amazon […]
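As one possible shape of this pattern, the hedged sketch below (not the post's code; the table, connection string, and the choice to rank similarity client-side are our assumptions) stores Bedrock embeddings as JSON text in a SQL Server table and compares them in the application layer.

import json
import boto3
import numpy as np
import pyodbc

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def embed(text: str) -> np.ndarray:
    resp = bedrock.invoke_model(modelId="amazon.titan-embed-text-v1",
                                body=json.dumps({"inputText": text}))
    return np.array(json.loads(resp["body"].read())["embedding"])

# Assumed table: dbo.kb_chunks(id INT, content NVARCHAR(MAX), embedding NVARCHAR(MAX)),
# where embedding holds a JSON array written at ingestion time.
conn = pyodbc.connect("DRIVER={ODBC Driver 18 for SQL Server};"
                      "SERVER=my-rds-sqlserver.xxxx.us-east-1.rds.amazonaws.com;"
                      "DATABASE=kb;UID=app;PWD=***")

query_vec = embed("How do I reset a customer password?")
rows = conn.execute("SELECT content, embedding FROM dbo.kb_chunks").fetchall()

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank chunks by cosine similarity to the query embedding and keep the top matches.
ranked = sorted(rows, key=lambda r: cosine(query_vec, np.array(json.loads(r.embedding))), reverse=True)
top_context = [r.content for r in ranked[:3]]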
Use LangChain and vector search on Amazon DocumentDB to build a generative AI chatbot
Amazon DocumentDB (with MongoDB compatibility) offers benefits to customers building modern applications across multiple domains, including healthcare, gaming, and finance. As a fully managed document database, it can improve user experiences through flexibility, scalability, high performance, and advanced functionality. Enterprises that use the JSON data model supported by Amazon DocumentDB can achieve faster application development […]
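To give a sense of the building blocks involved, here is a minimal sketch; the class and argument names follow the langchain_community DocumentDB integration as we understand it and may vary across versions, and the connection string, database, collection, and index name are assumptions.

from pymongo import MongoClient
from langchain_community.embeddings import BedrockEmbeddings
from langchain_community.vectorstores.documentdb import DocumentDBVectorSearch

# Connect to the Amazon DocumentDB cluster (connection string is an assumption).
client = MongoClient("mongodb://app:***@my-docdb.cluster-xxxx.us-east-1.docdb.amazonaws.com:27017/"
                     "?tls=true&retryWrites=false")
collection = client["ragdb"]["chunks"]

# Embed sample texts with Amazon Titan via Bedrock and store them in DocumentDB.
# A vector index on the embedding field is assumed to already exist on the collection.
embeddings = BedrockEmbeddings(model_id="amazon.titan-embed-text-v1")
texts = [
    "Refunds are processed within 5 business days.",
    "Support is available 24/7 via chat.",
]
vectorstore = DocumentDBVectorSearch.from_texts(
    texts, embeddings, collection=collection, index_name="vector_index"
)

# Retrieve the chunks most similar to a user question.
results = vectorstore.similarity_search("What is the refund policy?", k=2)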