AWS Database Blog
Optimize Amazon Aurora PostgreSQL auto scaling performance with automated cache pre-warming
When clients start running queries on a new Amazon Aurora replica, they notice longer runtimes for the first few query executions; this is due to the replica's cold cache. As the database runs more queries, the cache gets populated and clients see faster runtimes. In this post, we focus on how to address the cold cache so clients connecting through a load-balanced endpoint get a consistent experience, regardless of whether the replicas are scaled automatically or manually. We also look at other caching solutions, such as Amazon ElastiCache, a fully managed Memcached-, Redis-, and Valkey-compatible service, that can further improve the overall experience for latency-sensitive applications and, in some situations (such as higher cache hit rates), lead to less frequent auto scaling events for the Aurora read replicas.
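The pre-warming idea can be approximated with PostgreSQL's pg_prewarm extension, which Aurora PostgreSQL supports. The following is a minimal sketch rather than the post's solution: the connection string and the list of tables to warm are placeholder assumptions.

```python
# Illustrative sketch only: warm a new Aurora PostgreSQL replica's buffer cache with pg_prewarm.
# The connection string and table names are placeholder assumptions, not values from the post.
import psycopg2

REPLICA_DSN = "host=my-replica-endpoint.example.amazonaws.com dbname=appdb user=admin"  # placeholder
HOT_TABLES = ["public.orders", "public.order_items"]  # tables the first queries are expected to hit

def prewarm_replica(dsn: str, tables: list[str]) -> None:
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute("CREATE EXTENSION IF NOT EXISTS pg_prewarm;")
            for table in tables:
                # Load the table's blocks into shared buffers so early queries hit a warm cache
                cur.execute("SELECT pg_prewarm(%s);", (table,))
                print(f"Pre-warmed {table}: {cur.fetchone()[0]} blocks loaded")

if __name__ == "__main__":
    prewarm_replica(REPLICA_DSN, HOT_TABLES)
```

One way to use a script like this is to trigger it whenever auto scaling adds a replica, so the replica is warmed before it starts receiving traffic from the reader endpoint.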
Amazon DynamoDB data models for generative AI chatbots
Amazon DynamoDB is well suited to storing chat history and metadata because of its scalability and low latency. Chat history can be stored efficiently, allowing quick access to past interactions, and user-specific metadata, such as preferences and session information, can be stored to personalize responses and manage active sessions, enhancing the overall chatbot experience. In this post, we explore how to design an optimal schema for chatbots, whether you’re building a small proof-of-concept application or deploying a large-scale production system.
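To make the schema discussion concrete, here is one possible single-table layout as a sketch only, not necessarily the model the post arrives at: the table name ChatHistory, the key names PK and SK, and the attribute layout are assumptions for illustration.

```python
# Minimal sketch of one possible chat-history schema; names and layout are assumptions.
import time
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("ChatHistory")  # assumed table with partition key "PK" and sort key "SK"

def put_message(session_id: str, role: str, text: str) -> None:
    """Store one chat turn; the sort key orders messages chronologically."""
    table.put_item(
        Item={
            "PK": f"SESSION#{session_id}",
            "SK": f"MSG#{int(time.time() * 1000):013d}",
            "role": role,  # "user" or "assistant"
            "text": text,
        }
    )

def get_history(session_id: str, limit: int = 20):
    """Fetch the most recent messages for a session, newest first."""
    resp = table.query(
        KeyConditionExpression=Key("PK").eq(f"SESSION#{session_id}")
        & Key("SK").begins_with("MSG#"),
        ScanIndexForward=False,
        Limit=limit,
    )
    return resp["Items"]
```

Keying messages by session and sorting by timestamp keeps each conversation in a single partition, so retrieving recent context is a single, cheap query.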
Build a scalable, context-aware chatbot with Amazon DynamoDB, Amazon Bedrock, and LangChain
Amazon DynamoDB, Amazon Bedrock, and LangChain can provide a powerful combination for building robust, context-aware chatbots. In this post, we explore how to use LangChain with DynamoDB to manage conversation history and integrate it with Amazon Bedrock to deliver intelligent, contextually aware responses. We break down the concepts behind the DynamoDB chat connector in LangChain, discuss the advantages of this approach, and guide you through the essential steps to implement it in your own chatbot.
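A minimal sketch of the kind of wiring described here uses LangChain's DynamoDB chat message history together with a Bedrock chat model. The table name SessionTable, the session ID, and the model ID are assumptions; the connector expects a table whose partition key is named SessionId.

```python
# Illustrative sketch only: LangChain's DynamoDB chat connector with Amazon Bedrock.
# Table name, session ID, and model ID are assumptions, not values from the post.
from langchain_aws import ChatBedrock
from langchain_community.chat_message_histories import DynamoDBChatMessageHistory

# Conversation history persisted in DynamoDB (table with partition key "SessionId")
history = DynamoDBChatMessageHistory(table_name="SessionTable", session_id="user-123")

llm = ChatBedrock(model_id="anthropic.claude-3-sonnet-20240229-v1:0")  # assumed model ID

def chat(user_input: str) -> str:
    history.add_user_message(user_input)
    # Pass the full stored conversation so the model answers with context
    reply = llm.invoke(history.messages)
    history.add_ai_message(reply.content)
    return reply.content

print(chat("What did I ask you about earlier?"))
```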
Use a DAO to govern LLM training data, Part 4: MetaMask authentication
In Part 1 of this series, we introduced the concept of using a decentralized autonomous organization (DAO) to govern the lifecycle of an AI model, focusing on the ingestion of training data. In Part 2, we created and deployed a minimalistic smart contract on the Ethereum Sepolia testnet using Remix and MetaMask, establishing a mechanism to govern which training data can be uploaded to the knowledge base and by whom. In Part 3, we set up Amazon API Gateway and deployed AWS Lambda functions to copy data from InterPlanetary File System (IPFS) to Amazon Simple Storage Service (Amazon S3) and start a knowledge base ingestion job, creating a seamless data flow from IPFS to the knowledge base. In this post, we demonstrate how to configure MetaMask authentication, create a frontend interface, and test the solution.
Use a DAO to govern LLM training data, Part 3: From IPFS to the knowledge base
In Part 1 of this series, we introduced the concept of using a decentralized autonomous organization (DAO) to govern the lifecycle of an AI model, focusing on the ingestion of training data. In Part 2, we created and deployed a minimalistic smart contract on the Ethereum Sepolia testnet using Remix and MetaMask, establishing a mechanism to govern which training data can be uploaded to the knowledge base and by whom. In this post, we set up Amazon API Gateway and deploy AWS Lambda functions to copy data from InterPlanetary File System (IPFS) to Amazon Simple Storage Service (Amazon S3) and start a knowledge base ingestion job.
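As a rough illustration of the Lambda function this part describes, the following sketch fetches a document from an IPFS gateway, copies it to Amazon S3, and starts a Bedrock knowledge base ingestion job. The gateway URL, environment variable names, bucket, knowledge base ID, and data source ID are all placeholder assumptions.

```python
# Hedged sketch of a Lambda handler: copy a document from an IPFS gateway to S3,
# then start a Bedrock knowledge base ingestion job. All names are placeholders.
import os
import urllib.request
import boto3

s3 = boto3.client("s3")
bedrock_agent = boto3.client("bedrock-agent")

IPFS_GATEWAY = os.environ.get("IPFS_GATEWAY", "https://ipfs.io/ipfs/")  # placeholder gateway
BUCKET = os.environ["KNOWLEDGE_BASE_BUCKET"]
KNOWLEDGE_BASE_ID = os.environ["KNOWLEDGE_BASE_ID"]
DATA_SOURCE_ID = os.environ["DATA_SOURCE_ID"]

def handler(event, context):
    cid = event["cid"]  # IPFS content identifier passed in by API Gateway
    with urllib.request.urlopen(f"{IPFS_GATEWAY}{cid}") as resp:
        body = resp.read()

    # Copy the document into the S3 data source backing the knowledge base
    s3.put_object(Bucket=BUCKET, Key=f"training-data/{cid}", Body=body)

    # Start ingestion so the new document gets indexed into the knowledge base
    job = bedrock_agent.start_ingestion_job(
        knowledgeBaseId=KNOWLEDGE_BASE_ID,
        dataSourceId=DATA_SOURCE_ID,
    )
    return {"statusCode": 200, "ingestionJobId": job["ingestionJob"]["ingestionJobId"]}
```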
Use a DAO to govern LLM training data, Part 2: The smart contract
In Part 1 of this series, we introduced the concept of using a decentralized autonomous organization (DAO) to govern the lifecycle of an AI model, specifically focusing on the ingestion of training data. In this post, we focus on the writing and deployment of the Ethereum smart contract that contains the outcome of the DAO decisions.
Use a DAO to govern LLM training data, Part 1: Retrieval Augmented Generation
Blockchain and generative AI are two technical fields that have received a lot of attention in recent years. There is an emerging set of use cases that can benefit from both technologies. In this four-part series, we build a solution that governs the training data ingestion process of an AI model using a smart contract and serverless components, and we guide you through the different steps to build it. In this post, we review the overall architecture of the solution and set up a large language model (LLM) knowledge base.
Load vector embeddings up to 67x faster with pgvector and Amazon Aurora
pgvector is the open source PostgreSQL extension for vector similarity search that powers generative artificial intelligence (AI) applications using techniques such as semantic search and retrieval-augmented generation (RAG). Amazon Aurora PostgreSQL-Compatible Edition has supported pgvector 0.5.1 since 2023. Amazon Aurora now supports pgvector version 0.7.0, which adds parallelism to improve the performance of building Hierarchical Navigable Small Worlds […]
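As a rough sketch of what loading and indexing can look like with the newer version (not the post's benchmark code), the following assumes a placeholder connection string, table name, and embedding dimension, and uses the parallel HNSW index build that pgvector 0.7.0 makes possible.

```python
# Minimal sketch, not the post's benchmark: create a vector table in Aurora PostgreSQL
# and build an HNSW index with parallel maintenance workers (pgvector 0.7.0+).
# Connection details, table name, and dimension are assumptions.
import psycopg2

with psycopg2.connect("host=my-aurora-cluster dbname=vectors user=admin") as conn:
    with conn.cursor() as cur:
        cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
        cur.execute("""
            CREATE TABLE IF NOT EXISTS documents (
                id bigint PRIMARY KEY,
                embedding vector(1536)  -- assumed embedding dimension
            );
        """)
        # Allow a parallel HNSW build, one of the pgvector 0.7.0 improvements
        cur.execute("SET maintenance_work_mem = '4GB';")
        cur.execute("SET max_parallel_maintenance_workers = 7;")
        cur.execute("""
            CREATE INDEX IF NOT EXISTS documents_embedding_idx
            ON documents USING hnsw (embedding vector_cosine_ops);
        """)
```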
How Dafiti migrated its most critical database to Amazon Aurora MySQL with minimal downtime and improved operational efficiency
In the dynamic world of digital retail, performance, resilience, and availability are not merely desirable qualities; they are essential. Recently, Dafiti, a leading fashion and lifestyle ecommerce conglomerate operating in Brazil, Argentina, Chile, and Colombia, undertook a significant transformation of its critical database infrastructure by migrating from self-managed MySQL Server 5.7 on Amazon EC2 to Amazon Aurora MySQL-Compatible Edition. This strategic move improved the resiliency and efficiency of its database operations. In this post, we show you why we chose Aurora MySQL-Compatible and how we migrated our critical database infrastructure.
Build a streaming ETL pipeline on Amazon RDS using Amazon MSK
Customers who host their transactional database on Amazon Relational Database Service (Amazon RDS) often seek architecture guidance on building streaming extract, transform, load (ETL) pipelines to destinations such as Amazon Redshift. This post outlines the architecture pattern for creating a streaming data pipeline using Amazon Managed Streaming for Apache Kafka (Amazon MSK). Amazon MSK offers a fully managed Apache Kafka service, enabling you to ingest and process streaming data in real time.
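To give a feel for the pattern, here is a deliberately simplified producer-side sketch that publishes recently changed rows from an assumed Amazon RDS for PostgreSQL database to an MSK topic. Broker addresses, topic, table, and query are placeholder assumptions, and a production pipeline would more likely rely on a managed change data capture connector running on MSK Connect.

```python
# Simplified sketch only: publish recently changed rows from RDS to an MSK topic
# for downstream loading into Amazon Redshift. All names are placeholders, and
# MSK authentication/TLS settings are omitted for brevity.
import json
import psycopg2
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers=["b-1.my-msk-cluster.example.amazonaws.com:9092"],  # placeholder brokers
    value_serializer=lambda v: json.dumps(v, default=str).encode("utf-8"),
)

with psycopg2.connect("host=my-rds-instance dbname=appdb user=etl") as conn:
    with conn.cursor() as cur:
        # Pull rows changed recently (placeholder incremental query)
        cur.execute(
            "SELECT id, status, updated_at FROM orders "
            "WHERE updated_at > now() - interval '5 minutes'"
        )
        for order_id, status, updated_at in cur:
            producer.send(
                "orders-changes",
                {"id": order_id, "status": status, "updated_at": updated_at},
            )

producer.flush()
```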