AWS Database Blog

How Dafiti migrated its most critical database to Amazon Aurora MySQL with minimal downtime and improved operational efficiency

In the dynamic world of digital retail, performance, resilience, and availability are not just desirable qualities; they are essential. Recently, Dafiti, a leading fashion and lifestyle ecommerce conglomerate operating in Brazil, Argentina, Chile, and Colombia, undertook a significant transformation of its critical database infrastructure by migrating from self-managed MySQL 5.7 on Amazon EC2 to Amazon Aurora MySQL-Compatible Edition. This strategic move improved the resiliency and efficiency of its database operations. In this post, we show you why we chose Aurora MySQL-Compatible and how we migrated our critical database infrastructure with minimal downtime.

Build a streaming ETL pipeline on Amazon RDS using Amazon MSK

Customers who host their transactional database on Amazon Relational Database Service (Amazon RDS) often seek architecture guidance on building streaming extract, transform, and load (ETL) pipelines to destinations such as Amazon Redshift. This post outlines an architecture pattern for building a streaming data pipeline using Amazon Managed Streaming for Apache Kafka (Amazon MSK). Amazon MSK offers a fully managed Apache Kafka service, enabling you to ingest and process streaming data in real time.
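As a rough illustration of the consumer half of such a pipeline, the following Python sketch reads change events from an MSK topic and micro-batches them into Amazon Redshift. The topic name, table schema, endpoints, and credentials are placeholders; the post itself covers the full pattern, including feeding changes from Amazon RDS into MSK.

```python
# Illustrative consumer stage only: topic, schema, and endpoints are assumed.
import json
import os

import psycopg2  # Amazon Redshift speaks the PostgreSQL wire protocol
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "orders-cdc",  # hypothetical topic fed by change data capture from RDS
    bootstrap_servers=["b-1.example.kafka.us-east-1.amazonaws.com:9092"],
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    enable_auto_commit=False,
    group_id="redshift-loader",
)

conn = psycopg2.connect(
    host="example.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password=os.environ["REDSHIFT_PASSWORD"],
)

batch = []
for message in consumer:
    batch.append(message.value)
    if len(batch) >= 500:  # load in micro-batches rather than row by row
        with conn.cursor() as cur:
            cur.executemany(
                "INSERT INTO staging.orders (order_id, status, updated_at) "
                "VALUES (%(order_id)s, %(status)s, %(updated_at)s)",
                batch,
            )
        conn.commit()
        consumer.commit()  # commit Kafka offsets only after the load succeeds
        batch.clear()
```

Committing offsets only after the Redshift load succeeds keeps the pipeline at-least-once; a staging table plus a dedup step downstream is a common way to absorb replays.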

Embed textual data in Amazon RDS for SQL Server using Amazon Bedrock

In Part 1 of this series, we covered how Retrieval Augmented Generation (RAG) can enhance responses in generative AI applications by combining domain-specific information with a foundation model (FM). However, we focused on the semantic search aspect of the solution, assuming that our vector store was already built and fully populated. In this post, we explore how to generate vector embeddings for Wikipedia data stored in a SQL Server database hosted on Amazon RDS. We use Amazon Bedrock to invoke the appropriate FM APIs and an Amazon SageMaker Jupyter notebook to orchestrate the overall process.
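As a preview of the embedding step, here is a minimal sketch using the Amazon Bedrock runtime API from boto3. The Amazon Titan embeddings model shown and the table being updated are illustrative choices, not necessarily the ones used in the post.

```python
import json

import boto3
import pyodbc  # connects to the RDS for SQL Server instance

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def embed(text: str) -> list[float]:
    # Invoke an Amazon Titan text embeddings model; the response body
    # contains an "embedding" vector for the input text.
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v1",
        body=json.dumps({"inputText": text}),
        contentType="application/json",
        accept="application/json",
    )
    return json.loads(response["body"].read())["embedding"]

# Hypothetical table layout: store the vector as JSON alongside the article.
conn = pyodbc.connect("DSN=rds-sqlserver;UID=app;PWD=...")
cur = conn.cursor()
cur.execute("SELECT id, body FROM dbo.wiki_articles")
for article_id, body in cur.fetchall():
    cur.execute(
        "UPDATE dbo.wiki_articles SET embedding = ? WHERE id = ?",
        json.dumps(embed(body)),
        article_id,
    )
conn.commit()
```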

Modernize your legacy databases with AWS data lakes, Part 1: Migrate SQL Server using AWS DMS

This is the first post in a three-part series in which we discuss the end-to-end process of building a data lake from a legacy SQL Server database. In this post, we show you how to build data pipelines that replicate data from Microsoft SQL Server to a data lake in Amazon Simple Storage Service (Amazon S3) using AWS Database Migration Service (AWS DMS). You can extend the solution presented in this post to other database engines such as PostgreSQL, MySQL, and Oracle.
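For a flavor of what the replication setup looks like, the boto3 sketch below creates a full-load-plus-CDC task. The endpoint and instance ARNs are placeholders, and the post walks through the complete configuration.

```python
import json

import boto3

dms = boto3.client("dms", region_name="us-east-1")

# Replicate every table in the dbo schema; ARNs below are placeholders.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-dbo",
            "object-locator": {"schema-name": "dbo", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="sqlserver-to-s3",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INST",
    MigrationType="full-load-and-cdc",  # initial load plus ongoing changes
    TableMappings=json.dumps(table_mappings),
)
```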

Performance testing MySQL migration environments using query playback and traffic mirroring – Part 3

This is the third post in a series where we dive deep into performance testing of MySQL environments being migrated from on premises. In Part 1, we compared the query playback and traffic mirroring approaches at a high level. In Part 2, we showed how to set up and configure query playback. In this post, we show you how to set up and configure traffic mirroring.
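One common way to mirror MySQL traffic is with a protocol-aware proxy such as ProxySQL, which can duplicate matched queries to a second hostgroup. The post covers the actual setup; the sketch below only illustrates the idea by pushing a catch-all mirroring rule through ProxySQL's admin interface, with hostgroup numbers and credentials assumed.

```python
import mysql.connector  # ProxySQL's admin interface speaks the MySQL protocol

admin = mysql.connector.connect(
    host="proxysql.example.com", port=6032, user="admin", password="..."
)
cur = admin.cursor()

# Assumed layout: hostgroup 0 serves production, hostgroup 2 holds the
# migration target. mirror_hostgroup sends a copy of each matched query
# to the target; its responses are discarded, so clients are unaffected.
cur.execute(
    "INSERT INTO mysql_query_rules "
    "(rule_id, active, match_digest, mirror_hostgroup, apply) "
    "VALUES (1, 1, '.', 2, 1)"
)
cur.execute("LOAD MYSQL QUERY RULES TO RUNTIME")
cur.execute("SAVE MYSQL QUERY RULES TO DISK")
```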

Performance testing MySQL migration environments using query playback and traffic mirroring – Part 2

This is the second post in a series where we dive deep into performance testing MySQL environments being migrated from on premises. In Part 1, we compared the query playback and traffic mirroring approaches at a high level. In this post, we dive into the setup and configuration of query playback.
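To make the idea concrete, here is a deliberately simplified Python sketch of playback: it replays captured statements against the migrated environment and records per-query latency for comparison. A real setup, like the one in the post, also handles concurrency, session state, and original query timing; the capture file format and connection details here are assumptions.

```python
import statistics
import time

import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="target-db.example.com", user="replay", password="...", database="app"
)
cur = conn.cursor()

latencies = []
# Assumed capture format: one SQL statement per line.
with open("captured_queries.sql") as capture:
    for statement in capture:
        statement = statement.strip()
        if not statement:
            continue
        start = time.perf_counter()
        cur.execute(statement)
        if cur.with_rows:
            cur.fetchall()  # drain result sets so the connection stays usable
        latencies.append(time.perf_counter() - start)

print(f"replayed {len(latencies)} queries, "
      f"p50 = {statistics.median(latencies) * 1000:.1f} ms")
```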

Performance testing MySQL migration environments using query playback and traffic mirroring – Part 1

In this series of posts, we dive deep into performance testing of MySQL environments being migrated from on premises to AWS. In this post, we review two approaches to testing migrated environments with traffic that is representative of real production traffic: capturing and replaying traffic using a playback application, and mirroring traffic as it comes in using a proxy. Both approaches let you validate your environment using realistic data access patterns.

Use HammerDB to run performance tests on Amazon RDS for Db2

To ensure that you properly size your Amazon RDS for Db2 instances and achieve comparable or better performance than your on-premises systems, you can use HammerDB. With this tool, you can generate online transaction processing (OLTP) workloads using tests derived from TPC-C, enabling you to compare performance between your on-premises Db2 systems and RDS for Db2. This post guides you through running HammerDB tests on RDS for Db2. We provide a step-by-step process for creating an RDS for Db2 instance using an AWS CloudFormation template, setting up a Db2 client, and configuring HammerDB. You learn how to run the tests and interpret the results to properly size your RDS for Db2 instances.
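The post provisions the instance with AWS CloudFormation; for orientation, a rough boto3 equivalent is sketched below. The identifier, instance class, storage, and credentials are illustrative, and RDS for Db2 requires an IBM Db2 license (bring your own license here).

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Illustrative values only; the post uses a CloudFormation template instead.
rds.create_db_instance(
    DBInstanceIdentifier="hammerdb-db2",
    Engine="db2-se",  # RDS for Db2 Standard Edition ("db2-ae" for Advanced)
    LicenseModel="bring-your-own-license",
    DBInstanceClass="db.r6i.large",
    AllocatedStorage=100,
    MasterUsername="db2admin",
    MasterUserPassword="...",  # store real credentials in Secrets Manager
    DBName="TPCC",
)
```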

Schedule modifications of Amazon RDS using Amazon EventBridge Scheduler and AWS Lambda

Amazon RDS provides a range of instance types optimized for different relational database use cases. You can modify provisioned instances manually from the Amazon RDS console or by using an API. When modifications need to happen on a recurring basis, such as scaling an instance up and down during predefined periods, you can automate the task. In this post, we present a solution using Amazon EventBridge Scheduler and AWS Lambda that allows you to schedule programmatic modifications of DB instances that carry specific tags.
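As a sketch of the Lambda half of such a solution, the handler below scales every DB instance that carries an assumed Scheduled=true tag to the instance class passed in by the EventBridge Scheduler payload. The tag key, payload shape, and names are illustrative, not the post's exact implementation.

```python
import boto3

rds = boto3.client("rds")

def lambda_handler(event, context):
    # EventBridge Scheduler passes the target class in the schedule's
    # payload, for example {"db_instance_class": "db.r6g.xlarge"}.
    target_class = event["db_instance_class"]
    for db in rds.describe_db_instances()["DBInstances"]:
        tags = rds.list_tags_for_resource(
            ResourceName=db["DBInstanceArn"]
        )["TagList"]
        if {"Key": "Scheduled", "Value": "true"} in tags:
            rds.modify_db_instance(
                DBInstanceIdentifier=db["DBInstanceIdentifier"],
                DBInstanceClass=target_class,
                ApplyImmediately=True,
            )
```

A second schedule pointing at the same function with a different payload scales the instances back down after the peak window.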

How Claroty improved database performance and scaled the Claroty xDome platform using Amazon Aurora Optimized Reads

Claroty is a leading provider of industrial cybersecurity solutions, protecting cyber-physical systems (CPS), such as industrial control systems, operational technology networks, and healthcare networks, from cyber threats. Claroty's business depends on efficiently managing large volumes of data and running complex queries to ensure a great user experience for customers working to reduce security risks to their cyber-physical systems. One key workload involves an API that gives users an interface to extract device, alert, and vulnerability data from the Claroty xDome dashboard, enabling seamless integration into their own data stores. In this post, we share how Claroty improved database performance and scaled the Claroty xDome platform using advanced features of Amazon Aurora, including Optimized Reads.