AWS Database Blog
Performance testing MySQL migration environments using query playback and traffic mirroring – Part 2
This is the second post in a series where we dive deep into performance testing of MySQL environments being migrated from on premises to AWS. In Part 1, we compared the query playback and traffic mirroring approaches at a high level. In this post, we dive into the setup and configuration of query playback.
Performance testing MySQL migration environments using query playback and traffic mirroring – Part 1
In this series of posts, we dive deep into performance testing of MySQL environments being migrated from on-premises to AWS. In this post, we review two different approaches to testing migrated environments with traffic that is representative of real production traffic: capturing and replaying traffic using a playback application, and mirroring traffic as it comes in using a proxy. This means you’re validating your environment using realistic data access patterns.
Use HammerDB to run performance tests on Amazon RDS for Db2
To ensure that you properly size your Amazon RDS for Db2 instances and achieve comparable or better performance than your on-premises systems, you can use HammerDB. By using this tool, you can generate OLTP-type workloads using TPC-C tests, enabling you to compare performance between your on-premises Db2 and Amazon RDS for Db2 systems. This post guides you through running HammerDB tests on RDS for Db2. We provide a step-by-step process for creating an RDS for Db2 instance using an AWS CloudFormation template, setting up a Db2 client, and configuring HammerDB. You learn how to execute tests and interpret results to properly size your RDS for Db2 instances.
Schedule modifications of Amazon RDS using Amazon EventBridge Scheduler and AWS Lambda
Amazon RDS provides different instance types optimized to fit different relational database use cases. You can modify provisioned instances manually from the Amazon RDS console or using an API. When modifications need to be done on a recurring basis, such as scaling an instance up and down during predefined periods of time, you can automate the task with Amazon EventBridge Scheduler and AWS Lambda. In this post, we present a solution using EventBridge Scheduler and Lambda that allows you to schedule a programmatic modification of a DB instance with specific tags.
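As a rough illustration of the kind of Lambda function such a schedule could invoke, the minimal sketch below uses boto3 to resize DB instances that carry a particular tag. The tag key/value, event payload field, and instance class are hypothetical placeholders, not values from the post.

```python
import boto3

rds = boto3.client("rds")

# Hypothetical tag used to mark instances that should be resized on a schedule.
TAG_KEY = "scheduled-scaling"
TAG_VALUE = "enabled"


def lambda_handler(event, context):
    """Resize tagged DB instances to the class passed in by the EventBridge schedule."""
    target_class = event["db_instance_class"]  # e.g. "db.r6g.xlarge", supplied in the schedule payload

    for instance in rds.describe_db_instances()["DBInstances"]:
        tags = rds.list_tags_for_resource(
            ResourceName=instance["DBInstanceArn"]
        )["TagList"]
        if any(t["Key"] == TAG_KEY and t["Value"] == TAG_VALUE for t in tags):
            rds.modify_db_instance(
                DBInstanceIdentifier=instance["DBInstanceIdentifier"],
                DBInstanceClass=target_class,
                ApplyImmediately=True,
            )
```

One EventBridge Scheduler schedule could pass a larger instance class before peak hours and a second schedule could pass a smaller one afterward, reusing the same function.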
How Claroty Improved Database Performance and Scaled the Claroty xDome Platform using Amazon Aurora Optimized Reads
Claroty is a leading provider of industrial cybersecurity solutions, protecting cyber-physical systems (CPS) such as industrial control systems, operational technology networks, and healthcare networks from cyber threats. Claroty’s business depends on efficiently managing large volumes of data and running complex queries to ensure a great user experience for customers working to reduce security risks to their cyber-physical systems. One key workload involves an API that provides users with an interface to extract device, alert, and vulnerability data from the Claroty xDome dashboard, enabling seamless integration into their own data stores. In this post, we share how Claroty improved database performance and scaled Claroty xDome using the advanced features of Amazon Aurora.
Unlock cost savings using compression with Amazon DocumentDB
In the post Reduce cost and improve performance by migrating to Amazon DocumentDB 5.0, we discussed various ways to reduce costs by migrating your workload to Amazon DocumentDB. In this post, we demonstrate how the document compression feature in Amazon DocumentDB reduces storage usage and I/O costs.
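For orientation, here is a minimal sketch of creating a compressed collection from Python, assuming an Amazon DocumentDB 5.0 cluster and the pymongo driver. The connection string, database, and collection names are placeholders, and the storage engine option mirrors the documented createCollection form for compressed collections.

```python
from pymongo import MongoClient

# Placeholder connection string; replace with your Amazon DocumentDB cluster
# endpoint, credentials, and TLS settings.
client = MongoClient("mongodb://user:password@docdb-cluster-endpoint:27017/?tls=true")
db = client["appdb"]

# Create a collection with document compression enabled
# (DocumentDB-specific storage engine option).
db.create_collection(
    "orders",
    storageEngine={"documentDB": {"compression": {"enable": True}}},
)
```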
Achieve a high-speed InnoDB purge on Amazon RDS for MySQL and Amazon Aurora MySQL
This post outlines a set of design and tuning strategies for a high-speed purge in an Amazon Relational Database Service (Amazon RDS) for MySQL DB instance and Amazon Aurora MySQL-Compatible Edition DB cluster. Purge is a housekeeping operation in a MySQL database. The InnoDB storage engine relies on it to clean up undo logs and delete-marked table records that are no longer needed for multiversion concurrency control (MVCC) or rollback operations.
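A common signal that purge is falling behind is a growing InnoDB history list length. The sketch below, with hypothetical connection details, reads that counter from INNODB_METRICS using the pymysql driver; it is a monitoring aid, not one of the tuning strategies from the post itself.

```python
import pymysql

# Hypothetical endpoint and credentials for an RDS for MySQL or Aurora MySQL database.
conn = pymysql.connect(
    host="mydb.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com",
    user="admin",
    password="secret",
    database="information_schema",
)

with conn.cursor() as cur:
    # trx_rsegs_history_len tracks the undo history list length; a steadily
    # growing value suggests purge is not keeping up with the workload.
    cur.execute(
        "SELECT `count` FROM INNODB_METRICS WHERE name = 'trx_rsegs_history_len'"
    )
    row = cur.fetchone()
    print(f"History list length: {row[0]}")

conn.close()
```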
Migrate or upgrade your like-to-like databases using AWS DMS homogeneous migration
In this post, we highlight common challenges encountered during homogeneous database migrations and how using AWS DMS homogeneous migration can help address them.
Visualize vector embeddings stored in Amazon Aurora PostgreSQL and explore semantic similarities
In this post, we show how you can visualize vector embeddings and explore semantic similarities. We use principal component analysis (PCA) for dimensionality reduction. PCA is a well-known technique that transforms high-dimensional data into a lower-dimensional space while preserving as much of the original variance as possible. By projecting data onto orthogonal axes called principal components, PCA enables you to visualize the underlying structure of the data in a more manageable form.
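The sketch below illustrates the general idea: fetch embeddings from a pgvector column, project them onto two principal components with scikit-learn, and plot the result. The connection details, table, and column names are hypothetical, and the pgvector values are parsed from their text form.

```python
import ast

import matplotlib.pyplot as plt
import numpy as np
import psycopg2
from sklearn.decomposition import PCA

# Hypothetical connection and schema; assumes embeddings are stored in a
# pgvector column on an Aurora PostgreSQL cluster.
conn = psycopg2.connect(
    host="aurora-cluster-endpoint", dbname="vectors", user="postgres", password="secret"
)
with conn.cursor() as cur:
    cur.execute("SELECT label, embedding::text FROM documents")
    rows = cur.fetchall()

labels = [r[0] for r in rows]
# pgvector values come back as text like '[0.1, 0.2, ...]'; parse into a matrix.
matrix = np.array([ast.literal_eval(r[1]) for r in rows])

# Project the high-dimensional embeddings onto the first two principal components.
coords = PCA(n_components=2).fit_transform(matrix)

plt.scatter(coords[:, 0], coords[:, 1])
for (x, y), label in zip(coords, labels):
    plt.annotate(label, (x, y), fontsize=8)
plt.title("Embeddings projected onto two principal components")
plt.show()
```

Points that land close together in the 2D projection correspond to embeddings that are similar in the original high-dimensional space, which is what makes the plot useful for exploring semantic similarity.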
Evaluating the right fit for your Amazon Aurora workloads: provisioned or Serverless v2
In this post, we cover important concepts of Aurora provisioned and Aurora Serverless v2 databases, including cost, performance, and features, and how to determine which to use for your workload type.