AWS Database Blog

Category: Intermediate (200)

Transition from AWS DMS to zero-ETL to simplify real-time data integration with Amazon Redshift

Zero-ETL integrations for Amazon Redshift automate data movement into Amazon Redshift, eliminating the need for traditional extract, transform, and load (ETL) pipelines. With zero-ETL integrations, you can reduce operational overhead, lower costs, and accelerate your data-driven initiatives, letting your organization focus more on deriving actionable insights and less on managing the complexities of data integration. In this post, we discuss best practices for migrating your ETL pipeline from AWS DMS to zero-ETL integrations for Amazon Redshift.
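
To give a sense of how little setup replaces a DMS pipeline, here is a minimal sketch of creating a zero-ETL integration with the RDS API via boto3. The integration name and the source and target ARNs are placeholders, not real resources:

    import boto3

    rds = boto3.client("rds")

    # One API call links an Aurora source to a Redshift target; AWS then
    # replicates changes continuously, with no ETL jobs to operate.
    response = rds.create_integration(
        IntegrationName="orders-zero-etl",  # hypothetical name
        SourceArn="arn:aws:rds:us-east-1:123456789012:cluster:orders-cluster",
        TargetArn="arn:aws:redshift-serverless:us-east-1:123456789012:namespace/example-ns",
    )
    print(response["Status"])  # e.g. "creating"; data begins flowing once active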

How Monzo Bank reduced cost of TTL from time series index tables in Amazon Keyspaces

At Monzo, we use Amazon Keyspaces (for Apache Cassandra) as our main operational database. Today, we store over 350 TB of data across more than 2,000 tables in Amazon Keyspaces, handling over 2,000,000 reads and 100,000 writes per second at peak. In this post, we share how we used a different mechanism for row expiry than the Time to Live setting in Amazon Keyspaces to reduce our operating costs for an index while preserving its semantics.
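
For context, the built-in mechanism the post contrasts against is the Time to Live setting in Amazon Keyspaces, which can be declared per table. A minimal sketch using the Python cassandra-driver follows; the keyspace, table, and endpoint are placeholders, and the TLS and SigV4 authentication that Keyspaces requires are omitted for brevity:

    from cassandra.cluster import Cluster

    # Sketch only: a real Amazon Keyspaces connection also needs TLS and
    # SigV4 (or service-specific) credentials, omitted here for brevity.
    session = Cluster(["cassandra.us-east-1.amazonaws.com"], port=9142).connect()

    # The built-in Time to Live setting: every row written to this table
    # expires automatically after 30 days (2,592,000 seconds), with each
    # expiring row metered as TTL delete activity.
    session.execute("""
        CREATE TABLE IF NOT EXISTS ledger.tx_by_day (
            day date,
            tx_id timeuuid,
            payload text,
            PRIMARY KEY (day, tx_id)
        ) WITH default_time_to_live = 2592000
    """)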

Reduce latency and cost in read-heavy applications using Amazon DynamoDB Accelerator

Amazon DynamoDB Accelerator (DAX) is a fully managed, in-memory cache for DynamoDB. By using DAX with DynamoDB, you can improve the latency for read requests in your application. In this post, we discuss how to improve latency and reduce cost when using DynamoDB for your read-heavy applications.
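
Because the DAX SDK mirrors the DynamoDB interface, adopting it is often a one-line change in application code. A minimal sketch for a Python application follows; the cluster endpoint and table name are placeholders:

    import boto3
    from amazondax import AmazonDaxClient

    # For comparison: a plain DynamoDB resource sends every read to the table.
    ddb = boto3.resource("dynamodb")

    # The DAX resource is a drop-in replacement: repeated reads are served
    # from the in-memory cache. The cluster endpoint below is a placeholder.
    dax = AmazonDaxClient.resource(
        endpoint_url="daxs://my-cluster.abc123.dax-clusters.us-east-1.amazonaws.com"
    )

    table = dax.Table("orders")  # hypothetical table name
    resp = table.get_item(Key={"order_id": "1234"})
    item = resp.get("Item")  # microsecond-scale latency on a cache hit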

FundApps’s journey from SQL Server to Amazon Aurora Serverless v2 with Babelfish

FundApps, founded in 2010, is one of the pioneers in the Regulatory Technology (RegTech) space, which includes compliance monitoring and reporting. FundApps decided to rearchitect their environment into a cloud-based architecture on AWS to better support the growth of their business. For more information, see Faster, cheaper, greener: Pick three — FundApps modernization journey. In this post, we focus on the persistence layer of the FundApps regulatory data service. You learn how FundApps improved the service's scalability, reduced costs, and streamlined operations by migrating from a SQL Server database to a cloud-centered solution combining Amazon Aurora Serverless v2 with Babelfish for Aurora PostgreSQL and Amazon Simple Storage Service (Amazon S3).
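
One reason this migration path works is that Babelfish exposes a SQL Server-compatible TDS endpoint, so existing SQL Server client code can often connect largely unchanged. A hedged sketch with a standard SQL Server driver; the hostname, credentials, and table are placeholders:

    import pymssql

    # Babelfish listens for the SQL Server wire protocol (TDS) on the Aurora
    # cluster endpoint, so a standard SQL Server driver can talk to it.
    conn = pymssql.connect(
        server="fundapps-aurora.cluster-abc123.eu-west-1.rds.amazonaws.com",
        port=1433,            # Babelfish's TDS port (1433 by default)
        user="app_user",      # placeholder credentials
        password="REDACTED",
        database="regdata",   # hypothetical database
    )
    cur = conn.cursor()
    cur.execute("SELECT TOP 5 * FROM rules")  # T-SQL syntax still works
    print(cur.fetchall())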

Shrink storage volumes for your RDS databases and optimize your infrastructure costs

Recently, Amazon RDS launched the ability to shrink storage volumes using Amazon RDS Blue/Green Deployments, a welcome addition to the growing list of use cases that Blue/Green Deployments supports. In this post, we cover how to use the new storage volume shrink feature in Amazon RDS Blue/Green Deployments to minimize the downtime required for the storage size reduction operation. We also review mechanisms for monitoring the progress of the storage shrink and best practices for arriving at the optimal storage size for your shrink task.
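
As a rough sketch of the workflow, a blue/green deployment can be created with a smaller storage target, after which RDS builds and synchronizes the green environment; the ARN and the 200 GiB target below are placeholders, and this assumes the TargetAllocatedStorage parameter introduced alongside the shrink capability:

    import boto3

    rds = boto3.client("rds")

    # Create a green environment whose allocated storage is smaller than the
    # blue environment's. Source ARN and target size are placeholders.
    rds.create_blue_green_deployment(
        BlueGreenDeploymentName="shrink-storage",
        Source="arn:aws:rds:us-east-1:123456789012:db:mydb",
        TargetAllocatedStorage=200,  # new, smaller size in GiB (assumed parameter)
    )
    # Once the green environment catches up, a switchover completes the change
    # with minimal downtime.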

Understand the benefits of physical replication in Amazon RDS for PostgreSQL Blue/Green Deployments

With the recent addition of physical replication as an option for RDS Blue/Green Deployments, you can overcome most of the limitations of logical replication. This makes physical replication particularly well suited for use cases like minor version upgrades, schema changes (DDL operations) in the blue environment, and storage adjustments. In this post, we delve into the advantages of using physical replication in RDS for PostgreSQL Blue/Green Deployments to simplify database operations and scale with application demands. We explore the key benefits of physical replication and provide a step-by-step guide to help you get started with this new capability.

Scaling to 70M users: How Flo Health optimized Amazon DynamoDB for cost and performance

Flo is the largest app in the Health and Fitness category worldwide, with 70 million monthly active users. In this post, we explain best practices Flo implemented to scale to more than 70 million monthly active users while achieving 60% cost efficiency with Amazon DynamoDB.

Using RDS Proxy with Amazon RDS Multi-AZ DB instance deployment to improve planned failover time

In this post, we demonstrate improvements in planned failover downtime for Multi-AZ DB instance deployments when used with Amazon RDS Proxy, the result of several optimizations made by RDS. During a failover, Amazon RDS automatically switches the roles of the primary and standby instances and updates the IP address associated with the database's DNS hostname, so client applications can keep their connection settings. This process, known as DNS propagation, can take up to 35 seconds to complete. RDS Proxy eliminates that delay by continuously monitoring both instances and routing connections to the new primary directly, bypassing DNS propagation and delivering a faster failover response that maximizes availability for client applications.
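
On the application side, the change is usually just the hostname: clients connect to the proxy's stable endpoint rather than the DB instance's DNS name. A minimal sketch for a PostgreSQL database; the endpoint, database name, and credentials are placeholders:

    import psycopg2

    # Connect to the RDS Proxy endpoint (placeholder) instead of the DB
    # instance hostname. The proxy tracks which instance is primary, so a
    # planned failover does not wait on client-side DNS propagation.
    conn = psycopg2.connect(
        host="myapp-proxy.proxy-abc123.us-east-1.rds.amazonaws.com",
        port=5432,
        dbname="appdb",        # hypothetical database
        user="app_user",       # placeholder credentials
        password="REDACTED",
        connect_timeout=5,
    )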

From caching to real-time analytics: Essential use cases for Amazon ElastiCache for Valkey

Valkey is an open-source, distributed, in-memory key-value data store that offers high-performance data retrieval and storage capabilities, making it an ideal choice for scalable, low-latency modern application development. Originating as a fork of Redis OSS following recent licensing changes, Valkey maintains full compatibility with its predecessor while providing a high-performance alternative for developers. Valkey […]
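
To make the caching use case concrete, here is a minimal cache-aside sketch against an ElastiCache for Valkey endpoint using the valkey-py client, which mirrors the familiar redis-py API. The endpoint, key scheme, and loader function are placeholders:

    import valkey

    # Endpoint is a placeholder; ElastiCache for Valkey requires TLS when
    # in-transit encryption is enabled on the cluster.
    cache = valkey.Valkey(
        host="my-valkey.abc123.use1.cache.amazonaws.com", port=6379, ssl=True
    )

    def load_user_from_database(user_id: str) -> bytes:
        return f"user-{user_id}".encode()  # stand-in for a real database read

    def get_user(user_id: str) -> bytes:
        # Cache-aside: try the cache first, fall back to the source of truth.
        key = f"user:{user_id}"
        cached = cache.get(key)
        if cached is not None:
            return cached
        value = load_user_from_database(user_id)
        cache.set(key, value, ex=300)  # keep the entry for 5 minutes
        return value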

New – Accelerate database modernization with generative AI using AWS Database Migration Service Schema Conversion

Today, we’re excited to announce a new generative AI feature in AWS DMS Schema Conversion (DMS SC). You can now use advanced language models to streamline and enhance your migration workflow. In this post, we discuss the key capabilities of DMS SC with generative AI and how to enable the feature to receive additional recommendations that reduce manual conversion effort and time.