AWS Database Blog

Archival solutions for Oracle database workloads in AWS: Part 1

This is a two-part series. In this post, we explain three archival solutions that allow you to archive Oracle data into Amazon Simple Storage Service (Amazon S3). In Part 2 of this series, we explain three archival solutions using native Oracle products and utilities. All of these options allow you to join current Oracle data with archived data.

Archival solutions for Oracle database workloads in AWS: Part 2

This post is a continuation of Archival solutions for Oracle database workloads in AWS: Part 1. Part 1 explains three archival solutions that allow you to archive Oracle data into Amazon Simple Storage Service (Amazon S3). In this post, we explain three archival solutions using native Oracle products and utilities.

Data modeling best practices to unlock the value of your time-series data

Amazon Timestream is a fast, scalable, and serverless time-series database service that makes it easier to store and analyze trillions of events per day. In this post, we guide you through the essential concepts of Timestream and demonstrate how to use them to make critical data modeling decisions. We walk you through how data modeling improves query performance and keeps usage cost-effective. We explore a practical example of modeling video streaming data, showcasing how these concepts are applied and the resulting benefits. Lastly, we provide additional best practices that directly or indirectly relate to data modeling.
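To make the modeling concepts concrete, here is a minimal, hypothetical sketch using boto3 (the database, table, dimension, and measure names are all made up) that writes a single multi-measure record for a video playback event:

```python
import time
import boto3

# Hypothetical database and table names; assumes they already exist.
DATABASE = "video_streaming"
TABLE = "playback_events"

client = boto3.client("timestream-write", region_name="us-east-1")

now_ms = str(int(time.time() * 1000))

# Dimensions identify the series (who or what emitted the measurement);
# a multi-measure record keeps related metrics for one event in a single row.
record = {
    "Dimensions": [
        {"Name": "session_id", "Value": "sess-1234"},
        {"Name": "device_type", "Value": "smart_tv"},
        {"Name": "region", "Value": "us-east-1"},
    ],
    "MeasureName": "playback_metrics",
    "MeasureValueType": "MULTI",
    "MeasureValues": [
        {"Name": "bitrate_kbps", "Value": "4500", "Type": "BIGINT"},
        {"Name": "buffering_ms", "Value": "120", "Type": "BIGINT"},
        {"Name": "dropped_frames", "Value": "3", "Type": "BIGINT"},
    ],
    "Time": now_ms,
    "TimeUnit": "MILLISECONDS",
}

client.write_records(DatabaseName=DATABASE, TableName=TABLE, Records=[record])
```

Which attributes become dimensions versus measures is exactly the kind of modeling decision the post examines, since it affects both query patterns and cost.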

Troubleshoot networking issues during database migration with the AWS DMS diagnostic support AMI

In this post, we introduce the key functionalities, architecture, and configurations of the AWS DMS diagnostic support AMI. Then, we show you how to launch the AMI with the proper networking configuration and AWS Identity and Access Management (IAM) permissions using AWS CloudFormation. Finally, we demonstrate how network latency can result in significant replication lag and how to use the AMI to diagnose the issue.
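The post walks through a CloudFormation-based setup; purely to illustrate the moving parts (AMI, subnet, security group, and instance profile), here is a hedged boto3 sketch with placeholder IDs that launches the diagnostic instance:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder values: the diagnostic support AMI ID for your Region, a subnet
# with network reachability to the source and target databases, a security
# group allowing outbound traffic to their ports, and an instance profile
# granting the IAM permissions the diagnostic tooling needs.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",          # AWS DMS diagnostic support AMI
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    IamInstanceProfile={"Name": "dms-diagnostic-instance-profile"},
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "dms-diagnostic-support"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```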

Improve availability of Amazon Neptune during engine upgrade using blue/green deployment

Amazon Neptune is a fully managed graph database service built for the cloud that makes it easier to build and run graph applications that work with highly connected datasets. Neptune provides built-in security, continuous backups, serverless compute, and integrations with other AWS services. Neptune supports in-place upgrades of clusters and database instances. You can upgrade a Neptune cluster either manually or automatically (during the database maintenance window).
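For context, a manual in-place upgrade is typically a single API call against the cluster; the following boto3 sketch (cluster identifier and engine version are placeholders) shows the shape of that operation, whose availability impact the blue/green approach in the post is designed to reduce:

```python
import boto3

neptune = boto3.client("neptune", region_name="us-east-1")

# ApplyImmediately=True starts the upgrade right away instead of waiting for
# the next maintenance window; AllowMajorVersionUpgrade is required when
# crossing a major engine version. Identifiers and versions are placeholders.
neptune.modify_db_cluster(
    DBClusterIdentifier="my-neptune-cluster",
    EngineVersion="1.3.2.0",
    ApplyImmediately=True,
    AllowMajorVersionUpgrade=True,
)
```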

Introducing the Advanced JDBC Wrapper Driver for Amazon Aurora

Modern applications are expected to be scalable and resilient. At the top of this list is scalability, which, depending on the size of the application workload, could mean the ability to handle millions of users on demand. For stateful applications such as ecommerce, financial services, and games, this means having highly available databases. With the release of Amazon Aurora in 2015, customers could run relational databases in an Aurora cluster comprising one writer and up to 15 low-latency reader nodes, enabling applications to scale reads significantly. However, as with any database supporting multiple instances, developers have had to build complex application logic to handle events such as switchover or failover.

Analyze Amazon DocumentDB workloads with Performance Insights

Amazon DocumentDB (with MongoDB compatibility) is a fast, reliable, and fully managed database service. Amazon DocumentDB makes it easy to set up, operate, and scale MongoDB API-compatible databases in the cloud. With Amazon DocumentDB, you can run the same application code and use the same drivers and tools that you use with MongoDB. Performance Insights adds to the existing Amazon DocumentDB monitoring features to illustrate your cluster performance and help you analyze any issues that affect it. With the Performance Insights dashboard, you can visualize the database load and filter the load by waits, query statements, hosts, or applications. Performance Insights is included with Amazon DocumentDB instances and stores seven days of performance history in a rolling window at no additional cost.
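Beyond the dashboard, the same database load data is exposed through the Performance Insights API; here is a minimal boto3 sketch (the resource identifier is a placeholder) that retrieves the average db.load metric for a DocumentDB instance over the past hour:

```python
from datetime import datetime, timedelta, timezone
import boto3

pi = boto3.client("pi", region_name="us-east-1")

end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)

# Identifier is the instance's DbiResourceId (placeholder below), not its name.
response = pi.get_resource_metrics(
    ServiceType="DOCDB",
    Identifier="db-EXAMPLERESOURCEID",
    MetricQueries=[{"Metric": "db.load.avg"}],
    StartTime=start,
    EndTime=end,
    PeriodInSeconds=60,
)

# Print one data point per minute of average database load.
for metric in response["MetricList"]:
    for point in metric["DataPoints"]:
        print(point["Timestamp"], point.get("Value"))
```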

Create custom PostgreSQL data types using Trusted Language Extensions

In this post, we demonstrate how to create custom PostgreSQL data types using Trusted Language Extensions (TLE). PostgreSQL ships with many robust data types that accommodate most customer workloads in a performant manner. Although PostgreSQL can deploy custom data types natively, introducing new data types at scale in architectures spanning multiple AWS accounts and Regions poses a unique challenge for builders. With TLE, you can create and manage your own custom data types, allowing quick deployment of PostgreSQL data types across your infrastructure in a secure and efficient manner.
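As a rough illustration of the workflow, here is a hedged Python sketch (connection details, extension name, and type name are all made up, and it assumes a cluster with pg_tle already enabled) that registers a TLE extension defining a simple domain-style custom type and then installs it:

```python
import psycopg2

# Connect to a PostgreSQL instance that already has pg_tle enabled
# (connection details below are illustrative).
conn = psycopg2.connect(host="mydb.example.com", dbname="postgres",
                        user="tle_admin", password="REPLACE_ME")
conn.autocommit = True

# The extension body is plain SQL; here it defines a domain-style custom type.
# pgtle.install_extension registers it so a database can later install it
# with a regular CREATE EXTENSION call.
extension_sql = """
SELECT pgtle.install_extension(
  'us_postal_types',
  '1.0',
  'Custom data type for US ZIP codes',
  $_pgtle_$
    CREATE DOMAIN us_zip_code AS text
      CHECK (VALUE ~ '^[0-9]{5}(-[0-9]{4})?$');
  $_pgtle_$
);
"""

with conn.cursor() as cur:
    cur.execute(extension_sql)
    cur.execute("CREATE EXTENSION us_postal_types;")
    cur.execute("CREATE TABLE IF NOT EXISTS addresses (id int, zip us_zip_code);")
```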

Manage Amazon RDS Custom for SQL Server CEV AMIs using EC2 Image Builder

Amazon Relational Database Service (Amazon RDS) Custom for SQL Server allows you to use a custom engine version (CEV) by providing an Amazon Machine Image (AMI) that includes specific customizations and database media installed on it. In this post, we provide guidance and best practices for building, testing, and distributing AMIs using an EC2 Image Builder pipeline.
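To show how the pieces fit together, here is a hedged boto3 sketch (pipeline ARN, AMI ID, and version string are placeholders, and the CEV naming scheme is an assumption) that triggers the Image Builder pipeline and then registers a resulting AMI as a CEV:

```python
import boto3

REGION = "us-east-1"

# Kick off the EC2 Image Builder pipeline that bakes the customized
# SQL Server AMI (the pipeline ARN is a placeholder).
imagebuilder = boto3.client("imagebuilder", region_name=REGION)
imagebuilder.start_image_pipeline_execution(
    imagePipelineArn="arn:aws:imagebuilder:us-east-1:111122223333:image-pipeline/rds-custom-sqlserver"
)

# Once the pipeline has produced and tested an AMI, register it as a custom
# engine version (CEV) for RDS Custom for SQL Server.
rds = boto3.client("rds", region_name=REGION)
rds.create_custom_db_engine_version(
    Engine="custom-sqlserver-ee",
    EngineVersion="15.00.4345.5.my-cev-1",    # assumed CEV naming scheme
    ImageId="ami-0123456789abcdef0",          # AMI produced by the pipeline
    Description="SQL Server with custom configuration baked in",
)
```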

Use a SQL Server secondary replica in an availability group as a source to migrate data into Amazon Redshift with AWS DMS

In this post, we show you how to use a secondary replica instance in an Always On availability group as the source for migrating your data from SQL Server to Amazon Redshift. Using a secondary replica reduces the utilization overhead on your busy primary replica.
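For orientation, here is a hedged boto3 sketch of the AWS DMS source endpoint (hostname, database name, and credentials are placeholders), where ServerName points at the readable secondary replica rather than the primary:

```python
import boto3

dms = boto3.client("dms", region_name="us-east-1")

# ServerName targets the readable secondary replica (placeholder host), not
# the availability group's primary; the remaining values are illustrative.
dms.create_endpoint(
    EndpointIdentifier="sqlserver-secondary-source",
    EndpointType="source",
    EngineName="sqlserver",
    ServerName="sql-ag-secondary.example.internal",
    Port=1433,
    DatabaseName="SalesDB",
    Username="dms_user",
    Password="REPLACE_ME",
)
```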