AWS Database Blog
Category: Advanced (300)
Build an ultra-low latency online feature store for real-time inferencing using Amazon ElastiCache for Redis
Over the last several years, the growth of machine learning (ML) has changed how businesses operate at their core, forcing the conversation of how to tightly integrate ML into critical decision points for users. ML can help businesses improve customer interactions, boost sales, and increase operating efficiency. This can […]
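As a taste of the pattern this post covers, here is a minimal sketch of low-latency feature retrieval with redis-py. The endpoint, key layout, and feature names are hypothetical placeholders; the post's own architecture may differ.

```python
import redis

# Hypothetical ElastiCache for Redis endpoint; in-transit encryption assumed enabled
r = redis.Redis(host="my-feature-store.xxxxxx.use1.cache.amazonaws.com",
                port=6379, ssl=True, decode_responses=True)

# Write (or refresh) a user's precomputed features as a Redis hash
r.hset("features:user:123", mapping={"clicks_7d": 42, "avg_basket_value": 31.5})

# Read all features for the user at inference time in a single round trip
features = r.hgetall("features:user:123")
print(features)  # {'clicks_7d': '42', 'avg_basket_value': '31.5'}
```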
How to deploy SQL Server Analysis Services on RDS Custom in a new VPC environment
A common use case for Amazon RDS Custom for SQL Server is to offload the undifferentiated heavy lifting of managing the underlying infrastructure of the cluster when running SQL Server Analysis Services (SSAS). SSAS is an analytical data engine, based on VertiPaq technology, used in decision support and business analytics. In this post, we explain how to launch and set up Amazon RDS Custom for SQL Server and enable SSAS to run in either Tabular or Multidimensional mode.
Migrate logins, database roles, users and object-level permissions to Amazon RDS for SQL Server using T-SQL
In this post, we explain how to migrate logins, database roles, users, and object-level permissions from on-premises SQL Server or SQL Server on Amazon Elastic Compute Cloud (Amazon EC2) to Amazon Relational Database Service (Amazon RDS) for SQL Server using T-SQL.
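The post itself works in T-SQL. As an illustrative aside (not the post's scripts), the sketch below uses pyodbc to enumerate the source server's logins that would need to be recreated on the target instance; the connection string and user are hypothetical.

```python
import pyodbc

# Hypothetical connection to the source SQL Server (on-premises or EC2)
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=source-host;"
    "UID=migration_user;PWD=change-me;TrustServerCertificate=yes;"
)

# List SQL and Windows logins to migrate, excluding internal ## accounts
cursor = conn.cursor()
cursor.execute("""
    SELECT name, type_desc
    FROM sys.server_principals
    WHERE type IN ('S', 'U', 'G') AND name NOT LIKE '##%'
    ORDER BY name
""")
for name, type_desc in cursor.fetchall():
    print(f"{type_desc}: {name}")
```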
Build aggregations for Amazon DynamoDB tables using Amazon DynamoDB Streams
In this post, we discuss how to perform aggregations on a DynamoDB table using Amazon DynamoDB Streams and AWS Lambda. The content includes a reference architecture, a step-by-step guide on enabling DynamoDB Streams for a table, sample code for implementing the solution within a scenario, and an accompanying AWS CloudFormation template for easy deployment and testing.
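As a minimal sketch of the aggregation pattern (the post's sample code and CloudFormation template may differ), the Lambda handler below consumes a DynamoDB stream and maintains a running per-customer total in a hypothetical OrderAggregates table; attribute names are assumptions.

```python
from decimal import Decimal

import boto3

# Hypothetical table that stores the precomputed aggregates
table = boto3.resource("dynamodb").Table("OrderAggregates")

def handler(event, context):
    """Triggered by DynamoDB Streams; folds new orders into per-customer aggregates."""
    for record in event["Records"]:
        if record["eventName"] != "INSERT":
            continue
        # Requires the stream to be configured with a NEW_IMAGE view type
        new_image = record["dynamodb"]["NewImage"]
        customer_id = new_image["customer_id"]["S"]
        amount = Decimal(new_image["amount"]["N"])
        # ADD is atomic, so concurrent invocations do not lose updates
        table.update_item(
            Key={"pk": f"CUSTOMER#{customer_id}"},
            UpdateExpression="ADD order_total :amt, order_count :one",
            ExpressionAttributeValues={":amt": amount, ":one": 1},
        )
```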
Amazon RDS for Oracle Transportable Tablespaces using RMAN
In this post, we show you how you can use the RMAN XTTS functionality to migrate from an Oracle database hosted on Amazon Elastic Compute Cloud (Amazon EC2) to Amazon RDS for Oracle. Combined with Amazon Elastic File System (Amazon EFS) integration, XTTS can help reduce the complexity of your migration strategy, reduce the number of data and backup copies required (along with the associated storage consumption), and reduce the application downtime associated with completing the migration of your data.
Model hierarchical automotive component data using Amazon DynamoDB
In this post, we discuss an automotive manufacturing information management use case where we store information about the components within a vehicle as well as the hierarchy between those components. For our automotive use case, we use Amazon DynamoDB to deliver transactional queries, such as component attribute lookups. We also show you how to use DynamoDB for larger responses, such as a recursive query for all the components in a vehicle. While recursive object relationships can be represented in graph databases and possibly in a traditional RDBMS (with complex joins), these deeper queries can also be served by DynamoDB.
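A minimal sketch of one way to model such a hierarchy (the post's actual schema may differ): a composite sort key that materializes the component path, so an entire subtree comes back from a single Query. Table and key names below are hypothetical.

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("VehicleComponents")  # hypothetical table

# Partition key = vehicle, sort key = materialized path through the hierarchy
table.put_item(Item={"PK": "VEHICLE#V1", "SK": "ENGINE#E1", "name": "2.0L engine"})
table.put_item(Item={"PK": "VEHICLE#V1", "SK": "ENGINE#E1#PISTON#P4", "part_no": "PST-0004"})

# One Query returns the engine and everything beneath it, with no recursion needed
resp = table.query(
    KeyConditionExpression=Key("PK").eq("VEHICLE#V1") & Key("SK").begins_with("ENGINE#E1")
)
for item in resp["Items"]:
    print(item["SK"])
```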
Use the DBMS_CLOUD package in Amazon RDS Custom for Oracle for direct Amazon S3 integration
In this post, we demonstrate how to use the DBMS_CLOUD package to transfer files between S3 buckets and directories in an RDS Custom for Oracle database. We also show how you can access data in Amazon S3 directly using Oracle features such as external tables and hybrid partitioned tables. The features provided by DBMS_CLOUD can vary between Oracle releases, so pay close attention to the steps in the post and reference the DBMS_CLOUD documentation for Oracle Database 19c. To avoid confusion, the option discussed in this post applies to RDS Custom for Oracle, not to RDS for Oracle; RDS for Oracle offers its own Amazon S3 integration.
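As a hedged illustration of the kind of call the post walks through (the credential name, bucket URI, endpoint, and directory below are placeholders, and DBMS_CLOUD must already be installed as the post describes), a GET_OBJECT call can be issued from python-oracledb like this:

```python
import oracledb

# Hypothetical RDS Custom for Oracle endpoint and credentials
conn = oracledb.connect(user="admin", password="change-me",
                        dsn="my-rds-custom-endpoint:1521/ORCL")

with conn.cursor() as cur:
    # Copy an object from an S3 bucket into an Oracle directory on the instance
    cur.execute("""
        BEGIN
          DBMS_CLOUD.GET_OBJECT(
            credential_name => 'S3_CRED',
            object_uri      => 'https://my-bucket.s3.us-east-1.amazonaws.com/exports/orders.dmp',
            directory_name  => 'DATA_PUMP_DIR');
        END;""")
```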
Archival solutions for Oracle database workloads in AWS: Part 1
This is a two-part series. In this post, we explain three archival solutions that allow you to archive Oracle data into Amazon Simple Storage Service (Amazon S3). In Part 2 of this series, we explain three archival solutions using native Oracle products and utilities. All of these options allow you to join current Oracle data with archived data.
Archival solutions for Oracle database workloads in AWS: Part 2
This post is a continuation of Archival solutions for Oracle database workloads in AWS: Part 1. Part 1 explains three archival solutions that allow you to archive Oracle data into Amazon Simple Storage Service (Amazon S3). In this post, we explain three archival solutions using native Oracle products and utilities.
Data modeling best practices to unlock the value of your time-series data
Amazon Timestream is a fast, scalable, and serverless time-series database service that makes it easier to store and analyze trillions of events per day. In this post, we guide you through the essential concepts of Timestream and demonstrate how to use them to make critical data modeling decisions. We walk you through how data modeling helps improve query performance and cost efficiency. We explore a practical example of modeling video streaming data, showcasing how these concepts are applied and the resulting benefits. Lastly, we share additional best practices that relate directly or indirectly to data modeling.
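To ground the modeling vocabulary (dimensions identify a time series, measures carry the observed values), here is a minimal write sketch with boto3 for the video streaming example; the database, table, and attribute names are hypothetical.

```python
import time

import boto3

ts = boto3.client("timestream-write")

# Dimensions identify the series; the measure is the value observed at a point in time
ts.write_records(
    DatabaseName="video_streaming",   # hypothetical database name
    TableName="playback_metrics",     # hypothetical table name
    Records=[{
        "Dimensions": [
            {"Name": "user_id", "Value": "u-42"},
            {"Name": "device_type", "Value": "fire_tv"},
        ],
        "MeasureName": "rebuffer_ms",
        "MeasureValue": "183",
        "MeasureValueType": "BIGINT",
        "Time": str(int(time.time() * 1000)),  # milliseconds since epoch (default TimeUnit)
    }],
)
```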