AWS Partner Network (APN) Blog
Category: Storage
Change Data Capture from On-Premises SQL Server to Amazon Redshift Target
Change Data Capture (CDC) is the technique of systematically tracking incremental changes in data at the source and subsequently applying those changes at the target to maintain synchronization. You can implement CDC in diverse scenarios using a variety of tools and technologies. Here, Cognizant uses a hypothetical retailer with a customer loyalty program to demonstrate how CDC can synchronize incremental changes in customer activity with the data already stored about each customer.
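As a minimal sketch of the pattern (not Cognizant's implementation), the snippet below polls SQL Server change tracking and applies the deltas to Redshift; the DSN, credentials, the loyalty_activity table, and the customer_id/points columns are all hypothetical.

```python
# Minimal CDC polling sketch: read rows changed since the last sync version
# from SQL Server change tracking, then apply them to Redshift. All names
# (DSN, host, table, columns) are hypothetical.
import pyodbc    # SQL Server source
import psycopg2  # Redshift speaks the PostgreSQL wire protocol

def sync_changes(last_version: int) -> int:
    src = pyodbc.connect("DSN=SqlServerSource")
    tgt = psycopg2.connect(host="redshift.example.com", port=5439,
                           dbname="loyalty", user="etl", password="...")
    cur = src.cursor()
    # CHANGETABLE exposes rows changed since last_version, with the
    # operation ('I'nsert, 'U'pdate, 'D'elete) in SYS_CHANGE_OPERATION.
    cur.execute("""
        SELECT ct.SYS_CHANGE_OPERATION, ct.customer_id, a.points
        FROM CHANGETABLE(CHANGES dbo.loyalty_activity, ?) AS ct
        LEFT JOIN dbo.loyalty_activity AS a ON a.customer_id = ct.customer_id
    """, last_version)
    with tgt, tgt.cursor() as out:
        for op, customer_id, points in cur.fetchall():
            # Redshift has no native upsert, so delete-then-insert is the
            # usual way to apply both updates and inserts.
            out.execute("DELETE FROM loyalty_activity WHERE customer_id = %s",
                        (customer_id,))
            if op != "D":
                out.execute("INSERT INTO loyalty_activity VALUES (%s, %s)",
                            (customer_id, points))
    cur.execute("SELECT CHANGE_TRACKING_CURRENT_VERSION()")
    return cur.fetchone()[0]  # persist this for the next polling cycle
```

At production scale you would typically stage the changes in Amazon S3 and apply them with a Redshift COPY rather than row-by-row statements.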
How to Orchestrate and Test Recovery Scenarios with N2WS
N2W Software (N2WS) offers backup and disaster recovery for anyone using AWS. It protects your environment against the inevitable by copying backups across AWS Regions or AWS accounts, with the ability to easily restore when needed. Version 3.0 of N2WS Backup and Recovery includes a new feature called Recovery Scenarios, which allows you to define different sequences of recovery for your protected AWS resources and test them with a Dry Run.
How to Use AWS Glue to Prepare and Load Amazon S3 Data for Analysis by Teradata Vantage
Customers want to use Teradata Vantage to analyze the data they have stored in Amazon S3, but AWS Glue, the AWS service that prepares and loads S3 data for analytics, does not natively support Teradata Vantage. To use AWS Glue to prepare and load data for analysis by Vantage, you need to rely on AWS Glue custom database connectors. Follow step-by-step instructions and learn how to set up Vantage and AWS Glue to run Vantage analytics on the data you have stored in Amazon S3.
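As a rough sketch of what the connector approach looks like inside a Glue PySpark job: the catalog names, JDBC URL, and credentials below are hypothetical, and the Teradata JDBC driver JAR must be supplied to the job (for example via the --extra-jars job parameter).

```python
# Hedged sketch of a Glue job reading cataloged S3 data and pushing it to
# Teradata Vantage over JDBC; all names and endpoints are hypothetical.
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

# Read the raw data a Glue crawler registered from Amazon S3.
frame = glue_context.create_dynamic_frame.from_catalog(
    database="s3_landing", table_name="sales_raw")

# Write to Vantage through Spark's generic JDBC sink.
(frame.toDF().write.format("jdbc")
    .option("url", "jdbc:teradata://vantage.example.com/DATABASE=analytics")
    .option("driver", "com.teradata.jdbc.TeraDriver")
    .option("dbtable", "sales_curated")
    .option("user", "glue_etl")
    .option("password", "***")  # in practice, pull from AWS Secrets Manager
    .mode("append")
    .save())
```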
Automated Migration of Multi-Tier Applications to AWS at Scale Using Veritas Cloud Mobility
For many organizations, the amount of data they own and manage is growing rapidly, as is the number of applications they’re responsible for. This growth, coupled with the heterogeneity of technology infrastructure, makes managing IT systems and applications more complex. Veritas solutions, including Veritas Cloud Mobility, help enterprises address information management challenges including backup and recovery, business continuity, software-defined storage, and information governance.
Protecting Your Amazon EBS Volumes at Scale with Clumio
Many AWS customers who use Amazon EBS to store persistent data need to back up that data, sometimes for long periods of time. Clumio’s SaaS solution protects Amazon EBS volumes from multiple AWS accounts through a single tag-based policy. Amazon EBS backups by Clumio are securely stored outside of your AWS account in the Clumio service built on AWS, protected by end-to-end encryption and stored in an immutable format.
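Because such policies target volumes by tag, the AWS-side preparation is simply consistent tagging. The boto3 snippet below applies a hypothetical backup-policy tag to every in-use EBS volume so a tag-keyed policy could pick them up; the tag key and value are illustrative, not a Clumio-defined convention.

```python
# Tag all in-use EBS volumes so a tag-based backup policy can pick them up.
# The "backup-policy" key and "gold-7yr" value are hypothetical examples.
import boto3

ec2 = boto3.client("ec2")

volume_ids = []
for page in ec2.get_paginator("describe_volumes").paginate(
        Filters=[{"Name": "status", "Values": ["in-use"]}]):
    volume_ids.extend(v["VolumeId"] for v in page["Volumes"])

if volume_ids:
    ec2.create_tags(
        Resources=volume_ids,
        Tags=[{"Key": "backup-policy", "Value": "gold-7yr"}],
    )
    print(f"Tagged {len(volume_ids)} volumes")
```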
Microsoft SQL Standard Clustering Across AWS Availability Zones with Zadara Storage as a Service
With Zadara offering Storage-as-a-Service across Availability Zones, the platform’s centralized storage services make it possible to connect multiple Microsoft SQL Server (MSSQL) instances in a standard Windows Server Failover Cluster model to a single set of shared storage volumes. This removes the need for MSSQL Enterprise Edition licensing and for doubling up Amazon EBS disks across Amazon EC2 instances. In this post, explore the use of highly available MSSQL Standard clustering on AWS with Zadara.
Improving Dataset Query Time and Maintaining Flexibility with Amazon Athena and Amazon Redshift
Analyzing large datasets can be challenging, especially if you haven’t considered the characteristics of the data and what you’re ultimately looking to achieve. There are a number of factors organizations need to consider in order to build systems that are flexible, affordable, and fast. Here, experts from CloudZero walk through how to use AWS services to analyze customer billing data and provide value to end users.
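As a taste of the approach, here is a hedged sketch of querying partitioned billing data with Amazon Athena from Python; the billing.line_items table, its columns, and the S3 results bucket are hypothetical, not CloudZero's actual schema.

```python
# Run an Athena query over partitioned billing data and print the results.
# Database, table, columns, and the results bucket are hypothetical.
import time
import boto3

athena = boto3.client("athena")

run = athena.start_query_execution(
    QueryString="""
        SELECT customer_id, SUM(unblended_cost) AS monthly_cost
        FROM billing.line_items
        WHERE billing_period = '2020-06'
        GROUP BY customer_id
        ORDER BY monthly_cost DESC
        LIMIT 20
    """,
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
qid = run["QueryExecutionId"]

# Athena is asynchronous: poll until the query reaches a terminal state.
while True:
    state = athena.get_query_execution(
        QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    for row in rows[1:]:  # rows[0] is the header row
        print([col.get("VarCharValue") for col in row["Data"]])
```

Partitioning by billing period keeps each scan, and therefore each query's cost, proportional to the data actually needed.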
Turning Data into a Key Enterprise Asset with a Governed Data Lake on AWS
Data and analytics success relies on providing analysts and data end users with quick, easy access to accurate, quality data. Enterprises need a high-performing, cost-efficient data architecture that supports demand for data access while providing the data governance and management capabilities required by IT. A governed data lake on AWS delivers this data management excellence, capturing quality data and making it available to analysts in a fast and cost-effective way.
How Insurity Architected ClaimsXPress for High Availability and Resiliency on AWS
ISVs serving the insurance industry can take advantage of AWS infrastructure and services to architect solutions that meet the industry’s demands for availability and resiliency, without requiring large up-front capital expenditures. Insurity ClaimsXPress is a claims management system built on AWS that provides a redundant, secure, multi-region architecture. In this post, we outline how Insurity architected its ClaimsXPress solution on AWS to deliver an enterprise-grade SaaS solution for the commercial insurance market.
MongoDB Atlas Data Lake Lets Developers Create Value from Rich Modern Data
With the proliferation of cost-effective storage options such as Amazon S3, there should be no reason you can’t keep your data forever; the catch is that, at this volume, it can be difficult to create value in a timely and efficient way. MongoDB’s Atlas Data Lake enables developers to mine their data for insights with more storage options and the speed and agility of the AWS Cloud. It provides a serverless, parallelized compute platform that gives you a powerful and flexible way to analyze and explore your data on Amazon S3.
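Because Atlas Data Lake exposes S3-backed data through a standard MongoDB endpoint, ordinary driver code can query it. Below is a minimal pymongo sketch under that assumption; the connection string, database, collection, and field names are all hypothetical.

```python
# Query S3-backed data through an Atlas Data Lake endpoint with pymongo.
# The URI, database, collection, and field names are hypothetical.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://user:password@datalake0.example.mongodb.net/?ssl=true")
events = client["lake"]["clickstream"]  # virtual collection mapped to S3 files

# A standard aggregation pipeline, executed against the data lake.
pipeline = [
    {"$match": {"event": "purchase"}},
    {"$group": {"_id": "$sku", "orders": {"$sum": 1}}},
    {"$sort": {"orders": -1}},
    {"$limit": 10},
]
for doc in events.aggregate(pipeline):
    print(doc)
```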