AWS Partner Network (APN) Blog
Category: Amazon Simple Storage Service (S3)
Protect Your SaaS Data and Leverage it to Gain Insights Using Own and AWS
The 3-2-1 backup strategy protects data by keeping three copies on two different storage media, with at least one copy offsite. As companies adopt SaaS applications, they need strategies to back up critical SaaS data. Own Company (formerly OwnBackup) enables SaaS data backups to Amazon S3, providing data autonomy, meeting recovery objectives, and allowing data analysis.
Unify Analytics Leveraging Amazon Athena and Teradata for Robust Query Federation
The Amazon Athena Teradata Connector enables Athena to query data in Teradata Vantage using SQL, and comprises two AWS Lambda functions for metadata and record reading. This post describes deploying the connector, creating a Lambda layer for the Teradata JDBC driver, and running queries on Teradata from Athena, including a federated query joining Teradata and S3 data. This provides a scalable, serverless way to analyze data across different data stores without data duplication.
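A federated query like the one the post describes can be sketched with boto3. This is a minimal illustration, not the post's exact code: the connector catalog name (`teradata_lambda`), database, table, and column names are all assumptions chosen for the example.

```python
def build_federated_query(td_catalog: str = "teradata_lambda") -> str:
    """Join a Teradata table (exposed through the Athena connector's
    catalog) with an S3-backed table in AwsDataCatalog. All database,
    table, and column names are illustrative placeholders."""
    return f"""
    SELECT o.order_id, o.order_total, c.segment
    FROM "{td_catalog}"."retail"."orders" AS o
    JOIN "AwsDataCatalog"."analytics"."customer_segments" AS c
      ON o.customer_id = c.customer_id
    WHERE o.order_date >= DATE '2024-01-01'
    """

def run_query(sql: str, output_s3: str) -> str:
    """Submit the query to Athena and return the query execution ID."""
    import boto3  # imported lazily so the query builder is testable offline
    athena = boto3.client("athena")
    resp = athena.start_query_execution(
        QueryString=sql,
        ResultConfiguration={"OutputLocation": output_s3},
    )
    return resp["QueryExecutionId"]
```

Athena routes the Teradata-side table scan to the connector's Lambda functions and performs the join itself, so no data is copied into S3 beforehand.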
How to Analyze Fastly Content Delivery Network Logs with Amazon QuickSight Powered by Generative BI
A content delivery network (CDN) caches content closer to users and reduces load times. Monitoring CDN performance is crucial for optimizing user experience, and this post demonstrates building an Amazon QuickSight dashboard with generative AI capabilities to gain insights from Fastly CDN logs in Amazon S3. It covers configuring real-time log streaming, using AWS Glue to catalog data, accelerating dashboard creation using natural language, and creating rich data stories for stakeholders.
The Composable CDP: Activating Data from Amazon Redshift to 200+ Tools Using Hightouch
Customer data is critical for modern digital organizations, but it is often scattered across tools, which makes it hard to act on. Historically, customer data platforms (CDPs) aimed to consolidate data for insights and activation, but organizations now prefer a composable CDP architecture on AWS to sync data from Amazon Redshift and Amazon S3 into downstream tools. Hightouch facilitates this composable CDP approach, making it easy to activate AWS data across channels without engineering work.
How Leidos Standardized its Application Logging into Amazon Security Lake with LOIS
As systems generate ever-increasing volumes of data, making sense of it is critical. Application logs vary by application and are not standardized. Leidos addresses this using the Open Cybersecurity Schema Framework (OCSF) and Amazon Security Lake via the Leidos OCSF Integration Suite (LOIS), which bridges applications to generate OCSF-compliant messages and ingests them into Amazon Security Lake for analysis and visualization.
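The core of such a bridge is mapping an application's native log record onto an OCSF event class. A minimal sketch, not LOIS itself: the input log shape and product names are assumptions, while the `class_uid`, `category_uid`, and `type_uid` conventions follow the OCSF schema (here the Authentication class, `class_uid` 3002).

```python
import time

def to_ocsf_authentication(app_log: dict) -> dict:
    """Map a hypothetical application login record to an OCSF
    Authentication event. Input field names are illustrative."""
    class_uid, activity_id = 3002, 1  # Authentication / Logon
    return {
        "category_uid": 3,                        # Identity & Access Management
        "class_uid": class_uid,
        "activity_id": activity_id,
        "type_uid": class_uid * 100 + activity_id,  # OCSF type_uid convention
        "severity_id": 1,                         # Informational
        "time": app_log.get("timestamp_ms", int(time.time() * 1000)),
        "status_id": 1 if app_log.get("success") else 2,  # Success / Failure
        "user": {"name": app_log.get("username")},
        "metadata": {
            "version": "1.1.0",
            "product": {"name": app_log.get("app", "example-app"),
                        "vendor_name": "Example"},
        },
    }
```

Events shaped this way can then be written as Parquet to the Security Lake custom-source location for querying alongside natively supported sources.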
How to Accelerate Interface Development with Skuid’s No-Code Studio on AWS
Skuid by Nintex is a low-code platform for rapidly building enterprise web apps. This post demonstrates using Skuid to connect to Amazon S3, listing bucket contents in a table, and enabling upload, download, and delete actions. With just a few configuration steps and zero coding, Skuid integrates data from services like S3 into polished, branded experiences, and streamlines building cloud-native apps without compromising power or flexibility.
Optimize Spatial Data Management and Analytics with Ellipsis Drive and Amazon S3
Spatial data creates data management challenges. Ellipsis Drive on Amazon S3 addresses common pain points: the lack of scalable ingestion into a data lake, of interoperable searchability for analytics, and of on-demand rendering. Benefits include scalability; time savings on management and transformation through automated ingestion and structuring; faster querying using patent-pending archives; and instant access to data to feed models and apps.
Filter and Stream Logs from Amazon S3 Logging Buckets into Splunk Using AWS Lambda
This post showcases a way to filter and stream logs from centralized Amazon S3 logging buckets to Splunk using a push mechanism leveraging AWS Lambda. The push mechanism offers benefits such as lower operational overhead, lower costs, and automated scaling. We'll provide instructions and sample Lambda code that filters virtual private cloud (VPC) flow logs with the "action" flag set to "REJECT" and pushes them to Splunk via a Splunk HTTP Event Collector (HEC) endpoint.
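The pattern can be sketched as a Lambda handler triggered by S3 put events: read the gzipped flow log object, keep only REJECT lines, and POST each to HEC. This is an illustrative sketch, not the post's sample code; the environment variable names are assumptions, and the field parsing assumes the default VPC flow log format (where "action" is the 13th column).

```python
import gzip
import json
import os
import urllib.request

def filter_reject_lines(flow_log_text: str) -> list:
    """Keep only default-format VPC flow log lines whose 'action'
    field (13th column) is REJECT; the header line is skipped
    because its 13th column is the literal word 'action'."""
    rejects = []
    for line in flow_log_text.splitlines():
        fields = line.split()
        if len(fields) >= 14 and fields[12] == "REJECT":
            rejects.append(line)
    return rejects

def handler(event, context):
    """Triggered by S3 object-created events on the logging bucket."""
    import boto3  # imported lazily so the filter is testable offline
    s3 = boto3.client("s3")
    hec_url = os.environ["SPLUNK_HEC_URL"]    # assumed, e.g. https://splunk:8088/services/collector
    headers = {"Authorization": f"Splunk {os.environ['SPLUNK_HEC_TOKEN']}"}
    for rec in event["Records"]:
        bucket = rec["s3"]["bucket"]["name"]
        key = rec["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        text = gzip.decompress(body).decode("utf-8")
        for line in filter_reject_lines(text):
            payload = json.dumps({"event": line, "sourcetype": "aws:vpcflowlog"})
            req = urllib.request.Request(hec_url, payload.encode(), headers)
            urllib.request.urlopen(req)
```

Filtering inside the function means only REJECT events cross the wire to Splunk, which is where the cost and overhead savings come from.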
Building a Data Lakehouse with Amazon S3 and Dremio on Apache Iceberg Tables
Learn how to implement a data lakehouse using Amazon S3 and Dremio on Apache Iceberg, which enables data teams to quickly, easily, and safely keep up with data and analytics changes. This helps businesses realize fast turnaround times to process the changes end-to-end. Dremio is an AWS Partner whose data lake engine delivers fast query speed and a self-service semantic layer operating directly against S3 data.
Automating Secure and Scalable Website Deployment on AWS with Amazon CloudFront and AWS CDK
There is no easier way to run HTTPS-enabled static websites on AWS than by using Amazon CloudFront and Amazon S3. In this post, we’ll look at automating website deployment on AWS using AWS Cloud Development Kit (AWS CDK) and TypeScript. We’ll use the architecture that combines CloudFront as the content delivery network, AWS Certificate Manager for secure certificate provisioning, Amazon S3 for reliable website hosting, and Amazon Route 53 as the domain name system.
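The post's stack is written in TypeScript; since CDK is multi-language, the same architecture can be sketched in Python. This is an infrastructure configuration sketch under assumptions (construct IDs and the hosted-zone lookup are illustrative), not the post's actual stack. Note that a CloudFront certificate must live in us-east-1.

```python
from aws_cdk import (
    Stack, RemovalPolicy,
    aws_s3 as s3,
    aws_cloudfront as cloudfront,
    aws_cloudfront_origins as origins,
    aws_certificatemanager as acm,
    aws_route53 as route53,
    aws_route53_targets as targets,
)
from constructs import Construct

class StaticSiteStack(Stack):
    """CloudFront + S3 + ACM + Route 53 static site, per the post's
    architecture. Deploy this stack with env region us-east-1 so the
    ACM certificate is valid for CloudFront."""

    def __init__(self, scope: Construct, id: str, *, domain_name: str, **kwargs):
        super().__init__(scope, id, **kwargs)

        # Existing public hosted zone for the domain (assumed to exist).
        zone = route53.HostedZone.from_lookup(self, "Zone", domain_name=domain_name)

        # DNS-validated TLS certificate from AWS Certificate Manager.
        cert = acm.Certificate(
            self, "Cert",
            domain_name=domain_name,
            validation=acm.CertificateValidation.from_dns(zone),
        )

        # Private bucket holding the website content.
        bucket = s3.Bucket(self, "SiteBucket", removal_policy=RemovalPolicy.DESTROY)

        # CloudFront distribution fronting the bucket over HTTPS.
        dist = cloudfront.Distribution(
            self, "Dist",
            default_behavior=cloudfront.BehaviorOptions(
                origin=origins.S3Origin(bucket),
                viewer_protocol_policy=cloudfront.ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
            ),
            domain_names=[domain_name],
            certificate=cert,
            default_root_object="index.html",
        )

        # Alias record pointing the domain at the distribution.
        route53.ARecord(
            self, "AliasRecord",
            zone=zone,
            target=route53.RecordTarget.from_alias(targets.CloudFrontTarget(dist)),
        )
```

With `cdk deploy`, this provisions the whole chain in one step; uploading content (e.g. via `aws_s3_deployment.BucketDeployment`) completes the site.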