AWS Big Data Blog
Category: Announcements
Accelerate queries on Apache Iceberg tables through AWS Glue auto compaction
In this post, we explore new features of the AWS Glue Data Catalog, which now supports improved automatic compaction of Iceberg tables for streaming data, making it straightforward for you to keep your transactional data lakes consistently performant. Enabling automatic compaction on Iceberg tables reduces metadata overhead and improves query performance.
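To make this concrete, here is a minimal boto3 sketch of turning on automatic compaction for a Data Catalog table; the account ID, database, table, and IAM role ARN are placeholders, and the create_table_optimizer and list_table_optimizer_runs calls are the Glue table optimizer APIs this feature builds on:

```python
import boto3

glue = boto3.client("glue")

# Enable automatic compaction for an Iceberg table registered in the
# Glue Data Catalog (all names and ARNs below are placeholders).
glue.create_table_optimizer(
    CatalogId="111122223333",
    DatabaseName="iceberg_db",
    TableName="streaming_events",
    Type="compaction",
    TableOptimizerConfiguration={
        "roleArn": "arn:aws:iam::111122223333:role/GlueOptimizerRole",
        "enabled": True,
    },
)

# Inspect recent compaction runs to confirm the optimizer is doing work.
runs = glue.list_table_optimizer_runs(
    CatalogId="111122223333",
    DatabaseName="iceberg_db",
    TableName="streaming_events",
    Type="compaction",
)
print(runs["TableOptimizerRuns"])
```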
Recap of Amazon Redshift key product announcements in 2024
Amazon Redshift made significant strides in 2024, enhancing price-performance, enabling data lakehouse architectures by blurring the boundaries between data lakes and data warehouses, simplifying ingestion, accelerating near real-time analytics, and incorporating generative AI capabilities to build natural language-based applications and boost user productivity. This blog post provides a comprehensive overview of the major product innovations and enhancements made to Amazon Redshift in 2024.
Introducing AWS Glue 5.0 for Apache Spark
Today, we are launching AWS Glue 5.0, a new version of AWS Glue that accelerates data integration workloads in AWS. AWS Glue 5.0 upgrades the engines to Apache Spark 3.5.2 and Python 3.11, giving you newer Spark and Python releases so you can develop, run, and scale your data integration workloads and get insights faster. This post describes what’s new in AWS Glue 5.0, performance improvements, key highlights on Spark and related libraries, and how to get started on AWS Glue 5.0.
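If you want to try the new version right away, a minimal boto3 sketch looks like the following; the job name, role ARN, and script location are placeholders, and existing jobs can be moved the same way by updating their GlueVersion with update_job:

```python
import boto3

glue = boto3.client("glue")

# Create a Spark ETL job pinned to Glue 5.0 (Spark 3.5.2, Python 3.11).
# Job name, role ARN, and script location are placeholders.
glue.create_job(
    Name="sales-aggregation-glue5",
    Role="arn:aws:iam::111122223333:role/GlueJobRole",
    GlueVersion="5.0",
    Command={
        "Name": "glueetl",
        "ScriptLocation": "s3://amzn-s3-demo-bucket/scripts/sales_aggregation.py",
        "PythonVersion": "3",
    },
    WorkerType="G.1X",
    NumberOfWorkers=10,
)
```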
Simplify data integration with AWS Glue and zero-ETL to Amazon SageMaker Lakehouse
AWS has introduced zero-ETL integration support from external applications to AWS Glue, simplifying data integration for organizations. This new feature allows for seamless replication of data from popular platforms like Salesforce, ServiceNow, and Zendesk into Amazon SageMaker Lakehouse and Amazon Redshift. This blog post demonstrates a use case involving ServiceNow data integration, outlining the process of setting up a connector, creating a zero-ETL integration, and verifying both initial data load and change data capture (CDC). It also highlights the advantages of using Apache Iceberg for data versioning and time travel capabilities within zero-ETL integrations.
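To illustrate the time travel piece, here is a hedged sketch that queries the replicated data as of an earlier point in time through Athena; the servicenow_incidents table, lakehouse_db database, and results bucket are assumed names used only for illustration:

```python
import boto3

athena = boto3.client("athena")

# Read the replicated ServiceNow table as it looked at a past point in time,
# using Iceberg time travel in Athena (table, database, and bucket names
# below are placeholders).
response = athena.start_query_execution(
    QueryString="""
        SELECT sys_id, state, sys_updated_on
        FROM servicenow_incidents
        FOR TIMESTAMP AS OF TIMESTAMP '2024-12-01 10:00:00 UTC'
    """,
    QueryExecutionContext={"Database": "lakehouse_db"},
    ResultConfiguration={"OutputLocation": "s3://amzn-s3-demo-bucket/athena-results/"},
)
print(response["QueryExecutionId"])
```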
Introducing AWS Glue Data Catalog automation for table statistics collection for improved query performance on Amazon Redshift and Amazon Athena
The AWS Glue Data Catalog now automates generating statistics for new tables. These statistics are integrated with the cost-based optimizer (CBO) in Amazon Redshift Spectrum and Amazon Athena, resulting in improved query performance and potential cost savings. In this post, we discuss how the Data Catalog automates table statistics collection and how you can use it to enhance your data platform’s efficiency.
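For tables not yet covered by the automation, the same statistics can be collected on demand. The following is a minimal sketch using the Glue column statistics task APIs, with the database, table, and role as placeholders:

```python
import boto3

glue = boto3.client("glue")

# Trigger a column statistics collection task for one table
# (database, table, and role ARN are placeholders).
run = glue.start_column_statistics_task_run(
    DatabaseName="sales_db",
    TableName="orders",
    Role="arn:aws:iam::111122223333:role/GlueStatsRole",
)

# Poll the task to see whether the statistics have been generated.
status = glue.get_column_statistics_task_run(
    ColumnStatisticsTaskRunId=run["ColumnStatisticsTaskRunId"]
)
print(status["ColumnStatisticsTaskRun"]["Status"])
```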
Introducing generative AI troubleshooting for Apache Spark in AWS Glue (preview)
This post demonstrates how generative AI troubleshooting for Spark in AWS Glue helps your day-to-day Spark application debugging. It simplifies the debugging process for your Spark applications by using generative AI to automatically identify the root cause of failures and provides actionable recommendations to resolve the issues.
Introducing generative AI upgrades for Apache Spark in AWS Glue (preview)
Today, we are excited to announce the preview of generative AI upgrades for Spark, a new capability that enables data practitioners to quickly upgrade and modernize their Spark applications running on AWS. Starting with Spark jobs in AWS Glue, this feature allows you to upgrade from an older AWS Glue version to AWS Glue version 4.0. This new capability reduces the time data engineers spend on modernizing their Spark applications, allowing them to focus on building new data pipelines and getting valuable analytics faster.
Accelerate your data workflows with Amazon Redshift Data API persistent sessions
In this post, we’ll walk through an example ETL process that uses session reuse to efficiently create, populate, and query temporary staging tables across the full data transformation workflow, all within the same persistent Amazon Redshift database session. You’ll learn best practices for optimizing ETL orchestration code, reducing job runtimes by eliminating connection overhead, and simplifying pipeline complexity.
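As a rough sketch of the pattern, the following boto3 calls open a session with the Redshift Data API, create a temporary staging table, and then reuse the same session so the temp table is still visible; the workgroup, database, table names, and SQL are placeholders:

```python
import boto3

rsd = boto3.client("redshift-data")

# First call opens a session and keeps it alive between Data API calls,
# so the temp staging table survives into the next statement.
# Workgroup, database, and table names are placeholders.
first = rsd.execute_statement(
    WorkgroupName="etl-workgroup",
    Database="dev",
    Sql="CREATE TEMP TABLE stage_orders AS "
        "SELECT * FROM raw_orders WHERE load_date = CURRENT_DATE;",
    SessionKeepAliveSeconds=300,  # keep the session open for up to 5 minutes
)

# Second call reuses the session by ID: no new connection is established,
# and stage_orders from the first statement is still available.
second = rsd.execute_statement(
    SessionId=first["SessionId"],
    Sql="INSERT INTO orders SELECT * FROM stage_orders;",
)
```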
From data lakes to insights: dbt adapter for Amazon Athena now supported in dbt Cloud
We are excited to announce that the dbt adapter for Amazon Athena is now officially supported in dbt Cloud. This integration enables data teams to efficiently transform and manage data using Athena with dbt Cloud’s robust features, enhancing the overall data workflow experience. In this post, we discuss the advantages of dbt Cloud over dbt Core, common use cases, and how to get started with Amazon Athena using the dbt adapter.
AWS Glue Data Catalog supports automatic optimization of Apache Iceberg tables through your Amazon VPC
The AWS Glue Data Catalog supports automatic table optimization of Apache Iceberg tables, including compaction, snapshot retention, and orphan file deletion. The data compaction optimizer continuously monitors table partitions and kicks off compaction when the number of files and their sizes exceed the configured thresholds. This post demonstrates how it works with step-by-step instructions.
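The following hedged sketch shows what enabling snapshot retention through a VPC might look like with boto3; the connection name, table and role names are placeholders, and the vpcConfiguration and retentionConfiguration field shapes are illustrative assumptions rather than a definitive reference:

```python
import boto3

glue = boto3.client("glue")

# Enable snapshot retention for an Iceberg table and route the optimizer
# through a VPC by referencing a Glue network connection.
# NOTE: the names, the role ARN, and the vpcConfiguration/retentionConfiguration
# shapes below are illustrative assumptions, not copied from documentation.
glue.create_table_optimizer(
    CatalogId="111122223333",
    DatabaseName="iceberg_db",
    TableName="streaming_events",
    Type="retention",
    TableOptimizerConfiguration={
        "roleArn": "arn:aws:iam::111122223333:role/GlueOptimizerRole",
        "enabled": True,
        "vpcConfiguration": {"glueConnectionName": "my-vpc-network-connection"},
        "retentionConfiguration": {
            "icebergConfiguration": {
                "snapshotRetentionPeriodInDays": 7,
                "numberOfSnapshotsToRetain": 3,
                "cleanExpiredFiles": True,
            }
        },
    },
)
```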