AWS Big Data Blog
An integrated experience for all your data and AI with Amazon SageMaker Unified Studio (preview)
Organizations are building data-driven applications to guide business decisions, improve agility, and drive innovation. Many of these applications are complex to build because they require collaboration across teams and the integration of data, tools, and services. Data engineers use data warehouses, data lakes, and analytics tools to load, transform, clean, and aggregate data. Data scientists use notebook environments (such as JupyterLab) to create predictive models for different target segments.
However, building advanced data-driven applications poses several challenges. First, it can be time consuming for users to learn multiple services’ development experiences. Second, because data, code, and other development artifacts like machine learning (ML) models are stored within different services, it can be cumbersome for users to understand how they interact with each other and make changes. Third, configuring and governing access to appropriate users for data, code, development artifacts, and compute resources across services is a manual process.
To address these challenges, organizations often build bespoke integrations between services, tools, and their own access management systems. Organizations want the flexibility to adopt the best services for their use cases while empowering their data practitioners with a unified development experience.
We launched Amazon SageMaker Unified Studio in preview to tackle these challenges. SageMaker Unified Studio is an integrated development environment (IDE) for data, analytics, and AI. Discover your data and put it to work using familiar AWS tools to complete end-to-end development workflows, including data analysis, data processing, model training, generative AI app building, and more, in a single governed environment. Create or join projects to collaborate with your teams, share AI and analytics artifacts securely, and discover and use your data stored in Amazon S3, Amazon Redshift, and more data sources through the Amazon SageMaker Lakehouse. As AI and analytics use cases converge, transform how data teams work together with SageMaker Unified Studio.
This post demonstrates how SageMaker Unified Studio unifies your analytic workloads.
The following screenshot illustrates SageMaker Unified Studio.
SageMaker Unified Studio provides the following quick-access menu options from Home:
- Discover:
- Data catalog – Find and query data assets and explore ML models.
- Generative AI playground – Experiment with the chat or image playground.
- Shared generative AI assets – Explore generative AI applications and prompts shared with you.
- Build with projects:
- ML and generative AI model – Build, train, and deploy ML and foundation models with fully managed infrastructure, tools, and workflows.
- Generative AI app development – Build generative AI apps and experiment with foundation models, prompts, agents, functions, and guardrails in Amazon Bedrock IDE.
- Data processing and SQL analytics – Analyze, prepare, and integrate data for analytics and AI using Amazon Athena, Amazon EMR, AWS Glue, and Amazon Redshift.
- Data and AI governance – Publish your data products to the catalog with glossaries and metadata forms. Govern access securely in the Amazon SageMaker Catalog built on Amazon DataZone.
With SageMaker Unified Studio, you now have a unified development experience across these services. You only need to learn these tools once and then you can use them across all services.
With SageMaker Unified Studio notebooks, you can use Python or Spark to interactively explore and visualize data, prepare data for analytics and ML, and train ML models. With the SQL editor, you can query data lakes, databases, data warehouses, and federated data sources. The SageMaker Unified Studio tools are integrated with Amazon Q, so you can quickly build, refine, and maintain applications with text-to-code capabilities.
In addition, SageMaker Unified Studio provides a unified view of an application’s building blocks such as data, code, development artifacts, and compute resources across services to approved users. This allows data engineers, data scientists, business analysts, and other data practitioners working from the same tool to quickly understand how an application works, seamlessly review each other’s work, and make the required changes.
Furthermore, SageMaker Unified Studio automates and simplifies access management for an application’s building blocks. After these building blocks are added to a project, they are automatically accessible to approved users from all tools—SageMaker Unified Studio configures any required service-specific permissions. With SageMaker Unified Studio, data practitioners can access all the capabilities of AWS purpose-built analytics, AI/ML, and generative AI services from a single unified development experience.
In the following sections, we walk through how to get started with SageMaker Unified Studio and some example use cases.
Create a SageMaker Unified Studio domain
Complete the following steps to create a new SageMaker Unified Studio domain:
- On the SageMaker platform console, choose Domains in the navigation pane.
- Choose Create domain.
- For How do you want to set up your domain?, select Quick setup (recommended for exploration).
Initially, no virtual private cloud (VPC) has been specifically set up for use with SageMaker Unified Studio, so you will see a dialog box prompting you to create a VPC.
- Choose Create VPC.
You’re redirected to the AWS CloudFormation console to deploy a stack to configure VPC resources.
- Choose Create stack, and wait for the stack creation to complete.
- Return to the SageMaker Unified Studio console, and inside the dialog box, choose the refresh icon.
- Under Quick setup settings, for Name, enter a name (for example, demo).
- For Domain Execution role, Domain Service role, Provisioning role, and Manage Access role, leave as default.
- For Virtual private cloud (VPC), verify that the new VPC you created in the CloudFormation stack is configured.
- For Subnets, verify that the new private subnets you created in the CloudFormation stack are configured.
- Choose Continue.
- For Create IAM Identity Center user, search for your SSO user by email address.
If you don’t have an IAM Identity Center instance, you will be prompted to enter your name in addition to your email address. This creates a new local IAM Identity Center instance.
- Choose Create domain.
Log in to SageMaker Unified Studio
Now that you have created your SageMaker Unified Studio domain, complete the following steps to log in:
- On the SageMaker platform console, open the details page of your domain.
- Choose the link for Amazon SageMaker Unified Studio URL.
- Log in with your SSO credentials.
You’re now signed in to SageMaker Unified Studio.
Create a project
The next step is to create a project. Complete the following steps:
- In SageMaker Unified Studio, choose Select a project on the top menu, and choose Create project.
- For Project name, enter a name (for example, demo).
- For Project profile, choose Data analytics and AI-ML model development.
- Choose Continue.
- Review the input, and choose Create project.
Wait for the project to be created; this can take about 5 minutes. SageMaker Unified Studio then navigates you to the project’s home page.
Now you can use a variety of tools for your analytics, ML, and AI workload. In the following sections, we provide a few example use cases.
Process your data through a multi-compute notebook
SageMaker Unified Studio provides a unified JupyterLab experience across different languages, including SQL, PySpark, and Scala Spark. It also supports unified access across different compute runtimes such as Amazon Redshift and Amazon Athena for SQL, Amazon EMR Serverless, Amazon EMR on EC2, and AWS Glue for Spark.
Complete the following steps to get started with the unified JupyterLab experience:
- Open your SageMaker Unified Studio project page.
- On the top menu, choose Build, and under IDE & APPLICATIONS, choose JupyterLab.
- Wait for the space to be ready.
- Choose the plus sign, and under Notebook, choose Python 3.
The following screenshot shows an example of the unified notebook page.
There are two dropdown menus on the top left of each cell. The Connection Type menu corresponds to connection types such as Local Python, PySpark, SQL, and so on.
The Compute menu corresponds to compute options such as Athena, AWS Glue, Amazon EMR, and so on.
- For the first cell, choose PySpark as the connection type and spark as the compute (which defaults to AWS Glue for Spark), enter the following code to initialize a SparkSession and create a DataFrame from an Amazon Simple Storage Service (Amazon S3) path, then run the cell:
- For the next cell, enter the following code to rename columns and filter the records, and run the cell:
- For the next cell, enter the following code to create another DataFrame from another S3 path, and run the cell:
- For the next cell, enter the following code to join the frames and apply custom SQL, and run the cell:
- For the next cell, enter the following code to write to a table, and run the cell (replace the AWS Glue database name with your project database name, and the S3 path with your project’s S3 path):
Now you have successfully ingested data to Amazon S3 and created a new table called venue_event_agg.
- In the next cell, switch the connection type from PySpark to SQL.
- Run the following SQL against the table (replace the AWS Glue database name with your project database name):
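A query along these lines fits this step; the database name is a placeholder, since your project database name starts with glue_db_:

```sql
-- Replace glue_db_xxxxxxxx with your project database name.
SELECT venuename, eventid_count
FROM glue_db_xxxxxxxx.venue_event_agg
ORDER BY eventid_count DESC;
```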
The following screenshot shows an example of the results.
The SQL ran on AWS Glue for Spark. Optionally, you can switch to other analytics engines like Athena by switching the compute.
Explore your data through a SQL Query Editor
In the previous section, you learned how the unified notebook works with different connection types and different compute engines. Next, let’s use the data explorer to explore the table you created using a notebook. Complete the following steps:
- On the project page, choose Data.
- Under Lakehouse, expand AwsDataCatalog.
- Expand your database, which starts with glue_db_.
- Choose venue_event_agg, then choose Query with Athena.
- Choose Run all.
The following screenshot shows an example of the query result.
As you enter text, the SQL query editor provides real-time autocomplete suggestions, covering DML and DDL statements, clauses, functions, and the schemas of your catalogs, such as databases, tables, and columns. This enables faster, more accurate query building.
When you finish editing the query, you can run it.
You can also open a generative SQL assistant powered by Amazon Q to help your query authoring experience.
For example, you can ask “Calculate the sum of eventid_count across all venues” in the assistant, and a query is automatically suggested. Choose Add to querybook to copy the suggested query into the querybook, then run it.
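For that prompt, the suggested query would look something like the following; this is illustrative, and the actual suggestion from Amazon Q may differ (the database name is a placeholder):

```sql
SELECT SUM(eventid_count) AS total_event_count
FROM glue_db_xxxxxxxx.venue_event_agg;
```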
Next, return to the original query and try a quick visualization to analyze the data distribution.
- Choose the chart view icon.
- Under Structure, choose Traces.
- For Type, choose Pie.
- For Values, choose eventid_count.
- For Labels, choose venuename.
The query result will display as a pie chart like the following example. You can customize the graph title, axis title, subplot styles, and more on the UI. The generated images can also be downloaded as PNG or JPEG files.
In this section, you learned how the data explorer and SQL query editor work with different visualizations.
Clean up
To clean up your resources, complete the following steps:
- Delete the AWS Glue table venue_event_agg and the S3 objects under the table’s S3 path.
- Delete the project you created.
- Delete the domain you created.
- Delete the VPC named SageMakerUnifiedStudioVPC.
Conclusion
In this post, we demonstrated how SageMaker Unified Studio (preview) unifies your analytics workloads, and walked through the end-to-end user experience for two use cases: notebooks and SQL queries. Discover your data and put it to work using familiar AWS tools to complete end-to-end development workflows, including data analysis, data processing, model training, generative AI app building, and more, in a single governed environment. Create or join projects to collaborate with your teams, share AI and analytics artifacts securely, and discover and use your data stored in Amazon S3, Amazon Redshift, and more data sources through Amazon SageMaker Lakehouse. As AI and analytics use cases converge, transform how data teams work together with SageMaker Unified Studio.
To learn more, visit Amazon SageMaker Unified Studio (preview).
About the Authors
Noritaka Sekiyama is a Principal Big Data Architect on the AWS Glue team. He works based in Tokyo, Japan. He is responsible for building software artifacts to help customers. In his spare time, he enjoys cycling with his road bike.
Chiho Sugimoto is a Cloud Support Engineer on the AWS Big Data Support team. She is passionate about helping customers build data lakes using ETL workloads. She loves planetary science and enjoys studying the asteroid Ryugu on weekends.
Zach Mitchell is a Sr. Big Data Architect. He works within the product team to enhance understanding between product engineers and their customers while guiding customers through their journey to develop data lakes and other data solutions on AWS analytics services.
Chanu Damarla is a Principal Product Manager on the Amazon SageMaker Unified Studio team. He works with customers around the globe to translate business and technical requirements into products that delight customers and enable them to be more productive with their data, analytics, and AI.