AWS Machine Learning Blog
Category: Developer Tools
Secure AWS CodeArtifact access for isolated Amazon SageMaker notebook instances
AWS CodeArtifact allows developers to connect internal code repositories to upstream repositories such as PyPI, Maven, or npm. AWS CodeArtifact is a powerful addition to CI/CD workflows on AWS, but it is similarly effective for codebases developed in Jupyter notebooks. This is a common development paradigm for machine learning (ML) developers who build and train […]
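As a rough illustration (not code from the post), the following sketch shows how a notebook could authenticate pip against a CodeArtifact repository with boto3; the domain, account ID, repository name, and Region are placeholders.

```python
# Sketch: point pip at a private CodeArtifact repository from a notebook.
# The domain, domain owner, repository, and Region values are placeholders.
import subprocess
import boto3

client = boto3.client("codeartifact", region_name="us-east-1")

# Short-lived token used to authenticate pip against the repository.
token = client.get_authorization_token(
    domain="my-domain", domainOwner="111122223333"
)["authorizationToken"]

endpoint = client.get_repository_endpoint(
    domain="my-domain",
    domainOwner="111122223333",
    repository="my-shared-packages",
    format="pypi",
)["repositoryEndpoint"]

# Build a pip index URL that embeds the token, then install through it
# instead of public PyPI.
index_url = endpoint.replace("https://", f"https://aws:{token}@").rstrip("/") + "/simple/"
subprocess.run(["pip", "install", "--index-url", index_url, "requests"], check=True)
```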
Improve your data science workflow with a multi-branch training MLOps pipeline using AWS
In this post, you will learn how to create a multi-branch training MLOps continuous integration and continuous delivery (CI/CD) pipeline using AWS CodePipeline and AWS CodeCommit, in addition to Jenkins and GitHub. I discuss the concept of experiment branches, where data scientists can work in parallel and eventually merge their experiment back into the main […]
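As a hedged sketch of the multi-branch idea, the hypothetical Lambda handler below starts a CodePipeline execution only for branches that follow an experiment/ naming convention; the pipeline name and branch convention are assumptions, not the post's code.

```python
# Sketch of a Lambda handler that reacts to CodeCommit branch events
# (delivered through EventBridge) and starts a training pipeline execution
# only for experiment branches, so data scientists can train in parallel.
import boto3

codepipeline = boto3.client("codepipeline")

def handler(event, context):
    detail = event.get("detail", {})
    branch = detail.get("referenceName", "")
    # Only branches following the "experiment/" naming convention trigger training.
    if detail.get("referenceType") == "branch" and branch.startswith("experiment/"):
        response = codepipeline.start_pipeline_execution(
            name="ml-training-pipeline"  # placeholder pipeline name
        )
        return {"started": response["pipelineExecutionId"], "branch": branch}
    return {"skipped": branch}
```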
Create a cross-account machine learning training and deployment environment with AWS CodePipeline
A continuous integration and continuous delivery (CI/CD) pipeline helps you automate steps in your machine learning (ML) applications such as data ingestion, data preparation, feature engineering, model training, and model deployment. A pipeline across multiple AWS accounts improves security, agility, and resilience because an AWS account provides a natural security and access boundary for your […]
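The following sketch illustrates one way a cross-account deployment step might look: assuming a deployment role in the production account and registering a SageMaker model there. The role ARNs, image URI, and S3 path are placeholders.

```python
# Sketch: assume a deployment role in a separate production account and
# create a SageMaker model there. All ARNs and URIs are placeholders.
import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::999988887777:role/CrossAccountDeployRole",
    RoleSessionName="cross-account-deploy",
)["Credentials"]

# SageMaker client scoped to the production account via the assumed role.
prod_sagemaker = boto3.client(
    "sagemaker",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

prod_sagemaker.create_model(
    ModelName="my-model",
    PrimaryContainer={
        "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference:latest",
        "ModelDataUrl": "s3://prod-model-artifacts/my-model/model.tar.gz",
    },
    ExecutionRoleArn="arn:aws:iam::999988887777:role/SageMakerExecutionRole",
)
```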
How Intel Olympic Technology Group built a smart coaching SaaS application by deploying pose estimation models – Part 1
February 9, 2024: Amazon Kinesis Data Firehose has been renamed to Amazon Data Firehose. Read the AWS What’s New post to learn more. The Intel Olympic Technology Group (OTG), a division within Intel focused on bringing cutting-edge technology to Olympic athletes, collaborated with AWS Machine Learning Professional Services (MLPS) to build a smart coaching software […]
Build a CI/CD pipeline for deploying custom machine learning models using AWS services
Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. SageMaker removes the heavy lifting from each step of the ML process to make it easier to develop high-quality ML artifacts. AWS Serverless Application Model (AWS SAM) is […]
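As an illustrative sketch of the serverless pattern, the hypothetical Lambda handler below fronts a SageMaker endpoint with the kind of function an AWS SAM template could package and deploy; the endpoint name, environment variable, and event shape are assumptions.

```python
# Sketch of a Lambda handler that forwards inference requests to a
# SageMaker endpoint. Endpoint name and request format are placeholders.
import json
import os
import boto3

runtime = boto3.client("sagemaker-runtime")

def handler(event, context):
    payload = json.loads(event["body"])  # assumes an API Gateway proxy event
    response = runtime.invoke_endpoint(
        EndpointName=os.environ.get("ENDPOINT_NAME", "my-custom-model-endpoint"),
        ContentType="application/json",
        Body=json.dumps(payload),
    )
    prediction = json.loads(response["Body"].read())
    return {"statusCode": 200, "body": json.dumps(prediction)}
```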
Private package installation in Amazon SageMaker running in internet-free mode
Amazon SageMaker Studio notebooks and Amazon SageMaker notebook instances are internet-enabled by default. However, many regulated industries, such as financial services, healthcare, and telecommunications, require that network traffic traverse their own Amazon Virtual Private Cloud (Amazon VPC) so they can restrict and control which traffic reaches the public internet. Although you can disable direct internet […]
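A minimal sketch, assuming a VPC subnet and security group already exist, of creating a notebook instance with direct internet access disabled so traffic stays inside the VPC; all identifiers below are placeholders.

```python
# Sketch: create a notebook instance that only communicates through your VPC.
# Subnet, security group, and role ARN are placeholders.
import boto3

sagemaker = boto3.client("sagemaker")

sagemaker.create_notebook_instance(
    NotebookInstanceName="isolated-notebook",
    InstanceType="ml.t3.medium",
    RoleArn="arn:aws:iam::111122223333:role/SageMakerExecutionRole",
    SubnetId="subnet-0abc1234",
    SecurityGroupIds=["sg-0abc1234"],
    DirectInternetAccess="Disabled",  # all traffic must traverse the VPC
)
```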
Training and serving H2O models using Amazon SageMaker
Model training and model serving are two essential steps in a successful end-to-end machine learning (ML) pipeline. These two steps often require different software and hardware setups to provide the best mix for a production environment. Model training is optimized for low cost, a feasible total run duration, scientific flexibility, and model interpretability, whereas model […]
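As a rough sketch of the training half, the example below launches a SageMaker training job from a custom H2O container and deploys the result; the image URI, role, and S3 paths are placeholders, and a production setup would typically build a separate serving image tuned for low latency.

```python
# Sketch: train with a custom H2O container and deploy the resulting model.
# Image URI, role ARN, and S3 paths are placeholders.
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"

estimator = Estimator(
    image_uri="111122223333.dkr.ecr.us-east-1.amazonaws.com/h2o-training:latest",
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/h2o/output",
    sagemaker_session=session,
)

# Start the training job against the training channel in S3.
estimator.fit({"train": "s3://my-bucket/h2o/train"})

# Deploy a real-time endpoint; by default this reuses the training image,
# whereas a hardened setup would supply a dedicated inference image.
predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
)
```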
Using the Amazon SageMaker Studio Image Build CLI to build container images from your Studio notebooks
The new Amazon SageMaker Studio Image Build convenience package allows data scientists and developers to easily build custom container images from their Studio notebooks via a new CLI. The CLI eliminates the need to manually set up and connect to Docker build environments when building container images in Amazon SageMaker Studio. Amazon SageMaker Studio […]
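As a small sketch of the convenience the CLI provides, the snippet below invokes sm-docker from a notebook via Python; it assumes the sagemaker-studio-image-build package is installed and the Studio execution role has the required AWS CodeBuild and Amazon ECR permissions.

```python
# Sketch: build a container image from the current Studio project directory
# using the Studio Image Build CLI (sm-docker). Assumes the package is
# installed and a Dockerfile exists in the working directory.
import subprocess

subprocess.run(["sm-docker", "build", "."], check=True)
```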