AWS Machine Learning Blog

Scaling Rufus, the Amazon generative AI-powered conversational shopping assistant, with over 80,000 AWS Inferentia and AWS Trainium chips, for Prime Day

In this post, we dive into the Rufus inference deployment using AWS chips and how this enabled one of the most demanding events of the year—Amazon Prime Day.

Efficient Pre-training of Llama 3-like model architectures using torchtitan on Amazon SageMaker

In this post, we collaborate with the PyTorch team at Meta to show how the torchtitan library accelerates and simplifies the pre-training of Meta Llama 3-like model architectures. We highlight key torchtitan features such as FSDP2, torch.compile integration, and FP8 support that optimize training efficiency.
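
As a rough illustration of the features named above (not torchtitan's own code), the sketch below applies FSDP2's fully_shard to each transformer block and then wraps the result with torch.compile. It assumes the script runs under torchrun with a process group already initialized, and that the model exposes its blocks as a hypothetical model.layers attribute; FP8 training is omitted.

```python
# Minimal sketch of the PyTorch building blocks torchtitan composes:
# FSDP2 sharding plus torch.compile. Not the post's actual training code.
import torch
import torch.nn as nn
# FSDP2 public API in recent PyTorch releases; older releases expose it
# under torch.distributed._composable.fsdp instead.
from torch.distributed.fsdp import fully_shard


def shard_and_compile(model: nn.Module) -> nn.Module:
    # Shard each transformer block individually, then the root module,
    # so parameters are gathered only for the block currently computing.
    # `model.layers` is an illustrative assumption about the model class.
    for block in model.layers:
        fully_shard(block)
    fully_shard(model)
    # torch.compile fuses kernels and reduces per-step Python overhead.
    return torch.compile(model)
```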

Time series forecasting with Amazon SageMaker AutoML

In this blog post, we explore a comprehensive approach to time series forecasting using the Amazon SageMaker AutoMLV2 Software Development Kit (SDK). SageMaker AutoMLV2 is part of the SageMaker Autopilot suite, which automates the end-to-end machine learning workflow from data preparation to model deployment.
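
For orientation, here is a minimal sketch of launching a time series forecasting job through the SageMaker Python SDK's AutoMLV2 interface. The column names, S3 locations, and forecast settings are placeholders rather than values from the post; check the parameter names against your SDK version.

```python
# Sketch: configure and launch an AutoMLV2 time series forecasting job.
import sagemaker
from sagemaker.automl.automlv2 import (
    AutoMLV2,
    AutoMLDataChannel,
    AutoMLTimeSeriesForecastingConfig,
)

session = sagemaker.Session()
config = AutoMLTimeSeriesForecastingConfig(
    forecast_frequency="D",                    # daily data (placeholder)
    forecast_horizon=30,                       # predict 30 periods ahead
    item_identifier_attribute_name="item_id",  # placeholder column names
    target_attribute_name="demand",
    timestamp_attribute_name="timestamp",
    forecast_quantiles=["p10", "p50", "p90"],
)

automl = AutoMLV2(
    problem_config=config,
    base_job_name="ts-forecast-demo",
    role=sagemaker.get_execution_role(),
    sagemaker_session=session,
)
automl.fit(
    inputs=[
        AutoMLDataChannel(
            s3_data_type="S3Prefix",
            s3_uri="s3://my-bucket/train/",  # placeholder dataset location
            channel_type="training",
        )
    ]
)
```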

Automate user on-boarding for financial services with a digital assistant powered by Amazon Bedrock

In this post, we present a solution that uses generative AI to streamline the user onboarding process for financial services through a digital assistant.
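
As a hedged illustration of the kind of call such an assistant makes (not the post's architecture), the snippet below sends an onboarding question to a foundation model through Amazon Bedrock's Converse API with boto3. The model ID, system prompt, and question are placeholders.

```python
# Sketch: ask a Bedrock-hosted model an onboarding question via Converse.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model
    system=[{"text": "You are a digital assistant that guides new banking "
                     "customers through account onboarding."}],
    messages=[{
        "role": "user",
        "content": [{"text": "What documents do I need to open an account?"}],
    }],
)
# Print the model's reply text.
print(response["output"]["message"]["content"][0]["text"])
```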

Create your fashion assistant application using Amazon Titan models and Amazon Bedrock Agents

In this post, we implement a fashion assistant agent using Amazon Bedrock Agents and the Amazon Titan family of models. The fashion assistant provides a personalized, multimodal conversational experience.
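
For a sense of how an application talks to a Bedrock agent, here is a minimal sketch using boto3's bedrock-agent-runtime client. The agent ID, alias ID, and prompt are placeholders, not the fashion assistant's actual configuration.

```python
# Sketch: invoke an existing Bedrock agent and read its streamed reply.
import uuid
import boto3

agents = boto3.client("bedrock-agent-runtime", region_name="us-east-1")
response = agents.invoke_agent(
    agentId="AGENT_ID",             # placeholder
    agentAliasId="AGENT_ALIAS_ID",  # placeholder
    sessionId=str(uuid.uuid4()),
    inputText="Suggest an outfit to match this summer dress.",
)

# The response body is an event stream; concatenate the text chunks.
completion = ""
for event in response["completion"]:
    chunk = event.get("chunk")
    if chunk:
        completion += chunk["bytes"].decode("utf-8")
print(completion)
```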

How Aviva built a scalable, secure, and reliable MLOps platform using Amazon SageMaker

In this post, we describe how Aviva built a fully serverless MLOps platform based on the AWS Enterprise MLOps Framework and Amazon SageMaker to integrate DevOps best practices into the ML lifecycle. This solution establishes MLOps practices to standardize model development, streamline ML model deployment, and provide consistent monitoring.
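
As a simplified sketch of the kind of building block such a platform standardizes (not Aviva's implementation), the snippet below defines and starts a one-step SageMaker Pipeline with the SageMaker Python SDK. The training script, data location, and names are placeholders.

```python
# Sketch: a minimal one-step SageMaker Pipeline for a training job.
import sagemaker
from sagemaker.inputs import TrainingInput
from sagemaker.sklearn.estimator import SKLearn
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import TrainingStep

role = sagemaker.get_execution_role()
estimator = SKLearn(
    entry_point="train.py",        # placeholder training script
    framework_version="1.2-1",
    instance_type="ml.m5.xlarge",
    instance_count=1,
    role=role,
)
train_step = TrainingStep(
    name="TrainModel",
    estimator=estimator,
    inputs={"train": TrainingInput(s3_data="s3://my-bucket/train/")},  # placeholder
)
pipeline = Pipeline(name="mlops-demo-pipeline", steps=[train_step])
pipeline.upsert(role_arn=role)  # create or update the pipeline definition
execution = pipeline.start()
```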