AWS Machine Learning Blog
Category: Artificial Intelligence
Transitioning from Amazon Rekognition people pathing: Exploring alternatives
After careful consideration, we decided to discontinue Rekognition people pathing on October 31, 2025. New customers have not been able to access the capability since October 24, 2024, but existing customers can continue to use it as normal until October 31, 2025. This post discusses an alternative solution to Rekognition people pathing and how you can implement it in your applications.
Unlocking generative AI for enterprises: How SnapLogic powers their low-code Agent Creator using Amazon Bedrock
In this post, we explore how SnapLogic’s Agent Creator uses Amazon Bedrock to provide a low-code platform that enables enterprises to quickly develop and deploy powerful generative AI applications without deep technical expertise.
Fine-tune a BGE embedding model using synthetic data from Amazon Bedrock
In this post, we demonstrate how to use Amazon Bedrock to create synthetic data, fine-tune a BAAI General Embeddings (BGE) model, and deploy it using Amazon SageMaker.
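A common pattern for the synthetic-data step is to ask a Bedrock-hosted model to write a search query that a given passage answers, yielding (query, passage) pairs for contrastive fine-tuning of the embedding model. The sketch below is a minimal illustration, assuming an Anthropic Claude model on Bedrock; the model ID, prompt wording, and helper names are our assumptions, not taken from the post.

```python
import json

# Assumed model ID for illustration; any Bedrock text model could fill this role.
CLAUDE_MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

def build_synthetic_pair_prompt(passage: str) -> str:
    """Ask the model for one search query that the passage answers."""
    return (
        "Write one short search query that the following passage answers. "
        "Return only the query.\n\nPassage:\n" + passage
    )

def build_messages_body(prompt: str, max_tokens: int = 100) -> str:
    """Serialize an Anthropic Messages API request body for Amazon Bedrock."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

def generate_synthetic_query(bedrock_runtime, passage: str) -> str:
    """Invoke the model once to produce a synthetic (query, passage) pair.

    `bedrock_runtime` is a boto3 client for the "bedrock-runtime" service.
    """
    resp = bedrock_runtime.invoke_model(
        modelId=CLAUDE_MODEL_ID,
        body=build_messages_body(build_synthetic_pair_prompt(passage)),
    )
    return json.loads(resp["body"].read())["content"][0]["text"].strip()
```

The resulting pairs can then be written out in the triplet or pair format expected by the BGE fine-tuning scripts before training on SageMaker.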
Boost post-call analytics with Amazon Q in QuickSight
In this post, we show you how to unlock post-call analytics and visualizations, enabling your organization to make data-driven decisions and drive continuous improvement.
Create a next generation chat assistant with Amazon Bedrock, Amazon Connect, Amazon Lex, LangChain, and WhatsApp
In this post, we demonstrate how to deploy a contextual AI assistant. We build a solution that provides users with a familiar and convenient interface using Amazon Bedrock Knowledge Bases, Amazon Lex, and Amazon Connect, with WhatsApp as the channel.
Generative AI foundation model training on Amazon SageMaker
In this post, we explore how organizations can cost-effectively customize and adapt FMs using AWS managed services such as Amazon SageMaker training jobs and Amazon SageMaker HyperPod. We discuss how these tools help organizations optimize compute resources and reduce the complexity of model training and fine-tuning, and we explain how to make an informed decision about which Amazon SageMaker service best fits your business needs and requirements.
Automate fine-tuning of Llama 3.x models with the new visual designer for Amazon SageMaker Pipelines
In this post, we show you how to set up an automated LLM customization (fine-tuning) workflow so that the Llama 3.x models from Meta can provide high-quality summaries of SEC filings for financial applications. Fine-tuning allows you to configure LLMs to achieve improved performance on your domain-specific tasks.
Implement Amazon SageMaker domain cross-Region disaster recovery using custom Amazon EFS instances
In this post, we guide you through a step-by-step process to seamlessly migrate and safeguard your SageMaker domain from an active Region to a passive or active Region, including all associated user profiles and files.
Amazon Bedrock Custom Model Import now generally available
We’re pleased to announce the general availability (GA) of Amazon Bedrock Custom Model Import. This feature empowers customers to import and use their customized models alongside existing foundation models (FMs) through a single, unified API.
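With Custom Model Import, an imported model is addressed through the same `InvokeModel` API as the built-in foundation models, using the imported model's ARN as the model ID. The sketch below illustrates this, assuming a boto3 `bedrock-runtime` client; the ARN is a placeholder, and the request body schema depends on the imported model's family, so treat the payload fields as assumptions.

```python
import json

# Placeholder ARN for illustration; substitute the ARN returned after your import completes.
IMPORTED_MODEL_ARN = "arn:aws:bedrock:us-east-1:123456789012:imported-model/EXAMPLE"

def build_invoke_body(prompt: str, max_tokens: int = 256) -> str:
    """Serialize a simple prompt payload.

    The exact field names depend on the imported model's architecture
    (e.g., Llama-style models accept a "prompt" field); adjust as needed.
    """
    return json.dumps({"prompt": prompt, "max_tokens": max_tokens})

def invoke_imported_model(bedrock_runtime, prompt: str) -> dict:
    """Call the imported custom model via the same unified InvokeModel API."""
    resp = bedrock_runtime.invoke_model(
        modelId=IMPORTED_MODEL_ARN,
        body=build_invoke_body(prompt),
    )
    return json.loads(resp["body"].read())
```

Because the invocation path is identical to that of built-in FMs, application code that already calls Bedrock needs only a model-ID change to route traffic to the imported model.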
Deploy a serverless web application to edit images using Amazon Bedrock
In this post, we explore a sample solution that you can use to deploy an image editing application by using AWS serverless services and generative AI services. We use Amazon Bedrock and an Amazon Titan FM that allows you to edit images by using prompts.
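Prompt-based editing with the Titan Image Generator typically uses an inpainting request: you supply the source image, a text prompt describing the edit, and a mask prompt naming the region to change. The sketch below is a minimal illustration of building and sending such a request, to the best of our understanding of the Titan request schema; field names and the model ID should be checked against the current Bedrock documentation.

```python
import base64
import json

def build_inpainting_body(image_bytes: bytes, edit_prompt: str, mask_prompt: str) -> str:
    """Build a Titan Image Generator inpainting request body.

    Field names reflect our understanding of the Titan API and are not
    taken from the post itself.
    """
    return json.dumps({
        "taskType": "INPAINTING",
        "inPaintingParams": {
            "image": base64.b64encode(image_bytes).decode("utf-8"),
            "text": edit_prompt,        # what to render in the masked region
            "maskPrompt": mask_prompt,  # natural-language description of the region to edit
        },
        "imageGenerationConfig": {"numberOfImages": 1, "cfgScale": 8.0},
    })

def edit_image(bedrock_runtime, image_bytes: bytes, edit_prompt: str, mask_prompt: str) -> bytes:
    """Invoke Titan on Bedrock and decode the first returned image."""
    resp = bedrock_runtime.invoke_model(
        modelId="amazon.titan-image-generator-v1",
        body=build_inpainting_body(image_bytes, edit_prompt, mask_prompt),
    )
    payload = json.loads(resp["body"].read())
    return base64.b64decode(payload["images"][0])
```

In a serverless deployment, a function like `edit_image` would sit behind an AWS Lambda handler invoked from the web frontend.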