AWS Machine Learning Blog

Category: Amazon Machine Learning

From RAG to fabric: Lessons learned from building real-world RAGs at GenAIIC – Part 2

This post focuses on building RAG solutions over heterogeneous data formats. We first introduce routers and how they can help manage diverse data sources. We then give tips on handling tabular data, and conclude with multimodal RAG, focusing specifically on solutions that handle both text and image data.
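
As a taste of the routing idea, here is a minimal sketch of an LLM-based router built on the Amazon Bedrock Converse API; the model ID and source names are illustrative placeholders, not the setup used in the post.

```python
import boto3

# Minimal sketch of an LLM-based router for heterogeneous RAG sources.
# The model ID and source names below are illustrative assumptions.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

SOURCES = ["text_documents", "sql_tables", "image_collection"]

def route_query(question: str) -> str:
    """Ask the model which data source is best suited to answer the question."""
    prompt = (
        "Pick the single best data source for the question below.\n"
        f"Sources: {', '.join(SOURCES)}\n"
        f"Question: {question}\n"
        "Answer with only the source name."
    )
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 20, "temperature": 0},
    )
    return response["output"]["message"]["content"][0]["text"].strip()

# route_query("What was Q3 revenue by region?") would likely return "sql_tables"
```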

How GoDaddy built Lighthouse, an interaction analytics solution to generate insights on support interactions using Amazon Bedrock

In this post, we discuss how GoDaddy’s Care & Services team, in close collaboration with the AWS GenAI Labs team, built Lighthouse—a generative AI solution powered by Amazon Bedrock. Amazon Bedrock is a fully managed service that makes foundation models (FMs) from leading AI startups and Amazon available through an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case. With Amazon Bedrock, GoDaddy’s Lighthouse mines insights from customer care interactions using crafted prompts to identify top call drivers and reduce friction points in customers’ product and website experiences, leading to an improved customer experience.
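
For illustration, the snippet below sketches how a crafted prompt might be sent to a Bedrock model to surface call drivers from a support transcript; the model ID and prompt wording are assumptions, not Lighthouse's actual implementation.

```python
import json
import boto3

# Illustrative sketch only: extracting call drivers from a transcript with a
# crafted prompt on Amazon Bedrock. Model ID and prompt text are assumptions.
bedrock = boto3.client("bedrock-runtime")

def summarize_call_drivers(transcript: str) -> str:
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{
            "role": "user",
            "content": [{
                "type": "text",
                "text": "List the top call drivers and friction points in this "
                        "support transcript:\n\n" + transcript,
            }],
        }],
    }
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # example model
        body=json.dumps(body),
    )
    return json.loads(response["body"].read())["content"][0]["text"]
```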

Principal Financial Group uses QnABot on AWS and Amazon Q Business to enhance workforce productivity with generative AI

In this post, we explore how Principal used QnABot paired with Amazon Q Business and Amazon Bedrock to create Principal AI Generative Experience: a user-friendly, secure internal chatbot for faster access to information. Using generative AI, Principal’s employees can now focus on decisions that require deeper human judgment, instead of manually scouring data sources for answers.

Governing ML lifecycle at scale: Best practices to set up cost and usage visibility of ML workloads in multi-account environments

Cloud costs can significantly impact your business operations, so gaining real-time visibility into infrastructure expenses, usage patterns, and cost drivers is essential. Allocating those costs to cloud resources requires a consistent tagging strategy. This post outlines steps you can take to implement a comprehensive tagging governance strategy across accounts, using AWS tools and services that provide visibility and control. By setting up automated policy enforcement and checks, you can achieve cost optimization across your machine learning (ML) environment.
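
As a starting point, the sketch below shows one way to attach cost-allocation tags to a SageMaker resource and activate them in Cost Explorer with boto3; the tag keys, values, and ARN are placeholders rather than a prescribed scheme.

```python
import boto3

# Minimal sketch of tagging an ML resource for cost allocation.
# Tag keys, values, and the resource ARN below are placeholders.
sagemaker = boto3.client("sagemaker")
ce = boto3.client("ce")

COST_TAGS = [
    {"Key": "CostCenter", "Value": "ml-platform"},
    {"Key": "Team", "Value": "fraud-detection"},
    {"Key": "Environment", "Value": "dev"},
]

# Attach the tags to an existing SageMaker resource (for example, an endpoint).
sagemaker.add_tags(
    ResourceArn="arn:aws:sagemaker:us-east-1:111122223333:endpoint/my-endpoint",
    Tags=COST_TAGS,
)

# Activate the keys as cost allocation tags so they appear in Cost Explorer and
# AWS Cost and Usage Reports (run from the management/payer account).
ce.update_cost_allocation_tags_status(
    CostAllocationTagsStatus=[
        {"TagKey": tag["Key"], "Status": "Active"} for tag in COST_TAGS
    ]
)
```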

Automate invoice processing with Streamlit and Amazon Bedrock

In this post, we walk through a step-by-step guide to automating invoice processing using Streamlit and Amazon Bedrock, addressing the challenge of handling invoices from multiple vendors with different formats. We show how to set up the environment, process invoices stored in Amazon S3, and deploy a user-friendly Streamlit application to review and interact with the processed data.
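
To give a sense of the workflow, here is a condensed sketch of a Streamlit review page that lists invoices from Amazon S3 and asks a Bedrock model to extract fields; the bucket name, model ID, and plain-text invoice assumption are simplifications, not the post's full solution.

```python
import boto3
import streamlit as st

# Condensed sketch of the review app. Bucket name, model ID, and the assumption
# that invoices are stored as plain text are illustrative simplifications.
s3 = boto3.client("s3")
bedrock = boto3.client("bedrock-runtime")
BUCKET = "my-invoice-bucket"  # hypothetical bucket

st.title("Invoice review")

keys = [obj["Key"] for obj in s3.list_objects_v2(Bucket=BUCKET).get("Contents", [])]
selected = st.selectbox("Choose an invoice", keys)

if selected and st.button("Extract fields"):
    invoice_text = s3.get_object(Bucket=BUCKET, Key=selected)["Body"].read().decode()
    response = bedrock.converse(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # example model
        messages=[{
            "role": "user",
            "content": [{"text": "Extract vendor, date, line items, and total as JSON:\n"
                                 + invoice_text}],
        }],
    )
    st.write(response["output"]["message"]["content"][0]["text"])
```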

Centralize model governance with SageMaker Model Registry Resource Access Manager sharing

We recently announced the general availability of cross-account sharing of Amazon SageMaker Model Registry using AWS Resource Access Manager (AWS RAM), making it easier to securely share and discover machine learning (ML) models across your AWS accounts. In this post, we show you how to use this new cross-account model sharing feature to build your own centralized model governance capability, which is often needed for centralized model approval, deployment, auditing, and monitoring workflows.
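
For orientation, the snippet below sketches how a model package group might be shared with another account through AWS RAM using boto3; the share name, ARN, and account ID are placeholders.

```python
import boto3

# Minimal sketch of sharing a SageMaker model package group across accounts with
# AWS RAM. Share name, ARN, and account ID are placeholders; this assumes both
# accounts belong to the same AWS Organization.
ram = boto3.client("ram")

share = ram.create_resource_share(
    name="central-model-registry-share",
    resourceArns=[
        "arn:aws:sagemaker:us-east-1:111122223333:model-package-group/fraud-models"
    ],
    principals=["444455556666"],  # consumer account ID
    allowExternalPrincipals=False,
)
print(share["resourceShare"]["resourceShareArn"])
```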

Revolutionize trip planning with Amazon Bedrock and Amazon Location Service

In this post, we show you how to build a generative AI-powered trip-planning service that revolutionizes the way travelers discover and explore destinations. By using advanced AI technology and Amazon Location Service, the trip planner lets users translate inspiration into personalized travel itineraries. This innovative service goes beyond traditional trip planning methods, offering real-time interaction through a chat-based interface and maintaining scalability, reliability, and data security through AWS native services.
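
As a simplified illustration, the sketch below geocodes a destination with Amazon Location Service and asks a Bedrock model for an itinerary; the place index name and model ID are assumptions rather than the service's actual design.

```python
import boto3

# Illustrative sketch: geocode a destination with Amazon Location Service, then
# ask a Bedrock model for an itinerary. Index name and model ID are assumptions.
location = boto3.client("location")
bedrock = boto3.client("bedrock-runtime")

def plan_trip(destination: str, days: int = 3) -> str:
    place = location.search_place_index_for_text(
        IndexName="MyPlaceIndex",  # hypothetical place index
        Text=destination,
        MaxResults=1,
    )["Results"][0]["Place"]
    prompt = (
        f"Create a {days}-day itinerary for {place['Label']} "
        f"(coordinates {place['Geometry']['Point']}). Suggest one activity per half day."
    )
    response = bedrock.converse(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # example model
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]
```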

Understanding prompt engineering: Unlock the creative potential of Stability AI models on AWS

Stability AI’s Stable Diffusion 3.5 Large (SD3.5L), newly launched on Amazon SageMaker JumpStart, improves image generation, human anatomy rendering, and typography by producing more diverse outputs and adhering more closely to user prompts, making it a significant upgrade over its predecessor. In this post, we explore advanced prompt engineering techniques that can enhance the performance of these models and facilitate the creation of compelling imagery through text-to-image transformations.
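
As a quick example of prompt structure, the sketch below sends a detailed prompt and negative prompt to a hypothetical SD3.5L SageMaker endpoint; the endpoint name and payload schema are assumptions, so check the JumpStart model card for the exact request format.

```python
import json
import boto3

# Sketch of sending a structured text-to-image prompt to an SD3.5L endpoint
# deployed from SageMaker JumpStart. Endpoint name and payload schema are
# assumptions; consult the JumpStart model card for the actual request format.
runtime = boto3.client("sagemaker-runtime")

prompt = (
    "Cinematic photo of a lighthouse at dawn, volumetric fog, golden hour lighting, "
    "35mm film grain, ultra-detailed"
)
negative_prompt = "blurry, distorted anatomy, watermark, text artifacts"

payload = {"prompt": prompt, "negative_prompt": negative_prompt, "seed": 42}

response = runtime.invoke_endpoint(
    EndpointName="sd35-large-endpoint",  # hypothetical endpoint
    ContentType="application/json",
    Body=json.dumps(payload),
)
result = json.loads(response["Body"].read())  # typically a base64-encoded image
```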