AWS Machine Learning Blog
Category: Generative AI
Reinvent personalization with generative AI on Amazon Bedrock using task decomposition for agentic workflows
In this post, we present an automated solution that delivers a consistent and responsible personalization experience for your customers by using smaller LLMs for website personalization tailored to businesses and industries. The solution decomposes the complex task into subtasks handled by task- and domain-adapted LLMs, adhering to company guidelines and human expertise.
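The decomposition pattern can be sketched as a simple dispatcher that fans one complex request out to focused subtasks, each backed by its own smaller model. This is a minimal sketch under stated assumptions: the subtask names, model IDs, and stub handlers are illustrative stand-ins, not the exact design from the post; in practice each handler would invoke a task- or domain-adapted model on Amazon Bedrock.

```python
# A minimal sketch of task decomposition for an agentic personalization
# workflow. Subtask names, model IDs, and handlers are hypothetical;
# real handlers would call smaller, domain-adapted LLMs on Amazon Bedrock.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Subtask:
    name: str
    model_id: str                    # illustrative placeholder model ID
    handler: Callable[[str], str]    # stub standing in for a model invocation


def decompose(request: str) -> list[Subtask]:
    """Split one complex personalization request into focused subtasks."""
    return [
        Subtask("extract_profile", "small-model-a", lambda r: f"profile({r})"),
        Subtask("draft_copy", "small-model-b", lambda r: f"copy({r})"),
        Subtask("check_guidelines", "small-model-c", lambda r: f"check({r})"),
    ]


def run_workflow(request: str) -> dict[str, str]:
    # Each subtask is handled by its own task/domain-adapted model;
    # a guideline-checking subtask keeps outputs within company policy.
    return {t.name: t.handler(request) for t in decompose(request)}
```

The guideline-check subtask is what lets human expertise and company policy gate the final output, rather than trusting a single large model end to end.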
Build RAG-based generative AI applications in AWS using Amazon FSx for NetApp ONTAP with Amazon Bedrock
In this post, we demonstrate a solution using Amazon FSx for NetApp ONTAP with Amazon Bedrock to provide a RAG experience for your generative AI applications on AWS by bringing company-specific, unstructured user file data to Amazon Bedrock in a straightforward, fast, and secure way.
Improve RAG performance using Cohere Rerank
In this post, we show you how to use Cohere Rerank to improve search efficiency and accuracy in Retrieval Augmented Generation (RAG) systems.
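The reranking stage slots in between retrieval and generation: the retriever returns a broad candidate set, and the reranker reorders it by relevance before the top passages reach the LLM. The sketch below is an assumption-laden illustration of that pipeline position only; the keyword-overlap `score` function is a placeholder for a call to a real reranker such as Cohere Rerank, not its actual scoring model.

```python
# Where reranking fits in a RAG pipeline. The `score` heuristic below is
# a purely illustrative placeholder for a reranker such as Cohere Rerank.

def score(query: str, passage: str) -> float:
    """Placeholder relevance score: fraction of query words in the passage."""
    q = set(query.lower().split())
    p = set(passage.lower().split())
    return len(q & p) / max(len(q), 1)


def rerank(query: str, passages: list[str], top_n: int = 2) -> list[str]:
    """Reorder the retriever's candidates by relevance; keep the best top_n."""
    return sorted(passages, key=lambda p: score(query, p), reverse=True)[:top_n]
```

Passing only the reranked `top_n` passages to the generator is what improves both efficiency (a shorter context) and accuracy (less distracting, low-relevance text).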
Unlock AWS Cost and Usage insights with generative AI powered by Amazon Bedrock
In this post, we explore a solution that uses generative artificial intelligence (AI) to generate a SQL query from a user’s question in natural language. This solution simplifies the process of querying Cost and Usage Report (CUR) data stored in an Amazon Athena database by generating the SQL query, running it on Athena, and presenting the results on a web portal for ease of understanding.
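The first step of such a solution is assembling the text-to-SQL prompt that the model receives. The helper below is a minimal sketch, assuming a hypothetical table name and column list standing in for a real CUR schema in Athena; the exact prompt wording and schema from the post will differ.

```python
# A minimal sketch of building a text-to-SQL prompt for a model on
# Amazon Bedrock. Table and column names are hypothetical stand-ins
# for a real AWS Cost and Usage Report schema in Athena.

def build_sql_prompt(question: str, table: str, columns: list[str]) -> str:
    """Combine the user's natural-language question with the table schema."""
    schema = ", ".join(columns)
    return (
        "You write Athena-compatible SQL.\n"
        f"Table {table} has columns: {schema}.\n"
        f"Question: {question}\n"
        "Return only the SQL query."
    )
```

Grounding the prompt in the actual table schema is what keeps the generated SQL runnable on Athena instead of referencing columns that don't exist.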
Build ultra-low latency multimodal generative AI applications using sticky session routing in Amazon SageMaker
In this post, we explain how the new sticky session routing feature in Amazon SageMaker helps you achieve ultra-low latency and enhance your end-user experience when serving multimodal models.
Build a RAG-based QnA application using Llama3 models from SageMaker JumpStart
In this post, we provide a step-by-step guide for creating an enterprise-ready RAG application, such as a question answering bot. We use the Meta Llama 3 8B foundation model for text generation and the BGE Large EN v1.5 model for generating text embeddings, both from Amazon SageMaker JumpStart.
Best prompting practices for using Meta Llama 3 with Amazon SageMaker JumpStart
In this post, we dive into the best practices and techniques for prompting Meta Llama 3 using Amazon SageMaker JumpStart to generate high-quality, relevant outputs. We discuss how to use system prompts and few-shot examples, and how to optimize inference parameters, so you can get the most out of Meta Llama 3.
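A system prompt and few-shot examples can be combined by rendering them as prior conversation turns in the Meta Llama 3 chat template. The sketch below uses the special tokens from Meta's published prompt format; the example turns themselves are illustrative, and in practice you would tune inference parameters (temperature, top_p, max tokens) alongside the prompt.

```python
# A sketch of the Meta Llama 3 chat template with a system prompt and
# few-shot examples rendered as prior user/assistant turns. Special tokens
# follow Meta's published format; the example content is illustrative.

def llama3_prompt(system: str, shots: list[tuple[str, str]], user: str) -> str:
    def turn(role: str, text: str) -> str:
        return f"<|start_header_id|>{role}<|end_header_id|>\n\n{text}<|eot_id|>"

    parts = ["<|begin_of_text|>", turn("system", system)]
    for question, answer in shots:        # few-shot examples as prior turns
        parts.append(turn("user", question))
        parts.append(turn("assistant", answer))
    parts.append(turn("user", user))
    # Open the assistant header so the model continues from here.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)
```

Ending the prompt with an open assistant header is what cues the model to generate the next reply in the demonstrated style.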
How healthcare payers and plans can empower members with generative AI
In this post, we discuss how generative artificial intelligence (AI) can help health insurance plan members get the information they need. The solution presented in this post not only enhances the member experience by providing a more intuitive and user-friendly interface, but also has the potential to reduce call volumes and operational costs for healthcare payers and plans.
Generative AI-powered technology operations
In this post, we describe how AWS generative AI solutions (including Amazon Bedrock, Amazon Q Developer, and Amazon Q Business) can further enhance TechOps productivity, reduce time to resolve issues, enhance customer experience, standardize operating procedures, and augment knowledge bases.
Enabling complex generative AI applications with Amazon Bedrock Agents
In this post, we take a closer look at Amazon Bedrock Agents. They empower you to build intelligent and context-aware generative AI applications, streamlining complex workflows and delivering natural, conversational user experiences.