AWS Machine Learning Blog

Category: Artificial Intelligence

A guide to Amazon Bedrock Model Distillation (preview)

This post introduces the workflow of Amazon Bedrock Model Distillation. We first cover the general concept of model distillation in Amazon Bedrock, and then focus on the key steps, including setting up permissions, selecting the models, providing the input dataset, running the model distillation jobs, and evaluating and deploying the student models after distillation.

Build generative AI applications quickly with Amazon Bedrock IDE in Amazon SageMaker Unified Studio

In this post, we’ll show how anyone in your company can use Amazon Bedrock IDE to quickly create a generative AI chat agent application that analyzes sales performance data. Through simple conversations, business teams can use the chat agent to extract valuable insights from both structured and unstructured data sources without writing code or managing complex data pipelines.

Scale ML workflows with Amazon SageMaker Studio and Amazon SageMaker HyperPod

The integration of Amazon SageMaker Studio and Amazon SageMaker HyperPod offers a streamlined solution that provides data scientists and ML engineers with a comprehensive environment that supports the entire ML lifecycle, from development to deployment at scale. In this post, we walk you through the process of scaling your ML workloads using SageMaker Studio and SageMaker HyperPod.

Introducing Amazon Kendra GenAI Index – Enhanced semantic search and retrieval capabilities

Amazon has introduced the Amazon Kendra GenAI Index, a new offering designed to enhance semantic search and retrieval capabilities for enterprise AI applications. This index is optimized for Retrieval Augmented Generation (RAG) and intelligent search, allowing businesses to build more effective digital assistants and search experiences.

Building Generative AI and ML solutions faster with AI apps from AWS partners using Amazon SageMaker

Today, we’re excited to announce that AI apps from AWS Partners are now available in SageMaker. You can now find, deploy, and use these AI apps privately and securely, all without leaving SageMaker AI, so you can develop performant AI models faster.

Elevate customer experience by using the Amazon Q Business custom plugin for New Relic AI

The New Relic AI custom plugin for Amazon Q Business creates a unified solution that combines New Relic AI’s observability insights and recommendations with the Retrieval Augmented Generation (RAG) capabilities of Amazon Q Business, in a natural language interface for ease of use. This post explores the use case, how the custom plugin works, how to enable it, and how it can help elevate customers’ digital experiences.

Amazon SageMaker launches the updated inference optimization toolkit for generative AI

Today, Amazon SageMaker is excited to announce updates to the inference optimization toolkit, providing new functionality and enhancements to help you optimize generative AI models even faster. In this post, we discuss these new features of the toolkit in more detail.

Syngenta develops a generative AI assistant to support sales representatives using Amazon Bedrock Agents

In this post, we explore how Syngenta collaborated with AWS to develop Cropwise AI, a generative AI assistant powered by Amazon Bedrock Agents that helps sales representatives make better seed product recommendations to farmers across North America. The solution transforms the seed selection process by simplifying complex data into natural conversations, providing quick access to detailed seed product information, and enabling personalized recommendations at scale through a mobile app interface.

Speed up your AI inference workloads with new NVIDIA-powered capabilities in Amazon SageMaker

At re:Invent 2024, we are excited to announce new capabilities to speed up your AI inference workloads with NVIDIA accelerated computing and software offerings on Amazon SageMaker. In this post, we explore how you can use these new capabilities to enhance your AI inference on Amazon SageMaker. We walk through the process of deploying NVIDIA NIM microservices from AWS Marketplace for SageMaker Inference, then dive into NVIDIA’s model offerings on SageMaker JumpStart, showcasing how to access and deploy the Nemotron-4 model directly in the JumpStart interface. This includes step-by-step instructions on how to find the Nemotron-4 model in the JumpStart catalog, select it for your use case, and deploy it with a few clicks.

Unlock cost savings with the new scale down to zero feature in SageMaker Inference

Today at AWS re:Invent 2024, we are excited to announce a new feature for Amazon SageMaker inference endpoints: the ability to scale SageMaker inference endpoints to zero instances. This long-awaited capability is a game changer for our customers using the power of AI and machine learning (ML) inference in the cloud.