Back to Basics Video Series
Basic architectural building blocks and best practices
Watch | Back to Basics: Best Practices for Selecting Inference Options to Deploy SageMaker ML Models
Learn how to choose the best Amazon SageMaker inference option for deploying your machine learning models based on requirements such as latency, throughput, payload size, and traffic patterns. Using a real-world fraud detection example, we walk through how to set up a SageMaker Real-Time Inference endpoint, make requests, and get predictions in real time to meet low-latency, high-throughput needs.
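The request flow described above can be sketched with boto3's SageMaker Runtime client. This is a minimal illustration, not the video's exact code: the endpoint name, the transaction fields, and the JSON payload schema are placeholder assumptions, and an actual call requires AWS credentials and a deployed endpoint.

```python
import json


def build_request(transaction):
    """Serialize a transaction record into a JSON request body.

    The field names are hypothetical; a real endpoint expects whatever
    schema the deployed model was trained to accept.
    """
    return json.dumps(transaction)


def predict_fraud(endpoint_name, transaction):
    """Invoke a SageMaker real-time endpoint and return the parsed prediction.

    Requires AWS credentials and an in-service endpoint named
    `endpoint_name` (placeholder).
    """
    import boto3  # deferred import so build_request works without boto3 installed

    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Body=build_request(transaction),
    )
    # The response Body is a streaming object; read and decode it.
    return json.loads(response["Body"].read().decode("utf-8"))
```

For real-time use cases like fraud detection, the low per-request latency of a Real-Time Inference endpoint is the usual fit; for large payloads or intermittent traffic, Asynchronous Inference or Serverless Inference may be better matches.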