AWS Machine Learning Blog
Amazon SageMaker price reductions: Up to 18% lower prices on ml.p3 and ml.p2 instances
Effective October 1st, 2020, we’re reducing the prices for ml.p3 and ml.p2 instances in Amazon SageMaker by up to 18% so you can maximize your machine learning (ML) budgets and innovate with deep learning using these accelerated compute instances. The new price reductions apply to ml.p3 and ml.p2 instances of all sizes for Amazon SageMaker Studio notebooks, on-demand notebooks, processing, training, real-time inference, and batch transform.
Customers including Intuit, Thomson Reuters, Cerner, and Zalando are already reducing their total cost of ownership (TCO) by at least 50% using Amazon SageMaker. Amazon SageMaker removes the heavy lifting from each step of the ML process and makes it easy to apply advanced deep learning techniques at scale. Amazon SageMaker provides lower TCO because it’s a fully managed service, so you don’t need to build, manage, or maintain any infrastructure and tooling for your ML workloads. Amazon SageMaker also has built-in security and compliance capabilities including end-to-end encryption, private network connectivity, AWS Identity and Access Management (IAM)-based access controls, and monitoring so you don’t have to build and maintain these capabilities, saving you time and cost.
We designed Amazon SageMaker to offer cost savings at each step of the ML workflow. For example, Amazon SageMaker Ground Truth customers are saving up to 70% in data labeling costs. When it’s time for model building, many cost optimizations are also built in. For example, you can use Amazon SageMaker Studio notebooks, which let you change instances on the fly to scale compute up and down as your demand changes and optimize costs.
When training ML models, you can take advantage of Amazon SageMaker Managed Spot Training, which uses spare compute capacity to save up to 90% in training costs. See how Cinnamon AI saved 70% in training costs with Managed Spot Training.
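The following is a minimal sketch of how Managed Spot Training can be enabled with the SageMaker Python SDK; the container image URI, IAM role, and S3 paths are placeholders you would replace with your own, and the time limits are illustrative only.

```python
# Minimal sketch: enabling Managed Spot Training in the SageMaker Python SDK.
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()

estimator = Estimator(
    image_uri="<your-training-image-uri>",   # placeholder: your algorithm container
    role="<your-sagemaker-execution-role>",  # placeholder: IAM role ARN
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    use_spot_instances=True,   # run training on spare capacity at reduced cost
    max_run=3600,              # cap on actual training time (seconds)
    max_wait=7200,             # cap on training time plus time spent waiting for Spot capacity
    sagemaker_session=session,
)

# estimator.fit({"train": "s3://<your-bucket>/train/"})
```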
In addition, Amazon SageMaker Automatic Model Tuning uses ML to find the best model based on your objectives, which reduces the time needed to get to high-quality models. See how Infobox is using Amazon SageMaker Automatic Model Tuning to scale while also improving model accuracy by 96.9%.
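As a rough sketch of what Automatic Model Tuning looks like in the SageMaker Python SDK, the following reuses an estimator like the one above; the hyperparameter ranges, objective metric name, and regex are illustrative placeholders for whatever your training script actually emits.

```python
# Minimal sketch: launching a hyperparameter tuning job with the SageMaker Python SDK.
from sagemaker.tuner import HyperparameterTuner, ContinuousParameter, IntegerParameter

tuner = HyperparameterTuner(
    estimator=estimator,                           # an Estimator such as the one above
    objective_metric_name="validation:accuracy",   # placeholder metric name
    hyperparameter_ranges={
        "learning_rate": ContinuousParameter(1e-5, 1e-1),
        "batch_size": IntegerParameter(32, 256),
    },
    metric_definitions=[{
        "Name": "validation:accuracy",
        "Regex": "validation accuracy: ([0-9\\.]+)",  # placeholder: match your training logs
    }],
    max_jobs=20,          # total training jobs to launch
    max_parallel_jobs=2,  # jobs run concurrently
)

# tuner.fit({"train": "s3://<your-bucket>/train/"})
```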
When it’s time to deploy ML models in production, Amazon SageMaker multi-model endpoints (MME) enable you to deploy from tens to tens of thousands of models on a single endpoint to reduce model deployment costs and scale ML deployments. For more information, see Save on inference costs by using Amazon SageMaker multi-model endpoints.
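A minimal sketch of a multi-model endpoint with the SageMaker Python SDK follows; it assumes a container image that supports multi-model mode, and the names, S3 prefix, role, and model artifact file are placeholders.

```python
# Minimal sketch: hosting many models behind one endpoint with MultiDataModel.
from sagemaker.multidatamodel import MultiDataModel

mme = MultiDataModel(
    name="my-multi-model-endpoint",                  # placeholder model/endpoint name
    model_data_prefix="s3://<your-bucket>/models/",  # S3 prefix holding many model artifacts
    image_uri="<multi-model-capable-image-uri>",     # placeholder container image
    role="<your-sagemaker-execution-role>",          # placeholder IAM role ARN
)

predictor = mme.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
)

# Individual models under the prefix are loaded on demand at invocation time:
# predictor.predict(data, target_model="model-a.tar.gz")  # placeholder artifact name
```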
Also, when you run data processing jobs on Amazon SageMaker Processing, model training on Amazon SageMaker Training, or offline inference with batch transform, you don’t need to manage any clusters or worry about keeping instances highly utilized, and you only pay for the compute resources for the duration of the jobs.
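For example, here is a minimal sketch of offline inference with batch transform via the SageMaker Python SDK; it assumes a trained estimator like the one sketched earlier, and the S3 paths and content type are placeholders.

```python
# Minimal sketch: running offline inference as a batch transform job.
transformer = estimator.transformer(
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    output_path="s3://<your-bucket>/batch-output/",
)

transformer.transform(
    data="s3://<your-bucket>/batch-input/",
    content_type="text/csv",   # placeholder input format
    split_type="Line",
)
transformer.wait()
# Instances are provisioned for the job and released when it finishes,
# so you only pay for the duration of the transform job.
```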
Price reductions for ml.p3 and ml.p2 instances, optimized for deep learning
Customers are increasingly adopting deep learning techniques to accelerate their ML workloads. Amazon SageMaker offers built-in implementations of the most popular deep learning algorithms, such as object detection, image classification, semantic segmentation, and deep graph networks, in addition to the most popular ML frameworks such as TensorFlow, MXNet, and PyTorch. Whether you run single-node or distributed training, you can use Amazon SageMaker Debugger to identify complex issues developing in ML training jobs and use Managed Spot Training to lower deep learning costs by up to 90%.
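The following is a minimal sketch of attaching a built-in Amazon SageMaker Debugger rule to a training job so issues like a stalled loss are flagged automatically; the container image URI, role, and data path are placeholders.

```python
# Minimal sketch: attaching a built-in Debugger rule to a training job.
from sagemaker.debugger import Rule, rule_configs
from sagemaker.estimator import Estimator

estimator = Estimator(
    image_uri="<your-training-image-uri>",   # placeholder training container
    role="<your-sagemaker-execution-role>",  # placeholder IAM role ARN
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    rules=[
        # Built-in rule that flags a training loss that stops decreasing
        Rule.sagemaker(rule_configs.loss_not_decreasing()),
    ],
)

# estimator.fit({"train": "s3://<your-bucket>/train/"})
```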
Amazon SageMaker offers the best-in-class ml.p3 and ml.p2 instances for accelerated compute, which can significantly accelerate deep learning applications and reduce training and processing times from days to minutes. The ml.p3 instances offer up to eight of the most powerful GPUs available in the cloud, with up to 64 vCPUs, 488 GB of RAM, and 25 Gbps of networking throughput. The ml.p3dn.24xlarge instances provide up to 100 Gbps of networking throughput, significantly improving the throughput and scalability of deep learning training jobs, which leads to faster results.
Effective October 1st, 2020, we’re reducing prices by up to 18% on all ml.p3 and ml.p2 instances in Amazon SageMaker, making them an even more cost-effective solution to meet your ML and deep learning needs. The new price reductions apply to ml.p3 and ml.p2 instances of all sizes for Amazon SageMaker Studio notebooks, on-demand notebooks, processing, training, real-time inference, and batch transform.
The price reductions for the specific instance types are as follows:
| Instance Type | Price Reduction |
| --- | --- |
| ml.p2.xlarge | 11% |
| ml.p2.8xlarge | 14% |
| ml.p2.16xlarge | 18% |
| ml.p3.2xlarge | 11% |
| ml.p3.8xlarge | 14% |
| ml.p3.16xlarge | 18% |
| ml.p3dn.24xlarge | 18% |
The price reductions are available in the following AWS Regions:
- US East (Ohio)
- US East (N. Virginia)
- US West (Oregon)
- Asia Pacific (Singapore)
- Asia Pacific (Sydney)
- Asia Pacific (Seoul)
- Asia Pacific (Tokyo)
- Asia Pacific (Mumbai)
- Canada (Central)
- EU (Frankfurt)
- EU (Ireland)
- EU (London)
- AWS GovCloud (US-West)
Conclusion
We’re very excited to make ML more cost-effective and accessible. For the latest pricing for these instances in each Region, see Amazon SageMaker Pricing.
About the Author
Urvashi Chowdhary is a Principal Product Manager for Amazon SageMaker. She is passionate about working with customers and making machine learning more accessible. In her spare time, she loves sailing, paddle boarding, and kayaking.