AWS Trainium Customers
See how customers are using AWS Trainium to build, train, and fine-tune deep learning models.
Anthropic
Performance and scale aren't just technical requirements; they're essential to achieving our mission. That's why we partnered with AWS as our primary cloud provider to build Project Rainier, one of the world's most powerful operational AI supercomputers. With almost a million Trainium2 chips training and serving Claude today, we're excited about Trainium3 and expect to continue to scale Claude well beyond what we've built with Project Rainier, pushing the boundaries of what's possible in AI.
James Bradbury, Head of Compute, Anthropic
poolside
Our partnership with AWS gives us both. Trainium allows our customers to scale their usage of poolside at a price-performance ratio unmatched by other AI accelerators. And Trainium's upcoming native PyTorch and vLLM support will unlock even more innovation and flexibility for Trainium users, including poolside. Above all, AWS's customer focus shines through, and AWS was able to quickly iterate and use our feedback to adapt Trainium to our needs. We look forward to deepening our collaboration on all aspects of Trainium.
Joe Rowell, Founding Engineer, poolside
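For readers curious what the vLLM path mentioned above looks like in practice, here is a minimal offline-inference sketch using vLLM's Neuron backend. It is an illustration, not poolside's setup: the model ID and sizing flags are assumptions, and exact arguments vary by vLLM version.

```python
# Minimal sketch: offline inference with vLLM's AWS Neuron backend.
# The model ID and sizing flags are illustrative, not poolside's setup;
# exact arguments vary by vLLM version.
from vllm import LLM, SamplingParams

prompts = ["AWS Trainium is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

llm = LLM(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # any supported HF model
    device="neuron",         # target Trainium/Inferentia NeuronCores
    tensor_parallel_size=2,  # shard the model across two NeuronCores
    max_num_seqs=8,
    max_model_len=128,
    block_size=128,
)

for output in llm.generate(prompts, sampling_params):
    print(output.outputs[0].text)
```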
Decart
Trainium's unique architecture, with its efficient memory hierarchy and high-throughput AI engines, proved ideal for Decart's real-time video models, driving full utilization of the hardware. Early testing shows up to 4x higher frame throughput and 2x better cost efficiency compared to leading GPUs, with latency reduced from 40 ms to 10 ms. This performance enables live, dynamic, and interactive video generation at scale, which was previously impractical on standard hardware. Through Amazon Bedrock, these capabilities will soon be directly accessible to AWS customers.
Dean Leitersdorf, Co-founder & CEO, Decart
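Once models like Decart's are available through Amazon Bedrock, calling them should follow the standard Bedrock runtime pattern. Below is a hedged sketch using boto3; the model ID and request body are hypothetical placeholders, since Decart's models are not yet listed.

```python
# Hedged sketch: invoking a model via the Amazon Bedrock runtime API.
# The model ID and request body are hypothetical placeholders.
import json

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.invoke_model(
    modelId="decart.example-video-v1",  # hypothetical; not a real model ID
    body=json.dumps({"prompt": "a drone shot over a coastline at dawn"}),
    contentType="application/json",
    accept="application/json",
)
print(json.loads(response["body"].read()))
```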
Karakuri
By adopting AWS Trainium, we reduced our LLM training costs by over 50 percent while maintaining consistent infrastructure availability. This has enabled us to create Japan's most accurate Japanese language model while staying well under budget. The infrastructure stability has also delivered unexpected productivity gains, allowing our team to focus on innovation rather than troubleshooting.
Tomofumi Nakayama, CPO, Karakuri
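As a concrete picture of what LLM training on Trainium involves, here is a minimal training loop on an XLA device via PyTorch/XLA, the path the AWS Neuron SDK (torch-neuronx) builds on. The toy model and random data are illustrative assumptions, not Karakuri's training stack.

```python
# Minimal sketch: a training loop on a Trainium (XLA) device via
# PyTorch/XLA, the path the AWS Neuron SDK builds on. The toy model and
# random data are illustrative assumptions, not Karakuri's stack.
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm

device = xm.xla_device()  # a NeuronCore exposed as an XLA device

model = nn.Linear(512, 512).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for step in range(10):
    x = torch.randn(8, 512).to(device)
    y = torch.randn(8, 512).to(device)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    xm.mark_step()  # flush the lazily traced graph to the device
```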
AWS Trainium Partners
AGI House
Partnering with AWS Trainium has allowed us to better serve our AI founders and researchers by offering state-of-the-art training resources and creating groundbreaking events and challenges. These collaborations have helped us tap into previously overlooked parts of our community, strengthening existing connections while driving continued growth. Our developer community in particular has thrived throughout this partnership, consistently noting how powerful and easy to use Trainium has been during our build days, especially with the thoughtful support of the team.
Hugging Face
In 2025, the AI community reached an inflection point, with over 10 million AI builders using and sharing millions of open models and datasets on Hugging Face. It's now more important than ever to reduce the cost of running ever larger and more diverse open models, to make sure AI benefits everyone and every industry. At Hugging Face, we have been working hand in hand with the AWS engineering teams building purpose-built AI chips since the first Inferentia instances became available. So today, we are incredibly excited about Trainium3, the next generation of AWS AI chips, which will power the most demanding AI applications, from MoE LLMs to agents and video generation models. With Optimum Neuron, we are committed to bringing the high-memory and cost-efficiency benefits of Trainium3 to the millions of users of Transformers, Accelerate, Diffusers, and TRL, so they can build their own AI while controlling their costs.
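To make the Optimum Neuron workflow concrete, here is a hedged sketch of exporting and running a Hugging Face causal LM on Neuron devices. The model ID and input shapes are illustrative assumptions; consult the Optimum Neuron docs for supported architectures.

```python
# Hedged sketch: compile and run a Hugging Face causal LM on Neuron
# devices with Optimum Neuron. Model ID and shapes are illustrative.
from optimum.neuron import NeuronModelForCausalLM
from transformers import AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B"  # illustrative; must be a supported architecture

# export=True compiles the model for Neuron; input shapes are fixed at export.
model = NeuronModelForCausalLM.from_pretrained(
    model_id,
    export=True,
    batch_size=1,
    sequence_length=1024,
    num_cores=2,            # NeuronCores used for tensor parallelism
    auto_cast_type="bf16",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Open models benefit everyone because", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```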
Red Hat
By integrating our enterprise-grade inference server, built on the innovative vLLM framework, with AWS's purpose-built Inferentia chips, we're enabling customers to deploy and scale production AI workloads more efficiently than ever before. Our solution delivers up to 50% better price-performance compared to traditional GPU-based inference, while maintaining the flexibility to run any AI model in any environment. This partnership builds on Red Hat's trusted open-source innovation and our deep expertise in enterprise AI deployments across 90% of Fortune 500 companies.
PyTorch
PyTorch's vision is straightforward: the same code should run everywhere, on any hardware platform. AWS's native Trainium support brings this hardware choice to researchers who need to experiment rapidly and iterate freely. With the launch of AWS Trainium3, PyTorch developers can research, build, and deploy their ideas with higher performance, lower latency, and better token economics, all while maintaining their familiar PyTorch workflows and staying within the ecosystem they already know.
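A small sketch of that "same code everywhere" idea: select a Trainium (XLA) device when the PyTorch/XLA runtime is present, and fall back to CUDA or CPU otherwise, leaving the model code untouched. This illustrates today's PyTorch/XLA path; the native Trainium support mentioned above may differ in detail.

```python
# Sketch of PyTorch's "same code on any hardware" idea: the model code is
# identical whether it lands on a Trainium NeuronCore, a GPU, or a CPU.
import torch

try:
    import torch_xla.core.xla_model as xm
    device = xm.xla_device()  # Trainium via the PyTorch/XLA runtime
except ImportError:
    device = "cuda" if torch.cuda.is_available() else "cpu"

model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
).to(device)

logits = model(torch.randn(4, 128, device=device))
print(logits.shape, "on", device)
```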