Build on Trainium
A $110m investment program to accelerate AI research and education with AWS Trainium
What is Build on Trainium?
AWS Trainium research cluster
Amazon Research Awards
Neuron Kernel Interface
Benefits
Participating Universities
Here is how leading universities are benefiting from the Build on Trainium Program.
Massachusetts Institute of Technology
"At MIT’s Device Realization Lab, we’re using AWS Trainium to push the limits of medical AI research. Our 3D ultrasound segmentation and speed-of-sound estimation models train faster and more efficiently than ever, cutting experimental time by more than half while achieving state-of-the-art accuracy. AWS Trainium has enabled us to scale our research in ways that were not feasible with traditional GPU systems. By training our 3D fully convolutional neural networks on AWS Trainium (trn1.32xlarge), we achieved state-of-the-art performance with 50% higher throughput and lower cost compared to NVIDIA A100 instances. Using a 32-node Trainium cluster, we efficiently conducted over 180 ablation experiments, reducing total training time from months to weeks and accelerating medical AI innovation at MIT. In the future, we plan to use Trainium to train AI agent models that can operate and automate the digital ultrasound workflow, saving significant clinician time and providing better care to patients."
Carnegie Mellon University
" CMU Catalyst research group works on optimizing ML systems. Our project aims to make it easier to optimize across different ML systems. Trainium is unique in providing both low-level control and an accessible programming interface through Neuron Kernel Interface (NKI).
with the support of AWS through the Build on Trainium program, our researcher was able to explore advanced optimizations on a critical kernel—FlashAttention. What amazed us most was the speed at which we could iterate: we achieved meaningful improvements on top of the prior state of the art in just a week using publicly available NKI, Neuron profiler, and architecture documentation. The combination of powerful tools and clear hardware insights made sophisticated, low-level optimization accessible to our team.
AWS Trainium and Neuron Kernel Interface (NKI) empowers researchers like us to innovate faster, removing barriers that typically slow down hardware-specific optimization work."
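For readers unfamiliar with NKI, the sketch below shows the general shape of a kernel written against the publicly documented neuronxcc.nki Python API. It is modeled on the element-wise add example from the Neuron documentation; the fixed 128×512 tile shape is illustrative and is not taken from any of the kernels described in these testimonials.

```python
# Minimal NKI kernel sketch (assumes a 128 x 512 input tensor).
import neuronxcc.nki as nki
import neuronxcc.nki.language as nl


@nki.jit
def nki_tensor_add(a_input, b_input):
    # Allocate the output tensor in device memory (HBM).
    c_output = nl.ndarray(a_input.shape, dtype=a_input.dtype, buffer=nl.shared_hbm)

    # Index a single tile: 128 partitions x 512 elements in the free dimension.
    ix = nl.arange(128)[:, None]
    iy = nl.arange(512)[None, :]

    # Load both input tiles from HBM into on-chip SBUF memory.
    a_tile = nl.load(a_input[ix, iy])
    b_tile = nl.load(b_input[ix, iy])

    # Compute the element-wise sum on-chip, then store the result back to HBM.
    nl.store(c_output[ix, iy], value=a_tile + b_tile)
    return c_output
```

More involved kernels such as FlashAttention follow the same load, compute, store pattern, adding matrix multiplies, larger tiling schemes, and an SPMD launch grid, with the Neuron profiler used to guide iteration.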
University of California, Berkeley
"Through the Build on Trainium program, his team has gained full access to AWS Neuron’s new NKI open-source compiler stack — including direct visibility into the Trainium ISA and APIs for precise scheduling and memory allocation. This level of visibility and control allows his students to more easily analyze the opportunities for optimization and more effectively discover performant implementations."
Christopher Fletcher, Associate Professor of Computer Science, University of California, Berkeley
University of Illinois Urbana-Champaign
"Access to AWS Trainium and Inferentia has been instrumental in advancing our research and education on large-scale, efficient AI systems. We use these platforms for Mixture-of-Experts training and inference optimizations, prototyping new runtime and scheduling techniques that improve scalability, efficiency, and portability on emerging accelerator architectures. By leveraging the Neuron Developer stack, UIUC researchers are developing new runtime and scheduling techniques that advance the efficiency and portability of AI workloads. The team is particularly impressed by the openness of Neuron Developer stack, which make these platforms valuable for runtime research and enable innovations in sparsity, memory hierarchies, and communication efficiency that go beyond traditional GPU architectures."
University of California, Los Angeles
"By leveraging AWS Trainium and the Build on Trainium program, my students and I were able to accelerate our quantum circuit simulations significantly. The project brought together a strong group of students who collaboratively built a high-performance simulator, enabling deeper experimentation and hands-on learning at a scale that simply wasn't possible before."
University of Technology Sydney
"Our research team at UTS is exploring the integration of tree-ring watermarking algorithms by developing custom Neuron NKI kernels. Having access to the open-source Neuron stack through the Build on Trainium program has been transformative. It gives us unprecedented visibility into the Trainium architecture and the ability to work directly at the hardware level. Access to Trainium has enabled our team to accelerate our watermarking workloads significantly, reducing iteration cycles and allowing us to explore more complex models and techniques. This depth of access allows our researchers to prototype new ideas, experiment with low-level optimizations, and push the boundaries of what watermarking systems can achieve on modern AI accelerators. "