AWS HPC Blog
Instance sizes in the Amazon EC2 Hpc7 family – a different experience
Hpc7g is the first Amazon EC2 HPC instance offering with multiple instance sizes, but this is quite different from the experience of getting smaller instances from other non-HPC instance families. Today, we want to take a moment to explore why this is different, and how it helps.
Application deep-dive into the AWS Graviton3E-based Amazon EC2 Hpc7g instance
In this post we’ll show you application performance and scaling results from Hpc7g, a new instance type powered by AWS Graviton3E, across a wide range of HPC workloads and disciplines.
Accelerating the shift towards a sustainable economy using HPC on AWS
The transition to a sustainable economy is a major goal for many organizations today. HPC plays a pivotal role, and in this post we’ll explore how HPC on AWS is enabling the shift towards a sustainable future.
How SeatGeek simulates massive load with AWS Batch to prepare for big events
In this post we explore SeatGeek’s load testing system that simulates 50k simultaneous users. Originally built to prep SeatGeek for large-event traffic spikes, it now runs weekly to help them harden their code.
Customize Slurm settings with AWS ParallelCluster 3.6
With AWS ParallelCluster 3.6, you can specify Slurm settings directly in the cluster config file – improving reproducibility and taking another step towards self-documenting HPC infrastructure.
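As a rough illustration of that capability, ParallelCluster 3.6 exposes a `CustomSlurmSettings` section under `Scheduling`/`SlurmSettings` in the cluster config; the specific parameter names and values below are just placeholders for whatever Slurm settings your cluster needs:

```yaml
# Fragment of a ParallelCluster 3.6 cluster config (illustrative values only)
Scheduling:
  Scheduler: slurm
  SlurmSettings:
    CustomSlurmSettings:
      # Each entry becomes a parameter in the generated slurm.conf
      - MaxJobCount: 15000
      - SchedulerTimeSlice: 50
```

Because these settings live in the same YAML file as the rest of the cluster definition, they're versioned and reviewed alongside your infrastructure rather than patched into `slurm.conf` by hand.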
Protein Structure Prediction at Scale using AWS Batch
In this post, we discuss how Novo Nordisk approached the deployment of a scale-out HPC platform for running AlphaFold, while meeting their enterprise IT requirements and keeping the user experience simple.
HTC-Grid – examining the operational characteristics of the high throughput compute grid blueprint
The HTC-Grid blueprint meets the challenges that financial services industry (FSI) organizations face when running high throughput computing on AWS. This post goes into detail on the operational characteristics (latency, throughput, and scalability) of HTC-Grid to help you understand whether this solution meets your needs.
Deploying predictive models and simulations at scale using TwinFlow on AWS
AWS TwinFlow is an open source framework for building and deploying predictive models using heterogeneous compute pipelines on AWS. In this post, we show the versatility of the framework with examples spanning engineering design, scenario analysis, systems analysis, and digital twins.
Rigor and flexibility: the benefits of agent-based computational economics
In this post, we describe Agent-Based Computational Economics (ACE), and how extreme scale computing makes it beneficial for policy design.
Streamlining distributed ML workflow orchestration using Covalent with AWS Batch
Complicated multi-step workflows can be challenging to deploy, especially when using a variety of high-compute resources. Covalent is an open-source orchestration tool that streamlines the deployment of distributed workloads on AWS resources. In this post, we outline key concepts in Covalent and develop a machine learning workflow for AWS Batch in just a handful of steps.