AWS HPC Blog
Deploying predictive models and simulations at scale using TwinFlow on AWS
AWS TwinFlow is an open-source framework for building and deploying predictive models using heterogeneous compute pipelines on AWS. In this post, we show the versatility of the framework with examples from engineering design, scenario analysis, systems analysis, and digital twins.
Rigor and flexibility: the benefits of agent-based computational economics
In this post, we describe Agent-Based Computational Economics (ACE) and explain how extreme-scale computing makes it a practical tool for policy design.
Streamlining distributed ML workflow orchestration using Covalent with AWS Batch
Complicated multi-step workflows can be challenging to deploy, especially when they span a variety of high-performance computing resources. Covalent is an open-source orchestration tool that streamlines the deployment of distributed workloads on AWS. In this post, we outline key concepts in Covalent and build a machine learning workflow on AWS Batch in just a handful of steps.
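To give a flavor of what such a workflow looks like, here is a minimal sketch of a Covalent workflow that sends its compute-heavy step to AWS Batch. The executor parameters (bucket, queue, vCPU and memory values) are illustrative assumptions, not the configuration used in the post; consult the covalent-awsbatch-plugin documentation for the fields your deployment needs.

```python
# Minimal Covalent workflow sketch: one heavy electron on AWS Batch,
# one lightweight electron locally. Executor settings below are assumptions.
import covalent as ct

batch_executor = ct.executor.AWSBatchExecutor(
    s3_bucket_name="my-covalent-artifacts",  # placeholder bucket name
    batch_queue="my-batch-job-queue",        # placeholder Batch job queue
    vcpu=4,
    memory=8,            # illustrative values only
    time_limit=3600,
)

@ct.electron(executor=batch_executor)
def heavy_compute(n: int) -> float:
    """Compute-heavy step that runs as a containerized AWS Batch job."""
    import numpy as np
    rng = np.random.default_rng(0)
    a = rng.standard_normal((n, n))
    return float(np.linalg.eigvalsh(a @ a.T).max())

@ct.electron
def report(value: float) -> str:
    """Lightweight step that can run on the local default executor."""
    return f"largest eigenvalue: {value:.3f}"

@ct.lattice
def workflow(n: int = 500) -> str:
    return report(heavy_compute(n))

# Dispatch the workflow; Covalent packages the electron and runs it remotely.
dispatch_id = ct.dispatch(workflow)(n=1000)
print(ct.get_result(dispatch_id, wait=True).result)
```

The pattern to note is that only the decorator arguments change when you move a step between local execution and AWS Batch; the workflow logic itself stays the same.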
Introducing GPU health checks in AWS ParallelCluster 3.6
AWS ParallelCluster 3.6.0 can now detect GPU failures in HPC and AI/ML workloads. Health checks run at the start of Slurm jobs, and if a check fails, the job is requeued on another instance. This can increase reliability and prevent wasted spend.
Benchmarking the Oxford Nanopore Technologies basecallers on AWS
Oxford Nanopore sequencers enable direct, real-time analysis of long DNA or RNA fragments. They work by monitoring changes to an electrical current as nucleic acids pass through a protein nanopore. The resulting signal is decoded into the specific DNA or RNA sequence by compute-intensive algorithms called basecallers. This blog post presents benchmarking results for two of those Oxford Nanopore basecallers, Guppy and Dorado, on AWS. The benchmarking project was conducted in collaboration by G42 Healthcare, Oxford Nanopore Technologies, and AWS.
Deploying a Level 3 Digital Twin Virtual Sensor with Ansys on AWS
AWS is developing new tools to make it easier and faster to deploy Level 3 (L3) digital twin virtual sensors. In this post, we show why L3 digital twins are needed for virtual sensors and how to elastically deploy one in the cloud at scale.
Run Celery workers for compute-intensive tasks with AWS Batch
Many applications leverage distributed task systems like Celery to handle asynchronous work. In this post, we describe how to handle compute-intensive Celery tasks using AWS Batch to scale the compute resources and run worker agents.
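As a quick illustration of the pattern, here is a minimal sketch of a Celery application whose workers could run inside AWS Batch container jobs. The broker choice, queue names, and task body are assumptions for illustration; the post describes the actual architecture in detail.

```python
# compute_tasks.py -- minimal Celery app sketch; values below are placeholders.
from celery import Celery

app = Celery(
    "compute_tasks",
    broker="sqs://",  # assumes Amazon SQS as the broker (requires celery[sqs])
)
app.conf.task_ignore_result = True  # results handling omitted for brevity

@app.task(name="tasks.render_frame")
def render_frame(frame_id: int) -> None:
    """A stand-in for a compute-intensive task."""
    # ... heavy numerical work would go here ...
    print(f"frame {frame_id} rendered")

# Inside each AWS Batch container job, a worker agent would be started with
# something like:
#   celery -A compute_tasks worker --concurrency=1
# so that every Batch job pulls tasks from the queue until it drains.
```

The key idea is that AWS Batch supplies and scales the compute capacity, while Celery continues to handle task queuing and distribution exactly as it would on static infrastructure.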
How Evolvere Biosciences performs macromolecule design on AWS
In this blog post, we catch a glimpse into drug discovery and see how Evolvere Biosciences has deployed a customized architecture with AWS Batch and Nextflow to quickly and easily run its macromolecule design pipeline.
Simulating climate risk scenarios for the Amazon Rainforest
In this post, we discuss the "tipping point" problem, using large-scale HPC to simulate how deforestation affects the risk of accelerating damage to the Amazon rainforest.
The benefits of computational chemistry for the circular economy
In this blog post, we’ll explore the benefits of computational chemistry for the circular economy: how it can help reduce waste, and its potential to enable new, innovative materials.