AWS HPC Blog
Category: Compute
Amazon’s renewable energy forecasting: continuous delivery with Jupyter Notebooks
Interested in eliminating friction between data science and engineering teams? Read this post to learn how Amazon successfully transitioned Jupyter Notebooks from the lab to production.
Dynamic HPC budget control using a core-limit approach with AWS ParallelCluster
Balancing fixed budgets with fluctuating HPC needs is challenging. Discover a customizable solution for automatically setting weekly resource limits based on previous spending.
Accelerating molecule discovery with computational chemistry and Promethium on AWS
Interested in performing high-accuracy computational chemistry simulations faster? Check out this new post about Promethium, a solution from QC Ware that leverages AWS to accelerate simulations by up to 100x.
Leveraging Seqera Platform on AWS Batch for machine learning workflows – Part 2 of 2
In this second part of using Nextflow for machine learning for life science workloads, we provide a step-by-step guide, explaining how you can easily deploy a Seqera environment on AWS to run ML and other pipelines.
Leveraging Seqera Platform on AWS Batch for machine learning workflows – Part 1 of 2
Nextflow is a popular workflow framework for genomics pipelines, but did you know you can also use it for machine learning? ML is already being used for medical imaging, protein folding, drug discovery, and gene editing. In this post, we explain how to build an example Nextflow pipeline that performs ML model training and inference for image analysis.
Save up to 90% using EC2 Spot, even for long-running HPC jobs
New OS-level checkpointing tools can let you run existing HPC codes on EC2 Spot instances with minimal impact from interruptions. Read on for the details.
Enhancing ML workflows with AWS ParallelCluster and Amazon EC2 Capacity Blocks for ML
No more guessing if GPU capacity will be available when you launch ML jobs! EC2 Capacity Blocks for ML let you lock in GPU reservations so you can start tasks on time. Learn how to integrate Capacity Blocks into AWS ParallelCluster to optimize your workflow in our latest technical blog post.
Using a Level 4 Digital Twin for scenario analysis and risk assessment of manufacturing production on AWS
This post was contributed by Orang Vahid (Director of Engineering Services) and Kayla Rossi (Application Engineer) at Maplesoft, and Ross Pivovar (Solution Architect) and Adam Rasheed (Senior Manager) from Autonomous Computing at AWS. One of the most common objectives for our Digital Twin (DT) customers is to use DTs for scenario analysis to assess risk […]
Slurm REST API in AWS ParallelCluster
Looking to integrate AWS ParallelCluster into an automated workflow? This post shows how to submit and monitor jobs programmatically with the Slurm REST API (code examples included).
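For a flavor of what that looks like, here is a minimal sketch of submitting and polling a job through slurmrestd using Python's requests library. The API version segment, the exact payload fields, and the host, user, and queue names shown here are placeholders and assumptions; they vary by Slurm release and cluster setup, so treat this as an illustration of the request shape rather than a drop-in script.

import requests  # third-party HTTP client, assumed available

# Placeholder values: point BASE_URL at your cluster's slurmrestd endpoint
# (the API version segment differs between Slurm releases) and obtain a JWT
# with `scontrol token` on the head node.
BASE_URL = "http://head-node:6820/slurm/v0.0.39"
HEADERS = {
    "X-SLURM-USER-NAME": "ec2-user",
    "X-SLURM-USER-TOKEN": "<jwt-from-scontrol-token>",
    "Content-Type": "application/json",
}

# Minimal batch job submission; field names follow the slurmrestd job-submit
# schema and may differ slightly by version (e.g. how 'environment' is passed).
payload = {
    "job": {
        "name": "hello-hpc",
        "partition": "queue1",
        "current_working_directory": "/home/ec2-user",
        "environment": ["PATH=/usr/bin:/bin"],
    },
    "script": "#!/bin/bash\nsrun hostname\n",
}

resp = requests.post(f"{BASE_URL}/job/submit", json=payload, headers=HEADERS)
resp.raise_for_status()
job_id = resp.json()["job_id"]

# Poll the job record to monitor its state.
state = requests.get(f"{BASE_URL}/job/{job_id}", headers=HEADERS).json()
print(job_id, state["jobs"][0]["job_state"])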
New: Research and Engineering Studio on AWS
Today we’re announcing Research and Engineering Studio on AWS, a self-service portal that helps scientists and engineers access and manage virtual desktops to visualize their data and run their interactive applications in the cloud.