AWS Public Sector Blog

Tag: high performance computing


How AWS can help mission-focused organizations comply with the White House National Security Memorandum on AI

On October 24, 2024, the White House released a National Security Memorandum (NSM) on Artificial Intelligence (AI), which focuses on ensuring US leadership in developing advanced AI technologies. Amazon Web Services (AWS) is uniquely positioned to address the critical needs of defense and national security customers as they advance their AI capabilities. Our comprehensive suite of AI and high performance computing (HPC) capabilities offers flexible, robust solutions to meet the NSM's goals and empower national security missions.


University of British Columbia Cloud Innovation Centre: Governing an innovation hub using AWS management services

In January 2020, Amazon Web Services (AWS) inaugurated a Cloud Innovation Centre (CIC) at the University of British Columbia (UBC). The CIC uses emerging technologies to solve real-world problems and has produced more than 50 prototypes in sectors like healthcare, education, and research. The Centre's work has involved 300-plus AWS accounts across various groups, including external collaborators, UBC staff, students, and researchers. This post discusses the management of AWS in higher education institutions, emphasizing governance that fosters innovation without compromising security, and detailing the policies and responsibilities for managing AWS accounts across projects and research.


5 best practices for accelerating research computing with AWS

Amazon Web Services (AWS) works with higher education institutions, research labs, and researchers around the world to offer cost-effective, scalable, and secure compute, storage, and database capabilities to accelerate time to science. In our work with research leaders and stakeholders, users often ask us about best practices for leveraging the cloud for research. In this post, we dive into five common questions we field from research leaders as they build the academic research innovation centers of the future.


Emory University supports AI.Humanity initiative with high-performance computing on AWS

In 2022, Emory launched the AI.Humanity initiative to explore the societal impacts of artificial intelligence (AI) and influence its future development to serve humanity. Emory aims to be a leading advocate for ethical use of AI and a top destination for students and faculty seeking to understand and apply its transformative technologies. Read this blog post to learn how Emory uses Amazon Web Services (AWS) to support the computing needs of AI.Humanity.

Accelerating economic research at UBC with high performance computing using RONIN and AWS

Dr. Kevin Leyton-Brown and Neil Newman are computer scientists at the University of British Columbia (UBC) working at the intersection of artificial intelligence (AI) and microeconomic theory. Their research demands large-scale, high-performance computing, in episodic bursts, to run parallel simulations of complex auctions. When Leyton-Brown and Newman began research into the computationally complex auction theory behind the 2016 United States wireless spectrum auction, their ML models required significantly more computing power than their on-premises infrastructure could provide. The UBC team turned to RONIN, an AWS Partner, and the virtually unlimited infrastructure of the AWS Cloud, to accelerate their time to answers and new discoveries.

Optimizing operations for ground-based, extremely large telescopes with AWS

Ground-based, extremely large telescopes (ELTs), such as the Giant Magellan Telescope (GMT), will play a crucial role in modern astronomy by providing observations of the universe with remarkable clarity and detail. However, managing the vast amount of data generated by these instruments and supporting optimal performance can be a challenging task. AWS provides a suite of cloud-based solutions that can help address these challenges and streamline ELT operations. Learn how various AWS services can be used to optimize data storage, management, and processing, as well as advanced monitoring and remote continuity techniques, leading to improved overall performance and efficiency for ELTs.

How to put a supercomputer in the hands of every scientist

The AWS Cloud gives you access to virtually unlimited infrastructure suitable for high performance computing (HPC) workloads. With HPC in the cloud, you can eliminate long queues and waiting times, so you don't have to choose between availability and performance. In this technical guide, learn how to use AWS ParallelCluster to set up and manage an HPC cluster in a flexible, elastic, and repeatable way.
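As a taste of what the guide covers, here is a minimal AWS ParallelCluster (v3) cluster configuration sketch. The region, operating system, instance types, subnet ID, and key pair name below are illustrative assumptions, not values from the guide; replace them with your own before use.

```yaml
# Minimal AWS ParallelCluster v3 configuration sketch (assumed values).
Region: us-east-1
Image:
  Os: alinux2                      # Amazon Linux 2 for head and compute nodes
HeadNode:
  InstanceType: c5.xlarge
  Networking:
    SubnetId: subnet-REPLACE_ME    # placeholder: your VPC subnet
  Ssh:
    KeyName: my-key-pair           # placeholder: your EC2 key pair
Scheduling:
  Scheduler: slurm
  SlurmQueues:
    - Name: compute
      ComputeResources:
        - Name: c5n-nodes
          InstanceType: c5n.18xlarge
          MinCount: 0              # scale compute down to zero when idle
          MaxCount: 16
      Networking:
        SubnetIds:
          - subnet-REPLACE_ME
```

With a file like this saved as `cluster-config.yaml`, running `pcluster create-cluster --cluster-name my-hpc --cluster-configuration cluster-config.yaml` provisions the head node and an elastic Slurm queue; setting `MinCount: 0` is what makes the cluster pay-as-you-go, since compute instances launch only when jobs are queued.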


AWS helps researchers study “messages” from the universe

Researchers at the IceCube Experiment and the University of California, San Diego just completed the largest cloud simulation in history, using 51,500 cloud GPUs, including Amazon Elastic Compute Cloud (Amazon EC2) On-Demand and Spot Instances, to understand messages from the universe. The IceCube experiment searches for ghost-like, nearly massless particles called neutrinos deep within the ice at the South Pole using a unique buried cubic-kilometer-size telescope consisting of 5,160 optical sensors.


Tracking global antimicrobial resistance among pathogens using Nextflow and AWS

The Centre for Genomic Pathogen Surveillance (CGPS) is based at the Wellcome Genome Campus, Cambridge, and the Big Data Institute, University of Oxford, in the United Kingdom. Much of its work involves collaborating with laboratories around the world to enhance genomic surveillance through big data, engineering, training, and genomic capacity building. Ultimately, the Centre hopes to enable the linking and real-time interpretation of data globally to track pathogens and antimicrobial resistance at an affordable rate. Spikes in cost are a common challenge for research laboratories. With the cloud, the team wanted to reduce costs, particularly for their partners in low- and middle-income countries, by exploring the Amazon Web Services (AWS) Cloud's pay-as-you-go infrastructure.

Unleashing Seismic Modeling at Scale: We Can’t Stop Quakes, But We Can Be Better Prepared

The terrible headlines are all too familiar. A major earthquake strikes, with a devastating impact on lives, economies, and the environment. The initial event often triggers additional disasters, such as fires or tsunamis, that unleash substantial further damage. The 2004 Indian Ocean undersea earthquake spawned a tsunami that took more than 225,000 lives. More than 100,000 people were lost as a result of a 7.0 temblor in Haiti in 2010. In some cases, the impact of a major seismic event continues for decades or longer, as we've seen with the Fukushima Daiichi nuclear disaster, triggered by the 2011 Tōhoku earthquake and tsunami.