The just-released TOP500 supercomputer list shows a record 34 new NVIDIA-accelerated systems, bringing our total to 87.
But we’re only just getting started. By the time the next list surfaces in June, some of the first supercomputers with our new Volta GPU architecture will be online. Summit, at Oak Ridge National Laboratory (ORNL), will be among the world’s most powerful. Not far behind are Sierra, at Lawrence Livermore National Laboratory in the U.S., and AI Bridging Cloud Infrastructure (ABCI) in Japan.
The three are in the spotlight this week as makers of the world’s most powerful supercomputers gather for SC17 in Denver.
At the show this week, you can attend a variety of talks and exhibits to learn more about how GPUs and Volta will advance science and AI.
AI Extends HPC
Volta delivers 5x the performance of its predecessor, Pascal. Like Pascal, it combines AI and traditional HPC applications on a single platform.
High performance computing is a cornerstone of modern science, allowing researchers to simulate and predict what’s likely to happen in the real world — for example, the body’s response to a new drug treatment, or the efficiency of a new energy source. By combining AI and HPC, Volta lets researchers use AI to gain insights from data to speed up scientific discovery.
Summit, Sierra and ABCI are all powered by NVIDIA Tesla V100 GPU accelerators, which pack the computing power of 100 CPUs into a single GPU, while using half the energy of our previous-generation GPUs. The three contain a combination of CPUs and GPUs, all linked by our NVIDIA NVLink high-speed interconnect technology.
Scaling New Heights with Summit
Summit is chartered to meet the insatiable demand for computing resources from the world’s researchers and scientists. With an expected peak performance of 200 petaflops (and over three exaflops for AI), it will top the reigning world champion of supercomputing, China’s Sunway TaihuLight, which has a peak of 125.4 petaflops. It will also have more than 5x the computing power of ORNL’s Titan supercomputer, long the most powerful system in the U.S.
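As a rough sanity check on those figures, here is a minimal sketch using only the peak numbers quoted above (the variable names are my own, not official labels):

```python
# Peak figures quoted in the text, in petaflops (PF).
summit_peak_pf = 200.0      # Summit, expected peak
taihulight_peak_pf = 125.4  # Sunway TaihuLight, current TOP500 leader

# Summit's expected margin over the reigning system.
speedup = summit_peak_pf / taihulight_peak_pf
print(f"Summit vs. TaihuLight: {speedup:.2f}x")  # ~1.59x
```

In other words, Summit’s headline double-precision peak would put it roughly 60 percent ahead of TaihuLight, before even counting its much larger reduced-precision AI throughput.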
“For us, it’s not about the peak performance. It’s about the science we can do with Summit,” said Tjerk Straatsma, group leader for scientific computing at ORNL’s National Center for Computational Sciences.
With the added computing power, researchers can solve larger and more challenging problems, perform more accurate simulations and make more accurate predictions, Straatsma said. For example, one project planned for Summit is designed to forecast the long-term effects of climate change. Other applications could speed drug discovery, make plant-based fuels more cost effective or enable fusion as a source of clean, abundant energy.
Sierra will be the U.S. Department of Energy’s primary system for managing and securing the nation’s nuclear weapons, as well as for its nuclear nonproliferation and counterterrorism programs. With an expected peak performance of 125 petaflops, Sierra will deliver five to 10 times the performance of Sequoia, currently the fastest system at LLNL.
With the added capability, scientists will be able to carry out simulations with higher fidelity and run three-dimensional simulations that are out of reach of today’s high-performance computers, said Chris Clouse, associate program director for computational physics at LLNL.
The lab also plans to use Sierra for basic science applications and AI research designed to make simulations more robust and accurate, he said.
Designed for AI
ABCI, operated by Japan’s National Institute of Advanced Industrial Science and Technology, will come online as a global innovation platform for AI in 2018. With a planned peak of 37 petaflops and 550 petaflops for deep learning, ABCI will be that nation’s fastest supercomputer.
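The gap between ABCI’s two peak figures illustrates why AI-tailored systems quote separate numbers: reduced-precision tensor arithmetic runs far faster than traditional full-precision HPC math. A small sketch of that ratio, assuming (as the figures suggest, though the text does not say so explicitly) that the 37-petaflop number refers to conventional double-precision peak:

```python
# ABCI peak figures quoted in the text, in petaflops.
abci_hpc_peak_pf = 37.0   # planned peak for traditional HPC workloads
abci_dl_peak_pf = 550.0   # planned peak for deep learning

# The ratio shows the extra throughput reduced-precision
# arithmetic yields on Volta-class hardware.
ratio = abci_dl_peak_pf / abci_hpc_peak_pf
print(f"Deep-learning peak is ~{ratio:.0f}x the conventional peak")
```

That roughly 15x spread is the core of the “AI extends HPC” argument: the same machine that simulates at 37 petaflops can train and run neural networks at more than an order of magnitude higher throughput.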
Tailored for AI, machine learning and deep learning, ABCI will “accelerate the deployment of AI into real businesses and society,” AIST said.
At SC17, you can learn more about Volta supercomputers in development by attending talks at the NVIDIA GPU Technology Theater or by visiting our booth 1809 in the exhibit hall. We’re hosting more than 30 talks about how HPC, accelerated supercomputing and deep learning are advancing computational science. Here are a few highlights:
Tuesday, Nov. 14
Wednesday, Nov. 15
Thursday, Nov. 16
The image at the top of this story is courtesy of Mike Matheson, ORNL. It shows how researchers used experimental data to create a 23.7 million atom biomass model that shows cellulose (purple), lignin (brown), and enzymes (green).