Seven finalists, including both winners of the 2020 Gordon Bell awards, used supercomputers to see more clearly atoms, stars and more, all accelerated with NVIDIA technologies.

Their efforts required the traditional number crunching of high performance computing, the latest data science in graph analytics, AI techniques like deep learning, or combinations of all of the above.

The Gordon Bell Prize is regarded as a Nobel Prize in the supercomputing community, attracting some of the most ambitious efforts of researchers worldwide.

AI Helps Scale Simulation 1,000x

Winners of the traditional Gordon Bell award collaborated across universities in Beijing, Berkeley and Princeton, as well as Lawrence Berkeley National Laboratory (Berkeley Lab). They used a combination of HPC and neural networks they called DeePMDkit to create complex simulations in molecular dynamics, 1,000x faster than prior work while maintaining accuracy.

In one day on the Summit supercomputer at Oak Ridge National Laboratory, they modeled 2.5 nanoseconds in the life of 127.4 million atoms, 100x more than the prior efforts.

Their work aids understanding of complex materials and of fields that make heavy use of molecular modeling, such as drug discovery. It also demonstrated the power of combining machine learning with physics-based modeling and simulation on future supercomputers.
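The core idea behind this kind of machine-learned molecular dynamics is to write the total energy as a sum of per-atom terms, each predicted from that atom's local environment, and to take forces as the negative gradient of that learned energy surface. The toy sketch below illustrates only that structure; the linear model, the `local_descriptor` function and the parameters are stand-ins for illustration, not the team's actual DeePMDkit networks:

```python
import numpy as np

def local_descriptor(positions, i, cutoff=3.0):
    # Toy descriptor of atom i's neighborhood: sums of inverse distances
    # to neighbors within a cutoff, so it is invariant to neighbor order.
    diffs = np.delete(positions, i, axis=0) - positions[i]
    dists = np.linalg.norm(diffs, axis=1)
    near = dists[dists < cutoff]
    return np.array([np.sum(1.0 / near), np.sum(1.0 / near**2)])

def total_energy(positions, weights):
    # Deep-potential-style decomposition: total energy is a sum of
    # per-atom terms, each a function (here linear, standing in for a
    # trained neural network) of that atom's local environment.
    return sum(local_descriptor(positions, i) @ weights
               for i in range(len(positions)))

def forces(positions, weights, eps=1e-5):
    # Forces are the negative gradient of the learned energy surface,
    # approximated here with central finite differences.
    f = np.zeros_like(positions)
    for i in range(positions.shape[0]):
        for d in range(3):
            p = positions.copy(); p[i, d] += eps
            m = positions.copy(); m[i, d] -= eps
            f[i, d] = -(total_energy(p, weights) - total_energy(m, weights)) / (2 * eps)
    return f

pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.2, 0.0]])
w = np.array([0.5, -0.2])  # stand-in for trained network parameters
print(forces(pos, w).shape)  # (3, 3)
```

Because the descriptor depends only on relative positions, the learned energy is translation-invariant and the forces sum to zero, one of the physical constraints such models must respect.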

Atomic-Scale HPC May Spawn New Materials 

Among the finalists, a team including members from Berkeley Lab and Stanford optimized the BerkeleyGW application to bust through the complex math needed to calculate atomic forces binding more than 1,000 atoms with 10,986 electrons, about 10x more than prior efforts.

“The idea of working on a system with tens of thousands of electrons was unheard of just 5-10 years ago,” said Jack Deslippe, a principal investigator on the project and the application performance lead at the U.S. National Energy Research Scientific Computing Center.

Their work could pave the way to new materials for better batteries, solar cells and energy harvesters, as well as faster semiconductors and quantum computers.

The team used all 27,654 GPUs on the Summit supercomputer to get results in just 10 minutes, thanks to harnessing an estimated 105.9 petaflops of double-precision performance.
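Those figures imply a sustained rate per GPU that is easy to check with quick arithmetic. The comparison against a roughly 7.8-teraflop FP64 peak for a V100 is an outside assumption, not a number from this article:

```python
# Back-of-the-envelope: sustained double-precision rate per GPU
# implied by the figures above.
total_pflops = 105.9   # estimated aggregate FP64 performance
num_gpus = 27_654      # GPUs used on Summit

per_gpu_tflops = total_pflops * 1e3 / num_gpus
print(f"{per_gpu_tflops:.2f} TFLOPS per GPU")  # about 3.83

# Assumed V100 FP64 peak of ~7.8 TFLOPS (not stated in the article).
efficiency = per_gpu_tflops / 7.8
print(f"~{efficiency:.0%} of peak")
```

Sustaining on the order of half of peak double-precision throughput across an entire machine is a strong result for a real application, which is part of what made the run notable.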

Developers are continuing the work, optimizing their code for Perlmutter, a next-generation system using NVIDIA A100 Tensor Core GPUs that sport hardware to accelerate 64-bit floating-point jobs.

Analytics Sifts Text to Fight COVID

Using a form of data mining called graph analytics, a team from Oak Ridge and the Georgia Institute of Technology found a way to search for deep connections in medical literature, using a dataset they created with 213 million relationships among 18.5 million concepts and papers.

Their DSNAPSHOT (Distributed Accelerated Semiring All-Pairs Shortest Path) algorithm, using the team’s customized CUDA code, ran on 24,576 V100 GPUs on Summit, delivering results on a graph with 4.43 million vertices in 21.3 minutes. They claimed a record for deep search in a biomedical database and showed the way for others.
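The semiring formulation at the heart of DSNAPSHOT casts all-pairs shortest paths as repeated matrix "multiplication" over the (min, +) semiring, which is what makes the problem map well onto GPU matrix hardware. The dense, single-machine NumPy sketch below shows that formulation only; it is not the team's distributed CUDA code:

```python
import numpy as np

INF = np.inf

def minplus_matmul(A, B):
    # Matrix product over the tropical (min, +) semiring:
    # C[i, j] = min over k of A[i, k] + B[k, j].
    return np.min(A[:, :, None] + B[None, :, :], axis=1)

def apsp(dist):
    # Repeated min-plus squaring: after ceil(log2(V)) products, D[i, j]
    # holds the shortest-path distance from i to j.
    n = dist.shape[0]
    D = dist.copy()
    steps = 1
    while steps < n - 1:
        D = minplus_matmul(D, D)
        steps *= 2
    return D

# Adjacency matrix of a small directed graph; INF means "no edge".
G = np.array([
    [0.0, 1.0, INF, 10.0],
    [INF, 0.0, 1.0, INF],
    [INF, INF, 0.0, 1.0],
    [INF, INF, INF, 0.0],
])
print(apsp(G)[0, 3])  # 3.0, via 0 -> 1 -> 2 -> 3, beating the direct edge of 10
```

Because each step is a matrix product, the same pattern can be tiled and distributed across thousands of GPUs, which is the direction the team's CUDA implementation takes at vastly larger scale.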

Graph analytics finds deep patterns in biomedical literature related to COVID-19.

“Looking forward, we believe this novel capability will enable the mining of scholarly knowledge … (and could be used in) natural language processing workflows at scale,” Ramakrishnan Kannan, team lead for computational AI and machine learning at Oak Ridge, said in an article on the lab’s website.

Tuning in to the Stars

Another team pointed the Summit supercomputer at the stars in preparation for one of the biggest big-data projects ever tackled. They created a workflow that handled six hours of simulated output from the Square Kilometer Array (SKA), a network of thousands of radio telescopes expected to come online later this decade.

Researchers from Australia, China and the U.S. analyzed 2.6 petabytes of data on Summit to deliver a proof of concept for one of SKA’s key use cases. In the process, they revealed critical design elements for future radio telescopes and the supercomputers that study their output.

The team’s work generated 247 GBytes/s of data and spawned 925 GBytes/s in I/O. Like many other finalists, they relied on the fast, low-latency InfiniBand links powered by NVIDIA Mellanox networking, widely used in supercomputers like Summit to speed data among thousands of computing nodes.

Simulating the Coronavirus with HPC+AI

The four teams stand beside three other finalists who used NVIDIA technologies in a competition for a special Gordon Bell Prize for COVID-19.

The winner of that award used all the GPUs on Summit to create the largest, longest and most accurate simulation of a coronavirus to date.

“It was a total game changer for seeing the subtle protein motions that are often the important ones, that’s why we started to run all our simulations on GPUs,” said Lilian Chong, an associate professor of chemistry at the University of Pittsburgh, one of 27 researchers on the team.

“It’s no exaggeration to say what took us literally five years to do with the flu virus, we are now able to do in a few months,” said Rommie Amaro, a researcher at the University of California at San Diego who led the AI-assisted simulation.