Observations

Opinion, arguments & analyses from the editors of Scientific American

Captain of Crunch: U.S. Nuclear Stockpile Watchdog Boasts Fastest Supercomputer in the West–or Anywhere Else, for That Matter

The views expressed are those of the author and are not necessarily those of Scientific American.





The U.S. Department of Energy supercomputer standing watch over the nation’s nuclear arsenal was crowned the world’s fastest supercomputer Monday at the 2012 International Supercomputing Conference in Hamburg, Germany. The ascension of Sequoia—run by the National Nuclear Security Administration (NNSA)—to the top of the TOP500 list of supercomputers marks the first time since November 2009 that a U.S. system has reached the global performance pinnacle.

In the two decades since the U.S. first declared a moratorium on nuclear weapons testing, it has relied on computers to track the integrity, effectiveness and safety of its stockpile. Sequoia—an IBM BlueGene/Q system with more than 1.5 million processing cores installed at the Energy Department’s Lawrence Livermore National Laboratory in California—bested the competition by operating at 16.3 petaflops, or more than 16 quadrillion floating-point operations per second. The supercomputer enables NNSA’s Advanced Simulation and Computing program to study weapons performance, in particular hydrodynamics and the properties of materials at extreme pressures and temperatures.

Sequoia’s performance knocked Fujitsu’s “K Computer” at the RIKEN Advanced Institute for Computational Science in Kobe, Japan, to second place after the latter had held the pole position for the past two lists, which have been compiled twice annually since June 1993. K Computer registered 10.5 petaflops using 705,024 processing cores. RIKEN conducts research in a number of areas, including physics, chemistry, biology, medical science, engineering and computational science. In addition to being 55 percent faster than K Computer, Sequoia is also 150 percent more energy efficient.
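The speed comparison above is just a ratio of the two Linpack figures. A back-of-the-envelope check, using the rounded values reported in this article (one petaflops is 10^15 floating-point operations per second):

```python
PETA = 1e15  # one petaflops = 1e15 floating-point operations per second

sequoia = 16.3 * PETA     # Sequoia (Lawrence Livermore), Linpack result
k_computer = 10.5 * PETA  # K Computer (RIKEN), Linpack result

# Relative speedup: (16.3 / 10.5) - 1 is roughly 0.55, i.e. about
# 55 percent faster, matching the figure cited in the article.
speedup = sequoia / k_computer - 1.0
print(f"Sequoia is ~{speedup:.0%} faster than K Computer")
```

The same arithmetic confirms the “more than 16 quadrillion operations per second” phrasing: 16.3 petaflops is 16.3 × 10^15, comfortably above 16 × 10^15.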

Argonne National Laboratory’s Mira supercomputer, also an IBM BlueGene/Q system, debuted in the TOP500's third spot with about 8.2 petaflops using more than 786,000 cores. The other top 10 U.S. system is the upgraded Jaguar at Oak Ridge National Laboratory in Tennessee, which was the top U.S. system on the previous list and now sits in the number-six slot on the current tally. In all, four of the top 10 supercomputers are IBM Blue Gene/Q systems.

The TOP500—compiled by Hans Meuer of the University of Mannheim, Germany, Erich Strohmaier and Horst Simon of Lawrence Berkeley National Laboratory, and Jack Dongarra of the University of Tennessee Knoxville—lists the 500 most powerful commercially available computer systems. Supercomputing technology, often used by scientists to crunch enormous data sets and run complex simulations, has advanced markedly in recent months. Whereas the total performance of all the systems on the list was 74.2 petaflops in November 2011, the most recent list boasts a combined performance of 123.4 petaflops.

BlueGene/Q image courtesy of IBM

Editor’s Note (11/01/12): This article has been translated into Serbo-Croatian by Jovana Milutinovich of Web Geeks Resources.

About the Author: Larry is the associate editor of technology for Scientific American, covering a variety of tech-related topics, including biotech, computers, military tech, nanotech and robots. Follow on Twitter @lggreenemeier.







Comments (4)
  1. jtdwyer 8:40 am 06/19/2012

    Someone always buys the first new fastest computer – its lofty status will soon be superseded by a newer, faster computer…

    “1.5 million processing cores.” Hmmm… for problems that can be partitioned into millions of concurrent computations this is just the thing. But with the overhead of managing so many system components, I wonder if it can compute 1+1 any faster than my old laptop?

  2. oldfartfox 5:23 pm 06/19/2012

    Isn’t petaflops per second redundant?
    I had the impression that the “ps” at the end of the acronym represented “per second.”

  3. jtdwyer 6:02 pm 06/19/2012

    oldfartfox – you are correct, sir!

  4. mhenriday 2:25 pm 06/23/2012

    Would it not be wonderful if further competition between the powers could be exclusively in the form of «fastest supercomputer in the West» (or elsewhere for that matter), rather than, as alas, is currently the case, «fastest gun in the West» ? I’d like to live in that sort of world….

    Henri
