=== Installations ===
The following is an incomplete list of Blue Gene/Q installations. As of June 2012, the TOP500 list contained 20 Blue Gene/Q installations of 1/2-rack (512 nodes, 8,192 processor cores, 86.35 TFLOPS Linpack) and larger.<ref name=Top500/> At a (size-independent) power efficiency of about 2.1 GFLOPS/W, all these systems also populated the top of the June 2012 [[Green500#Green500 List|Green 500]] list.<ref name=Green500/>
* A Blue Gene/Q system called [[Sequoia (supercomputer)|Sequoia]] was delivered to the [[Lawrence Livermore National Laboratory]] (LLNL) beginning in 2011 and was fully deployed in June 2012. It is part of the [[Advanced Simulation and Computing Program]] running nuclear simulations and advanced scientific research. It consists of 96 racks (comprising 98,304 compute nodes with 1.6 million processor cores and 1.6 [[Petabyte|PB]] of memory) covering an area of about {{convert|3000|sqft|m2}}.<ref>{{cite web |last=Feldman |first=Michael |url=http://www.hpcwire.com/features/Lawrence-Livermore-Prepares-for-20-Petaflop-Blue-GeneQ-38948594.html |title=Lawrence Livermore Prepares for 20 Petaflop Blue Gene/Q |publisher=HPCwire |date=2009-02-03 |access-date=2011-03-11 |url-status=dead |archive-url=https://web.archive.org/web/20090212034132/http://www.hpcwire.com/features/Lawrence-Livermore-Prepares-for-20-Petaflop-Blue-GeneQ-38948594.html |archive-date=2009-02-12 }}</ref> In June 2012, the system was ranked as the world's fastest supercomputer,<ref>{{cite web |last=B Johnston |first=Donald |url=https://www.llnl.gov/news/newsreleases/2012/Jun/NR-12-06-07.html |title=NNSA's Sequoia supercomputer ranked as world's fastest |date=2012-06-18 |access-date=2012-06-23 |archive-url=https://web.archive.org/web/20140902012510/https://www.llnl.gov/news/newsreleases/2012/Jun/NR-12-06-07.html |archive-date=2014-09-02 |url-status=dead }}</ref><ref>{{Cite web|url=http://www.top500.org/lists/2012/06/press-release|archive-url=https://web.archive.org/web/20120624041004/http://www.top500.org/lists/2012/06/press-release|url-status=dead|title=TOP500 Press Release|archive-date=June 24, 2012}}</ref> at 20.1 [[FLOPS|PFLOPS]] peak and 16.32 [[FLOPS|PFLOPS]] sustained (Linpack), drawing up to 7.9 [[MegaWatt|megawatts]] of power.<ref name=Top500/> In June 2013, its performance was listed at 17.17 [[FLOPS|PFLOPS]] sustained (Linpack).<ref name=Top500/>
* A 10 PFLOPS (peak) Blue Gene/Q system called ''[[Mira (supercomputer)|Mira]]'' was installed at [[Argonne National Laboratory]] in the [http://www.alcf.anl.gov/ Argonne Leadership Computing Facility] in 2012. It consists of 48 racks (49,152 compute nodes), with 70 [[Petabyte|PB]] of disk storage (470 GB/s I/O bandwidth).<ref>{{cite web|url=http://www.alcf.anl.gov/articles/mira-worlds-fastest-supercomputer|title=MIRA: World's fastest supercomputer - Argonne Leadership Computing Facility|website=Alcf.anl.gov|access-date=13 October 2017}}</ref><ref>{{cite web|url=http://www.alcf.anl.gov/mira|title=Mira - Argonne Leadership Computing Facility|website=Alcf.anl.gov|access-date=13 October 2017}}</ref>
* ''JUQUEEN'' at [[Forschungszentrum Jülich]] is a 28-rack Blue Gene/Q system, and was the highest-ranked machine in Europe in the Top500 from June 2013 to November 2015.<ref name=Top500/>
* ''Vulcan'' at [[Lawrence Livermore National Laboratory]] (LLNL) is a 24-rack, 5 PFLOPS (peak) Blue Gene/Q system that was commissioned in 2012 and decommissioned in 2019.<ref>{{cite web|url=https://hpc.llnl.gov/hardware/platforms/vulcan-decommissioned|website=hpc.llnl.gov|title=Vulcan—decommissioned|access-date=10 April 2019}}</ref> Vulcan served Lab-industry projects through Livermore's High Performance Computing (HPC) Innovation Center<ref>{{cite web|url=http://hpcinnovationcenter.llnl.gov/|title=HPC Innovation Center|website=hpcinnovationcenter.llnl.gov|access-date=13 October 2017}}</ref> as well as academic collaborations in support of DOE/National Nuclear Security Administration (NNSA) missions.<ref>{{cite web|url=https://www.llnl.gov/news/newsreleases/2013/Jun/NR-13-06-05.html|title=Lawrence Livermore's Vulcan brings 5 petaflops computing power to collaborations with industry and academia to advance science and technology|date=11 June 2013|website=Llnl.gov|access-date=13 October 2017|archive-date=9 December 2013|archive-url=https://web.archive.org/web/20131209231310/https://www.llnl.gov/news/newsreleases/2013/Jun/NR-13-06-05.html|url-status=dead}}</ref>
* ''Fermi'' at the [[CINECA]] Supercomputing facility, Bologna, Italy,<ref>{{cite web |url=http://www.hpc.cineca.it/content/ibm-fermi |title=Ibm-Fermi | Scai |access-date=2013-05-13 |url-status=dead |archive-url=https://web.archive.org/web/20131030045547/http://www.hpc.cineca.it/content/ibm-fermi |archive-date=2013-10-30 }}</ref> is a 10-rack, 2 PFLOPS (peak) Blue Gene/Q system.
* As part of [[DiRAC]], the [[EPCC]] hosts a six-rack (6,144-node) Blue Gene/Q system at the [[University of Edinburgh]].<ref>{{cite web|url=https://www.epcc.ed.ac.uk/facilities/dirac|website=epcc.ed.ac.uk|title=DiRAC BlueGene/Q}}</ref>
* A five-rack Blue Gene/Q system with additional compute hardware, called ''AMOS'', was installed at Rensselaer Polytechnic Institute in 2013.<ref>{{cite web|url=http://news.rpi.edu/content/2013/10/03/amos-among-world%E2%80%99s-fastest-and-most-powerful-supercomputers|title=Rensselaer at Petascale: AMOS Among the World's Fastest and Most Powerful Supercomputers|website=News.rpi.edu|access-date=13 October 2017}}</ref> In 2014 the system was rated at 1,048.6 teraflops, making it the most powerful supercomputer at any private university and the third most powerful supercomputer among all universities.<ref>{{cite web|url=http://news.rpi.edu/content/2014/08/04/amos-ranks-43rd-list-world%E2%80%99s-top-500-supercomputers|title=AMOS Ranks 1st Among Supercomputers at Private American Universities|author=Michael Mullaney|website=News.rpi.edu|access-date=13 October 2017}}</ref>
* An 838 TFLOPS (peak) Blue Gene/Q system called ''Avoca'' was installed at the [[Victorian Life Sciences Computation Initiative]] in June 2012.<ref>{{cite web|url=http://themelbourneengineer.eng.unimelb.edu.au/2012/02/worlds-greenest-computer-comes-to-melbourne/|title=World's greenest supercomputer comes to Melbourne - The Melbourne Engineer|date=16 February 2012|website=Themelbourneengineer.eng.unimelb.edu.au/|access-date=13 October 2017|archive-date=2 October 2017|archive-url=https://web.archive.org/web/20171002042114/http://themelbourneengineer.eng.unimelb.edu.au/2012/02/worlds-greenest-computer-comes-to-melbourne/|url-status=dead}}</ref> This system is part of a collaboration between IBM and VLSCI, with the aims of improving diagnostics, finding new drug targets, refining treatments and furthering the understanding of diseases.<ref>{{cite web|url=http://www.vlsci.org.au/|title=Melbourne Bioinformatics - For all researchers and students based in Melbourne's biomedical and bioscience research precinct|website=Melbourne Bioinformatics|access-date=13 October 2017}}</ref> The system consists of four racks, with 350 TB of storage, 65,536 cores, and 64 TB of RAM.<ref>{{cite web|url=http://www.vlsci.org.au/page/computer-software-configuration|title=Access to High-end Systems - Melbourne Bioinformatics|website=Vlsci.org.au|access-date=13 October 2017}}</ref>
* A 209 TFLOPS (peak) Blue Gene/Q system was installed at the [[University of Rochester]] in July 2012.<ref>{{cite web|url=http://www.rochester.edu/news/show.php?id=4192|title=University of Rochester Inaugurates New Era of Health Care Research|website=Rochester.edu|access-date=13 October 2017}}</ref> This system is part of the [http://www.rochester.edu/provost/hscci/ Health Sciences Center for Computational Innovation] {{Webarchive|url=https://web.archive.org/web/20121019071900/http://www.rochester.edu/provost/hscci/ |date=2012-10-19 }}, which is dedicated to the application of [[high-performance computing]] to research programs in the [[healthcare science|health sciences]]. The system consists of a single rack (1,024 compute nodes) with 400 [[Terabyte|TB]] of high-performance storage.<ref name="circ.rochester.edu">{{cite web|url=http://www.circ.rochester.edu/resources.html|title=Resources - Center for Integrated Research Computing|website=Circ.rochester.edu|access-date=13 October 2017}}</ref>
* A 209 TFLOPS peak (172 TFLOPS Linpack) Blue Gene/Q system called ''Lemanicus'' was installed at [[École Polytechnique Fédérale de Lausanne|EPFL]] in March 2013.<ref>{{cite web|url=http://bluegene.epfl.ch/ |title=EPFL BlueGene/L Homepage |archive-url=https://web.archive.org/web/20071210070627/http://bluegene.epfl.ch/ |access-date=2021-03-10|archive-date=2007-12-10 }}</ref> This system belongs to the Center for Advanced Modeling Science (CADMOS),<ref>{{cite web|url=http://www.cadmos.org|title=À propos|first=Super|last=Utilisateur|website=Cadmos.org|access-date=13 October 2017|archive-url=https://web.archive.org/web/20160110132825/http://cadmos.org/|archive-date=10 January 2016|url-status=dead}}</ref> a collaboration between the three main research institutions on the shore of [[Lake Geneva]] in the French-speaking part of Switzerland: the [[University of Lausanne]], the [[University of Geneva]] and [[École Polytechnique Fédérale de Lausanne|EPFL]]. The system consists of a single rack (1,024 compute nodes) with 2.1 [[Petabyte|PB]] of IBM GPFS-GSS storage.
* A half-rack Blue Gene/Q system of about 100 TFLOPS (peak), called ''Cumulus'', was installed at the A*STAR Computational Resource Centre, Singapore, in early 2011.<ref>{{cite web|url=https://www.acrc.a-star.edu.sg/21/hardware.html|title=A*STAR Computational Resource Centre|website=Acrc.a-star.edu.sg|access-date=2016-08-24|archive-date=2016-12-20|archive-url=https://web.archive.org/web/20161220065923/https://www.acrc.a-star.edu.sg/21/hardware.html|url-status=dead}}</ref>
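The sizes quoted above all follow from two Blue Gene/Q constants stated in the list: one rack holds 1,024 compute nodes, and each node carries 16 processor cores (hence 1/2 rack = 512 nodes = 8,192 cores). A minimal sketch, using only figures from the text, checks that arithmetic and the ~2.1 GFLOPS/W efficiency figure (the function name <code>rack_stats</code> is illustrative, not from any Blue Gene software):

```python
# Sanity-check the Blue Gene/Q figures quoted in the list above.
# Constants taken from the text: 1 rack = 1,024 nodes, 1 node = 16 cores.
NODES_PER_RACK = 1024
CORES_PER_NODE = 16

def rack_stats(racks):
    """Return (nodes, cores) for a given number of Blue Gene/Q racks."""
    nodes = int(racks * NODES_PER_RACK)
    return nodes, nodes * CORES_PER_NODE

# Sequoia: 96 racks -> 98,304 nodes and ~1.6 million cores, as stated.
nodes, cores = rack_stats(96)          # (98304, 1572864)

# A 1/2-rack entry system: 512 nodes, 8,192 cores, as in the lead.
half_nodes, half_cores = rack_stats(0.5)

# Power efficiency from Sequoia's June 2012 figures:
# 16.32 PFLOPS sustained at 7.9 MW ~= 2.07 GFLOPS/W, matching ~2.1 GFLOPS/W.
gflops = 16.32e6                       # 16.32 PFLOPS expressed in GFLOPS
watts = 7.9e6                          # 7.9 megawatts expressed in watts
efficiency = gflops / watts
```

The same per-rack constants reproduce Mira (48 racks, 49,152 nodes) and the single-rack Rochester and Lemanicus systems.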