Player three has entered Cray's supercomputing game: First AMD Epyc, now Fujitsu's Arm chips

A64FX: Big in Japan, and big in the US and UK at this rate

Cray has said it will build a family of supercomputers for government research labs and universities. The kicker? The exascale machines will be powered by Arm-compatible microprocessors.

The HPE-owned biz has partnered with Fujitsu to roll out the beefy big iron. Fujitsu will supply its homegrown A64FX processors – understood to be 48-core 64-bit Armv8-compatible beasts – to drive applications on the systems, while Cray will integrate the chippery into its line of CS500 supers.

It’s still early days, so neither the full specs nor the codenames of the exaFLOPS-grade computers have been released. The exascale kit is expected to ship from 2020 to the US Department of Energy’s Oak Ridge National Laboratory and Los Alamos National Laboratory, as well as Stony Brook University in New York. Elsewhere in the world, other institutions, including the RIKEN Center for Computational Science in Japan and the University of Bristol in the UK, are eagerly awaiting the toy sets.

RIKEN is due to receive its own highly customized A64FX-powered exascale super, dubbed Post-K, from Fujitsu; the Cray-built one will sit alongside it.

Cray’s gear, in recent times anyway, usually houses x86 processors, such as Intel Xeons and lately AMD Epycs. These latest additions to its portfolio of machines, however, will be decked out with Arm-based CPU cores.

Fujitsu’s A64FX supports high-bandwidth memory (HBM) and Arm's Scalable Vector Extension (SVE), a set of vector instructions that accelerates matrix math and similar number-crunching, making the chip well suited to physics simulations, machine-learning workloads, and the like. The maximum theoretical HBM bandwidth will be greater than one terabyte per second, Cray claimed.
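
SVE code is vector-length agnostic: the same binary runs on any implementation, from 128-bit vectors up to the 512-bit units in the A64FX, using whatever width the hardware provides. As a rough illustration only – the function name and loop below are our own, not anything from Cray or Fujitsu – here is a minimal sketch of a vector-length-agnostic daxpy-style kernel written with Arm's C intrinsics from arm_sve.h:

    #include <arm_sve.h>
    #include <stdint.h>

    /* y[i] += a * x[i], using SVE intrinsics.
       svcntd() reports how many 64-bit doubles fit in one vector register,
       so the same code runs unchanged on a 128-bit SVE part or the A64FX's
       512-bit units. The predicate from svwhilelt masks off the tail
       elements on the final pass through the loop. */
    void daxpy_sve(double a, const double *x, double *y, int64_t n) {
        for (int64_t i = 0; i < n; i += svcntd()) {
            svbool_t pg = svwhilelt_b64_s64(i, n);   // active lanes this pass
            svfloat64_t vx = svld1_f64(pg, &x[i]);   // predicated load of x
            svfloat64_t vy = svld1_f64(pg, &y[i]);   // predicated load of y
            vy = svmla_n_f64_x(pg, vy, vx, a);       // vy += a * vx
            svst1_f64(pg, &y[i], vy);                // predicated store
        }
    }

Compilers that target SVE (built with something along the lines of -march=armv8.2-a+sve) can often auto-vectorize plain C loops of this shape as well, which is part of the pitch for porting existing HPC codes to the architecture.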

The new supercomputers will likely be used to model complex 3D systems, from the weather and materials to nuclear energy and weapons.

“The most demanding computing work at Los Alamos National Laboratory involves sparse, irregular, multi-physics, multi-length-scale, highly resolved, long running 3D simulations,” said Gary Grider, deputy division leader of the HPC division at Los Alamos National Laboratory, on Wednesday. “There are few existing architectures that currently serve this workload well.”

You can read more details and technical analysis over on our HPC and AI sister site, The Next Platform. Also, tune into TNP for coverage of Supercomputing 2019 next week. ®
