Dell hints at high-performance computing for its Apex multicloud services


Dell may have inadvertently revealed that it has high-performance computing services on its roadmap for its Apex multicloud services.

“If I was looking downstream, I would be looking for Apex versions of HPC,” said Jeff Clarke, vice chairman and co-chief operating officer at Dell, during a livestreamed press conference Tuesday.

Project Apex is Dell’s multicloud strategy, a combination of hardware and software products that aggregates clouds, storage, services and hardware across distributed locations so they behave as a single system. On Tuesday, Dell also announced Project Frontier, which is designed to make it easier to connect edge devices such as robots, CCTV cameras and other sensing equipment in enterprise multicloud environments.

Clarke’s comment surprised the moderator, JJ Davis, Dell’s senior vice president of corporate affairs, who tried to reframe it as a guess on Clarke’s part. “Pre-announced and all kinds of stuff,” Davis said.

Clarke took Davis’ comment in stride and initially tried to walk back his remark, but then doubled down on bringing HPC to Apex as part of a larger offering that will extend to all sorts of devices.

“There will be PC versions of this. Laptops and desktops, whether it’s vertical, whether it’s a VDI solution or an HPC solution, whether it’s horizontal capabilities, whether we’re talking about extensions to the public cloud is what we’re building,” Clarke said.

“So while we’re having a little fun with it, it’s a pre-disclosure — that’s the direction we’re going with our company’s capabilities in this multicloud world,” Clarke said.

A few sentences later, Davis cut Clarke off just as he started another sentence, perhaps to limit the damage of any further pre-disclosures.

But Clarke, and later CEO Michael Dell, reiterated that high-performance computing was an important market for Dell from a technology and financial perspective.

“They tend to buy real capable servers with lots of memory, with lots of GPU capabilities. We like this business. We will continue to be involved,” Clarke said.

The Dell CEO said the HPC sector was driving the golden age of processor architecture, with systems going beyond CPUs and tasks being offloaded to alternative chips like GPUs.

“Think about it way beyond the CPU with DPUs and QPUs — all kinds of offload engines that process this explosion of data and the incredible advancements in computing to process that data,” Dell said.

Addison Snell, CEO of supercomputing research firm Intersect360, pointed to Dell’s existing supercomputing installations. On the current Top500 list, Dell has the largest supercomputer at a commercial customer site: HPC5, the 12th-fastest supercomputer, installed at Eni in Italy. It also has the largest university supercomputer: Frontera, the 16th-fastest, deployed at the Texas Advanced Computing Center (TACC) in Austin.

“Dell is often underestimated for the scale of its contributions to the HPC market, likely because it hasn’t had the cutting-edge sales cachet from the Department of Energy that HPE has through its acquisition of Cray,” Snell said.

HPC has always been at the forefront of computer architecture, Dell said. He pointed to the Stampede supercomputer at the University of Texas at Austin, which went online in 2013 and was once the sixth-fastest supercomputer in the world. The system was jointly developed by Dell and Intel and was retired in 2017.

“[HPC has] been incredibly important to us and…the UT Stampede clusters are a great example of that,” Dell said.

After Dell finished, Clarke returned to the importance of HPC to the company. There has been an explosion of data, and HPC-like computing will be needed to handle it, Clarke said.

“What you find is an architectural change that is happening. You’re going to see compute resources and storage resources track where data is created,” Clarke said.

Compute and storage resources will be distributed to follow where data is created, which will increase computing requirements. A lot of data is already being generated at the edge, fueling the need for high-performance computing, especially for machine learning models.

“You’re going to do real-time data processing at the edge to get better results there and those two worlds have to connect,” Clarke said.

Dell has the application engineering and services for this space, but lacks a native interconnect, Snell said. HPE’s big DOE wins were built on the Cray (now HPE) Slingshot interconnect, Fujitsu has its Tofu interconnect in Fugaku, and Atos has BXI.

“When Dell builds a massive system, it uses standard server building blocks and InfiniBand. Maybe that makes it less special somehow, like anyone could have done it – but Dell is the one that gets chosen,” Snell said.

A new supercomputer called Horizon is expected to be housed at TACC around 2026. Dell’s exact role is unclear, but it has been listed as a partner in the project. The system is part of the NSF’s plan to create a Leadership-Class Computing Facility (LCCF) at TACC.
