Extending AMD's leadership in AI and high-performance computing, “Helios” provides the foundation for the open, scalable infrastructure that will power the world’s growing AI demands. Designed for gigawatt-scale data centers, the new ORW specification defines an open, double-wide rack optimized for the power, cooling, and serviceability needs of next-generation AI systems. By adopting ORW and OCP standards, “Helios” gives the industry a unified, standards-based foundation for developing and deploying efficient, high-performance AI infrastructure at scale.
“Open collaboration is key to scaling AI efficiently,” said Forrest Norrod, executive vice president and general manager, Data Center Solutions Group, AMD. “With ‘Helios,’ we’re turning open standards into real, deployable systems — combining AMD Instinct GPUs, EPYC CPUs, and open fabrics to give the industry a flexible, high-performance platform built for the next generation of AI workloads.”
Built for Open, Efficient, and Sustainable AI Infrastructure
The AMD “Helios” rack-scale platform integrates open compute standards including OCP DC-MHS, UALink, and Ultra Ethernet Consortium (UEC) architectures, supporting both open scale-up and scale-out fabrics. The rack features quick-disconnect liquid cooling for sustained thermal performance, a double-wide layout for improved serviceability, and standards-based Ethernet for multi-path resiliency.
As a reference design, “Helios” enables OEMs, ODMs, and hyperscalers to adopt, extend, and customize open AI systems quickly — reducing deployment time, improving interoperability, and supporting efficient scaling for AI and HPC workloads. The “Helios” platform reflects AMD’s ongoing collaboration across the OCP community to enable open, scalable infrastructure for AI deployments worldwide.