AMD has launched its latest accelerator chips and offered a glimpse into its AI infrastructure strategy, aiming to expand its role in the enterprise market, which Nvidia currently dominates.
At its 2025 Advancing AI event, the chipmaker unveiled the AMD Instinct MI350 series accelerators and previewed a rack-scale AI infrastructure platform built on open industry standards.
“The MI350 Series, consisting of both Instinct MI350X and MI355X GPUs and platforms, delivers a 4x, generation-on-generation AI compute increase and a 35x generational leap in inferencing, paving the way for transformative AI solutions across industries,” the company said in a statement.
The MI355X accelerator delivers up to 40% more tokens per dollar than competing products, according to the company.
AMD’s rack-scale AI infrastructure, featuring MI350 Series GPUs, 5th Gen EPYC processors, and Pensando Pollara NICs, is already being deployed by hyperscalers including Oracle Cloud Infrastructure, with broader availability expected in the second half of 2025, AMD added.
The company also unveiled a next-generation AI rack platform called Helios, built around upcoming MI400 Series GPUs, which AMD projects will deliver up to 10 times higher inference performance on Mixture of Experts models.
Other announcements included ROCm 7, the latest version of AMD’s open-source AI software stack, and the broad availability of its Developer Cloud, a fully managed platform aimed at accelerating high-performance AI development.
Openness and Nvidia challenge
AMD underscored its commitment to open standards and ecosystem collaboration, positioning itself in contrast to rival Nvidia, which depends heavily on a proprietary software stack.
“We are entering the next phase of AI, driven by open standards, shared innovation, and AMD’s expanding leadership across a broad ecosystem of hardware and software partners who are collaborating to define the future of AI,” AMD’s chair and CEO, Lisa Su, said in the statement.
The announcement follows AMD’s recent acquisition of AI software startup Brium, a deal the company said brought in deep expertise to accelerate the open-source tools that power its AI software stack.
“When you look at the specs, AMD’s MI355X is taped out on TSMC’s N3P process, while Nvidia’s GB300 uses 4NP,” said Neil Shah, vice president for research and partner at Counterpoint Research. “This gives AMD a process node advantage in performance and power efficiency, especially compared to the Nvidia Blackwell GB200/B200.”
However, factors such as memory bandwidth, precision optimization, and software framework support ultimately determine training and inference performance.
“AMD excels at optimizations for higher precisions (FP64, FP32) where it holds an advantage, but Nvidia is better optimized for lower precisions (FP4, FP8),” Shah added. “AMD now matches Nvidia on core capabilities, allowing it to compete head-on, which could lead to improved theoretical TCO.”
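The precision formats Shah mentions trade numerical accuracy for memory and throughput, which is why inference stacks push toward FP8 and FP4 while scientific workloads stay at FP64. A minimal sketch of that trade-off, using NumPy for illustration (FP8 and FP4 are not native NumPy dtypes, so FP64/FP32/FP16 stand in here):

```python
import numpy as np

# Higher precision resolves tiny increments that lower precision rounds away.
hi = np.float64(1.0) + np.float64(1e-10)
print(hi - 1.0)                       # FP64 preserves the 1e-10 increment

lo = np.float32(1.0) + np.float32(1e-10)
print(lo - np.float32(1.0))           # FP32 rounds it away entirely -> 0.0

# Each step down in precision halves storage per element, cutting memory
# bandwidth and footprint -- the core appeal of FP8/FP4 for inference.
for dt in (np.float64, np.float32, np.float16):
    print(dt.__name__, np.dtype(dt).itemsize, "bytes")
```

The same halving continues to FP8 (1 byte) and FP4 (half a byte), which is where accelerator-specific hardware support becomes decisive.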
Still, AMD’s ROCm software stack is playing catch-up to Nvidia’s more established CUDA ecosystem, which remains a key factor in real-world performance and total cost of ownership. Broader adoption will depend on how quickly AMD can narrow this gap, though it continues to make steady progress with each generation.
For enterprises with memory-intensive and cost-sensitive AI workloads, AMD is clearly emerging as a credible alternative, according to Prabhu Ram, VP of the industry research group at CyberMedia Research.
“While Nvidia maintains clear leadership in software ecosystem maturity and broad enterprise adoption, AMD’s momentum — fueled by deeper integration with major hyperscalers and cost-effective hardware — is expanding the AI infrastructure landscape,” Ram said.
Industry adoption
Seven of the ten largest AI developers are now running production workloads on AMD Instinct accelerators, including Meta, OpenAI, Microsoft, and xAI, AMD said in the statement.
At the event, Su was joined on stage by OpenAI CEO Sam Altman and executives from xAI, Meta Platforms, and Oracle, according to Reuters.