Welcome to the Cloud Wars Minute — your daily cloud news and commentary show. Each episode provides insights and perspectives around the “reimagination machine” that is the cloud.
This episode is sponsored by Acceleration Economy’s “Cloud Wars Top 10 Course,” which explains how Bob Evans builds and updates the Cloud Wars Top 10 ranking, as well as how C-suite executives use the list to inform strategic cloud purchase decisions. The course is available today.
Kenny Mullican guest-hosts today’s Cloud Wars Minute, examining the fast-evolving artificial intelligence (AI) chip industry and some of its key players beyond the much-ballyhooed NVIDIA: IBM, Huawei, AMD, and d-Matrix.
00:56 — To tackle AI’s intricate workloads, specialized chips such as graphics processing units (GPUs) and tensor processing units (TPUs) are employed. For some time, NVIDIA has reigned supreme in the realm of AI-specialized chips. Now, several other established chip makers and innovative startups are mustering the courage to compete.
02:45 — First off, we have IBM, which has a new chip that’s being tested for speech recognition but may also prove useful for generative AI. I think this chip will eventually make it possible for AI to run inside our mobile devices. The chip is still in its early days, but it shows the direction IBM is heading, and the outlook is very positive.
03:45 — Next up is Huawei, a Chinese company that is the second-largest smartphone manufacturer in the world. It’s also big in AI chips and cloud computing. Huawei has now produced a GPU chip that is on par with NVIDIA’s A100. The A100 is still the most commonly used AI chip, although not NVIDIA’s fastest.
04:35 — Next is AMD, which has a chip coming out called the MI300X. It hasn’t launched yet, so we don’t have pricing, but it apparently competes with NVIDIA’s flagship H100, and planned shipments of the components that go into the chip are jumping up big time. This could be a huge revenue boost for AMD.
05:02 — Finally, there’s newcomer d-Matrix, which just raised another $110 million in funding with backing from Microsoft. It’s solely focused on AI chips for “the inference portion,” so it’s not competing with NVIDIA for the big compute power used to train large language models (LLMs).
05:50 — As this AI revolution takes hold, newer, smaller, faster, more energy-efficient chips become critical. Competition among the chipmakers will only help sustain that momentum, as well as lower prices for companies and consumers alike.