Artificial intelligence: Qualcomm launches the AI200 and AI250 chips to challenge Nvidia

Qualcomm has announced the launch of its new accelerator chips for artificial intelligence, marking its official entry into the data center semiconductor sector, which has so far been dominated by Nvidia. The news pushed Qualcomm shares up 11%. The company, historically focused on processors for wireless connectivity and mobile devices, is now targeting technology's most dynamic segment: hardware infrastructure for artificial intelligence. The new chips, called AI200 and AI250, will hit the market in 2026 and 2027 respectively and will be available in complete liquid-cooled rack server configurations, comparable to the high-performance systems already offered by Nvidia and AMD.

Qualcomm’s strategy

According to Durga Malladi, Qualcomm’s general manager for data centers and edge computing, the company first wanted to consolidate its leadership in areas such as mobile telephony, and then “level up” towards more complex computing infrastructures. The new chips are based on Hexagon NPU technology, already used in the company’s smartphones, but now adapted to provide rack-scale computing power. This evolution reflects the growing global demand for AI infrastructure: according to McKinsey estimates, around $6.7 trillion will be invested in data centers by 2030, the majority of which will be dedicated to systems based on artificial intelligence chips.

Efficiency, flexibility and inference: Qualcomm’s strengths

Unlike Nvidia, which dominates the AI model training segment, Qualcomm focuses primarily on inference, that is, running models that have already been trained. The new chips are designed to deliver high performance with low power consumption and a lower total cost of ownership (TCO). An AI200- or AI250-based rack draws about 160 kW, which is competitive with Nvidia's GPU systems. The solutions offer up to 768 GB of memory per card, surpassing major competitors, and the AI250 introduces an innovative memory architecture capable of improving effective bandwidth more than tenfold while reducing power consumption.
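To put the 160 kW rack figure in perspective, here is a back-of-envelope sketch of one rack's annual energy consumption and cost. The power draw comes from the article; the electricity price is an assumed illustrative value, not a figure from Qualcomm:

```python
# Rough annual energy cost for a single AI rack.
# 160 kW is the draw cited for an AI200/AI250 rack;
# the $0.08/kWh price is an assumed industrial rate (illustrative only).
RACK_POWER_KW = 160
HOURS_PER_YEAR = 24 * 365          # 8,760 hours, ignoring leap years
PRICE_PER_KWH = 0.08               # assumed rate in USD

annual_kwh = RACK_POWER_KW * HOURS_PER_YEAR
annual_cost = annual_kwh * PRICE_PER_KWH

print(f"{annual_kwh:,.0f} kWh/year -> ${annual_cost:,.0f}/year at continuous load")
```

At these assumptions a single rack consumes about 1.4 GWh per year, which illustrates why per-rack efficiency and total cost of ownership, rather than raw throughput alone, are central to Qualcomm's pitch.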

An open software ecosystem

In addition to the hardware, Qualcomm offers a complete software ecosystem, optimized for AI inference and compatible with the main machine learning and generative artificial intelligence frameworks. The Qualcomm AI Inference suite enables direct deployment of Hugging Face models with a single click, simplifying adoption for businesses and developers. With an annual release plan for new chip generations, Qualcomm aims to establish itself as a third force in the AI hardware race, alongside Nvidia and AMD. If the promised efficiency and flexibility are delivered, the mobile semiconductor giant could soon become a major player in the next phase of the artificial intelligence revolution.