Nvidia Upgrades Its Flagship Chip to Handle Larger AI Systems

Nvidia has introduced the H200, an upgraded AI chip set to launch next year; Amazon, Google, and Oracle will be among the first cloud providers to offer it.

The H200 improves on Nvidia's H100 chip primarily through an increase in high-bandwidth memory, which determines how quickly data can move through the processor.

The added high-bandwidth memory and faster connection to the chip's processing elements will mean quicker responses from AI services such as OpenAI's ChatGPT.

The H200 carries 141 gigabytes of high-bandwidth memory, up from 80 gigabytes in the H100, though Nvidia did not disclose its memory suppliers.

Major cloud service providers, along with specialty AI cloud providers CoreWeave, Lambda, and Vultr, will offer access to the H200 chips.