1. NVIDIA GTC 2026 Unveils the Vera Rubin Full‑Stack Platform
At GTC 2026, NVIDIA officially launched the Vera Rubin AI computing platform. Rather than a single chip, it is a complete AI supercomputer system comprising seven chips (including the Vera CPU, Rubin GPU, and Groq 3 LPU) and five rack configurations. The platform is 100% liquid‑cooled and can be deployed in as little as two hours.
2. NVIDIA Raises Revenue Guidance to $1 Trillion, Signaling Strong Compute Demand
NVIDIA CEO Jensen Huang provided an exceptionally bullish revenue forecast at GTC, projecting that cumulative revenue from the Blackwell and Rubin series chips in computing and networking will exceed $1 trillion between 2025 and 2027—double previous estimates, reflecting robust demand for AI compute.
3. Groq LPU Integrated into NVIDIA Ecosystem to Accelerate Inference
NVIDIA has deeply integrated the Groq technology it acquired last year, introducing the Groq 3 LPU (Language Processing Unit) into the Vera Rubin platform. Built around SRAM‑based on‑chip memory, the LPU is designed for low‑latency decoding and operates alongside GPUs in a heterogeneous configuration, cutting token costs for trillion‑parameter models to one‑tenth of those on Blackwell platforms.
4. Next‑Generation “Feynman” Architecture and Space Computing Plans Revealed
NVIDIA also previewed Rubin’s successor, codenamed “Feynman,” which will utilize TSMC’s 1.6nm process and debut on‑chip optical interconnects. Additionally, the company unveiled the “Space‑1” Vera Rubin module, designed for low‑Earth orbit, extending AI compute to edge computing in space.
5. AI Infrastructure Shifts Toward “AI Factories” with Upgraded Liquid Cooling and Power
GTC underscored the transition of data centers into “AI factories.” To handle rising GPU power consumption, the Vera Rubin NVL72 rack’s power delivery has been increased to 440 kW—over 60% higher than the previous generation. Jensen Huang predicted that liquid cooling penetration in traditional data centers will exceed 50%, with AI data centers eventually reaching 100% liquid cooling.
6. Domestic AI Supply Chain Shows Strong Growth; Multiple Companies Report Strong Earnings
Driven by surging compute demand, Chinese listed companies in the AI supply chain delivered impressive 2025 results. Foxconn Industrial Internet saw net profit rise 51.99% year‑on‑year; Innolight posted a 108.81% increase; and Eoptolink expects net‑profit growth of over 230%. These figures highlight the critical role of Chinese optical modules and AI servers in the global compute supply chain.
7. Domestic GPU Advances: Lishang Technology Showcases Proprietary Products
At the AWE 2026 exhibition held on March 12, Chinese GPU developer Lishang Technology demonstrated its self‑developed “TrueGPU” architecture product line and shared its go‑to‑market plans. The showcase reflects continued productization momentum among domestic GPU players.
8. South Korean Government Launches Large‑Scale GPU Distribution Program to Support Domestic AI
South Korea’s Ministry of Science and ICT announced it has begun distributing GPUs to industry, academia, and research institutions as part of efforts to expand domestic AI capabilities. The program plans to allocate approximately 10,000 GPUs, with the first batch already supporting 159 projects. It aims to provide computing resources for small and medium‑sized enterprises and startups while preventing compute capacity from concentrating in a few large players.
9. Africa Debuts Its First NVIDIA RTX Pro Server in South Africa
Africa’s compute infrastructure saw a major milestone with HOSTAFRICA launching the first locally hosted NVIDIA RTX Pro GPU server in South Africa. This addresses previous challenges faced by African teams—high latency, data sovereignty issues, and foreign exchange constraints—by providing localized compute support for AI research.
10. Brokerages Highlight Investment Opportunities in CPO, Memory, and PCBs from GTC
Multiple securities firms noted in research reports that the technology roadmaps laid out at GTC—such as co‑packaged optics (CPO) switches, high‑bandwidth memory (HBM), and advanced PCBs—will drive upgrades across the supply chain. In particular, the Groq LPU’s heavy use of SRAM and the squeeze on conventional DRAM capacity from HBM production are expected to benefit related semiconductor and substrate manufacturers.