Nvidia CEO Jensen Huang doubled down on the company's explosive growth at its annual GPU Technology Conference (GTC) 2026, projecting at least $1 trillion in cumulative revenue through 2027 from AI chips and infrastructure, double his earlier estimate of $500 billion.
The forecast signals Huang's confidence that Nvidia will remain the biggest company in the AI chip market even as competition grows, and as investors question whether its strategy of plowing profits back into the AI ecosystem is paying off.
At a hockey arena with a capacity of more than 18,000, Huang laid out how the top AI chipmaker plans to adapt to a rapidly changing AI landscape during its four-day developer conference.
Huang did not offer more details on his $1 trillion forecast.
What will drive the revenue projection?
Surging compute demand

Huang said in his keynote that global computing needs have risen "a million times in two years", fuelling demand for Nvidia's Blackwell and Rubin chips plus networking gear. He noted that every company, from big names such as OpenAI and Anthropic to young startups, feels it could grow its revenue and its AI "if they could just get more capacity."
Shift to inference
"The inference inflection has arrived," Huang said. A key driver, the company said, is inference, in which trained models handle real-time queries and tasks at scale, as the industry's focus shifts away from training. Huang called it a multi-trillion-dollar opportunity, with Nvidia's GPUs offering superior performance and cost efficiency.
To help navigate the shift to inference, Nvidia has struck a multi-billion-dollar licensing deal with inference specialist Groq, including hiring the startup's top engineers.
Sector-wide expansion
Huang also demonstrated Nvidia's latest GPUs and platforms for building AI into nearly everything, from robots and apps to data centres orbiting the planet.
Nvidia is seemingly aiming its AI expertise at all sectors, from automobiles to health care. "Every single enterprise company, every single software company in the world needs an AI agent strategy," Huang said. "This is going to become a multi-trillion-dollar industry, offering not just tools for people to use, but agents that are specialized," he added.
Deals struck
Sovereign AI tie-ups were announced, including with Palantir for secure, localised deployment in regulated sectors and Indian AI startup Sarvam AI as a key partner for localised model development. Nebius Group has also committed to multi-billion-dollar gigawatt-scale AI factories.
Amazon Web Services (AWS) said it will integrate Blackwell and Rubin GPUs, RTX PRO workstations, and Groq LPUs across its compute lineup. Microsoft Azure said it will add Nvidia's Cosmos world models and its open Alpamayo models for robotics and physical AI, accessible via GitHub and Foundry.
Automotive/robotics deals included BYD, Hyundai, Nissan, and Geely. Uber also announced a partnership with Nvidia for ride-hailing AI integration.
Huang said Samsung was producing Nvidia's new AI chips, sending the South Korean company's shares higher.