Just when you thought Nvidia could not surprise the market anymore, it did so again at its GPU Technology Conference (GTC) on Monday. CEO Jensen Huang's keynote unveiling of new products blew the competition away. Again!
Less than a month ago, Nvidia's stunning quarterly financial results (which I wrote about in an earlier blog) proved that the whole AI community is beholden to Nvidia's Hopper H100 chips. Nvidia has such a stranglehold on this niche class of GPU that the US government banned its sale to China.
Never in the history of financial markets had we seen a top-five company in the world announce that it had just tripled its revenue, with net income surging 769% and earnings per share up 486% year on year… Wow!
Analysts are still trying to catch their breath and recalibrate their share price targets after earnings blew past the old forecasts. The share price jumped by more than $100 overnight to reach new highs as Nvidia's market capitalization breached $2 trillion.
And now they have to redo their spreadsheets again after seeing on Monday, for the first time, what Jensen Huang has in store in the product pipeline. The lineup was amazing and will take AI to the next level. While very technical in nature, his presentation indicates that Nvidia is ready to bring the world a never-before-seen level of compute power that will turbocharge new breakthroughs in tech.
Jensen announced three items on Monday that were earth-shattering: (1) the Nvidia GB200 Grace Blackwell superchip, with 208 billion transistors. It is heavily optimized for AI work, boasting 192GB of HBM3e memory and the ability to train trillion-parameter models. Nvidia said the system can deploy a 27-trillion-parameter model, far larger than even the biggest models today, such as GPT-4, which reportedly has 1.7 trillion parameters.
Blackwell is also 2.5 times faster than Hopper at training (feeding AI models data to improve their performance) and five times faster than the Hopper architecture at inference, the process by which a trained model draws conclusions from new data (the distinction is sketched in the short example below).
(2) The DGX SuperPOD, purpose-built for training and running inference on trillion-parameter generative AI models; and (3) Project GR00T, a general-purpose foundation model designed for humanoid robots, enabling them to learn from minimal human demonstrations.
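For readers less familiar with the training/inference split that those speed-up figures refer to, here is a minimal, purely illustrative PyTorch sketch (not Nvidia code; `TinyNet` and the random data are hypothetical): training feeds labelled data through the model and updates its weights, while inference just runs new data through the frozen model to get an answer.

```python
import torch
import torch.nn as nn

# A tiny illustrative model (hypothetical, for explanation only).
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(16, 2)

    def forward(self, x):
        return self.layer(x)

model = TinyNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Training: feed labelled data, compute a loss, and update the weights.
x, y = torch.randn(8, 16), torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()

# Inference: run new data through the trained model to get predictions,
# with gradients switched off since no learning happens here.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 16)).argmax(dim=1)
```

Training is the expensive, repeated part (hence the 2.5x claim matters for building models); inference is what happens every time a user queries a deployed model (hence the 5x claim matters for serving them).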
Before competitors like ARM can even try to catch up with Nvidia's Hopper chip, Jensen has upped the game to another level with the Blackwell chip, which will be multiple times faster than the H100. Coupled with Project GR00T, which will ease the programming and training of robots, the whole development process will be sped up by leaps and bounds.
The new GPU speeds will also help accelerate AI development, as the added compute lets the deep neural networks behind LLMs absorb vast amounts of data far faster.
Analysts now have to evaluate the company in a new light and assume that demand for Blackwell could create a stampede of new orders. That could mean a repeat of another amazing quarterly revenue result. The stock price has just crossed $900, a new high. Forecasts for Nvidia stock are now in the $1,200 to $1,400 range as profits are expected to jump again.
There is plenty of other AI news this week as well. Apple appears to be pairing up with Google Gemini to head off the OpenAI onslaught, and a ChatGPT 5.0 launch is expected to be just around the corner. Multi-modal input/output is now the minimum standard for new entrants to this highly competitive AI war, where only the big boys with billions in funding can play. Everyone wants to be number one, as being second is like admitting defeat.
There is a fear that AI governance, guardrails and boundary-setting are being pushed aside and ignored for the sake of being champion in this high-stakes, I-win-you-lose game. Data is plentiful, and LLMs will soon crunch trillions of parameters with ease with the help of the new Nvidia Blackwell chips.