
Nvidia Q4 FY2025: Revenue and Earnings Beat. Robust Demand and Growing Market, More Reasonable Valuation.

March 3rd, 2025.  On February 26, 2025, Nvidia announced financial results for the fourth quarter and fiscal year 2025, which ended on January 26, 2025.  Note that since Nvidia's fiscal year ends in January, it is named for the new year, even though most of the financial activity it covers transpired during the preceding calendar year.

Recently the market has apparently been concerned with certain key issues.  There is widespread interest in investing in AI compute and in adapting workflows across many industries to employ AI applications that boost productivity.  But there is uncertainty about whether the high levels of demand for the Nvidia GPUs needed to produce AI compute will persist. A related issue is whether the return on investment (ROI) will justify the high levels of capex required to build AI-related datacenters.

We can address these issues with information contained in the commentary of CFO Colette Kress (the full narrative is available on the webcast), from which I quote liberally in the following outline of financial and operating results.

Revenue for Q4 was $39.3 billion, exceeding management's outlook of $37.5B.  This was up 12% sequentially and up 78% year over year (yoy). For the entire FY 2025, revenue was $130.5B, up 114% yoy.  Over 88% of this was for Data Center, which reflects the demand for AI-related compute.

Data center revenue for fiscal 2025 was $115.2 billion, up 142% from the prior year. In Q4, Data Center revenue of $35.6 billion was up 16% sequentially and 93% year-on-year.

The most current GPU in production is the Grace Blackwell, which was introduced at Nvidia GTC (GPU Technology Conference) in March 2024. Last August there had been concern among analysts about delays in Blackwell production; the issue of inadequate manufacturing yield was subsequently addressed.  This is now the fastest product ramp in the company's history, unprecedented in its speed and scale. Blackwell production is in full gear across multiple configurations for varying datacenter architectures.  In Q4, Blackwell sales exceeded management expectations, with Nvidia delivering $11 billion of Blackwell revenue to meet strong demand. Grace Blackwell systems have been installed by Nvidia for its own development efforts and by notable customers including Microsoft, CoreWeave and OpenAI.

Data Center revenue comprises two categories, Compute and Networking. Of Q4 Data Center revenue of $35.580B, Compute accounted for $32.556B (91.5%), with the remainder being Networking. Q4 Compute revenue jumped 18% sequentially and over 116% year-on-year. Customers are racing to scale infrastructure to train the next generation of cutting-edge models and unlock the next level of AI capabilities.

Post-training, model customization, and inference demand orders of magnitude more compute than pretraining.  This is creating an expanding market of application-specific models, which in turn drives continuing demand for AI compute.  The latest Nvidia GPU platforms provide markedly improved power efficiency (performance per watt) and response speed.  The company's performance and pace of innovation are unmatched, driving a 200x reduction in inference costs in just the last 2 years.  Blackwell was architected for reasoning AI inference: it supercharges reasoning AI models with up to 25x higher token throughput and 20x lower cost versus the Hopper H100. Demand for Blackwell for inference is strong, and many of the early GB200 deployments are earmarked for inference, a first for a new architecture. Blackwell addresses the entire AI market, from pretraining and post-training to inference, across cloud, on-premise, and enterprise deployments.

CUDA's programmable architecture accelerates every AI model and over 4,400 applications, protecting large infrastructure investments against obsolescence in a rapidly evolving market.

Therefore, the addressable market for Nvidia's products continues to grow rapidly. The improving performance per cost of its products continues to optimize ROI for customers.  Customers can install new Nvidia hardware and enjoy continuing application software compatibility: the CUDA platform provides backwards compatibility while supporting a wide and growing ecosystem of applications.  It is the key element of Nvidia's switching-cost competitive advantage (customers are effectively captive to the platform).  Competing chip providers cannot lure developers from the market-dominating CUDA platform to their incompatible hardware and software.

Parenthetically, the next GPU generation (Blackwell Ultra) will launch in the second half of this year, to be followed by the Vera Rubin system.  According to CEO Jensen Huang, datacenter GPU system hardware, power delivery, and architecture changed considerably from Hopper to Blackwell, which made the introduction of Blackwell challenging.  The system architecture does not change from Blackwell to Blackwell Ultra, which could allow an easier supply ramp, although presumably not until next year.

Regarding valuation, in the year between Q4 FY 2024 (January 2024) and Q4 FY 2025 (January 2025), total net revenue rose 114.2%, from $60.922B to $130.497B.  Diluted EPS rose 147.06%, from $1.19 to $2.94 per share.  The trailing PE fell from 50.65 to 47.92. Note that the January 2025 PE (at the date of the Q4 FY 2025 earnings release) is less than the mean PE of 53.47 over the past decade, from 2016.
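
For readers who want to check the arithmetic, here is a minimal sketch that recomputes the growth rates and the share prices implied by the quoted trailing PE figures. All inputs are the (rounded, split-adjusted) numbers quoted above, so the implied prices are approximations, not exact market quotes.

    # Sanity check on the valuation figures quoted above (FY2024 vs FY2025).
    rev_fy24, rev_fy25 = 60.922, 130.497   # total net revenue, $ billions
    eps_fy24, eps_fy25 = 1.19, 2.94        # diluted EPS, $ per share (split-adjusted)
    pe_fy24, pe_fy25 = 50.65, 47.92        # trailing PE at the respective dates

    print(f"Revenue growth: {rev_fy25 / rev_fy24 - 1:.1%}")   # ~114.2%
    print(f"EPS growth:     {eps_fy25 / eps_fy24 - 1:.1%}")   # ~147.1%

    # Trailing PE = price / trailing EPS, so the implied (split-adjusted) prices are:
    print(f"Implied price, Jan 2024: ${pe_fy24 * eps_fy24:.2f}")  # ~$60.27
    print(f"Implied price, Jan 2025: ${pe_fy25 * eps_fy25:.2f}")  # ~$140.88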

There was a recent drop in the stock price, apparently in reaction to the release of the DeepSeek R1 LLM. Hangzhou DeepSeek Artificial Intelligence Basic Technology Research Co., Ltd., does business as DeepSeek. Its R1 LLM outperformed some leading LLMs created by US corporations, and was allegedly created for only $6 million, instead of the usually required hundreds of millions of dollars, using Nvidia chips that are not the most advanced available and are therefore permitted by US trade law to be exported to the PRC.  DeepSeek “raised alarms on whether America’s global lead in artificial intelligence is shrinking and called into question Big Tech’s massive spend on building AI models and data centers.”

As we know, until that moment analysts had been agonizing over whether the unprecedented capex required to acquire Nvidia GPU-powered datacenters for the production of AI would allow adequate ROI.  Now the possibility of cheaper production appeared and provoked the reverse crisis: AI would be abundant, but there would be no need for the higher-end, expensive Nvidia GPUs. Perhaps if the Communist totalitarian country had produced an LLM that was somewhat cheaper to create, but not that much cheaper (a sort of happy medium), investors might have taken the announcement with equanimity, since the improved ROI of AI would happily coexist with some easing of capex.

The reality is that there will always be a new, cheaper, or more powerful chip, or some other novel innovation in this rapidly developing industry. Some of these will improve business economics; others will not.  Zooming out to gain some perspective, it is apparent that AI will in the course of time pervade the global economy in countless ways. Demand for the required compute hardware and software shows no sign of abating and will continue for some time.  I doubt that the time has come to lower interest in Nvidia.  Nvidia dominates the market for the GPUs that are indispensable for creating AI tools, and the accompanying CUDA software platform contributes to the switching costs that maintain the ecosystem of application developers using Nvidia GPUs.

In view of the recent price drop exceeding a 20% discount from the 52-week high, accompanied by the reduction in the PE ratio over the last year, I took the opportunity to transfer some portfolio allocation to Nvidia.  I sold a very modest portion of my MSFT holdings, in the range of 2%, at a weighted average of a 6.14% reduction from its 52-week high, and likewise for Visa, which was at its 52-week high.  I bought NVDA stock at a weighted average of a 20.8% reduction from its 52-week high.  Nvidia now makes up approximately 12% of my portfolio.  I did not feel brave enough to buy more, probably because of near-term volatility; I feel safer buying smaller amounts during periods of recurrent price drops. The stock is, after all, not grossly undervalued.

Nvidia: a company in a cyclical industry, with a competitive advantage

June 15, 2024. A quantitative feature that I have used to help identify companies with a durable competitive advantage is relatively consistent growth of revenue over many years.  I reason that a persistent climb in revenue reflects the indispensable nature of a company's products or services, along with a large, persistent demand for them, commonly referred to as a “long runway” of total addressable market.  I had avoided cyclical companies, whose stock prices rise and fall with the demand cycle.  In order to obtain a superior return you need to buy at the bottom of the cycle, but in practice it is difficult to know when the company's fortunes will recover, or when the stock price will stop falling.

Visa and Microsoft are examples of companies with strong competitive advantages and products that are indispensable to large and growing markets.  Since its IPO in 2008, Visa's sole decrease in annual revenue occurred in 2020, when government policies enacted during the Covid-19 pandemic abruptly curtailed international travel, and the fall in purchases by persons travelling internationally slowed cross-border payment volumes.  Impressively, Visa's total annual revenue did not decrease during the Great Recession of 2008-2009.

Microsoft has had minimal decreases in revenue, the exception being 2009 due to the Great Recession.  In that economic crisis, the decline came primarily from Windows software sold to corporations (the Client business segment), while price cuts for gaming software and hardware slowed gaming revenue.  By 2010 both revenue and earnings exceeded their 2008 levels.  Of note, revenue from the Business Division, including the Office suite of productivity software, was essentially flat. This product was more economically resilient because it has powerful switching costs and generates revenue largely through multiyear contracts.

Nvidia was brought to my attention by the rise of artificial intelligence (AI) in public awareness.  Review of its annual reports shows that Nvidia has in the past had prolonged decreases in revenue and earnings related to adverse macroeconomic conditions. For instance, revenue decreased in 2009 relative to 2008 and did not recover until 2013. Diluted earnings per share decreased as well, showing a loss in 2009 and 2010, and did not recover to the 2008 level until 2017.  In earlier periods of its history, rising costs caused earnings to show multiple years of losses or declines even without a loss of gross revenue.

Revenue and earnings fell during the Great Recession because Nvidia at that time derived most of its revenue from products for the PC market. In FY 2009, desktop GPU product sales decreased 29% year over year.  Moreover, a cyclical decline can have prolonged effects. PC makers build inventory during periods of anticipated growth; they are left with excess inventory if that growth fails to materialize, and can then delay additional purchases of the GPUs used in PC manufacturing until end-customer demand resumes.

The datacenter GPU business segment, which now contributes the lion’s share of Nvidia revenue, did not yet exist in that era.

And yet, during the Great Recession of 2008-2009, the research engineers at Nvidia were laying the foundation for the company's history-making advances and competitive advantage of the next decade and more.

During 2008, as the global financial crisis accelerated, Nvidia “announced a workforce reduction to allow for continued investment in strategic growth areas… we eliminated … about 6.5% of our global workforce. … expenses associated with the workforce reduction, totaled $8.0 million. We anticipate that the expected decrease in operating expenses from this action will be offset by continued investment in strategic growth areas. ” (Nvidia 10-K FY 2009). (Nvidia's fiscal year ends in January of that year, so it reports on business activity occurring chiefly in the previous calendar year.)

Indeed, R&D expenditures continued to climb during this period: in fiscal years 2008, 2009, and 2010, R&D expenses made up 17%, 25%, and 27.3% of revenue, respectively (Nvidia 10-K FY 2010, p. 36).

Nvidia GPU chips were originally designed for use in gaming and graphics applications, and by the late 1990s Nvidia had come to dominate that market. The company IPO'd in 1999. In 2006 Nvidia introduced CUDA, a software platform that enables software to employ the parallel processing and accelerated computing of the GPU for diverse applications beyond graphics.  It supports a range of languages and a comprehensive armamentarium of tools, allowing it to be used for a wide range of applications. CUDA builds competitive advantage in several ways. The CUDA software platform enables engineers to use accelerated computing driven by Nvidia GPU chips for a wide variety of other useful and novel applications, which expands the addressable market of use cases for the GPU. Nvidia has fostered an ecosystem of software centered around diverse applications of CUDA, collaborating with myriad companies in diverse industries, from healthcare to pharmacy to automotive, to nurture vertical stacks of software supported by the CUDA platform. The approximately 5 million developers in that ecosystem create a network-effect competitive advantage over other GPU manufacturers such as AMD. CUDA is compatible only with Nvidia chips. While it is optimized for upcoming, ever more potent chips, the software ensures backwards compatibility, so developers and end users can often update application software while keeping their current hardware.  There is a strong switching-cost competitive advantage versus other chip makers.
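
To make the ecosystem point concrete, here is a minimal, illustrative sketch of what building on CUDA looks like to an application developer. It uses CuPy, one of the many open-source libraries layered on the CUDA platform (chosen here purely as an example); the same few lines run unchanged across successive generations of Nvidia GPUs, which is the backwards-compatibility and switching-cost argument in practice. Running it requires an Nvidia GPU, the CUDA toolkit, and the cupy package.

    # Illustrative sketch: GPU-accelerated linear algebra via the CUDA ecosystem.
    import cupy as cp  # NumPy-like library whose operations dispatch to CUDA kernels

    x = cp.ones((4096, 4096), dtype=cp.float32)  # array allocated in GPU memory
    y = x @ x.T                                  # matrix multiply executes on the GPU
    print(float(y[0, 0]))                        # copy one result back to the CPU: 4096.0

The developer writes no GPU-specific code at all; the CUDA platform underneath handles scheduling the work on whatever Nvidia hardware is installed.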

Nvidia is one of the rare companies that has consistently done the massive, risky work of anticipating emerging market segments, persistently adapting its competitive advantage to continue to dominate the market as it evolves.  During these decades, beginning well before the advent of modern AI in 2012 and continuing today, the cultivation of the CUDA-centered ecosystem, along with consistent hardware innovation and strategic acquisitions, enabled Nvidia to come to dominate the market for the datacenter GPUs and related high-performance computing equipment required for AI.

Some general and striking realities about Nvidia's current competitive position are fairly clear.  There is high demand for AI and accelerated computing capability, and it is not a transient fad; the use cases are becoming permanent fixtures in the evolving economy. For example, AI is raising the productivity of knowledge workers and widening access to computing applications by making them easier for non-specialists to use.  Accelerated computing makes attainable tasks that were previously too large to take on. For example, it enables health systems to harness their unstructured clinical or administrative data to yield insights regarding care provision or costs, and virtual digital-twin factories can be designed and tested before the actual plant is built, avoiding the costs of trial and error.

It is clear that the accelerated computing that makes AI possible largely requires Nvidia products: specifically, the datacenter chips needed for accelerated computing, and the technology and software tools the datacenter needs to efficiently produce (train) and work with (run inference on) AI models.

Based on company communications to investors, demand is predicted to persistently outstrip supply.  In view of innumerable essential use cases, the runway and total addressable market are massive.  Nvidia commands at least 80% market share. Most likely, the company's revenue and income will climb for some time.

With cyclical companies, there is the concern about avoiding a stock purchase at the peak of the cycle. Nvidia stock fell in the macroeconomic perturbations of 2022, amid predictions of recession and rising interest rates.  The PE in the quarter ending October 30, 2022 was about 60; it was 50 at the end of the quarter ending in April 2024.  Meanwhile, revenue had climbed more than 4x and earnings had grown roughly 20x.
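
A hedged back-of-envelope reading of that PE compression, using only the approximate figures above: since price equals PE times earnings per share, roughly 20x earnings growth combined with a trailing PE slipping from about 60 to about 50 implies the share price rose on the order of 16x to 17x over the same span. In other words, the stock got cheaper on a PE basis even as it appreciated sharply.

    # Back-of-envelope: price = PE * EPS, using the rough figures quoted above.
    eps_growth = 20            # earnings grew roughly 20x between the two quarters
    pe_then, pe_now = 60, 50   # approximate trailing PE at each date
    price_multiple = eps_growth * pe_now / pe_then
    print(f"Implied price multiple: {price_multiple:.1f}x")  # ~16.7x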

Therefore it seemed, especially in view of the expectation of continued revenue growth, that the stock was not overvalued.  I decided to reallocate some funds from UnitedHealth Group (UNH) to Nvidia, which I did on February 29th at a purchase price of $793.16, leaving 5.4% of my portfolio in Nvidia at the close. The stock has since split 10-for-1.  At some point, demand for Nvidia data center products may decline, but that is not happening soon.  At some point, Nvidia may become involved in a financial mania related to AI; that point has not yet arrived. Should it arrive in the future, I might reallocate some funds back to a non-cyclical company such as UnitedHealth Group.