
Microsoft Q4 FY 2025 earnings: Azure growth accelerates as legacy business applications transition to Agentic AI

August 16, 2025.

As recounted by CEO Satya Nadella, revenue from Azure, Microsoft's public cloud service, surpassed $75B, up 34% year over year (yoy), with Azure taking share every quarter. Azure and other cloud services revenue, as reported in the earnings call slides, grew 39% yoy. Nadella said Microsoft continues to lead the AI infrastructure wave, now with over 400 datacenters in over 70 regions, more than any other provider.

Microsoft stood up over 2 GW of new datacenter capacity in the last 12 months. For perspective, 1 GW would supply roughly 800,000 homes in the US, so 2 GW is on the order of 1.6 million homes' worth of power. This illustrates how much new power supply the AI buildout requires.

Data:

Knowledge workers require access to relevant data produced by their enterprise in order to put it to work using applications. In enterprises whose primary function is not data science, access to data must be provided while enabling unified management and a relatively low learning curve. Microsoft Fabric is becoming the complete data and analytics platform for the AI era. Fabric enables access to all of an enterprise's data in one application using OneLake, and integrates with the Microsoft ecosystem. Fabric revenue was up 55% yoy, now with over 25,000 customers.

Microsoft also supports third-party data analytics platforms on Azure. Azure Databricks and Snowflake on Azure are also growing. Cosmos DB and PostgreSQL both play crucial roles in the operation of OpenAI's ChatGPT applications.

In the area of data, then, Microsoft continues to host and evolve crucial third-party data management tools on Azure. While continuing its close partnership with Databricks, it is growing Fabric as an accessible and versatile tool that adds value to the existing Microsoft software ecosystem, thereby nurturing its competitive advantage of switching costs. Customers can find the resources they need to evolve innovative applications, along with the new capabilities they seek, without leaving the ecosystem.

Agentic AI

This year Microsoft launched Azure AI Foundry to help customers design, customize, and manage AI applications and agents, at scale.

Customers increasingly want to use multiple AI models to meet their specific performance, cost, and use case requirements. Foundry enables them to provision inferencing throughput once and apply it across more models than any other hyperscale cloud provider, including models from OpenAI, DeepSeek, Meta, xAI's Grok, and very soon, Black Forest Labs and Mistral AI.

Azure AI Foundry includes the Foundry Agent Service, now being used by 14,000 customers to build agents that automate complex tasks.

As a specific measure of usage, the number of tokens served by Foundry APIs exceeded 500 trillion this year, up over 7X.

The family of Copilot apps surpassed 100 million monthly active users across commercial and consumer segments.

Hundreds of partners like Adobe, SAP, ServiceNow, and Workday have built their own third-party agents that integrate with Copilot and Teams.  Also, customers use Copilot Studio to extend Microsoft 365 Copilot and build their own agents.  Customers created 3 million such agents using SharePoint and Copilot Studio this year.

GitHub Copilot users have reached 20 million.  GitHub Copilot Enterprise customers increased 75% quarter over quarter. 

In healthcare, Dragon Copilot usage is surging.  Customers used ambient AI solutions to document over 13 million physician-patient encounters this quarter, up nearly 7X year-over-year.  In a typical use case the copilot creates a progress note of the patient encounter based on the dialogue between the physician and the patient.  The physician is relieved of the administrative work of writing the note after the patient has left.

CEO Nadella stated that revenue from Azure AI services was generally in line with expectations. And, “while we brought additional datacenter capacity online this quarter, demand remains higher than supply.”

As legacy applications transition to Agentic AI, Microsoft is adding productivity and capability to its ecosystem while enabling entirely new capabilities, such as relieving doctors of admin work! The success of this strategy is seen in the growth in usage of the various applications, from Dragon Copilot to Fabric for the enterprise, to Agentic AI for large-scale customer service organizations. But this success rests on more than usage growth. It translates into cash flow, the key indicator of a truly successful business.

The reason for this ability to translate sales into cash flow is one of Microsoft's central competitive advantages: scale. Scale has two components. First is the availability of captive customers. When a new capability, for example Fabric, is launched, sales are efficiently executed across a massive number of existing ecosystem customers, and software vendor partners multiply this reach. That is, the new source of revenue is efficiently scaled across a large number of receptive customers. Scale lets the company afford the cost of developing the product, a cost which includes, for example, employing engineers and building or modifying datacenter compute.

Second, the cost of incremental new business is quite low for Microsoft's software business. There is minimal additional cost to selling the 2nd or 3rd or 100th instance of the software. The costs of developing the software products are primarily fixed. The costs of selling incremental additional volumes, which might include setting up additional technical support services (which can serve customers globally from one center), are relatively low. These are the variable costs.

Compare this with a company like Starbucks. Each new store that Starbucks sets up requires investment in real estate, worker recruitment, training, and logistics for the coffee and food products, and that store can sell only in its own geographic location. We seek to invest in companies with a scale competitive advantage and fixed costs outweighing variable costs, as the sketch below illustrates.
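To make the fixed-versus-variable distinction concrete, here is a minimal sketch comparing how margin evolves with volume for a high-fixed-cost software-like business versus a high-variable-cost store-like business. The numbers are invented and purely illustrative, not Microsoft's or Starbucks' actual figures.

```python
# Illustrative only: hypothetical cost structures, not actual Microsoft or Starbucks figures.

def margin(units, price, fixed_cost, variable_cost_per_unit):
    """Profit margin once the fixed (development) cost is spread over all units sold."""
    revenue = units * price
    total_cost = fixed_cost + units * variable_cost_per_unit
    return (revenue - total_cost) / revenue

for units in (1_000, 10_000, 100_000):
    software = margin(units, price=100, fixed_cost=1_000_000, variable_cost_per_unit=2)
    store = margin(units, price=100, fixed_cost=50_000, variable_cost_per_unit=70)
    print(f"{units:>7,} units   software-like margin {software:7.1%}   store-like margin {store:7.1%}")
```

With the software-like structure, margin keeps expanding as volume grows because the fixed development cost is spread ever more thinly; the store-like structure plateaus near 30% because each additional unit carries most of its own cost.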

This competitive advantage of scale is what drives high Gross Margin. For a company where Selling, General and Administrative (SG&A) costs are well controlled and debt is conservative, generating low Interest Expense, Operating Cash Flow continues to grow, supplying the Capex which enables the company to readily exploit emerging markets.

This cash production is impressively made manifest when we consider Microsoft's ability to ramp up its capital expenditure to eye-popping amounts in recent years while maintaining large amounts of free cash flow.

The table above shows operating cash flow, capex (property, plant and equipment expenditure) and free cash flow for Microsoft from FY 2020 through 2025. As shown, Microsoft was able to more than quadruple capex, from $15.4B in 2020 to $64.55B in 2025, while maintaining free cash flow. While free cash flow decreased slightly from $74B in 2024 to $71.6B in 2025, it is still ample to supply the company's needs. Out of free cash flow of $71.6B in 2025, Microsoft paid approximately $24B in dividends, repurchased $18.4B in common stock, and had $2.4B in debt interest expense.
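As a quick consistency check, using only the FY 2025 figures quoted above and the usual definition (free cash flow equals operating cash flow minus capex), the implied operating cash flow and the cash remaining after the listed uses are roughly:

```python
# Rough reconciliation from the FY 2025 figures quoted above (in $B).
fcf_2025 = 71.6                      # free cash flow
capex_2025 = 64.55                   # property, plant and equipment expenditure
ocf_2025 = fcf_2025 + capex_2025     # implied operating cash flow
print(f"Implied FY 2025 operating cash flow: ${ocf_2025:.2f}B")   # ~ $136.2B

# Uses of free cash flow mentioned in the text (in $B).
dividends, buybacks, interest = 24.0, 18.4, 2.4
print(f"Free cash flow remaining after those uses: "
      f"${fcf_2025 - dividends - buybacks - interest:.1f}B")       # ~ $26.8B
```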

In comparison, a startup company supplying an analogous product bears a cost of sales and marketing to acquire each new customer, and that is after building or paying for compute capacity. Furthermore, it must sell at a lower price in order to attract customers who would otherwise use the more complete suite available from Microsoft. This special effort to take customers from the large-scale provider is necessary because the startup lacks the other, more fundamental competitive advantage that Microsoft has: switching costs. Microsoft's customers are captive, though not unwillingly so. Switching costs are sustained by continuous research and development that adds value to the product ecosystem; ecosystem customers find it more economical to remain in the system. This never-ending evolution of the product sustains the customer base, and is essential to provide the basis for scale.

Outlook for FY 2026:

CFO Amy Hood gave the outlook for the coming quarter and fiscal year.

“Capital expenditure growth, as we shared last quarter, will moderate compared to FY25 with a greater mix of short-lived assets. Due to the timing of delivery of additional capacity in H1 (first half of year), including large finance lease sites, we expect growth rates in H1 will be higher than in H2. Revenue will continue to be driven by Azure.

In Azure, we expect Q1 revenue growth of approximately 37% in constant currency driven by strong demand for our portfolio of services on a significant base. Even as we continue bringing more datacenter capacity online, we currently expect to remain capacity constrained through the first half of our fiscal year”.

In summary, as aggressive but prudent expansion of Azure infrastructure and software tools enables the expansion of data analytics platforms and agentic AI, Microsoft continues to advance as an indispensable host and enabler of the AI transformation, while further enriching its ecosystem.

Nvidia Q4 2025: Revenue and Earnings Beat. Robust Demand and Growing Market, more reasonable Valuation.

March 3rd, 2025. On February 26, 2025, Nvidia announced financial results for the 4th Quarter and Fiscal Year 2025, which ended on January 26. Note that since Nvidia's fiscal year ends in January, it is named for the new calendar year, even though most of the financial activity it covers transpired during the preceding calendar year.

Recently the market has apparently been concerned with certain key issues. There is widespread interest in investing in AI compute, and in adapting workflows in many industries to employ AI applications in order to boost productivity. But there is uncertainty regarding the persistence of high levels of demand for the Nvidia GPUs needed to produce AI compute. Another issue is whether the high levels of capex required to build AI-related datacenters will earn an adequate return on investment (ROI).

We can address these issues with information contained in the commentary of CFO Colette Kress, with full narrative available on the webcast, from which I quote liberally in the following outline of financial and operating results.

Revenue for Q4 was $39.3 billion, exceeding management's outlook of $37.5B. This was up 12% sequentially and up 78% year over year (yoy). For the entire FY 2025, revenue was $130.5B, up 114% yoy. Over 88% of this was for Data Center, which reflects the demand for AI-related compute.

Data center revenue for fiscal 2025 was $115.2 billion, up 142% from the prior year. In Q4, Data Center revenue of $35.6 billion was up 16% sequentially and 93% year-on-year.

The most current GPU architecture in production is Blackwell, paired with the Grace CPU in the Grace Blackwell (GB200) systems introduced at Nvidia GTC (GPU Technology Conference) in March 2024. Last August there had been concern among analysts about delays in Blackwell production; the issue of inadequate manufacturing yield was subsequently addressed. Currently this is the fastest product ramp in the company's history, unprecedented in its speed and scale. Blackwell production is in full gear across multiple configurations for varying datacenter architectures. In Q4, Blackwell sales exceeded management expectations: Nvidia delivered $11 billion of Blackwell revenue in Q4 to meet strong demand. Grace Blackwell systems have been installed by Nvidia for its own development efforts, as well as by other notable customers including Microsoft, CoreWeave and OpenAI.

Datacenter revenue includes the categories of Compute and Networking. Of Q4 Data Center revenue of $35.580B, Compute accounted for $32.556B (91.5%), with the remainder being Networking. Q4 Compute revenue jumped 18% sequentially and over 116% year-on-year. Customers are racing to scale infrastructure to train the next generation of cutting-edge models and unlock the next level of AI capabilities.

Post-training, model customization and inference together demand orders of magnitude more compute than pretraining a model. This is creating an expanding market of application-specific models, which drives continuing demand for AI compute. The latest Nvidia GPU platforms provide markedly improved power efficiency (performance per watt) and response speed. The company's performance and pace of innovation are unmatched, driving a 200x reduction in inference costs in just the last 2 years. Blackwell was architected for reasoning AI inference: it supercharges reasoning AI models with up to 25x higher token throughput and 20x lower cost versus the Hopper H100. Demand for Blackwell for inference is strong; many of the early GB200 deployments are earmarked for inference, a first for a new architecture. Blackwell addresses the entire AI market, from pretraining and post-training to inference, across cloud, on-premise and enterprise deployments.

CUDA's programmable architecture accelerates every AI model and over 4,400 applications, protecting large infrastructure investments against obsolescence in a rapidly evolving market.

Therefore, the addressable market for Nvidia's products continues to grow rapidly. The improving performance per dollar of its products continues to optimize ROI for customers, who can install new Nvidia hardware and enjoy continuing application software compatibility. The CUDA platform provides backwards compatibility while supporting a wide and growing ecosystem of applications. It is the key element of Nvidia's switching-cost competitive advantage (customers are captive to the platform). Competing chip providers cannot lure developers from the market-dominating CUDA platform to their incompatible hardware and software.

Parenthetically, the next generation of GPU (Blackwell Ultra) will launch in H2 of this year, to be followed by the Vera Rubin system. According to CEO Jensen Huang, datacenter GPU system hardware, power delivery and architecture changed considerably from Hopper to Blackwell, which made the introduction of Blackwell challenging. The system architecture does not change from Blackwell to Blackwell Ultra, which could make future supply ramps easier, although presumably not until next year.

Regarding valuation, in the year between Q4 FY 2024 (January 2024) and Q4 FY 2025 (January 2025), total net revenue rose 114.2% from $60.922B to $130.497B. Diluted EPS rose 147.06%, from $1.19 to $2.94 per share. Trailing PE fell from 50.65 to 47.92. Note that the January 2025 PE (at the date of the earnings release for Q4 2025) is less than the mean PE of 53.47 over the past decade, from 2016.
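Since trailing PE is simply price divided by trailing EPS, the figures above also imply how much the share price moved over the year. A rough back-of-the-envelope check, using only the numbers quoted (a sketch, not a precise price series):

```python
# Rough check from the figures quoted above: price ≈ trailing PE × trailing diluted EPS.
pe_2024, eps_2024 = 50.65, 1.19
pe_2025, eps_2025 = 47.92, 2.94

implied_price_2024 = pe_2024 * eps_2024   # ~ $60, approximate and split-adjusted
implied_price_2025 = pe_2025 * eps_2025   # ~ $141

gain = implied_price_2025 / implied_price_2024 - 1
print(f"Implied share price gain over the year: {gain:.0%}")   # ~ +134%
```

In other words, the stock more than doubled over the year, yet the multiple compressed slightly because earnings grew even faster than the price.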

There was a recent drop in the stock price, apparently in reaction to the revelation of the DeepSeek R1 LLM. Hangzhou DeepSeek Artificial Intelligence Basic Technology Research Co., Ltd. does business as DeepSeek. Its R1 LLM outperformed some leading LLMs created by US corporations, and was allegedly created for only $6 million, instead of the usually required hundreds of millions of dollars, using Nvidia chips which are not the most advanced available, and which are therefore permitted by US trade law to be exported to the PRC. DeepSeek “raised alarms on whether America’s global lead in artificial intelligence is shrinking and called into question Big Tech’s massive spend on building AI models and data centers.”

As we know, until this moment analysts had been anguishing over whether the unprecedented capex required to acquire Nvidia GPU-powered datacenters for the production of AI would allow an adequate ROI. Now the possibility of cheaper production appeared and provoked the reverse crisis: AI would be abundant, but there would be no need for the higher-end, expensive Nvidia GPUs. Perhaps if the Communist totalitarian country had produced an LLM which was somewhat cheaper to produce, but not that much cheaper, a sort of happy medium, investors might have taken the announcement with equanimity, since improved ROI on AI would happily coexist with some easing of capex.

The reality is, there will always be a new, cheaper, or more powerful chip, or some other novel innovation in this rapidly developing industry. Some of these will improve business economics; others will not. Zooming out to gain some perspective, it is apparent that AI will, in the course of time, pervade the global economy in countless ways. Demand for the required compute hardware and software shows no sign of abating and will continue for some time. I doubt that the time has come to lower interest in Nvidia. Nvidia dominates the market for the GPUs which are indispensable for creating AI tools, and the accompanying CUDA software platform contributes to the switching costs which maintain the ecosystem of application developers using Nvidia GPUs.

In view of the recent price drop, exceeding a 20% discount from the 52-week high, accompanied by the reduction in PE ratio over the last year, I took the opportunity to transfer some portfolio allocation to Nvidia. I sold a very modest portion of MSFT holdings, in the range of 2%, at a weighted average of a 6.14% reduction from its 52-week high, and likewise for Visa, which was at its 52-week high. I bought NVDA stock at a weighted average of a 20.8% reduction from its 52-week high. Nvidia now makes up approximately 12% of my portfolio. I did not feel brave enough to buy more, probably because of near-term volatility; I feel safer buying smaller amounts during recurrent price drops. The stock is, after all, not grossly undervalued.

Q3 2024 Earnings: Adobe integrates AI into its Apps to strengthen its competitive advantage. Analyst Disappointment with Guidance for Q4 as a Sideshow.

October 26, 2024: In its 3rd quarter of FY 2024, ending August 30th, Adobe beat earnings and revenue expectations, both those of analysts and its own. For no rational reason, the stock price fell as much as 10% after the earnings call. The media attributed this to analysts' disappointment with lower-than-expected company guidance for fourth quarter FY 2024 revenue of $5.50B to $5.55B.

Did it make sense to sell the stock based on this guidance? Should we worry that the company will miss its standing full FY 2024 guidance? At the end of FY 2023, Adobe gave a FY 2024 revenue estimate of $21.3 to $21.5 billion. This was raised at the end of Q2 2024: the updated FY 2024 guidance is $21.4B to $21.5B (midpoint $21.45B).

Total revenue in the first three quarters of FY 2024 was $15.899B. So if Adobe meets the quarterly guidance for Q4 2024 of $5.5B to $5.55B, FY revenue would land between $21.399B and $21.449B, very likely enough to meet the FY 2024 guidance of $21.4B to $21.5B, especially when we bear in mind that over the 10 years from 2014 to 2023, Adobe beat its annual revenue guidance 7 of 10 times.
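The arithmetic behind that range is simply the reported nine-month revenue plus each end of the Q4 guidance, all figures as quoted above:

```python
# Adobe FY 2024: reported nine-month revenue plus the Q4 guidance range (in $B).
nine_month_revenue = 15.899
q4_low, q4_high = 5.50, 5.55

fy_low = nine_month_revenue + q4_low     # 21.399
fy_high = nine_month_revenue + q4_high   # 21.449

print(f"Implied FY 2024 revenue: ${fy_low:.3f}B to ${fy_high:.3f}B")
# The top of that implied range sits inside the standing $21.4B-$21.5B guidance.
```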

Zooming out to the big picture of Adobe's business growth over time, we see quarterly yoy revenue growth consistently at 10% or better over the last 10 years. Annual revenue growth slowed to roughly 10% in 2022 and 2023, having been over 20% from 2016 to 2019. Annual revenue quadrupled from $4.796 billion in 2015 to $19.409 billion in 2023.

The years from 2012 to 2014 were marked by a transient period of slowed revenue growth because of the shift from perpetual license sales of the software to cloud distribution of software paid by subscription. Instead of receiving revenue for perpetual licenses as an upfront payment, revenue was now spread out over serial subscription periods. However, the economics of software subscriptions distributed from the cloud mean that lower-priced software packages with a more limited app selection can be targeted at a much more diverse, larger number of more precisely defined market segments. Demand and usage can be monitored in real time and, in response, software SKUs can be designed and released much more responsively than previously, when comprehensive updates of a full suite of software on CDs had to be distributed to brick-and-mortar stores. This meant that revenue could ultimately reach a higher level of growth, targeted to a larger Total Addressable Market (TAM).

It is possible that Adobe's revenue growth slowed in 2022 because, once growth had accelerated to meet the new TAM demand, the growth rate settled back to reflect intrinsic market growth. On the other hand, the slowing in revenue growth might well have been caused by the contemporaneous interest rate increases, fears of recession and the consequent slowing in business spending.

I have not done the research to find the answer to this question.  It is not our job to forecast or explain changes in the economy and whether they affect the company.  Our business is to examine whether our investment of choice is maintaining and strengthening its competitive advantages, and extending them to the evolving markets in order to perpetuate its growth Ad Infinitum.

The AI revolution presents a surge in market demand. Adobe is exploiting this by integrating AI capabilities into its current applications and developing new AI-first applications.

Generative AI facilitates and accelerates content creation, its processing into final product, and its targeting to personalized market segments at scale. It makes this work more accessible to a wider range of knowledge workers. 

As described in the earnings call and recent investor conferences, Adobe is executing its strategy of integrating AI into its flagship apps. Multiple Adobe Firefly AI-powered features have been integrated into the Creative Cloud applications. The online platform Adobe Express incorporated generative AI in 2023 to accelerate and ease content creation using the Creative Cloud apps. In the Document Cloud, the Acrobat AI Assistant enables users to quickly extract value from documents.

The strategy is to use the AI features to streamline repetitive tasks and accelerate workflows within the apps, removing the pain points and some of the learning curves of content creation.  As Adobe leadership have repeatedly described over the past year at various investor presentations,  AI integration increases customer acquisition, retention and profitability.  Usage of Firefly Generative AI in the apps continues to accelerate, crossing 12 billion generations since launch. Usage of Adobe Acrobat AI Assistant grew 70% quarter over quarter.

Adobe Express is a streamlined, user-friendly, AI-first platform in which Firefly AI capabilities are made immediately available alongside Adobe's creative applications. Adobe Express showcases the power of the AI integration found in the flagship creative applications, to attract and retain new, non-professional users. In the business environment, Express empowers knowledge workers who are not professional graphic design content creators, for example in marketing and sales, to customize branded content for its final intended business marketing application. Adobe's AI-powered features are designed to be commercially safe because the AI models Adobe creates and uses in its software are built to be free of third-party intellectual property.

In fact, as stated by David Wadhwani, President of the Digital Media Business, with Express “we’re on a multi-year strategic journey to dramatically expand our reach across customer segments.” As go-to-market activities are ramped up, Express is targeted at individuals, education, teams and enterprises. As a result, Q3 saw 70% yoy growth in cumulative exports, and over 1,500 businesses and millions of students were onboarded. Exports are a relevant indicator of customer acceptance, because a user who creates content and exports it to another application has used Express to finish the content product.

In the Adobe Experience Cloud, Adobe Sensei has evolved into an AI assistant within Adobe Experience Platform. This facilitates usage by providing guidance and streamlining tasks. It is based on LLMs grounded in Adobe product facts and best practices, as well as AI models of customer data and goals.

Firefly Services is a comprehensive set of generative AI and creative services that automates workflows using the suite of apps in Creative Cloud and Experience Cloud. It takes over repetitive and labor-intensive tasks to accelerate production of content at scale, facilitating personalization or modification for specified target audiences. In addition, customers using Firefly Services can order Firefly Custom Model integration: custom models are trained on a customer's branded assets to create campaigns that match the brand's specific style.

In Adobe GenStudio, AI is natively integrated with Creative Cloud  and Experience Cloud (including Adobe Experience Manager and Experience Platform) apps to empower marketers to quickly plan, create, store, deliver and measure marketing content and drive greater efficiency in their organizations.  GenStudio was released to beta testing at Adobe Max last year and was just released to general availability as GenStudio for Performance Marketing at Adobe Max 2024. 

Adobe has Competitive Advantages Particularly in the Enterprise.

The integration of the Creative Cloud and Experience Cloud apps means they mutually reinforce their competitive advantages of switching cost.  This integration serves the customer desire to streamline and simplify usage and execution.  As Anil Chakravarthy, President, Digital Experience Business, stated, “Through the integration of Experience Cloud and Creative Cloud, Adobe is uniquely positioned to combine the right content, data and journeys in real time for every customer experience.”   Enterprise customers of Creative Cloud apps are disincentivized from using alternatives to Adobe Experience Cloud software, when this is already integrated into the comprehensive suite of content creation and marketing applications.

Adobe has a large installed base in enterprises. It has high gross and operating margins, with relatively low cost of goods sold; fixed costs such as R&D are greater than the variable costs of goods sold. For instance, in 2023 R&D was $3.473 billion and Total Cost of Revenue was $2.354 billion, against Total Revenue of $19.409 billion. This means it enjoys scale advantages. For instance, developing and integrating AI software into its market-dominating Creative Cloud generates a relatively high return, expanding usage across the installed base while attracting new users, and this shows up as a healthy Return on Invested Capital (ROIC) in the financial accounts. The various small, younger companies introducing generative AI to produce raw content face the prospect of raising capital and spending heavily on development, including acquiring datacenter infrastructure and attracting software engineers, and then, once they have created a product, fighting to win customers in a competitive market. While some of them may prosper in the battle for casual content creators, in the enterprise they are limited by the switching costs and captive customers that Adobe has nurtured for decades.
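Plugging the FY 2023 figures quoted above into the usual definitions makes the point explicit; this is a rough illustration using only those three line items:

```python
# Adobe FY 2023 figures quoted above, in $B.
total_revenue = 19.409
cost_of_revenue = 2.354   # the largely variable cost of delivering the product
rnd = 3.473               # a largely fixed cost, spent once and spread over all customers

gross_margin = (total_revenue - cost_of_revenue) / total_revenue
print(f"Gross margin: {gross_margin:.1%}")                    # ~ 87.9%
print(f"R&D as % of revenue: {rnd / total_revenue:.1%}")      # ~ 17.9%
print("Fixed R&D exceeds variable cost of revenue:", rnd > cost_of_revenue)   # True
```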

Indeed, AI strengthens Adobe's competitive advantages. As described above, the AI Assistant in Adobe Acrobat in the Document Cloud, the AI assistant in the Experience Cloud, and the AI integration in Adobe Express make the products accessible to more people in an enterprise or other user group. For example, in Adobe GenStudio, content produced initially by professional marketing creators using the Creative Cloud flagship apps is subsequently modified and finalized for specific market segments using Adobe Express, by non-professional marketing or other staff in the enterprise. This means that more people in the enterprise become habituated to the Adobe software suite and are disincentivized from switching to alternative creative solutions. This strengthens the network effect competitive advantage against potential competitors.

Adobe combines the competitive advantages of network effects and switching costs. With its large installed base, it earns a high return on the fixed cost of developing innovations such as AI, spreading that cost across its large market share. This confers economies of scale.

As long as Adobe continues its culture of profitable innovation, which it has sustained since 1982, it will continue to defend and extend its domination of its markets.

Microsoft 2024 Q4 Earnings Call: Microsoft strains to meet AI Datacenter Demand, AI Product Usage Ramping.

August 15, 2024. In the current tech company earnings season, a pressing concern in the investment community is that AI related applications may be exacting more capital investment than is justified by their usefulness and profitability to businesses.

Businesses are beginning to depend on AI. But it may not be possible to clearly foresee the many ways it will actually be used, even as it becomes integrated into daily business functions. This normal uncertainty arouses concern, given the very high level of investment required to build the computation resources AI demands. The investment community has become concerned that AI-related capital investment may strain the corporate balance sheet, reducing cash flow and increasing debt.

One of the core traits of Microsoft's operating culture is that it seeks to adapt and exploit its competitive advantage in a profitable way. The company has never commoditized its products and has avoided being vulnerable to the boom/bust cycle of some tech hardware companies. It has a differentiation strategy and strong competitive advantages of switching costs, largely based on software application usage and platforms.

In the webcast of its 4th Quarter of 2024 Earnings, Microsoft CEO Satya Nadella and CFO Amy Hood explicitly articulated that they are conservative in spending on building infrastructure needed for AI workloads, and that in fact demand by their customers for AI computation is outstripping the company’s ability to supply the computing infrastructure.  The deficit in available AI computation is partly relieved by leasing datacenters from companies which maintain large datacenter assets such as Oracle. These have the advantage of being relatively short term leases, purportedly until Microsoft can establish its own new datacenters, and this helps avoid overinvesting.

CEO Nadella emphasized, as one of two corporate goals in navigating the platform shift of AI, that they are using “customer demand signal and time to value to manage our cost structure dynamically and generate durable, long-term operating leverage.”

(The first strategic goal was, no surprise, driving innovation in infrastructure and applications products, while continuing to scale the cloud business, and prioritizing security.)

As per Nadella, AI was central in MSFT progress this quarter. Azure share gains were driven by AI.  Azure growth included 8 points from AI services where demand remained higher than their available capacity.

Despite the incipient growth of the AI business, cloud margins remain extremely strong.  As CFO Amy Hood stated, Microsoft Cloud gross margin declined from 72% to a still excellent 69%, driven by sales mix shift to Azure, partially offset by improvement in Azure even with the impact of scaling AI infrastructure. Microsoft Cloud includes Azure and other cloud services, Office 365 Commercial, the commercial portion of LinkedIn, Dynamics 365 and other cloud properties.

To get a high-level view of Microsoft's ability to lay out the Capital Expenditure (Capex) needed to grow its business, including AI, we can compare the growth of capex, net income, free cash flow (FCF), and revenue over, say, a decade.

Based on a review of the relevant Annual Reports, in the decade from 2015 to 2024, revenue increased 2.6X, from $93.580B to $245.122B. Net income to common shares rose 7.2X, from $12.193B to $88.136B, as operating and nonoperating expenses rose less than income. Capex rose 7.5X, from $5.944B to $44.477B. Capex accelerated throughout the decade and ramped up sharply in 2024: of the total rise in capex from 2015 to 2024, 42% occurred in 2024 alone, relative to 2023.
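A quick check of those decade multiples, computed from the endpoint figures quoted above:

```python
# Decade endpoints quoted above, in $B (FY 2015 vs FY 2024).
revenue_2015, revenue_2024 = 93.580, 245.122
net_income_2015, net_income_2024 = 12.193, 88.136
capex_2015, capex_2024 = 5.944, 44.477

for name, start, end in [("Revenue", revenue_2015, revenue_2024),
                         ("Net income", net_income_2015, net_income_2024),
                         ("Capex", capex_2015, capex_2024)]:
    print(f"{name:10s}: {end / start:.1f}x over the decade")
# Revenue ~2.6x, net income ~7.2x, capex ~7.5x, matching the figures in the text.
```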

Microsoft's businesses generate enough cash to fund capex needs and more. Even with accelerating capex, Free Cash Flow (FCF) still increased 3.1X, from $23.724B to $74.071B. In 2024, debt interest coverage was still over 20x, and the Debt/Equity ratio at the end of 2024 was at its lowest level of the decade.

Remember that FCF is primarily the cash remaining after capital expenditure has been subtracted from cash flow from operations.

Therefore, Microsoft has shown it can generate the cash required to supply capex to adapt and grow its business as cloud computing and AI shape its markets.  Given its historical record extending back decades, I think it is safe to assume it will continue to do so.

The company's outlook was that 2025 capex would be larger than 2024's. Let's hypothetically assume capex doubles from the 2024 level of $44.477 billion to $88.954B (note that an annual doubling of capex is unprecedented in the history of Microsoft). Operating cash flow has increased at an average annual rate of 14.47% over the past decade; assuming operating cash flow in 2025 grows at this same rate over 2024, it would reach $135.144 billion. Then, hypothetically, FCF in 2025 would be $135.144B minus capex of $88.954B, or $46.190B. This is a significant decrease from the 2024 FCF of $74.071B.
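The same stress-test scenario, spelled out with the assumptions stated above (a hypothetical sketch, not a forecast):

```python
# Hypothetical FY 2025 stress test using the assumptions stated above (in $B).
capex_2024 = 44.477
capex_2025 = 2 * capex_2024        # assume an unprecedented doubling of capex -> 88.954
ocf_2025 = 135.144                 # assumes the decade-average 14.47% growth in operating cash flow

fcf_2025 = ocf_2025 - capex_2025   # FCF = operating cash flow - capex
print(f"Hypothetical FY 2025 FCF: ${fcf_2025:.3f}B")   # ~ $46.190B, vs $74.071B in FY 2024
```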

Should the company be faced with persistent jumps in capex requirements, we would want to estimate when the resulting datacenters will produce the revenue to maintain cash flow at its accustomed growth level. For this, we can look to clues which the leadership team gave at the earnings conference.

Just taking two instances of AI related products, we see signs of AI-induced growth in usage and revenue which are promising.

Copilot for Microsoft 365.

Copilot for Microsoft 365 is becoming a daily habit for knowledge workers as it transforms workflow.  The number of people who use Copilot daily at work nearly doubled quarter-over-quarter, as they use it to complete tasks faster, hold more effective meetings, and automate business workflows and processes.  Copilot customers increased more than 60% quarter-over-quarter.

Feedback has been positive, with the majority of enterprise customers coming back to purchase more seats. The number of customers with more than 10,000 seats more than doubled quarter-over-quarter.

With Copilot Studio, customers can extend Copilot for Microsoft 365 and build custom copilots that proactively respond to data and events using their own first and third-party business data.  To date, 50,000 organizations – from Carnival Corp., Cognizant, and Eaton, to KPMG, Majesco, and McKinsey – have used Copilot Studio, up over 70% quarter-over-quarter.

 Copilot is being extended to specific verticals, including healthcare, with DAX Copilot.  More than 400 healthcare organizations – including Community Health Network, Intermountain, Northwestern Memorial Healthcare, and Ohio State University Wexner Medical Center – have purchased DAX Copilot to date, up over 40% quarter-over-quarter.

GitHub Copilot

GitHub Copilot is by far the most widely adopted AI-powered developer tool.

Just over two years since its general availability, more than 77,000 organizations – from BBVA, FedEx, and H&M, to Infosys and Paytm – have adopted Copilot, up 180% year-over-year. “Copilot accounted for over 40% of GitHub’s revenue growth this year, and is already a larger business than all of GitHub was when we acquired it.” I take that to mean revenue from GitHub Copilot is now larger than GitHub's total revenue was when Microsoft acquired it for $7.5B in 2018. And GitHub has a current annual revenue run rate of $2B. So not only is its usage growing, but this growth has been turbocharged by Copilot AI.

In summary, based on review of relevant past financial disclosures including Annual Reports, the Q4 2024 earnings conference, and understanding of Microsoft business culture as it has operated historically, Microsoft should be able to continue to spend as required for Capex to build out AI and cloud computation.  Growth in usage of AI-related products should lead to continued earnings growth. Most importantly, over the long run, the novel AI-related features will become indispensable tools for knowledge workers and will confer the strong competitive advantages of switching costs which enable Microsoft to keep dominating its markets.

Nvidia: a company in a cyclical industry, with a competitive advantage

June 15, 2024. A quantitative feature that I have used to help identify companies with a durable competitive advantage is relatively consistent growth of revenue over many years. I reason that the persistent climb in revenue is related to the indispensable nature of the company's products or services, with a related durable competitive advantage, and to a large, persistent demand for them, commonly referred to as a “long runway” of total addressable market. I had avoided cyclical companies, whose stock price rises or falls with the demand cycle. In order to obtain a superior return, you need to buy at the bottom of the cycle, but in practice it is difficult to know when the company's fortunes will recover, or when the stock price will stop falling.

Visa and Microsoft are examples of companies with strong competitive advantages and products that are indispensable for large and growing markets. Since its IPO in 2008, Visa's sole decrease in annual revenue occurred in 2020, when government policies enacted during the history-making Covid-19 pandemic abruptly curtailed international travel, and the fall in purchases by persons travelling internationally slowed cross-border payment volumes. Impressively, Visa's total annual revenue did not decrease during the Great Recession of 2008-2009.

Microsoft has had minor if any decreases in revenue, except in 2009 due to the Great Recession. In that economic crisis, it was primarily revenue from Windows software sold to corporations (the Client business segment) that decreased, while price cuts for gaming software and hardware slowed gaming revenue. By 2010 both revenue and earnings exceeded the 2008 levels. Of note, revenue from the Business Division, including the Office suite of productivity software, was essentially flat. This product was more economically resilient because it has powerful switching costs and generates revenue largely through multiyear contracts.

Nvidia was brought to my attention by the rise of Artificial Intelligence (AI) in public awareness. Review of its Annual Reports shows that Nvidia has in the past had prolonged decreases in revenue and earnings, related to adverse macroeconomic conditions. For instance, in 2009 revenue decreased relative to 2008 and did not recover until 2013. Diluted earnings per share fell to a loss in 2009 and 2010, and did not recover to the 2008 level until 2017. In earlier periods of its history, rising costs caused earnings to show multiple years of losses or declines without a loss of gross revenue.

Revenue and earnings fell during the Great Recession because Nvidia at that time, derived most of its revenue from products for the PC market. In FY 2009, desktop GPU product sales decreased 29% year over year.  Moreover, cyclical decline can have prolonged effects. PC makers build inventory during periods of anticipated growth. They are left with excess inventory in the event this growth fails.  They can then delay additional purchases of the GPU inputs to PC manufacture, until end customer demand has resumed.

The datacenter GPU business segment, which now contributes the lion’s share of Nvidia revenue, did not yet exist in that era.

And yet, during the Great Recession of 2008-2009, the research engineers at Nvidia were laying the foundation for its history-making advances and the competitive advantage of the next decade and more.

During 2008 as the global financial crisis accelerated, Nvidia “announced a workforce reduction to allow for continued investment in strategic growth areas… we eliminated … about 6.5% of our global workforce. … expenses associated with the workforce reduction, totaled $8.0 million. We anticipate that the expected decrease in operating expenses from this action will be offset by continued investment in strategic growth areas. ” (Nvidia 10K FY 2009) (Nvidia Fiscal Year ends in January of that year, so it reports on business activity occurring chiefly in the previous calendar year). 

Indeed, R&D expenditures continued to climb during this period, making up 17%, 25% and 27.3% of revenue in fiscal years 2008, 2009 and 2010, respectively (Nvidia 10K FY 2010, p36).

Nvidia GPU chips were originally designed for use in gaming and graphics software applications, and by the late 1990s Nvidia had come to dominate that market. The company IPO'd in 1999. In 2006 Nvidia conceived CUDA, a software platform that enables software to employ the parallel processing and accelerated computing of the GPU for diverse applications beyond graphics. It supports a range of languages and a comprehensive armamentarium of tools, allowing it to be used for a wide range of applications.

CUDA builds competitive advantage in several ways. The CUDA software platform enables engineers to use accelerated computing driven by Nvidia GPU chips for a wide variety of other useful and novel applications, expanding the addressable market of use cases for the GPU. Nvidia has fostered an ecosystem of software centered around diverse applications of CUDA, collaborating with myriad companies in diverse industries, from healthcare to pharmacy to automotive, to nurture vertical stacks of software supported by the CUDA platform. The approximately 5 million developers in that ecosystem create a network-effect competitive advantage over other GPU manufacturers such as AMD. CUDA is compatible only with Nvidia chips. While it is optimized for upcoming, ever more potent chips, the software ensures backwards compatibility, so developers and end users can often update application software on their current hardware. This is a strong switching-cost competitive advantage versus other chip makers.

Nvidia is one of the rare companies that has consistently done the massive, risky work of anticipating emerging market segments and persistently adapting its competitive advantage to keep dominating the market as it evolves. During these decades, beginning well before the deep learning breakthrough of 2012 and continuing today, the cultivation of the CUDA-centered ecosystem, along with consistent hardware innovation including strategic acquisitions, enabled Nvidia to come to dominate the market for datacenter GPUs and the related high-performance computing equipment which is required for AI.

Some general and striking realities about Nvidia's current competitive position are fairly clear. There is high demand for AI and accelerated computing capability, and it is not a transient fad. The use cases are becoming permanent fixtures in the evolving economy. For example, AI is raising the productivity of knowledge workers, and widening access to computing applications by making them easier for non-specialists to use. Accelerated computing makes attainable tasks that were previously too large to take on. For example, it enables health systems to harness their unstructured clinical or administrative data to yield insights regarding care provision or costs. Virtual digital-twin factories can be designed and tested before building the actual plant, avoiding the costs of trial and error.

It is clear that the accelerated computing that makes AI possible largely requires Nvidia products. Specifically, these are the datacenter chips needed for accelerated computing, and the technology and software tools needed for the datacenter to efficiently produce (train) and run (inference) AI models.

Based on company communications to investors, demand is predicted to persistently outstrip supply. In view of innumerable essential use cases, the runway and total addressable market are massive. Nvidia commands at least 80% market share. Most likely, the company's revenue and income will climb for some time.

With cyclical companies, the concern is to avoid buying the stock at the peak of the cycle. Nvidia stock fell in the macroeconomic perturbations of 2022, amid predictions of recession and rising interest rates. The PE in the quarter ending October 30, 2022 was about 60; it was 50 at the end of the quarter ending in April 2024. Meanwhile, revenue had climbed more than 4 times, and earnings had grown 20X.

Therefore it seemed, especially in view of the expectation of continued revenue growth, that the stock was not overvalued. I decided to reallocate some funds from UnitedHealth Group (UNH) to Nvidia, which I did on February 29th, at a purchase price of $793.16, with 5.4% of my portfolio in Nvidia at the close. The stock has since split 10 for 1. At some time, demand for Nvidia data center products may decline; that is not happening soon. At some point, Nvidia may become involved in a financial mania related to AI; that point has not yet arrived. Should it do so in the future, I might reallocate some funds back to a non-cyclical company, such as UnitedHealth Group.