
Mercado Libre Q3 FY 2025: Credit Card and GMV Growth Driven by Shipping Scale

CFO Martín de los Santos: “Investments across our ecosystem continue to deliver growth. Revenues rose 39% YoY, marking the 27th consecutive quarter above 30%.”

Mercado Pago: Credit Card Portfolio Fuels Expansion

Mercado Pago had a strong quarter. Monthly active user growth accelerated thanks to UX improvements, credit card investments, and growth in interest-bearing accounts. Credit card usage and share of wallet increased in Brazil, Mexico, Argentina, and Chile. The loan portfolio expanded without compromising credit quality, and more credit card accounts are reaching profitability. In Chile, Mercado Pago MAUs grew 75% YoY.

Mercado Libre funds credit card loans through borrowing. Profitability typically lags until users maintain balances and pay interest for a period, so this margin compression is expected while the portfolio grows. Commerce President Ariel Szarfsztejn noted that credit card customers become profitable after two years. In Brazil, where cards launched in 2021, cohorts from 2023 and earlier, about 50% of cards, are profitable. Mexico, which began issuing cards in 2023 with Visa, is not yet profitable. Argentina only started issuing cards late in the quarter; there, the business is hampered by high rates and by net interest margins compressed by inflation.

Mercado Pago benefits from Mercado Libre’s e-commerce market share, which lowers credit card customer acquisition costs. This is an example of the mutually reinforcing competitive advantages of Mercado Libre’s e-commerce and fintech businesses.

An investor might ask: given the perennial economic difficulties in markets such as Argentina, and the expected margin compression as the company invests in Mercado Pago credit growth, what should expectations be? Mercado Libre’s 25-year track record of overcoming impressive hurdles and building synergies supports confidence in management’s strategy and execution.

Credit card growth reinforces Mercado Libre’s e-commerce business in several ways.

Credit cards drive financial inclusion. Many new holders never owned a card before, especially in underbanked markets like Mexico, where Mercado Libre ranks second in MAUs among all financial enterprises, including banks, and first in app downloads. Credit cards accelerate e-commerce adoption.

The Mercado Pago Credit Card has boosted loyalty. It is one of the few cards with no annual fee in its markets, and various incentives boost usage on MELI e-commerce as well as offline. In Brazil, the Mercado Pago card leads on-platform and dominates installment transactions, aided by extra installment incentives. Despite Brazil’s economic slowdown, Mercado Pago’s credit card TPV grew 28% YoY, with over 50% off-platform, signaling broader wallet adoption. Principality in Brazil rose 11 points, driven by three factors: credit cards, interest-bearing accounts, and salary deposits (the latter pending a banking license in Mexico). The cards integrate with the Meli Plus loyalty program, incentivizing usage both online and offline.

Mercado Pago’s credit card strategy has resulted in portfolio growth of 83% YoY. The card introduces new consumers to digital payments and incentivizes purchases on Mercado Libre e-commerce, buttressing the competitive advantage of customer switching costs.

Reduced Delivery Charges Lead to E-commerce Sales Growth

Another salient achievement noted in the earnings call was the growth spurt in Mercado Libre’s e-commerce business in Brazil during the quarter, which resulted from a strategic reduction in shipping charges for customers purchasing online.

In the previous quarter, MELI had decided to lower the minimum purchase required for free shipping in Brazil from R$79 to R$19 (R$19 is approximately $3.50). During the current Q3 FY 2025, this lifted the growth rate in the number of items sold from 26% to 42% YoY, while GMV growth accelerated from 29% to 34% YoY. NPS reached an all-time high. The number of listings increased sharply in the R$19 to R$79 price range, as new sellers were attracted.

The expanded free-shipping incentive did more than increase e-commerce sales. The higher transaction volumes also enabled MELI to reduce per-unit shipping cost in Brazil by 8%, because the slow-delivery shipping option allowed more complete utilization of shipping capacity. Slow delivery is an option a purchaser can choose that grants free shipping in exchange for a more flexible delivery date, allowing MELI to optimize utilization of its containers and network. As CFO Martín de los Santos described, this will be part of a long-term process of optimizing the slow-shipping layer of the logistics network, as the company iteratively improves its processes and technology.

In Mexico as well, increased sales volume enabled better utilization of logistics: as GMV growth accelerated, per-unit shipping and fulfillment costs continued to fall, in this case without the specific new incentive of expanded free shipping.

The decision to lower shipping charges resulted in higher sales volume, and Mercado Libre employed its competitive advantage of scale, as the relatively fixed costs of the logistics infrastructure carried the increased sales volumes.
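To make that scale effect concrete, here is a minimal back-of-envelope sketch in Python. Every number in it is a hypothetical assumption chosen only to illustrate the mechanism, not a MELI disclosure: when most of the logistics cost base is fixed, per-unit shipping cost falls as volume grows.

```python
# Illustrative only: all numbers are hypothetical assumptions, not MELI disclosures.
# Per-unit shipping cost when most of the logistics cost base is fixed.

def per_unit_cost(items_shipped, fixed_network_cost, variable_cost_per_item):
    """Fixed network cost spread over volume, plus the variable cost of handling each item."""
    return fixed_network_cost / items_shipped + variable_cost_per_item

FIXED_COST = 100_000_000   # hypothetical quarterly cost of warehouses, routes, containers
VARIABLE_COST = 2.00       # hypothetical per-item pick, pack, and last-mile cost

for volume in (50_000_000, 60_000_000, 70_000_000):
    cost = per_unit_cost(volume, FIXED_COST, VARIABLE_COST)
    print(f"{volume:>12,} items -> ${cost:.2f} per item")

# Per-item cost falls from $4.00 to $3.43 as volume rises 40%: the same direction as
# the 8% per-unit reduction MELI reported in Brazil, though the magnitudes here are invented.
```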

Microsoft Q4 FY 2025 earnings: Azure growth accelerates as legacy business applications transition to Agentic AI

August 16, 2025.

As recounted by CEO Satya Nadella, revenue for Azure, Microsoft’s public cloud service, surpassed $75B, up 34% year over year (YoY), with Azure taking share every quarter. Azure and other cloud services revenue, as reported in the earnings call slides, grew 39% YoY. Nadella said Microsoft continued to lead the AI infrastructure wave, now with over 400 datacenters in over 70 regions, more than any other provider.

Microsoft stood up over 2 GW of new datacenter capacity in the last 12 months. For perspective, 1 GW would supply roughly 800,000 homes in the US, which shows how much new power supply the AI buildout is requiring.
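As a rough sanity check on that figure, assuming an approximate US average of about 10,500 kWh of household electricity consumption per year (the exact average varies by source):

```python
# Back-of-envelope: how many average US homes 1 GW of capacity could serve.
# Assumes roughly 10,500 kWh per household per year (an approximate US average),
# i.e. about 1.2 kW of continuous draw per home.
AVG_ANNUAL_KWH = 10_500
HOURS_PER_YEAR = 8_760

avg_draw_kw = AVG_ANNUAL_KWH / HOURS_PER_YEAR   # ~1.2 kW per home
homes_per_gw = 1_000_000 / avg_draw_kw          # 1 GW = 1,000,000 kW

print(f"~{homes_per_gw:,.0f} homes per GW")     # ~834,000, consistent with the ~800,000 figure
```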

Data:

Knowledge workers require access to relevant data produced by their enterprise in order to put it to work using applications. In enterprises whose primary function is not data science, access to data must be provided while enabling unified management and a relatively low learning curve. Microsoft Fabric is becoming the complete data and analytics platform for the AI era. Fabric enables access to all of the enterprise’s data in one application using OneLake, and it integrates with the Microsoft ecosystem. Fabric revenue was up 55% YoY, and the platform now has over 25,000 customers.

Microsoft also supports third-party data analytics platforms on Azure. Azure Databricks and Snowflake on Azure are also growing. Cosmos DB and PostgreSQL both play crucial roles in the operation of OpenAI’s ChatGPT applications.

In the area of data, Microsoft continues to evolve crucial third-party data management tools on Azure. While continuing its close partnership with Databricks, it is growing Fabric as an accessible and versatile tool that adds value to the existing Microsoft software ecosystem, in this way nurturing its competitive advantage of switching costs. Customers should be able to access the resources they need to develop innovative applications, and to find the new capabilities they seek, within the ecosystem.

Agentic AI

This year Microsoft launched Azure AI Foundry to help customers design, customize, and manage AI applications and agents, at scale.

Customers increasingly want to use multiple AI models to meet their specific performance, cost, and use case requirements. Foundry enables them to provision inferencing throughput once and apply it across more models than any other hyperscale cloud provider offers, including models from OpenAI, DeepSeek, Meta, xAI’s Grok, and, very soon, Black Forest Labs and Mistral AI.

Azure AI Foundry includes the Foundry Agent Service, now being used by 14,000 customers to build agents that automate complex tasks.

As a specific measure of usage, the number of tokens served by Foundry APIs exceeded 500 trillion this year, up over 7X.

The family of Copilot apps surpassed 100 million monthly active users across commercial and consumer.

Hundreds of partners like Adobe, SAP, ServiceNow, and Workday have built their own third-party agents that integrate with Copilot and Teams.  Also, customers use Copilot Studio to extend Microsoft 365 Copilot and build their own agents.  Customers created 3 million such agents using SharePoint and Copilot Studio this year.

GitHub Copilot users have reached 20 million.  GitHub Copilot Enterprise customers increased 75% quarter over quarter. 

In healthcare, Dragon Copilot usage is surging.  Customers used ambient AI solutions to document over 13 million physician-patient encounters this quarter, up nearly 7X year-over-year.  In a typical use case the copilot creates a progress note of the patient encounter based on the dialogue between the physician and the patient.  The physician is relieved of the administrative work of writing the note after the patient has left.

CEO Nadella stated that revenue from Azure AI services was generally in line with expectations. And, “while we brought additional datacenter capacity online this quarter, demand remains higher than supply.”

As legacy applications transition to agentic AI, Microsoft is adding productivity and capability to its ecosystem while enabling new capabilities, such as relieving doctors of admin work! The success of this strategy is seen in the growth of usage of the various applications, from Dragon Copilot to Fabric for the enterprise, to agentic AI for large-scale customer service organizations. But this success is based on more than widespread usage. It translates into cash flow, the key indicator of a truly successful business.

The reason for this ability to translate sales into cash flow is one of Microsoft’s central competitive advantages: scale. This has two components. The first is the availability of captive customers. When a new capability, for example Fabric, is launched, sales are efficiently executed across a massive number of existing ecosystem customers, and software vendor partners multiply this effect. That is, the new source of revenue is efficiently scaled across a large number of receptive customers. Scale enables the company to afford the cost of developing the product, costs which include, for example, employing engineers and building or modifying datacenter compute.

Second, the cost of incremental new business is quite low for Microsoft’s software business. There is minimal additional cost to selling the 2nd, 3rd, or 100th instance of the software. The costs of developing the software products are primarily fixed. The costs of selling incremental additional volumes, which might include setting up additional technical support services that can serve customers globally from one center, are relatively low; these are the variable costs.

Compare this with a company like Starbucks. Each new store that Starbucks sets up requires investment in real estate, worker recruitment, training, and logistics for the coffee and food products, and that store can sell only in its own geographic location. We seek to invest in companies with a scale competitive advantage, where fixed costs outweigh the variable costs.
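A minimal sketch of this fixed-versus-variable dynamic, with numbers invented purely for illustration (they are not Microsoft or Starbucks figures):

```python
# Hypothetical illustration: operating margin vs. volume for two cost structures.
# "Software-like" = large fixed development cost, tiny cost per extra copy sold.
# "Store-like"    = modest fixed cost, but most of each sale consumed by variable cost.

def operating_margin(units, price, fixed_cost, variable_cost_per_unit):
    revenue = units * price
    profit = revenue - fixed_cost - units * variable_cost_per_unit
    return profit / revenue

for units in (10_000, 100_000, 1_000_000):
    software_like = operating_margin(units, price=100, fixed_cost=5_000_000, variable_cost_per_unit=2)
    store_like = operating_margin(units, price=100, fixed_cost=200_000, variable_cost_per_unit=85)
    print(f"{units:>9,} units  software-like {software_like:7.1%}   store-like {store_like:7.1%}")

# The software-like margin goes from deeply negative at low volume to about 93% at one
# million units, while the store-like margin stays capped near 15% regardless of volume.
```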

The competitive advantage of scale is what drives high gross margin. For a company where Selling, General and Administrative (SG&A) costs are well controlled and debt is conservative, generating low interest expense, this means that operating cash flow continues to grow, supplying the capex which enables the company to readily exploit emerging markets.

This cash production is impressively made manifest when we consider Microsoft’s ability to ramp up its capital expenditure to eye-popping amounts in recent years, while maintaining large amounts of cash flow.

The table above shows operating cash flow, capex (PP&E expense) and free cash flow for Microsoft from FY 2020 through 2025. As shown, Microsoft was able to more than quadruple capex from $15.4B in 2020 to $64.55B in 2025 while maintaining free cash flow. While free cash flow decreased slightly from $74B in 2024 to $71.6B in 2025, it is still ample to supply the company’s needs. Out of free cash flow of $71.6B in 2025, Microsoft paid approximately $24B in dividends, repurchased $18.4B in common stock, and had $2.4B in debt interest expense.
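A quick back-of-envelope check, using only the figures cited above:

```python
# Back-of-envelope using only the figures cited in the text (all values in $ billions).
capex_2020, capex_2025 = 15.4, 64.55
fcf_2025 = 71.6
dividends_2025, buybacks_2025 = 24.0, 18.4

print(f"Capex multiple, FY2020 -> FY2025: {capex_2025 / capex_2020:.1f}x")   # ~4.2x
remaining = fcf_2025 - dividends_2025 - buybacks_2025
print(f"FCF remaining after dividends and buybacks: ${remaining:.1f}B")      # ~$29.2B
```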

In comparison, a startup company supplying an analogous product incurs a sales and marketing cost to acquire each new customer, and that is after building or paying for compute capacity. Furthermore, it must sell at a lower price in order to attract customers who would otherwise use the more complete suite available from Microsoft. This special effort to take customers from the large-scale provider is necessary because the startup lacks the other, more fundamental competitive advantage that Microsoft has: switching costs. Microsoft’s customers are captive, though not unwillingly so. Switching costs are sustained by continuous research and development that adds value across the product ecosystem, so ecosystem customers find it more economical to remain in the system. This never-ending evolution of the product sustains the customer base and provides the basis for scale.

Outlook for FY 2026:

CFO Amy Hood gave the outlook for the coming quarter and fiscal year.

“Capital expenditure growth, as we shared last quarter, will moderate compared to FY25 with a greater mix of short-lived assets. Due to the timing of delivery of additional capacity in H1 (first half of year), including large finance lease sites, we expect growth rates in H1 will be higher than in H2. Revenue will continue to be driven by Azure.

“In Azure, we expect Q1 revenue growth of approximately 37% in constant currency driven by strong demand for our portfolio of services on a significant base. Even as we continue bringing more datacenter capacity online, we currently expect to remain capacity constrained through the first half of our fiscal year.”

In summary, as aggressive but prudent expansion of Azure infrastructure and software tools enables the growth of data analytics platforms and agentic AI, Microsoft continues to advance as an indispensable host and enabler of the AI transformation, while further enriching its ecosystem.

Nvidia Q4 FY 2025: Revenue and Earnings Beat. Robust Demand and Growing Market, More Reasonable Valuation.

March 3rd, 2025. On February 26, 2025, Nvidia announced financial results for the fourth quarter and fiscal year 2025, which ended on January 26, 2025. Note that since Nvidia’s fiscal year ends in January, it is labeled with the new calendar year, even though most of the financial events included transpired during the preceding calendar year.

Recently the market has apparently been concerned with certain key issues. There is widespread interest in investing in AI compute and in adapting workflows across many industries to employ AI applications in order to boost productivity. But there is uncertainty regarding the persistence of high levels of demand for the Nvidia GPUs needed to produce AI compute. Another issue is whether the return on investment (ROI) on the high levels of capex required to build AI-related datacenters will justify the investment.

We can address these issues with information contained in the commentary of CFO Colette Kress, with full narrative available on the webcast, from which I quote liberally in the following outline of financial and operating results.

Revenue for Q4 was $39.3 billion, exceeding management’s outlook of $37.5B. This was up 12% sequentially and up 78% year over year (YoY). For the entire FY 2025, revenue was $130.5B, up 114% YoY. Over 88% of this was for Data Center, which reflects the demand for AI-related compute.

Data center revenue for fiscal 2025 was $115.2 billion, up 142% from the prior year. In Q4, Data Center revenue of $35.6 billion was up 16% sequentially and 93% year-on-year.

The most current GPU platform in production is Grace Blackwell, which was introduced at Nvidia GTC (GPU Technology Conference) 2024, in March 2024. Last August there had been concern among analysts about delays in Blackwell production. The issue of inadequate manufacturing yield was subsequently addressed successfully. Currently, this is the fastest product ramp in the company’s history, unprecedented in its speed and scale. Blackwell production is in full gear across multiple configurations for varying datacenter architectures. In Q4, Blackwell sales exceeded management expectations: Nvidia delivered $11 billion of Blackwell revenue to meet strong demand. Grace Blackwell systems have been installed by Nvidia for its own development efforts, as well as by other notable customers including Microsoft, CoreWeave and OpenAI.

Datacenter revenue includes the categories of Compute, and Networking. Of Q4 Data Center revenue of $35.580B, Compute accounted for $32.556B (91.5%), with the remainder being Networking. Q4 Compute revenue jumped 18% sequentially and over 116% year-on-year. Customers are racing to scale infrastructure to train the next generation of cutting-edge models and unlock the next level of AI capabilities. 

Post-training, model customization, and inference demand orders of magnitude more compute than initial model training. This is creating an expanding market of application-specific models, which drives continuing demand for AI compute. The latest Nvidia GPU platforms provide markedly improved power efficiency (performance per watt) and response speed. The company’s performance and pace of innovation are unmatched, driving a 200x reduction in inference costs in just the last 2 years. Blackwell was architected for reasoning AI inference: it supercharges reasoning AI models with up to 25x higher token throughput and 20x lower cost versus the Hopper H100. Blackwell is in great demand for inference; many of the early GB200 deployments are earmarked for inference, a first for a new architecture. Blackwell addresses the entire AI market, from pretraining and post-training to inference, across cloud, on-premise, and enterprise.

CUDA’s programmable architecture accelerates every AI model and over 4,400 applications, protecting large infrastructure investments against obsolescence in a rapidly evolving market.

Therefore, the addressable market for Nvidia’s products continues to grow rapidly. The improving performance per unit cost of its products continues to optimize ROI for customers. Customers can install new Nvidia hardware and enjoy continuing application software compatibility. The CUDA platform provides backwards compatibility while supporting a wide and growing ecosystem of applications. It is the key element providing the switching-costs competitive advantage (customers are captive to the platform). Competing chip providers cannot lure developers away from the market-dominating CUDA platform to their incompatible hardware and software.

Parenthetically, the next GPU generation (Blackwell Ultra) will launch in H2 of this year, to be followed by the Vera Rubin system. According to CEO Jensen Huang, datacenter GPU system hardware, power delivery and architecture changed considerably from Hopper to Blackwell, which made the introduction of Blackwell challenging. The system architecture does not change from Blackwell to Blackwell Ultra. This could allow an easier supply ramp in the future, although presumably that will be next year.

Regarding valuation, in the year between Q4 FY 2024 (January 2024) and Q4 FY 2025 (January 2025), total net revenue rose 114.2% from $60.922B to $130.497B. Diluted EPS rose 147.06% from $1.19 to $2.94 per share. The trailing PE fell from 50.65 to 47.92. Note that the January 2025 PE (at the date of the earnings release for Q4 FY 2025) is less than the mean PE of 53.47 over the past decade, beginning in 2016.
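These growth figures can be reproduced directly from the reported numbers quoted above; a simple check:

```python
# Reproducing the growth figures from the reported values quoted above.
rev_fy2024, rev_fy2025 = 60.922, 130.497   # total revenue, $ billions
eps_fy2024, eps_fy2025 = 1.19, 2.94        # diluted EPS, $ per share

rev_growth = rev_fy2025 / rev_fy2024 - 1   # ~1.142 -> 114.2%
eps_growth = eps_fy2025 / eps_fy2024 - 1   # ~1.471 -> 147.1%
print(f"Revenue growth: {rev_growth:.1%}   Diluted EPS growth: {eps_growth:.1%}")

# Since EPS grew faster than the share price over the same period, the trailing P/E
# compressed from 50.65 to 47.92 even as the stock appreciated.
```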

There was a recent drop in the stock price, apparently in reaction to the revelation of the DeepSeek R1 LLM. Hangzhou DeepSeek Artificial Intelligence Basic Technology Research Co., Ltd., does business as DeepSeek. Its R1 LLM outperformed some leading LLMs created by US corporations, and was allegedly created for only $6 million, instead of the usually required hundreds of millions of dollars, using Nvidia chips which are not the most advanced available and which are therefore permitted by US trade law to be exported to the PRC. DeepSeek “raised alarms on whether America’s global lead in artificial intelligence is shrinking and called into question Big Tech’s massive spend on building AI models and data centers.”

As we know, until this moment analysts had been anguishing over whether the unprecedented capex required to acquire Nvidia GPU-powered datacenters for the production of AI would allow adequate ROI. Now the possibility of cheaper production appeared and provoked the reverse crisis: AI would be abundant, but there would be no need for the higher-end, expensive Nvidia GPUs. Perhaps if the Communist totalitarian country had produced an LLM that was somewhat cheaper to produce, but not that much cheaper, a sort of happy medium, then investors might have taken the announcement with equanimity, since improved ROI on AI would happily coexist with some easing of capex.

The reality is, there will always be a new, cheaper, or more powerful chip, or other novel innovations in this rapidly developing industry. Some of these will improve business economics; others will not. Zooming out to gain some perspective, it is apparent that AI will in the course of time pervade the global economy in countless ways. Demand for the required compute hardware and software shows no sign of abating and will continue for some time. I doubt that the time has come to lower interest in Nvidia. Nvidia dominates the market for the GPUs which are indispensable for creating AI tools, and the accompanying CUDA software platform contributes to the switching costs which maintain the ecosystem of application developers using Nvidia GPUs.

In view of the recent price drop, exceeding a 20% discount from the 52-week high, accompanied by the reduction in the PE ratio over the last year, I took the opportunity to transfer some portfolio allocation to Nvidia. I sold a very modest portion of my MSFT holdings, in the range of 2%, at a weighted average of a 6.14% reduction from its 52-week high, and likewise for Visa, which was at its 52-week high. I bought NVDA stock at a weighted average of a 20.8% reduction from its 52-week high. Nvidia now makes up approximately 12% of my portfolio. I did not feel brave enough to buy more, probably because of near-term volatility; I feel safer buying smaller amounts during periods of recurrent price drops. The stock is, after all, not grossly undervalued.