Nvidia Earnings Preview: 5 Things To Know

‘For every $1 spent on an Nvidia GPU chip there is an $8 to $10 multiplier across the tech sector,’ according to an August report by investment firm Wedbush.


Blackwell delays. Hopper demand. And the state of the emerging artificial intelligence market.

Nvidia’s earnings report Wednesday, covering the second quarter of its fiscal year, which ended July 28, has been hyped by investment firm Wedbush as making this “the most important week for the stock market this year and potentially in years.”

“There is one company in the world that is the foundation for the AI Revolution,” the firm said in an August report. “And that is Nvidia with the Godfather of AI Jensen (Huang, Nvidia’s CEO and co-founder) having the best perch and vantage point to discuss overall enterprise AI demand and the appetite for Nvidia’s AI chips looking forward. … We continue to estimate for every $1 spent on an Nvidia GPU chip there is an $8 to $10 multiplier across the tech sector.”

[RELATED: Nvidia Delays Next-Gen Blackwell GPUs Due To Design Issues: Reports]

Nvidia Second-Quarter Earnings

An August report by Melius Research said it is possible for Nvidia to do what it called a “Triple Lindy” in reference to the 1986 Rodney Dangerfield movie “Back to School.”

“The Triple Lindy would be to beat F2Q25 by $2B, guide up F3Q25 by $2B q/q and say enough to imply the F4Q25 grows q/q by another ~$2B,” according to the Melius report. “Nvidia basically did this last quarter with one of the best conference calls we’ve ever heard. As we write this piece, this scenario seems possible – but it isn’t necessary for those with a long-term horizon. It is pretty easy to be off by a billion here and there with numbers this big (Data Center sales at $105B annual revenue run rate in FY25).”
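
Purely as an illustration of that arithmetic, and taking Nvidia’s roughly $28 billion second-quarter guidance (noted below) as the starting point, the quarterly path Melius describes would look something like the following sketch. The figures are hypothetical, not Melius estimates.

# Hypothetical "Triple Lindy" arithmetic, in billions of dollars.
# Baseline is Nvidia's ~$28B F2Q25 guidance; Melius does not specify whether
# the ~$2B steps are measured against guidance or consensus, so this is an assumption.
baseline_f2q25 = 28.0

f2q25_reported = baseline_f2q25 + 2.0   # beat the quarter by ~$2B
f3q25_guide = f2q25_reported + 2.0      # guide the next quarter up ~$2B q/q
f4q25_implied = f3q25_guide + 2.0       # imply another ~$2B q/q gain after that

print(f"F2Q25 ~${f2q25_reported:.0f}B, F3Q25 guide ~${f3q25_guide:.0f}B, "
      f"F4Q25 implied ~${f4q25_implied:.0f}B")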

Bank of America said in a report in August that it expects Nvidia sales in line with or modestly better than the Wall Street consensus of $28.6 billion, which is also ahead of Nvidia’s $28 billion guidance. The investment firm specifically cited performance in the Hopper product line as fueling the positive outlook.

As for Nvidia’s third fiscal quarter, Bank of America predicts $30 billion in revenue, below the Wall Street consensus of $31.5 billion. The firm predicts that same $30 billion figure for the fourth fiscal quarter and for the first quarter of Nvidia’s fiscal year 2026 as well, against a Wall Street consensus of $31.5 billion for the fourth quarter and $34.5 billion for the first quarter of fiscal 2026, with those consensus numbers potentially coming down depending on the severity of Blackwell delays.

Here’s more of what to expect on Nvidia’s earnings call Wednesday.


Blackwell GPU Timeline

Analysts will likely seek more information from Nvidia executives around its Blackwell-architecture GPUs, especially with a reported release delay due to technical issues with the underlying architecture.

Melius Research said in an August report that while “we were concerned when we first heard about issues with the new Blackwell chips in the supply chain and from customers—that included commentary around a 3-month delay due to potential overheating, a design issue and some packaging issues,” now the firm has seen that “volumes for Blackwell systems can still be quite strong in the April ’25 quarter.”

“Our checks right now indicate training of LLMs is still strong at big customers even as some of the applications that drive inferencing (using the models in production) still need to get going,” according to the Melius report. “For example, the rollout/adoption of OpenAI’s GPT-4o voice mode, SoraAI, and enterprise apps like MSFT’s Copilot are all taking a little longer than we expected. However, Nvidia’s sales of H100s and H200s seem to be benefiting from brisk efforts at Meta to train Llama 4, OpenAI for GPT-5 and (the remaining) well-funded labs. When training is brisk you need Nvidia.”

Melius also did not sweat a short-term underwhelming performance by Blackwell. “Nvidia reminds us of Apple in many ways in terms of a full stack approach and profitability,” according to the August report. “Whenever there’s been an issue with Apple product shipment timing the company generally tends to make up the volumes in future quarters. So in short as competition doesn’t seem to be hurting Nvidia and the big customers like Google, Amazon, Meta and CoreWeave still seem ready to spend big, any potential dip in the stock is likely a short-lived opportunity.”

The Melius Research report said Nvidia executives should address whether Blackwell’s timeline “can still make the street’s forecast for $150B (up 41%) in FY26 Data Center revenue very conservative” and whether the chipmaker sees supply constraints around packaging and high-bandwidth memory (HBM) availability.

An August report by Bank of America also shook off Blackwell delay concerns, in part because neither the major cloud customers, which raised their capital expenditure expectations, nor major Nvidia supplier Taiwan Semiconductor Manufacturing Co. (TSMC) mentioned delays on their most recent earnings calls.

Bank of America’s report said that the investment firm believes Nvidia can “extend the lifecycle of its current-gen Hopper, while launching less complex Blackwell versions as a stopgap,” should the delay grow more serious.

An August report by Morgan Stanley said “customer enthusiasm for Blackwell is at very high levels, but the utility of small volume of Blackwell in the initial ramp phase is somewhat lower.”

“For cloud businesses, it will take time, for instance, for demand to pick up, vs. Hopper demand which is exceptionally strong in cloud,” according to the investment firm.

Strong recent results from Supermicro and King Yuan Electronics Co. (KYEC) plus original design manufacturing (ODM) and foundry monthly numbers “point to very strong builds,” according to Morgan Stanley.

“Regardless of expectations issues, the fact that headwinds (including China export controls last year, challenges in bringing up powered data centers, potential pauses in front of a clearly more capital efficient Blackwell, and now a tactical delay in new products) simply do not affect the company’s strong momentum,” according to Morgan Stanley.

In a July report, Morgan Stanley increased its “data center estimates for CY25 to about 35% growth” as visibility around Blackwell grows.


Hopper GPU Demand

As for Nvidia’s Hopper microarchitecture, an August report by Melius Research called sales of these GPUs “brisk and trending up q/q, not only into F3Q25 (October) but also into F4Q25 (January).”

The firm found that “generally positive June quarter Cloud capex news, checks in the optical, packaging and other parts of the supply chain (Foxconn, Super Micro, etc.) back a strong F2Q25 for NVDA and growth each of the next 2 quarters for Hopper (especially H200’s).”

Recent upbeat commentary from AI server and optical vendors bodes well for Nvidia, according to Melius, as do positive comments from former Google CEO Eric Schmidt and from Walmart, which said work now handled by AI would have required 100 times its current head count to complete in the same amount of time.

Wedbush’s August report said that Nvidia is “the only game in town with $1 trillion of AI Cap-Ex on the way for the next few years with Nvidia’s GPUs the new oil and gold in this world” and said that cloud and AI numbers from Microsoft, Amazon and Google indicate “massive enterprise AI demand is now underway.”

A July report by Morgan Stanley said that demand moving from Hopper to Blackwell will shift constraints back to silicon.

“H100 lead times are short, but H200 lead times are already long, and Blackwell should be even longer,” according to the report.

A separate Morgan Stanley report in July said that the surge in Hopper builds and demand has removed concern about a pre-Blackwell pause.

“That said, it is clear that the market is at the tail end of the Hopper cycle, and the frothiness and visibility is lower than it was – and given the enthusiasm that Joe (Moore, Morgan Stanley research analyst) is hearing for Blackwell, lower than it will be,” according to the report. “But he notes that demand side indications remain robust for Hopper. Hopper builds continue to go up, as H100 starts to transition to H200 (bringing better memory bandwidth from HBM3e as well as higher memory content), and Joe is hearing confidence that sales of both products will remain strong.”


Data Center Revenue, AI Demand

A July report by KeyBanc estimated “$200B+ data center revenue in 2025 from Nvidia as demand for GB200 hardware ramps on top of AI server revenue growth of over 150% in 2024.”

“Hyperscaler capex continues to climb to new heights with the combination of Meta, Google, Microsoft, Oracle, Amazon, and Alibaba expected to cross $220B in capex in 2024, up 35.4% year-on-year,” according to KeyBanc. “This growth in investment fuels trickle-down demand for data centers, equipment, and the like, but without AI use-cases materializing in significant ways it is fair to wonder whether the model training and compute is getting ahead of itself.”
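
A quick back-of-the-envelope check of KeyBanc’s figures, illustrative only and using just the numbers quoted above, implies a 2023 capex base of roughly $162 billion for that group of companies:

# Back-of-the-envelope check of KeyBanc's quoted hyperscaler capex figures.
capex_2024 = 220.0   # $B, combined capex KeyBanc expects the group to cross in 2024
yoy_growth = 0.354   # 35.4% year-on-year growth

capex_2023_implied = capex_2024 / (1 + yoy_growth)
print(f"Implied 2023 capex: ~${capex_2023_implied:.0f}B, "
      f"an increase of ~${capex_2024 - capex_2023_implied:.0f}B year-on-year")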

An August report by Morgan Stanley said that Nvidia’s data center business is expected “to drive much of the growth over the next 5 years, as enthusiasm for generative AI has created a strong environment for AI/ML hardware solutions – NVDA’s being one of the most important.”

“Incremental opportunities in AI/ML software & services, networking, and ADAS can drive growth even higher,” according to Morgan Stanley.

Melius Research said in the August report that it would like to see Nvidia executives address concerns “that AI apps are not taking off in the enterprise fast enough to drive inferencing.”

“Checks do indicate that training is where the beats are coming from vs. inferencing so it will be interesting to hear if inferencing is still 40% of NVDA GPU revenue on a run rate basis,” according to the Melius report.

The report also noted that “the consensus for revenue growth in CY26 (FY27) of just 18% and 15% in the Data Center seems conservative.”

The August report by Bank of America added that Meta expects 10 times the compute requirement for the next-generation Llama 4 large language model compared to Llama 3.1.

“Longer-term AI accelerator demand sounds solid,” according to the firm.

A separate report by Bank of America in August called concerns about the return on investment for high AI capital expenditures “premature and inconclusive.”

Among the firm’s reasons, according to the report: “AI capex will be front-loaded, similar to past hardware projects such as 3G/4G/5G network upfront deployments that lasted 3-4 years,” “AI capex is as much defensive (protecting search, social or ecommerce dominance) as it is offensive (new revenue streams)” and “enterprise and sovereign AI adoption has yet to start in a big way.”

Analysts on Wednesday’s call might ask Nvidia executives about hyperscalers ramping up custom chip design and application-specific integrated circuit (ASIC) development, and whether that trend has eaten into Nvidia’s business.

“By developing their own ASIC chips, hyperscalers can effectively reduce their costs of buying NVDA’s GPU and servers for AI training and inference purposes,” Morgan Stanley said in an August report.


InfiniBand, Software Sales

Melius Research’s August report said to expect sales of Nvidia’s InfiniBand (IB) networking products to be “up nicely from F1Q given supply and more high bandwidth memory availability could help in F4Q.”

The firm expects Nvidia’s networking and InfiniBand business to “be up q/q into fiscal year-end.”

Melius Research said that growing software sales from Nvidia could improve the company’s margins, a topic of increasing interest among investors.

Nvidia software “could grow from $1B in ARR last year to $3B next year and ~$6B by FY27,” according to Melius.


Rubin, Power Use

The Melius Research report in August said that analysts may listen for whether Nvidia co-founder and CEO Jensen Huang is “still on a one-year upgrade cadence with the successor to Blackwell, Rubin, driving growth in CY2026.”

The firm predicts that Nvidia will continue to tackle worries around power availability by “helping customers find affordable and available electricity.”

“Blackwell and Rubin innovations will help offset shortages and space constraints but the view is that customers will just buy more and still use a lot of power,” according to Melius. “Blackwell is expected to be 25x more energy efficient vs Hopper and Rubin will be even more efficient. Nvidia recently introduced cooling solutions with several companies—but does it need to do even more to make sure customers get power?”

The August report by Bank of America said that the firm is keeping lower expectations for Nvidia revenue to reflect “the potential for growing pains” as Blackwell’s high power density and packaging requirements increase its complexity.

The “top of the line Blackwell product (GB 200 NVL 72) is expected to be the industry’s densest AI computing rack but requires unprecedented complexity in power density (120 kW/rack), liquid cooling, TSMC chip packaging (CoWoS-L vs traditional CoWoS-S), and ARM-based Grace CPU (vs. conventional x86 CPU) for every 2 Blackwell GPU,” according to the investment firm.
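
As a rough illustration of that density, dividing the quoted 120 kW of rack power across the 72 GPUs works out to about 1.7 kW per GPU slot. The sketch below deliberately ignores the Grace CPUs, networking and cooling overhead that share the same budget, so the real per-GPU draw is lower.

# Rough, illustrative power math for a GB200 NVL72 rack, based on the figures above.
# The 120 kW covers the whole rack (CPUs, switches, cooling), so dividing it
# across the GPUs alone overstates what each GPU actually draws.
rack_power_kw = 120
gpus_per_rack = 72
grace_cpus_per_rack = gpus_per_rack // 2   # 1 Grace CPU per 2 Blackwell GPUs

print(f"~{rack_power_kw * 1000 / gpus_per_rack:.0f} W of rack power per GPU slot")
print(f"{grace_cpus_per_rack} Grace CPUs paired with {gpus_per_rack} GPUs per rack")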

Nvidia could work around Blackwell delays by “selling more flexible (MGX) boards instead of complete rack-scale systems” and by “depopulating rack-scale systems (using 36 GPU instead of 72 GPU version) and spacing racks further apart.”

Another possibility is lowering the “amount of high-bandwidth memory stacks to reduce packaging complexity (allows use of conventional CoWoS-S rather than CoWoS-L).”


