Nvidia (NVDA) Q1 2024 Earnings Call Transcript | The Motley Fool


Overall, end demand was solid and consistent with seasonality, demonstrating resilience against a challenging consumer spending backdrop. Generative AI is also driving a step function increase in inference workloads. Because of their size and complexity, these workloads require acceleration. The latest MLPerf industry benchmark, released in April, showed NVIDIA's inference platforms deliver performance that is orders of magnitude ahead of the industry, with unmatched versatility across diverse workloads.

The second table on the Estimates page shows analyst Earnings Estimates for the next two quarters, along with estimates for the current and next fiscal year. The Average Estimate for each time period is the mean of the estimates from all contributing analysts.
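
For clarity, the Average Estimate is simply the arithmetic mean of the individual analyst estimates for that period. A minimal Python sketch, using made-up per-analyst values purely for illustration:

```python
# Hypothetical per-analyst EPS estimates for one period (values are made up for illustration)
analyst_estimates = [0.90, 0.92, 0.95, 0.89, 0.94]

# The Average Estimate is the mean across all contributing analysts
average_estimate = sum(analyst_estimates) / len(analyst_estimates)
print(f"Average Estimate from {len(analyst_estimates)} analysts: {average_estimate:.2f}")
```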

  • And the large one would teach the smaller ones how to be good AIs, and so you use the large one to generate prompts to align the smaller ones and so on and so forth.
  • This adds to growing momentum for Grace with both CPU-only and CPU-GPU opportunities across AI and cloud and supercomputing applications.
  • You saw that I announced L4, L40, H100 NVL, which also has H100.

The second thing is utilization: the range of application types you can accelerate and the versatility of your architecture are what keep utilization high. If you can do one thing, and only one thing, incredibly fast, then your data center is largely underutilized, and it's hard to scale that out. And the networking, the fiber optics, the incredible transceivers, the NICs, the SmartNICs, the switches, all of that has to come together in order for us to stand up a data center. And so, we were already in full production when that moment came.

Nvidia Earnings Date & Event Calendar

The Santa Clara, Calif.-based company late Wednesday said it earned an adjusted $1.09 a share on sales of $7.19 billion in the quarter ended April 30. Analysts polled by FactSet had expected Nvidia earnings of 92 cents a share on sales of $6.53 billion. On a year-over-year basis, Nvidia earnings dropped 20% while sales declined 13% amid continued weak gaming-chip sales. All of the fiber optics are optimized end to end, and these things are running at incredible line rates.
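
As a rough illustration of how those beats translate into surprise percentages, here is a minimal Python sketch using the figures reported above against the FactSet consensus; the function name and layout are illustrative, not from any particular library.

```python
def surprise_pct(actual: float, estimate: float) -> float:
    """Percentage by which the reported figure beat (+) or missed (-) the estimate."""
    return (actual - estimate) / estimate * 100

# Q1 figures reported above (quarter ended April 30)
eps_surprise = surprise_pct(actual=1.09, estimate=0.92)          # ~18.5% beat on adjusted EPS
revenue_surprise = surprise_pct(actual=7.19e9, estimate=6.53e9)  # ~10.1% beat on sales

print(f"EPS surprise: {eps_surprise:.1f}%")
print(f"Revenue surprise: {revenue_surprise:.1f}%")
```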

nvda stock earnings date

GAAP and non-GAAP gross margins are expected to be 68.6% and 70%, respectively, plus or minus 50 basis points. GAAP and non-GAAP operating expenses are expected to be approximately $2.71 billion and $1.9 billion, respectively. GAAP and non-GAAP other income and expenses are expected to be an income of approximately $90 million, excluding gains and losses from nonaffiliated investments.
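
To make the plus-or-minus 50 basis points concrete, a small sketch (purely illustrative) that converts the guided gross-margin midpoints into low/high ranges:

```python
def guidance_range(midpoint_pct: float, tolerance_bps: float = 50) -> tuple[float, float]:
    """Turn a guided margin midpoint and a +/- tolerance in basis points into a (low, high) range."""
    delta = tolerance_bps / 100  # 100 basis points = 1 percentage point
    return midpoint_pct - delta, midpoint_pct + delta

print(guidance_range(68.6))  # GAAP gross margin range: (68.1, 69.1)
print(guidance_range(70.0))  # non-GAAP gross margin range: (69.5, 70.5)
```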

Investor Services

And the tiny-sized ones you could put in your phone and your PC, and so on and so forth. I wanted to follow up on that in terms of the focus on inference. It's pretty clear that this is a really big opportunity around large language models. When you collect new data, you train with the new data.


And we integrate our architecture into all the world's clouds. From the moment of delivery of the product to the standing up and the deployment, the time to operations of a data center, if you're not good at it and not proficient at it, can take months. But at the same time, we're seeing incredible orders to retool the world's data centers. And so, I think you're seeing the beginning of, call it, a 10-year transition to basically recycle or reclaim the world's data centers and build them out as accelerated computing. You'll have a pretty dramatic shift in the spend of a data center from traditional computing to accelerated computing, with SmartNICs, smart switches and, of course, GPUs, and the workload is going to be predominantly generative AI. Q1 revenue was $7.19 billion, up 19% sequentially and down 13% year on year.
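
As a back-of-the-envelope check on those growth figures, a small sketch (illustrative only, with rounded results) that backs out the implied prior-quarter and year-ago revenue from the $7.19 billion reported:

```python
q1_revenue = 7.19  # billions of dollars, as reported above

# Up 19% sequentially implies prior-quarter revenue of roughly q1 / 1.19
prior_quarter = q1_revenue / 1.19       # ~6.04
# Down 13% year on year implies year-ago revenue of roughly q1 / 0.87
year_ago = q1_revenue / (1 - 0.13)      # ~8.26

print(f"Implied prior quarter: ~${prior_quarter:.2f}B; implied year-ago quarter: ~${year_ago:.2f}B")
```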

TD Ameritrade displays two types of stock earnings numbers, which are calculated differently and may report different values for the same period. GAAP earnings are the official numbers reported by a company, while non-GAAP earnings adjust those figures and are the numbers typically used in earnings history and forecasts. This widget charts the estimated and reported earnings for the last four quarters. NVDA stock has risen 109% year to date through Wednesday's close on investor enthusiasm for its critical role in the burgeoning artificial intelligence field. On Tuesday, Nvidia announced AI technology partnerships with Microsoft (MSFT) and Dell Technologies (DELL). “Nvidia delivered strong upside in terms of reported fiscal Q1 results, as well as fiscal Q2 guide,” Wells Fargo analyst Aaron Rakers said in a note to clients.

Nvidia Stock Soars on Earnings Beat. AI Is Even Bigger Than You Thought.

Chief Executive Jensen Huang said his company is ramping up production to meet the massive demand for artificial intelligence technology. For the current quarter, Nvidia forecast sales of $11 billion, up 64% year over year. That target obliterated Wall Street's consensus estimate of $7.2 billion for fiscal second-quarter revenue. Good afternoon, and congratulations on the strong results and execution. I really appreciate some of the focus today on your networking products. I mean, it's really an integral part of maximizing the full performance of your compute platforms.
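
To put the guide in context, a short illustrative calculation (the variable names are mine, and results are rounded) comparing the $11 billion forecast with the consensus and with the year-ago quarter implied by 64% growth:

```python
guide = 11.0       # billions, company forecast for the current quarter
consensus = 7.2    # billions, Wall Street consensus cited above

beat_vs_consensus = (guide - consensus) / consensus * 100  # ~53% above consensus
implied_year_ago = guide / 1.64                            # ~6.7B implied by "up 64% year over year"

print(f"Guide exceeds consensus by ~{beat_vs_consensus:.0f}%")
print(f"Implied year-ago quarter revenue: ~${implied_year_ago:.1f}B")
```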

First, CSPs around the world are racing to deploy our flagship Hopper and Ampere architecture GPUs to meet the surge in interest from enterprise and consumer AI applications for training and inference. In addition to enterprise AI adoption, these CSPs are serving strong demand for H100 from generative AI pioneers. Second, consumer internet companies are also at the forefront of adopting generative AI and deep learning-based recommendation systems, driving strong growth.

Zacks Research is Reported On:

Some of them, in fact many of them, could come from companies like ServiceNow and Adobe that we're partnering with in AI Foundations. And they'll create a whole bunch of generative AI APIs that companies can then connect into their workflows or use as an application. And, of course, there'll be a whole bunch of internet service companies.

  • You guys have been working on this for some time behind the scenes, where you sell in the hardware to your hyperscale partners and then lease it back for your own business.
  • We just have to be careful here, but we’re not here to guide on the second half.
  • I’m going to pause there and see if Jensen wants to add a little bit more.
  • Guided by large language models, generative AI models can generate amazing content, with models to fine-tune, guardrail, and align to guiding principles.

That amount of data could consume some 40%, 50%, 60% of your computing time. NVIDIA AI Enterprise is the only GPU-accelerated stack in the world that is enterprise safe and enterprise supported. There are 4,000 different packages that build up NVIDIA AI Enterprise, and they represent the end-to-end operating engine of the entire AI workflow. It's the only one of its kind, from data ingestion to data processing. NVIDIA's universal GPU, the fact that we accelerate so many of these stacks, makes our utilization incredibly high. And so, number one is throughput, and that's a software-intensive problem and a data center architecture problem.

We are significantly increasing our supply to meet their surging demand. Large language models can learn information encoded in many forms. To help customers deploy generative AI applications at scale, at GTC we announced four major new inference platforms that leverage the NVIDIA AI software stack. These include the L4 Tensor Core GPU for AI video; the L40 for Omniverse and graphics rendering; the H100 NVL for large language models; and the Grace Hopper Superchip for LLMs, as well as recommendation systems and vector databases.

And the amount of engineering and distributed computing — fundamental computer science work is really quite extraordinary. And so, number one, it’s a full stack challenge, and you have to optimize it across the whole thing and across just a mind-blowing number of stacks. Good afternoon, everyone, and welcome to NVIDIA’s conference call for the first quarter of fiscal 2024. With me today from NVIDIA are Jensen Huang, president and chief executive officer; and Colette Kress, executive vice president and chief financial officer. I’d like to remind you that our call is being webcast live on NVIDIA’s investor relations website. Nvidia announced that it hit record revenue in its gaming, data center, and professional visualization platforms.

With Q2 gross margins expected to near 70%, maybe the company could lower the prices of some of those high-end GPUs. The first table on the Estimates page compares Actual vs. Estimated earnings for the last four quarters. This table is designed to uncover Earnings Surprises — times when a stock's earnings either out-perform or under-perform the analysts' expectations. Typically, when actual earnings out-perform the analysts' estimate, a stock's share price tends to rise; when the actual earnings come in under the analysts' estimate, the share price tends to fall.
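
A brief sketch of how such an Actual vs. Estimated table flags surprises. The earlier quarters below are placeholder values I made up for illustration; only the most recent row uses the adjusted EPS and estimate reported in this article.

```python
# Compare actual vs. estimated EPS for the last four quarters and flag the surprise direction.
quarters = [
    ("Four quarters ago", 0.51, 0.50),   # placeholder figures
    ("Three quarters ago", 0.58, 0.70),  # placeholder figures
    ("Two quarters ago", 0.88, 0.81),    # placeholder figures
    ("Most recent quarter", 1.09, 0.92), # adjusted EPS vs. estimate cited in this article
]

for name, actual, estimate in quarters:
    direction = "beat" if actual > estimate else "miss" if actual < estimate else "in line"
    pct = (actual - estimate) / estimate * 100
    print(f"{name}: actual {actual:.2f} vs. estimate {estimate:.2f} -> {direction} ({pct:+.1f}%)")
```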

Software is really important to our accelerated platforms. Nvidia's data center group reported $4.28 billion in sales, versus expectations of $3.9 billion, a 14% annual increase. Nvidia said that performance was driven by demand for its GPU chips from cloud vendors as well as large consumer internet companies, which use Nvidia chips to train and deploy generative AI applications like OpenAI's ChatGPT. “A trillion dollars of installed global data center infrastructure will transition from general purpose to accelerated computing as companies race to apply generative AI into every product, service and business process,” Huang said. The computer industry is going through two simultaneous transitions: accelerated computing and generative AI. CPU scaling has slowed, yet computing demand is strong, and now, with generative AI, supercharged.