The 10 Biggest Nvidia News Stories Of 2024 (So Far)
CRN rounds up the 10 biggest Nvidia news stories of 2024 so far, which range from three software startup acquisition deals and plans to boost AI PC development, to expanded partnerships with major tech vendors and significant financial milestones.
It’s halfway through 2024, and Nvidia has managed to fit what feels like a year’s worth of news developments into six months or so.
The flurry of activity around the full-stack computing company, née GPU designer, reflects how much work it has done to continue its dominance of the AI computing space, which allowed it to surpass semiconductor giant Intel in annual revenue last year.
[Related: The 10 Hottest Semiconductor Startups Of 2024 (So Far)]
In the past six months, Nvidia didn’t just start shipping its next-generation H200 data center GPU. It revealed plans to debut an even more powerful generation of GPUs later this year, then next year, then the year after and so on. And that doesn’t even touch upon the company’s plans for shipping new CPUs, networking chips and full systems.
The Santa Clara, Calif.-based company also spent ample time amping up its efforts to enable AI application development and deployment through expanded software and services offerings for data centers, cloud environments, edge devices and PCs.
All these efforts have been backed by new partnerships with hundreds of tech vendors, ranging from behemoths like Amazon Web Services and Dell Technologies to smaller companies like Databricks and Snowflake.
What follows are the 10 biggest Nvidia news stories of 2024 so far, which range from three software startup acquisition deals and plans to boost AI PC development, to expanded partnerships with major tech vendors and significant financial milestones.
10. Nvidia Intros New Industrial AI Solutions
Nvidia expanded its portfolio of industrial AI solutions with a general-purpose foundation model for humanoid robot learning and a next-generation edge computer, among other things.
The foundation model is called Project GR00T, and it’s designed to translate text, video and in-person demonstrations into actions performed by a humanoid robot.
To power such robots, Nvidia announced Jetson Thor, a new computer based on the company’s Thor system-on-chip. The computer features a Blackwell GPU that can provide 800 teraflops of 8-bit floating-point AI performance to run multimodal models like GR00T.
The company also announced an expansion of its Isaac software platform for robotics. The new elements included Isaac Lab for enabling reinforcement learning, the OSMO compute orchestration service, Isaac Manipulator for enabling dexterity and modular AI capabilities in robotic arms and Isaac Perceptor for enabling multi-camera, 3-D surround-vision capabilities.
The new Isaac capabilities are expected to be available in the third quarter.
9. Nvidia Acquires Three Software Startups
Nvidia acquired three software startups in the first half of 2024 to build out its AI computing business.
In late April, the chip designer announced it had reached a deal to acquire Israel-based AI infrastructure management startup Run:ai. The startup provides Kubernetes-based workload management and orchestration software, which Nvidia said it plans to use to drive its DGX Cloud business and enhance the capabilities of DGX and HGX server customers.
Multiple reports said the Run:ai deal was valued at around $700 million.
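For context on the kind of workload such an orchestration layer manages, here is a minimal sketch of a single-GPU job submitted to Kubernetes with the official Python client. It uses the standard nvidia.com/gpu resource name exposed by Nvidia’s device plugin; the pod name and container image are illustrative, and Run:ai’s scheduler adds capabilities (such as fractional GPU sharing and job queueing) that plain Kubernetes lacks.

```python
# Minimal sketch: submit a single-GPU job to Kubernetes.
# Pod and image names are illustrative; "nvidia.com/gpu" is the standard
# resource name exposed by Nvidia's Kubernetes device plugin.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="train-job-example"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="nvcr.io/nvidia/pytorch:24.05-py3",  # illustrative tag
                command=["python", "train.py"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}  # request one GPU
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```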
In May, Nvidia acquired another Israeli startup called Deci, which develops software that can speed up the inference of AI models while retaining accuracy on any hardware. Israeli publication Calcalist said the Deci acquisition was valued at roughly $300 million.
On July 3, Nvidia acquired Shoreline.io, a startup founded by a former Amazon Web Services executive that provides software for automatically fixing issues in data center infrastructure. The startup confirmed the deal on its website, and one of its investors, Canvas Ventures, said Shoreline’s team will join Nvidia’s DGX Cloud business.
Bloomberg first reported on the deal in June and said it was valued at about $100 million.
8. Nvidia Reveals Plan To Boost RTX AI PC Development
At the Computex 2024 event in Taiwan, Nvidia revealed a plan to enable developers to build AI applications for PCs running on its RTX GPUs with the new Nvidia RTX AI Toolkit.
Available later this month, the toolkit consists of tools and software development kits that allow developers to customize, optimize and deploy generative AI models on RTX AI PCs, a term the company started using for RTX-powered computers in January.
[Read More: 12 Big Nvidia, Intel And AMD Announcements At Computex 2024]
The toolkit includes the previously announced Nvidia AI Workbench, a free platform for creating, testing and customizing pretrained generative AI models.
It also includes the new Nvidia AI Inference Manager software development kit, which “enables an app to run AI locally or in the cloud, depending on the user’s system configuration or even the current workload,” according to the company.
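The SDK’s actual interface isn’t described in the announcement, but the dispatch pattern the quote describes can be sketched in a few lines. Everything below is hypothetical and illustrative; the capability check, endpoints and threshold are not the real AI Inference Manager API:

```python
# Hypothetical sketch of local-vs-cloud inference dispatch; none of these
# names come from Nvidia's AI Inference Manager SDK.
import requests

LOCAL_ENDPOINT = "http://localhost:8000/v1/completions"    # e.g. a local inference server
CLOUD_ENDPOINT = "https://api.example.com/v1/completions"  # illustrative cloud endpoint
MIN_LOCAL_VRAM_GB = 8  # illustrative capability threshold

def has_capable_gpu(min_vram_gb: int) -> bool:
    """Rough capability probe; a real SDK would query the driver directly."""
    try:
        import pynvml  # Nvidia management library bindings
        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)
        vram_gb = pynvml.nvmlDeviceGetMemoryInfo(handle).total / 1e9
        return vram_gb >= min_vram_gb
    except Exception:
        return False  # no usable local GPU; fall back to the cloud

def complete(prompt: str) -> str:
    # Route to local hardware when it is capable enough, otherwise to the cloud.
    endpoint = LOCAL_ENDPOINT if has_capable_gpu(MIN_LOCAL_VRAM_GB) else CLOUD_ENDPOINT
    resp = requests.post(endpoint, json={"prompt": prompt, "max_tokens": 64}, timeout=60)
    resp.raise_for_status()
    return resp.json()["choices"][0]["text"]
```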
Beyond the RTX AI Toolkit, Nvidia said it’s working with Microsoft to give developers easy API access to “GPU-accelerated small language models that enable retrieval-augmented generation capabilities that run on-device as part of Windows Copilot Runtime.”
7. Nvidia Launches Inference Microservices
Nvidia used Computex 2024 to mark the launch of its inference microservices, which are meant to help speed up the development of generative AI applications for data centers and PCs.
Known officially as Nvidia NIM, the microservices consist of AI models served in optimized containers that developers can integrate within their applications. These containers can include Nvidia software components such as Nvidia Triton Inference Server and Nvidia TensorRT-LLM to optimize inference workloads on its GPUs.
The microservices include more than 40 AI models developed by Nvidia and other companies, such as Databricks DBRX, Google Gemma, Meta Llama 3, Microsoft Phi-3, Mistral Large, Mixtral 8x22B and Snowflake Arctic.
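To illustrate how developers consume these containers: a running NIM microservice exposes an HTTP inference API, so integrating a model can be as simple as the sketch below. The port, route and model identifier are assumptions based on typical NIM deployments rather than guarantees for every model:

```python
# Minimal sketch: query a locally running NIM container over HTTP.
# Port 8000, the /v1/chat/completions route and the model id are
# assumptions based on typical NIM deployments.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "meta/llama3-8b-instruct",  # illustrative model id
        "messages": [{"role": "user", "content": "Summarize NIM in one sentence."}],
        "max_tokens": 64,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```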
Nvidia said nearly 200 technology partners, including Cloudera, Cohesity and NetApp, “are integrating NIM into their platforms to speed generative AI deployments for domain-specific applications.” These apps include things such as copilots, code assistants and digital human avatars.
The company also highlighted that NIM is supported by data center infrastructure software providers VMware, Nutanix, Red Hat and Canonical as well as AI tools and MLOps providers like Amazon SageMaker, Microsoft Azure AI and Domino Data Lab.
Global system integrators and service delivery partners such as Accenture, Deloitte, Quantiphi, Tata Consultancy Services and Wipro plan to use NIM to help businesses build and deploy generative AI applications, Nvidia added.
NIM is available to businesses through the Nvidia AI Enterprise software suite, which costs $4,500 per GPU per year. It’s also available to members of the Nvidia Developer Program, who can “access NIM for free for research, development and testing on their preferred infrastructure,” the company said.
6. Nvidia Reveals Extended Data Center Road Map
At Computex 2024, Nvidia revealed that it plans to release successors to its upcoming Blackwell GPUs in the next two years and launch a second-generation CPU in 2026.
The chip designer made the disclosures in an expanded data center road map that provided basic details for next-generation GPUs, CPUs, network switch chips and network interface cards.
[Read More: Analysis: As Nvidia Takes AI Victory Lap, AMD Doubles The Trouble For Intel]
The plan to release a new data center GPU architecture every year is part of a one-year release cadence Nvidia announced last year, representing an acceleration of the company’s previous strategy of releasing new GPUs roughly every two years.
“Our basic philosophy is very simple: build the entire data center scale, disaggregate it and sell it to you in parts on a one-year rhythm, and we push everything to the technology limits,” Nvidia CEO Jensen Huang said in his Computex keynote in early June.
The expanded road map came less than two months after Nvidia revealed its next-generation Blackwell data center GPU architecture, which is expected to debut later this year. At its GTC event in March, the company said Blackwell will enable up to 30 times greater inference performance and consume 25 times less energy for massive AI models compared to the Hopper architecture, which debuted in 2022 with the H100 GPU.
In the road map revealed by Huang during his Computex keynote, the company outlined a plan to follow up Blackwell in 2025 with an updated architecture called Blackwell Ultra, which the CEO said will be “pushed to the limits.”
In the same time frame, the company is expected to release an updated version of its Spectrum-X800 Ethernet Switch, called the Spectrum Ultra X800.
Then in 2026, Nvidia plans to debut an all-new GPU architecture called Rubin, which will use HBM4 memory. This will coincide with several other new chips, including a follow-up to Nvidia’s Arm-based Grace CPU called Vera.
Also slated for 2026 are the NVLink 6 Switch, which will double chip-to-chip bandwidth to 3,600 GBps; the CX9 SuperNIC, which will be capable of 1,600 GBps; and the X1600 generation of InfiniBand and Ethernet switches.
5. Competition Grows Against Nvidia’s AI Chip Dominance
As Nvidia moves forward with its plan to release increasingly powerful AI chips at a faster release cadence, large and small rivals announced new products in the first half of the year that are meant to whittle away at the GPU giant’s AI computing dominance.
At Computex 2024, AMD announced plans to release a new Instinct data center GPU, the MI325X, later this year with significantly greater high-bandwidth memory capacity than its MI300X chip or Nvidia’s H200, enabling servers to handle larger generative AI models than before.
[Related: Forrest Norrod On How AMD Is Fighting Nvidia With ‘Significant’ AI Investments]
AMD announced the details as part of a newly disclosed plan to release a new data center GPU every year starting with the CDNA 3-based MI325X. In an extended road map, AMD said it will then debut in 2025 the MI350 series, which will use its CDNA 4 architecture to provide increased compute performance and “memory leadership.” The next generation, the Instinct MI400 series, will use a future iteration of CDNA architecture and arrive in 2026.
Intel, on the other hand, announced its plan to debut its Gaudi 3 accelerator chip in the third quarter with the support of major vendors like Dell Technologies, Hewlett Packard Enterprise, Lenovo, Supermicro, Asus and Gigabyte.
The company later revealed that Gaudi 3’s eight-chip server platform will have a list price of $125,000, which it said will give Gaudi 3 2.3 times greater performance-per-dollar for inference and 90 percent better training throughput than Nvidia’s H100.
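The perf-per-dollar figure can be made concrete with a short worked computation. Since performance-per-dollar is throughput divided by price, holding inference throughput equal, a 2.3x advantage at a $125,000 list price implies a comparison platform priced at roughly 2.3 times as much. The equal-throughput assumption below is ours, for illustration, not Intel’s:

```python
# Worked arithmetic behind Intel's performance-per-dollar claim.
# Only the $125,000 list price and the 2.3x ratio come from Intel;
# the equal-throughput assumption is illustrative.
gaudi3_price = 125_000   # list price of the eight-chip platform (USD)
claimed_ratio = 2.3      # claimed perf-per-dollar advantage vs. H100

# perf_per_dollar = throughput / price, so at equal inference throughput
# Intel's ratio implies a comparison platform priced 2.3x higher:
implied_h100_platform_price = gaudi3_price * claimed_ratio
print(f"Implied H100 platform price: ${implied_h100_platform_price:,.0f}")  # $287,500
```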
Meanwhile, cloud computing giant Google Cloud announced the general availability of its TPU v5p accelerator chip, which it said can train large language models nearly three times faster than the previous-generation TPU v4.
There is also a crop of AI chip startups looking to challenge Nvidia, including Cerebras Systems, which revealed its Wafer Scale Engine 3 chip in March.
4. Nvidia Reveals Blackwell GPU Architecture
Nvidia has revealed its next-generation Blackwell GPU architecture as the much-hyped successor to the AI chip giant’s Hopper platform, claiming it will enable up to 30 times greater inference performance and consume 25 times less energy for massive AI models.
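Those two claims combine into an implication worth spelling out: the energy figure is per unit of work, so a chip that does 30 times the work while spending one twenty-fifth of the energy per result draws about 1.2 times the power overall. A quick sketch of that arithmetic, with Hopper normalized to 1:

```python
# Combining Nvidia's two Blackwell claims: 30x inference throughput and
# 25x less energy per unit of work, relative to a Hopper baseline of 1.
hopper_throughput = 1.0  # results per second (relative)
hopper_energy = 1.0      # joules per result (relative)

blackwell_throughput = 30 * hopper_throughput
blackwell_energy = hopper_energy / 25

# power = (joules per result) * (results per second)
relative_power = (blackwell_energy * blackwell_throughput) / (hopper_energy * hopper_throughput)
print(f"Implied relative power draw: {relative_power:.1f}x")  # -> 1.2x
```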
At its first in-person GTC event in nearly five years, the chip designer in March unveiled the first GPU designs to use the Blackwell architecture, which it said comes with “six transformative technologies for accelerated computing” that will “help unlock breakthroughs” in fields like generative AI and data processing, among others.
[Read More: Nvidia Reveals Next-Gen Blackwell GPUs, Promised To ‘Unlock Breakthroughs’ In GenAI]
The designs are expected to arrive later this year, but the company gave no further clarification on timing. Cloud service providers expected to provide Blackwell-based instances include Amazon Web Services, Microsoft Azure, Google Cloud and Oracle Cloud Infrastructure as well as several other players, like Lambda, CoreWeave and IBM Cloud.
On the server side, Cisco Systems, Dell Technologies, Hewlett Packard Enterprise, Lenovo and Supermicro are expected to offer a plethora of Blackwell-based systems. Other OEMs supporting the GPUs include ASRock Rack, Asus, Eviden and Gigabyte.
The first confirmed designs to use Blackwell include the B100 and the B200 GPUs, the successors to the Hopper-based H100 and H200 for x86-based systems, respectively. The B200 is expected to include greater high-bandwidth memory capacity than the B100.
The initial designs also include the GB200 Grace Blackwell Superchip, which, on a single module, connects two B200 GPUs with the company’s Arm-based, 72-core Grace CPU, the same CPU previously paired with the H200 and H100.
3. Major Tech Vendors Extend Partnerships With Nvidia
Nvidia sought to continue its dominance in the AI computing space by announcing renewed and extended partnerships with major tech vendors in the first half of the year.
In the cloud infrastructure market, Amazon Web Services, Microsoft Azure, Google Cloud, Oracle Cloud Infrastructure and other providers announced plans to launch new services based on Nvidia’s upcoming Blackwell GPUs and DGX Cloud platform.
[Related: Nvidia’s 10 New Cloud AI Products For AWS, Microsoft And Google]
For example, AWS said it plans to build Project Ceiba, which the company said will become “one of the world’s fastest AI supercomputers,” by using Nvidia’s GB200 NVL72 systems, which are multi-node, liquid-cooled rack-scale servers that contain 36 of the AI chip giant’s upcoming GB200 Grace Blackwell Superchips (72 Blackwell GPUs in total, hence the name) as well as the company’s BlueField-3 DPUs.
Among server vendors, Hewlett Packard Enterprise announced a comprehensive generative AI solution portfolio co-branded as “Nvidia AI Computing By HPE,” Lenovo announced Lenovo AI Fast Start and other solutions based on Nvidia technologies, Dell Technologies announced the Dell AI Factory with Nvidia end-to-end AI enterprise solution, and Cisco Systems announced the Nvidia-based Cisco Nexus HyperFabric AI cluster. Other OEMs that announced new Nvidia-based solutions include Supermicro, Gigabyte and Asus.
Elsewhere in the IT industry, Nvidia announced new partnerships with Equinix, ServiceNow, SAP, NetApp, Nutanix, IBM, Databricks, Snowflake and many others.
2. Nvidia Surpasses Intel In Annual Revenue
When Nvidia reported its fourth-quarter earnings in February, they showed that the company surpassed Intel in total annual revenue for its recently completed fiscal year, mainly thanks to high demand for its GPUs driven by generative AI development.
The AI chip giant finished its 2024 fiscal year, which ended Jan. 28, with $60.9 billion in revenue, up 126 percent from the previous year, more than doubling its sales.
[Read More: Analysis: How Nvidia Surpassed Intel In Annual Revenue And Won The AI Crown]
Meanwhile, Intel finished its 2023 fiscal year, which ended in December, with $54.2 billion in sales, down 14 percent from the previous year.
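Working backward from those reported growth rates shows how dramatic the reversal was. The implied prior-year figures below are derived from the percentages, not separately reported in this story:

```python
# Back out the prior-year revenue figures implied by the reported
# growth rates. Figures are in billions of USD; results are approximate.
nvidia_fy2024 = 60.9   # fiscal year ended Jan. 28, 2024
nvidia_growth = 1.26   # up 126 percent

intel_fy2023 = 54.2    # fiscal year ended December 2023
intel_decline = 0.14   # down 14 percent

nvidia_prior = nvidia_fy2024 / (1 + nvidia_growth)  # ~ $26.9B
intel_prior = intel_fy2023 / (1 - intel_decline)    # ~ $63.0B

print(f"Nvidia prior year: ~${nvidia_prior:.1f}B")
print(f"Intel prior year:  ~${intel_prior:.1f}B")
# A year earlier, Intel's revenue was more than double Nvidia's;
# one year later the order had reversed.
```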
While Nvidia’s fiscal year finished roughly one month after Intel’s, this is the closest we’ll get to understanding how the two industry titans compared in a year when demand for AI solutions propped up the data center and cloud markets in a shaky economy.
Nvidia pulled off this feat because the company had spent years building a comprehensive and integrated stack of chips, systems, software and services for accelerated computing—with a major emphasis on data centers, cloud computing and edge computing—then found itself last year at the center of a massive demand cycle due to hype around generative AI.
Intel, in the meantime, has lagged far behind Nvidia in the kind of accelerator chip adoption by developers, OEMs, cloud service providers, partners and customers that has allowed Nvidia to flourish. As a result, the semiconductor giant has had to rely on its traditional data center products, mainly Xeon server CPUs, to generate a majority of revenue for its data center business unit, and that area suffered due to lower demand.
1. Nvidia Became World’s Most Valuable Company For A Moment
In mid-June, Nvidia surpassed Apple and Microsoft in market capitalization, which made it the world’s most valuable company at the time.
The chip designer was able to achieve the milestone by hitting a $3.34 trillion market cap on June 18 as its stock price rose more than 3.5 percent.
[Read More: Analysis: Embraced By IT World, Nvidia Is Now World’s Most Valuable Company]
While Nvidia’s market cap has since sunk below Microsoft’s and Apple’s, leaving it the world’s third most valuable company, reaching the top was nevertheless a reflection of how central it has become to small and large companies building the future of IT infrastructure.
While Nvidia has long stood out due to continuous advances in its GPUs—which have been key to providing the accelerated performance needed to make generative AI advancements possible—a great deal of its recent success has come from its focus on evolving from a chip designer to what CEO Jensen Huang calls a “full-stack” computing company.