VCs look to AI data centers, local LLMs, and domain models for growth

Looking to break out of the slowdown in deal activity and exit values, venture capital firms are going all in on emerging AI opportunities with the potential to deliver long-term growth. Pitchbook's latest Artificial Intelligence and Machine Learning Report, released today, reflects the continued challenges VCs face, starting with declining deal activity and exit values. Pitchbook's analysis identifies AI data centers, local large language models (LLMs) and domain-specific foundation models as three of the growth catalysts VCs need to keep their firms growing and delivering returns.

More market turbulence for VCs

AI and machine learning (ML) deal activity fell 19% in a single year, from 8,968 deals in 2022 to 7,238 in 2023. Deal values declined as well: Pitchbook tracked $2.7 billion in disclosed deal value in Q4 2023, the lowest quarterly total since Q1 2019. M&A (merger and acquisition) activity continues to drop as the leading tech companies focus more on partnerships with LLM startups.

Pitchbook notes exceptions to this trend: AMD's acquisition of Nod.ai in machine learning operations (MLOps), IBM's acquisition of Manta in database management and ServiceNow's acquisition of UltimateSuite in predictive analytics. It's anticipated that semiconductor startup Astera Labs' IPO will reinvigorate deal values in Q1 or Q2 this year.

Amid the plummeting deal activity and lower deal values, there are also signs of long-term growth. Generative AI leaders raised $6 billion in Q4 2023 alone, across 194 deals, largely supported by Microsoft, Google and other tech giants looking to gain access to the latest LLM technologies. Pitchbook notes that momentum in horizontal platforms also grew, setting a VC record in 2023 with $33 billion raised. By contrast, investments in vertical applications fell to levels not seen since 2020.



Where VCs say new opportunities are  

Building an organizational structure and product strategy that can capitalize on Nvidia's many innovations, including its rapid GPU advances, is at the core of new investment opportunities. Pitchbook's analysis finds that three emerging areas, AI data centers, local LLMs and domain-specific foundation models, are well-positioned to benefit from Nvidia's momentum as a primary driver of the AI market.

Nvidia reported $22.1 billion in revenue for its fourth quarter of fiscal 2024, up 265% year-over-year and 22% sequentially. The data center segment grew 409% year-over-year and 27% sequentially to $18.4 billion. Jensen Huang, founder and CEO of Nvidia, said, “Our Data Center platform is powered by increasingly diverse drivers — demand for data processing, training, and inference from large cloud-service providers and GPU-specialized ones, as well as from enterprise software and consumer internet companies. Vertical industries — led by auto, financial services, and healthcare — are now at a multibillion-dollar level.”

AI data centers show potential for breakout growth

Designed from the infrastructure layer up to scale and support more AI-intensive workloads, these data centers are optimized to get the most value out of high-performance servers, storage, networking, and specialized accelerators. AI data centers also need to be designed to optimize the power consumption and heat output of high-performance GPUs, balanced with a strong focus on sustainability. 

IDC estimates that $8 billion was invested in generative AI processors, storage and networking, yielding $2.1 billion in cloud revenue and $4.5 billion in application sales. Pitchbook predicts AI data centers won't attain software-as-a-service (SaaS) level margins until 2027. In the meantime, startups are focusing on cost-effective offerings and significant savings on GPU hours.

Pitchbook notes that “according to hourly on-demand pricing, startups are offering 50%-70% cost savings on GPU hours for advanced Nvidia A100s and offering unique access to the latest H100 chips.” The report notes that the leading startup GPU cloud provider Lambda has built the largest cluster of H100 chips of all public clouds, exceeding Google and Oracle.

VCs will be evaluating opportunities to create and partner with ecosystems of colocation providers. Pitchbook notes that specialty cloud providers have carved out a $4.6 billion market from the nearly $150 billion infrastructure-as-a-service market, more than 90% of which accrues to U.S.-based hyperscalers and Chinese cloud giants. What makes specialty cloud providers unique is their ability to differentiate on AI chip availability, local presence, multicloud support and compatibility with multiple types of legacy hardware.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.
