Chipping Away: AI x Hardware

Mar 2025

Software has been Silicon Valley’s favored child for decades, with hardware relegated to a niche. But AI is rewriting the playbook. Compute demand is soaring, and AI infrastructure spending is set to cross $500 billion over the next five years. Hardware is back in the spotlight.

 

GPUs are fueling AI’s rise, but supply constraints and high costs are becoming bottlenecks. Training a large model like GPT-3 can cost on the order of $1.4 million per run, making efficiency the next frontier. As models scale, the race is on to build AI hardware that doesn’t break the bank.
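Where does a number like that come from? Here is a back-of-envelope sketch (ours, not from the post) using GPT-3’s published scale, the standard ~6 FLOPs per parameter per token rule of thumb, and assumed values for GPU throughput, utilization, and hourly pricing:

```python
# Back-of-envelope training cost estimate. Hardware and pricing inputs
# are illustrative assumptions, not figures from this post.

params = 175e9                # GPT-3 parameter count (published)
tokens = 300e9                # training tokens (per the GPT-3 paper)
flops = 6 * params * tokens   # ~6 FLOPs per parameter per token (rule of thumb)

peak_flops = 312e12           # assumed A100 BF16 peak, FLOPs/s
utilization = 0.30            # assumed real-world utilization
usd_per_gpu_hour = 1.50       # assumed bulk cloud rate

gpu_hours = flops / (peak_flops * utilization) / 3600
cost = gpu_hours * usd_per_gpu_hour

print(f"{flops:.2e} FLOPs -> {gpu_hours:,.0f} GPU-hours -> ${cost:,.0f}")
# 3.15e+23 FLOPs -> 934,829 GPU-hours -> $1,402,244
```

Nudge any assumption (utilization, price, model size) and the bill moves by hundreds of thousands of dollars, which is exactly why efficiency is the next frontier.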

 

Investment is pouring in. Tech giants are pushing new architectures (AMD’s Instinct MI300 series, Intel’s Gaudi line), while top VCs like Sequoia Capital are backing startups such as Cerebras and SambaNova. The market is shifting beyond general-purpose GPUs to specialized chips built for AI. Cerebras’s wafer-scale engines handle high-performance computing (HPC) for health and finance, while Groq’s LPUs process 500 tokens per second for smaller models.

 

Startups are creating AI accelerators, photonic systems, and neuromorphic chips. Innovation is fast, but the open question is whether these chips can truly deliver higher performance at lower cost.

 


 


 

The AI Hardware Ecosystem

 

Nvidia continues to dominate model training, but inference, AI data centers, and on-device intelligence are emerging hotspots for startup innovation.

 

AI hardware is more than just GPUs

The battle spans the full compute stack: training chips, memory, data centers, and power.

 

  • Training Compute: Nvidia’s Hold
    AI model training demands raw compute, and Nvidia controls over 90% of the market. CUDA, Nvidia’s proprietary parallel computing framework, makes its GPUs exceptionally fast for AI and machine learning, and it is deeply integrated with popular frameworks like TensorFlow and PyTorch, making Nvidia the go-to choice for developers. Since CUDA is exclusive to Nvidia, switching to hardware from AMD or Intel means rewriting code, adapting to different systems, and losing key optimizations, a tedious and costly process (a minimal code sketch of this dependence follows this list). This entrenched ecosystem gives Nvidia a powerful moat.
  • Inference Compute
    Beyond training, there is growing emphasis on running AI models efficiently. The assumption that more training alone yields better models is giving way to a focus on inference compute: the cost of serving models in production. This has driven interest in AI accelerators, specialized chips built for faster, low-power inference. Groq, Lightmatter, Etched, and Celestial AI are key examples.
  • AI-Enabled Hardware: From Data Centers to Devices
    AI is making its way into hardware across industries, enhancing efficiency, automation, and decision-making. Custom on-device AI chips in consumer devices, such as Apple’s A17 Pro and Google’s Tensor SoC, power AI-driven photography, voice assistants, and real-time language translation. In healthcare, Siemens’ AI-powered imaging systems and Intuitive Surgical’s da Vinci robot enhance diagnostics and enable more precise surgery with real-time adjustments. AI-driven defense systems like Anduril’s autonomous drones and Palantir’s real-time threat analysis tools improve surveillance and security.
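To make the CUDA lock-in concrete, here is a minimal PyTorch sketch (our illustration, not from the post). The top half looks portable; the bottom half shows how performance-critical code picks up CUDA-specific calls that do not transfer cleanly to other vendors’ stacks:

```python
import torch

# Portable-looking part: PyTorch nominally runs on any backend.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(4096, 4096).to(device)
x = torch.randn(8, 4096, device=device)
y = model(x)

# Lock-in part: performance tuning quickly becomes CUDA-specific.
if device == "cuda":
    stream = torch.cuda.Stream()    # CUDA stream to overlap work
    with torch.cuda.stream(stream):
        y = model(x)
    torch.cuda.synchronize()        # explicit CUDA synchronization
# Moving code like this to AMD (ROCm) or Intel (oneAPI) means adapting
# these calls, revalidating numerics, and often losing kernel-level
# optimizations tuned for Nvidia hardware.
```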

 

Compute is the new oil

  • The U.S. has restricted China’s access to Nvidia’s top AI chips, but China is adapting: Huawei and Alibaba are squeezing more out of older chips while pushing domestic manufacturing forward.
  • Taiwan produces roughly 90% of the world’s most advanced AI chips, so any disruption could stall global AI progress. The U.S. is investing in domestic fabs, but scaling production is a long-term play.
  • Europe is pushing for semiconductor independence but lags behind, while Israel, South Korea, and the UAE are ramping up AI chip manufacturing. AMD’s acquisition of Finland’s Silo AI signals growing interest in European AI compute.

 

Energy is the bottleneck

Powering AI hardware is prohibitively expensive. AI data centers are consuming electricity at unprecedented levels, straining grids and forcing new energy strategies. OpenAI’s $500B Stargate project sits in a remote location because city grids can’t support it; xAI built its own power plant to sustain AI compute. Power-efficient chips are the next frontier: Lightmatter and Groq are optimizing for efficiency because, soon, AI compute won’t be limited by chips but by electricity.
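To see why electricity becomes the binding constraint, consider a quick sketch (our illustrative numbers, not from the post): a hypothetical cluster of 100,000 H100-class GPUs at roughly 700 W each, plus typical facility overhead:

```python
# Illustrative power math for a large AI cluster. All figures are assumptions.

gpus = 100_000
watts_per_gpu = 700    # approx. H100 board power
pue = 1.3              # power usage effectiveness: cooling, networking, etc.
usd_per_kwh = 0.08     # assumed industrial electricity rate

it_load_mw = gpus * watts_per_gpu / 1e6    # 70 MW of chips alone
facility_mw = it_load_mw * pue             # 91 MW at the meter
annual_kwh = facility_mw * 1000 * 24 * 365
annual_cost = annual_kwh * usd_per_kwh

print(f"{facility_mw:.0f} MW facility load, ~${annual_cost/1e6:.0f}M/yr in electricity")
# 91 MW facility load, ~$64M/yr in electricity
```

Ninety-odd megawatts is the continuous output of a small power plant, which is why siting, grid capacity, and chips-per-watt now matter as much as raw FLOPs.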

 


 

Market Map: Inference, AI data centers, and Edge AI are hotspots for innovation

 

The AI hardware market is still dominated by Nvidia, AMD, Intel, AWS, and Google, but the ecosystem is evolving. While these players continue to control AI model training, new entrants are expanding the market by focusing on inference, power efficiency, and specialized AI compute.

 

Interested in these startups? Download a curated list of 50+ AI hardware startups, with founder profiles, funding, HQ, and more.

 

Tech giants are focused on energy and cost efficiency

GPUs and CPUs have driven AI’s growth, but energy consumption and compute costs are climbing. Nvidia owns 90%+ of the market. AMD’s Instinct MI325X accelerators and Intel’s Gaudi AI chips offer alternatives, but adoption is slow. The focus now is on optimizing efficiency — not replacing GPUs, but designing hardware that better supports AI inference and power-sensitive workloads.

 

Emerging startups are expanding the AI inference compute stack

These startups are building AI-specific architectures that complement existing hardware rather than compete with it head-on. The shift is toward inference acceleration, photonic computing, and low-power AI processing.

 

  • Graphcore, now under SoftBank, developed Intelligence Processing Units (IPUs) to accelerate complex AI workloads.
  • Rebellions is developing low-power AI accelerators optimized for inference efficiency.
  • Celestial AI is advancing photonic computing, using light-based chips for faster, high-bandwidth AI processing.

 

Cloud giants are designing in-house chips

AWS, Microsoft, and Google are investing in custom silicon, moving away from standard GPUs to control costs and optimize for AI workloads. Microsoft’s Azure Maia 100 and Amazon’s Trainium and Inferentia chips are designed to offer cost-effective AI compute tailored to their cloud environments. Custom silicon helps cloud providers differentiate their AI services while reducing reliance on external suppliers, and vertical integration strengthens their competitive edge against private data centers by enabling more use-case-specific AI compute.

 

Edge AI is coming

On-device inference reduces latency, energy use, and security risks. Startups like Mythic AI and Memryx are pioneering analog compute-in-memory to enable low-power AI processing. Meanwhile, Apple, Qualcomm, and AMD are integrating AI accelerators into consumer and enterprise devices, signaling a move toward custom-built AI chips optimized for real-time performance.
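For a flavor of what preparing a model for the edge can look like, here is a minimal sketch using PyTorch’s dynamic quantization (our example; it does not describe any specific company’s approach). Storing weights as int8 shrinks a model roughly 4x, a common first step for memory- and power-constrained devices:

```python
import torch

# Toy model standing in for an on-device workload.
model = torch.nn.Sequential(
    torch.nn.Linear(512, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
)

# Dynamic quantization: Linear weights stored as int8,
# activations quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(quantized(x).shape)  # same interface, ~4x smaller Linear weights
```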

 

AI-enabled hardware

AI is being embedded directly into consumer and enterprise hardware, optimizing real-time decision-making across industries.

 

  • Wearables like the Apple Watch Ultra and Oura Ring use AI to enhance health tracking, personalized coaching and predictive analytics.
  • In defense, AI-driven systems like Axon’s body cameras and Anduril’s autonomous drones enable real-time threat detection and situational awareness.
  • Manufacturing devices powered by AI, such as Siemens’ Industrial Edge and Bright Machines’ automated assembly lines, improve quality control and operational efficiency.
  • In healthcare, AI is transforming diagnostics through Butterfly Network’s AI-powered ultrasound and Medtronic’s GI Genius, an AI-assisted colonoscopy tool.

 


 

Funding Landscape: It’s a VC gold rush

 


 

A capital-heavy space with billions in investment

The AI hardware sector requires significant upfront capital due to high costs associated with R&D, semiconductor fabrication, and infrastructure development. Unlike software companies, AI chip startups must secure large-scale funding long before reaching commercialization.

 

The increasing demand for AI compute has resulted in heavy investments from tech giants, sovereign wealth funds, and venture capital firms. Microsoft, Google, and Amazon have emerged as both investors and customers, securing exclusive access to AI compute capacity through funding and strategic partnerships. Nvidia, Intel, and AMD continue to invest in next-generation chips, reinforcing their positions in AI hardware.

 

A hotbed for M&A activity

With steep capital requirements and long development cycles, M&A is shaping up to be a defining feature of the AI hardware space. Larger players are acquiring startups to accelerate innovation, secure talent, and gain an edge in AI compute.

 

  • HP acquired Humane, a startup that raised significant funding but struggled to commercialize its AI-powered wearable device.
  • Axon acquired Dedrone, strengthening its AI-driven surveillance and security solutions.
  • AMD acquired Silo AI, marking another move by legacy semiconductor companies to align with emerging AI hardware startups.

 

Major investors betting big

Strategic investors — including Nvidia, Microsoft, Google, Amazon, Intel, and Samsung — are actively funding AI chip startups to ensure long-term access to compute infrastructure. Leading VCs and institutional investors have also increased their bets on AI hardware.

 

BlackRock, a16z, General Catalyst, and Fidelity have been among the most active investors, participating in multiple funding rounds.

 

Major AI hardware funding rounds in 2024 include:

  • Tenstorrent raised $693M in Series D at a $2.7B valuation, backed by Samsung Securities, Fidelity, and Bezos Expeditions.
  • Etched secured $120M in Series A, led by Primary Venture Partners and Two Sigma Ventures.
  • Lightmatter closed a $400M Series D, reaching a $4.4B valuation, with funding from T. Rowe Price.
  • Groq raised $640M in Series D, backed by BlackRock, Type One Ventures, and Verdure Capital.
  • Celestial AI completed a $175M Series C, bringing its total funding to $338M, led by US Innovative Technology Fund.

 


 

Looking Forward

 

  1. The inference market is the next battleground
    While Nvidia dominates AI training, the inference market is still wide open. Companies like Groq, Lightmatter, and Celestial AI are optimizing AI compute for real-time efficiency.
  2. There will be big losers (and winners)
    With the unprecedented amount of capex in this space, there are bound to be big losers down the line: those that can’t scale revenue fast enough will run out of cash and pay the piper (Nvidia). Investors are placing billion-dollar bets, but AI hardware remains risky. Even Satya Nadella hinted at a note of caution recently: “One of the things is that there will be overbuild. I am thrilled that I'm going to be leasing a lot of capacity in '27, '28”.
  3. Watch out for Edge AI
    Processing AI workloads directly on local devices instead of in the cloud means real-time data processing, reduced latency, and lower bandwidth costs. We’re already seeing this in AI-first smartphones (Apple Intelligence, Google Pixel), healthcare and diagnostics (Siemens, Eko Health), and autonomous vehicles. As AI hardware advances, Edge AI will likely expand into retail (automated checkout), agriculture (AI-driven crop monitoring), and defense (real-time battlefield intelligence) as well.