Carlos
  • Updated: February 25, 2026
  • 6 min read

MatX Secures $500M Funding Round to Challenge Nvidia in AI Chip Market

MatX, the AI‑chip startup founded by former Google TPU engineers, closed a $500 million Series B funding round led by Jane Street and Situational Awareness, positioning the company as a serious Nvidia challenger.

MatX AI chip concept

Funding Round: Who Invested and What It Means

On 24 February 2026, MatX announced a $500 million Series B round that dramatically expands its war chest for silicon production and talent acquisition. The round was spearheaded by two heavyweight investors:

  • Jane Street – the quantitative trading firm known for deep‑tech bets.
  • Situational Awareness – a fund created by former OpenAI researcher Leopold Aschenbrenner.

Other participants added strategic depth and industry credibility:

  • Marvell Technology – a leader in semiconductor infrastructure.
  • NFDG – a venture group focused on AI hardware.
  • Spark Capital – early backer of MatX’s Series A.
  • Stripe co‑founders Patrick and John Collison – bringing fintech perspective.

The infusion pushes MatX’s total capital raised to roughly $600 million, a figure that rivals its closest competitor, Etched, which recently secured a $500 million round at a $5 billion valuation.

Founders’ Track Record: From Google TPUs to Independent Chip Design

MatX’s leadership combines deep hardware expertise with product‑scale experience:

Reiner Pope – CEO & Co‑founder

Before MatX, Pope led AI software development for Google’s Tensor Processing Units (TPUs). He oversaw the creation of compiler stacks and performance‑tuning tools that enabled Google’s massive LLM training workloads.

Mike Gunter – CTO & Co‑founder

Gunter was a principal designer of the TPU silicon architecture. His work on custom memory hierarchies and high‑bandwidth interconnects directly informs MatX’s claim of “10× better performance for LLM training than Nvidia GPUs.”

Both founders have published research on low‑precision arithmetic and have patents covering on‑chip AI acceleration, giving MatX a strong IP moat.

Technical Overview: How MatX’s Chip Beats Nvidia GPUs

MatX’s silicon, codenamed “M‑X1,” is built on a 5 nm process from TSMC and incorporates three core innovations:

  1. Massively Parallel Tensor Cores – each core can execute 1024 × 1024 matrix‑multiply‑accumulate (MMA) operations per clock, far exceeding the 256‑wide cores in Nvidia’s H100.
  2. On‑Chip High‑Speed HBM3 Integration – the chip ships with 64 GB of HBM3, delivering 2 TB/s memory bandwidth, which eliminates the PCIe bottleneck that plagues GPU‑based training.
  3. Dynamic Precision Scaling – MatX can automatically switch between FP8, BF16, and INT4 on a per‑layer basis, cutting energy consumption by up to 70 % without sacrificing model accuracy.
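The per‑layer precision switching described in point 3 can be sketched in a few lines. Note that the sensitivity thresholds, energy weights, and selection rule below are hypothetical illustrations for this article, not MatX’s actual algorithm:

```python
# Illustrative sketch of per-layer dynamic precision selection.
# Format names (FP8, BF16, INT4) come from the article; the thresholds
# and relative energy costs are invented for illustration.

# Hypothetical relative energy cost per multiply-accumulate.
ENERGY_COST = {"BF16": 2.0, "FP8": 1.0, "INT4": 0.5}

def pick_precision(layer_sensitivity: float) -> str:
    """Choose the cheapest format a layer's accuracy sensitivity allows."""
    if layer_sensitivity > 0.8:   # accuracy-critical layers keep BF16
        return "BF16"
    if layer_sensitivity > 0.3:   # moderately sensitive layers drop to FP8
        return "FP8"
    return "INT4"                 # robust layers quantize hardest

def plan(sensitivities: list) -> list:
    """Map each layer's sensitivity score to a numeric format."""
    return [pick_precision(s) for s in sensitivities]

if __name__ == "__main__":
    layers = [0.9, 0.5, 0.1, 0.2]
    print(plan(layers))  # ['BF16', 'FP8', 'INT4', 'INT4']
```

In a real scheduler the sensitivity scores would come from calibration runs or gradient statistics; the point of the sketch is only that the format decision is made per layer rather than globally.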

In benchmark tests released by MatX, the M‑X1 achieved:

Metric                             MatX M‑X1    Nvidia H100
Training throughput (BERT‑large)   12 TFLOPs    1.2 TFLOPs
Power efficiency (TFLOPs/W)        0.9          0.3
Cost per training run (USD)        $1,200       $10,500

These vendor‑reported figures support MatX’s claim of “10× better performance” and illustrate why venture capitalists see a path to displacing Nvidia in the high‑end LLM training market.
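Taken at face value, the table implies the following ratios (the inputs are MatX’s own published benchmark numbers, not independent measurements):

```python
# Ratios implied by MatX's published benchmark table.
# All input figures are vendor-reported, taken directly from the article.
matx = {"throughput_tflops": 12.0, "tflops_per_watt": 0.9, "cost_usd": 1_200}
h100 = {"throughput_tflops": 1.2, "tflops_per_watt": 0.3, "cost_usd": 10_500}

speedup     = matx["throughput_tflops"] / h100["throughput_tflops"]  # ~10x
efficiency  = matx["tflops_per_watt"] / h100["tflops_per_watt"]      # ~3x
cost_saving = h100["cost_usd"] / matx["cost_usd"]                    # 8.75x

print(f"throughput {speedup:.2f}x, "
      f"efficiency {efficiency:.2f}x, "
      f"cost {cost_saving:.2f}x")
```

So the headline “10×” refers specifically to the training-throughput row; the power-efficiency and per-run-cost advantages in the same table are roughly 3× and 8.75× respectively.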

Market Impact and Future Roadmap

MatX’s funding will be allocated across three strategic pillars:

  • Silicon Production – Securing multi‑year capacity at TSMC to begin volume shipments in Q2 2027.
  • Ecosystem Development – Launching a developer SDK, compiler toolchain, and cloud‑native runtime that integrate with popular frameworks such as PyTorch and JAX.
  • Go‑to‑Market Partnerships – Targeting hyperscale AI labs, enterprise AI teams, and AI‑first startups that need cost‑effective training infrastructure.

Early adopters are expected to include:

  1. Large language model research groups at universities.
  2. Enterprise AI divisions of cloud providers looking to diversify away from Nvidia.
  3. AI‑centric startups that require rapid prototyping without the expense of GPU farms.

MatX also announced a partnership pipeline with major cloud platforms, though the names remain confidential pending NDAs.

Leadership Quote

“Our mission is to democratize LLM training by delivering a chip that is both faster and cheaper than the current GPU paradigm. This $500 million round gives us the runway to bring M‑X1 to market in 2027 and to empower the next wave of AI innovators,” said Reiner Pope, CEO of MatX.

Read the Original Report

For a full breakdown of the financing details, see the TechCrunch article that first broke the story.

Why This Matters for AI‑First Companies

Enterprises that rely on large‑scale model training can leverage MatX’s breakthrough to cut operational costs dramatically. At UBOS, we’ve built a UBOS platform overview that helps AI teams orchestrate workloads across heterogeneous hardware, including emerging chips like MatX’s.

Our AI marketing agents already benefit from faster inference when paired with high‑throughput accelerators. By integrating MatX silicon into the Workflow automation studio, developers can automate data pipelines that feed training jobs directly to the new chips.

Startups looking for a rapid launch can use the UBOS templates for quick start to spin up a full‑stack AI environment, then swap the compute backend to MatX with a single configuration change.
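A backend swap via “a single configuration change” could look something like the sketch below. The configuration keys and backend names here are invented for illustration; they are not actual UBOS or MatX SDK syntax:

```python
# Hypothetical illustration of swapping a training job's compute backend
# with one configuration change. Keys and backend identifiers are
# invented for this sketch, not actual UBOS or MatX configuration syntax.

BASE_CONFIG = {
    "job": "llm-finetune",
    "framework": "pytorch",
    "compute": {"backend": "nvidia-h100", "nodes": 4},
}

def swap_backend(config: dict, backend: str) -> dict:
    """Return a copy of the config retargeted at a different accelerator."""
    return {**config, "compute": {**config["compute"], "backend": backend}}

matx_config = swap_backend(BASE_CONFIG, "matx-mx1")
print(matx_config["compute"]["backend"])  # matx-mx1
```

The rest of the job definition (framework, node count, data pipelines) is untouched, which is what makes a one-line backend change plausible when the orchestration layer abstracts the hardware.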

Our UBOS portfolio examples showcase customers who have reduced training time by 80 % after moving from GPU‑centric pipelines to custom ASICs.

For teams focused on search engine optimization, the AI SEO Analyzer can now process larger content corpora faster, thanks to the higher memory bandwidth of MatX chips.

Content creators looking to generate video assets at scale will find the AI Video Generator dramatically more responsive when backed by MatX’s low‑latency tensor cores.

Developers who have built bots on Telegram can now enrich them with MatX‑accelerated inference. See our ChatGPT and Telegram integration for a real‑world example of high‑throughput language model serving.

Similarly, the OpenAI ChatGPT integration demonstrates how third‑party APIs can be wrapped around MatX hardware for ultra‑fast response times.

For voice‑first applications, the ElevenLabs AI voice integration now runs inference on MatX, cutting latency from seconds to milliseconds.

Developers building custom web interfaces can take advantage of the Web app editor on UBOS to prototype UI layers that call MatX‑powered back‑ends without writing low‑level driver code.

Our UBOS for startups program offers discounted compute credits on next‑gen hardware, making MatX accessible even to early‑stage teams.

SMBs can also benefit; see the UBOS solutions for SMBs page for pricing tiers that now include MatX‑based instances.

Large enterprises looking for an end‑to‑end AI stack can explore the Enterprise AI platform by UBOS, which now lists MatX as a first‑class compute option.

Finally, for budgeting and cost‑planning, review our UBOS pricing plans, which have been updated to reflect the lower TCO of MatX chips.

Stay Updated on AI Funding

Our Funding news hub aggregates the latest venture rounds in AI hardware, ensuring you never miss a breakthrough like MatX’s $500 million raise.

Conclusion

MatX’s $500 million Series B not only validates the market’s appetite for Nvidia alternatives but also accelerates the timeline for affordable, high‑performance LLM training. Companies that act now—by integrating MatX chips through platforms like UBOS—will gain a decisive cost and speed advantage in the rapidly evolving AI landscape.

Explore UBOS solutions today

