Key Takeaways
- MatX, a startup founded by ex-Google TPU engineers, closed a $500 million Series B led by Jane Street and Leopold Aschenbrenner’s Situational Awareness fund, with backing from Marvell Technology, Spark Capital, NFDG, and Stripe co-founders Patrick and John Collison.
- The company is now valued at “several billion dollars,” up from a $300 million+ post-money valuation after its ~$100 million Series A in 2024.
- MatX plans to manufacture its chip, the MatX One, with TSMC and begin shipping in 2027, targeting 10x better performance than Nvidia GPUs for training and running large language models.
- AI chip startups have raised billions in 2025-2026 alone, with Etched raising $500M at a $5B valuation in January 2026 and Nvidia acquiring Groq for $20 billion in December 2025, underscoring intense competition in the sector.
Quick Recap
On February 24, 2026, MatX announced the close of a $500 million Series B funding round, making it one of the largest Series B raises in semiconductor history. The round was co-led by quantitative trading giant Jane Street and Situational Awareness, the investment firm founded by Leopold Aschenbrenner, a former OpenAI researcher known for his influential writings on artificial general intelligence. CEO Reiner Pope confirmed the raise via a LinkedIn post and a detailed company blog entry.
MatX One Built for LLMs
MatX was co-founded in 2023 by Reiner Pope, who led AI software development for Google’s Tensor Processing Units (TPUs), and Mike Gunter, a lead hardware designer on the same TPU program. Their thesis is simple but ambitious: a chip built from the ground up for large language models, with no legacy GPU baggage, can deliver an order-of-magnitude improvement in performance.
The MatX One chip is built around a splittable systolic array, a flexible architecture that achieves high energy and area efficiency on large matrix operations while maintaining high utilization on smaller, irregularly shaped matrices common in mixture-of-experts (MoE) models. The chip uses a hybrid memory system: weights are stored primarily in SRAM for low-latency access (enabling over 2,000 output tokens per second on large 100-layer MoE models), while key-value caches reside in HBM to support long-context workloads without performance degradation.
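The "splittable" idea can be made concrete with a toy utilization model. This is a hypothetical sketch with made-up dimensions, not MatX's actual design: a rigid matrix unit wastes most of its cells on a small expert matmul, while a unit that splits into independent tiles can run many small expert matrices at once.

```python
# Illustrative only: why a splittable matrix unit helps with
# mixture-of-experts (MoE) workloads. All dimensions are invented.

ARRAY_DIM = 256   # hypothetical full systolic array dimension
TILE_DIM = 64     # hypothetical tile size after splitting

def utilization_fixed(m: int, n: int) -> float:
    """Fraction of a rigid ARRAY_DIM x ARRAY_DIM array doing useful
    work on a single m x n weight matrix (m, n <= ARRAY_DIM)."""
    return (m * n) / (ARRAY_DIM * ARRAY_DIM)

def utilization_split(m: int, n: int, num_experts: int) -> float:
    """Utilization when the array splits into TILE_DIM x TILE_DIM
    tiles, running one small expert matmul per tile."""
    tiles = (ARRAY_DIM // TILE_DIM) ** 2        # 16 tiles here
    busy = min(num_experts, tiles)              # tiles that have work
    per_tile = (m * n) / (TILE_DIM * TILE_DIM)  # fill within one tile
    return busy * per_tile / tiles

# One 64x64 expert matrix on the rigid array: only 1/16 of cells busy.
print(utilization_fixed(64, 64))      # 0.0625
# Sixteen experts in parallel on the split array: fully utilized.
print(utilization_split(64, 64, 16))  # 1.0
```

The same arithmetic explains the company's claim of high utilization on the "smaller, irregularly shaped matrices common in mixture-of-experts models": specialization lets small matmuls share the array rather than starve it.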
Pope described the design philosophy in a blog post: “The chip combines the low latency of SRAM-first designs with the long-context support of HBM. These elements, plus a fresh take on numerics, deliver higher throughput on LLMs than any announced system.” Unlike competitors such as the now-Nvidia-owned Groq (inference-only) or Etched (transformer-inference-only), the MatX One supports the full spectrum of LLM workloads: pre-training, reinforcement learning, prefill, and decoding.
AI Chip Race Intensifies
The timing of MatX’s raise is not accidental. The AI chip market is experiencing a funding surge unlike anything the semiconductor industry has seen in decades, driven by insatiable demand for compute from frontier AI labs like OpenAI, Anthropic, and Google DeepMind.
Consider the recent landscape: Etched, which builds a transformer-only ASIC called Sohu, raised $500 million at a $5 billion valuation in January 2026, bringing its total funding to nearly $1 billion. Just weeks before that, in December 2025, Nvidia moved to acquire Groq for $20 billion, absorbing the inference-focused chip startup’s Language Processing Unit technology and engineering talent. Meanwhile, Nvidia itself projected over $500 billion in sales to data center operators by the end of 2026.
These moves reveal a market where the incumbents are not sitting still and the challengers are raising war chests to match. The participation of Situational Awareness is notable: Aschenbrenner’s deep understanding of frontier AI’s computational bottlenecks lends credibility to MatX’s technical direction. Strategic investment from Marvell Technology, a major player in data center semiconductors, further signals that MatX is not a paper-napkin concept but a company with serious manufacturing and integration pathways.
Competitive Landscape
The AI chip startup ecosystem has narrowed to a handful of well-capitalized players challenging Nvidia’s dominance. With Groq now under Nvidia’s umbrella, MatX’s most direct independent competitor is Etched, while Tenstorrent (led by legendary chip architect Jim Keller) represents another alternative approach.
| Feature / Metric | MatX (MatX One) | Etched (Sohu) | Tenstorrent |
| --- | --- | --- | --- |
| Founded | 2023 | 2022 | 2016 |
| Total Funding | ~$625M+ (Seed + Series A + B) | ~$1B total | ~$370M+ |
| Latest Valuation | “Several billion dollars” | $5 billion | ~$2.6B |
| Chip Architecture | Splittable systolic array, hybrid SRAM + HBM | Transformer-only ASIC (hardcoded transformer) | RISC-V based, general AI accelerator |
| Workload Support | Training + Inference (full LLM stack) | Inference only (transformer models only) | Training + Inference (general AI) |
| Performance Claim | 10x better than Nvidia GPUs on LLMs | 20x faster than H100 on transformer inference | Competitive with GPUs at lower cost |
| Manufacturing Partner | TSMC | TSMC (4nm) | GlobalFoundries / Samsung |
| Shipping Timeline | 2027 | Late 2025/2026 | Shipping (Wormhole, Blackhole) |
| Key Differentiator | Unified training + inference; hybrid memory | Extreme specialization for transformer inference | Open-source ISA, broad AI flexibility |
MatX stands out by targeting the full LLM lifecycle in a single chip, while Etched has placed an aggressive, singular bet on transformer inference. If transformers remain the dominant architecture (as most expect for the next 3-5 years), Etched’s specialization could deliver unmatched inference speed. However, MatX’s hybrid approach provides more versatility across training and inference, making it a more flexible option for frontier labs that need one chip for multiple workloads.
Techno Trenz’s Takeaway
I think this is a big deal, and here is why. In my experience covering funding rounds in the AI hardware space, $500 million for a pre-revenue chip startup signals something beyond typical venture optimism. When Jane Street, a firm known for rigorous quantitative analysis, co-leads a round alongside Leopold Aschenbrenner, a researcher who literally wrote the playbook on AGI compute requirements, that is a strong vote of confidence in both the team and the technology.