Key Takeaways
- ManaMind, a London-based AI-driven autonomous testing startup founded in 2025, has raised $1.5 million in a pre-seed funding round led by Sure Valley Ventures.
- 5 investors participated: Sure Valley Ventures, EWOR, Ascension, SyndicateRoom, and Heartfelt, with the company issuing convertible preferred stock in the transaction.
- The platform runs 3 coordinated AI agents that test games through video and audio inputs alone, completing full regression cycles in roughly 6 hours instead of days and catching 86% of critical bugs before shipping.
- ManaMind has already secured design partnerships with Included Games and Crazy Labs, and plans to use the fresh capital to expand its engineering team, accelerate proprietary AI model development, and grow in key international markets.
Quick Recap
London-based AI startup ManaMind has raised $1.5 million in pre-seed funding, as reported by The SaaS News on April 30, 2026. Co-founded by Emil Kostadinov and Sabtain Ahmad in 2025, the company builds autonomous AI agents that play through video games, detect bugs, and auto-generate developer-ready bug reports, entirely removing the dependency on slow, expensive manual quality assurance teams. The round was led by Sure Valley Ventures with participation from EWOR, Ascension, SyndicateRoom, and Heartfelt.
AI Plays Games Before Users
The technical core of ManaMind is what separates it from older-generation game testing tools. The platform runs three coordinated AI agents, each with a distinct role: one handles gameplay navigation and environment exploration, one identifies deviations from expected behavior, and the third auto-generates structured, developer-readable bug reports.
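ManaMind has not published its internal architecture, so the sketch below is only an illustration of how that three-role split could be wired together. The toy game, the deliberately injected bug, and every class and field name are assumptions made for the example, not ManaMind’s code:

```python
import random
from dataclasses import dataclass, field

@dataclass
class BugReport:
    """Structured, developer-readable report (hypothetical schema)."""
    title: str
    severity: str
    observed: str
    expected: str
    repro_steps: list = field(default_factory=list)

class ToyGame:
    """Stand-in target; the real agents would see raw video/audio."""
    def __init__(self):
        self.player_y = 10.0
    def capture(self):
        return {"player_y": self.player_y}  # a real frame would be pixels
    def send_input(self, action):
        if action == "jump":
            self.player_y += 1.0
        elif action == "interact" and random.random() < 0.02:
            self.player_y = -5.0            # deliberately injected physics bug

class NavigatorAgent:
    """Role 1: gameplay navigation and environment exploration."""
    ACTIONS = ("move_left", "move_right", "jump", "interact")
    def next_action(self, frame):
        return random.choice(self.ACTIONS)  # a real system would ask a VLM here

class DetectorAgent:
    """Role 2: spot deviations from expected behavior."""
    def check(self, frame):
        if frame["player_y"] < 0:
            return "player fell through the floor"
        return None

class ReporterAgent:
    """Role 3: turn an anomaly plus the action history into a BugReport."""
    def write(self, anomaly, history):
        return BugReport(
            title=anomaly,
            severity="critical",
            observed=anomaly,
            expected="player remains on walkable geometry",
            repro_steps=list(history[-5:]),
        )

def run_session(game, steps=5_000):
    navigator, detector, reporter = NavigatorAgent(), DetectorAgent(), ReporterAgent()
    history, reports = [], []
    for _ in range(steps):
        frame = game.capture()
        anomaly = detector.check(frame)
        if anomaly:
            reports.append(reporter.write(anomaly, history))
            break                           # file the bug, end this run
        action = navigator.next_action(frame)
        game.send_input(action)
        history.append(action)
    return reports

print(run_session(ToyGame()))
```

Run it a few times and the injected bug surfaces as a BugReport carrying the last few actions as repro steps, which is the shape of output a developer can act on directly.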
Critically, these agents interact with a game using only video and audio, exactly as a human player would, rather than relying on code hooks, engine-specific plugins, or scripted test sequences that break when the game environment changes. CTO Sabtain Ahmad built ManaMind’s proprietary vision language model specifically for this purpose, adapting it to understand the visual and interactive dynamics that make games fundamentally different from standard enterprise software.
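To make the video-and-audio-only claim concrete, here is a minimal sketch of what an engine-agnostic perception and input boundary can look like, using the off-the-shelf mss (screen capture) and pyautogui (OS-level input) libraries. ManaMind’s actual capture stack is not public and the model call is elided; the point is simply that nothing below touches the game engine:

```python
import mss            # cross-platform screen capture (pip install mss)
import numpy as np
import pyautogui      # OS-level keyboard/mouse injection (pip install pyautogui)

def capture_frame(monitor_index: int = 1) -> np.ndarray:
    """Grab the screen as a BGRA pixel array, shape (height, width, 4).
    No engine plugin, no code hook: this is exactly what a player sees.
    (Audio capture is omitted for brevity.)"""
    with mss.mss() as sct:
        return np.asarray(sct.grab(sct.monitors[monitor_index]))

def send_action(action: str) -> None:
    """Inject input at the OS level, like a player pressing a key."""
    key_map = {"jump": "space", "left": "a", "right": "d"}
    pyautogui.press(key_map.get(action, action))

# One perception-action step: pixels in, a keypress out. In a real
# system a vision-language model would choose the action from `frame`.
frame = capture_frame()
send_action("jump")
```

Because the boundary is just pixels and synthetic input, the same harness works on any engine or build, which is what makes the approach resilient to the environment changes that break scripted tests.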
The practical impact for game studios is measurable. Full regression test cycles, which previously took days of manual QA labor, can now be completed in approximately 6 hours, with 86% of critical bugs caught before the build ships.
Early traction is already visible: ManaMind has established design partnerships with Included Games and Crazy Labs, two companies with significant footprints in the mobile gaming market, providing real-world validation that the platform integrates into active development workflows rather than remaining a proof of concept. The $1.5 million will be directed at hiring additional engineering talent, advancing the proprietary AI models, and pushing into key international markets where the global gaming industry is concentrated.
Why Autonomous Game QA Is an Investor Priority Right Now
The timing of ManaMind’s pre-seed reflects a market-wide realization that game QA is one of the most expensive and least scalable bottlenecks in modern game development. The global gaming industry’s accelerating release cadence, driven by live-service models and continuous updates, has made the old model of warehoused manual testers economically unsustainable for studios of all sizes. Venture capital interest in AI-native testing platforms has surged in 2026, as investors move from backing foundation models toward application-layer startups that solve tangible operational problems.
ManaMind also fits squarely into the broader agentic AI investment thesis, a category projected to grow from $7.84 billion in 2025 to $52.62 billion by 2030, where autonomous agents handling visual and interactive environments represent one of the more technically credible early applications. Those two figures imply a compound annual growth rate of roughly 46%, worked out below.
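A quick sanity check on the implied growth rate, using only the two numbers from the cited projection:

```python
# Implied CAGR of the cited agentic AI market projection:
# $7.84B (2025) -> $52.62B (2030), i.e. five compounding years.
start, end, years = 7.84, 52.62, 5
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # -> Implied CAGR: 46.3%
```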
ManaMind’s long-term ambition, meanwhile, goes beyond gaming. CEO Emil Kostadinov has stated that gaming is a launchpad, with the company’s eventual target being the autonomous testing layer for all software and robotics, a market substantially larger than game QA alone. The ability to reliably perceive, navigate, and evaluate complex visual environments has clear applications in physical-world autonomy, and Sure Valley Ventures’ backing suggests investors believe the gaming wedge is a genuine bridge to that larger opportunity.
Competitive Landscape
The two most relevant direct competitors at a comparable stage are GameDriver, a California-based automated game testing platform that has raised $7.7 million across 4 rounds, and PlaytestCloud, a Berlin-based game research platform with 1.5 million+ players and AI-powered analysis tools used by studios including Ubisoft, Supercell, and Zynga.
| Feature / Metric | ManaMind | GameDriver | PlaytestCloud |
| --- | --- | --- | --- |
| Testing Approach | Autonomous AI agents using vision + audio, no code hooks required | Script-based automation via HierarchyPath object detection, engine-specific | Human player panel + AI-powered post-session analysis |
| Bug Detection | 86% critical bug catch rate, auto-generates structured reports | Broad test coverage via repeatable automated scripts | Player behavior analysis, not bug detection focused |
| Setup Requirement | No engine-specific setup, works from video/audio only | Requires Unity, Unreal Engine, or Godot integration | Game build submission + target player criteria definition |
| Regression Cycle Speed | Full cycle in approx. 6 hours | Faster than manual, speed depends on script complexity | Results and AI analysis within 48 hours |
| Total Funding | $1.5M pre-seed (April 2026) | $7.7M across 4 rounds | Minimal institutional funding (bootstrapped/grant-stage) |
| Target Market | Mobile and indie studios on any engine | Unity, Unreal, Godot developers specifically | Studios of all sizes, mobile to AAA |
| Agentic AI | Core architecture: 3 coordinated autonomous agents | No, script-replay model | Partial, AI analysis layer on top of human playtests |
ManaMind leads on autonomous adaptability, making it the strongest choice for studios that need engine-agnostic, self-directed testing without manual setup overhead. GameDriver holds an edge in test precision for teams that already use Unity or Unreal and want deep, deterministic script-based automation, while PlaytestCloud remains the dominant solution when studios need real human player insights rather than automated bug detection.
TechnoTrenz’s Takeaway
In my experience covering early-stage AI funding, the rounds that age best are not always the largest ones. They are the ones where the problem is visceral, the technical approach is genuinely non-obvious, and the founding team has built something that works before raising significant capital. ManaMind checks all three of those boxes.
I think this is a big deal because the game QA problem is one of those industry pains that insiders have complained about for years but that no one has solved convincingly. Manual testing is slow, inconsistent, and does not scale with modern release cadences. Script-based automation tools like GameDriver help, but they are brittle and engine-dependent.
ManaMind’s bet is that you can build AI agents smart enough to play and evaluate games the way a real human would, but faster and at machine scale. In 2026, with vision-language models finally mature enough to make that technically credible, it is the right bet to make.