
Runware raises a £38m Series A led by Dawn Capital to build a single, low-cost API for AI media generation

Company: Runware
Founders: Ioana Hreninciuc, Flaviu Radulescu
Raised: £38m
Location: London, United Kingdom
Date: Dec 11, 2025

Runware, which powers media generation for some of the largest marketing and social content platforms in the world through a single API with up to 10x lower inference pricing and higher performance, has raised a £38 million Series A round led by Dawn Capital.

Runware is meeting enterprise demand for a better way to incorporate AI into media-creation processes, with current customers including Wix, Together.ai, ImagineArt, Quora, Higgsfield and dozens of private enterprise key accounts.

The company, which has offices in London and San Francisco, will use its Series A to continue building ‘one API for all AI’, and aims to make all 2m+ AI models on Hugging Face deployable on its platform by the end of 2026. To reach this goal, the company is investing in platform development, extending its custom AI inference platform, the Sonic Inference Engine®, and making key hires.

The AI media generation market has long been bottlenecked by three key issues: fragmented access and poor usability, latency that damages user experiences, and unit costs that fail to scale. Runware addresses all three. It reduces fragmentation by offering all AI models under one API, and it reduces latency and unit costs by designing high-performance AI inference hardware and software that achieves 10x lower CAPEX and OPEX compared to traditional AI datacenter buildouts. The result is the Sonic Inference Engine® platform, which delivers the performance of top-of-the-line GPUs at up to 9x lower cost.

Runware aggregates almost 300 model classes and 400k+ model variants behind one consistent schema and endpoint, so teams can A/B test, route, or swap models with only minor code changes. For open-source models, the platform is consistently 30-40% faster than other media inference platforms and delivers 5-10x better price-performance than incumbents; for closed-source models it delivers 10-40% savings.
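To illustrate the single-schema idea, the sketch below shows what swapping models behind one endpoint could look like in practice. It assumes a hypothetical REST endpoint, request shape, and model identifiers for illustration only; it is not Runware's documented API.

```typescript
// Minimal sketch of calling a unified media-generation API.
// NOTE: the endpoint URL, request fields, and model identifiers below are
// illustrative assumptions, not Runware's documented API.

interface GenerationRequest {
  model: string;   // a single field selects which model serves the request
  prompt: string;
  width?: number;
  height?: number;
}

interface GenerationResponse {
  imageUrl: string;
}

async function generateImage(req: GenerationRequest): Promise<GenerationResponse> {
  const res = await fetch("https://api.example-inference.dev/v1/image", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.API_KEY}`,
    },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`Generation failed with status ${res.status}`);
  return (await res.json()) as GenerationResponse;
}

async function main() {
  // Swapping or A/B testing models is a one-line change to the `model` field;
  // the endpoint, auth, and request shape stay the same.
  const a = await generateImage({ model: "vendor-x/image-v1", prompt: "a red bicycle at sunset" });
  const b = await generateImage({ model: "vendor-y/image-v2", prompt: "a red bicycle at sunset" });
  console.log(a.imageUrl, b.imageUrl);
}

main().catch(console.error);
```

The point being illustrated is that routing or A/B testing becomes a configuration choice (the model string) rather than a new provider integration.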

The demand from developers and enterprises for AI capabilities is vast: the AI inference market is estimated to be $26bn in 2025, growing to $68bn by 2028.

Scaling AI to millions of users is now existential for product teams, but it’s still technically hard and expensive. We give clients the best price and developer experience in a single API, so they can roll out any new model in minutes - without integrating dozens of providers, managing RPMs, or negotiating huge commitments. Through our API, they offer unlimited AI features to end-users, and we see them hit repeated growth peaks as a result.
Ioana Hreninciuc, Co-founder & CEO
I believe that in the future, most of the products and services will be enhanced by AI. We are building a platform that can run AI faster, more cost-effectively, with higher redundancy and lower latency. Our inference PODs are 100x cheaper and faster to build, and deployed near users, in alignment with each country’s evolving AI regulations. We’ve designed our architecture so that we can place inference capacity wherever power is available and affordable, rather than constructing large data centres that require years of approvals, construction, and new power infrastructure. We can have an inference POD up and running in 3 weeks, not 3 years.
Flaviu Radulescu, Co-founder
Runware is already proving a hit with global companies building AI applications that require media inference. Flaviu and Ioana have built the rare platform that delights developers, satisfies enterprise checklists, and bends the cost curve in the customer’s favour. Runware has the right product at the right layer, built by the right team, and we’re thrilled to be on the journey with them as they take on this huge and urgent market.
Shamillah Bankiya, Partner at Dawn Capital