Closing the evidence gap in AI risk.
Arcola AI provides real-time intelligence on AI incidents for those responsible for governing, investigating, underwriting, and researching AI risk.
Our mission
We are building the data infrastructure the AI safety ecosystem currently lacks: a live, multi-source picture of how AI harms are emerging and evolving globally.
Our platform aggregates signals from independent sources, applies structured classification with confidence scoring, and triangulates them into a view no single source can provide. We build the connective tissue that makes the ecosystem more effective.
The team
Ben Todd
Co-founder and CEO
Ben holds an MEng in Engineering Science and a PhD in computational modelling, both from Cambridge; his doctoral research was sponsored by Rolls-Royce. Over twenty years he has built and led innovation businesses bringing emerging technologies to market. His previous venture, Arcola Energy, grew from a startup into the UK's leading hydrogen fuel cell integrator before its acquisition by Ballard Power Systems for a headline USD 40 million.
Ben is a Fellow of Unreasonable Impact and alumnus of the Barclays Scale-Up programme at Cambridge Judge Business School. He brings deep experience in technology commercialisation, government and industry engagement, and building companies around technically complex products in regulated markets.
At Arcola AI, Ben leads strategy, partnerships, and business development across government, insurance, and research sectors.
Cambridge · Unreasonable Impact · Cambridge Judge Business School
Simon Mylius
Co-founder and CTO
Simon is a Chartered Engineer with a First Class degree in Linguistics and Cognitive Science from UCL and twenty years of product development experience, including thirteen years leading systems engineering teams developing hydrogen fuel cell and zero-emission drive systems. As Engineering Director at Arcola Energy, he built the team from startup to 65 engineers, delivering safety-critical systems through to acquisition by Ballard Power Systems.
Since 2023, Simon has focused on AI technical governance and post-deployment incident monitoring, completing a fellowship with the Centre for the Governance of AI (GovAI) where he applied systematic hazard analysis methodology to frontier AI.
He created the MIT AI Incident Tracker and now leads the project as a senior researcher at MIT FutureTech, where he also leads their AI Governance Mapping project. He is a member of the OECD Expert Group on AI Incidents, and his work on AI incident tracking was recently featured in Time Magazine. He has led or mentored research projects for Arcadia Impact, MARS and CBAI.
MIT FutureTech · OECD Expert Group on AI Incidents · UCL · Time Magazine
Our approach
Research-grade methodology
Our platform separates shared source infrastructure from partner-specific classification pipelines. The source layer handles continuous multi-source ingestion and AI-driven triage. Each partner defines their own taxonomy, scoring rubric, and classification logic in their pipeline. Every classification captures the model's reasoning, providing full traceability. We validate outputs against human expert review and publish our methodology.
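To make the architecture described above concrete, here is a minimal sketch of how a partner-specific pipeline might classify a signal from the shared source layer. All names, fields, and the toy rubric-based classifier are illustrative assumptions, not Arcola AI's actual schema or logic; the point is that each classification carries its confidence score and the model's reasoning for traceability.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical schema: field names are assumptions for illustration only.

@dataclass
class SourceSignal:
    """A raw item ingested by the shared source layer."""
    source: str          # e.g. a news feed or regulator bulletin
    url: str
    retrieved_at: datetime

@dataclass
class Classification:
    """Output of a partner-specific classification pipeline."""
    taxonomy_label: str   # label drawn from the partner's own taxonomy
    confidence: float     # confidence score in [0.0, 1.0]
    model_reasoning: str  # reasoning captured for full traceability
    signals: list[SourceSignal] = field(default_factory=list)

def classify(signal: SourceSignal, rubric: dict[str, float]) -> Classification:
    """Toy classifier: picks the highest-scoring label from a partner rubric."""
    label, score = max(rubric.items(), key=lambda kv: kv[1])
    return Classification(
        taxonomy_label=label,
        confidence=score,
        model_reasoning=f"Matched rubric entry '{label}' with score {score:.2f}",
        signals=[signal],
    )

signal = SourceSignal("example-feed", "https://example.org/item",
                      datetime.now(timezone.utc))
result = classify(signal, {"autonomous-vehicle-harm": 0.9,
                           "misinformation": 0.4})
print(result.taxonomy_label, result.confidence)
# autonomous-vehicle-harm 0.9
```

The separation matters: the `SourceSignal` layer is shared across all partners, while the taxonomy and rubric passed to `classify` belong to each partner's own pipeline.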
Enterprise-ready delivery
REST APIs, data exports, real-time alerting, row-level security, and multi-tenant authentication. Our dashboard is built for analysts who spend hours with the data. A production intelligence platform, not a research prototype.
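As a sketch of what API access could look like, the snippet below builds an authenticated request for recent incident records. The endpoint path, query parameters, and token handling are all hypothetical assumptions for illustration; they are not Arcola AI's published API.

```python
import urllib.request

# Placeholder values: neither the base URL nor the token format is real.
API_BASE = "https://api.example.com/v1"
API_TOKEN = "YOUR_TOKEN"  # per-tenant credential (assumed bearer scheme)

def build_incidents_request(since: str, limit: int = 50) -> urllib.request.Request:
    """Construct an authenticated GET for incident records newer than `since`."""
    url = f"{API_BASE}/incidents?since={since}&limit={limit}"
    return urllib.request.Request(
        url,
        headers={
            "Authorization": f"Bearer {API_TOKEN}",  # multi-tenant auth
            "Accept": "application/json",
        },
    )

req = build_incidents_request("2025-01-01")
print(req.full_url)
# https://api.example.com/v1/incidents?since=2025-01-01&limit=50
```

A real client would send the request and page through results; the same credential model would scope each tenant's rows via row-level security on the server side.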