Theory of change
We are building the independent evidence layer the AI safety ecosystem needs to learn from incidents.
Two blind spots
Two related problems hold the AI safety ecosystem back.
1. Decision-makers lack evidence of real-world harm. The people responsible for governing AI risk, including policymakers, regulators, insurers and investigators, often lack concrete evidence that specific kinds of harm are actually occurring in the real world. Without it, they cannot act with confidence, or end up acting on the wrong things.
2. Emerging patterns go unseen. The data on AI incidents already exists, scattered across incident databases, social media, academic studies and global news. However, critical patterns are only visible by joining the dots across sources, and because the data is siloed they are missed entirely. The ecosystem relies on retrospective analysis, so trends are only spotted after harms have already scaled.
The result: critical evidence gaps, and interventions that arrive too late to prevent harm.
How change happens
Inputs
The raw signal, at scale
- Early signals: social media and journalism as incidents emerge
- Documented incidents: curated databases and research
- Formal records: regulatory actions and court filings
- Continuous monitoring in real time, across jurisdictions and languages
Outputs
Structured intelligence delivered to decision-makers
- Incidents classified using research-community taxonomies
- Evidenced claims and traceable decisions, auditable to source
- Cross-source triangulation surfaces patterns no single source can see
- Real-time delivery to decision-makers via dashboards, APIs, alerts and data feeds
Outcomes
Evidence-based actions by the people governing AI risk
- Regulators intervene earlier, on stronger evidence
- Insurers price risk more accurately
- Investigators open and close cases faster
- Researchers detect emerging patterns earlier
Impact
Prevention of harm
- Fewer and less severe AI harms, caught earlier in their lifecycle
- Risk-based insurance pricing incentivises developers and deployers to reduce harm at source
- A public record of AI harm the ecosystem can trust
- A better-informed, more resilient ecosystem that learns from incidents
A sustainable model
Public layer: our dashboards, alerts and data feeds are freely available to the AI safety research community, policy professionals and the public, subject to source data providers' terms.
Commercial layer: bespoke structured feeds, risk reports and case intelligence for insurers, enterprises and investigators.
A diversified commercial base keeps the public layer free, independent and scalable, without depending on any single funder.