Matched Incidents
Historical AI incidents that are relevant to a scanned company's risk profile.
What are matched incidents?
Exona maintains a curated database of documented AI-related incidents: cases where an AI system caused harm, failed in production, or triggered regulatory action. When a scan completes, the enriched company profile is compared against this database. Incidents that are semantically similar to the company's profile are included in the scan result.
Matched incidents are not a judgement that the company has done anything wrong. They are signals: cases involving similar companies, products, or AI use patterns that are relevant context for an underwriter assessing this risk.
How matching works
Matching is a two-stage process:
- Semantic similarity: The company's enriched profile (operations, product category, AI description) is converted to a vector embedding and compared against incident embeddings in the database. Candidates above a similarity threshold are selected.
- LLM validation: A language model reviews the top candidates and confirms which are genuinely relevant to this specific company's context. This filters out false positives that score high on surface similarity but are not substantively applicable.
The similarity_score in the result reflects the final relevance after both stages.
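The two-stage process can be sketched in a few lines of Python. This is an illustrative sketch, not Exona's implementation: `cosine_similarity`, `match_incidents`, and the `validate` callback (standing in here for the LLM relevance check) are all hypothetical names.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length, non-zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def match_incidents(profile_vec, incidents, threshold=0.75, validate=None):
    """Stage 1: keep incidents whose embedding clears the similarity
    threshold. Stage 2: pass survivors through a validation callback
    (a stand-in for the LLM relevance check). Returns (incident, score)
    pairs sorted by descending score."""
    scored = [
        (inc, cosine_similarity(profile_vec, inc["embedding"]))
        for inc in incidents
    ]
    candidates = [(inc, s) for inc, s in scored if s >= threshold]
    if validate is not None:
        candidates = [(inc, s) for inc, s in candidates if validate(inc)]
    return sorted(candidates, key=lambda pair: pair[1], reverse=True)
```

Note that the second stage can only remove candidates, never add them: an incident that fails the similarity threshold is never shown to the validator, which is what keeps the LLM step cheap.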
Incident object
| Field | Type | Description |
|---|---|---|
| description | string | A summary of what happened: what the AI system did, what went wrong, and what the consequences were. |
| year | integer | The year the incident occurred or was first reported. |
| risk_domain | string | The category of risk the incident illustrates. See risk domains below. |
| similarity_score | float | Relevance score from 0.0 (no relevance) to 1.0 (highly similar). Incidents with scores above 0.75 are strong signals. |
| reference_urls | string[] | Links to primary sources: regulatory reports, news coverage, academic papers. |
Risk domains
| Domain | Description |
|---|---|
| Algorithmic Bias | AI system produced systematically unfair outcomes for a demographic group. |
| Model Reliability | AI system produced incorrect, erratic, or unpredictable outputs in production. |
| Data Breach | A failure in an AI system led to the exposure of personal or sensitive data. |
| Autonomous Decision Harm | An autonomous AI decision directly caused financial or physical harm to individuals. |
| Regulatory Action | A regulator investigated or penalised a company for its AI system's behaviour. |
| Adversarial Attack | An AI system was manipulated by malicious inputs, causing it to behave incorrectly. |
| Transparency Failure | Users or regulators were not adequately informed about how an AI system was making decisions. |
| Model Drift | An AI system's performance degraded over time as real-world data shifted from training data. |
| Dual Use / Misuse | An AI system designed for a benign purpose was used to cause harm. |
Example
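An illustrative scan-result fragment showing one matched incident. All values below are hypothetical placeholders, not records from the Exona database:

```json
{
  "matched_incidents": [
    {
      "description": "A lender's credit-scoring model assigned systematically lower limits to applicants from one demographic group, prompting a regulatory inquiry.",
      "year": 2021,
      "risk_domain": "Algorithmic Bias",
      "similarity_score": 0.82,
      "reference_urls": [
        "https://example.com/regulator-report"
      ]
    }
  ]
}
```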
No incidents matched
If no incidents meet the relevance threshold for a given company, matched_incidents will be an empty array []. This does not mean the company is low risk: it may simply mean that no closely analogous incidents exist in the database yet.
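Consumers should handle the empty case explicitly rather than reading it as a clean bill of health. A minimal sketch, assuming the scan result has been parsed into a Python dict; `summarize_matches` is a hypothetical helper, and the 0.75 cutoff comes from the similarity_score field description above:

```python
def summarize_matches(scan_result):
    """Summarize matched incidents, treating an empty list as
    'no analogous incidents found', not as 'low risk'."""
    incidents = scan_result.get("matched_incidents", [])
    if not incidents:
        return "No analogous incidents in the database (not a low-risk verdict)."
    strong = [i for i in incidents if i["similarity_score"] > 0.75]
    return f"{len(incidents)} matched incident(s), {len(strong)} strong (score > 0.75)"
```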
Incident database updates
The Exona incident database is updated continuously as new cases are documented. Because incidents are matched at scan time, a scan run today may return different incident matches than one run six months ago, even for the same company. The data_freshness.sources_last_checked timestamp tells you when the enrichment data was gathered; the incident database itself is not separately versioned in the API response.