Exona API
Data Reference

Matched Incidents

Historical AI incidents that are relevant to a scanned company's risk profile.

What are matched incidents?

Exona maintains a curated database of documented AI-related incidents: cases where an AI system caused harm, failed in production, or triggered regulatory action. When a scan completes, the enriched company profile is compared against this database. Incidents that are semantically similar to the company's profile are included in the scan result.

Matched incidents are not a judgement that the company has done anything wrong. They are signals: cases involving similar companies, products, or AI use patterns that are relevant context for an underwriter assessing this risk.


How matching works

Matching is a two-stage process:

  1. Semantic similarity: The company's enriched profile (operations, product category, AI description) is converted to a vector embedding and compared against incident embeddings in the database. Candidates above a similarity threshold are selected.
  2. LLM validation: A language model reviews the top candidates and confirms which are genuinely relevant to this specific company's context. This filters out false positives that score high on surface similarity but are not substantively applicable.

The similarity_score in the result reflects the final relevance after both stages.
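The two-stage flow described above can be sketched in client-agnostic terms. This is an illustrative model only: the `cosine_similarity` and `match_incidents` helpers and the in-memory incident list are our own stand-ins, not part of the Exona API, and stage 2 (LLM validation) is indicated by a comment rather than implemented.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_incidents(profile_vec, incidents, threshold=0.75):
    # Stage 1: keep incidents whose embedding clears the similarity threshold.
    candidates = []
    for inc in incidents:
        score = cosine_similarity(profile_vec, inc["embedding"])
        if score >= threshold:
            candidates.append({**inc, "similarity_score": round(score, 2)})
    # Highest-relevance candidates first. Stage 2 (LLM validation) would
    # review these and drop false positives before the final result.
    candidates.sort(key=lambda inc: inc["similarity_score"], reverse=True)
    return candidates
```

The threshold value of 0.75 here simply echoes the "strong signal" cutoff documented below; the service's actual internal thresholds are not exposed.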


Incident object

description (string)
    A summary of what happened: what the AI system did, what went wrong, and what the consequences were.

year (integer)
    The year the incident occurred or was first reported.

risk_domain (string)
    The category of risk the incident illustrates. See risk domains below.

similarity_score (float)
    Relevance score from 0.0 (no relevance) to 1.0 (highly similar). Incidents with scores above 0.75 are strong signals.

reference_urls (string[])
    Links to primary sources: regulatory reports, news coverage, academic papers.
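One way to model the incident object client-side is a small dataclass. The fields mirror the table above; the `strong_signal` property is our own convenience based on the documented 0.75 cutoff, not an API field:

```python
from dataclasses import dataclass, field

@dataclass
class MatchedIncident:
    description: str
    year: int
    risk_domain: str
    similarity_score: float  # 0.0 (no relevance) to 1.0 (highly similar)
    reference_urls: list[str] = field(default_factory=list)

    @property
    def strong_signal(self) -> bool:
        # The docs treat scores above 0.75 as strong signals.
        return self.similarity_score > 0.75
```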

Risk domains

Algorithmic Bias
    AI system produced systematically unfair outcomes for a demographic group.

Model Reliability
    AI system produced incorrect, erratic, or unpredictable outputs in production.

Data Breach
    A failure in an AI system led to the exposure of personal or sensitive data.

Autonomous Decision Harm
    An autonomous AI decision directly caused financial or physical harm to individuals.

Regulatory Action
    A regulator investigated or penalised a company for its AI system's behaviour.

Adversarial Attack
    An AI system was manipulated by malicious inputs, causing it to behave incorrectly.

Transparency Failure
    Users or regulators were not adequately informed about how an AI system was making decisions.

Model Drift
    An AI system's performance degraded over time as real-world data shifted from training data.

Dual Use / Misuse
    An AI system designed for a benign purpose was used to cause harm.
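If you want a client-side sanity check on risk_domain values, the nine domains above can be pinned in a set. Note this is our own convenience, and since the incident database is updated continuously, new domains could appear; treat unknown values leniently rather than rejecting them:

```python
# The nine risk domains documented above.
RISK_DOMAINS = frozenset({
    "Algorithmic Bias",
    "Model Reliability",
    "Data Breach",
    "Autonomous Decision Harm",
    "Regulatory Action",
    "Adversarial Attack",
    "Transparency Failure",
    "Model Drift",
    "Dual Use / Misuse",
})

def is_known_domain(domain: str) -> bool:
    return domain in RISK_DOMAINS
```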

Example

{
  "matched_incidents": [
    {
      "description": "An automated claims denial system was found to have systematically denied legitimate claims from certain demographic groups due to biased training data. The insurer faced regulatory action from the state insurance commissioner and a class-action lawsuit from affected policyholders.",
      "year": 2023,
      "risk_domain": "Algorithmic Bias",
      "similarity_score": 0.91,
      "reference_urls": [
        "https://example.com/incident-report/ai-claims-bias-2023",
        "https://example.com/regulatory/state-insurance-ai-order-2023"
      ]
    },
    {
      "description": "A machine learning-based fraud detection system began producing unexplainably high false-positive rates after a model update, causing thousands of legitimate claims to be delayed or denied. The insurer was unable to explain the system's decisions to regulators.",
      "year": 2022,
      "risk_domain": "Model Reliability",
      "similarity_score": 0.78,
      "reference_urls": [
        "https://example.com/incident-report/fraud-detection-fp-2022"
      ]
    }
  ]
}

No incidents matched

If no incidents meet the relevance threshold for a given company, matched_incidents will be an empty array []. This does not mean the company is low risk: it may simply mean that no closely analogous incidents exist in the database yet.
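A defensive way to consume the field, assuming `scan_result` is the parsed JSON response shown in the example above (the summary wording is ours):

```python
def summarize_matches(scan_result: dict) -> str:
    incidents = scan_result.get("matched_incidents", [])
    if not incidents:
        # Empty array: no analogous incidents in the database yet,
        # which is not the same thing as low risk.
        return "No matched incidents; assess other risk signals."
    strong = sum(1 for i in incidents if i["similarity_score"] > 0.75)
    return f"{len(incidents)} matched incident(s), {strong} strong signal(s)."
```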


Incident database updates

The Exona incident database is updated continuously as new cases are documented. Because incidents are matched at scan time, a scan run today may return different incident matches than one run six months ago, even for the same company. The data_freshness.sources_last_checked timestamp tells you when the enrichment data was gathered; the incident database itself is not separately versioned in the API response.
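Because matches can change between scans, it can be useful to diff two scan results for the same company. This sketch keys incidents by their description text, since the incident object carries no stable identifier in the response:

```python
def diff_matched_incidents(old_scan: dict, new_scan: dict) -> dict:
    # Keyed by description because the incident object has no ID field.
    old = {i["description"] for i in old_scan.get("matched_incidents", [])}
    new = {i["description"] for i in new_scan.get("matched_incidents", [])}
    return {"added": sorted(new - old), "removed": sorted(old - new)}
```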
