Efforts to regulate and ensure the safety of artificial intelligence systems confront a fundamental measurement problem: without knowing the total number of AI-driven operations or exposures (the denominator), counting reported AI-related harms (the numerator) yields an incomplete and potentially misleading picture of AI risk.
This issue, known as the “denominator problem,” complicates the interpretation of incident reports under new regulatory frameworks such as the European Union’s AI incident reporting requirements, which will take effect in August 2026, and emerging mandates in several U.S. states.
Understanding the Denominator Problem in AI
The denominator problem arises because while many incidents involving AI—such as failures, harms, or algorithmic biases—are documented, the total number of AI system uses or interactions that could produce harm remains unknown or poorly tracked. Without this denominator, it is impossible to calculate meaningful rates of harm or to distinguish whether an increase in reported incidents reflects higher risk, better detection, or simply greater AI deployment.
For example, autonomous vehicles represent one domain where the denominator is measurable, as regulators can track miles driven or hours of autonomous driving alongside accident counts to yield crash rates. This enables accurate safety assessments and comparisons. However, in most other AI applications, including deepfakes, hiring algorithms, and healthcare, the denominator is elusive.
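To make the arithmetic concrete, the sketch below shows why rates, not raw incident counts, are what safety comparisons require. The figures are invented for illustration and are not real crash statistics.

```python
# Minimal sketch of the numerator/denominator arithmetic.
# All figures are hypothetical, not real crash data.

def rate_per_million(incidents: int, exposure_units: float) -> float:
    """Incidents per one million units of exposure (e.g., miles driven)."""
    return incidents / exposure_units * 1_000_000

# Fleet A reports more crashes in absolute terms...
fleet_a = rate_per_million(incidents=120, exposure_units=60_000_000)  # 2.0
# ...but Fleet B, with far less driving, is riskier per mile.
fleet_b = rate_per_million(incidents=45, exposure_units=9_000_000)    # 5.0

print(f"Fleet A: {fleet_a:.1f} crashes per million miles")
print(f"Fleet B: {fleet_b:.1f} crashes per million miles")
```

Counting incidents alone would rank Fleet A as more dangerous; only the denominator reveals that Fleet B's per-mile risk is more than double.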
Challenges Across AI Domains
In the case of deepfakes, the total amount of synthetic media generated or the number of people exposed to harmful content is unknown, making the rise in reported deepfake incidents difficult to interpret. Similarly, in AI-driven hiring, although most large companies use automated systems, data on the actual volume of AI-influenced hiring decisions is lacking, hindering efforts to evaluate the scale and impact of discriminatory practices.
Healthcare represents the highest-stakes and most complex domain with respect to the denominator problem. While hospitals track adverse events and clinical decisions, no standardized methodology yet exists to calculate rates of AI-related harm per AI-assisted clinical action. Potential denominators range from every AI-generated output during patient care to the narrower subset of AI-informed clinical decisions actually acted upon by healthcare professionals. Each choice has implications for assessing responsibility and fairness.
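A simple illustration of why the choice matters: holding the count of harms fixed, each candidate denominator produces a very different measured rate. The numbers below are hypothetical, intended only to show the spread.

```python
# Hypothetical illustration of how the denominator choice changes
# the measured harm rate for the same set of incidents.

harms = 30  # AI-related adverse events identified in a review period

candidate_denominators = {
    "all AI outputs generated during care": 2_000_000,
    "AI recommendations shown to clinicians": 400_000,
    "AI recommendations clinicians acted on": 90_000,
}

for label, denominator in candidate_denominators.items():
    rate = harms / denominator * 100_000  # harms per 100,000 events
    print(f"{label}: {rate:.1f} harms per 100,000")
```

The same thirty harms translate into a rate of 1.5, 7.5, or 33.3 per 100,000 depending solely on which denominator a regulator adopts, which is precisely why the choice carries consequences for responsibility and fairness.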
Current U.S. and European regulatory frameworks require disclosure about AI systems in healthcare but do not mandate uniform, rate-based measurement or demographic stratification needed to audit equity effectively. Ongoing policy debates risk weakening these disclosure requirements, further impairing oversight.
Why It Matters
Resolving the denominator problem is essential to advancing AI governance, safety, and accountability. Without well-defined denominators, regulators, insurers, and policymakers cannot accurately assess AI risks or enforce equitable standards. This threatens to undermine new reporting obligations and could expose healthcare providers and corporate directors to liability that is difficult to manage without clear safety metrics.
As AI becomes increasingly embedded in critical sectors, establishing standardized and granular measurement frameworks that include denominators stratified by factors such as race, gender, and clinical context becomes imperative. These data are crucial for detecting structural biases and ensuring that AI deployment does not perpetuate or exacerbate inequities.
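As a sketch of what such a stratified audit might look like, the following hypothetical example computes per-group harm rates from a log of AI-assisted decisions. The data structure, field names, and group labels are assumptions for illustration, not an established reporting standard.

```python
# Sketch of a stratified rate audit over a hypothetical log of
# AI-assisted decisions, each tagged with a group and an outcome.

from collections import defaultdict

decisions = [
    {"group": "A", "harm": False}, {"group": "A", "harm": True},
    {"group": "B", "harm": False}, {"group": "B", "harm": False},
    # ...in practice, every AI-assisted decision would be logged
]

counts = defaultdict(lambda: {"harms": 0, "total": 0})
for d in decisions:
    counts[d["group"]]["total"] += 1
    counts[d["group"]]["harms"] += int(d["harm"])

for group, c in sorted(counts.items()):
    rate = c["harms"] / c["total"]
    print(f"Group {group}: {c['harms']}/{c['total']} = {rate:.1%} harm rate")
```

The key design point is that the denominator, not just the harm count, is recorded per group; a disparity in harm rates is invisible if only aggregate incident tallies are kept.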
Efforts by organizations such as the OECD and the U.S. Department of Health and Human Services to develop incident reporting and regulatory guidance must address the denominator problem to create meaningful accountability mechanisms. Until then, counting harms without a reliable context of total AI use will limit the effectiveness of governance interventions and obscure the true safety profile of AI technologies.