The paper argues that one of AI's hidden harms lies not only in biased algorithms, but in systems that simply cannot "see" certain people at all. When individuals or groups generate little or no digital trace (due to lack of internet access, infrequent institutional interactions, or limited data collection), they end up in what the paper terms "data deserts." As a result, AI systems often fail to deliver predictions or services for them, a form of structural exclusion the author calls "algorithmic exclusion."
This problem, the paper explains, is not a bug or an oversight; it is woven into digital inequality: the same social and economic disadvantages that keep people offline or under-documented also render them invisible to AI systems. That means many fairness-oriented AI policies (which focus on algorithmic bias and discrimination) may miss a large class of harms: those suffered by people who are systematically ignored by AI rather than mis-evaluated by it.
To address this, the paper recommends that regulations and AI-governance frameworks explicitly recognize algorithmic exclusion as a legitimate, policy-relevant harm, on par with bias and discrimination. Instead of only auditing for unfair predictions, governance should also check for missing predictions: whether certain populations are systematically unserved by AI tools.
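As a rough illustration of what such a "missing predictions" audit could look like, the sketch below computes, for each group, the share of cases in which the system produced any prediction at all and flags groups below a minimum coverage level. The column names, groups, and the 95% threshold are hypothetical assumptions for illustration, not anything the paper specifies.

```python
# Illustrative coverage audit: for each group, measure how often the system
# produced no prediction at all. Column names and the threshold are
# hypothetical assumptions, not taken from the paper.
import pandas as pd


def coverage_report(df: pd.DataFrame, group_col: str = "group",
                    prediction_col: str = "prediction",
                    min_coverage: float = 0.95) -> pd.DataFrame:
    """Return per-group coverage (share of rows with a prediction) and
    flag groups that fall below the minimum acceptable coverage."""
    report = (
        df.assign(covered=df[prediction_col].notna())
          .groupby(group_col)["covered"]
          .agg(coverage="mean", n="size")
          .reset_index()
    )
    report["flagged"] = report["coverage"] < min_coverage
    return report


if __name__ == "__main__":
    records = pd.DataFrame({
        "group": ["urban", "urban", "rural", "rural", "rural"],
        "prediction": [0.7, 0.4, None, None, 0.9],  # None = model could not score
    })
    print(coverage_report(records))
```

The key design choice is the denominator: the audit counts everyone the system was supposed to serve, not only the people it managed to score, which is precisely what a bias-only audit over produced predictions would miss.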
Practically, this could mean requiring developers, data collectors, and regulators to track "coverage gaps" (who is missing from the data) and to make efforts to reduce those gaps, for example through targeted data collection, inclusion of alternative data sources, or human-in-the-loop fallbacks for underserved groups. The aim is to ensure AI does not widen inequality by excluding the already marginalized.
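A minimal sketch of the human-in-the-loop fallback idea, assuming a hypothetical set of required fields and a made-up completeness threshold: cases with too little observed data are escalated to a human reviewer rather than being silently dropped or scored on near-empty records.

```python
# Illustrative fallback routing: if too few of a person's features are
# actually observed, send the case to a human reviewer instead of the model.
# The field names, threshold, and interfaces are hypothetical assumptions.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Decision:
    source: str              # "model" or "human_review"
    score: Optional[float]   # model score, if one was produced


def decide(features: dict, model: Callable[[dict], float],
           required_fields: tuple = ("income", "credit_history", "address"),
           min_completeness: float = 0.67) -> Decision:
    """Route to the model only when enough data exists to score the case."""
    observed = sum(1 for f in required_fields if features.get(f) is not None)
    completeness = observed / len(required_fields)
    if completeness < min_completeness:
        # Data-desert case: escalate to a person rather than skip or mis-score.
        return Decision(source="human_review", score=None)
    return Decision(source="model", score=model(features))


if __name__ == "__main__":
    toy_model = lambda feats: 0.8  # stand-in for a real scoring model
    sparse_applicant = {"income": None, "credit_history": None,
                        "address": "rural route 3"}
    print(decide(sparse_applicant, toy_model))  # routes to human_review
```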