An opinion piece draws a powerful comparison between modern artificial intelligence and the ancient Oracle of Delphi—a figure people once consulted for predictions about the future. The argument is that AI systems are increasingly being treated in a similar way: as authoritative sources that can predict outcomes, guide decisions, and reduce uncertainty. But this growing reliance, the author warns, carries serious risks.
The central concern is the rise of prediction-driven decision-making. AI is now used to forecast everything from hiring success to criminal behavior and financial risk. While this may seem efficient, it shifts power toward those who control predictive systems and can lead to decisions being made based on probabilities rather than human judgment or fairness. Over-reliance on predictions may also reduce individual autonomy, as people begin to act according to what algorithms expect rather than what they choose.
Another major issue is ethical and social impact. Predictions can reinforce existing inequalities—for example, if biased data leads AI to predict worse outcomes for certain groups, those predictions can become self-fulfilling. The article suggests that when society treats AI outputs as objective truth, it risks embedding bias, limiting opportunity, and normalizing unequal treatment under the guise of data-driven decisions.
Ultimately, the article argues that societies must confront the ethics of prediction itself, not just the technology. Like the ancient oracle, AI may offer answers—but those answers are not neutral or infallible. The key takeaway is that while AI can be a powerful tool, treating it as an unquestionable authority could lead to loss of human agency, fairness, and accountability in critical decisions.