A recent report highlighted by TechRadar reveals a major gap in how UK companies manage AI risk: many don’t know how to shut their systems down if something goes wrong. According to the study, 59% of businesses are unsure how quickly they could stop AI systems during a crisis, raising serious concerns about safety and preparedness.
Even among those with some confidence, readiness is limited. Only 21% of organizations believe they could disable AI within 30 minutes, a response window that matters in high-risk scenarios such as system failures, data breaches, or harmful automated decisions. This lack of rapid response capability highlights how AI adoption is outpacing governance and control mechanisms.
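The shutdown capability the report describes can be as simple as a gate that every inference request must pass through. The sketch below is a minimal, hypothetical illustration (the names `KillSwitch` and `serve_prediction` are invented here, not from the report); in a real deployment the flag would live in a shared configuration store so that flipping it disables every instance at once.

```python
import threading


class KillSwitch:
    """Process-wide flag operators can flip to halt AI inference immediately.

    Hypothetical sketch: production systems would back this with a shared
    config service or feature-flag store so all replicas see the change.
    """

    def __init__(self):
        self._disabled = threading.Event()

    def disable(self):
        self._disabled.set()

    def enable(self):
        self._disabled.clear()

    @property
    def active(self):
        return not self._disabled.is_set()


def serve_prediction(switch, model_fn, request):
    # Check the switch before every inference call and fall back to a
    # safe default (here, refusing the request) when AI is disabled.
    if not switch.active:
        return {"status": "ai_disabled", "result": None}
    return {"status": "ok", "result": model_fn(request)}


if __name__ == "__main__":
    switch = KillSwitch()
    print(serve_prediction(switch, lambda r: r * 2, 21))  # served normally
    switch.disable()
    print(serve_prediction(switch, lambda r: r * 2, 21))  # safe fallback
```

The point of the pattern is that disabling AI becomes a single, rehearsable action rather than an emergency engineering project, which is exactly the gap the 30-minute figure exposes.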
The problem goes beyond technical readiness to issues of accountability and transparency. Around 20% of employees don’t know who is responsible for AI-related decisions in emergencies, and only 42% of companies feel confident they could explain an AI failure to regulators or leadership. Additionally, about one-third of organizations fail to clearly disclose their use of AI systems.
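Being able to explain an AI failure after the fact usually comes down to whether decisions were recorded with a named owner at the time they were made. The sketch below shows one hypothetical shape such an audit record might take (the field names are assumptions for illustration; real schemas vary by organization and regulator):

```python
import json
import time


def log_ai_decision(log, *, system, owner, decision, inputs):
    """Append one structured audit record for an AI decision.

    Capturing the accountable owner alongside each decision is what makes
    it possible later to answer "who was responsible?" and "why did the
    system do this?" for regulators or leadership.
    """
    record = {
        "timestamp": time.time(),   # when the decision was made
        "system": system,           # which AI system made the call
        "owner": owner,             # named person/team accountable for it
        "decision": decision,       # what the system decided
        "inputs": inputs,           # the inputs that drove the decision
    }
    log.append(json.dumps(record))  # store as one JSON line per decision
    return record
```

A plain append-only log like this is deliberately low-tech; the survey's finding is that even this baseline (a known owner, a traceable decision) is missing in a large share of organizations.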
Overall, the report warns that AI risks are not just technical—they are organizational and strategic. Experts recommend that businesses establish clear leadership responsibility, improve monitoring and auditing systems, and integrate AI risk management into broader cybersecurity strategies. Without these steps, companies risk losing control over the very systems they are increasingly relying on.