A growing cross-party group of more than 100 UK parliamentarians is demanding stricter, binding regulation of powerful artificial-intelligence (AI) systems.
The coalition, backed by the nonprofit Control AI and supported by former senior officials, including a past defence secretary and environment minister, warns that unchecked development of frontier AI could pose existential risks, drawing comparisons to the potential harms of nuclear weapons.
Critics argue that despite previous efforts — including a major 2023 summit on AI safety and the establishment of the AI Security Institute — the government’s follow-through has been weak. Voluntary guidelines and soft oversight are seen as inadequate as AI becomes increasingly integrated into everyday life and national infrastructure.
The lawmakers’ demands come as the government, while planning to introduce a new AI bill, appears to favour narrow, domain-specific regulation rather than an overarching law governing all AI systems. This approach leaves significant uncertainty about when, or whether, comprehensive AI safeguards will be implemented.