Recent reporting from Morning Brew highlights a notable shift: warnings about the risks and pace of artificial intelligence now come from inside the tech industry itself, not just from outside critics. Senior engineers, safety researchers, and even some former employees of leading AI labs are speaking out about concerns ranging from ethical trade-offs and governance to societal impact, a sign that unease with AI’s trajectory is spreading among the people building the technology.
One key indicator is a spate of high-profile departures and public statements by AI researchers questioning whether their organisations prioritise speed and product expansion over safety and alignment. At companies such as Anthropic and OpenAI, senior contributors, including safety researchers, have left and in some cases published letters or posts voicing concern about the broader implications of AI development and the need for deeper thinking about ethical risks.
Beyond resignations, many AI insiders are openly discussing the existential and societal risks of advanced models: job displacement, bias in decision-making systems, misuse of AI for misinformation or surveillance, and the difficulty of aligning highly autonomous systems with human values. That these conversations are happening in public, including direct statements about risk from within AI research communities, reflects a deeper internal debate over how fast development should proceed and where the guardrails should be placed.
While some industry leaders still champion AI’s benefits and potential, these internal warnings underscore a divergence of views within the AI sector. Insiders caution that without careful governance, ethical frameworks, and broader public policy engagement, AI’s rapid evolution could outpace society’s ability to manage its impacts responsibly, a message that resonates beyond tech circles into economic, regulatory, and public policy discussions worldwide.