The Technology Review article argues that traditional approaches to AI safety, such as prompt controls and simple guardrails, are no longer sufficient in the era of agentic AI, where systems don't just generate outputs but take actions autonomously. As AI evolves from passive tools into active agents that interact with software, data, and real-world systems, organizations must rethink governance from the ground up: the focus shifts from controlling outputs to controlling what agents can access, decide, and execute.
A key idea is treating AI agents like digital employees rather than software tools. Unlike traditional programs that follow fixed instructions, agents make decisions dynamically, choosing which tools to use, what data to access, and how to act. This creates a major governance challenge: it becomes harder to track responsibility, enforce rules, and ensure accountability. As a result, organizations must provision agents with identity-based controls, scoped permissions, and audit trails similar to those used for human workers.
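To make the "digital employee" framing concrete, here is a minimal Python sketch of what identity-based controls for an agent might look like. The AgentIdentity class, the permission strings, and the invoice_bot example are hypothetical illustrations under assumed requirements, not anything specified in the article.

```python
import datetime
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """A distinct, auditable identity for an AI agent, mirroring a human user account."""
    agent_id: str
    owner: str  # the human or team accountable for this agent
    permissions: set[str] = field(default_factory=set)  # scoped, least-privilege grants
    audit_log: list[dict] = field(default_factory=list)

    def is_allowed(self, action: str) -> bool:
        """Check a requested action against the agent's granted permissions."""
        allowed = action in self.permissions
        # Every check is recorded, permitted or not, so behavior can be reconstructed later.
        self.audit_log.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "agent": self.agent_id,
            "action": action,
            "allowed": allowed,
        })
        return allowed

# Usage: an agent provisioned like an employee, with narrow grants and a named owner.
invoice_bot = AgentIdentity(
    agent_id="invoice-bot-01",
    owner="finance-team",
    permissions={"read:invoices", "write:payment_drafts"},
)
invoice_bot.is_allowed("read:invoices")   # True, and logged
invoice_bot.is_allowed("delete:records")  # False, and logged
```

The design choice here mirrors human identity management: the agent never holds blanket access, and the audit trail is produced by the enforcement layer itself rather than by the agent's own (potentially unreliable) self-reporting.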
The article emphasizes that effective governance must be built around boundaries rather than prompts. Instead of relying on instructions that agents might ignore, companies need hard constraints at critical points such as data access, system permissions, and action execution. This includes defining what an agent is allowed to do, when it must escalate to a human, and how its actions are monitored. Without these structural controls, even well-designed AI systems can behave unpredictably or cause unintended consequences.
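As an illustration of boundaries enforced outside the prompt, the sketch below gates every proposed action through a hard policy check before execution. The ALLOWED_ACTIONS set, the ESCALATION_THRESHOLD value, and the escalate_to_human helper are assumptions invented for this example, not details from the article.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str          # e.g. "send_payment"
    amount: float = 0.0

# Hard limits live in code/configuration the agent cannot rewrite,
# unlike a prompt instruction it could misinterpret or ignore.
ALLOWED_ACTIONS = {"read_report", "draft_email", "send_payment"}
ESCALATION_THRESHOLD = 1_000.00  # hypothetical: payments above this need a human

def escalate_to_human(action: Action) -> bool:
    """Placeholder for a real approval workflow (ticket, chat approval, etc.)."""
    print(f"Escalating {action.name} (${action.amount:.2f}) for human review")
    return False  # treated as denied until a human approves

def execute_with_boundaries(action: Action) -> bool:
    """Enforce the boundary before execution: deny, escalate, or proceed."""
    if action.name not in ALLOWED_ACTIONS:
        return False                      # outside the agent's charter: hard stop
    if action.name == "send_payment" and action.amount > ESCALATION_THRESHOLD:
        return escalate_to_human(action)  # high-impact action: human in the loop
    # Monitored execution would happen here (and be written to an audit trail).
    print(f"Executing {action.name}")
    return True

execute_with_boundaries(Action("draft_email"))             # runs
execute_with_boundaries(Action("send_payment", 5_000.00))  # escalates
execute_with_boundaries(Action("drop_database"))           # denied outright
```

The point of the sketch is structural: the allow/deny/escalate decision happens in a layer the agent cannot talk its way past, which is exactly the shift from prompt-based to boundary-based control.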
Overall, the takeaway is that as AI becomes more autonomous, governance must evolve before deployment, not after problems arise. Organizations that adopt an "agent-first" approach, focusing on identity, control, and accountability from the start, will be better positioned to scale AI safely. Those that treat AI like traditional software risk building systems that are powerful but difficult to control, audit, or trust.