Governments around the world are increasingly stepping in to regulate artificial intelligence in the workplace, aiming to shield workers from potential harms such as mass job displacement, algorithmic bias, and invasive surveillance. Projections suggest that AI could automate a significant portion of work tasks, raising serious concerns about economic inequality if productivity gains don’t translate into fair outcomes for employees. To address these risks, policy responses now range from retraining programs to ethical guidelines that prioritize worker protections.
One of the central challenges in crafting AI regulation is balancing protection with innovation. Regulators must design rules that guard worker rights without stifling the development of beneficial technologies. To guide this balance, the International Labour Organization (ILO) has proposed guidelines emphasizing labor rights, social dialogue, and the prevention of worker exploitation, so that AI’s growth doesn’t come at the cost of fairness.
A key example is the European Union’s AI Act, which takes a risk-based approach. It imposes stricter obligations on “high-risk” AI systems, a category that covers tools used for hiring, performance evaluation, and employee monitoring, and mandates measures such as data governance, bias assessments, and human oversight. In the U.S., New York City’s Local Law 144 similarly requires independent bias audits of automated employment decision tools and transparency about when and how they are used.
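To make the audit requirement concrete, here is a minimal sketch, in Python with made-up input data, of the impact-ratio statistic that New York City-style bias audits center on: each group’s selection rate divided by the selection rate of the most-selected group. The function name and data format are illustrative assumptions, not drawn from the law or any auditing standard.

```python
from collections import Counter

def impact_ratios(candidates):
    """Compute per-group impact ratios from (group, selected) pairs.

    `candidates` is an iterable of (group, selected) tuples, where
    `selected` is True if the automated tool advanced the candidate.
    Input format is hypothetical, for illustration only.
    """
    totals, chosen = Counter(), Counter()
    for group, was_selected in candidates:
        totals[group] += 1
        if was_selected:
            chosen[group] += 1

    # Selection rate: share of each group's candidates the tool selected.
    rates = {g: chosen[g] / totals[g] for g in totals}

    # Impact ratio: each group's rate relative to the most-selected group.
    # Guard against division by zero when no one was selected at all.
    best = max(rates.values()) or 1.0
    return {g: rate / best for g, rate in rates.items()}

if __name__ == "__main__":
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    print(impact_ratios(sample))  # {'A': 1.0, 'B': 0.5}
```

An auditor would report such ratios per demographic category; ratios well below 1.0 flag groups the tool selects at disproportionately low rates.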
To make regulation more effective, some governments are also pushing for greater worker involvement in how AI is adopted in workplaces. Proposals include third-party impact assessments, joint labor-management oversight committees, and participatory bodies that set standards for AI deployment. These mechanisms aim not just to regulate AI but to give workers a real voice in shaping how it’s used.