The US Department of Homeland Security (DHS) is rapidly expanding its use of artificial intelligence (AI) in immigration enforcement, aiming to streamline decision-making and improve efficiency. In January 2025, DHS issued a seven-step AI playbook outlining safe, mission-enhancing uses of generative AI, with human review required.
The agency is also developing ImmigrationOS, a new platform built in collaboration with Palantir Technologies and designed to unify various immigration-enforcement tools into a single interface. Customs and Border Protection (CBP) has identified 75 AI use cases, 31 of them already deployed, including facial recognition, cargo scanning, and predictive threat assessments. However, 13 of these use cases have been flagged for potential impacts on public safety and rights.
Advocacy groups and legal scholars warn that the rapid adoption of AI may compromise transparency and accountability in immigration practices. Concerns include bias in AI decision-making, particularly in facial recognition, which has been shown to have higher error rates for people of color. There are also worries about civil liberties, including wrongful detentions and due process harms tied to AI-driven surveillance and adjudication tools.
DHS maintains that the AI changes are intended to speed routine work and free officers to focus on complex files, with human review and oversight in place to prevent errors and ensure fairness. Critics counter that the expansion of AI in immigration enforcement nonetheless raises significant concerns about transparency, accountability, and the potential for bias and error.