Gujarat High Court Chief Justice Sunita Agarwal has defended the court’s newly introduced artificial intelligence policy, warning that AI use in judicial decision-making carries significant risks and could weaken public trust in the justice system. Speaking at a national judicial conclave, Justice Agarwal said the policy was designed not to ban AI entirely but to ensure that technology does not compromise judicial independence, constitutional values, or the integrity of legal reasoning. She emphasized that courts must remain rooted in “human conscience” and accountability rather than automated systems.
The Gujarat High Court’s AI policy places strict restrictions on how artificial intelligence may be used within courts. Judges are prohibited from using AI for adjudication, legal reasoning, judgment writing, sentencing considerations, bail decisions, interpretation of facts, or any substantive decision-making process. However, the policy still allows limited AI use for administrative functions such as legal research, scheduling, translation, transcription, case management, and productivity tasks, provided all outputs are independently reviewed by humans.
The court’s cautious approach reflects broader concerns about AI “hallucinations,” bias, confidentiality breaches, and the possibility of machine-generated legal errors. The policy specifically warns against relying on AI-generated case citations or legal references without independent verification from authoritative legal sources. It also prohibits court officials from uploading confidential litigant information, witness details, or sensitive legal records into public AI systems. Judicial officers remain personally responsible for every order and observation issued in their name, even if AI tools were involved during preparation or research.
The Gujarat High Court has also grown increasingly active in addressing wider AI-related risks, including deepfakes and synthetic media. In recent months, the court issued notices on public interest litigations (PILs) seeking regulation of AI-generated fake content targeting constitutional authorities and public institutions. Legal experts say the court’s policy reflects a growing international debate over how AI should be integrated into legal systems without undermining fairness, transparency, and democratic accountability. Many judicial systems worldwide are exploring AI-assisted administration while resisting the idea of machines replacing human judicial reasoning.