A senior lawyer and Supreme Court advocate recently offered a striking analogy for how artificial intelligence functions in legal practice, likening AI tools to “a brilliant associate who’s also schizophrenic.” The comparison underscores the dual nature of AI in law — powerful and insightful at times, yet unpredictable and potentially erratic without careful supervision. The advocate’s comments reflect broader debates in the legal profession about integrating AI into research, drafting, and strategy while guarding against errors, hallucinations, and misplaced confidence in machine outputs.
The lawyer noted that AI can often make highly valuable contributions, such as summarising case law, drafting documents, or spotting connections that might take humans much longer to identify. However, the analogy highlights a persistent concern: AI can also produce misleading, inconsistent, or contextually inappropriate results that seem plausible at first glance. Just as an exceptionally talented but unstable colleague might deliver brilliant insights one moment and confusing or dangerous suggestions the next, AI requires constant human oversight to ensure its outputs are reliable and meet legal standards.
This perspective resonates with many legal professionals who see AI as a tool that enhances human capabilities but cannot replace critical judgment, ethical responsibility, or deep domain expertise. In practice, lawyers emphasise that AI should be treated as an assistant that accelerates routine work — saving time on research or drafting — while humans remain fully accountable for final decisions and quality control.
Overall, the comparison serves as both a caution and a compliment: AI can be extraordinarily useful in legal work, but its integration demands vigilance, scepticism, and clear professional protocols to mitigate the risks of incorrect or unpredictable suggestions.