06-07-2025
Justice in the Age of Artificial Intelligence: Who Rules, Man or Machine?
In a landmark ruling issued by the U.S. District Court for the Southern District of Florida in May 2025, a real-world example of AI hallucination came to light. The story began when a visiting attorney from California, alongside a local Florida lawyer, submitted a legal brief citing a judicial decision from a Delaware court. The shock came when the court discovered that the cited ruling never existed: it had been fabricated by an AI tool used for legal research. This alarming incident highlights an existential challenge that threatens the core principles of justice and accountability within the legal system, namely the necessity of fairness, integrity, and infallibility in justice.
Generative artificial intelligence has become a formidable, disruptive force within courtrooms, law offices, and legislative bodies. Today, AI actively participates in drafting contracts, generating complex legal arguments, editing judicial decisions, and even producing preliminary legislative texts. This is not simply a technical breakthrough; it is a paradigm shift that redefines the very structure of our legal systems.
Recent studies reveal that AI-generated legal texts now span a vast array of critical documents—from commercial contracts and legal pleadings to judicial rulings, legislative proposals, legal complaints, and even preliminary police reports. These are produced using advanced tools like ChatGPT, Westlaw, and CoCounsel, creating a new legal reality where algorithmic power converges with human intention. Law firms increasingly rely on AI to rapidly produce draft contracts, while courts use it to analyze case patterns and predict outcomes. Some legislatures have even begun accepting draft bills generated by AI tools, subject only to final human review.
This dependency raises critical questions of responsibility, review, and consequences. Who is accountable for errors? Who verifies the content? Who bears the legal implications of a misstep?
Amid this enthusiasm for technological progress, deeper challenges emerge—ones that extend far beyond technical concerns to strike at the heart of ethical and philosophical questions about justice itself. Foremost among these challenges are the dangers of hallucinations and bias. As clearly demonstrated by the Florida case, AI tools, despite their computational power, can generate fictitious citations and entirely false legal precedents. This is not a minor technical glitch—it undermines the foundation of legal fairness and equality before the law. Bias embedded in training data may skew legal analysis, raising profound philosophical concerns about how justice can be achieved when the algorithm's foundation is inherently flawed.
A second looming threat is the phenomenon of legal floodgates. The ease with which AI can generate vast volumes of legal text may lead to an overwhelming influx of redundant or unnecessary documents. Courts may become buried under irrelevant data, straining judicial efficiency and potentially damaging public trust in the legal system. The justice process may become clogged with voluminous yet valueless content, diluting the importance of professional legal judgment and undermining procedural clarity.
A third and equally troubling issue is that of authenticity and authorship. Here arises a fundamental question that strikes the moral fabric of the legal profession: Who truly authored a given legal text? Does a document reflect the considered intention of an attorney or the deliberation of a judge—or is it merely the product of an algorithm, devoid of human intent or ethical responsibility? This issue plunges us into the domain of moral philosophy and legal theory, where the 'original intent' behind a legal document is paramount. When human authorship is obscured, the chain of accountability becomes dangerously unclear, threatening the legal system's foundational principles.
Legal institutions across the globe vary in how they approach these transformations, exposing a troubling regulatory gap. Some courts—particularly in the United States, as illustrated by the Florida decision—now explicitly prohibit the submission of AI-generated legal briefs or rulings unless thoroughly reviewed by a human. Meanwhile, other jurisdictions require only clear disclosure of AI usage and mandate human review prior to official submission. This divergence reveals the lack of a unified regulatory framework to govern such technologies.
On the other side of the equation, tech companies have initiated voluntary self-regulation, embedding safeguards to limit AI output in sensitive legal contexts. While such efforts are commendable, they lack legal enforcement and are largely driven by internal ethics and market realities. This reveals the limitations of self-regulation and underscores the urgent need for external legislative intervention to foster long-term trust in legal institutions.
Justice today is no longer solely written by the pens of lawyers and the verdicts of judges—it is increasingly authored in lines of code and AI-generated prompts. This transformation is not merely technical; it is deeply philosophical, changing how we understand law, its origins, and the scope of accountability. The question is no longer 'Should we use AI?' but rather 'How do we use it in a way that ensures justice, protects truth, and preserves the irreplaceable role of human conscience in legal decision-making?'
Law is not a static script; it is the living spirit of justice. It must not be distorted by algorithms nor misled by artificial minds.
Professional integrity must remain indivisible and untouchable, no matter how advanced the tools we wield.