Using AI in Legal Practice Without Losing Judgment

Julie DeMuth Mellendorf, Corey Minard, Jacqueline Tomei
5/7/2026

As AI in legal practice expands, legal teams need to implement its use responsibly and defensibly.

AI can generate a first draft in seconds. But can legal teams use it without losing judgment? In practice, the most significant risk lies in responses that appear thoughtful, polished, and nearly complete but then reveal their deficiencies when tested against actual legal judgment.

In a market increasingly saturated with AI tools, access to technology is no longer the differentiator. The distinguishing factor is whether it is used in a manner that is responsible, defensible, and consistent with professional judgment. Legal teams that establish practical guardrails now can better position themselves to move quickly with confidence, preserve client trust, and avoid missteps.

Why judgment matters

A proactive, responsible approach is critical for both in-house counsel and law firms. As AI becomes embedded in routine legal workflows, access and use will continue to normalize. Legal teams can distinguish themselves by using AI tools in a way that remains aligned with professional obligations.

Those obligations operate at two levels. At the individual level, lawyers remain accountable under their bar licenses for exercising independent judgment in their use of AI tools, maintaining client confidentiality and privilege, understanding when and how sensitive information may be used with such tools, and complying with applicable organizational policies. At the organizational level, firms and legal departments bear responsibility for establishing the controls, supervision, and governance that shape how AI is used in practice through a sound AI governance program: effective policies, clear usage guidelines, training, oversight mechanisms, and ongoing monitoring of AI use.

Here is where the human advantage lies. The value lawyers provide has never been the mere production of text. It is judgment: the ability to assess risk in context, identify what the tool did not consider, weigh uncertainty, and stand behind a conclusion when the stakes are real.

Much of the public discussion regarding AI in the legal profession has focused on hallucinations, including fabricated cases and incorrect citations. That concern is justified. In practice, however, the more difficult problem is often the answer that is plausible but wrong.

That type of output is more dangerous precisely because it does not announce itself. It might rely on real but weak authority, collapse distinctions between jurisdictions, overstate conclusions, or omit critical facts. It often reads exactly like what a busy lawyer hoped to see. That is why it can enter a workflow without sufficient scrutiny.

Consider three scenarios:

  • A lawyer asks AI to draft a clause based on a prior deal. The output appears tailored but subtly omits a key limitation on liability that was negotiated for a specific risk profile.
  • A research prompt returns a clean, well-structured answer that cites real authority but blends standards across jurisdictions, resulting in an overbroad conclusion.
  • A summary of a contract highlights major provisions but fails to flag a nonstandard indemnity or termination trigger that materially shifts risk.

In each scenario, the issue is not that the output is obviously wrong, but that it is incomplete or misaligned in ways that require legal judgment to detect.

The dependency loop: Automation and confirmation bias

Overreliance on AI often begins as convenience. A lawyer uses AI for a draft, clause, or summary; the response arrives quickly, reads cleanly, and appears complete. At that point, automation bias begins to take hold. The polish of the output can lead to less scrutiny than the work product requires, particularly under time constraints.

Confirmation bias then reinforces the effect. If the output aligns with the lawyer’s initial instinct, it is more likely to be accepted than tested. Over time, AI shifts from being a tool for generating options to one that substitutes for judgment.

The safeguard is straightforward but requires discipline. AI output must be treated as a hypothesis, not a conclusion. Primary-source verification, contrary analysis, and meaningful human review are fundamental. For example, a lawyer using AI-generated research should confirm the cited authorities directly, test whether contrary authority exists, and assess whether the reasoning holds under the specific facts and jurisdiction at issue before relying on the output.

Courts are already signaling the direction

The legal risks of using AI are no longer theoretical. Courts are beginning to address AI-related issues within familiar doctrinal frameworks: competence, candor, supervision, confidentiality, privilege, and procedural integrity.

  • United States v. Heppner, No. 1:25-cr-00503-JSR (S.D.N.Y. Feb. 17, 2026). The court held that AI-generated materials created using Anthropic’s Claude and later shared with counsel were not protected by attorney-client privilege or work product because confidentiality was not preserved and the materials were not prepared at the direction of counsel. The decision highlights a broader risk in AI use: Inputs and outputs might fall outside traditional privilege protections when tools are used without sufficient controls, legal direction, or an understanding of how confidentiality is maintained.
  • Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023). Lawyers were sanctioned after submitting filings that relied on nonexistent cases generated by OpenAI’s ChatGPT, underscoring that while AI can assist in drafting, counsel remains responsible for verifying the existence and accuracy of legal authority.
  • Recent 2025 sanctions orders. Courts have imposed penalties, public reprimands, and, in at least one high-profile matter, disqualification and referral to disciplinary authorities following filings containing fabricated AI-generated citations. These actions reinforce that failures in verification and oversight can lead to significant professional consequences.

Courts are also moving beyond reaction toward governance. In 2025, the High Court of England and Wales warned that citing fabricated AI-generated authorities could expose lawyers to contempt or even criminal consequences and framed misuse of AI as a professional responsibility issue, not just a technological one.

American Bar Association Model Rules of Professional Conduct under pressure

Having a governing ethical framework in place is not new. What is new is how AI places sustained pressure on those rules at scale, at speed, and, often, in ways that are not immediately visible.

  • Rule 1.1. Competence
    AI can produce analysis that appears authoritative but is incomplete or incorrect. The risk is more than just error. It is misplaced confidence in output that appears reliable.
  • Rule 1.4. Communication
    AI affects the cost, timing, and risk profile of legal services, which creates an obligation to communicate outcomes and the implications of AI use.
  • Rule 1.6. Confidentiality
    Risk arises at the point of input. Data entered into AI tools might be retained, processed, or exposed in ways the user does not fully understand. Enterprise tools and consumer tools are not equivalent.
  • Rule 5.1. Supervisory responsibilities
    AI-generated work products can outpace traditional review processes. Without defined controls, polished drafts might be mistaken for vetted work.
  • Rule 5.3. Nonlawyer assistance
    AI should be treated as a nonlawyer assistant. The risk arises when its output is treated as final rather than preliminary.

Agentic AI and embedded controls

The practical response is to embed controls directly into workflows rather than relying solely on after-the-fact review. This approach includes defining what the system is permitted to do, how decisions are made, and where human oversight must be applied.

As legal teams move from generative AI to agentic systems, these pressures intensify. A drafting tool can produce an incorrect answer. An agentic system can take incorrect action, such as retrieving the wrong document, routing a matter improperly, or transmitting sensitive information without appropriate review. Legal teams must confirm that they are using AI in a way that preserves judgment and remains defensible, which requires both awareness of risk and structured controls embedded into how legal work is performed.

Effective governance requires operational controls within the workflow itself:

  • Role-based access and least-privilege permissions
  • Approval gates before external transmission or filing
  • Exception routing for low-confidence or policy-triggering outputs
  • Logging and audit trails
  • Data loss prevention and connector controls
  • Human review calibrated to the level of legal risk, not the polish of the output
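Several of these controls can be encoded directly in software rather than left to after-the-fact review. The sketch below is a minimal, illustrative example of approval gates, exception routing, and audit logging for AI output; the category names, confidence threshold, and `AIOutput`/`Decision` types are hypothetical assumptions for illustration, not part of any specific product or standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical policy values -- each organization would define its own
# no-automation categories and escalation thresholds.
NO_AUTOMATION = {"court_filing", "legal_opinion", "privileged_analysis"}
LOW_CONFIDENCE_THRESHOLD = 0.80

@dataclass
class AIOutput:
    task_category: str   # e.g., "contract_summary", "court_filing"
    confidence: float    # tool-reported confidence score (assumed available)
    content: str

@dataclass
class Decision:
    route: str                              # "lawyer_review" or "standard_qa"
    reasons: list = field(default_factory=list)
    logged_at: str = ""                     # audit-trail timestamp

def route_output(output: AIOutput) -> Decision:
    """Apply approval gates and exception routing before any external use."""
    reasons = []
    if output.task_category in NO_AUTOMATION:
        reasons.append("no-automation category: lawyer review required")
    if output.confidence < LOW_CONFIDENCE_THRESHOLD:
        reasons.append("low confidence: escalate for human review")
    route = "lawyer_review" if reasons else "standard_qa"
    # Every routing decision is logged with a UTC timestamp for auditability.
    return Decision(route=route,
                    reasons=reasons,
                    logged_at=datetime.now(timezone.utc).isoformat())
```

The design point is that the gate keys off the legal risk of the task category, not the polish of the text: a fluent draft in a no-automation category still routes to lawyer review.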

Agentic AI does not eliminate the need for a lawyer-in-the-loop. It makes that requirement more concrete. The focus should extend beyond whether a lawyer reviews the final draft to whether appropriate controls are in place at each point where the system can influence risk, data movement, or decision-making.


Customized AI governance that supports sound legal judgment

As AI becomes more embedded in legal workflows, organizations need governance that matches the level of risk. Crowe specialists outline a practical ladder of risk, from lower-risk AI assistance to high-stakes legal tasks requiring lawyer review, and highlight key controls in this graphic.

Download the ladder of risk graphic

Additional rules implicated

  • Rule 1.5. Fees
    Efficiency gains raise questions about billing judgment and client expectations regarding transparency.
  • Rule 2.1. Adviser
    AI might overlook business context, risk tolerance, and practical consequences.
  • Rule 3.3. Candor to the tribunal
    Fabricated or misstated authorities raise candor issues, not just competence concerns.
  • Rule 5.5. Unauthorized practice
    AI might blur the line between assisting counsel and delivering unsupervised legal advice.
  • Rule 8.4. Misconduct
    AI can amplify bias or generate misleading content, raising broader professional responsibility concerns.

Lawyer-in-the-loop: A practical checklist

Before using AI

  • Confirm the tool is approved for the task and data type.
  • Use abstracted or summarized facts where possible.
  • Identify whether the matter involves privileged, confidential, or high-risk content.

Before relying on output

  • Verify against primary sources or controlling documents.
  • Confirm all authorities exist and support the proposition.
  • Test the reasoning, not just the language.
  • Assess fit with jurisdiction, facts, and risk tolerance.

Before external use

  • Require lawyer review for filings, legal conclusions, and sensitive content.
  • Use controlled comparison methods, such as blacklines, for document edits.
  • Make approval authority explicit.

At the workflow level

  • Define no-automation categories.
  • Implement quality assurance sampling.
  • Use escalation triggers for uncertainty.
  • Maintain a written AI policy with training and governance.
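Workflow-level rules like these can also live as policy-as-code so they are applied consistently rather than remembered. The following is a minimal sketch under stated assumptions: the `POLICY` structure, sampling rate, and trigger phrases are invented for illustration and would come from an organization's own written AI policy.

```python
import random

# Hypothetical written-policy encoding; values are illustrative only.
POLICY = {
    "no_automation": ["court_filing", "legal_opinion"],
    "qa_sample_rate": 0.10,  # review 10% of routine outputs at random
    "escalation_triggers": ["not sure", "may vary by jurisdiction"],
}

def needs_escalation(text: str, policy=POLICY) -> bool:
    """Escalate when the output itself signals uncertainty."""
    lowered = text.lower()
    return any(t in lowered for t in policy["escalation_triggers"])

def select_for_qa(policy=POLICY, rng=random.random) -> bool:
    """Quality-assurance sampling: randomly pull routine outputs for review."""
    return rng() < policy["qa_sample_rate"]
```

Separating the policy data from the enforcement functions means the written AI policy and the enforced one can be reviewed, versioned, and audited together.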

AI in legal practice: The human advantage

AI in legal practice does not replace lawyers. However, it can reduce friction in work that does not require uniquely human judgment and can allow lawyers to focus on strategy, negotiation, counseling, and accountability.

Legal teams that want to distinguish themselves adopt AI deliberately, with guardrails they can defend. The human advantage is neither resistance to innovation nor uncritical adoption. The human advantage is judgment.

For most organizations, the question goes beyond whether AI will affect legal work. Instead, the more pressing issue is how to integrate it without compromising judgment, confidentiality, supervision, or defensibility. Such an approach requires practical governance, including AI use policies, structured review protocols, vendor diligence, escalation frameworks, and contract terms tailored to AI-enabled tools.

As access to AI becomes commonplace, discipline, not access, will define the leading legal teams. Legal teams that approach AI with this level of discipline can reduce risk and position themselves to move faster and with greater confidence. With experienced legal guidance, organizations can translate these principles into workable policies, workflows, and controls that can be implemented and defended in practice.

Mitigate AI risk with AI governance
If your company uses AI, you need an AI governance plan. We can help. 

Contact our AI governance team



Julie DeMuth Mellendorf
Studio Quality and Risk Management Leader
Corey Minard
Senior Manager, Risk Consulting
Jacqueline Tomei
Risk Consulting
