Generative AI offers real promise for law firms, streamlining document drafting, accelerating research, and automating routine tasks. Yet alongside that promise lies material risk: hallucinated content, confidentiality lapses, and overreliance that can erode judgment. At CPM, we believe firms should adopt AI with both ambition and restraint, establishing policies that promote responsible use while guarding against liability.
A Cautious Foundation: Ethics Beyond Innovation
The American Bar Association’s recent Formal Opinion 512 makes clear that using generative AI does not relieve a lawyer of ethical obligations. Among these are the duties of competence, confidentiality, communication, candor toward tribunals, supervisory responsibility, and charging reasonable fees. The opinion underscores that AI should serve as a “legal assistant,” not a substitute for human judgment.
This means firms must understand how any AI tool functions, its known failure modes, and how output is generated. It also means implementing review protocols to ensure that someone verifies AI work before it reaches a client or is filed in court.
How to Frame Internal AI Use: Principles Over Prescriptions
Rather than prescribing a checklist, a robust AI policy should rest on guiding principles that allow flexibility as tools evolve:
- Transparency & Client Communication. Clients may have a right to know when AI is involved in their matter. Disclosure enables informed decision-making, especially if AI is used for drafting, analysis, or review.
- Defined Use Scope. Not every task should be given to AI. High-risk areas—like legal strategy, court filings, or opinions—should remain firmly under human control. AI is better suited for first drafts, internal memos, or document summarization.
- Mandatory Review and Escalation. No AI output should go unchecked. Reviewers must confirm accuracy, verify sources, and cross-reference AI output against traditional research before it is incorporated into any work product.
- Data Safeguards. Any client information passed into AI tools must be scrubbed as appropriate, and only secure, vetted platforms should be used. If an AI provider retains or trains on prompt data, or lacks adequate security controls, the risk may be unacceptable.
- Liability Allocation. A firm’s policy must clearly place responsibility for AI-related errors, whether introduced or overlooked, on the humans who supervise the work. Insurance policies should be revisited to confirm such mistakes are covered.
- Ongoing Education & Oversight. AI tools evolve rapidly. Users and supervisors must stay up to date on new models, limitations, and case law developments.
These principles help avoid rigid rules that become outdated, while still establishing guardrails around risky areas.
Real-World Cautions and Use Cases
In actual practice, generative AI can help in meaningful ways if its output is handled wisely. For instance, AI may produce a working draft of a contract or scan multiple cases to propose relevant authorities. But without verification, that draft might cite a case that never existed, or misstate a legal principle. Such “hallucinations” have already resulted in courtroom embarrassment and sanctions in some jurisdictions.
Consider this scenario: a junior associate uses AI to build a research memo that looks pristine. But during review, the supervising attorney spots a few strained analogies, misquoted statutes, or a citation that, on closer inspection, is inapplicable. That’s precisely where competence and professional judgment must intervene.
In another example, a firm might automate the generation of initial client intake summaries. The AI produces a coherent write-up, but because it misunderstood a key detail, the summary is misleading. If the attorney relies on it without verification, the firm could inadvertently misadvise a client.
The key takeaway: AI accelerates human insight; it does not replace it.
What Firms Should Do First (Not a Checklist, but a Starting Playbook)
- Begin with a pilot in non-critical workstreams, such as internal drafting or document review, not client-facing deliverables.
- Conduct retrospective audits by taking a sample of AI-assisted work after the fact and comparing it to fully human work. Understand where failures or near-misses arise.
- Incorporate client engagement language (in retention letters or engagement agreements) that addresses AI use, including disclosure and, where appropriate, client consent.
- Regularly review and update insurance coverage, ensuring professional liability policies contemplate AI-related errors and omissions.
- Monitor developments in ethics opinions, case law, and bar rulings. The landscape is evolving quickly, and staying current is essential to avoid surprises.
Generative AI offers compelling improvements in efficiency, but legal professionals must approach it as a powerful tool under human command; it’s not a miracle solution. At CPM, we encourage any firm integrating AI to proceed with humility, sufficient oversight, and a clear policy rooted in accountability. We are watching the evolution closely, and we welcome nuanced conversations with clients and peers on responsible AI adoption in legal and regulatory settings.