
ABA Formal Opinion 512: A Practitioner's Guide

ABA Standing Committee on Ethics and Professional Responsibility, Formal Opinion 512 (July 29, 2024). Primary source PDF.

This page is a working reference for licensed attorneys. It is not legal advice. Verify each passage against the primary-source PDF before relying on it in a matter. State rules and state bar opinions may impose additional requirements.

What Opinion 512 is, and why it matters

Formal Opinion 512 is the ABA Standing Committee on Ethics and Professional Responsibility's first formal opinion addressing generative artificial intelligence. It was issued July 29, 2024. In a single 15-page document, it applies six Model Rules to lawyer use of GAI tools and frames the obligations as follows:

"To ensure clients are protected, lawyers using generative artificial intelligence tools must fully consider their applicable ethical obligations, including their duties to provide competent legal representation, to protect client information, to communicate with clients, to supervise their employees and agents, to advance only meritorious claims and contentions, to ensure candor toward the tribunal, and to charge reasonable fees."
Opinion 512, syllabus (p. 1).

The opinion is advisory, not binding. The ABA Model Rules are not the rules of professional conduct in any jurisdiction until a state supreme court adopts them. But Opinion 512 has become the de facto national baseline for two reasons. First, 49 of 50 states (California is the outlier) have adopted the Model Rules' core structure, so the Opinion's rule-by-rule framing maps cleanly onto most state RPCs. Second, state bar AI opinions issued since July 2024 routinely cite and track Opinion 512's analysis, with state-specific overlays rather than wholesale departures.

The practical consequence: if a firm cannot document that its AI practices address each of the six rules below, a malpractice carrier, a disciplinary investigator, or a sanctioning court has a ready-made framework for identifying gaps. The sections that follow translate each rule into the obligation and the documentation that firms should keep.

Rule 1.1: Competence

Model Rule 1.1 requires competent representation, and Comment [8] extends that duty to "the benefits and risks associated with relevant technology." Opinion 512 applies this to GAI by stating the standard plainly:

"To competently use a GAI tool in a client representation, lawyers need not become GAI experts. Rather, lawyers must have a reasonable understanding of the capabilities and limitations of the specific GAI technology that the lawyer might use."
Opinion 512, at 2–3.

The duty is ongoing. A one-time training in 2024 does not satisfy Rule 1.1 two years later if the tool, the model, or the firm's use of it has materially changed. The Opinion is explicit on this point:

"This is not a static undertaking. Given the fast-paced evolution of GAI tools, technological competence presupposes that lawyers remain vigilant about the tools' benefits and risks."
Opinion 512, at 3.

The Opinion recognizes several acceptable ways to maintain competence: reading about GAI tools aimed at the legal profession, attending CLE, and consulting colleagues or external experts who are proficient with the specific tool in use. The practical documentation a firm should keep is narrow: records showing that attorneys and staff have been trained on each approved tool, and that the training has been refreshed when the tool materially changes.

Rule 1.6: Confidentiality

Rule 1.6 prohibits a lawyer from revealing information relating to the representation of a client without informed consent, and Rule 1.6(c) requires reasonable efforts to prevent inadvertent or unauthorized disclosure. Opinion 512 applies these duties to GAI input with particular force in the context of self-learning tools, where prompts and client information may be retained and later surface in another user's session:

"[B]ecause many of today's self-learning GAI tools are designed so that their output could lead directly or indirectly to the disclosure of information relating to the representation of a client, a client's informed consent is required prior to inputting information relating to the representation into such a GAI tool."
Opinion 512, at 7.

Informed consent in this context requires more than the language most engagement letters already contain. The Opinion is direct about this:

"To obtain informed consent when using a GAI tool, merely adding general, boiler-plate provisions to engagement letters purporting to authorize the lawyer to use GAI is not sufficient."
Opinion 512, at 7.

To satisfy Rule 1.6, the Opinion describes a baseline review duty: every lawyer using a GAI tool should read and understand the Terms of Use, privacy policy, and related contractual terms for that tool, or delegate the review to a colleague or external expert who has. The specific questions to resolve before inputting client information: who has access to inputs and outputs; whether and how the provider retains data; whether the tool trains on submitted content; and how the tool behaves if access is revoked. Firms should document the vendor review for each approved tool, and keep a record of which matters, if any, required tool-specific informed consent.

Rule 1.4: Communication with clients

Rule 1.4 governs when a lawyer must communicate with a client about the means of representation. Opinion 512 does not impose a blanket duty to disclose GAI use on every matter. The facts control. But it identifies three categories where disclosure is required:

"[L]awyers must disclose their GAI practices if asked by a client how they conducted their work, or whether GAI technologies were employed in doing so, or if the client expressly requires disclosure under the terms of the engagement agreement or the client's outside counsel guidelines."
Opinion 512, at 8.

Even when Rule 1.4 does not require disclosure, the Opinion notes that a firm may still choose to describe its GAI practices in the engagement agreement as a matter of effective client communication. That is the most common way firms are operationalizing this rule: a short, plain-language paragraph in the engagement letter that explains the firm's AI use, names any categories of tools in use, and invites the client to request additional detail.

Rule 1.5: Fees and billing

Rule 1.5 requires reasonable fees and reasonable expenses. Opinion 512 applies this to GAI in two ways that have direct operational consequence.

First, efficiency gains from GAI cannot be billed at the pre-GAI hourly rate. The Opinion states the rule and illustrates it with a 15-minute drafting example:

"GAI tools may provide lawyers with a faster and more efficient way to render legal services to their clients, but lawyers who bill clients an hourly rate for time spent on a matter must bill for their actual time."
Opinion 512, at 12.

The same principle extends to flat and contingent fees. If a flat fee was set against pre-GAI assumptions about time required, the Opinion signals that the fee may no longer be reasonable if GAI materially compresses the work.
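The hourly-billing principle reduces to simple arithmetic, sketched below with hypothetical numbers (the $400 rate and the durations are invented for illustration; they are not drawn from the Opinion): a lawyer whose GAI-assisted draft actually takes 15 minutes bills 0.25 hours, not the hours the task would have taken without the tool.

```python
# Hypothetical illustration of Opinion 512's hourly-billing principle:
# bill the actual time spent, not the time the task would have taken
# before GAI. Rate and durations below are invented for illustration.

def hourly_fee(rate_per_hour: float, actual_hours: float) -> float:
    """Fee under hourly billing: rate times actual time only."""
    return rate_per_hour * actual_hours

RATE = 400.0            # hypothetical hourly rate
ACTUAL = 0.25           # 15 minutes actually spent with the GAI tool
PRE_GAI_ESTIMATE = 3.0  # hours the draft might have taken without GAI

proper_bill = hourly_fee(RATE, ACTUAL)              # permitted: $100.00
improper_bill = hourly_fee(RATE, PRE_GAI_ESTIMATE)  # not permitted: $1,200.00

print(f"Bill ${proper_bill:.2f}, not ${improper_bill:.2f}")
```

The gap between the two figures is the efficiency gain that, under the Opinion, belongs to the client rather than the lawyer.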

Second, time spent learning a GAI tool that the firm will use regularly cannot be billed to the client:

"[A] lawyer may not charge a client to learn about how to use a GAI tool or service that the lawyer will regularly use for clients because lawyers must maintain competence in the tools they use, including but not limited to GAI technology."
Opinion 512, at 14.

The Opinion also distinguishes overhead-type tool costs (for example, a grammar-check feature bundled into word processing) from client-specific tool expenses (for example, a per-matter charge from a contract-review vendor). Firms should have a billing policy that maps each approved tool to one of those two categories and discloses the billing treatment to clients in advance.

Rules 5.1 and 5.3: Supervisory duties

Rules 5.1 and 5.3 assign responsibility to managerial and supervisory lawyers for the conduct of other lawyers and nonlawyer assistants at the firm. Opinion 512 treats GAI tools as falling within this framework in two ways. Internally, managing partners must establish firm-wide policies; supervisory lawyers must train and oversee staff use. The Opinion's formulation:

"Managerial lawyers must establish clear policies regarding the law firm's permissible use of GAI, and supervisory lawyers must make reasonable efforts to ensure that the firm's lawyers and nonlawyers comply with their professional obligations when using GAI tools."
Opinion 512, at 10.

Externally, the Opinion applies Rule 5.3(b)'s duty of reasonable efforts to the GAI tool's provider. The Opinion carries forward the diligence framework from the ABA's prior cloud-computing and outsourcing opinions and applies it to GAI vendors: reference checks and credentials, review of security policies and protocols, confidentiality agreements, conflicts screening where applicable, and attention to whether the provider retains or asserts proprietary rights to submitted information.

The documentation that follows from Rules 5.1 and 5.3 is concrete: a written firm AI policy (see the policy template), training completion records for each attorney and staff member, a vendor due diligence file for each approved tool, and a supervision protocol for AI-assisted work product that goes out the door.

Rule 3.3: Candor toward the tribunal

Rule 3.3 prohibits false statements of law or fact to a tribunal and requires remedial action if a lawyer comes to know that material evidence was false. Rule 8.4(c) prohibits conduct involving dishonesty, fraud, deceit, or misrepresentation. Opinion 512 applies both to GAI-assisted filings and is deliberately direct about the risk:

"Even an unintentional misstatement to a court can involve a misrepresentation under Rule 8.4(c). Therefore, output from a GAI tool must be carefully reviewed to ensure that the assertions made to the court are not false."
Opinion 512, at 10.

The scope of the pre-filing review duty is broader than just citation checking:

"In judicial proceedings, duties to the tribunal likewise require lawyers, before submitting materials to a court, to review these outputs, including analysis and citations to authority, and to correct errors, including misstatements of law and fact, a failure to include controlling legal authority, and misleading arguments."
Opinion 512, at 10.

This is where the rule meets the emerging body of sanctions cases. Courts applying Rule 3.3 (or its state analogs) to AI-generated hallucinations have imposed sanctions ranging from modest fines and mandatory CLE to fee awards exceeding $1.5 million and referrals to state bar discipline.

Opinion 512 compliance checklist

Below is a 12-item checklist, each item mapped to the rule it documents. A firm that can answer yes to every item has the documentation a malpractice carrier or disciplinary investigator is likely to request.

  1. Written AI policy in force (Rules 5.1, 5.3). The firm has a current policy, dated within the last 12 months, naming approved tools, prohibited uses, and supervisory responsibility.
  2. Approved-tools list, reviewed (Rules 1.1, 1.6). Each tool on the list has a completed vendor review covering Terms of Use, privacy policy, data retention, and training-on-input behavior.
  3. Prohibited-tools list (Rule 1.6). The policy identifies consumer tools (including personal ChatGPT, Gemini, Claude, and similar) that are not approved for firm work, and explains why.
  4. Tool-specific informed consent language (Rule 1.6). For any self-learning tool into which client information will be input, the firm has client-facing consent language that meets the Opinion's "not boiler-plate" standard.
  5. Engagement letter disclosure (Rule 1.4). The firm's default engagement letter describes its AI use in plain language, or documents the decision not to disclose and the reasoning.
  6. Training records (Rules 1.1, 5.1, 5.3). Every attorney and staff member using an approved tool has completed tool-specific training, logged by date.
  7. CLE tracking for technology competence (Rule 1.1). The firm tracks ongoing CLE or equivalent training so that technological competence is maintained as tools change.
  8. Pre-filing verification protocol (Rule 3.3). Any filing that used AI research assistance is logged, with the citation-verification step documented by the attorney who reviewed it.
  9. Billing policy for AI time (Rule 1.5). The firm has written guidance on billing AI-assisted work: hourly rules, flat-fee treatment, overhead vs. pass-through expense classification for each approved tool.
  10. Supervision protocol for AI-assisted work product (Rules 5.1, 5.3). Any work product generated with AI assistance is reviewed by a supervising lawyer before it leaves the firm, with the review logged.
  11. Incident response procedure (Rule 1.6). The firm has a written procedure for responding to an AI-related confidentiality incident, including client notification and bar reporting where applicable.
  12. Annual policy review (Rules 1.1, 5.1). The firm reviews and re-dates the AI policy at least annually, and on any material tool change.

State cross-reference

Opinion 512 is the national baseline. Several states have issued their own formal opinions or practical guidance that add to, clarify, or in some cases diverge from the ABA's framing. Firms should read Opinion 512 and their state's guidance together, not one in place of the other.

For the current state tracker, with primary-source citations for each state's guidance, see the state tracker.

Primary source

ABA Standing Committee on Ethics and Professional Responsibility, Formal Opinion 512, Generative Artificial Intelligence Tools (July 29, 2024). Full text: americanbar.org (PDF).

Last verified against primary source: 2026-04-24.