Attorneys are already using AI. The question most firm leaders are asking is how to manage that use responsibly. The question they should also be asking is more specific: do those tools send client data outside the firm's network, and does that create an ethics problem?
For most consumer and enterprise cloud AI tools, the answer to the first question is yes. The inference happens on an external server. The prompt, the documents attached to it, and the context the attorney provides are all transmitted to a third-party system before a response comes back. That is how these tools work. It is not a flaw. It is the architecture.
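To make the mechanics concrete, here is a minimal sketch of that pattern, using the OpenAI Python SDK as one representative example. The model name and prompt are illustrative, and any hosted provider's SDK follows the same shape: everything placed in the prompt travels to the vendor's servers.

```python
# Illustrative only: a typical cloud AI call. Any client facts placed in
# the prompt are sent over the network to the vendor's servers for inference.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # hosted model: inference runs on the vendor's infrastructure
    messages=[{
        "role": "user",
        # Everything in this string (client name, case facts, strategy)
        # leaves the firm's network as part of the request.
        "content": "Draft a motion to dismiss for Client X based on these facts: ...",
    }],
)
print(response.choices[0].message.content)
```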
Whether that creates an ethics problem depends on the jurisdiction, the engagement agreement, and whether the firm has taken the steps ABA Formal Opinion 512 says a competent, ethical attorney should take before using AI with client information.
An associate drafts a motion using a cloud AI tool. The prompt includes the client’s name, the case facts, and opposing counsel’s arguments. That information is now on an external server, processed under terms of service the client never reviewed. Whether this creates a privilege or confidentiality problem requires a fact-specific analysis. But the analysis disappears entirely when the AI runs inside the firm. No transmission. No third party. No analysis required.
What ABA Formal Opinion 512 actually requires
ABA Formal Opinion 512, issued in 2024, addresses generative AI through ethical duties every attorney already knows: competence, confidentiality, supervision, and candor. The opinion does not prohibit AI use. It requires that lawyers address specific obligations before and during use.
Under Rule 1.1 (competence), lawyers must understand the tools they use well enough to deploy them responsibly. That means understanding what the tool does with the information it receives, not just whether its output is accurate. Under Rule 1.6 (confidentiality), lawyers must make reasonable efforts to prevent unauthorized disclosure of client information. The opinion is explicit that this applies to AI tool selection: if a tool sends client data to a third party, the lawyer must evaluate whether that transmission is consistent with the client’s reasonable expectations and any engagement terms.
Rules 5.1 and 5.3 require lawyers to supervise AI output the same way they supervise the work of associates and non-lawyer staff. An attorney cannot delegate judgment to an AI tool and disclaim responsibility for the result. Rule 3.3 (candor) covers what happens when AI-generated content goes to a tribunal: a lawyer who submits filings without verifying AI output has a candor problem, not just a quality problem.
Having an AI policy is not the same as having addressed these obligations. A policy that says “attorneys may use AI tools with caution” does not answer the Rule 1.6 question. The question Opinion 512 requires firms to answer is specific: for each AI tool in use, where does client data go, and is that consistent with the firm’s confidentiality obligations to its clients?
Why new matter intake carries the highest exposure
Conflict checking and new business intake involve some of the most sensitive information a firm handles: prospective client identities, adverse party names, matter descriptions, and the relationship context that makes a conflict analysis possible. Most of this information surfaces before any engagement agreement is signed.
That timing matters. When an attorney uses a cloud AI tool to help organize or analyze intake information, they are transmitting data about a prospective client outside the firm before any engagement terms have been accepted, before the client has agreed to any data handling practices, and before any informed consent framework applies. Under Rule 1.18, confidentiality obligations attach to prospective clients as soon as they consult the firm; a signed engagement agreement is not the trigger.
Firms that have addressed AI use for active matters sometimes overlook this specific exposure. The intake workflow is often where the first AI touch happens, and it is where the consent and disclosure framework is least developed.
The supervision problem cloud AI creates
Opinion 512’s treatment of supervision is practical and worth reading carefully. The obligation to review AI output applies not just to the factual accuracy of what the tool produces, but to the process. An attorney relying on AI-generated research must verify citations. An attorney relying on AI to draft client communications must review them as carefully as they would review a junior associate’s draft.
The tools most attorneys are reaching for were trained on general data. They were not trained on your firm’s prior work product, your clients’ matter history, or the specific interpretive positions your firm has taken on similar issues. When a cloud AI answers a legal question, it is drawing on publicly available information. When a private AI trained on your firm’s own knowledge base answers the same question, the answer reflects your firm’s institutional reasoning. Those are different outputs.
That gap in training data is not just a quality issue. It affects the supervision burden. An attorney reviewing output from a tool that knows nothing about the client’s history has to independently reconstruct that context. An attorney reviewing output from a tool grounded in the firm’s own prior work has a starting point that is already specific to the matter.
What a defensible AI posture looks like under these standards
A firm that can answer the following four questions has done the core work Opinion 512 requires. For each AI tool in use:

- Does it send client data outside the firm’s network?
- If yes, what does the vendor do with that data, and does your engagement agreement or client disclosure cover it?
- Has the firm evaluated whether that use is consistent with Rule 1.6 obligations in the jurisdictions where it practices?
- Are attorneys actually reviewing AI output before relying on it in client work?
The firms with the clearest answers tend to share one structural characteristic: the AI that touches client work runs inside the firm. The confidentiality analysis simplifies considerably when the answer to “where does client data go” is “nowhere — it stays inside our network, governed by our own access controls.”
That posture does not require eliminating AI. It requires being specific about which AI runs where. General-purpose cloud tools for legal research on public law are a different question from tools that touch client files, matter data, and privileged work product. The obligation under Opinion 512 is to know the difference and act accordingly.
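For contrast, here is the same call pattern under an inside-the-firm posture, assuming the firm serves a model behind an OpenAI-compatible API on its own infrastructure (inference servers such as vLLM expose this interface). The hostname and model name below are hypothetical:

```python
# Illustrative only: the same call pattern pointed at a model hosted inside
# the firm's network. Assumes an internally served, OpenAI-compatible
# endpoint (e.g., vLLM); the hostname and model name are hypothetical.
from openai import OpenAI

client = OpenAI(
    base_url="https://ai.internal.firm.example/v1",  # resolves only inside the firm's network
    api_key="internal-placeholder",  # governed by the firm's own access controls
)

response = client.chat.completions.create(
    model="firm-internal-model",  # inference runs on firm-controlled hardware
    messages=[{
        "role": "user",
        "content": "Draft a motion to dismiss for Client X based on these facts: ...",
    }],
)
# The prompt and response never leave the controlled environment.
print(response.choices[0].message.content)
```

The attorney-facing workflow is unchanged; what changes is where the prompt, and the client facts inside it, physically go.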
See How Cognetryx Keeps Client Data Inside Your Firm
Cognetryx deploys entirely inside your firm’s or legal department’s network. Client matter data, privileged work product, and intake information never leave your controlled environment. Your existing access controls govern who can query what.
Book a Free AI Strategy Assessment →