Generating . . . Client Confidentiality Concerns in the Use of Generative AI Technology

Since the emergence of low- to no-cost public access to more sophisticated uses of artificial intelligence (“AI”), there has been lively discussion regarding how lawyers may ethically use various artificial intelligence software and services. The discussion has primarily focused on the duties of competence, communication, and confidentiality, along with the detection of conflicts of interest and the duty of supervision.1 “The potential applications for AI in the legal world are immense and include composing client briefs, producing complex analyses from troves of documents, and helping firms with limited resources compete with the largest groups.”2 At, perhaps, its best, AI can provide access to justice tools to help narrow the justice gap.3 No matter the manner of AI use, upholding ethical duties in this new technological frontier will require proficiency in data privacy standards and programming. It also demands a reasonable understanding of ever-changing professional norms surrounding the use of technology and the lawyer’s duty to safeguard confidential information.

Arguably, the principle of maintaining confidentiality between the client and lawyer is one of the most important duties that lawyers possess—without it, the client-lawyer relationship cannot be fully functional because clients may be reluctant to share information necessary for representation out of fear that their confidences will not be protected.4 Therefore, it is very important to examine how the use of AI implicates the duty of confidentiality. Specifically, upholding the duty while using certain generative AI platforms may require an examination of the duty to protect client confidences from inadvertent or unauthorized disclosure.5

The duty to protect client confidences under the Model Rules of Professional Conduct (“MRPC”) is twofold. First, MRPC 1.6(a) commands that: “A lawyer shall not reveal information relating to the representation of a client unless the client gives informed consent, the disclosure is impliedly authorized in order to carry out the representation or the disclosure is permitted by [an exception in] paragraph (b).” Comment 3 to Rule 1.6 goes on to explain that “the principle of client-lawyer confidentiality is given effect by related bodies of law: the attorney-client privilege, the work product doctrine, and the rule of confidentiality established in professional ethics.”6 Importantly, the attorney-client privilege and work product doctrine apply in formal judicial and administrative proceedings where a lawyer may be required to provide evidence or testimony concerning communications with a client or notes related to representation. The duty of confidentiality under MRPC 1.6, however, applies in all other situations. The Restatement (Third) of the Law Governing Lawyers § 59 defines confidential client information as “all information relating to the representation of a client, whether in oral, documentary, electronic, photographic, or other forms.”7 Additionally, “[i]t includes work product that the lawyer develops in representing the client, such as the lawyer’s notes. . . .”8 Thus, the scope of confidential information protected by the duty of confidentiality is broad and includes almost all information concerning the representation of a client.

The second component of the duty to protect client confidences is mandated in MRPC 1.6(c). It compels lawyers to “make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.”9 Essentially, this paragraph imposes an objective, reasonableness-based duty of competence in securing and safeguarding confidential client information.10 More specifically, lawyers are required to employ methods that protect electronic documents from “unauthorized access by third parties,” which may occur in the case of a data breach or other cyber-attack.11 If a lawyer, firm, legal aid organization, or government office experiences unauthorized access to or disclosure of confidential client information, this provision is not violated if the lawyer “made reasonable efforts to prevent the access or disclosure.”12 Factors that can be considered when determining whether reasonable efforts have been made include: “the sensitivity of the information, the likelihood of disclosure if additional safeguards are not employed, the cost of employing additional safeguards, the difficulty of implementing the safeguards, and the extent to which the safeguards adversely affect the lawyer’s ability to represent clients (e.g., by making a device or important piece of software excessively difficult to use).”13 Moreover, in the transmission of confidential client information, a lawyer is required to take “reasonable precautions to prevent the information from coming into the hands of unintended recipients.”14

The use of AI software and services directly implicates the lawyer’s duty to protect client confidences under MRPC 1.6(c). The risk of unauthorized access to confidential client information varies based on the artificial intelligence functions used. Among the most popular applications of artificial intelligence are widely accessible generative AI programs, such as ChatGPT, Scribe, GPT-4, GitHub Copilot, and Claude. These platforms typically employ machine learning processes to create a variety of content, including audio, code, images, text, and videos, based on the input and requests of the user.15 Their use truly has the potential to transform the way we request and deliver legal services.

Although generative AI has the potential to revolutionize the practice of law, it poses some risks to protecting client confidentiality due to the nature of the programming employed. Relevant to lawyers, some generative AI platforms use what are known as large language models (LLMs), “a specialized class of AI model that uses natural language processing (NLP) to understand and generate humanlike text-based content in response.”16 “These large models achieve contextual understanding and remember things because memory units are incorporated in their architectures. They store and retrieve relevant information and can then produce coherent and contextually accurate responses.”17 Essentially, generative AI platforms allow a user to input written prompts and instructions in order to generate highly specific and individualized content. One can imagine a situation where a lawyer inputs transcripts and interview notes and requests an output of discovery requests or other litigation documents.

The use of this generative AI technology by lawyers poses a confidentiality risk under MRPC 1.6(c). First, because LLMs rely on natural language processing, AI platforms collect and regularly use massive amounts of data to improve their processes and train their algorithms. When inputting information into generative AI platforms, lawyers risk providing confidential information to unauthorized third parties, such as the platform’s employees and developers. For example, “ChatGPT chat history is accessible and reviewable by ChatGPT employees. . . .”18 Legal commentators have opined that the use of these programs may “effectively waive attorney-client privilege.”19 It is important, however, to distinguish between the attorney-client privilege and the duty of confidentiality, and the implications of a lawyer’s failure to meet the duties under each principle.

The attorney-client privilege is an evidentiary privilege that shields clients and lawyers from compelled disclosure of confidential communications. It is narrower in scope than the duty of confidentiality, and clients have the ultimate authority to determine whether to assert or waive privilege.20 Ultimately, an inquiry regarding whether a lawyer waives a client’s privilege through the use of generative AI will likely be complex and fact-specific and turn on whether the lawyer’s use of generative AI is considered an authorized, unauthorized, or inadvertent disclosure of privileged communications.21 That question will likely be resolved in an evidentiary dispute in the course of litigation, and its resolution does not answer whether the duty of confidentiality under MRPC 1.6(a) and (c) is being upheld.

Thus, the principal privacy risk when using generative AI tools likely lies with the duty to protect client confidences from unauthorized disclosure.22 Not only do generative AI tools store and utilize information input by users, but the platforms have also been vulnerable to data breaches and data leaks.23 For instance, in 2023, ChatGPT experienced a data breach where users were able to see the chat history of other users.24 Indeed, some bad actors are using generative AI to elevate and improve their cyber-attacks.25 In another instance, sensitive personal information of users was accessible to other users.26 While these intrusions could be considered minor, some have noted that “the incidence could be a harbinger of the risks that could impact chatbots and users in the future.”27 Unsurprisingly, when asked whether a user’s input data is confidential, ChatGPT’s chatbot responded with: “OpenAI may collect and use user data for research and privacy purposes, as described in its privacy policy.” It followed with: “To ensure the confidentiality of your data, it is important to follow best practices, such as not sharing sensitive personal information or confidential data when using AI models like me.”

If confidential client information were to be accessed during one of these sophisticated intrusions or failures of a generative AI platform’s security, it would certainly move the boundary line of what is considered “reasonable efforts to prevent the access or disclosure.”28 Such a breach has the potential to embarrass both the lawyer and the client and, if the disclosure is large enough, could result in lawsuits, financial consequences, and professional discipline.29 It can also trigger an attorney’s duty to report a data breach, even if it involves the breach of a third-party vendor.30 Therefore, when using generative AI programs, lawyers have a heightened duty to inquire into privacy standards and policies and ascertain the methods by which input and output data are protected. Due to the risk of inadvertent disclosure of client confidences, lawyers should be very selective when utilizing generative AI platforms, and developers should be familiar with the needs of those providing legal services when developing and marketing their programming. Importantly, those who are tempted to input confidential client information to streamline tasks should ensure the platform’s compliance with MRPC 1.6(c) obligations and standards.

1See Delchin et al., Legal Ethics in the Use of Artificial Intelligence, Squire Patton Boggs (Feb. 2019), https://perma.cc/HKU4-J7FH; Linna Jr. et al., Ethical Obligation to Protect Client Data when Building Artificial Intelligence Tools: Wigmore Meets AI, Am. Bar Ass’n (Oct. 2, 2020), https://www.americanbar.org/groups/professional_responsibility/publications/professional_lawyer/27/1/ethical-obligations-protect-client-data-when-building-artificial-intelligence-tools-wigmore-meets-ai/#ref33.

2Suzanne McGee, Generative AI and the Law, LexisNexis, https://www.lexisnexis.com/html/lexisnexis-generative-ai-story/ (last visited Mar. 21, 2024).

3See Drew Simshaw, Access to A.I. Justice: Avoiding an Inequitable Two-Tiered System of Legal Services, 24 Yale J.L. & Tech. 150, 158-70 (2022).

4See 2 Legal Malpractice § 187 (2024 ed.) (“Concern about misusing a former client’s confidences can affect a lawyer’s zeal and competence in representing a . . . client.”).

5See Model Rules of Professional Conduct R. 1.6(c) [hereinafter MRPC].

6MRPC 1.6, Comment 3.

7Restatement (Third) of the Law Governing Lawyers § 59(b).

8Id.

9MRPC 1.6(c).

10See MRPC 1.1 (“A lawyer shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.”).

11MRPC 1.6, Comment 18; see ABA Formal Op. 483 (“Data breaches and cyber threats involving or targeting lawyers and law firms are a major professional responsibility and liability threat facing the legal profession.”).

12MRPC 1.6, Comment 18.

13Id.

14MRPC 1.6, Comment 19. Historically, these intrusions have taken the form of spam, malware, and, more recently, ransomware and other cyber-attacks that target individuals and entities.

15See What is Generative AI?, McKinsey and Company (Jan. 19, 2023), https://perma.cc/C7D3-4XQR.

16Catherine Dee, Large Language Models (LLMs) Vs Generative AI: What is the Difference?, Algolia (Nov. 9, 2023), https://perma.cc/5L5E-GFMQ.

17Id.

18Seth M. Pavsner, The Attorney’s Ethical Obligations When Using AI, Cuddy+Feder LLP Blog (July 28, 2023), https://perma.cc/LNN6-WNK8.

19Id.; See Isabel Gottlieb, Generative AI Use Poses Threats to Attorney-Client Privilege, Bloomberg L. (Jan. 23, 2024), https://perma.cc/DAG2-L3JN (“Public-facing generative AI models, like ChatGPT’s free version, 3.5, pose a tangible threat to confidential information: The models could repeat information in one user’s query to the next user asking about something similar.”).

20See Restatement (Third) of the Law Governing Lawyers § 68(c); Restatement (Third) of the Law Governing Lawyers § 78(b).

21See Restatement (Third) of the Law Governing Lawyers § 79(b)-(h). See also Federal Rule of Evidence 502. See generally Jared S. Sunshine, Failing to Keep the Cat in the Bag: A Decennial Assessment of Federal Rule of Evidence 502’s Impact on Forfeiture of Legal Privilege Under Customary Waiver Doctrine, 68 Cleveland St. L. Rev. 637 (2020); Tory L. Lucas, Rethinking Lawyer Ethics to Allow the Rules of Evidence, Rules of Civil Procedure, and Private Agreements to Control Ethical Obligations Involving Inadvertent Disclosures, 63 St. Louis U. L.J. 235 (2019).

22See MRPC 1.6(c).

23See David Barry, Microsoft’s AI Data Leak isn’t the Last One We’ll See, Reworked (Sept. 29, 2023), https://perma.cc/JY87-ADH5; Mark Gurman, Samsung Bans Staff’s AI Use After Spotting ChatGPT Data Leak, Bloomberg, https://perma.cc/V5N5-DDYK (last updated May 2, 2024, 1:54 AM EDT).

24Sue Poremba, ChatGPT Confirms Data Breach, Raising Security Concerns, Security Intelligence (May 2, 2023), https://perma.cc/3GSH-77U7.

25See Rachel Curry, The Hacking Underworld has Removed All of AI’s Guardrails, But the Good Guys are Closing in, CNBC, https://perma.cc/J2FM-A3WG (last updated Mar. 11, 2024).

26Id.

27Id.

28See Sean Harrington, Cyber Insurance, 72 Bench & B. Minn. 16, 18-19 (2015); MRPC 1.6(c); MRPC 1.6, Comment 18; ABA Formal Op. 483 (“[A]n attorney’s competence in preserving a client’s confidentiality is not a strict liability standard and does not require the lawyer to be invulnerable or impenetrable.”); ABA Cybersecurity Handbook 73 (“Although security is relative, a legal standard for ‘reasonable’ security is emerging. That standard rejects requirements for specific security measures [such as firewalls, passwords, or the like] and instead adopts a fact-specific approach to business security obligations that requires a ‘process’ to assess risks, identify and implement appropriate security measures responsive to those risks, verify that the measures are effectively implemented, and ensure that they are continually updated in response to new developments.”).

29See Harrington, supra note 28, at 18-19; see generally R. Andrew Grindstaff, Article III Standing, the Sword and the Shield: Resolving A Circuit Split in Favor of Data Breach Plaintiffs, 29 Wm. & Mary Bill Rights J. 851 (2021); A. Michael Froomkin, Government Data Breaches, 24 Berkeley Tech. L.J. 1019, 1051-54 (2009).

30See Mass. Gen. L. c. 93H, § 3 (requiring written notice of known data breaches to the attorney general, office of consumer affairs, and the affected persons).

Chanal McCain

Professor McCain is a faculty fellow at New England Law | Boston.
