
Wednesday, January 22, 2025

Use of AI, Including Large Language Models (LLMs), in Tax Court Brief Writing (And Really Other Legal Analysis) (1/22/25; 4/29/25)

 AI (artificial intelligence) is ubiquitous now; or at least the discussion of AI is ubiquitous. See generally Artificial intelligence. (2025, January 22), Wikipedia, here. I asked ChatGPT about the use of AI by lawyers and received the response linked here. I write today about some recent instances, called to my attention, of misuse of AI in briefing in Tax Court cases, but I understand that similar misuse has been identified in briefing in other courts.

Use of AI in legal briefing has received considerable attention, ranging from general discussion of its strengths and weaknesses to specific instances in which lawyers have been called out for submitting AI output that failed. E.g., Is AI a Good Tool for Legal Brief Writing? (Spellbook 10/22/24), here (general discussion, but noting in part for today’s blog that “AI tools can sometimes "hallucinate" information and generate fake citations that human lawyers must carefully check.”); What Are the Best AI Tools for Writing Legal Briefs? (Bloomberg Law 6/10/24), here (noting that AI in large language models (“LLM”) can produce “false information” via what are called “hallucinations;” and that, as a result, “21 federal trial judges have issued standing orders regarding AI, and attorneys are often required to disclose all uses of AI.”). Suffice it to say that my understanding is that AI-generated content must be carefully checked and appropriate revisions made before submitting that content in a brief to the court. (This is confirmed by my limited use of AI as discussed at the end of this blog.)

The Tax Court has no formal rule addressing the use of AI. However, a reader recently advised me of two Tax Court Orders by Judge Buch addressing the issue. Thomas v. Commissioner (T.C. Dkt. No. 10795-22 at #36, Order dated 10/23/24), here; and Westlake Housing, L.P. v. Commissioner (T.C. Dkt. No. 478-24L at #32, Order dated 1/13/25), here. (I have posted both orders to my Google Docs to permit a permalink that readers can directly access without having to go through the DAWSON docket sheet, which does not offer a permalink for direct access to the orders.)

Thomas is a short order (5 pages); Westlake is even shorter (2 pages). I discuss Thomas in some detail. The Court (Judge Buch) sets the issue up in its opening paragraph:

          This case was tried on September 17, 2024, in Atlanta, Georgia. In preparing for trial, the Court noticed that some of the authorities cited in petitioner’s Pretrial Memorandum did not exist, evidencing possible AI hallucinations. To inquire into these authorities, the Court held a hearing to provide petitioner’s counsel an opportunity to clarify the Pretrial Memorandum. During that hearing, petitioner’s counsel explained that someone else had prepared the Pretrial Memorandum, and she did not review the work that was provided to her. Rule 33 instructs that, in signing a pleading, counsel is certifying that he or she has read the pleading, that it is well grounded in fact; and that it is warranted by existing law. Because the Pretrial Memorandum violates this standard, we will deem it to be stricken. We will also take this occasion to address the use of AI as a tool to assist petitioners and practitioners. As discussed below, however, striking the Pretrial Memorandum will not affect the ultimate outcome in this case.

After nicely summarizing the role of the Pretrial Memorandum (pp. 1 & 2), the Court noted: