The legal landscape surrounding artificial intelligence and evidentiary standards has evolved quickly, and conceptual proposals have now entered the formal rulemaking process. In particular, the federal Advisory Committee on Evidence Rules has advanced Proposed Rule 707 for public comment.1 The proposal is designed to address the admissibility of “Machine-Generated Evidence,” including outputs from AI systems used to analyze data and offer predictions. The rule would require that such evidence meet the same reliability standards that apply to expert witnesses under Rule 702. This means that courts would have to assess whether the AI-generated material is based on sufficient facts or data, is the product of reliable principles and methods, and reflects a reliable application of those principles and methods in the case at hand.
At the same time, other AI-focused amendments are seeing mixed progress. A separate proposal aimed at addressing the authentication of potential “deep-fake” audio and video was ultimately shelved; the Advisory Committee determined that existing rules give judges enough flexibility to scrutinize such evidence on a case-by-case basis. Meanwhile, a proposed amendment that would have required disclosure of AI model information (akin to explaining how the proverbial AI sausage is made) remains under internal discussion and has not been published for public input.
While the federal rulemaking process moves forward, states are beginning to take their own steps. For instance, California is considering legislation that would require its courts to develop AI-specific evidentiary procedures. Meanwhile, Massachusetts continues to rely on its existing framework, both to test the reliability of novel technologies and to apply traditional authentication rules found in the Massachusetts Guide to Evidence. Courts in Massachusetts and elsewhere have also enforced professional responsibility obligations in this area, sanctioning numerous lawyers for relying on AI-generated, hallucinated case citations.
In light of these developments, lawyers and non-lawyers alike should begin planning for how AI-generated material will be handled in litigation and business. Lawyers must be vigilant when receiving information and materials from the other side to ensure that all potential evidence is real and reliable. Even in the absence of formal rule updates, courts are already demanding transparency, reliability, and human oversight when AI-generated content is submitted in court. Businesses should implement internal processes for reviewing contracts and other records, especially those shared with other organizations, to confirm that AI interventions do not introduce errors or lead to unintended consequences. Lawyers and non-lawyers must also be mindful, for privacy reasons, of the types of information and documents they upload to popular large language models, such as ChatGPT.2
In this rapidly changing environment, the message is clear: as artificial intelligence becomes more integrated into daily workflows and legal strategy, the rules may evolve, but the responsibilities of human judgment and ethical compliance remain unchanged.
*****
This alert is for informational purposes only and may be considered advertising. It does not constitute the rendering of legal, tax, or professional advice or services. You should seek specific detailed legal advice prior to taking any definitive actions.
1 The proposed rule reads as follows: “When machine-generated evidence is offered without an expert witness and would be subject to Rule 702 if testified to by a witness, the court may admit the evidence only if it satisfies the requirements of Rule 702 (a)-(d). This rule does not apply to the output of basic scientific instruments.”
2 A federal judge recently ordered OpenAI – the company behind ChatGPT – “to preserve and segregate all output log data that would otherwise be deleted on a going forward basis.” https://docs.justia.com/cases/federal/district-courts/new-york/nysdce/1%3A2023cv11195/612697/551?utm_source=chatgpt.com