In many ways, 2023 was marked by dramatic transformations in the field of artificial intelligence (“AI”). The rise of generative AI models and stories of these models mimicking human conversation or passing state bar exams dominated news cycles and captivated the global imagination. Experts noted that advancements in generative AI are poised to disrupt industries such as education and legal practice. Among many notable commentators, U.S. Supreme Court Chief Justice John Roberts addressed the future of AI in the law in his annual report. Although “human judges will be around for a while,” the Chief Justice noted “with equal confidence” that “judicial work—particularly at the trial level—will be significantly affected by AI.”
Throughout 2023, National Association of Attorneys General (“NAAG”) programming explored these dramatic developments, culminating in AI taking center stage at the annual Capital Forum in December. Experts joined attorney general moderators for multiple AI-focused conversations throughout the event, exploring the topic through the lens of the unique jurisdiction of state attorneys general.
Organized by NAAG President and Ohio Attorney General Dave Yost, the first panel discussion examined the promises and perils of deploying generative AI and other AI tools within government. New Jersey Attorney General Matt Platkin and Iowa Attorney General Brenna Bird spoke with a panel of experts about harnessing the power of AI models to improve efficiency in the day-to-day operations of government. Panelists noted that many government agencies already use AI-supported platforms, such as chatbots or document-coding and taxonomy tools. For attorney general offices, current advancements in AI offer real breakthroughs in the arduous discovery process, employing predictive coding and scanning for attorney-client privileged information. Other models could streamline the contracting process or help spot illegal conduct at various steps of a given supply chain.
But the panelists cautioned that without tailored workflow and governance frameworks and appropriate staff training, the increased use of AI in government offices inevitably carries risks. These include perennial concerns over data privacy, cybersecurity, confidentiality, bias, and broader regulatory compliance. The panelists urged attorney general offices, and government agencies more broadly, to develop and integrate protocols and frameworks before procuring AI tools, and to take an iterative approach when deploying these tools within a given agency. Resources such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework offer helpful guidance for protecting organizations and end users against systemic risk.
Government use of AI segued into the next panel discussion on another broad role for state attorneys general: protecting the public in the age of artificial intelligence. Massachusetts Attorney General Andrea Campbell and New Hampshire Attorney General John Formella engaged panelists on a range of issues, including the tools in the proverbial attorney general toolbox for public protection work and how to navigate the crush of AI media coverage inundating the public at large. Panelists cautioned regulators and enforcers not to succumb to the extremes of the AI news cycle, but rather to question whose incentives these narratives serve. They reminded attendees that many regulatory and enforcement tools are already available to address potential harms. For instance, lawyers are bound by ethical obligations, such as the duties of confidentiality and supervision, that in most instances provide sufficient guardrails for appropriate use. State data privacy laws, as well as state Unfair or Deceptive Acts and Practices statutes and antitrust laws, should in many ways be viewed as AI statutes. The FTC’s novel remedy of algorithmic disgorgement and the potential of an AI licensing regime offer possibilities for enhanced protection. And companies such as IBM have instituted layered protocols and procedures to ensure scrutiny at various stages of the AI development and deployment process.
Several panelists noted that, unfortunately, opacity still reigns supreme within the industry as companies train AI models on unknown data sets. Just as an ecosystem of innovation pervades the industry, an ecosystem of public knowledge and accountability should be the norm when things go wrong. Several of the panelists urged state attorneys general, and government at large, to compel industry to institute greater levels of public transparency, whether through their regulatory and enforcement authorities or through the bully pulpit.
The Capital Forum AI discussions concluded with South Carolina Attorney General Alan Wilson and New Mexico Attorney General Raúl Torrez discussing NAAG’s bipartisan advocacy letter urging Congress to take immediate action against the use of AI to create child sexual abuse material (CSAM). Signed by attorneys general from 54 states and territories, the letter highlights current areas of concern posed by advancements in AI and calls on Congress to convene an expert commission to study the topic and offer legislative recommendations that would eventually equip prosecutors with the legal tools to combat the problem.
NAAG’s commitment to exploring AI topics of interest to state attorneys general offices will continue and deepen in 2024. NAAG will keep you updated on:
- AI-related cybercrime issues,
- use cases and usage guidelines for attorney general offices and state government more broadly, and
- webinar and in-person training opportunities to familiarize attorney general staff at all levels with how AI intersects with their daily practice.