Regulators Put AI in Their Crosshairs at the ABA National Institute on White Collar Crime
The American Bar Association’s National Institute on White Collar Crime has long focused on the regulation, prosecution, and defense of evolving permutations of fraud and allegedly fraudulent conduct. At the 39th annual event, held March 6-8 in San Francisco, a new focus was how various agencies intend to regulate artificial intelligence (“AI”) in the white collar arena. Some announcements were new to the conference, while others repeated agency guidance issued in recent months. Taken together, the announcements confirm that regulators are unified in their concern that, left unchecked, AI could amplify financial crimes, and in their determination to nip that threat in the bud.
First, regulators are dedicating considerable resources to understanding AI and the risks the technology poses. Attorney General Merrick Garland announced Jonathan Mayer as the Justice Department’s first Chief Science and Technology Advisor and Chief AI Officer. Mayer, an assistant professor in Princeton University’s Department of Computer Science, is tasked with getting the Department (and its enforcement efforts) up to speed on AI and other emerging technologies. Deputy Attorney General Lisa Monaco also spotlighted the Department’s “Justice AI” initiative, a series of roundtables with experts from law enforcement, academia, and the private sector. These panels are likely to inform the Department’s future enforcement efforts and add substance to the policy priorities outlined at this year’s conference.
Second, the Department is taking a “tough-on-AI” stance. In a sense, this is nothing new; as DAG Monaco quipped, “fraud using AI is still fraud.” But the Department’s efforts go further: DAG Monaco announced that she has instructed prosecutors to seek stiffer penalties when criminals use AI to enhance the scope and magnitude of white collar crimes. Notably, these enhanced penalties will apply to both companies and individuals.
This strict position on AI extends to corporate compliance programs. DAG Monaco explained that the Department expects companies to adopt corporate compliance programs that mitigate AI-related risk. These expectations will be reflected in future updates to the Criminal Division’s Evaluation of Corporate Compliance Programs, and companies should remain alert for these changes.
Third, regulators had much more to say about AI’s risks than its potential rewards. Although AG Garland spoke of the “great promise and the risk of great harm” that AI could bring about, regulators at the conference dwelled far more on potential peril than potential promise. DAG Monaco described AI as a “double-edged sword,” yet her speech, like others, focused almost exclusively on deterring AI-driven crime, with little said about how AI might be used to detect and disrupt white collar crime. This was a change of emphasis from remarks she made earlier this year, in which she stated that the Department wanted to understand “how to ensure we accelerate AI’s potential for good while guarding against its risks.”
And while AI-related warnings from DOJ officials were stern, they were also vague. Regulators are no doubt concerned that AI could supercharge wrongdoing, but they were light on the particulars of how they believe that might play out. Department officials spoke of AI in broad terms, without, for example, differentiating generative AI from predictive AI or identifying particular AI applications as sources of concern. It remains to be seen how enhanced penalties and compliance guidance will be brought to bear in future enforcement actions.
Fourth, the DOJ is not the only regulator with AI in its sights. SEC officials spoke in concrete terms about how companies could face liability for misleading investors when it comes to AI. SEC Enforcement Director Gurbir Grewal expressed his team’s interest in pursuing companies that mislead investors about the use of AI in their investment strategies, invoking the term “AI-washing.” In this, Director Grewal echoed remarks of SEC Chair Gary Gensler, who stated in February that companies may need to make particularized disclosures about how and where they use AI going forward.
Jason Lee, Associate Regional Director of the SEC’s Enforcement Division, elaborated on Director Grewal’s comments in a later panel, explaining that the SEC is wary that companies might seek to capitalize on the buzz around AI to mislead investors. This warning followed a January 25, 2024 Investor Alert issued by the SEC, FINRA, and the North American Securities Administrators Association (NASAA), which urged consumers to be skeptical of companies touting their AI capabilities. Mr. Lee further noted that companies could face liability when they fail to disclose AI-related risks, including both the risks a company assumes by using AI technology and the risk that emerging AI technologies could render its products obsolete.
To conclude, regulators are serious about regulating AI, but at this point any AI enforcement actions remain speculative. A persistent refrain from regulators was that companies should “knock on our door before we knock on yours.” This mantra no doubt applies to AI compliance as well, and regulators clearly want companies to proactively assess the risks associated with their use of AI. These efforts should include reducing the potential for misuse of AI by employees and guarding against biases and overpromises associated with AI solutions.
Information provided on InsightZS should not be considered legal advice and expressed views are those of the authors alone. Readers should seek specific legal guidance before acting in any particular circumstance.