Legal experts step up to defend against wave of AI lawsuits

When Microsoft-backed OpenAI unleashed its ChatGPT chatbot in November 2022, it sparked a frenzy over the potential of generative artificial intelligence.
But fault lines have quickly emerged over how it is being developed, deployed, and regulated — spurring a wave of litigation and lobbying efforts that have pushed legal expertise to the fore.
Over the past year, dozens of writers, musicians, visual artists, and software code developers have filed copyright infringement claims and other commercial disputes in multiple courts against OpenAI and its rival start-ups.
Comedian Sarah Silverman and novelist John Grisham allege that OpenAI failed to secure writers’ approval to use copyrighted works to train its language models.
Programmers also allege that Microsoft, its GitHub subsidiary, and OpenAI pooled resources to launch the Codex model and Copilot coding assistant but did not program the tools “to treat attribution, copyright notices, and licence terms as legally essential”.
Universal Music, the world’s largest music group, has sued OpenAI’s rival Anthropic, alleging its AI-based platform Claude generates “nearly word-for-word” copyrighted lyrics.
Meanwhile, visual artists have targeted AI ventures Stability AI, Midjourney and DeviantArt for copyright infringement, claiming their platforms were trained on the styles of the plaintiffs’ works without first seeking permissions or offering credit or compensation.
In the US, some AI developers defending against copyright claims have relied in part on the “fair use doctrine”, which was deployed by Google in 2015 to defeat a claim by the Authors Guild, the writers’ organisation, that its online book-search function violated writers’ copyrights.
AI developers are hoping that copyright infringement concerns do not discourage businesses from buying their services. In September, Microsoft committed itself to paying any legal costs for commercial customers that are sued for using tools or any output generated by its AI.
Such pledges reflect “smart business” strategies, says Danny Tobey, who leads DLA Piper’s AI-focused practice group and represents developers including OpenAI before regulators, lawmakers and the courts.
Developers are expressing “the type of confidence you need to project for a new technology to get adopted”, he adds.
In court, Tobey has already represented OpenAI in defamation battles. He is defending the chatbot operator against a lawsuit filed by radio host Mark Walters, who alleges ChatGPT erroneously accused him of embezzling from a gun rights group after the platform generated “a complete fabrication” of a lawsuit, according to what his lawyer, John Monroe of Dawsonville, Georgia, told the court.
In another defamation lawsuit, aerospace author Jeffrey Battle has targeted Microsoft’s AI-assisted Bing, alleging it falsely conflated him with a convicted felon.
That lawsuit “aims to treat Bing as a publisher or speaker of information provided by itself,” according to a blog post by Eugene Volokh, a law professor at the University of California, Los Angeles.
Copyright infringement and defamation liabilities are not the only legal threats for AI developers and users of the tools. Future claims will centre on “safety and accountability,” Tobey says.
DLA Piper has been heavily involved in helping OpenAI put forward its own views to Congress on how the technology should be regulated. “Our clients love that we’re involved with the rulemaking around AI because people know there’s going to be regulation,” he says. “It’s the uncertainty that’s bothering them.”
Generative AI-based large language models are “the Dictaphone for everything on Earth”, Tobey argues. Tools that take voice or text queries will help people deal with anything from planning holidays to answering health questions. But the answers can be supplied without “the traditional human gatekeepers” — lawyers, architects, engineers and doctors, he explains.
His team includes forensic lawyers, data analysts, science experts and subject matter specialists. It helps AI-assisted tool developers and innovators test for accountability, the risks of discrimination and bias, and statutory compliance.
In addition, the team works to create legal “guardrails” for Fortune 500 companies to “create policies, procedures, controls, monitoring, feedback loops” for the technology’s use, Tobey notes.
Such guardrails must pass muster as “credible” to policymakers and regulators, Tobey says. “There’s not going to be political appetite for broad immunities just to nurture the industry.”
Since 2020, OpenAI has also appeared on Morrison Foerster’s client roster, according to Justin Haan, a technology transactions group partner at the firm. 
“We’ve already been quite immersed in doing work that is directly related, not just tangential, to AI and machine learning models,” including procuring data, says Haan. The firm is helping to defend OpenAI against the Silverman-led authors and also against software programmers’ copyright claims.
Morrison Foerster represents more than 75 clients involved in the AI field and that roster has “increased substantially over the past five years”, Haan says. His firm, like DLA Piper, aims to help corporate clients develop internal policies for generative AI tool use.
David Cohen, Pittsburgh-based chair of Reed Smith’s records and ediscovery group, predicts that AI-assisted tools will require at least some of the 70 lawyers he supervises to reinvent their roles in the next few years. “Disruption should be expected — nobody should be complacent,” he says.
But he also envisions the tools generating new tasks. “We’re going to have to worry more about deep fakes used as evidence — a videotape of somebody saying something or doing something that they didn’t really say or do. That’s going to create a whole new set [of evidence-authentication issues],” he says. 
“You’re going to need professionals who get good at validating evidence — or finding where evidence is fake.”
Early instances of AI tools generating fake case citations that surface in court filings have prompted court administrators and judges — from Manitoba, Canada, to North Texas in the US — to create rules about how litigators may deploy the new technology.
Cohen also expects discovery battles when plaintiffs aim to uncover evidence about the liability of the owners or developers of AI-assisted tools that may have played a role in car accidents, employment discrimination, or other causes of action.
“People are going to be suing about things that happen partially because of AI,” he says, and new discovery puzzles will emerge.
So far, emerging AI tools are demanding his time but not necessarily leading to billable hours. He scours announcements, blogs, and podcasts to determine the tools “we ought to be testing”, he says.
Ultimately, he can see the technology transforming ediscovery entirely — with litigants agreeing to “throw the documents into one generative AI system” that allows both sides to pose questions. “There’s so much about today’s discovery system that’s inefficient, including the fact that both sides are essentially duplicating efforts,” Cohen says.