Artificial Intelligence Legal Issues
2023 Year in Review and 2024 Areas to Watch
The launch of ChatGPT, built on GPT-3.5, in November 2022 set up 2023 as a year of rapid growth and early adoption of this transformative technology. ChatGPT reached 100 million users within two months of launch, setting a record for the fastest-growing user base of a technology tool. As of November 2023, the platform boasted an estimated 100 million weekly active users and roughly 1.7 billion monthly visits. Notably, ChatGPT is just one of a growing number of generative AI tools on the market. The pace of technical development and user adoption is unprecedented.
This rapid growth also brought to light many legal and regulatory issues and societal concerns with this nascent technology. Many lawsuits were filed, regulatory actions increased, states passed AI legislation to protect consumers, and the White House issued a sweeping executive order to promote the safe, secure and trustworthy development and use of AI. Governments around the world are taking action as well. From a legal perspective, 2023 produced more questions than answers; hopefully, 2024 will bring greater clarity on some key issues.
To better assist clients with managing the legal risks associated with AI, Sheppard Mullin launched its multidisciplinary Artificial Intelligence Team. While our lawyers have been handling AI legal issues for decades, this team brought together over 100 Sheppard Mullin lawyers with diverse legal and industry backgrounds to provide clients with comprehensive AI legal strategies and advice more effectively.
Our team provides advice on a broad array of AI legal issues. Among the most sought-after services in 2023 were internal legal training for boards, C-suites and legal departments on AI legal issues; the establishment of policies on employee use of AI to mitigate legal risks; and policies on training AI models to ensure legal compliance and responsible use of AI.
The following is a summary of five of the most important AI-related legal developments in 2023 and five things to watch for in 2024.
AI Litigation
Patent and Copyright Offices' AI Initiatives
FTC Guidance and Enforcement
Executive Order On AI
U.S. and International Legislation
Areas to Watch in 2024
AI Litigation

A wave of lawsuits was filed this year, most of which focus on generative AI and remain pending. These cases raise issues that are fundamental to the legality of generative AI. Some of the key issues include:

Does training an AI model on copyrighted content constitute infringement, or is it fair use?
Must companies training AI models on open source software retain copyright information if any of the code is output, and must users comply with other open source license obligations (e.g., attribution, disclosing modifications)? For more information, see Solving Open Source Problems With AI Code Generators – Legal Issues and Solutions.
Does the use of facial information to train AI models without obtaining the users’ consent constitute misuse of biometric information?
In Flora v. Prisma Labs, Inc., a district court in California recently granted the defendant's motion to compel arbitration of claims relating to the use of facial information to train AI models without the users' consent.
Does the output of false information alleging fraud and embezzlement by a person constitute defamation? See Walters v. OpenAI, Case No. 23-cv-03122 (N.D. Ga. 2023). This matter was remanded to state court without the court ruling on OpenAI's motion to dismiss; the plaintiff is now appealing the remand order.
Does the use of AI that enables users to swap their faces with those of celebrities and public figures in photos and videos raise right of publicity concerns?

In Young v. NeoCortext, Inc., a class action complaint survived a motion to dismiss right of publicity claims relating to the defendant's AI-powered "Reface" application, which allows users to digitally "swap" their faces with celebrities and public figures in photos and videos. The motion to dismiss unsuccessfully argued that the plaintiffs' state law claims are barred by the First Amendment and preempted by the federal Copyright Act. (Case No. 2:23-cv-02496 (C.D. Cal. 2023)).

Will courts enforce arbitration provisions in AI tools' terms of service, effectively preventing class action claims?

Many of these cases have not progressed past the motion to dismiss stage. Many of the initial rulings have granted the motion to dismiss as to some claims but have given plaintiffs leave to amend and plead their claims more clearly. Based on the rulings on these motions, a few trends are emerging:
General allegations that copyrighted works were used to train AI models may not be sufficient to maintain a claim of copyright infringement.
Not surprisingly, the plaintiff must have a copyright registration for works alleged to be infringed.
Courts want examples of specific registered works that were used to train the AI model and the specific output that infringed on the plaintiff’s work.
Patent and Copyright Offices' AI Initiatives

The Patent Office and the Copyright Office each embarked on initiatives to solicit public feedback on AI issues affecting patents and copyrights, to help them assess what law and policy changes may be necessary to address AI, particularly generative AI. Because generative AI models are typically trained on copyrighted content and generative AI tools produce new expressive content, the copyright implications are significant.
Copyright Office
In early 2023, the Copyright Office launched an initiative to examine the copyright law and policy issues raised by AI technology, including the scope of copyright in works generated using AI tools and the use of copyrighted materials in AI training. In March, it published guidance on works containing AI-generated material. This guidance affirmed that only human-authored works can be registered and clarified that any AI-generated content that is more than de minimis must be explicitly excluded from the application. This duty to disclaim applies to AI-generated content in new, pending and issued registrations.
Some significant decisions in high profile copyright applications include:
Zarya of the Dawn – partial cancellation of an issued registration after the Copyright Office subsequently learned that some of the content was AI-generated. See Registration Decision on Zarya of the Dawn (Feb. 21, 2023).
Théâtre D'opéra Spatial – refusal to grant registration for a work that was partially AI-generated when the applicant refused to disclaim the AI-generated content. See Review Board Decision on Théâtre D'opéra Spatial (Sept. 5, 2023).
SURYAST – refusal to grant registration for a work that was partially AI-generated, despite the input allegedly being an original work of the applicant. The application listed the artist as an author along with the AI tool. See Second Request for Reconsideration for Refusal to Register SURYAST.
A Recent Entrance to Paradise – the Copyright Office prevailed in a district court case confirming its 2022 refusal to register a work whose listed author was an AI-based machine. See Thaler v. Perlmutter (D.D.C. 2023).
The Copyright Office also hosted public listening sessions and webinars to obtain information about AI technologies and their impact. It subsequently published a Notice of Inquiry in the Federal Register in August 2023 to inform its study and help assess whether legislative or regulatory steps in this area are warranted. It received nearly 10,000 submissions!
For more information on these issues see:
Copyright Office Guidance on AI
Generative AI and Copyright – Some Recent Denials and Unanswered Questions
Copyright Office Artificial Intelligence Initiative and Resource Guide
Patent Office
The U.S. Patent Office has also embarked on an AI initiative involving public outreach, likewise holding listening sessions and other public events.
In April, the U.S. Supreme Court denied certiorari in a case challenging the USPTO’s refusal to approve a patent application that listed an AI tool as the inventor. The refusal was previously confirmed by a district court and the Federal Circuit based on the requirement that inventors be human.
FTC Guidance and Enforcement

While the Federal Trade Commission (FTC) has been active in policing AI for some time, in 2023 it intensified its oversight of AI, issuing additional guidance on AI issues and stepping up enforcement in the industry.
Guidance
Some of the main guidance-related actions taken by the FTC this year include policy statements on the use of biometric information, enforcement guidance on discrimination and bias, guidance on the use of AI in consumer applications, and guidance preparing companies for regulatory compliance and risk mitigation.
In November, the FTC submitted comments to the Copyright Office identifying several issues raised by the development and deployment of AI that implicate competition and consumer protection policy concerns. In a press release, the FTC noted that creators' ability to compete may be unfairly harmed and consumers may be deceived when authorship does not align with consumer expectations (e.g., a consumer may think a work was created by a particular musician or other artist, when in fact it is an AI-created product).

Enforcement

In July, the FTC instituted an investigation into the generative AI (GAI) practices of OpenAI through a 20-page investigative demand letter. The investigation focuses on whether OpenAI, "in connection with offering or making available products and services incorporating, using or relying on Large Language Models has (1) engaged in unfair or deceptive privacy or data security practices or (2) engaged in unfair or deceptive practices relating to risks of harm to consumers, including reputational harm in violation of section 5 of the FTC Act, 15 USC Section 45, and whether Commission action to obtain monetary relief would be in the public interest."
In November, the FTC approved an omnibus resolution authorizing the use of compulsory process in nonpublic investigations involving products and services that use or claim to be produced using artificial intelligence (AI) or claim to detect its use. This resolution aims to streamline the issuance of civil investigative demands (CIDs), akin to subpoenas, in AI-related investigations, granting the FTC authority to determine when CIDs are necessary. The resolution will be effective for 10 years.
Executive Order On AI

In October, the White House issued an Executive Order focusing on safe, secure and trustworthy AI and laying out a national policy on AI. The Executive Order seeks to enhance federal government use and deployment of AI, including to improve cybersecurity and U.S. defenses, and to promote innovation and competition so the U.S. can maintain its position as a global leader on AI issues. It also focuses on protecting various groups, including consumers, patients, students, workers and children.
The Executive Order covers several important issues, including: safety testing and standards for such tests; content authentication and privacy; cybersecurity and national security; equity and civil rights; consumer and worker protection; advancing U.S. leadership in AI innovation and competition while collaborating with governments worldwide; and promoting responsible use of AI by the U.S. government.
Building on the Blueprint for an AI Bill of Rights and the earlier Executive Order directing agencies to combat algorithmic discrimination, the Executive Order mandates additional actions to advance equity and civil rights, including increased efforts to combat "BAD" AI (Biased And Discriminatory AI). See Equity and Civil Rights Issues in the White House Executive Order on AI for more.
Much of the Executive Order consists of mandates to various agencies to take specific actions within prescribed time frames, and it specifically calls on Congress to pass federal privacy legislation. A detailed, line-level tracker of the 150 requirements shows that agencies and other federal entities must act, in some cases within months, and for the majority of the deadlines, by October 2024. The largest numbers of requirements focus on safety, innovation and government use of AI, reflecting the uptick in concerns about national security risks and the push to quickly recruit AI talent into the federal government.
Overall, the Executive Order is far-reaching and is expected to create a number of programs and resources to enhance U.S. leadership and innovation while also protecting U.S. infrastructure from foreign bad actors' use of AI.
As with many technological transitions, certain industries may resist the change, as exemplified by the SAG-AFTRA strike this year, while others will embrace emerging innovations, leveraging them to advance their businesses. Such shifts naturally bring some level of impact and disruption, and the Executive Order is designed in part to anticipate and mitigate these effects.
U.S. and International Legislation

United States
At the federal level, a proposed privacy bill, the American Data Privacy and Protection Act (ADPPA), would cover AI. The bill proposes risk assessment obligations that would directly impact companies developing and using AI technologies. However, the ADPPA stalled during the past Congressional session, and it remains to be seen whether its framework will advance. In the absence of comprehensive federal legislation, numerous states have proposed legislation covering different aspects and implementations of AI.
Various states have proposed study bills that would create task forces to review the potential impacts of AI on industries and jobs, from the service sector to white-collar positions. Below are examples of states that have enacted, or are trying to enact, more substantive AI bills. While many substantive bills have failed at the state level, they may be indicative of legislative trends to come, and we anticipate further attempts at AI regulation at the state level.
March 2023 – Texas attempted to pass a law that would prohibit the use of artificial intelligence technology to provide counseling, therapy, or other mental health services unless:
the AI technology application through which the services are provided is an application approved by the commission; and
the person providing the services is a licensed mental health professional or a person that makes a licensed mental health professional available at all times to each person who receives services through the AI technology.
February 2023 – Illinois introduced the Illinois Data Privacy and Protection Act, which would regulate the collection and processing of personal information and the use of "covered algorithms." The bill defines "covered algorithm" as "a computational process that uses machine learning, natural language processing, artificial intelligence techniques, or other computational processing techniques of similar or greater complexity and that makes a decision or facilitates human decision-making with respect to covered data, including to determine the provision of products or services or to rank, order, promote, recommend, amplify, or similarly determine the delivery or display of information to an individual."
July 2023 – New York City law prohibits use of Automated Employment Decision Tools (AEDTs) in employment decisions unless:
the tool has been the subject of a bias audit conducted within the previous year in accordance with the law's requirements; and
the employer has published on its publicly available website a summary of the results of the tool's most recent bias audit and the tool's distribution date.
Sept 2023 – A New York state bill would dramatically restrict employers' ability to use electronic monitoring tools (EMTs) and AEDTs.
EMT is defined as “any system that facilitates the collection of data concerning worker activities or communications by any means other than direct observation, including the use of a computer, telephone, wire, radio, camera, electromagnetic, photoelectric, or photo-optical system.”
AEDT is defined as “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output” such as a “score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making.”
International
Originally proposed in April 2021, the EU AI Act aims to create a comprehensive regulatory framework for AI technologies to ensure their responsible development and use while addressing potential risks. On November 9, 2023, the European Parliament reached an agreement to move the AI Act forward, making it one of the first comprehensive attempts to establish limits on the use of AI. Key features and provisions expected in the EU AI Act include: a risk-based approach to classifying AI; prohibited practices for AI systems; transparency and accountability requirements; data quality and bias management; and sector-based limitations on high-risk technologies, e.g., facial recognition software in policing.
Finally, in an effort to boost international cooperation on AI safety, 29 countries signed the Bletchley Declaration at the AI Safety Summit in early November 2023. The Declaration encourages transparency and accountability for actors developing, designing and deploying AI tools and models. The global effort will support an international network of scientific research and will focus on a two-part agenda: (i) identifying and understanding the complex risks associated with AI and (ii) building respective risk-based policies across nations to ensure safety.
Additionally, in October 2023, the Group of Seven (G7) economies (Canada, France, Germany, Italy, Japan, Britain, and the United States) met to develop a code of conduct for the management and development of artificial intelligence systems. The code of conduct aims to provide guidelines and risk mitigation guidance for organizations involved in developing AI systems, including generative AI.
Areas to Watch in 2024

1. Results of Executive Order
2. Resolution of Litigation
3. Further Legislation
4. Company Adoption and Customization of AI Tools
5. Advanced/Cutting Edge Technology
Results of Executive Order
Based on the Executive Order ("EO"), we expect to see significant activity by many agencies, including policy and legislative recommendations, agency guidance, rulemaking and enforcement. For the majority of the requirements (about 75%), the EO sets hard deadlines by which actions must be completed. Of these deadlines, 94% must be met within a year (by October 29, 2024), and about 25% within 90 days (by January 28, 2024).[1] Based on the dates set in the EO (assuming no extensions of time are granted), the key areas where regulations will be implemented in the next year are as follows:
A number of the requirements are geared toward ensuring the safety and security of AI technology, including mandates that the security and reliability of AI systems be demonstrated through evaluations, benchmarking and information disclosure. Strengthening federal government use of AI and promoting innovation and competition are also key areas for action items.
[1] Based on information from the Stanford University Human-Centered Artificial Intelligence tracker, By the Numbers: Tracking The AI Executive Order (stanford.edu) (November 16, 2023).
Resolution of Litigation
While the cases involving generative AI are generally in the early stages of litigation, we hope to see at least some resolution of the issues in the pending litigation discussed in Section 1 of this Year in Review (AI Litigation). We also anticipate that many new lawsuits will be filed. One issue to watch is whether courts will enforce arbitration clauses in AI tools' terms of service, effectively preventing class actions. Most of the lawsuits to date have been against AI tool providers; in 2024, we anticipate that plaintiffs' attorneys may begin filing lawsuits against enterprise users of these tools as well.
Further Legislation
We anticipate further enactment of state-specific AI laws. Topics to watch in new and pending bills include employment, policing and security, education, healthcare, data privacy, and consumer protection.
Based on the progression of the AI Act, we anticipate that more countries will pursue AI legislation in 2024. However, it is not clear whether other countries will model their legislation on the AI Act. That act was set in motion prior to the advent of generative AI and focuses in large part on regulating uses of AI based on their level of risk. Some commentators have argued that this approach is not the right fit for the AI advancements that have since emerged, including LLMs, which can be put to many uses. What use-based regulation fails to account for is the regulation needed for the LLMs themselves, including what they are trained on (e.g., copyrighted materials), how they are trained (e.g., whether they avoid biased and discriminatory results), liability for hallucinations, and other issues with the training and deployment of AI tools. Despite being hailed as a regulatory breakthrough, enforcement of the AI Act remains uncertain: some aspects of the regulation are not expected to take effect for another 12 to 24 months, and the AI Act may undergo additional changes before going into effect. If the AI Act does reach final approval, the global impact of these regulations will be closely watched, and we anticipate a large impact on major AI developers and on businesses across sectors like education, healthcare and banking.
While many countries are talking about collaboration and harmonization, we think this will be difficult in many areas, particularly intellectual property. For example, Japan has announced that it will not enforce copyright infringement claims based on the training of AI on creative material, and China recently granted copyright protection to a work that was AI-assisted. These actions seem out of step with how the U.S. and other countries are approaching these issues.
Company Adoption and Customization of AI Tools
This year we saw many companies using third-party AI tools; in the coming year we anticipate companies developing their own tools and training their own models. We expect to see more companies licensing foundation models and infrastructure and building on top of them with their own data and training. In particular, watch for increased use of fine-tuning and/or retrieval augmented generation ("RAG"), which can reduce training costs and increase efficiency. Along with this, companies will need to continue developing their policies on AI models and testing for bias.
Fine-tuning is a technique for enhancing a pre-trained model by training it further on specific data (e.g., company-specific or domain-specific data, such as health care) and/or to perform new tasks. This approach allows the model to handle tasks beyond those of the pre-trained model and to provide better responses based on the specific data.
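To make the technique concrete, the following is a minimal sketch of fine-tuning using the open source Hugging Face transformers and datasets libraries. The base model name and the two-example training set are hypothetical placeholders rather than a recommended setup; a real project would fine-tune on domain-specific data the company has confirmed it has the right to use.

```python
# A minimal fine-tuning sketch using the Hugging Face "transformers" and
# "datasets" libraries. The base model and the tiny two-example dataset
# are hypothetical placeholders, not a real training set.
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Start from a general-purpose pre-trained model.
base_model = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=2)

# Hypothetical domain-specific examples (e.g., classifying contract clauses).
examples = Dataset.from_dict({
    "text": [
        "This agreement renews automatically for successive one-year terms.",
        "Either party may terminate this agreement upon 30 days' notice.",
    ],
    "label": [0, 1],
})
tokenized = examples.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64),
    batched=True,
)

# Continue training the pre-trained weights on the new data; the updated
# model now reflects the domain-specific material it was fine-tuned on.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="fine_tuned_model", num_train_epochs=1),
    train_dataset=tokenized,
)
trainer.train()
```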
RAG is an AI framework for improving the quality of LLM-generated responses by grounding the model on external sources of knowledge that supplement the LLM's internal representation of information. Implementing RAG in an LLM-based question answering system has two main benefits: it gives the model access to the most current, reliable facts, and it gives users access to the model's sources, so that its claims can be checked for accuracy and ultimately trusted.
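The sketch below is a simplified illustration of the RAG flow, not any particular vendor's implementation: the document set and word-overlap retriever are toy placeholders (production systems typically use vector embeddings for retrieval), and the generate() function stands in for a real LLM call.

```python
# A minimal, self-contained sketch of the RAG pattern: retrieve the most
# relevant documents for a question, then ground the prompt in those
# sources so the model answers from them and users can verify the sources.
import re

# Hypothetical internal knowledge base (e.g., company policy documents).
DOCUMENTS = [
    "Our AI use policy was last updated in March 2023.",
    "Employees must not enter confidential data into public AI tools.",
    "The cafeteria is open from 8 a.m. to 3 p.m.",
]

def words(text: str) -> set[str]:
    """Lowercase and split text into words, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank documents by naive word overlap (a stand-in for vector search)."""
    query = words(question)
    ranked = sorted(DOCUMENTS, key=lambda doc: len(query & words(doc)), reverse=True)
    return ranked[:k]

def generate(prompt: str) -> str:
    """Stub for the LLM call; a real system would query a model here."""
    return f"[model answer grounded in a {len(prompt)}-character prompt]"

question = "What does the policy say about confidential data?"
sources = retrieve(question)
# Grounding: the retrieved sources are placed in the prompt, and the same
# sources can be shown to users so the answer can be checked for accuracy.
prompt = "Answer using only these sources:\n" + "\n".join(sources) + f"\n\nQuestion: {question}"
print(generate(prompt))
```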
In 2023, many companies developed corporate policies on the use of third-party AI tools. As companies expand their use of AI by leveraging fine-tuning and RAG, or by developing their own AI tools, they will need to update their AI policies to cover these activities. Among other things, these policies will need to address data mapping of training materials to ensure the right to use them, designing and testing to avoid biased or discriminatory results, preventing and remedying hallucinations, and much more.
Advanced/Cutting Edge Technology
We anticipate seeing an increase in advanced technologies that utilize AI. Some of the interesting areas to watch for are rapid advancements in robotics, autonomous vehicles, brain interface technology, smart prosthetics, and quantum computing. These may not all see wide-scale commercial deployment in 2024, but some will likely see at least some commercialization. These are just a few of the many areas in which AI will be leveraged with other technologies for a wide range of uses.
Robots are being piloted in many industries for multiple purposes. For example, automated robots equipped with advanced AI algorithms are being used in industry to perform a variety of complex tasks — from sorting and assembly to quality inspection. Watch for further automation, which will streamline operational workflows leading to long-term gains for businesses and consumers alike.
Autonomous vehicles already exist and their use will likely increase, particularly as their safety and reliability continue to improve.
Brain-computer interfaces (BCIs) have the potential to enable a number of medical advancements, such as helping people with paralysis regain control of their limbs, assisting with recovery after physical injuries or neurological disorders, and even restoring hearing after hearing loss. BCIs are also expected to support a wide range of scientific research. While this technology is still largely experimental, some actual use cases exist, and the possibilities are extensive.
Smart prosthetics are technologically advanced artificial limbs or body parts designed to enhance functionality and usability for individuals with limb loss or limb impairment. These prosthetics incorporate various sensors, actuators, and advanced control systems to provide a more natural and intuitive user experience.
Many other advancements will leverage AI and we are sure the examples listed above will be just the tip of the iceberg.
Development and adoption of these new technologies will undoubtedly create new legal issues and require companies developing and adopting them to consider unique risk factors. Companies will need to ensure they comply with existing and new regulations, maintain up-to-date policies, develop ethical frameworks and manage the implementation and advancement of AI. We look forward to seeing what 2024 brings in improvements to and uses of AI, and the new legal and regulatory issues that will ensue.
For more information see:
Flash Briefing on White House Executive Order on AI Regulation and Policy
White House Executive Order Ramps Up US Regulation of and Policy Toward AI
Resource Guide for the White House Executive Order on Artificial Intelligence
AI’s Executive Order and Its Key Healthcare Implications
Meet the Authors
James Gatto
Partner, Co-Leader of the Artificial Intelligence Team, the Blockchain & Fintech Team, and Leader of the Open Source Team
Moriah Dworkin
Associate, Co-Leader of the Technology Transactions Team
Tiana Garbett
Associate, Member of the Artificial Intelligence Team and Technology Transactions Team