News

ALI Launches Principles of the Law, Civil Liability for Artificial Intelligence

The American Law Institute’s Council voted to approve the launch of Principles of the Law, Civil Liability for Artificial Intelligence. The project will be led by Reporter Mark Geistfeld of New York University School of Law.

“Artificial intelligence has become front-page news, and in a short time has seen rapid advancements and increasing integration in many aspects of our society,” said ALI Director Diane P. Wood. “As AI systems become more sophisticated and capable, legal questions surrounding their use, including exposure to liability and ethical implications, are becoming increasingly complex and pressing. Given the anticipated increase in AI adoption by many industries over the next decade, now is an opportune time for The American Law Institute to undertake a more sustained analysis of common-law AI liability topics through a Principles project.”

“Courts are already facing the first set of cases alleging harms, largely related to copyright and privacy, stemming from chatbots and other generative AI models,” added Reporter Geistfeld, “but there is not yet a sufficient body of case law that could be usefully restated. Meanwhile, influential state legislatures are actively considering bills addressing AI, and Congress and federal regulators, pursuant to President Biden’s Executive Order 14110, are also addressing these matters. These efforts could benefit from a set of principles, grounded in the common law, for assigning responsibility and resolving associated questions such as the reasonably safe performance of AI systems.”

ALI’s Principles of the Law are mainly addressed to legislatures, administrative agencies, or private actors. Like Restatements, they can be addressed to courts when an area is so new that there is little established law. Principles will often take the form of best practices for either private or public institutions.

“This project can help courts, the tech industry, and federal regulators understand the legal implications of AI,” explained Wood. “It focuses on common-law principles of responsibility, which can guide decision-making in the absence of applicable legislation. By identifying these principles, the project can help avoid conflicts between federal and state laws and provide clarity for all involved parties.”

The Principles project will focus on the core problem of physical harms (bodily injury and property damage). Other types of harm, such as copyright infringement, defamation, and privacy, have their own distinctive doctrinal questions and are the subjects of separate, ongoing Restatement projects. By focusing on physical harms, the project can maintain a clear scope and avoid overlap with other ongoing work. As the project progresses, the Institute will consider the broader implications of AI-caused harms and whether a more comprehensive approach might be necessary in the future.

“There are certain characteristics of AI systems that will likely raise hard questions when existing liability doctrines are applied to AI-caused harms,” explained Geistfeld. “Examples include the general-purpose nature of many AI systems, the often opaque ‘black box’ decision-making processes of AI technologies, the allocation of responsibility along the multi-layered supply chain for AI systems, the widespread use of open-source code for foundation models, the increasing autonomy of AI systems, and their anticipated deployment across a wide range of industries for a wide range of uses.”

The Institute and Reporter Geistfeld will now identify Associate Reporters and Advisers to the project.

-------------------------

Mark Geistfeld is the Sheila Lubetsky Birnbaum Professor of Civil Litigation at New York University School of Law, where his research has extensively addressed the common-law rules governing the prevention of and compensation for physical harms. He has authored or co-authored five books along with over 50 articles and book chapters, often showing how difficult doctrinal issues can be resolved by systematic reliance on the underlying legal principles. For example, in his books Product Liability Law (Aspen Pub., 2d ed. 2021) and Principles of Products Liability (Foundation Press, 3d ed. 2021), he shows why the two dominant conceptions of products liability are not conflicting, as has been commonly assumed, but instead work together in a complementary fashion that persuasively explains the primary doctrines while resolving various issues that have perplexed the courts. Similarly, in Tort Law: The Essentials (Aspen Pub., 2008), Geistfeld explains the important doctrines of tort law by reference to a compensatory tort right that unifies the primary liability functions of deterrence and compensation. In a series of ensuing articles, he shows why this conception is historically accurate and consistent with extensive empirical studies of contemporary social norms. Geistfeld also addresses the liability and insurance implications of autonomous vehicles in various publications, including “A Roadmap for Autonomous Vehicles: State Tort Liability, Automobile Insurance, and Federal Safety Regulation,” 105 California Law Review 1611 (2017). Much of his scholarship relies on insurance considerations due to the interplay between tort liability and insurance mechanisms.

Before joining the NYU Law faculty, Geistfeld spent a year as a litigation associate in the New York office of Dewey Ballantine, a year as a law clerk for Judge Wilfred Feinberg of the U.S. Court of Appeals for the Second Circuit, and another year as a litigation associate in the New York office of Simpson Thacher. He remains involved in litigation practice, having served as an expert witness or legal consultant in a number of tort and insurance cases.

Geistfeld is a senior editor of the Journal of Tort Law, an Adviser to both the Restatement of the Law Third, Torts: Concluding Provisions and the Restatement of the Law Third, Torts: Medical Malpractice, and often serves as a referee for peer-reviewed scholarly journals, university presses, and governmental funding agencies.