Artificial intelligence (AI) has become an indispensable part of the modern workplace, transforming fields such as copywriting, customer support, and recruitment. With continued advances in the technology, the idea of corporations managed or owned by AI is no longer purely speculative. The legal landscape already accommodates “zero-member LLCs,” raising intriguing questions about how an AI-operated LLC would be regulated and held accountable for its actions. As lawmakers confront this unprecedented challenge, it becomes crucial to determine how a nonhuman entity with human-level cognitive capabilities would respond to legal responsibilities and consequences.
In their thought-provoking article “Artificial intelligence and interspecific law,” Daniel Gervais of Vanderbilt Law School and John Nay of The Center for Legal Informatics at Stanford University present a compelling case for further research on the legal compliance of nonhuman entities with human-level intellectual capacity. The authors emphasize that an interspecific legal system offers an opportunity to establish effective governance for AI. Perhaps surprisingly, they also contend that the legal system may be better prepared for AI agents than is commonly believed.
Gervais and Nay propose a practical approach to integrating law-following behaviors into AI: impart legal training to AI agents and use large language models (LLMs) to monitor, influence, and incentivize their actions. This training would equip AI agents with an understanding of both the “letter” and the “spirit” of the law, enabling them to navigate complex legal issues and ambiguous scenarios of the kind that human courts must resolve. Monitoring is the linchpin of compliance: it lets humans track what AI agents do, shape their behavior, and mitigate potential harm before it occurs.
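The monitoring arrangement described above can be pictured as a gating layer: before an agent's proposed action is executed, a separate judge classifies it against the governing rules and blocks anything non-compliant, while keeping an audit trail for human oversight. The sketch below is an illustrative assumption, not the authors' implementation; in particular, `stub_judge` is a keyword-matching stand-in for what would in practice be an LLM prompted with the relevant law.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Verdict:
    compliant: bool
    reason: str

# Hypothetical stand-in for an LLM judge: flags an action whenever it
# mentions a prohibited rule. A real system would reason over statutes.
def stub_judge(action: str, rules: List[str]) -> Verdict:
    for rule in rules:
        if rule.lower() in action.lower():
            return Verdict(False, f"action conflicts with rule: {rule}")
    return Verdict(True, "no conflicting rule found")

class ComplianceMonitor:
    """Gates an agent's proposed actions through a legal-compliance judge."""

    def __init__(self, rules: List[str],
                 judge: Callable[[str, List[str]], Verdict]):
        self.rules = rules
        self.judge = judge
        self.log: List[Tuple[str, bool, str]] = []  # audit trail for oversight

    def execute(self, action: str) -> bool:
        verdict = self.judge(action, self.rules)
        self.log.append((action, verdict.compliant, verdict.reason))
        return verdict.compliant  # only compliant actions proceed

monitor = ComplianceMonitor(rules=["insider trading"], judge=stub_judge)
print(monitor.execute("publish quarterly report"))        # True
print(monitor.execute("buy shares via insider trading"))  # False
```

The design choice worth noting is that the judge sits outside the agent: the agent cannot disable its own oversight, and the log gives regulators the tracking capability the authors argue is essential.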
The authors argue that wrapping AI agents in legal entities is fundamental to maximizing the advantages offered by AI while maintaining human control over its activities. Without imposing legal responsibilities on AI, the ability to track and regulate AI actions would be severely limited. This proactive approach allows for effective oversight, shapes AI's behavior, and prevents potential harm. Gervais and Nay do acknowledge one alternative: halting AI development altogether. They conclude, however, that given the momentum of capitalism, the vast potential for innovation, and society's reliance on growth, a complete halt is unlikely.
As AI increasingly replaces human cognitive tasks, its integration into legal frameworks becomes a pressing concern. Gervais and Nay’s article offers a valuable perspective on the complex relationship between AI and the law. By exploring legal compliance for entities with human-level intellectual capacity, they shed light on the need for comprehensive research in this area. As society embraces AI’s potential, it becomes imperative to establish robust legal mechanisms that strike a balance between fostering innovation and preserving human control.
The emergence of AI-operated corporations demands a thorough examination of how nonhuman entities should be regulated. Gervais and Nay's article underscores the importance of research into the legal compliance of AI agents with human-level cognitive capabilities. By embedding law-following behaviors and actively governing AI, society can harness its immense potential while addressing concerns about control, responsibility, and accountability. A collaborative effort between the legal and technological fields will be essential to shape the future of AI in a manner that aligns with societal values and aspirations.