In the field of artificial intelligence (AI), inclusive global cooperation is essential for navigating the complex societal impacts of this rapidly evolving technology. To address this need, Kevin Frazier, an assistant professor of law at St. Thomas University, is spearheading an effort to create new legal educational tools for the AI sector. In a recent interview with VentureBeat, Frazier discussed his development of an open-source legal syllabus aimed at providing teaching materials on AI, law, and policy.
The modular curriculum that Frazier has built covers foundational AI concepts, associated risks, and legal frameworks. It also includes lectures from scholars in the field, offering a comprehensive grounding in the technology and its implications. Frazier’s goal is to foster informed, multidisciplinary dialogue on shaping oversight frameworks through “living documents” like this syllabus. By embracing global participation, initiatives like this can best guide responsible progress as AI continues to reshape society.
The Need for Broad Representation
Frazier’s motivation for involving more voices in AI governance talks is rooted in the observation that policy and legal conversations in this field remain exclusive. He sees a need for more inclusive and representative discussions to address the wide-ranging implications of AI. Frazier notes that previous governance efforts have been dominated by calls for self-regulation from CEOs, who may not possess a comprehensive understanding of the technology.
To build a more inclusive and expansive research agenda, Frazier emphasizes the importance of developing informed perspectives worldwide. His open-source syllabus serves as a platform for cultivating expertise and aims to bridge the knowledge gap. Whether attending St. Thomas University or the Harvard Kennedy School, students have the opportunity to receive an education that prepares them to be active contributors in the AI governance conversation.
When envisioning effective AI governance frameworks, Frazier looks to other technologies that have reshaped society for guidance. He points to geoengineering (also known as climate engineering) as an example. Like AI, geoengineering involves introducing complex risks through large-scale environmental modifications with far-reaching implications. Frazier notes that discussions on geoengineering, much like early AI policy talks, have been confined to a limited set of voices.
Frazier highlights that the legal and regulatory communities around geoengineering often lack an understanding of the underlying technology. Without input from scientific communities, governance frameworks have struggled to emerge. Similarly, AI, with its potential to transform nearly every industry and community, requires governance informed by technical expertise. Frazier’s open syllabus aims to foster multidisciplinary and inclusive conversations by providing knowledge of how AI systems work, which is critical to developing governance aligned with the diverse impacts and opportunities AI presents.
Frazier’s initiatives also respond to calls for more inclusive participation in global issues. He was particularly motivated by a Member of Parliament from Tanzania who challenged scholars on this front during an event hosted by the Future Society. The MP emphasized the importance of actively soliciting participation from communities in the global south. Frazier acknowledges that those from developing regions stand to experience the profound impacts of AI but often have limited involvement in governance discussions.
Representation from the populations most deeply affected is crucial to shaping the conversation in a meaningful way. Frazier acknowledges the need to do a better job of involving diverse communities in AI governance; broadening viewpoints and engaging a wide range of perspectives fosters more inclusive decision-making.
Frazier’s open syllabus reflects his vision of cultivating progress through collaborative partnerships. By connecting AI policy educators and making resources widely accessible, he aims to facilitate knowledge sharing that advances inclusive governance. The modular structure of the syllabus also allows for ongoing evolution as other institutions contribute their localized expertise to the living document. Scholars at the Legal Priorities Project and the Center for AI Safety, among others, have provided feedback and support for the syllabus.
Frazier recognizes that business decision-makers also have a critical role to play in shaping AI governance. Their engagement and cooperation can contribute valuable insights and expertise to the conversation, ensuring that AI is governed in a way that considers both the benefits and risks it presents.
Inclusive global cooperation is vital in effectively governing AI and addressing its impacts on society. Kevin Frazier’s open-source legal syllabus represents a significant step towards cultivating informed and inclusive dialogue in AI governance. By involving diverse voices, drawing lessons from parallel technologies, responding to calls for inclusion, and fostering partnerships, the aim is to develop governance frameworks that are aligned with the wide-ranging implications of AI.