Across the globe, there is growing awareness of the risks of unchecked artificial intelligence research and development. Governments are moving quickly to address this, using existing legal frameworks or introducing new standards and assurance mechanisms. Recently, the White House proposed a Blueprint for an AI Bill of Rights.
But the great paradox of a field founded on data is that so little is known about what’s happening in AI, and what might lie ahead.
This is why we believe the time is right for the creation of a global AI observatory — a GAIO — to better identify risks, opportunities, and developments and to predict AI’s possible global effects.
The world already has a model in the Intergovernmental Panel on Climate Change. Established in 1988 by the United Nations with member countries from around the world, the IPCC provides governments with scientific information they can use to develop climate policies. A comparable body for AI would provide a reliable basis of data, models, and interpretation to guide policy and broader decision-making about AI.
At present, numerous bodies collect valuable AI-related metrics. Nation-states track developments within their borders; private enterprises gather relevant industry data; and organizations like the OECD’s AI Policy Observatory focus on national AI policies and trends. While these initiatives are a crucial beginning, much about AI remains opaque, often deliberately. Governments cannot regulate what they do not understand. A GAIO could fill this gap through four main areas of activity.
1. Create a global, standardized incident-reporting database concentrating on critical interactions between AI systems and the real world. For example, in the domain of biorisk, where AI could aid in creating dangerous pathogens, a structured framework for documenting incidents related to such risks could help mitigate threats. A centralized database would record essential details about specific incidents involving AI applications and their consequences in diverse settings — examining factors such as the system’s purpose, use cases, and metadata about training and evaluation processes. Standardized incident reports could enable cross-border coordination.
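The fields described above can be sketched as a minimal record structure. This is an illustrative assumption about what such a schema might contain, not a proposed standard; the field names and example values are hypothetical.

```python
from dataclasses import dataclass, field, asdict
from datetime import date

# Illustrative sketch of a standardized AI incident record.
# Field names and types are assumptions for discussion, not a proposed standard.
@dataclass
class IncidentReport:
    incident_id: str          # globally unique identifier
    system_purpose: str       # what the AI system was built to do
    use_case: str             # the setting in which the incident occurred
    jurisdiction: str         # country or region, to support cross-border coordination
    occurred_on: date         # when the incident took place
    consequences: str         # observed real-world effects
    training_metadata: dict = field(default_factory=dict)    # e.g. data sources
    evaluation_metadata: dict = field(default_factory=dict)  # e.g. benchmark results

# A hypothetical example entry.
report = IncidentReport(
    incident_id="GAIO-2025-0001",
    system_purpose="protein sequence generation",
    use_case="biology research assistant",
    jurisdiction="EU",
    occurred_on=date(2025, 3, 14),
    consequences="model suggested sequences flagged by biosafety screening",
)

# A uniform shape is what makes reports comparable across borders.
record = asdict(report)
```

The value of standardization here is not any particular field list but the uniform shape: two regulators in different jurisdictions can aggregate and compare records only if they share one.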
2. Assemble a registry of the most consequential AI systems, focused on applications with the largest social and economic impacts, measured by the number of people affected, the person-hours of interaction, and the stakes of their effects, in order to track their potential consequences.
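One illustrative way to turn the three criteria above into a ranking is a combined priority score. The log-scaled combination below is a hypothetical assumption made for this sketch, not a method proposed in the article.

```python
import math

# Hypothetical sketch: rank candidate registry entries by the three criteria
# named above. The combination rule (log-scaled product) is an illustrative
# assumption, not a proposed methodology.
def priority_score(people_affected: int, person_hours: float, stakes: float) -> float:
    """stakes: analyst-assigned severity of potential effects, in [0, 1]."""
    reach = math.log10(max(people_affected, 1))
    exposure = math.log10(max(person_hours, 10))
    return reach * exposure * stakes

# Hypothetical systems: a high-stakes, moderate-reach system can outrank
# a low-stakes system with far more users.
systems = {
    "loan-scoring model": priority_score(5_000_000, 2_000_000, 0.8),
    "photo filter app": priority_score(50_000_000, 10_000_000, 0.1),
}
highest = max(systems, key=systems.get)
```

The point of the sketch is that raw user counts alone would mis-rank systems; the stakes of a system's effects have to enter the prioritization somewhere, whatever weighting a real registry chose.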
3. Bring together global knowledge about the impacts of AI on critical areas such as labor markets, education, media, and health care. Subgroups could orchestrate the gathering, interpretation, and forecasting of data. A GAIO would also include metrics for both positive and negative impacts of AI, such as the economic value created by AI products and the impact of AI-enabled social media on mental health and political polarization.
4. Orchestrate global debate through an annual report on the state of AI that analyzes key issues, patterns that arise, and choices governments and international organizations need to consider. This would involve rolling out a program of predictions and scenarios focused primarily on technologies likely to go live in the succeeding two to three years. The program could build on existing efforts, such as the AI Index produced by Stanford University.
A focus on facts rather than prescriptions
A GAIO would also need to innovate. Crucially, it would use collective intelligence methods to bring together inputs from thousands of scientists and citizens, which is essential in tracking emergent capabilities in a fast-moving and complex field. In addition, a GAIO would introduce whistleblowing mechanisms similar to the U.S. government’s incentives for employees to report on harmful or illegal actions.
To succeed, a GAIO would need legitimacy comparable to the IPCC’s. This can be achieved by drawing its members from governments, scientific bodies, and universities, among others, and by ensuring a sharp focus on facts and analysis rather than prescription, which would be left in the hands of governments.
Contributors to the work of a GAIO would be selected, as with the IPCC, on the basis of nominations by member organizations, to ensure depth of expertise, disciplinary diversity, and global representativeness. Their selection would also require maximum transparency, to minimize both real and perceived conflicts of interest.
The AI community and businesses using AI tend to be suspicious of government involvement, often viewing it as a purveyor of restrictions. But the age of self-governance is now over. We propose an organization that exists in part for governments, but with the primary work undertaken by scientists. Collaboration with all international initiatives related to AI would be welcome.
In order to grow, a GAIO will need to convince key players from the U.S., China, the U.K., the European Union, and India, among others, that it will fill a vital gap. The fundamental case for its creation is that no country will benefit from out-of-control AI, just as no country benefits from out-of-control pathogens.
The greatest risk now is a proliferation of disconnected efforts. Unmanaged artificial intelligence threatens critical infrastructure and the information space we all need to think, act, and thrive. Before nation-states squeeze radically new technologies into old legal and policy boxes, creating a GAIO is the most feasible step.
This article is written by Sir Geoff Mulgan, a professor of collective intelligence, public policy, and social innovation at University College London; Thomas W. Malone, a professor of information management at MIT Sloan and the director of the MIT Center for Collective Intelligence; Divya Siddharth and Saffron Huang of the Collective Intelligence Project; Joshua Tan of the Metagovernance Project; and Lewis Hammond of the Cooperative AI Foundation.