In March 2023, a group of researchers and tech leaders, including Elon Musk and Steve Wozniak, published an open letter calling for a pause in the artificial intelligence (AI) race. They recommended that labs halt the training of AI systems more powerful than OpenAI’s GPT-4, then the most sophisticated language-generating AI system, for at least six months. While AI can complete many tasks more efficiently than humans, it also poses dangers, such as racial bias in facial recognition technology and the spread of misinformation.
The Free Rider Problem
The AI industry exemplifies the “free rider problem,” a type of collective action problem. In such situations, everyone benefits if everyone takes a particular action, but each individual gains by leaving that action to others. AI is effectively a public good: its benefits and dangers will affect everyone, even those who don’t use it. But if some tech companies voluntarily halt their experiments, other corporations have a monetary incentive to continue their own AI research and get ahead in the AI arms race.
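The incentive structure described above can be sketched as a simple two-player game. In this illustrative model (the payoff numbers are hypothetical, chosen only to capture the free-rider dynamic), two labs each choose to "pause" or "race"; racing is each lab's best move no matter what the other does, yet mutual racing leaves both worse off than mutual pausing.

```python
# Hypothetical payoffs for two AI labs, each choosing "pause" or "race".
# Mutual pausing yields a shared safety benefit; a lab that races while
# the other pauses free-rides on the pauser's restraint.
PAYOFFS = {  # (choice_a, choice_b) -> (payoff_a, payoff_b)
    ("pause", "pause"): (3, 3),   # shared safety benefit
    ("pause", "race"):  (0, 5),   # the racer free-rides on the pauser
    ("race",  "pause"): (5, 0),
    ("race",  "race"):  (1, 1),   # arms race: both worse off than (3, 3)
}

def best_response(options, other_choice):
    """Return the choice maximizing a lab's own payoff, holding the
    other lab's choice fixed (payoffs are symmetric here)."""
    return max(options, key=lambda mine: PAYOFFS[(mine, other_choice)][0])

options = ("pause", "race")
# Racing is a dominant strategy: it is the best response to either
# choice by the other lab...
assert all(best_response(options, other) == "race" for other in options)
# ...yet both labs would prefer the (pause, pause) outcome.
assert PAYOFFS[("pause", "pause")] > PAYOFFS[("race", "race")]
```

This is the structure of a prisoner's dilemma, which is why voluntary pledges alone are unstable: each lab's individually rational choice undermines the collectively preferred outcome.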
The Need for Regulation
Decades of social science research on collective action problems have shown that regulation is often necessary when trust and goodwill are insufficient to prevent free riding. Regulations must be enforceable, and government action is often needed to ensure compliance. The Paris Agreement, a global accord on climate change, is voluntary, and the United Nations has no recourse to enforce it. Regulating AI development would likewise require global collective action and cooperation, just as climate change does.
Effective regulation and enforcement of AI would require federal oversight of research and the ability to impose fines or shut down noncompliant AI experiments to ensure responsible development. Without enforcement, there will be free riders, and the AI threat won’t abate anytime soon. OpenAI has acknowledged the risks posed by AI and has called for regulation requiring safety evaluations. Meanwhile, the industry continues to develop and train ever more advanced AI systems.
In conclusion, the free rider problem grounds the case for regulating AI development. Like greenhouse gas emissions, the risks posed by AI are not confined to a program’s country of origin, so effective regulation would require global collective action and cooperation. And because voluntary commitments invite free riders, those regulations must be enforceable, with government action to ensure compliance; without enforcement, the AI threat won’t abate anytime soon.