Whether human laws can be imposed as a design constraint on AI models is a question for the engineers. But if it can be done, it would help to settle many otherwise intractable debates about how the technology should be used and regulated in a world of competing values.
CAMBRIDGE – If the British computer scientist Alan Turing’s work on “thinking machines” was the prequel to what we now call artificial intelligence, the late psychologist Daniel Kahneman’s bestselling Thinking, Fast and Slow might be the sequel, given its insights into how we ourselves think. Understanding “us” will be crucial for regulating “them.”
That effort has rapidly moved to the top of policymakers’ agenda. On March 21, the United Nations unanimously adopted a landmark resolution (led by the United States) calling on the international community “to govern this technology rather than let it govern us.” And that came on the heels of the European Union’s AI Act and the Bletchley Declaration on AI safety, which more than 20 countries (most of them advanced economies) signed last November. Moreover, country-level efforts are ongoing, including in the US, where President Joe Biden has issued an executive order on the “safe, secure, and trustworthy development and use” of AI.
These efforts are a response to the AI arms race that started with OpenAI’s public release of ChatGPT in late 2022. The fundamental concern is the increasingly well-known “alignment problem”: the fact that an AI’s objectives and chosen means of pursuing them may not be deferential to, or even compatible with, those of humans. The new AI tools also have the potential to be misused by bad actors (from scam artists to propagandists), to deepen and amplify pre-existing forms of discrimination and bias, to violate privacy, and to displace workers.