The bottom line of ambitious rules
Global rules are urgently needed to anchor artificial intelligence governance in international law, and to ensure it is studied – and implemented – through a truly global lens
How much have recently introduced digital technologies improved productivity?
No clear answer applies universally. The Organisation for Economic Co-operation and Development, for example, speaks of a ‘productivity paradox’: over the past decades of widespread digitalisation, there has also been a ‘protracted slowdown of aggregate productivity growth, contrary to all expectations’. Moreover, digitalisation is concentrating access to resources such as capital, data, microchips and computing power. That makes it difficult for smaller companies to compete, as is evident in artificial intelligence. The growth of large tech companies often translates not into job creation but into capital concentration. All in all, it amounts to a broad concentration of power over digital technologies in the hands of private companies, which hurts the public interest and the agency of democratic governments.
How much impact will AI have?
According to the International Monetary Fund’s ‘Gen-AI: Artificial Intelligence and the Future of Work’, AI will have significant impacts on jobs worldwide. Some of the anticipated effects are surprising: the most developed economies are also the most exposed to job losses. The IMF foresees that 40% of jobs in the world will be affected by AI. Not all of those jobs will be lost, but even a small percentage of losses will have significant economic, social and political implications. Governments should prepare now.
What rules are needed to ensure that AI delivers the greatest benefits and the least harm?
The world over, researchers, policymakers and executives are cataloguing a wide variety of risks and harms from the use of AI. Some focus on risks they consider existential, anticipated from more powerful AI in the future, while others point to the known risks and harms affecting democracy, equal treatment and fraud today. There is a lively debate about which risks are most urgent, but little debate about the fact that using AI carries risk. Governments should develop criteria and guardrails through regulation, but also use investments and government spending wisely to shape AI markets.
On the international level we see a lot of activity. The European Union’s AI Act has been adopted as the first comprehensive and legally binding law on AI in the democratic world. The United Nations has convened an advisory body to examine the global governance of AI, and there are initiatives at the G7 with the Hiroshima AI Process, as well as at the OECD. While it is understandable to try to collaborate across borders, the need for domestic regulation, especially in the United States, should not be overlooked. With its high concentration of AI companies, the US would set the tone for the world if it actually adopted AI laws. It’s not only about adopting new regulations for AI, though; it’s also about enforcing existing principles and laws in new contexts. We don’t need AI to understand that non-discrimination, antitrust rules and national security are important protections that governments must uphold and enforce.
The work I am involved with as a member of the UN AI Advisory Board focuses on ensuring that global AI governance is anchored in international law, based on universal human rights, and inclusive in its global outlook. To make that a reality, global rules are urgently needed. Too often, people in the Global South or in developing economies are left out of the analysis and policy debates about AI. The UN is in a unique position to look at AI governance needs through a truly global lens. At the least, we can hopefully develop a rights-protecting bottom line for AI governance that countries will use as a minimum threshold, with some countries deciding to adopt more ambitious rules.
How can G7 governments and their leaders best help?
G7 members have taken a strong step by adopting the Hiroshima AI Process, with its code of conduct on AI, as well as several ministerial declarations. For the principles articulated in these statements to become meaningful, they must be operationalised. I hope the Apulia Summit will dedicate time to this important next phase of the G7’s AI leadership.