Britain's Prime Minister Rishi Sunak (left) attends an in-conversation event with Tesla and SpaceX's CEO Elon Musk in London, Nov 2, 2023. (PHOTO / POOL VIA AP)
LONDON - British Prime Minister Rishi Sunak championed a series of landmark agreements after hosting the first artificial intelligence (AI) safety summit, but a global plan for overseeing the technology remains a long way off.
Over two days of talks between world leaders, business executives and researchers, tech CEOs such as Elon Musk and OpenAI's Sam Altman rubbed shoulders with the likes of US Vice-President Kamala Harris and European Commission chief Ursula von der Leyen to discuss the future regulation of AI.
Leaders from 28 nations signed the Bletchley Declaration, a joint statement acknowledging the technology's risks; the US and Britain both announced plans to launch their own AI safety institutes; and two more summits were announced to take place in South Korea and France next year.
But while some consensus was reached on the need to regulate AI, disagreements remain over exactly how that should happen – and who will lead such efforts.
Risks around rapidly developing AI have been an increasingly high priority for policymakers since Microsoft-backed OpenAI released ChatGPT to the public last year.
The chatbot’s unprecedented ability to respond to prompts with human-like fluency has led some experts to call for a pause in the development of such systems, warning they could gain autonomy and threaten humanity.
Sunak talked of being "privileged and excited" to host Tesla CEO Musk, but European lawmakers warned of too much technology and data being held by a small number of companies in one country, the United States.
"Having just one single country with all of the technologies, all of the private companies, all the devices, all the skills, will be a failure for all of us," French Minister of the Economy and Finance Bruno Le Maire told reporters.
The UK has also diverged from the EU by proposing a light-touch approach to AI regulation, in contrast to Europe's AI Act, which is close to being finalized and will bind developers of what are deemed "high-risk" applications to stricter controls.
"I came here to sell our AI Act," Vera Jourova, Vice-President of the European Commission.
Jourova said, while she did not expect other countries to copy the bloc's laws wholesale, some agreement on global rules was required.
US Vice-President Kamala Harris speaks to the media after the end of the AI Safety Summit at Bletchley Park in Milton Keynes, England, Nov 2, 2023. (PHOTO / AP)
"If the democratic world will not be rule-makers, and we become rule-takers, the battle will be lost," she said.
While projecting an image of unity, attendees said some power blocs in attendance tried to assert their dominance.
Some suggested Harris had upstaged Sunak when the US government announced its own AI safety institute – just as Britain had a week earlier – and she delivered a speech in London highlighting the technology’s short-term risks, in contrast to the summit’s focus on existential threats.
"It was fascinating that just as we announced our AI safety institute, the Americans announced theirs," said attendee Nigel Toon, CEO of British AI firm Graphcore.
A recurring theme of the closed-door discussions, highlighted by a number of attendees, was the potential risks of open-source AI, which gives members of the public free access to experiment with the code behind the technology.
Some experts have warned that open-source models could be used by terrorists to create chemical weapons, or even create a super-intelligence beyond human control.
Speaking with Sunak at a live event in London on Thursday, Musk said: "It will get to the point where you’ve got open-source AI that will start to approach human-level intelligence, or perhaps exceed it. I don’t know quite what to do about it."
Yoshua Bengio, an AI pioneer appointed to lead a "state of the science" report commissioned as part of the Bletchley Declaration, told Reuters the risks of open-source AI were a high priority.
He said: "It could be put in the hands of bad actors, and it could be modified for malicious purposes. You can't have the open-source release of these powerful systems, and still protect the public with the right guardrails."