Published: 15:39, October 24, 2023 | Updated: 15:57, October 24, 2023
Top researchers: Govts, firms should spend more on AI safety
By Reuters

Lawmakers vote on the Artificial Intelligence act, June 14, 2023 at the European Parliament in Strasbourg, eastern France. European Union consumer protection groups urged regulators on June 20, 2023, to investigate the type of artificial intelligence underpinning systems like ChatGPT, citing risks that leave people vulnerable and the delay before the bloc's groundbreaking AI regulations take effect. (PHOTO / AP)

STOCKHOLM - Artificial intelligence companies and governments should allocate at least one-third of their AI research and development funding to ensuring the safety and ethical use of the systems, top AI researchers said in a letter on Tuesday.

ALSO READ: US Space Force pauses use of AI tools over data security risks

The letter, issued a week before the international AI Safety Summit in London, lists measures that governments and companies should take to address AI risks.

"Governments should also mandate that companies are legally liable for harms from their frontier AI systems that can be reasonably foreseen and prevented," according to the letter, signed by three Turing Award winners, a Nobel laureate, and more than a dozen top AI academics.

Currently, there are no broad-based regulations focused on AI safety, and the European Union's first set of legislation has yet to become law because lawmakers have yet to agree on several issues.

"Recent state-of-the-art AI models are too powerful, and too significant, to let them develop without democratic oversight," said Yoshua Bengio, one of the three people known as the godfather of AI.

READ MORE: Dutch regulator urges companies to prepare for EU's AI Act

"It (investments in AI safety) needs to happen fast, because AI is progressing much faster than the precautions taken," he said.

Signatories to the letter include Geoffrey Hinton, Andrew Yao, Daniel Kahneman, Dawn Song and Yuval Noah Harari.

Since the launch of OpenAI's generative AI models, top academics and prominent CEOs such as Elon Musk have warned about the risks of AI, including calling for a six-month pause in developing powerful AI systems.

READ MORE: China launches initiative to address concerns over AI

Some companies have pushed back against such measures, saying they would face high compliance costs and disproportionate liability risks.

"Companies will complain that it's too hard to satisfy regulations - that 'regulation stifles innovation' - that's ridiculous," said British computer scientist Stuart Russell.

"There are more regulations on sandwich shops than there are on AI companies."