Talk of AI dangers has ‘run ahead of the technology’, says Meta’s Nick Clegg

Comments come after Meta said it was opening access to its new large language model, Llama 2

Meta's president of global affairs, Nick Clegg, thinks many of the existential warnings sounded about AI relate to models that don't currently exist. Photograph: Kenzo Tribouillard/AFP

Talk of artificial intelligence (AI) models posing a threat to humanity has “run in advance of the technology”, according to Sir Nick Clegg.

The former leader of the UK Liberal Democrat party and UK deputy prime minister said concerns around “open-source” models, which are made freely available and can be modified by the public, were exaggerated, and the technology could offer solutions to problems such as hate speech.

It comes after Facebook’s parent company Meta said on Tuesday that it was opening access to its new large language model, Llama 2, which will be free for research and commercial use.

Generative AI tools such as ChatGPT, a chatbot that can provide detailed prose responses and engage in human-like conversations, have become widely used in the public domain in the last year.

In an interview on BBC Radio 4’s Today programme on Wednesday, Sir Nick, president of global affairs at Meta, said: “My view is that the hype has somewhat run in advance of the technology.

“I think a lot of the existential warnings relate to models that don’t currently exist, so-called super-intelligent, super-powerful AI models – the vision where AI develops an autonomy and agency on its own, where it can think for itself and reproduce itself.

“The models that we’re open-sourcing are far, far, far short of that. In fact, in many ways they’re quite stupid.”

Sir Nick said a claim by Dame Wendy Hall, co-chair of the Government’s AI Review, that Meta’s model could not be regulated and was akin to “giving people a template to build a nuclear bomb” was “complete hyperbole”, adding: “It’s not as if we’re at a T-junction where firms can choose to open source or not. Models are being open-sourced all the time already.”

He said Meta had 350 people “stress-testing” its models over several months to check for potential issues, and that Llama 2 was safer than any other large language models currently available on the internet.

Meta has previously faced questions around security and trust, with the company fined €1.2 billion by the Data Protection Commissioner in May over the transfer of data from European users to US servers. – PA