Artificial intelligence: AI 'too dangerous to release' STUNS experts – 'Mind blowing'

The artificial intelligence tool Generative Pre-trained Transformer 3 (GPT-3) has stunned experts with its ability to design websites, answer medical questions and even write code. GPT-3 is the third generation of the machine learning model, a type of system in which computers learn from data rather than having to be explicitly programmed.

The AI’s predecessor, GPT-2, made headlines after being dubbed “too dangerous to release” due to its ability to create text apparently indistinguishable from that written by human beings.

While GPT-2 had 1.5 billion trainable parameters, its successor has 175 billion.

A parameter is a variable whose value is learned during training; together, the parameters determine how strongly different features of the data influence the model, and changing them changes the tool’s output.
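To make the idea concrete, the short Python sketch below is purely illustrative and has nothing to do with GPT-3’s actual architecture: it shows a toy “model” whose behaviour is fixed entirely by two parameters, so the same input yields a different output whenever those numbers change.

```python
# Illustrative sketch only: a "model" is just a function whose behaviour is
# determined by its parameters. GPT-3 works on the same principle, only with
# 175 billion of them learned from data.

def tiny_model(x, weight, bias):
    # weight and bias are this toy model's two parameters
    return weight * x + bias

print(tiny_model(2.0, weight=0.5, bias=0.1))  # 1.1
print(tiny_model(2.0, weight=3.0, bias=0.1))  # 6.1 - same input, different parameters, different output
```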

At the time GPT-2 was deemed “too dangerous” to release in full, OpenAI initially published only a smaller version of the model with 124 million parameters.


GPT-3 is currently in a closed beta, with demonstrations of the AI’s incredible abilities being shared on social media.

Coder Sharif Shameem has shown how a web layout can be described in plain English and GPT-3 will generate the corresponding code, despite not having been explicitly trained to do so.

Designer Jordan Singer created a similar process for app designs, while Qasim Munye, a medical student at King’s College London, showed how the program can draw on its stored knowledge to answer medical questions.
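For readers curious how demonstrations like these are typically built, the sketch below shows roughly how the closed beta is accessed through OpenAI’s Python client: the model is simply given a natural-language prompt and asked to continue it. The engine name, prompt and settings here are illustrative assumptions, not the exact code used in the demos above.

```python
# Hypothetical sketch of prompting GPT-3 via OpenAI's beta-era Python client.
# The engine name, prompt and parameter values are assumptions for illustration.
import openai

openai.api_key = "YOUR_API_KEY"  # access keys are granted to closed-beta participants

response = openai.Completion.create(
    engine="davinci",        # the largest GPT-3 model available in the beta
    prompt="Describe a button that says 'Subscribe' and generate the HTML for it:\n",
    max_tokens=64,           # how much text the model may generate
    temperature=0.3,         # lower values give more predictable output
)

print(response["choices"][0]["text"])  # the model's continuation of the prompt
```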

Given an incomplete image, the cutting-edge artificial intelligence can also be used to ‘auto-complete’ it.

Jack Clark, OpenAI’s head of policy, said: “We need to perform experimentation to find out what they can and can’t do.

“If you can’t anticipate all the abilities of a model, you have to prod it to see what it can do. There are many more people than us who are better at thinking what it can do maliciously.”

The demonstrations are visually impressive, with some going as far as to suggest the tool will be a threat to entire industries or even that it is showing self-awareness.

However, OpenAI’s CEO Sam Altman has described the “hype” as “way too much”.

He said: “It’s impressive (thanks for the nice compliments!) but it still has serious weaknesses and sometimes makes very silly mistakes.

“AI is going to change the world, but GPT-3 is just a very early glimpse. We have a lot still to figure out.”

Moreover, questions have been raised about exactly what GPT-3 has actually achieved.

Kevin Lacker, a computer scientist who formerly worked at Facebook and Google, showed that while the artificial intelligence can respond to ‘common sense’ questions, some answers that would be obvious to a human elude the machine, and it answers ‘nonsense’ questions as though they were perfectly sensible.

source: express.co.uk