Artificial intelligence warning: AI deemed ‘too dangerous’ released into the world

Experts at the Elon Musk-founded OpenAI feared the AI, dubbed “GPT-2”, was so powerful it could be maliciously misused by everyone from corrupt politicians to criminals. GPT-2 was designed to accurately predict the words that follow a given piece of text.

By doing so, the artificial intelligence can create long strings of writing eerily indistinguishable from copy created by a human.
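The core idea, predicting the most likely next word and feeding it back in, can be illustrated with a toy bigram model. This is a drastic simplification for illustration only: GPT-2 itself is a large transformer neural network, not a word-count table.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count how often each word follows each other word."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def generate(model, start, length=5):
    """Repeatedly append the most likely next word."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break  # no known continuation for this word
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

# Tiny made-up corpus, purely for demonstration.
corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigram(corpus)
print(generate(model, "the", length=4))  # "the cat sat on the"
```

GPT-2 does conceptually the same thing, choose a plausible continuation and repeat, but it conditions on the entire preceding text rather than just the last word, which is why its output can read like human writing.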

But it soon became clear the AI was too good at its job.

GPT-2 is so powerful it could be used to scam people and undermine trust in anything you read.

In addition, the artificial intelligence can be abused by extremist groups to create “synthetic propaganda”.



But OpenAI has since released increasingly complex versions and has now made the full version available.

The full version is more convincing than the early incarnation of the AI.

The relatively “marginal” increase in credibility is what encouraged the researchers to make it available, OpenAI announced.

The company, which is no longer associated with SpaceX CEO Elon Musk, hopes the release can partly help the public understand how such a tool could be misused.

OpenAI believes GPT-2 will help inform debate among AI experts about how to mitigate such dangers.

Scientists warned in February how malicious people could misuse the programme in numerous ways.

The generated text could be used to create misleading news articles, impersonate other people, or automatically produce abusive or fake content for social media.

They noted there were likely a variety of other misuses that had not even been imagined yet.

Such misuses would require the public to become more critical about the text they consume, which could have been generated by artificial intelligence, they said.

The researchers wrote: “These findings, combined with earlier results on synthetic imagery, audio, and video, imply that technologies are reducing the cost of generating fake content and waging disinformation campaigns.

“The public at large will need to become more skeptical of text they find online, just as the ‘deep fakes’ phenomenon calls for more skepticism about images.”

source: express.co.uk

