Google releases AI tool to identify child sex abuse images online

Google on Monday released a free artificial intelligence tool to help companies and organizations identify images of child sexual abuse on the internet.

Google’s Content Safety API is a developer toolkit that uses deep neural networks to process images so that fewer human reviewers need to be exposed to them. The technique can help reviewers identify 700 percent more child sexual abuse content, Google said.
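Google’s blog post doesn’t spell out the API’s interface, but the triage pattern it describes — a classifier scores incoming images so human reviewers see the most likely violations first and handle less material overall — can be sketched roughly as below. The `QueuedImage` type, `triage` function, and threshold are hypothetical illustrations of that pattern, not the actual Content Safety API.

```python
# Hypothetical sketch of classifier-assisted triage: score every queued image
# with a model and surface only the highest-risk items to human reviewers.
# Names and threshold are illustrative; this is not Google's Content Safety API.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class QueuedImage:
    image_id: str
    pixels: bytes  # raw image bytes pulled from the report queue


def triage(queue: List[QueuedImage],
           score_fn: Callable[[bytes], float],
           review_threshold: float = 0.5) -> List[QueuedImage]:
    """Return the images whose model score exceeds the threshold,
    sorted so the most likely violations are reviewed first."""
    scored = [(score_fn(img.pixels), img) for img in queue]
    flagged = [(score, img) for score, img in scored if score >= review_threshold]
    flagged.sort(key=lambda pair: pair[0], reverse=True)
    return [img for _, img in flagged]
```

In practice, `score_fn` would wrap a trained deep neural network; the point of the pattern is that reviewers spend their time on the images the model ranks highest rather than scanning the whole queue.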

“Quick identification of new images means that children who are being sexually abused today are much more likely to be identified and protected from further abuse,” engineering lead Nikola Todorovic and product manager Abhi Chaudhuri wrote in a company blog post Monday. “We’re making this available for free to NGOs and industry partners via our Content Safety API, a toolkit to increase the capacity to review content in a way that requires fewer people to be exposed to it.”

The use of AI is spreading like wildfire across the tech industry, powering everything from speech recognition to spam filtering. The term generally refers to machine learning techniques such as neural networks, which are loosely modeled on the human brain. Once a neural network has been trained on real-world data, it can, for example, learn to spot a spam email, transcribe your spoken words into a text message or recognize a cat.
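As a toy illustration of that train-then-classify loop, the sketch below fits a very small neural network to a handful of labeled example emails and then uses it to flag a new message. The dataset is deliberately tiny and purely illustrative; real systems train on vast amounts of real-world data.

```python
# Toy example of training a small neural network classifier and using it
# to flag spam-like text. The four training examples are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier

emails = [
    "win a free prize now", "claim your reward money",       # spam
    "meeting moved to 3pm", "lunch tomorrow with the team",   # legitimate
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = legitimate

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)

# A small feed-forward neural network; production models are far larger.
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(features, labels)

print(model.predict(vectorizer.transform(["free money prize"])))  # likely [1]
```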

The Internet Watch Foundation, which works to minimize the availability of child sex abuse images online, applauded the tool’s development, saying it will make the internet safer.


“We, and in particular our expert analysts, are excited about the development of an artificial intelligence tool which could help our human experts review material to an even greater scale and keep up with offenders, by targeting imagery that hasn’t previously been marked as illegal material,” Susie Hargreaves, CEO of the UK-based charity, said in a statement. “By sharing this new technology, the identification of images could be speeded up, which in turn could make the internet a safer place for both survivors and users.”




