AI news: Neural network learns when it should not be trusted – ’99% won’t cut it’

AI is becoming increasingly central to our everyday lives, from driverless cars to medical diagnosis. But although such next-gen networks excel at recognising patterns in complex datasets, engineers are only now developing ways to tell when their outputs can be trusted.

AI experts at MIT have developed a method for modelling a machine’s confidence level based on the quality of the available data.

The engineers expect the advance could eventually save lives, as deep learning is already being deployed in everyday settings.

For example, a network’s level of certainty can be the difference between an autonomous vehicle deciding a crossroads is definitely clear and deciding “it’s probably clear, so stop just in case.”

The approach, dubbed “deep evidential regression” and led by MIT PhD student Alexander Amini, accelerates uncertainty estimation and could lead to even safer AI technology.
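At a technical level, the published deep evidential regression work has the network output, alongside each prediction, the parameters of an evidential (Normal-Inverse-Gamma) distribution from which uncertainty can be read off directly in a single forward pass. Below is a minimal, hypothetical PyTorch sketch of such an output head; the layer sizes, names and toy input are assumptions for illustration, not the researchers’ code.

```python
# Minimal sketch of an evidential output layer in the spirit of
# deep evidential regression. All sizes and names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    """Maps features to the four parameters (gamma, nu, alpha, beta) of a
    Normal-Inverse-Gamma distribution, yielding a prediction plus its
    uncertainty in one forward pass (no sampling or ensembles needed)."""

    def __init__(self, in_features: int):
        super().__init__()
        self.out = nn.Linear(in_features, 4)

    def forward(self, x):
        gamma, raw_nu, raw_alpha, raw_beta = self.out(x).chunk(4, dim=-1)
        nu = F.softplus(raw_nu)             # nu > 0
        alpha = F.softplus(raw_alpha) + 1   # alpha > 1
        beta = F.softplus(raw_beta)         # beta > 0

        prediction = gamma                         # expected value of the target
        aleatoric = beta / (alpha - 1)             # expected data noise
        epistemic = beta / (nu * (alpha - 1))      # model ("knowledge") uncertainty
        return prediction, aleatoric, epistemic

# Toy usage: 16 random feature vectors of size 8.
head = EvidentialHead(in_features=8)
pred, aleatoric, epistemic = head(torch.randn(16, 8))
print(pred.shape, epistemic.mean().item())
```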


Amini said: “We need the ability to not only have high-performance models, but also to understand when we cannot trust those models.

“This idea is important and applicable broadly. It can be used to assess products that rely on learned models.

“By estimating the uncertainty of a learned model, we also learn how much error to expect from the model, and what missing data could improve the model.”

The AI researcher adds that previous approaches to uncertainty analysis were based on Bayesian deep learning, which typically relies on sampling or ensembles to estimate confidence.

A model that is right 99 percent of the time is not good enough, he argues: “We really care about that one percent of the time, and how we can detect those situations reliably and efficiently.”
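Those earlier, sampling-based approaches typically estimate uncertainty by running a network (or an ensemble of networks) many times and measuring how much the predictions disagree, which multiplies the cost of every query. A rough, hypothetical sketch of that idea, using a made-up ensemble and toy data:

```python
# Rough sketch of the sampling-based alternative the article alludes to:
# estimating uncertainty from the spread of an ensemble's predictions.
# The tiny models and data here are made up purely for illustration.
import torch
import torch.nn as nn

ensemble = [nn.Linear(8, 1) for _ in range(5)]  # five independently initialised "models"
x = torch.randn(16, 8)                          # toy input batch

with torch.no_grad():
    preds = torch.stack([m(x) for m in ensemble])  # (5, 16, 1): one pass per member

mean_prediction = preds.mean(dim=0)  # ensemble prediction
uncertainty = preds.var(dim=0)       # disagreement between members ~ epistemic uncertainty

# Cost scales with the number of members or samples; the evidential approach
# aims to recover a comparable uncertainty signal from a single forward pass.
print(mean_prediction.shape, uncertainty.mean().item())
```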

The researchers started with a challenging computer vision task to put their approach to the test.

They trained their neural network to analyse an image and estimate the depth (its distance from the camera) of each pixel.

Self-driving cars use similar calculations to estimate their proximity to a pedestrian or another vehicle – no simple task.

As the researchers had hoped, the network projected high uncertainty for pixels where it predicted the wrong depth.
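In such a setup, every pixel carries both a predicted depth and an uncertainty value, so unreliable regions can be flagged automatically. A hypothetical sketch, with made-up maps and an arbitrary threshold:

```python
# Hypothetical sketch: given per-pixel depth predictions and per-pixel
# uncertainty (e.g. from an evidential head), flag pixels that should
# not be trusted. The arrays and threshold are illustrative only.
import torch

depth = torch.rand(1, 240, 320)        # predicted depth map (toy values)
uncertainty = torch.rand(1, 240, 320)  # per-pixel uncertainty (toy values)

threshold = uncertainty.quantile(0.95)  # e.g. treat the top 5% as unreliable
unreliable = uncertainty > threshold    # boolean mask of pixels to distrust

share = unreliable.float().mean().item()
print(f"{share:.1%} of pixels flagged as low-confidence")
```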

Amini said: “It was very calibrated to the errors that the network makes, which we believe was one of the most important things in judging the quality of a new uncertainty estimator.”

The test revealed the network’s ability to flag when users should not place full trust in its decisions.

In such examples, “if this is a health care application, maybe we don’t trust the diagnosis that the model is giving, and instead seek a second opinion,” Amini added.
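One way to act on such an estimate, purely as an illustration, is a simple deferral rule: act on confident predictions and hand uncertain ones to a human reviewer. The threshold and values below are arbitrary assumptions:

```python
# Hypothetical decision rule built on a model's uncertainty estimate:
# act on confident predictions, defer uncertain ones to a human reviewer.
def act_or_defer(prediction: float, uncertainty: float, max_uncertainty: float = 0.1) -> str:
    if uncertainty > max_uncertainty:
        return "defer: seek a second opinion"
    return f"act on prediction {prediction:.2f}"

print(act_or_defer(prediction=0.82, uncertainty=0.03))  # confident -> act
print(act_or_defer(prediction=0.82, uncertainty=0.40))  # uncertain -> defer
```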

Dr Raia Hadsell, a DeepMind artificial intelligence researcher not involved with the work, describes deep evidential regression as “a simple and elegant approach that advances the field of uncertainty estimation, which is important for robotics and other real-world control systems.”

She added: “This is done in a novel way that avoids some of the messy aspects of other approaches — [for example] sampling or ensembles — which makes it not only elegant but also computationally more efficient — a winning combination.”

source: express.co.uk