Safer and more reliable autonomous systems, such as self-driving vehicles, may be possible thanks to a new understanding of deep learning, a type of artificial intelligence (AI) that mimics the way humans learn and process information.
The study, conducted at Bar-Ilan University and published in the journal Physica A, highlights the interplay between AI confidence levels and decision-making processes.
“Understanding the confidence levels of AI systems allows us to develop applications that prioritize safety and reliability,” explained Ella Koresh, an undergraduate student who contributed to the research.
“For instance, in the context of autonomous vehicles, when confidence in identifying a road sign is exceptionally high, the system can autonomously make decisions. However, in scenarios where confidence levels are lower, the system prompts for human intervention, ensuring cautious and informed decision-making.”
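The confidence-gated behavior Koresh describes can be sketched in a few lines of code. The snippet below is purely illustrative, not the researchers' implementation: the function names, the label set, and the 0.99 threshold are all hypothetical, and a real system would use a trained network rather than hand-written logits.

```python
import math

def softmax(logits):
    """Convert raw model outputs (logits) into probabilities."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def decide(logits, labels, threshold=0.99):
    """Act autonomously only when the top-class probability clears
    the threshold; otherwise defer to a human (hypothetical policy)."""
    probs = softmax(logits)
    confidence = max(probs)
    label = labels[probs.index(confidence)]
    if confidence >= threshold:
        return ("autonomous", label)
    return ("human_review", label)

# Hypothetical road-sign classifier outputs
labels = ["stop", "yield", "speed_limit"]
print(decide([9.0, 1.0, 0.5], labels))  # clear margin -> ('autonomous', 'stop')
print(decide([1.2, 1.0, 0.9], labels))  # ambiguous   -> ('human_review', 'stop')
```

The design choice here is simply that a single scalar (the top softmax probability) routes each input to one of two paths, which is the pattern the quote describes for autonomous vehicles.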
According to the researchers, “deep learning architectures can achieve higher confidence levels for a substantial portion of inputs, while maintaining an overall average confidence.”
Put more simply: deep learning AI can be more certain about a lot of things without sacrificing overall reliability.
The ability to bolster the confidence levels of AI systems sets a new benchmark for AI performance and safety, with potential applications ranging from AI-driven writing and image classification to critical decision-making in healthcare and autonomous vehicles.
In addition to Koresh, the study was authored by Yuval Meir, Ofek Tevet, Yarden Tzach and Prof. Ido Kanter of the Department of Physics at Bar-Ilan and the university’s brain research center.