When two cars collide on the freeway because the car in front stopped short and the car behind was not maintaining a safe distance, the driver of the tailgating car is considered at fault.

But what happens if that second car is driven not by a person but by a robot?

This question will be relevant as soon as autonomous vehicles and cars driven by human beings coexist on the roads. And if the question of assigning blame is not addressed, “there is very real risk that self-driving vehicles will never realize their lifesaving potential,” says Prof. Amnon Shashua, CEO of Intel’s Mobileye division in Israel, which is developing technology for autonomous vehicles.

Shashua’s solution: Responsibility-Sensitive Safety (RSS), a mathematical model that the autonomous vehicle industry, as well as the regulators and insurance companies covering those cars, must all agree on.

The RSS model provides “specific and measurable parameters for the human concepts of responsibility and caution and defines a ‘safe state’ where the autonomous vehicle cannot cause an accident, no matter what action is taken by other vehicles,” said Shashua, who devised RSS with colleague Shai Shalev-Shwartz. Shashua is also a senior vice president at Intel, which acquired Mobileye earlier this year.

“Although crashes caused by human error kill more than one million people annually,” Shashua writes in a paper delivered at the World Knowledge Forum in Seoul, South Korea earlier this month, “it may only take a few fatal crashes of a fully autonomous vehicle, where fault is uncertain, to meaningfully delay or forever foreclose” on the self-driving future.

In other words, a self-driving car following the RSS model could not rear-end a human-driven car because its algorithms would always keep it at a safe distance. That doesn’t mean that autonomous vehicles won’t be involved in accidents, just that they won’t be at fault unless there is a technical glitch.
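The “safe distance” at the heart of this idea is not a fixed gap but a formula: the published RSS work by Shashua and Shalev-Shwartz defines a minimum safe following distance based on both cars’ speeds, the rear car’s response time, and worst-case braking and acceleration limits. The sketch below follows that published formulation; the parameter values are illustrative assumptions, not the ones Mobileye actually uses.

```python
def rss_safe_distance(v_rear, v_front, rho=1.0,
                      a_max_accel=3.0, a_min_brake=4.0, a_max_brake=8.0):
    """Minimum safe longitudinal distance (meters), per the published
    RSS formulation. Speeds in m/s; accelerations in m/s^2.

    Worst case assumed: during its response time rho, the rear car
    accelerates at a_max_accel, then brakes only gently (a_min_brake),
    while the front car brakes as hard as possible (a_max_brake).
    """
    # Rear car's speed after accelerating through the response time.
    v_rear_after = v_rear + rho * a_max_accel
    d = (v_rear * rho                              # distance during response time
         + 0.5 * a_max_accel * rho ** 2            # extra distance from accelerating
         + v_rear_after ** 2 / (2 * a_min_brake)   # rear car's gentle stopping distance
         - v_front ** 2 / (2 * a_max_brake))       # front car's hard stopping distance
    # If the front car stops well short of the rear car's reach, no gap is needed.
    return max(d, 0.0)

# Two cars both traveling at 30 m/s (~108 km/h): the following car
# must keep a gap of over 100 meters under these worst-case assumptions.
gap = rss_safe_distance(v_rear=30.0, v_front=30.0)
```

If the computed gap is maintained at all times, the rear car can always stop before reaching the front car even in the worst case, which is exactly the “cannot cause a rear-end collision” guarantee the article describes.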

Because the autonomous vehicle’s system of sensors will continuously collect data, like the “black box” on an airplane, it should be possible to “rapidly, conclusively determine responsibility for incidents that involve an autonomous vehicle,” writes Shashua.

The Intel RSS model is also meant to clarify legal liability in the event of an accident. Most importantly, if RSS is adopted across the industry, Shashua argues, it could reduce the number of traffic fatalities in the US from about 40,000 a year to only about 40.

Experts agree that computer-driven cars – with 360-degree vision and lightning-fast reaction times – will be better drivers than humans. As a result, “self-driving vehicles can and should be held to a standard of operational safety that is inordinately better than what we humans exhibit today,” Shashua writes.