I was surprisingly anxious in the moments before my “truth test.” I was about to answer a dozen questions that would score my honesty and integrity.
It isn’t a written questionnaire, with right and wrong answers. It’s AI watching, listening and assessing me based on indicators over which I have no control – such as changes in my skin pigmentation, blood flow, pulse, voice, eye blinking, facial expressions, and signs of brain activity.
I have to answer each statement aloud and rate it on a scale of 1 to 7, indicating how strongly I agree or disagree.
What matters more than the answers themselves are the levels of stress, cognition and emotion that the AI detects – over a standard laptop video link – as I answer them.
The AI is trained to recognize cognitive dissonance, the discomfort we feel when we hold — and when we verbalize — conflicting beliefs, values or attitudes.
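Revealense hasn’t published how its scoring actually works, but the general idea – fusing several involuntary signal streams into stress, cognition and emotion scores – can be sketched. The Python below is purely illustrative: the channel names, the calibration window and the hand-set channel groupings are my assumptions, not the company’s method, which would rely on trained models rather than rules like these.

```python
# Illustrative sketch only: Revealense has not disclosed its algorithm.
# Assumes per-frame numeric streams (pulse, blink rate, voice pitch, etc.)
# recorded while a subject answers one question.
import statistics

def baseline_deviation(stream: list[float]) -> float:
    """Score how far a signal drifts from its own baseline (first samples)."""
    baseline = statistics.mean(stream[:10])          # calibration window (assumed)
    spread = statistics.pstdev(stream[:10]) or 1.0   # avoid divide-by-zero
    # Peak absolute deviation from baseline, in units of baseline spread
    return max(abs(x - baseline) for x in stream) / spread

def score_answer(signals: dict[str, list[float]]) -> dict[str, float]:
    """Map named physiological streams to illustrative stress/cognition/emotion scores."""
    # Hypothetical grouping of channels; a real system would learn these
    # mappings from data. Assumes every listed channel is present.
    groups = {
        "stress":    ["pulse", "skin_tone_shift"],
        "cognition": ["blink_rate", "response_latency"],
        "emotion":   ["voice_pitch", "facial_expression"],
    }
    return {
        construct: statistics.mean(baseline_deviation(signals[ch]) for ch in channels)
        for construct, channels in groups.items()
    }
```

In a real system, the mapping from raw signals to constructs like “cognitive dissonance” would be learned from large labeled datasets, not hand-coded.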
The system, developed by Israel-based startup Revealense, picks up on a whole series of tiny clues and interprets them with such accuracy that insurance companies trust it to check whether claimants are being truthful about what was stolen.
Finance companies use it to make decisions about loans. Homeland security agencies use it to identify terror threats. Human resources managers use it to help decide who gets the job, and psychologists use it to help diagnose PTSD.
Tom Cruise, Bill Clinton
The company describes what it does as “analyzing human behavior DNA through responsible AI.”
The decisions we humans make, based on face-to-face contact, can be unreliable because they’re influenced by our culture, mental state and past experiences.
But AI, the company argues, is impartial and has no agenda. It makes precise, objective measurements of a person’s physiological changes – within a specific context – that are too subtle for us to see or hear.
Amit Cohen, the company’s chief product officer, shows me the Illuminator software in action with a couple of iconic video moments.
Actor Tom Cruise is asked whether Nicole Kidman was the love of his life, after their 11-year marriage ended in divorce.
“The minute he hears her name we see it triggers a lot of emotion and a lot of thinking. He becomes confused and nervous,” says Cohen, after playing the 38-second YouTube clip.
“I don’t know if he really, really loves Nicole Kidman or he really, really hates her. But what I do know is that when he’s asked this question something explodes inside him, even though he’s playing a poker face and he’s a movie actor and he knows how to act in front of the camera.”
Despite the actor’s calm exterior, the software picks up super-high levels of stress, emotion and cognition, from the involuntary signals he gives off.
Former US President Bill Clinton managed to score low, at first, when he declared on camera in 1998: “I did not have sexual relations with that woman.”
He was being truthful, to some extent. According to a very particular interpretation of “sexual relations,” he’d convinced himself he wasn’t lying about what happened between him and Monica Lewinsky.
Cohen says: “We see from the video that he lied, but he believed his own lie. He almost convinced everyone, but at the end of the video we see an explosion on his cognition and emotion. He’s trying to believe in his lie, but you can’t manipulate your internal biology.”
The Illuminator software can provide useful insights from as little as 20 seconds of reasonable-quality video.
Deepfake detector
The startup has also just introduced a new tool that uses the Illuminator software to combat the growing threat of deepfakes in electoral processes – a timely launch, given the US elections and the many others coming in the months ahead.
The detector analyzes videos at scale, categorizing them as deepfake, authentic, or suspect for further examination. It can process any volume of content, from a single video to millions, making it an invaluable asset for election integrity.
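The company hasn’t disclosed its pipeline, but the triage logic it describes – sorting each video into one of three buckets, with borderline cases flagged for human review – might look something like this sketch. The detect() function and the threshold values here are hypothetical stand-ins, not Revealense’s API.

```python
# Minimal sketch of three-way triage, assuming some detector returns
# a probability that a given video is synthetic. Illustrative only.
from typing import Callable, Iterable

def triage(videos: Iterable[str],
           detect: Callable[[str], float]) -> dict[str, list[str]]:
    """Sort videos into three buckets: authentic, suspect, deepfake."""
    buckets: dict[str, list[str]] = {"authentic": [], "suspect": [], "deepfake": []}
    for path in videos:
        p = detect(path)  # hypothetical: probability the video is synthetic
        if p < 0.2:                          # assumed low-risk threshold
            buckets["authentic"].append(path)
        elif p > 0.8:                        # assumed high-confidence threshold
            buckets["deepfake"].append(path)
        else:
            buckets["suspect"].append(path)  # flagged for human examination
    return buckets
```

The middle “suspect” band is the design point that makes human review tractable at scale: only ambiguous videos, rather than all of them, get escalated.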
“With deepfakes increasing by 245% year-on-year in 2024, the potential to impact major events like national elections is significant,” said Dov Donin, the company’s founder and CEO. “Our system is already used by several governments globally to protect democratic processes from disinformation campaigns.”
Deepfakes are undeniably a serious problem: fake videos can shape attitudes and influence voting behavior. In January, a robocall impersonating President Biden’s voice advised New Hampshire voters to abstain from voting in the Democratic primary, a tactic aimed at suppressing turnout.
Also in the US, deepfake manipulation of political figures has become widespread, with synthetic images and videos being used to sway voter sentiment. For example, a fake image of Trump embracing black supporters was circulated to bolster his popularity.
Internationally, China has reportedly used AI-driven content to influence elections, as seen in Taiwan’s elections early this year and in India’s pre-election period, when huge numbers of AI-generated voice calls imitated public figures.
“Today’s technologies enable the creation of highly realistic fake videos accessible to anyone online, amplifying their impact through social media – the most influential platform in democracies with its unparalleled reach,” said Donin.
“This situation underscores the urgent need for reliable and ethical fake detection technologies. Such tools are crucial for institutions and media to help global citizens distinguish between reality and manipulated content, ensuring a safer and more informed navigation of the modern world.”
The technology is also being used by US insurance companies to speed up claim settlements for people prepared to submit to a truth test.
They provide details of what was stolen from their home on a video link, then declare that they haven’t “topped up” their claim. Illuminator is watching, very closely, for signs of cognitive dissonance – otherwise known as lying or, in this case, “light fraud.”
It’s a smart version of the old polygraph lie detector, which relies on physical indicators — breathing rate, perspiration, blood pressure, and pulse rate. The polygraph, now a century old, can only be used for yes/no questions and has been dismissed by skeptics as junk science.
More recent attempts at a truth test have involved bombarding people with hundreds of very similar questions in rapid succession – based on the logic that they’ll give a true answer if they don’t have time to think of a false one.
Other companies measure individual aspects of involuntary human behavior, such as voice patterns or eye movements.
“As far as I know, we are the only company that has managed to gather several human factors working in parallel and then analyze it into a dashboard like ours,” Cohen tells ISRAEL21c.
PTSD and deepfakes
Revealense was founded in 2021 in Petah Tikva, central Israel, and has raised over $4 million in funding.
“We’re also using our technology for identifying the possibility for PTSD [post-traumatic stress disorder] among soldiers returning from the war,” says Cohen.
“We are already working with the Israeli military on a technology that can assess in a matter of minutes the chances of a soldier developing post-trauma.”
The future will bring new challenges in terms of deepfakes, he says, such as the convincing video that surfaced a couple of years ago, appearing to show actor Morgan Freeman declaring: “I am not Morgan Freeman. And what you see is not real.”
The technology already exists to produce video that looks and sounds exactly like a president, an actor, or even your own family member, with potentially devastating consequences.
Scammers can create an ultra-sophisticated deepfake of your son, for example, in a video call saying: “Hi Dad, I’m short of cash and I’m stuck without fuel. Can you send me some money?”
But the deepfake won’t have the telltale signs of truth that Revealense’s AI is trained to detect and analyze.
“To protect humans from the danger of AI, specifically from deepfakes, we believe we can provide everyone with what we call a ‘mental ID’ that is unique, like your fingerprint to protect them from AI,” Cohen explains.
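Cohen doesn’t spell out how a “mental ID” would be checked, but conceptually it resembles biometric verification: a live behavioral signature is compared against one enrolled earlier. The sketch below is speculative; the feature vectors and the similarity threshold are assumptions for illustration, not the company’s design.

```python
# Speculative sketch: if involuntary behavioral signals could be distilled
# into a numeric signature, verifying a video call might reduce to comparing
# a live signature against an enrolled one. Threshold value is assumed.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def matches_mental_id(enrolled: list[float], live: list[float],
                      threshold: float = 0.9) -> bool:
    """True if the live behavioral signature is close enough to the enrolled one."""
    return cosine_similarity(enrolled, live) >= threshold
```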
By the way, if you’re wondering how I fared in my truth test (and therefore whether you can believe what I’ve written), I’m sorry to tell you that the results are confidential.