Almost 40 percent of Americans aged 12 to 17 report having been bullied online. Seventy-four percent of video game players on platforms like Fortnite have been harassed.
Those numbers alarmed Zohar Levkovitz, an Israeli startup entrepreneur who had just sold his company, Silicon Valley-based online advertising platform Amobee, for $350 million.
“I met with the heads of Facebook and Google and I asked them, what are you going to do? And they said, ‘Nothing, there’s no problem,’” Levkovitz tells ISRAEL21c. “I’m a father. I realized that if the government can’t do anything about it and Facebook and Google won’t, then I will.”
Levkovitz teamed up with cybersecurity expert Ron Porat, who had developed an algorithm that addressed exactly Levkovitz’s concerns: it uses artificial intelligence to search through text, images, video and audio on social networks, hosting platforms and pretty much anywhere on the Internet to ferret out hate speech and online toxicity.
Levkovitz and Porat first called their company AntiToxin Technologies. “That sounded too medical,” Levkovitz says.
The company was rebranded L1ght (“We are shedding light over the darkness of the Internet”). Formally founded in December 2018, it raised a $15 million seed round earlier this year from Mangrove Capital Partners, Tribeca Venture Partners and Western Technology Investment.
L1ght is not the same as “parental control applications that parents install on their child’s device to track the kid,” Levkovitz points out. “That tech is too limited. Besides, parents should not be tracking their kids.”
L1ght is also different from hate speech and bully trackers that look for specific keywords as triggers to flag inappropriate content. Dictionaries are limited – for instance, if the word “purple” gets blacklisted, people could type variations such as “prpl” to get around the filter.
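The weakness described above is easy to demonstrate. The sketch below is my own illustration, not L1ght’s actual system: a naive blacklist filter misses the “prpl” variant, while even a simple vowel-stripping normalization catches it (though determined users will always find harder evasions).

```python
# Illustrative sketch (not L1ght's real filter): why plain keyword
# blacklists are trivial to evade, and how a simple normalization
# step catches obvious variants.

BLACKLIST = {"purple"}

def naive_filter(text: str) -> bool:
    """Flag text only if it contains a blacklisted word verbatim."""
    return any(word in BLACKLIST for word in text.lower().split())

def normalized_filter(text: str) -> bool:
    """Also compare vowel-stripped forms, so 'prpl' matches 'purple'."""
    def skeleton(word: str) -> str:
        return "".join(ch for ch in word.lower() if ch not in "aeiou")
    skeletons = {skeleton(w) for w in BLACKLIST}
    return any(skeleton(word) in skeletons for word in text.split())

print(naive_filter("i love purple"))       # True
print(naive_filter("i love prpl"))         # False -- evasion succeeds
print(normalized_filter("i love prpl"))    # True -- variant caught
```

This is exactly why, as the article notes, dictionary-based moderation is always one step behind its users.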
“We use 75 different micro-classifiers that check for different behavior – hate speech, shaming, white supremacy, bullying,” Levkovitz explains. “One micro-classifier will check if there is nudity, another will check the age of the participant, another will look for sexual activity.”
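The micro-classifier architecture Levkovitz describes can be sketched as follows. This is a hypothetical illustration: the classifier names, stub logic and threshold are my own stand-ins, not L1ght’s 75 real models.

```python
# Hypothetical sketch of the micro-classifier idea: each small
# classifier scores one behavior, and the results are combined into
# an overall decision. The classifiers below are stand-in stubs.

from typing import Callable, Dict

# Each micro-classifier maps a message to a score in [0, 1].
MicroClassifier = Callable[[str], float]

def hate_speech(msg: str) -> float:
    return 1.0 if "hate" in msg.lower() else 0.0   # stub, not a real model

def shaming(msg: str) -> float:
    return 1.0 if "loser" in msg.lower() else 0.0  # stub, not a real model

CLASSIFIERS: Dict[str, MicroClassifier] = {
    "hate_speech": hate_speech,
    "shaming": shaming,
}

def score_message(msg: str) -> Dict[str, float]:
    """Run every micro-classifier and return per-behavior scores."""
    return {name: clf(msg) for name, clf in CLASSIFIERS.items()}

def is_toxic(msg: str, threshold: float = 0.5) -> bool:
    """Flag the message if any single behavior exceeds the threshold."""
    return any(score > threshold for score in score_message(msg).values())

print(is_toxic("you're such a loser"))  # True
print(is_toxic("see you at practice"))  # False
```

The design point is that many narrow detectors are easier to tune and audit than one monolithic “toxicity” model.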
A safety belt
Words are not enough, Levkovitz stresses. If a kid writes, “I want to kill you” on Facebook, that’s flagged. The same text on a gaming platform might have a much more benign meaning.
Context is also important. A child writing “I have a big exam on Sunday, I want to kill myself” would be treated with less alarm.
Profanity is not always a give-away, either. “If a pedophile talks to a child, there will not be any profanity,” Levkovitz says. “But over time you can spot an attitude. For example, 10-year-olds will never ask ‘What time are your parents coming home?’ or ‘Where are your parents working?’”
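The grooming pattern Levkovitz describes can be made concrete with a toy example of my own construction (again, not L1ght’s method): no single message contains profanity, but probing questions accumulated across a conversation raise a flag over time.

```python
# Toy illustration: risk accumulates over a whole conversation, even
# when each individual message looks innocuous. Phrases are examples
# drawn from the article, not a real grooming-detection lexicon.

PROBING_PHRASES = [
    "when are your parents coming home",
    "where are your parents working",
    "don't tell anyone",
]

def conversation_risk(messages: list[str]) -> int:
    """Count probing phrases across the whole conversation."""
    return sum(
        phrase in msg.lower()
        for msg in messages
        for phrase in PROBING_PHRASES
    )

chat = [
    "hey, nice game today",
    "When are your parents coming home?",
    "ok cool. don't tell anyone we talked",
]
print(conversation_risk(chat))  # 2 -- pattern emerges over time
```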
L1ght is focusing on service providers as its first customers. “It’s like buying a car,” Levkovitz explains. “There’s a seat belt already in the car. No one expects you to install it yourself. So, we say, let’s put a safety belt in the application itself.”
L1ght’s algorithms can be run from the cloud or, if a customer is big enough, installed on premises. L1ght has a separate version for law enforcement agencies – it’s a standalone machine that’s not connected to the Internet for safety reasons, Levkovitz notes.
Police agencies have been among L1ght’s first customers, and L1ght is now talking with the major social networks and search engines. Levkovitz declines to give specifics.
When inappropriate content is flagged, L1ght sends a message to its customer, which then must decide what action to take. L1ght has no access to the data. “We cannot approach the law agencies ourselves,” Levkovitz points out.
Why would a company like Facebook be interested in L1ght’s tools now when it previously told Levkovitz it wasn’t interested? After all, social media companies are designed to scale rapidly, which doesn’t go hand-in-hand with offering child safety.
“When we started 18 months ago, the biggest problem was that toxicity was free,” he tells ISRAEL21c.
But then some countries in Europe passed laws fining social networks and game companies $1 million per violation, particularly for pedophilia content. Money talks.
In the United States, a similar discussion is taking place around changing Section 230 of the Communications Decency Act, which would strip liability protection from Internet sites. The cost of doing business without protection against inappropriate content would jump.
L1ght won’t be free, but “it will be much less than the fine,” Levkovitz says.
It’s not just fear of fines that’s making this L1ght’s moment. High-profile boycotts of Facebook by advertisers such as Unilever and Coke in the shadow of increased sensitivity to racism and hate speech online “are costing them a lot of money.”
And then there are some companies that just want to do the right thing, Levkovitz says.
Covid-19 has played its part. During lockdowns, when the world was forced to interact primarily online, there was a 900% increase in hate speech on Twitter according to one report (much of that directed at the Chinese) and a 300% increase in violence, Levkovitz notes.
Just as alarming: “there is an increasing amount of anti-Semitic and racist comments.” Among teens in chats, hate speech is up 70%.
Policing chat groups
Before the pandemic, bullied kids could turn to adults at school to protect them. Since the arrival of coronavirus and distance learning, kids who rely solely on social media have effectively become isolated, Levkovitz says.
And although it’s impossible to tie suicide rates to hate speech, anecdotally from speaking with hospitals, Levkovitz says, he’s noted that “about 80% of kids who come after a suicide attempt experienced bullying or hate online.”
If the customer wants, L1ght can send a command not to a human moderator but directly to the system to block objectionable content. “We have the technology. We can eliminate bad words or take out parts of a chat,” Levkovitz says.
L1ght scored its first major success in 2018 when it ran its algorithms on WhatsApp’s public groups and found pedophiles numbering in the six figures. WhatsApp subsequently suspended the accounts of 130,000 users, according to a report in the Financial Times.
The triumph was fleeting, though. “The pedophiles didn’t stop. They just opened other groups,” Levkovitz says.
That’s a problem with chat apps that use end-to-end encryption – there’s no way to see anything if the conversation is protected this way. “We believe that over time, any responsible social network will remove encryption for chats between teens. India is already talking about this,” Levkovitz says.
Unfortunately, apps like Telegram, which are promoted as impervious to outsiders, will comply neither with L1ght’s suggestions nor with law enforcement requests.
“There’s no need to go to the ‘dark web’ anymore,” Levkovitz bemoans. “You can just use Telegram or WhatsApp for encrypted messages. You can find pedophilia on Google and Bing.” Levkovitz has forbidden his children, aged 12, 8 and 5, from using apps like Telegram.
How about Zoom bombing, where hackers jump into a Zoom call to bombard it with inappropriate content? L1ght could address that – if Zoom is interested.
An algorithm saving lives
Levkovitz lived for 20 years in Silicon Valley but returned to Israel two years ago. He is now based – like L1ght – in Tel Aviv.
“Even when I was in Silicon Valley, all of my companies had research and development centers in Israel. I used to fly here almost every month. Now we’re in the same place. It’s easier in many ways.”
Levkovitz is something of a local celebrity, serving as a judge on the Israeli version of the reality TV show “Shark Tank.”
L1ght employs 45 people, most of them in Israel, with small offices in Boston and San Francisco.
Levkovitz says his proudest moment came after L1ght’s first commercial installation, at a police station six months ago.
“We installed it on a Monday morning. By Wednesday morning, we arrested our first pedophile. It was the highlight of my career. I could see that my algorithm is actually saving lives.”