Abigail Klein Leichman

Google recently restricted its new AI Overviews feature after the search engine gave users “odd, inaccurate or unhelpful” answers – such as advising them to eat rocks or put glue on their pizza, and claiming that Barack Obama is Muslim.

As generative AI goes multimodal — OpenAI’s new GPT-4o model handles any combination of text, audio, video and images — the potential for missteps and misinformation grows.

These missteps include hallucinations, incorrect responses, compliance violations and jailbreak attempts.

Tel Aviv- and San Francisco-based Aporia has launched a multimodal AI guardrails solution that addresses these issues for the first time, enhancing the safety and reliability of multimodal AI.

The new Guardrails give engineers the ability to add a layer of security and control between the app and the user. The system detects and mitigates 94% of hallucinations before they reach users via chat, audio or video.

“Multimodal AI is a gamechanger for the world we live in, but one that requires guardrails to ensure its safety, success and ultimate adoption,” said Liran Hason, CEO and cofounder of Aporia.

“Industries across the globe are coming to rely on AI, yet as many engineers are discovering, AI by itself is inherently unreliable,” Hason added.

“Customer service agents are quickly being replaced with AI, but imagine what would happen without the human element in between AI and the end-user. As we have seen, disastrous accidents can occur quickly. Aporia Guardrails for Multimodal AI is the first solution to actively mitigate spoken and written responses in real time and support the human in the loop.”

Aporia’s solution also prevents the misuse of applications for malicious purposes such as prompt injections or prompt leakage, which can lead to the exposure of sensitive information. It can prevent explicit and offensive language in user interactions, identifying and blocking inappropriate wording and phrasing.

“With the release of multimodal applications, we knew we had to create a solution to protect emerging AI apps,” said Alon Gubkin, cofounder and CTO of Aporia.

“At Aporia, we believe continuous research into risk and prevention measures must go hand-in-hand with AI development. Keeping AI safe is our main objective, which is why we are committed to developing solutions, like our Guardrails for Multimodal AI Applications, that allow AI engineers to reap all the benefits of this world-changing technology.”

Aporia is recognized as a Technology Pioneer by the World Economic Forum for its mission of driving Responsible AI. Clients include Bosch, Lemonade, Levi’s, Munich RE, and Sixt.
