A developer tests the Maestro system.
by Tracy Staedter
In an attempt to explore the safety issues surrounding Internet navigation while behind the wheel, Meirav Taieb-Maimon, a faculty member at Ben-Gurion University of the Negev in Israel, designed a voice-activated search engine.
The system allows drivers to dictate a query and navigate the results while keeping their hands on the wheel.
“We wanted to study the effects of a voice-based search engine on drivers’ distraction while driving,” said Taieb-Maimon.
The system is made of three basic elements: two off-the-shelf speech components from Microsoft — one for speech recognition, one for text-to-speech — and a custom piece of software called “Maestro,” which orchestrates the movement of speech to text, text to Internet search, and results back to speech.
Let’s say a person wants to find a restaurant in Manhattan that has gotten good reviews. First, she would dictate her query by saying, “Restaurants New York City.”
Maestro triggers the speech recognition software to convert the speech to text and then delivers it to a so-called “query builder,” which puts the request in language a search engine such as Google can understand.
The query builder returns the query to Maestro, which then delivers it to a search engine.
When the results come back, Maestro sends them to the text-to-speech component for the driver to hear.
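The round trip described above can be sketched in a few lines of Python. This is only an illustration of the orchestration idea, not Maestro's actual code: every function name here (recognize_speech, build_query, web_search, speak) is a hypothetical stand-in for the Microsoft speech components and the search back end, whose real interfaces the article does not describe.

```python
def recognize_speech(audio):
    # Stand-in for the speech-recognition component: audio in, text out.
    return "restaurants new york city"

def build_query(text):
    # "Query builder": turn the dictated text into a form a search
    # engine such as Google can understand (here, a simple query string).
    return "+".join(text.split())

def web_search(query):
    # Stand-in for the search engine; returns a list of result titles.
    return ["Restaurant A", "Restaurant B"]

def speak(text):
    # Stand-in for the text-to-speech component.
    print(text)

def maestro(audio):
    """Orchestrate speech -> text -> query -> search -> speech."""
    text = recognize_speech(audio)
    query = build_query(text)
    results = web_search(query)
    for title in results:
        speak(title)
    return results
```

The point of the sketch is the shape of the loop: Maestro itself does no recognition or searching, it only moves data between the components.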
The system does not read out a long list of Web addresses; rather, it automatically organizes the results into menu choices — for example, “one” for location, “two” for price, or “three” for reviews — that the driver navigates until reaching the desired Web site.
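That menu-driven navigation might look something like the following sketch. The category names and the grouping of results are assumptions taken from the article's example; the real system's menu structure is not specified.

```python
# Hypothetical menu mapping, following the article's example:
# spoken "one" -> location, "two" -> price, "three" -> reviews.
MENU = {
    "one": "location",
    "two": "price",
    "three": "reviews",
}

def choose(spoken_number, results_by_category):
    """Map a spoken menu number to the matching group of results."""
    category = MENU.get(spoken_number)
    if category is None:
        return None  # unrecognized choice; the system would re-prompt
    return results_by_category.get(category, [])

# Illustrative results, grouped by category rather than raw URLs.
results = {
    "location": ["Midtown", "SoHo"],
    "price": ["$$", "$$$"],
    "reviews": ["4.5 stars", "4.0 stars"],
}
```

Grouping results into a small spoken menu, rather than reading raw addresses, is what keeps the interaction navigable by voice alone.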
“It sounds like these designers are closer to a conventional interface with a menu-driven system,” said Richard Stern, professor of electrical engineering and computer science at Carnegie Mellon University, who for the last 20 years has been involved with automatic speech recognition and language systems.
Conventional interaction is the kind people find when they call an airline to check on a flight arrival, for example, said Stern. On the other end is an interaction more similar to talking with a human.
“The ideal design is one that begins with a human-initiated interaction and has a machine that knows when the user is getting into trouble and knows when to help out,” Stern said. “But you have to start somewhere.”
Taieb-Maimon would like to see more safety tests conducted before such a system finds its way into automobiles.
Currently, she is preparing a study that will compare driver distraction under three conditions: using the Maestro system, driving without the voice-activated search function, and conversing with a passenger.