
Chatbots could one day replace search engines. Here’s why that’s a terrible idea.


Bender is not against using language models for question-answer exchanges in all cases. She has a Google Assistant in her kitchen, which she uses for converting units of measurement in a recipe. “There are times when it is super convenient to be able to use voice to get access to information,” she says.

But Shah and Bender also give a more troubling example that surfaced last year, when Google responded to the query “What is the ugliest language in India?” with the snippet “The answer is Kannada, a language spoken by around 40 million people in south India.”

No easy answers

There’s a dilemma here. Direct answers may be convenient, but they are also often incorrect, irrelevant, or offensive. They can hide the complexity of the real world, says Benno Stein at Bauhaus University in Weimar, Germany. In 2020, Stein and his colleagues Martin Potthast at Leipzig University and Matthias Hagen at Martin Luther University Halle-Wittenberg, Germany, published a paper highlighting the problems with direct answers. “The answer to most questions is ‘It depends,’” says Hagen. “This is difficult to get across to someone who is searching.”

Stein and his colleagues see search technologies as having moved from organizing and filtering information, through techniques such as providing a list of documents matching a search query, to making recommendations in the form of a single answer to a question. And they think that is a step too far. 

Again, the problem is not the limitations of existing technology. Even with perfect technology, we’d not get perfect answers, says Stein: “We don’t know what a good answer is because the world is complex, but we stop thinking that when we see these direct answers.”

Shah agrees. Providing people with a single answer can be problematic because the sources of that information and any disagreement between them are hidden, he says: “It really hinges on us completely trusting these systems.”

Shah and Bender suggest a number of solutions to the problems they anticipate. In general, search technologies should support the various ways that people use search engines today, many of which are not served by direct answers. People often use search to explore topics that they may not even have specific questions about, says Shah. In this case, simply offering a list of documents would be more useful. 

It must be clear where information comes from, especially if an AI is drawing pieces from more than one source. Some voice assistants already do this, prefacing an answer with “Here’s what I found on Wikipedia,” for example. Future search tools should also have the ability to say “That’s a dumb question,” says Shah. This would help the technology avoid parroting offensive or biased premises in a query.

Stein suggests that AI-based search engines could present reasons for their answers, giving pros and cons of different viewpoints.

However, many of these suggestions simply highlight the dilemma that Stein and his colleagues identified. Anything that reduces convenience will be less attractive to the majority of users. “If you don’t click through to the second page of Google results, you won’t want to read different arguments,” says Stein.

Google says it is aware of many of the issues that these researchers raise and works hard to develop technology that people find useful. But Google is the developer of a multibillion-dollar service. Ultimately, it will build the tools that bring in the most people. 

Stein hopes that it won’t all hinge on convenience. “Search is so important for us, for society,” he says.
