Published: Friday, May 24, 2024
Artificial intelligence generates an instant response, which may or may not be correct.
Google’s newly retooled search engine responded to a question from an Associated Press reporter by saying, “Yes, astronauts have met cats on the moon, played with them and provided care.”
It added: “For example, Neil Armstrong said, ‘One small step for man’ because it was a cat’s step. Buzz Aldrin also deployed cats on the Apollo 11 mission.”
None of this is true. Since Google launched its AI Overviews this month, social media has been flooded with similar errors, some humorous, others harmful falsehoods.
Experts have expressed alarm at the new feature, warning that it could entrench bias and misinformation, and endanger people seeking help in an emergency.
When Melanie Mitchell, an artificial intelligence researcher at the Santa Fe Institute in New Mexico, asked Google how many Muslims have been president of the United States, it responded confidently with a long-debunked conspiracy theory: “The United States has had one Muslim president, Barack Hussein Obama.”
Mitchell said the summary backed up the claim by citing a chapter from an academic book written by historians. But the chapter did not make the bogus statement; it merely referred to the false theory.
“Google’s AI is not intelligent enough to realize that this citation does not support the claim,” Mitchell wrote in an email to the AP. “Given that it’s untrustworthy, I think this AI Overview feature should be taken offline.”
Google said in a statement Friday that it is taking “swift action” to correct errors, such as the Obama falsehood, that violate its policies, and that it will use those examples to “develop broader improvements,” some of which are already rolling out. Google says that in the vast majority of cases the system works as intended, thanks to extensive testing before the public release.
“The vast majority of AI Overviews provide high-quality information, with links to dig deeper on the web,” the company said. Many of the circulating examples were uncommon queries, it added, and some had been doctored or could not be reproduced.
Errors made by AI language models are difficult to reproduce, in part because randomness is inherent to how they work. The models predict the words most likely to answer a question based on the data they were trained on, which makes them prone to making things up, a widely studied problem known as hallucination.
The AP put a series of questions to Google’s AI and shared the answers with experts in the relevant fields. Asked what to do about a snakebite, Google’s answer was “impressively thorough,” said Robert Espinoza, a biology professor at California State University, Northridge, and president of the American Society of Ichthyologists and Herpetologists.
But when people bring an urgent question to Google, even a small error in the answer can be a problem.
“The more stressed or hurried you are, the more likely you are to just accept the first answer that comes back,” said Emily M. Bender, a professor of linguistics and director of the University of Washington’s Computational Linguistics Laboratory. “And in some cases, those can be life-critical situations.”
Bender has been warning Google about this for years. In response to a 2021 Google paper called “Rethinking Search,” which proposed that AI language models could serve as “domain experts” capable of answering questions authoritatively, much as they do now, Bender and her colleague Chirag Shah published a rebuttal.
Such AI systems, they warned, could perpetuate the racism and sexism found in the vast troves of data they are trained on.
“The problem with that kind of misinformation is that we’re swimming in it,” Bender said. People are likely to have their biases confirmed, and misinformation is harder to spot when it reinforces what you already believe.
A deeper concern was that ceding information retrieval to chatbots would degrade the serendipity and human element of searching for knowledge online, our ability to evaluate what we see, and the value of connecting with other people in forums.
Those forums and other websites count on Google sending people their way, but AI Overviews threaten to disrupt the flow of money-making internet traffic.
Google’s competitors have also been closely following the reaction. The company has faced pressure for more than a year to deliver more AI features as it competes with ChatGPT maker OpenAI and newcomers such as Perplexity AI.
“It seems like this was rushed out by Google,” said Dmitry Shevelenko, Perplexity’s chief business officer. “There are just a lot of unforced errors in the quality.”