Google criticized as AI Overview makes obvious errors, such as saying former President Obama is Muslim

Technology
Monday, May 27th, 2024 2:43 pm EDT

Key Points

  • Criticism of AI Overview: Google’s newly introduced “AI Overview” in Google Search has faced significant public criticism due to its frequent delivery of nonsensical and inaccurate results, with users having no option to opt out. Examples include incorrect statements about U.S. presidents, unsafe health advice, and bizarre suggestions like adding glue to pizza sauce.
  • Broader Context and Market Competition: The AI Overview rollout is part of a broader trend among tech giants like Google, Microsoft, and OpenAI, who are in a competitive race to integrate generative AI into their products. This market is projected to reach over $1 trillion in revenue within a decade. Despite the criticism, Google asserts that most AI Overviews provide high-quality information, attributing errors to uncommon or doctored queries.
  • Past Issues and Ongoing Challenges: This controversy follows Google’s problematic release of its Gemini image-generation tool, which faced backlash for producing historically inaccurate and culturally insensitive images. The recurring issues with AI tools have sparked debates over AI ethics and highlighted the challenges of deploying reliable AI technologies, with Google facing internal and external scrutiny over its handling of AI ethics and the rushed rollout of AI products like Bard.

Less than two weeks after the debut of Google’s “AI Overview” feature in Google Search, public criticism has surged because the tool returns nonsensical or inaccurate results and offers users no way to opt out. The AI Overview provides a summarized answer at the top of the search results, synthesized from various web sources. However, social media users have shared numerous screenshots showing the AI offering incorrect or controversial responses. For instance, when asked how many Muslim presidents the U.S. has had, it inaccurately claimed Barack Obama was Muslim. In another example, drawing on an old Reddit comment, it suggested adding non-toxic glue to pizza sauce to keep cheese from sliding off.

Attribution issues are also prevalent, with the AI Overview falsely crediting information to medical professionals or scientists. For example, it dangerously advised, citing WebMD, that staring at the sun for 5-15 minutes is safe for health, and it misattributed dietary advice about eating rocks to UC Berkeley geologists. It has also produced basic factual inaccuracies, such as misidentifying fruits or historical dates.

The rollout of AI Overview is part of a broader trend where companies like Google, Microsoft, and OpenAI are racing to incorporate generative AI into their products, aiming to stay competitive in a market projected to exceed $1 trillion in revenue within a decade. Google’s new feature aims to add assistant-like planning capabilities to search, such as generating meal plans.

Despite the backlash, Google maintains that the majority of AI Overviews deliver high-quality information with links for deeper exploration. A Google spokesperson stated that most examples of errors involve uncommon queries, some of which were doctored or unreplicable. The company claims extensive testing preceded the feature’s launch, and it is addressing issues promptly under its content policies.

This controversy follows Google’s troubled February release of its Gemini image-generation tool, which drew criticism for producing historically inaccurate and questionable images in response to specific historical queries. For example, it depicted a racially diverse group of soldiers for a query about 1943 German soldiers and a racially diverse medieval British king. Following these issues, Google paused the image-generation feature and promised to re-release an improved version.

These problems have sparked debates about AI ethics, with some criticizing Google for either being too left-leaning or not investing sufficiently in ethical AI development. Google’s AI ethics group faced internal turmoil in 2020 and 2021 after its co-leads were dismissed following their publication of a critical paper on AI model risks. The company was also criticized for the hasty rollout of Bard, which followed the rapid rise of ChatGPT.

Overall, Google’s attempts to integrate advanced AI features have faced significant hurdles, from AI Overview’s factual inaccuracies to Gemini’s problematic image outputs, reflecting broader challenges in deploying reliable and ethical AI technologies at scale.

For the full original article on CNBC, please click here: https://www.cnbc.com/2024/05/24/google-criticized-as-ai-overview-makes-errors-like-saying-president-obama-is-muslim.html