Google has rolled out its latest experimental search feature on Chrome, Firefox and the Google app browser to hundreds of millions of users. "AI Overviews" saves you clicking on links by using generative AI (the same technology that powers rival product ChatGPT) to provide summaries of the search results.

Ask "how to keep bananas fresh for longer" and it uses AI to generate a useful summary of tips, such as storing them in a cool, dark place and away from other fruits like apples.

But ask it a left-field question and the results can be disastrous, or even dangerous. Google is currently scrambling to fix these problems one by one, but it is a PR disaster for the search giant and a challenging game of whack-a-mole.

AI Overviews helpfully tells you that "Whack-A-Mole is a classic arcade game where players use a mallet to hit moles that pop up at random for points. The game was invented in Japan in 1975 by the amusement manufacturer TOGO and was originally called Mogura Taiji or Mogura Tataki."

But AI Overviews also tells you that "astronauts have met cats on the moon, played with them, and provided care". More worryingly, it also recommends that "you should eat at least one small rock per day" because "rocks are a vital source of minerals and vitamins", and suggests adding glue to pizza toppings.

Why is this happening?

One fundamental problem is that generative AI tools don't know what is true, just what is popular. For example, there aren't a lot of articles on the web about eating rocks, as it is so self-evidently a bad idea. There is, however, a well-read satirical article from The Onion about eating rocks. And so Google's AI based its summary on what was popular, not what was true.

Another problem is that generative AI tools don't have our values. They're trained on a large chunk of the web. And while sophisticated techniques (which go by exotic names such as "reinforcement learning from human feedback", or RLHF) are used to eliminate the worst, it is unsurprising that they reflect some of the biases, conspiracy theories and worse to be found on the web. Indeed, I am always amazed at how polite and well-behaved AI chatbots are, given what they're trained on.

Is this the future of search?

If this is really the future of search, then we're in for a bumpy ride. Google is, of course, playing catch-up with OpenAI and Microsoft. The financial incentives to lead the AI race are immense, and Google is therefore being less prudent than in the past in pushing the technology out into users' hands.

In 2023, Google chief executive Sundar Pichai said: "We've been cautious. There are areas where we've chosen not to be the first to put a product out. We've set up good structures around responsible AI. You will continue to see us take our time."

That no longer appears to be true, as Google responds to criticism that it has become a large and lethargic competitor.

[Image: Google's AI Overviews may damage the tech giant's reputation for providing reliable results. Image by Google / The Conversation]