Google's cutting-edge AI technology has a familiar connection to the past — and in this case, that isn't a good thing.

Allow me to paint a purely hypothetical picture for you — for, erm, no reason in particular. Imagine the following conversation about a groundbreaking new product someone is really excited to show you. I'll take the liberty of playing the role of you and responding as we go:

"Hey, look, this new magic instant dictionary is super convenient and easy to use!"

Oh, nice — so it gives you correct definitions and everything?

"Well, some of the time."

Huh. So sometimes it just doesn't tell you things?

"No, it always answers. It's just wrong with every fourth answer or so."

Weird. But you can tell when it's wrong, at least?

"No, it still says the answer really confidently, so you assume it's right. You just have to double-check its answers in a regular dictionary to be sure. Or, you know, just accept that you're going to have wrong definitions sometimes."

Isn't that…a problem?

"I mentioned how convenient and easy it is to use, right?"

Aaaand, scene.

[ Related: The irony of Google's Pixel 9 AI gamble ]

You can probably sense where I'm going with this by now. But somehow, so many people don't seem to see this in the context of our current obsession with AI technology — specifically, the large-language-model variety that's been all the rage since ChatGPT seeped into the public consciousness last year and Google got gung-ho about getting its Gemini equivalent out into the world.

My friend, we really need to talk.

[Cut through the hype with my free Android Intelligence newsletter. Three new things to know and try every Friday — straight from my keyboard to your inbox.]

The awkward asterisk with Google Gemini

Look — I'm no Luddite. I love geeky goodies more than most and get embarrassingly excited about new tech advancements. But for me, personally, it's the practical application of a new innovation that's most interesting and important. Tech for tech's sake is neat, sure, but the best kind of technology is the kind that actually solves a relevant problem and makes our lives easier in some meaningful, even if minuscule, way.

And let's be brutally honest for a second: Google's Gemini system is not that technology. Not in its current form, anyhow — nor in the way Google is scrambling to cram it into every possible nook and cranny and have it act as the end-all answer for every imaginable tech purpose.

What's most frustrating of all is how few people — including, most of all, Google itself and the other companies pushing similar sorts of systems — are willing or able to acknowledge this.

The reality, though, is that large-language models like Gemini and ChatGPT are wildly impressive at a very small set of specific, limited tasks. They work wonders when it comes to unambiguous data processing, text summarizing, and other low-level, closely defined, and clearly objective chores. That's great! They're an incredible new asset for those sorts of purposes.

But everyone in the tech industry seems to be clamoring to brush aside an extremely real asterisk to that — the fact that Gemini, ChatGPT, and other such systems simply don't belong everywhere. They aren't at all reliable as "creative" tools or as tools intended to parse information and provide specific, factual answers.
And we, as actual human users of the services associated with this stuff, don't need this type of technology everywhere — and might even be actively harmed by having it forced into so many places where it doesn't genuinely belong.

That brings us back to our magic dictionary example from a minute ago. Would anyone in their right mind actually think that sounds like an appealing or advantageous real-world upgrade?

Of course not. It's patently absurd — no two ways about it.

And yet, that's exactly the scenario Gemini-style AI tools are offering us as both virtual assistants and all-purpose search centers. For some reason, though — a pretty obvious reason, one might contend — the companies behind them are downplaying that reality as much as possible and trying to convince us that it's somehow all fine.

News flash: It isn't.

The Gemini reliability problem

Traditionally, tech teams have operated under the philosophy that something has to be damn near 100% reliable if it's gonna be effective and accepted by the masses. It's a lofty standard, but it makes a lot of sense: If you know something is going to give you a wrong answer or fail at what you need it to do even one out of every 10 times, you aren't going to be able to rely on it. You'll get frustrated with it quite quickly. And you'll ultimately stop using it.

Anecdotally speaking, it seems safe to say that Gemini and its contemporaries get things wrong much more often than that. Based on my own experiences and those I've heard from other folks, I'd say we'd be generous to claim they're right and reliable with high-quality answers, info, and output even 70% of the time.

But the worst part is that when they can't complete a task confidently, they don't give you an error or tell you they're unable to finish. They make something up and serve you incorrect information — just like our magic dictionary from a moment ago.

It'd be completely comical if it weren't for the fact that companies like Google are pretending this isn't a problem and pushing these systems toward taking over as our phones' virtual assistants and the brains behind our online searches.

To be clear, it's not that they're somehow oblivious to this disconnect. All of these companies are covering themselves legally: Look closely, and you'll see a fine-print disclaimer beneath every AI system telling you that the system makes mistakes and that the onus is on you to double-check everything it tells you to confirm it's correct.

Erm, right. So you can rely on these systems for information — but then you need to go search somewhere else to see if they're making something up? In that case, wouldn't it be faster and more effective to, I don't know, simply look it up yourself in the first place? Maybe using the types of tools we had before these groundbreaking innovations came our way?

Even when you limit these systems to a small subset of specifically supplied documents or web pages, the results are wildly unpredictable. That's been my experience with Google's AI-powered NotebookLM service, which lets you upload your own private documents and ask questions about the associated data. I've tried inputting a bunch of my extremely cut-and-dried Android Upgrade Report Card data into the system and then asking it questions about that data, and it's returned fabricated, laughably inaccurate answers with an astounding degree of confidence.
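To put some rough numbers on that broader reliability point, here's a quick back-of-the-envelope sketch in Python. The per-answer accuracy figures and the assumption that each answer is independently right or wrong are purely illustrative on my part, not measurements of any actual model, but they show how fast "mostly right" decays once you ask more than a question or two:

```python
# Illustrative only: assumes each answer is independently correct with
# probability p, a simplification rather than a measured property of
# Gemini or any other model.

def clean_session_odds(p: float, n: int) -> float:
    """Probability that all n answers in a session are correct."""
    return p ** n

for p in (0.99, 0.90, 0.70):
    odds = clean_session_odds(p, n=10)
    print(f"{p:.0%} per-answer accuracy -> "
          f"{odds:.0%} chance of an error-free 10-question session")
```

Run those numbers and the pattern is stark: at 99% per-answer accuracy, roughly nine out of every 10 ten-question sessions come back clean. At the generous 70% figure, fewer than three in 100 do. That's the gap between "damn near 100% reliable" and what we're currently being handed.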
It's not just me, either — or just Gemini, for that matter.

The Verge Editor-in-Chief Nilay Patel shared an experience this week of asking the latest ChatGPT model to summarize an interview he'd done with Google's CEO — and, as he observed, it returned "a full hallucination complete with citations to things [they] did not talk about at all."

Hell, even during its closely controlled on-stage presentation at Google I/O a week ago, Google featured factually inaccurate answers in a deliberate demo of Gemini's info-providing prowess.

How is any of this okay? The answer is simple: It isn't. And that brings us to the bigger issue here.

Gemini's minuses — and pluses

A Google UX design veteran who recently left the company shared some pointed words about this subject on LinkedIn earlier this week, saying that the AI projects he worked on within Google were "poorly motivated and driven by this panic that as long as it had 'AI' in it, it would be great":

This myopia is NOT something driven by a user need. It is a stone cold panic that they are getting left behind.

He went on to draw a parallel between the current situation with Gemini and the similar sort of "all in" sentiment around Google+ 13 years ago — when Google panicked about Facebook's then-rapid rise and the threat it posed to its business around the way people sought out information.

I was one of the few freaks who actually appreciated Google+, but there's no denying the frenzy around it was an ill-advised overreaction to an external factor. It was a new religion — a "this defines us and everything we do from this moment forward" pivot. And while G+ itself had its good qualities, it was ultimately that determination within Google to force it into every possible corner — whether or not it belonged there and whether its presence was a positive upgrade or a practical downgrade for Google's users — that doomed it from day one.

Beyond that, Google+ just wasn't the answer to a problem. It was a solution in search of a problem to solve.

And, call me crazy, but that manner of thinking is starting to feel awfully familiar again. The main difference is that the stakes are substantially higher this time — with these generative AI systems actively serving up misinformation and threatening to exterminate the very industries they depend upon to exist.

The question we all have to ask ourselves is whether we really want to accept this new "magic dictionary" that feeds us alarmingly inaccurate information alongside the occasionally convenient results. Google and the other companies chasing this AI fantasy are desperate to have us see these systems as a life-changing leap forward, but it's critically important for us to remain aware of the very real minuses that come with this latest shiny plus.

Check out my free Android Intelligence newsletter for three things to know and try in your inbox every Friday — a zesty blend of practical tips and plain-English perspective on all the juiciest Googley news.