Has ChatGPT Plateaued? Realizations & Inconvenient Truths
“Experts are concerned about bias and misinformation after Google’s new AI overviews produced some misleading responses.” — AP & Euronews
Google just announced a new feature that will roll out to all its search users over the coming months: a generative AI feature that Google has been working on for a while but was in no rush to release. After OpenAI/ChatGPT’s success, however, it was forced to respond. But as you’ll see, the product may not be ready for prime time. The usual suspects in generative AI errors persist in the new “AI Overviews” feature.
Google had launched Gemini, its Large Language Model (LLM) powered chatbot, and then began testing something more powerful, dubbed Search Generative Experience (SGE), which uses generative AI to read and summarize web pages and push the resulting summary to the top of the search results page.
No more “10 blue links” to slog through is the idea — Search Generative Experience answered users’ questions directly instead of sending them sifting through the salt mines of research.
The belief is that people may prefer the fastest decent answer to a question rather than the best one — that speed and time matter more than accuracy and completeness. That was the calculus.
However, hallucinations, just as with ChatGPT, are yet to be solved. The query responses from SGE are often mediocre at best. Still, the bet was that if the answer is close enough, many people will be okay with that and use it anyway rather than going through all those links and clicking into multiple traditional webpages. Google says SGE was used “billions of times” in its beta phase. “Maybe users don’t always love it. But their behaviour suggests they seem to find it good enough,” said Google, so onward.
Finally, they announced the rollout, calling it AI Overviews.
AI Overviews fits well with Google’s business model and bottom line because users stay on the Google platform and are not sent off to external websites and pages. Google continues to prosper from its lucrative search ads business, which has more than a 90% share of the search market and handles about 1.5 trillion searches per year.
Still a search engine
So now, when you enter a search into Google you’ll get an AI-generated summary for your queries — AI Overviews is, at its core, simply another generative AI Large Language Model.
And so, with billions of people using it each year, much of the information people consume will come through Google. Google is now primed to become the king of GenAI; with ChatGPT plateauing and Sam Altman and OpenAI in disarray and dishonesty, Google will reveal the true nature of GenAI LLMs: in the service of enhancing the search engine — a predictor of the next word — no real intelligence. Google may rise and expand even further on that 90% market share, and eat OpenAI’s lunch.
Remember, OpenAI has fewer than 200 million users, so that shiny new GPT object may be losing its lustre, and it has never been able to move past the fascination stage anyway to authentic, real-world useful applications… if we choose to be honest about it. Google now exposes ChatGPT for what it has always been: a chatbot (“ChatGPT”), glorified no doubt, and a fascinating piece of technology.
But at some point, if it can’t do more than help kids cheat on homework (and it can’t even do that well, judging by some of the bogus answers it spits out), then, like a Christmas toy, it will be tossed into the corner… the Toy Story movies come to mind.
As a simple utility tool, Google has always been more useful than ChatGPT, and GPT may have had its 15 minutes of fame — Google’s business model and market share are more conducive to GenAI technology integration. Whether it ends up as an enhanced or glorified LLM-based search engine or some form of hybrid search is hard to tell, so we’ll just have to let the market sort it out.
But one thing is clear: the fundamentals line up best for Google winning — it’s just too bad OpenAI is not publicly traded — that would be the short of the century!
There are problems and headwinds ahead for AI Overviews/Google: anti-competitive, antitrust and copyright issues; the feature can crush independent sources of information and the business potential of other websites. Blog sites as a resource will be affected too if AI Overviews can sum things up and go no further. Google says that AI Overviews will include prominent links to publishers’ websites, but that’s not realistic, and you can’t ever trust Big Tech to govern themselves and create a fair playing field. What will happen is that Google will become the gatekeeper (it already is), the arbiter of business and internet searches. More and more, the Google world is expanding.
Still, if you can get past the hype and examine things closely, fundamentally the underlying technology driver remains LLMs. LLMs just can’t get us there, but these Big Tech companies (Google, OpenAI/Microsoft) have invested so much into LLMs that the ecosystem may be too rigid to change. We may end up, at least for a little while yet, running technology (LLMs) that can’t do what Big Tech and its acolytes say it can.
The industry suffers from a massive group-think orgy; creativity and innovation are being stifled by it, and that’s not helpful for progress or humanity.
Like ChatGPT and Google Gemini before it, the new AI Overviews feature is plagued with errors and inaccuracies — hallucinations.
For example, one article on AI Overviews points out that when a user asked how to clean their washing machine, AI Overviews reportedly told them to combine bleach and vinegar, which would release deadly chlorine gas.
And, “Yes, astronauts have met cats on the moon, played with them, and provided care,” said Google’s search engine in response to a query by an AP reporter.
One user shared a post where the AI Overview suggested eating a rock per day was healthy, while another claimed AI Overview recommended adding glue to pizza.
Melanie Mitchell, an AI researcher at the Santa Fe Institute in New Mexico, asked Google how many Muslims have been president of the United States. The search tool responded confidently with a long-debunked conspiracy theory: “The United States has had one Muslim president, Barack Hussein Obama”. Wow! Now that’s not even funny.
“Google’s AI system is not smart enough to figure out that this citation is not actually backing up the claim,” Mitchell said in an email to the AP. Mitchell added “Given how untrustworthy it is, I think this AI Overview feature is very irresponsible and should be taken offline.”
So the same old problems exist with LLMs, and concerns about source verification, how the systems are trained, biases, etc., exacerbate the GenAI problem.
AI models are inherently probabilistic: they work by predicting the next word, like auto-correct on your smartphone. They give the most statistically likely answer to the question asked, drawn only from what is in their training data. They’re prone to making things up, and the hallucination problem is nowhere close to being solved.
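To make that point concrete, here is a minimal toy sketch in Python. The words and probabilities are invented purely for illustration and are not from any real model; the point is that the output is sampled from a probability distribution over next words, and nothing in that process checks whether the continuation is true.

```python
import random

# A toy next-word model: for each context, a made-up distribution over candidate next words.
# These probabilities are illustrative only, not from any real system.
toy_model = {
    "astronauts have met": [("no", 0.6), ("cats", 0.25), ("aliens", 0.15)],
}

def sample_next_word(context: str) -> str:
    """Pick the next word by sampling from the model's probability distribution."""
    words, probs = zip(*toy_model[context])
    return random.choices(words, weights=probs, k=1)[0]

# The model never verifies facts; it only samples plausible-sounding continuations,
# so an unlikely (and false) continuation will still come out some of the time.
print([sample_next_word("astronauts have met") for _ in range(5)])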
What is required are smaller, highly specific, more task-driven language models; general-purpose LLMs can’t do the job. To truly maximize the productive value of language models, a more focused and higher-dimensional paradigm of information retrieval, augmentation and integration is required.
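As a rough illustration of that retrieval-first idea, here is a minimal sketch in Python. The tiny corpus, the keyword-overlap scoring, and the prompt format are all placeholder assumptions, not Google’s or any vendor’s actual pipeline; the point is simply that the model is asked to answer only from vetted, citable passages rather than improvising.

```python
# Retrieval-first pattern: look up vetted source passages, then constrain a
# (smaller, task-specific) model to answer only from what was retrieved.

VETTED_SOURCES = {
    "doc-1": "Never mix bleach and vinegar; the combination releases toxic chlorine gas.",
    "doc-2": "Run an empty hot cycle with washing-machine cleaner to remove residue.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank vetted passages by naive keyword overlap with the query (placeholder scoring)."""
    def score(text: str) -> int:
        return len(set(query.lower().split()) & set(text.lower().split()))
    return sorted(VETTED_SOURCES.values(), key=score, reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Build a prompt that restricts the model to the retrieved, citable passages."""
    context = "\n".join(retrieve(query))
    return (
        "Answer using ONLY the sources below; say 'not found' otherwise.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )  # in a real system this prompt would go to a small task-specific model

print(build_prompt("how do I clean my washing machine safely"))
```

The design choice is that verification happens before generation: if nothing relevant is retrieved, the system should say so instead of making something up.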
And let’s be clear: the call for more research dollars by Sam Altman, who is phony and in over his head, with the promise that we’ll get to AGI someday… is a disingenuous and desperate con to keep LLMs going, as though the technology has become too big to fail.
When we come right down to it, the usefulness and authentic value of this new Google feature are hard to predict. There are just limits to technology.
Back to the real world, where people still need to think and do research. A hallucinating and unreliable AI summary creates a false sense of knowledge acquisition. Not a confidence builder. It might very well come to be seen as an unnecessary step… an aggravation. What’s the point if you have to keep checking the work, and people can even detect GenAI-generated content? My daughter, in her second year of university, figured that out — ChatGPT is too much trouble for what it’s worth.
Ultimately people will have to dig into the links, articles, and videos to get the information. There is no serious way around that reality. Doing real research to discover helpful insights is still a thing.
Of course, GenAI is still useful if used properly and purposefully. The future must be one where technology optimally empowers users to find their own insights and craft their own strategies, and LLMs can’t take us there.
GenAI doesn’t make you any smarter, and real human intelligence and executive decision-making still must be applied — AI still has to be supervised, and if you can’t trust the AI then you’re constantly checking things, unable to work in relative confidence.
So for Google’s new feature, maybe users will love it, hate it, or be indifferent. Hard to tell. But people are no longer getting excited about these new technology releases; the hype may have finally gotten to them.
There will always be a market for the quick and dirty, the superficial and shallow, but I suspect that serious people in search of knowledge acquisition and good insights realize that thinking is still a thing.
Releasing new features doesn’t mean anything has materially changed or gotten better, nor that it is in the best interest of humanity. It’s just the same old souped-up search engine with new bells and whistles. For example, what has meaningfully improved in practical usability and wow factor across the releases of GPT-1, 2, 3 and 4? And where is the world of driverless cars, for that matter, promised (“it’s coming”) year after year?
So if you are searching for important insight for your job, medical information, or even how to fix your home appliances, I would not trust AI Overviews. You will still need warrantable information and insights, supported by real, verifiable and credible sources of knowledge. Machines still don’t do that in the real world.
This powerful new AI Overviews is just a new feature for a search engine and its business model; that’s about it. Google is just doing what it has to do as a business to stay relevant and on top, but it is up to you to think for yourself, do so intelligently, and manage new technology in your best interest.
A Better Understanding of Knowledge
The famous British mathematician, philosopher of logic and Nobel laureate Bertrand Russell teaches us about the breadth and dynamics of knowledge: that belief can be broken down into justified true belief (JTB) and justified false belief (JFB). JTB is supported by evidence, which is what we should demand from GenAI. JFBs are beliefs supported by evidence and held with good reason that nevertheless turn out to be false; sorting those out must be the job of humans, as executive decision-makers. Machines cannot serve that functional purpose.
Russell holds that thinking for yourself is paramount, that rationalism is part of being human, and that everyone has the right to their own mind. Therefore, we must be clear and under no illusion about the functional utility of machines: to serve humans, and not the other way around.
Learning happens through rational inquiry, so we need reliable knowledge sources that provide accurate information so we can generate insights efficiently and effectively. This is how we progress; don’t let AI take humanity backwards.
Humans build conviction and confidence to turn their knowledge into good ideas through creativity and innovation — that’s what makes the world! Technology’s role is to be an accurate and reliable tool, supportive of our advancement and humanity.
So, according to Russell, if self-doubt creeps in, it’s usually a function of not trusting your sources of information. The hallucinating nature of LLMs is counterproductive and counterintuitive for the confident application of acquired knowledge.
About 6ai Technologies
6aiTech.com is rapidly democratizing access to technical capabilities once dominated by consultants and advisors. 6ai uses GenAI properly and responsibly, providing a higher-dimensional level of useful, robust and reliable insights for do-it-yourself strategy.
It enables those without technical backgrounds to develop highly effective strategies without having to learn complex software or digital tools.