AGI Is Not Happening
Perry C. Douglas
July 2, 2024
ChatGPT 1, 2, 3, 4….19…LLMs are not going to get meaningfully better, and AGI is a contrived narrative designed to keep people believing in large language models (LLMs). The evolving narrative now coming from Big Tech and the new tech aristocracy is a strategy of distraction and deception: LLMs just need more time and data to scale, and then we’ll get to the utopian world of artificial general intelligence (AGI).
But the “scaling” narrative, along with more iteration, won’t change the fundamental laws of nature and mathematics, which LLM promoters can never overcome. The hard reality is that LLMs are not reliable enough, and simply not good enough, to be the most useful utility for the future of work.
Big Tech realizes this, so it is pivoting to the not-so-new narrative of trust us: hang in there, wait for the next iteration…and the next, the new best version of ChatGPT, which will get us closer to achieving AGI. All said with a straight face, without any qualification or evidence to back it up.
Spending billions, as Microsoft has on OpenAI, creates a too-big-to-fail embarrassment scenario for one of the big tech kings of the new tech aristocracy’s round table. The decided strategy is to let the band play on, so the narrative continues that LLMs will get to AGI someday.
It’s gotten so bad that they’re even bringing in Ray Kurzweil, the renowned futurist and technologist, to paint a future in which AGI arrives by the “early 2030s”:
“I mean, I guess in terms of writing, ChatGPT’s poetry is actually not bad, but it’s not up to the best human poets. I’m not sure whether we’ll achieve that by 2029. If it’s not happening by then, it’ll happen by 2032. It may take a few more years, but anything you can define will be achieved because AI keeps getting better and better.”
That’s as incoherent and dodgy a prediction as I’ve ever heard: a highly skilled, non-committal prediction that only the uninformed or the gullible would believe. The AI aristocracy has switched its focus to long-term promises that are impossible to verify and more grandiose than ever.
However, if we pay attention, we’ll see that the LLM hype is running out of steam. Meta’s chief AI scientist Yann LeCun, for example, said at a VivaTech 2024 talk in Paris that he doesn’t recommend PhD students work on LLM research. He was essentially admitting that LLMs have nothing left in the tank: there is nothing of value left for PhD students to meaningfully add.
“I think that it’s important to remember that LLMs are next-word prediction tools trained to be conversational by imitating human responses. You should work on the next generation of AI systems that lift the limitation of LLMs.” And that “you don’t get much different information from an LLM than you do from a standard search engine.”
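LeCun’s point is mechanical, not rhetorical: at every step, an LLM’s entire output is a probability distribution over the next token. A minimal sketch makes that concrete (it assumes the Hugging Face transformers library and the public gpt2 checkpoint, neither of which appears in his talk):

```python
# Minimal sketch: an LLM is, mechanically, a next-token predictor.
# Assumes `pip install torch transformers` and the public "gpt2" checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# Everything the model "knows" is expressed as next-token probabilities.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, tok in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(tok))!r}: {p:.3f}")
```

Chat behaviour is this same loop run repeatedly, with extra training that rewards imitating human conversational responses.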
In other words, “Generative AI Sucks: Meta’s Chief AI Scientist Calls For A Shift To Objective-Driven AI,” as the title of a Forbes article on his Paris talk put it.
At last year’s VivaTech talk, in June 2023, LeCun said that current artificial intelligence systems like ChatGPT, trained on large language models, do not have human-level intelligence and are barely smarter than a dog. Why he’s picking on dogs I don’t know, but he went on to say that generative AI trained on large language models is not very intelligent, because it is solely coached on language. Well, no kidding!
“Those systems are still very limited, they don’t have any understanding of the underlying reality of the real world, because they are purely trained on text, a massive amount of text,” LeCun said. And that, “Most of human knowledge has nothing to do with language … so that part of the human experience is not captured by AI.”
Generative AI has captivated the imagination of many with its ability to produce text, images, and other artifacts that mimic human creativity. It’s amazing, and yet, compared to the innate learning capabilities of humans, this technology can’t hold a candle to human intelligence.
LeCun concluded: “What it tells you [is] we are missing something really big … to reach not just human-level intelligence, but even dog intelligence.”
“I have no idea how to reproduce this capacity with machines today. Until we can do this, we are not going to have human-level intelligence, we are not going to have dog-level or cat-level [intelligence].”
In other words, don’t hold your breath!
Human intelligence, unlike machine “intelligence,” is multi-dimensional and robust: it utilizes the five senses, whereas the machine functions only on past programmed data. Humans can ascertain context when applying their intelligence, and in real time; these are capabilities that naturally stem from an intuitive understanding of a complex universe.
Therefore, heed what LeCun has to say: “You should work on the next generation of AI systems that lift the limitations of LLMs.” The applied intelligence (ai) process leverages those limitations for practical purposes in advancing human progress. A more focused, language-driven ai model, augmented by the useful parts of generative AI, is the evolving future of AI.
Applied intelligence (ai), therefore, is objective-driven: practical, applicable, commonsensical, and highly useful to human progress and the future of work. The ai methodology grasps the nuanced elements of human communication because of its conversational, human-centric approach.
It’s becoming increasingly difficult for the new tech aristocracy to hide the objective truth. For example, OpenAI’s CTO, Mira Murati, recently acknowledged publicly that there is nothing mind-blowing happening behind the scenes with the coming GPT-5. In an interview with Fortune, Murati said, perhaps unintentionally, that “inside the labs, we have these capable models and they’re not that far ahead.”
OpenAI’s LLMs remain good enough at, and even excel in, narrowly defined domains and tasks that don’t require an understanding of causality. But for the tasks of planning, innovating, and creating more naturally, they have proven not very useful.
To generate useful strategies for real-world applications, less language is more; more focus becomes more useful. And since strategy is the first principle of achieving most things, a highly focused, higher-dimensional approach “grounded in a nuanced comprehension of the physical and social world” will be central to the most useful GenAI models of the future.
Technology is an extension of human ingenuity: collaborators crafted to offer human insights that can be harnessed effectively to provide useful strategy solutions for growth.
So in times of excessive hype cycles, you have to keep your good common sense and let the nonsense run its course. Stay well informed and adhere to the overriding reality of the universe. Nature’s laws, physical and extending to the metaphysical, always tell us what is possible and impossible. So regardless of how upside-down things seem at the moment, rely on your good sense, backed up by real science. And remember, conviction and hype don’t mean anything; they are often highly charged wishful thinking driven by those with a financial agenda.
So embracing the challenges, and not falling for science fiction, is what creates the opportunities to learn, create, and innovate better into the future. AI is a real thing, but for it to have a genuine, sustainable, long-term productive impact on humanity, we must be truthful about its limitations.
Therefore, from a 6ai Technologies perspective, reality is your friend; at the very least, it can stop you from wasting a whole lot of time. The future is not AGI. It’s the real ai, applied intelligence, which includes applying AI properly and responsibly to ensure its purposeful use in serving humanity best.
Progress can’t be about creating machines to devalue human worth; self-destruction, although at times it may seem to be a human trait, is not a human desire. So valuing machines above humans won’t be sustainable. Progress must be about empowering real people to expand their capacity and capabilities and to produce enhanced value, and machines should play a part in that empowerment, with humans remaining in control and machines serving human objectives.
So the real flex is applying your applied intelligence purposefully, in the best interest of oneself and of humanity.
Shifting Tactics
If you have ever read almost anything written by Gary Marcus (professor, best-selling author, and successful AI entrepreneur), you’ll know that he’s been calling out the hype that AI is going to change “EVERYTHING FOR EVERYONE” tomorrow. More or less everything he’s been saying and writing about for years is turning out to be true: generative AI still has enormous limitations, just as he anticipated. It “still hallucinates and makes boneheaded mistakes,” and “the whole Generative AI thing is looking more and more like a dud,” Marcus says.
He goes on to say that OpenAI may yet turn out to be the WeWork of AI, and that for the billions upon billions being spent, the returns are simply not there. Businesses are starting to find this out: AI, in certain applications and new approaches, is turning out to be more trouble than it’s worth.
He also points out that big tech’s new tactic and counter-offensive is to say that AI is going to change everything, if not tomorrow, then sometime over the next 5–20 years, hoping that consumers continue to believe those empty promises.
The more pressing and unlikely AGI becomes, the more the big tech aristocracy doubles down on the hype and falsehoods. A couple of years ago they were just exaggerating, says Marcus; now it’s becoming dangerous.
The truth is that LLMs are not reliable and can’t be trusted, and this is becoming more widely known, so “the big AI leaders have switched their focus to long-term promises that are impossible to verify.” This is the tactical shift in their overall strategy, says Marcus.
However, some with power are beginning to fight back; The New York Times has recently stepped up. First, with a lawsuit against OpenAI for stealing its content, in violation of copyright law. Second, with detailed reporting on the technology’s wide range of common errors, hallucinations, and failures in basic elementary reasoning, even in the simple sorting of which data sources are relevant to particular problems.
Recently, OpenAI’s Sam Altman promised at the Aspen Ideas Festival that AI would “discover all of physics,” a claim that sounds more like something Terrence Howard would utter than something from a supposedly responsible CEO. Altman’s comments are neither coherent nor plausible; they land on the desperate-salesman side.
Microsoft’s AI CEO Mustafa Suleyman confidently told the same Aspen crowd that the cost of knowledge production would go to zero within 15 years. And Anthropic’s Dario Amodei is now fantasizing about AI as intelligent as Nobel Prize-winning scientists, accelerating discoveries in biology and curing diseases, while his company’s own products often fail at basic reasoning.
Unable to regulate his desperation, Altman also told the Aspen crowd, off the cuff, that AI could “double the world’s GDP,” without a shred of evidence to support the claim. He learned from the best, I guess; Elizabeth Holmes, too, understood long ago that this sort of techno-utopia plays well with audiences. Combined with a confident delivery for the appearance of believability, this level of deceit often fools a lot of people, a lot of the time.
Redefining how strategy is crafted in the age of AI
A new dimensional level of useful, focused language models is required. These focused language models (FLMs) will be tuned to the user’s specific strategy objectives, rather than big tech telling you what’s good for you and acting as the arbiter of your life. FLMs put you in the driver’s seat.
FLMs will democratize data access and offer applied business-intelligence solutions that are authentically useful and easy to use, requiring no special skills or training.
An MIT Technology Review report titled “The Great Acceleration: CIO Perspectives on Generative AI” tells us directly that LLMs don’t do the job and that the focus should be on smaller language models instead:
Smaller open-source models could rival the performance of large models and allow practitioners to innovate, share, and collaborate. One team built an LLM using the weights from LLaMA at a cost of less than $600, compared to the $100 million involved in training GPT-4. — MIT Technology Review
Thankfully, smaller does not mean weaker; it means focused. GenAI models can be fine-tuned and focused for any domain, requiring less data in the process, as evidenced by BERT variants for biomedical content (BioBERT), legal content (Legal-BERT), and French text (the delightfully named CamemBERT).
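In practice, loading a domain-adapted variant is the same one-line operation as loading the general model; only the checkpoint changes. A hedged sketch follows (the Hugging Face hub model ids are illustrative of how these variants are commonly published, and should be verified before use):

```python
# Sketch: swapping a general-purpose BERT for a domain-focused variant.
# The model ids below are illustrative hub names; verify before relying on them.
from transformers import AutoModel, AutoTokenizer

CHECKPOINTS = {
    "general":    "bert-base-uncased",
    "biomedical": "dmis-lab/biobert-v1.1",            # BioBERT
    "legal":      "nlpaueb/legal-bert-base-uncased",  # Legal-BERT
    "french":     "camembert-base",                   # CamemBERT
}

name = CHECKPOINTS["biomedical"]
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)  # same API, domain-focused weights
```

The point stands regardless of the exact checkpoints: the focusing happens in the training data, not in a bigger architecture.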
The more focused FLMs are, the better they are at reducing complexity, narrowing in specifically on users’ domain, objective, and strategy needs. FLMs move away from all-purpose, jack-of-all-trades, master-of-none LLMs toward high-quality knowledge acquisition, which is what users want and what businesses need.
6ai Technologies, therefore, empowers users through FLM-Templates (FLM-T): customizable tools for building reliable strategies that can be taken as empirically warrantable, allowing organizations to execute with confidence.
Smaller, focused technology can be within the reach of many organizations: “It’s not just the OpenAIs and the Googles and the Microsofts of the world, but more average-size businesses, even startups,” says MIT Technology Review.
This empowers users to craft their own dashboards and drive their own insights from data. This type of data democratization was already a top way that companies were generating tangible benefits from AI, but in 2022, with the arrival of ChatGPT, people got lost in the hype.
For 6ai Technologies, the business objective is to empower strategy development led by users, not by big tech, so that AI-augmented strategy starts coming from the workforce, marking the beginning of a self-driven entrepreneurial era for people and organizations.
So to truly maximize the productive value of language models, a more focused, higher-dimensional paradigm of information retrieval, augmentation, and integration must be authentically achieved. This must be underpinned by contextualization and prioritization, common sense, the ability to integrate historical examples for comparison, and relevance analysis.
Accordingly, by focusing information retrieval on a specific topic or domain, the 6-step applied intelligence process achieves next-level customization and simplification. It doesn’t suffer from LLM-type noise, irrelevance, and hallucination.
FLM users can be pointed back to sources: citations, attributions, and comparative analysis. This allows for an easy, reliable, and trustworthy user experience, proceeding with speed, accuracy, and confidence!
6ai creates template models with trainable retrievers of relevant information. Multiple topic-specific templates can be customized to users’ strategy objectives, and “if you’re doing something in a more focused domain, you can avoid all the random junk and unwanted information from the web,” says Matei Zaharia, co-founder and chief technology officer at Databricks and associate professor of computer science at the University of California, Berkeley.
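None of the 6ai internals are public here, but the retrieval pattern Zaharia describes can be sketched generically: a small, focused, in-house corpus where every result carries its source citation. In the minimal sketch below, the documents, filenames, and query are invented for illustration, and a production system would use trained retrievers rather than plain TF-IDF:

```python
# Sketch: focused retrieval over a small in-house corpus, with every result
# pointing back to its source. Corpus and query are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = {
    "q3_market_report.txt":   "Regional demand grew 12%, driven by SMB adoption.",
    "competitor_brief.txt":   "Rival pricing undercuts ours in the mid-market.",
    "strategy_memo_2023.txt": "Whitespace identified in services for SMB clients.",
}

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(corpus.values())

def retrieve(query: str, k: int = 2):
    """Return the k most relevant documents, each with its source citation."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    ranked = sorted(zip(corpus.keys(), scores), key=lambda pair: -pair[1])
    return ranked[:k]

for source, score in retrieve("where is the SMB market opportunity?"):
    print(f"{score:.2f}  {source}")  # the citation travels with the answer
```

Because the corpus is small and domain-specific, there is no web junk to filter out, and provenance comes for free.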
Data sovereignty: to maximize competitiveness, an organization must build its strategy-IP internally. We don’t want to build on LLMs where the data being put in could be easily accessed and used by competitors. FLMs’ internally driven templates therefore protect the organization’s strategy-IP.
Different types of templates provide for different domains of strategy development. Agents are designed to deconstruct complexity and to prompt more purposefully, helping users identify the whitespaces of market opportunity.
About 6ai Technologies
Redefining how strategy is crafted in the age of AI: an easy, do-it-yourself, 6-step applied intelligence process to master strategy, without the need for specialized training or complex technical tools. Accessible to all, at a fraction of the cost of hiring consultants or advisors.