
Apple Takes a Bite out of LLM Hype: GenAI and the Pervasive Atrophy of Human Intelligence

  • perrydouglas9

I recently read an article in The Globe and Mail titled “Your Brain on AI: Will Tools Designed to Help Us Instead Atrophy Our Critical-Thinking Skills?”, subtitled “As tech companies court students, educators calculate the long-term risks.” The article discussed a peer-reviewed paper on the precipitous decline in students’ critical-thinking skills caused by AI overuse and dependency.


It was authored by Professor Michael Gerlich, who teaches at SBS Swiss Business School in Zurich, where he heads the Centre for Strategic Corporate Foresight and Sustainability. He examined “the relationship between the use of generative artificial intelligence applications, such as ChatGPT, and critical thinking skills,” and it reminded me of an article I wrote several weeks back for the publication byblacks.com, titled “Using AI To Cheat Your Way Through School Is Definitely Giving You Brain Rot.”


And it triggered an avalanche of responses from other professors experiencing similar issues in their classrooms: overuse of GenAI, they say, is degrading critical-thinking skills, and students are not learning.


Late last week, Apple Research followed up on a white paper it published about a year ago arguing that LLMs can’t think or reason. The newly released paper blows the doors off the hype-driven LLM industry. Apple, once again, proved that GenAI can’t reason, isn’t intelligent, and can’t think; it merely guesses.


Both papers help shine a bright light on a highly suspect and flawed industry that continues to cleverly fool people with bogus claims, led by top snake-oil salesmen like OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei, high priests of the new GenAI-LLM religion!


Cut through the hype and strip away the fancy prompting and pattern-matching illusions, and LLMs consistently fail at tasks that require multi-step reasoning and abstract logic. Effectively, the problem-solving ability of large models is a constructed illusion: no thinking or reasoning is happening, just computational memory of the training data performing tricks that give us the appearance of intelligence.


Apple stress-tested these LLMs outside their comfort zones, and the conclusions were: “These models aren’t thinking…they are very expensive autocomplete, error rates of 30–70%.”

Many industry experts concluded a while ago that LLMs have hit a wall, but with the continued panic-induced hype coming from the industry and the Tech Bros, and with untold billions already spent, the industry has consciously blinded itself to the writing on the wall.


As has happened many times before in history, a too-big-to-fail narrative has taken hold in the industry, along with a sense of disbelief: how can it possibly be that so many smart people are wrong? But you’ll find that the phone book is full of smart people, particularly from Silicon Valley, who are wrong all the time.


Atrophy of the mind


Atrophy of the mind is a metaphor for the decline, weakening, and deterioration of one’s cognitive abilities; in this context, from a lack of mental exercise and the overuse of generative AI. Just as our muscles weaken and shrink from disuse, the mind, too, loses robustness and core cognitive functionality when it disengages from intellectual stimulation and overengages in mind-dulling laziness.


So it is illogical to believe that, by letting AI do all the work, you are somehow learning and enhancing your intellectual capacity and skills. You are just fooling yourself!


Core cognitive functions represent the strengths of our minds and perform their heavy lifting:

  • Critical Thinking and Problem-Solving: the ability to analyze information, form reasoned judgments, and solve complex problems.
  • Memory: the capacity to learn, retain, and recall information.
  • Creativity: the ability to generate new ideas and think outside of established patterns.
  • Learning Capacity: the ease and speed with which you can acquire new skills and knowledge.
  • Mental Acuity and Flexibility: the sharpness, quickness, and clarity of one’s thinking, and the ability to adapt to new situations and consider different perspectives.


The ancient Greek philosopher Aristotle said that intelligence was not a single, measurable quantity but a multifaceted set of virtues and capacities intrinsic to the human soul. In other words, intelligence requires consciousness!


Only humans can have consciousness; thus, authentic intelligence can only be found in humans, not machines. Artificial intelligence, therefore, is just that: artificial! Not real, but a constructed illusion that gives the appearance of intelligence.


The 18th-century German philosopher Immanuel Kant defined intelligence and reason in his published work, “Critique of Pure Reason,” laying out their limits and scope within metaphysics: the most general features of reality, including existence, objects and their properties, possibility and necessity, space and time, change, causation, and the relation between matter and mind.


Intelligence, from a Kantian perspective, is the intricate interplay of sensibility, understanding, reason, judgment, and emotional intelligence, working together to construct our experience of reality, which guides our decisions. No machine can do that!


Like Aristotle, Kant did not see “intelligence” as a single, measurable quality that could be captured by one test, philosophy, or system; nor can AI capture it. The human mind is a multi-faceted “architecture of cognition,” Kant said, in which what we would call intelligence emerges from the dynamic interplay of distinct mental faculties.


Therefore, if those faculties are not being used in any meaningful way, how can students authentically learn and enhance their intelligence?


Prof. Gerlich’s paper also aligns well with the core theme of my article about cheating with GenAI: cheating, while not new, has never been seen at this scale, and it has become a threat to authentic human intelligence and to our future capacity for innovation and invention, I wrote. Cheating with, or overusing, AI sets you up to be a sucker; eventually you’ll play yourself, and pay a price.


To make the point, I retold the parable of the Thanksgiving turkey from best-selling author and NYU professor Nassim Nicholas Taleb’s book The Black Swan, which examines our beliefs around “the impact of the highly improbable,” and how the improbable becomes probable when we fail to think critically and open our eyes. Like the Thanksgiving turkey, too often we swallow whole what others feed us, naïve and not observant enough to see what is really happening right in front of our eyes.


“Consider a turkey that is fed every day. Every single feeding will firm up the bird’s belief that it is the general rule of life to be fed every day by friendly members of the human race looking out for its best interests. But on the afternoon of the Wednesday before Thanksgiving, something unexpected will happen to the turkey. It will incur a revision of belief.”


― Nassim Nicholas Taleb, The Black Swan


The moral of the story is that everyone has motivations — there is no free lunch — don’t become a gullible Thanksgiving Turkey buying into improbability.


Prof. Gerlich’s research also reveals a “significant negative correlation” between the outsourcing of intelligence and critical-thinking ability: increased dependency on AI tools is associated with lower critical-thinking scores. This suggests a vicious cycle of over-reliance on generative AI: it reduces the need for deep analysis and thought, which in turn leads to even more reliance on AI. This “inadvertently fosters dependence, which can compromise critical thinking skills over time,” says Prof. Gerlich.
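
For readers unfamiliar with the statistical term, a negative correlation simply means that as one measure rises, the other tends to fall. A minimal Python sketch, using invented numbers that are not the study’s data, shows what a strong negative correlation between AI reliance and critical-thinking scores would look like:

  # Hypothetical illustration only; these numbers are invented, not taken
  # from Prof. Gerlich's study. Higher AI reliance pairs with lower scores.
  from statistics import correlation  # Python 3.10+

  ai_reliance = [2, 3, 4, 5, 6, 7, 8, 9]                 # self-reported reliance on AI tools
  critical_thinking = [88, 84, 80, 71, 66, 60, 55, 49]   # critical-thinking assessment scores

  r = correlation(ai_reliance, critical_thinking)        # Pearson's r
  print(f"Pearson r = {r:.2f}")                          # prints a value near -1.0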


Michael Johnson, an associate provost at Texas A&M University, says that “Adolescents benefit from structured adversity, whether it’s algebra or chores. They build self-esteem and a work ethic. It’s why the social psychologist Jonathan Haidt has argued for the importance of children learning to do hard things, something that technology is making infinitely easier to avoid.”


“Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate,” says Troy Jollimore, philosopher and Cal State Chico ethics professor.


Professor Brian Patrick Green, who teaches applied ethics at Santa Clara University, adds, “We’re talking about an entire generation of learning, perhaps significantly undermined here. It’s short-circuiting the learning process, and it’s happening fast.”


For generations to progress and thrive, they must be able to solve the most pressing problems of their times. This is basic to the human need for survival and security; it is part of evolution. Without such abilities, humanity degenerates, leading to a world run by unremarkable, hubristic, and foolish individuals: those who would tell us to put aside our common sense and individualism and join their new religion, undermining the very universal aspects of our human existence.


As more university graduates enter the workforce with diminished cognitive abilities, the greater the risk to humanity — we risk creating an entire generation of superfluous people.


Oliver Hardt, an associate professor at McGill University who researches the neuroscience of memory, says that generative-AI tools, and even GPS use, can affect our spatial memories, which are governed by a part of the brain called the hippocampus: “People that constantly rely on GPS for navigation actually see a negative change in the hippocampus, or some form of atrophy.”


Therefore, it is logical to conclude that excessive cheating with GenAI in school, and overreliance on and dependency upon AI more generally, would have significant consequences for students later in life.


AI pushes against the very essence of what it means to be human, so how we utilize AI will significantly influence our future and our place in the universe.


A thousand years of the history of technology, together with contemporary evidence, makes one thing clear: civilization’s progress depends more on the choices we make about how we use technology than on the technology itself.


Using AI as a partner, an augmenting tool that enhances our human capacity, creativity, and ingenuity, is the optimal use if we want to preserve our humanity. The AI-everything path provides instant gratification but brings long-term pain and suffering, and it carries a hidden inverse relationship with success: as your AI usage goes up, your success factors go down. So choose wisely, and don’t look to AI for the answer, because AI doesn’t have wisdom.


6ai — the authentic intelligence paradigm


Prof. Michael Gerlich believes that one positive way to engage with generative AI is to treat it as an augmenting tool for authentic learning, seeking data and evidence, exercising and maximizing intellectual capacity, finding alternative views to consider, and filling the gaps in your thinking.


I would agree: the optimal use of GenAI is for mental growth, cognitive flourishing, and intellectual vitality; that is, for neuroplasticity, the brain’s capacity to change, reorganize, and reconnect in response to new information and learning. All of this requires thoughtful, sustained intellectual exercise.


Accordingly, 6ai Technologies’ authentic-intelligence solution maximizes the productive value of generative AI, building a more focused, higher-dimensional level of information retrieval, augmentation, and integration to authentically help maximize human potential.


6ai Technologies’ my6ai software is human-centric. It doesn’t aim to make AI compete with, surpass, or replace human intelligence. By democratizing knowledge ecosystems, we make them increasingly accessible while taking a Socratic approach, powered by the six-step (6ai) process, applied practically, purposefully, and responsibly to augment human intelligence capacity, capability, and ingenuity.


6ai Technologies provides an alternative to the different shades of grey AI we have now. 6ai is a full-stack, AI-agent-based company that aims to compete directly with large consulting firms by giving individuals and organizations the right tools to develop their own insights and craft their own strategies, in making their world!


The six-step process couples AI agents with the company’s Socratic Conversational Method (SCM) IP, which forms its core architecture and infrastructure, built to scale and designed to put humans, not machines, in control.

 
 
 
