The Problem With Fragile AI Systems
- Mar 27
The alternative: Antifragile applied intelligence — the real ai

“Some things benefit from shocks; they thrive and grow when exposed to volatility, randomness, disorder, and stressors.”
— Nassim Nicholas Taleb, Antifragile
In the book Why Nations Fail: The Origins of Power, Prosperity, and Poverty (2012), the authors, Daron Acemoglu and James A. Robinson, argue that inclusive systems create prosperity and thrive, while extractive systems are inherently fragile and eventually fall apart. This aligns with the work of Nassim Nicholas Taleb, an NYU professor, former Wall Street trader, mathematician, essayist, and risk analyst whose work focuses on probability, uncertainty, and randomness. He is the author of many best-selling books, including the classic Antifragile: Things That Gain from Disorder. Taleb, too, argues that complex extractive authoritarian systems controlled by the few are fragile and will break.
Taleb’s concept of the Antifragile (systems that gain from disorder, stressors, and volatility) provides a critical framework for analyzing large language model (LLM) fragility. Effectively, LLMs are systems built on hype and hubris, on the fallacy of endless scaling: that ever-bigger models will eventually get you to artificial general intelligence (AGI).
However, LLMs rest on neural network theory, and it is only a theory: one that infers patterns and rules from large volumes of data, on the belief that pattern recognition can match, even surpass, human intelligence. What many do not understand is that simply applying a neural network to a problem does not automatically create a solution. We cannot solve the real, pressing social and economic problems with a theory and a massive LLM memory bank of training data.
True problem solving requires a process of formal logic and reasoning (not the fallacy of informal logic that LLMs are based on), so math alone is not enough; math needs philosophy to develop an effective, reality-based applied intelligence methodology. This methodology can be learned systematically and applied effectively to core principles of premise and structure. It involves working with complexity — comprehensive sets of empirical facts that allow us to draw scientifically informed conclusions. This approach is called applied intelligence (ai), a Socratic, human-centric process that applies critical and independent thinking to turn information into knowledge, fostering insight and rational, antifragile strategic thinking for empirically justified strategies.
Big Tech AI ecosystems are fragile due to their top-down authoritarian style and centralization of knowledge. Increasingly, however, when exposed to scrutiny and stressors, their order turns into disorder, and the fragile constructs are exposed.
LLMs operate as cognitive extractors: systems designed to replace, not elevate, human thought and the mind. By prioritizing speed and scale for quick answers and summaries, they reduce users to passive consumers of prefabricated ideas, eroding intellectual capacity and strategic agency. The alternative, applied intelligence (ai), is the antidote: a Socratic framework in which humans retain sovereignty over their intellect.
Fragile extractive authoritarian systems tend to concentrate wealth and control, and the big tech LLM industry mirrors this dichotomy: its extractive ecosystems centralize data and knowledge, stifle competition through proprietary control, and rely on monopolistic practices that hoard value for a few stakeholders. Like extractive authoritarian regimes throughout history, these systems grow brittle under stressors: regulatory scrutiny, market volatility, public distrust, and inequality.
Conversely, inclusive systems such as applied intelligence democratize, decentralize, and collaborate, emulating antifragile, dynamic, pluralistic systems that distribute agency and encourage experimentation and free, critical thinking. The fundamental point is that systems designed mainly to extract value, rather than to augment human intelligence, capacity, and capabilities, are not authentic systems and are not sustainable long term. They crumble under the weight of their own fragility.
So, unlike ChatGPT’s opaque algorithms, applied intelligence embeds empathy into its architecture, a system powered by user curiosity: users interrogate assumptions, pressure-test hypotheses, and create and own their decisioning frameworks and strategies. Every user is unique. This transforms AI from an overhanging threat of replacement into a collaborator and partner that frees users from the ignorant LLM herd, empowering them to thrive in the new globalized economy while guarding against the ultimate risk: intellectual obsolescence.
applied intelligence: an Antifragile-built system
LLMs exhibit remarkable capabilities within their training distribution; there can be no doubt about that. But their fragility is revealed as adversarial brittleness: simple perturbations can cause dramatic failures in performance, and systematic failures appear when they confront novel situations or basic contextualization that deviates from their training data.
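To make this brittleness concrete, here is a minimal, hypothetical sketch in plain Python (a toy token-matcher, not an actual LLM): a classifier that keys purely on surface tokens, the way a distributional system keys on regularities seen in training. The tokens and the invented perturbation “great” → “gr8” are assumptions chosen for illustration only.

```python
# Toy illustration (NOT an LLM): a classifier that only recognizes
# surface patterns it has "seen" before, mimicking purely
# distributional systems keyed to their training data.

def toy_sentiment(text: str) -> str:
    """Label text by looking up known tokens; anything outside the
    known vocabulary contributes nothing to the score."""
    positive = {"great", "excellent", "love"}
    negative = {"terrible", "awful", "hate"}
    tokens = text.lower().split()
    score = sum(t in positive for t in tokens) - sum(t in negative for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "unknown"

print(toy_sentiment("this product is great"))  # in-distribution: positive
print(toy_sentiment("this product is gr8"))    # one-token perturbation: unknown
```

A single character-level change outside the memorized patterns defeats the system entirely, while a human reader recovers the meaning without effort.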
Better training and more parameters won’t change this fundamental problem; LLMs can’t get around it. Any hope of solving it would have to begin with tearing down the architecture itself, and that’s not going to happen, so LLMs are, effectively, a dead end.
The Antifragile nature of applied intelligence lies in its bottom-up process, which lets it learn and grow from users (not an imposed, top-down ChatGPT model), evolving organically through users’ inquiries, real-world stressors, and adversities. Accordingly, the applied intelligence process strikes an optimal balance of machine-scale processing power and human judgment and empathy, which is exactly where real, sustainable research and strategy live.

Times and technology may change, but human nature remains the same. Today, fragile authoritarian-style regimes are being emulated in Silicon Valley, with propaganda replaced by hype. Big Tech and its billionaire owners rule, with VC courtiers enforcing the system by investing in startups that feed the interests of the Big Tech AI Aristocracy. Of course, the system would not be complete without the political corruption that helps force it down our throats: the Trump White House, with Palantir lurking in the shadows, supplying mass surveillance technology to bolster the authoritarian regime.
Nevertheless, the arc of history is long, and fragile systems do not have history on their side. The system will break or the bubble will burst, and if there is one thing history has taught us, it is that it is never any different this time.
The “Most Likely” trap has fooled many with the appearance of intelligence. But LLMs don’t know meaning; therefore, they can never be intelligent. What they produce is bias and consensus, the “most likely” answer based on their training data. It’s a parlour trick, says Noam Chomsky: AI tells you nothing new, nothing you can’t find out yourself with a little effort. An LLM is an “impossible language,” he says, intelligible only to the machine itself; a sophisticated “high-tech plagiarism” engine that scans vast datasets to identify regularities rather than understanding the underlying, innate rules of language that humans use to convey meaning.
It’s a “recipe”: Alan Turing, a founding father of computer science and artificial intelligence, emphasized the need for learning rather than rigid programming. The AI Aristocracy, however, has moved us in the opposite direction, away from learning.
Taleb calls LLMs a self-licking lollipop: a system that nullifies any enjoyment or purpose. So LLMs can generate all the summaries you want, but they suppress critical thinking and create a recursive “self-feeding loop” of statistical noise and superficial averages that reduces the diversity of thought and weakens minds.
Quick answers and instant gratification feel good in the moment, but the damage is lasting. Excessive use of tools like ChatGPT erodes your thinking capacity, your capabilities, and the rigour associated with knowledge acquisition and wisdom, resulting in meaningful declines in critical thinking skills and cognition.
According to MIT, LLMs/AI don’t make you smarter; with excessive use, they can generate brain rot instead:
MIT Media Lab: The correlation between AI tool use and critical thinking was strongly negative (r = −0.68), suggesting that greater reliance on AI tools is associated with a 32% decline in critical thinking skills and cognitive effort.
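For readers unfamiliar with the statistic quoted above: a Pearson correlation of r = −0.68 indicates a fairly strong inverse relationship between two variables. The sketch below computes Pearson’s r from scratch; the numbers in it are synthetic, invented purely for illustration, and are not the study’s data.

```python
# Hedged sketch: computing Pearson's correlation coefficient.
# The data below are made up for illustration; they are NOT the
# MIT Media Lab study's data.
import math

def pearson_r(xs, ys):
    """Pearson correlation: covariance divided by the product of
    the two standard deviations; always lies in [-1, 1]."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores: heavier AI-tool reliance paired (noisily)
# with lower critical-thinking scores, giving a negative r.
ai_use   = [1, 2, 3, 4, 5, 6, 7, 8]
thinking = [7, 9, 5, 8, 4, 7, 3, 5]
print(round(pearson_r(ai_use, thinking), 2))  # → -0.59
```

A value near −1 would mean a near-perfect inverse relationship; the study’s reported −0.68 sits in the “strong negative” range.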

In summary, LLMs are fragile systems that break down when encountering stressors — novel situations. Current safety measures, like putting “guardrails” on AI, don’t fundamentally change a thing. It’s only an illusion used to placate the anxious and uninformed to justify the Big Tech Aristocracy’s sinister push for self-governance and authoritarian rule — don’t believe the hype!
LLMs are void of philosophy and methodology, so they don’t know meaning, and often shatter from the very same math they were built on. LLMs are pretrained, so they can’t account for the mighty independent variable (where changes in the independent variable directly cause changes in the dependent variable). Therefore, hallucination is an incurable flaw/feature in the system; no matter how many dependent variables (training data) have been calculated, the mighty independent variable can still come along and crash the party. And that’s that!
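The point about pretrained systems and unaccounted-for variables can be shown in miniature. In this hedged toy sketch (an ordinary least-squares line, not an LLM; all numbers invented), a model fitted only on a narrow training range looks accurate in-distribution, then fails badly when the independent variable moves into a regime it never saw:

```python
# Toy sketch (NOT an LLM): a model fit only on past data is blindsided
# when the independent variable leaves the training range. The true
# relationship here is quadratic, but we fit a line on a narrow slice.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def f(x):
    return x * x  # the actual data-generating process

train_x = [0.0, 0.5, 1.0, 1.5, 2.0]          # "training distribution"
a, b = fit_line(train_x, [f(x) for x in train_x])

in_dist_err  = abs((a * 1.0 + b) - f(1.0))    # inside the training range
out_dist_err = abs((a * 10.0 + b) - f(10.0))  # novel regime, far outside

print(in_dist_err, out_dist_err)  # → 0.5 80.5
```

Inside the training range the fitted line is off by only 0.5; at x = 10 the error explodes to 80.5, because no amount of fitting to past data anticipates a regime the model never observed.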
The only logical and proven alternative is an Antifragile system: applied intelligence, a curiosity-driven decisioning system in which users apply their own intelligence to transform data and research into actionable wisdom. One where every query mutates the system’s DNA, embedding your intellectual fingerprints into its core: every strategy is unique.
LLM systems like ChatGPT aren’t making you smarter; they’re making you mentally lazier. The applied intelligence system, on the other hand, is built to make you a better thinker, because your mind’s irreplicable thinking is the spark that powers it! It is inclusive, not extractive!
Hence, the moat in applied intelligence isn’t technical; it’s human: your thinking reshapes the framework, making it authentically yours to claim ownership of your work. LLMs compete on speed; applied intelligence also offers speed but engages the user in intellectual rigour, to ensure outputs that can be trusted and outcomes that can be celebrated!


