Technopower Manifestos of Authoritarian AI
- Apr 24
Bogus Predictions, Prophecies and other Hubris (Never Truth) from the Silicon Valley Prophets

“If the main pillar of the system is living a lie, then it is not surprising that the fundamental threat to it is living the truth.”
—Václav Havel
As it stands today, just a handful of men have been entrusted with the world’s most potent new technology, exercising free-wheeling command over artificial-intelligence models that can shape our future. Coupled with an all-time grifter Trump Administration, this has emboldened authoritarian AI and hindered the development of any alternative thinking systems and technologies. The result is an inertia of insidious manipulation: the abdication of human agency, driven by fear and fantasy, with prophecies served up for the wilfully ignorant to consume.
The Silicon Valley Technopower aristocracy is run, authoritarian style, by just a few billionaires (supported by the courtier VCs) whose businesses and power thrive on our wilful ignorance and obedience to prescriptive predictions that are neither knowledge nor truth, which we give oxygen to, thereby becoming self-fulfilling prophecies.
This handful of men have been elevated to the status of gods and prophets with their hubristic manifestos, and they are becoming even more emboldened in trying to shape our world. However, manifestos are not truth; they are predictions, and predictions are not facts. Facts concern the present and the past; the future doesn't exist yet, so there can be no facts about what doesn't exist. This is a power play, nothing more, or as Anne Applebaum put it in her book Autocracy, Inc., authoritarian regimes in the 21st century are not bound by a particular “ideology, but rather by a ruthless, single-minded determination to preserve their personal wealth and power.”
These billionaire manifestos do not provide us with any true knowledge about the world. They're just hubristic prophecies, meant to make us believe that the future has already been determined, and by them. Tech companies are not in it for the knowledge; they're in it for the money, aiming to build business empires aligned with their own wealth ambitions, driven not by any special access to knowledge but by a relentless pursuit of power.
Like the landlords of the feudal systems that arose in medieval Europe around the 9th to 11th centuries, it's all about the rent; in the 21st century, cloud rents.
Palantir’s CEO, Alex Karp, recently issued his Technofascist manifesto (The Technological Republic: Hard Power, Soft Belief, and the Future of the West), championing US military dominance and AI weapons. It set off a significant backlash for its aggressive, pro-war stance and its attacks on cultural inclusivity, and The Guardian described it as the ‘ramblings of a supervillain.’ Pay attention: Karp’s manifesto represents the broader intent of Technopower authoritarian AI.
Palantir is a mass surveillance company founded by Alex Karp and Peter Thiel after 9/11 to ‘defend Western civilization.’ Karp’s manifesto extols the benefits of American nationalism and power, implying that other cultures are inferior. Standard white-supremacy stuff. Karp writes that "Silicon Valley owes a moral debt to the country that made its rise possible," tying American AI development to American power projection and cultural superiority, treating technological dominance as synonymous with national identity, and implying that questions of democratic oversight are secondary to “America’s” competitive victory.
Another manifesto comes from Marc Andreessen, co-founder of the top-tier venture capital firm Andreessen Horowitz. "The Techno-Optimist Manifesto," published in October 2023, is a 5,000-word defence of technology and free-market capitalism as the only drivers of human progress. It advocates unfettered technological growth, asserting that technology makes everything "cheaper and more abundant," and urges humanity to adopt a pro-growth, pro-tech mindset.
Next, OpenAI CEO Sam Altman's manifesto, "The Intelligence Age," outlines a future driven by superintelligence, promising massive, positive, and rapid transformation for humanity. Altman envisions a "gentle singularity," in which AI becomes a widespread general utility, similar to electricity, enabling him, and only him, to solve all the complex problems in the world: climate change, disease, and “all physics,” he proclaims. But first, Altman wants us to give him up to $7 trillion for AI chips; the natural resources required would be mind-boggling, and would cost us the planet.
Harvard Business School Professor Emeritus Charles Edward Wilson is brutally honest about Altman, stating that “Altman is no genius and has little vision beyond his baseless rhetoric and the billions of dollars from greedy or guileless investors.”
Anthropic CEO Dario Amodei's manifesto, titled "Machines of Loving Grace" (sometimes referenced alongside themes of "The Adolescence of Technology"), primarily uses fear as its trade-craft marketing tool. Amodei’s prophecy is particularly cunning and dangerous because he distracts with fear-driven “AI-safety” narratives of inevitability, which fool the less observant and naive among us into believing his branding narrative.
Manifesto Pattern Analysis
| Actor | Manifesto Title/Date | Key Excerpts | Evidence of Narrative Control | Evidence of Foreclosure |
|---|---|---|---|---|
| Andreessen | "The Techno-Optimist Manifesto" (Oct 2023) | "[Quote positioning tech/capitalism as primary progress driver]" | Frames market-driven acceleration as the only viable path to human flourishing | Dismisses alternative governance or development models a priori |
| Altman | "The Intelligence Age" (Sept 2024) / "Industrial Policy..." (Apr 2026) | "[Quote about 'gentle singularity' or inevitable prosperity]" | Presents AGI as foregone conclusion, not a choice | Treats ubiquity as predetermined; no space for "should we?" only "how soon?" |
| Amodei | "Machines of Loving Grace" (Oct 2024) / "The Adolescence..." (Jan 2026) | "[Quote about 'powerful AI' arriving 2026-27 with civilization-level stakes]" | Claims exclusive expertise to manage existential technology | Positions public oversight as potentially dangerous given urgency |
| Karp | Palantir Manifesto (2026) | "[Quote linking AI to American power/cultural superiority]" | Ties technological dominance to national identity | Democratic oversight framed as competitive weakness |
Nevertheless, all of these manifestos are nothing more than prophecies by hubristic men telling us how they will design the future for us. Everyone will be using AI, and only their version of the future matters.
They’re trying to bend reality toward their vision, pushing the fear of missing out (FOMO) and of being left behind, which produces anxiety (irrational fear stemming from ignorance and faulty thinking) about something that doesn't exist. Often, this causes people to act against their own best interests.
Prediction manifestos ultimately become self-fulfilling prophecies that lead to tyranny and surveillance by the big tech aristocracy, working in tandem with the government. These are no longer descriptive, but prescriptive, proposing a prescribed course of action. Many who listen to these prophecies begin to take them as truth!
People buy into AI, foolishly believing they are gaining an advantage or getting ahead of the curve, ignorant of the fact that everyone gets the same LLM, the same AI. How can anyone have an advantage if everyone gets the same thing?
For example, Dario’s prediction that "Coding is going away first, then all of software engineering" is a prophecy meant to prescribe a future path from which Anthropic benefits financially. Influential software engineer Gergely Orosz wrote that “The only people who believe any of [what Amodei said] are non-coders.” His “Machines of Loving Grace,” a title borrowed from a Richard Brautigan poem, is utopian, resembling science fiction. Amodei tells The Atlantic that “we will soon create the first polymath AI with abilities that surpass those of Nobel Prize winners in most relevant fields, and we’ll have millions of them, a country of geniuses.” Again, a prophecy, not knowledge or truth, because there can be no facts about the future. These are words meant to get us to entrust Anthropic as stewards of AI.
The Danger Zone
Some AI experts are finally waking up and sounding the alarm about companies like Anthropic releasing powerful models whose safety rests entirely with the company shipping the product, without a regulator or even a third-party auditing process. “The current status quo, where private companies get to decide which models are released, is harmful to society,” said Nicolas Papernot, co-director of the research program at the Canadian AI Safety Institute, adding that the “current reality of generative AI being a new form of public infrastructure” comes with no regulation.
Canadian AI pioneer Yoshua Bengio added that it is “deeply concerning” that defining and applying safety standards to models can be left solely to companies. “If current scientific trends continue, we will likely face an increasing number of similar cases,” he said, referring to Mythos, Anthropic’s new ‘powerful’ model.
The Economist has also pointed out recently that we need to ask better questions and not be overly trusting of tech CEOs. We should question the value to humanity of Anthropic's new model, Mythos, which finds software vulnerabilities, and those concerns must go beyond what would happen if Mythos got into the wrong hands, to the authentic, beneficial value the software provides to society. That is, if it threatens critical infrastructure, from banks to hospitals, and enables biosecurity hazards and a new level of industrial-scale scamming, does the bad outweigh the good? And how does introducing potential malaise and chaos benefit us?
The benefit, however, is not for society; it’s for the billionaires and their corporations, creating more superfluous technology products we don't need.
Tech giants are set to spend an estimated $1 trillion on AI capex in the coming years, says Jim Covello, Goldman Sachs’s head of stock research. Covello has warned that building too much of what the world doesn’t need “typically ends badly,” and that if Generative AI (GenAI) does not move beyond simple chatbots and deliver significant, measurable productivity gains soon, the high capital costs will weigh heavily on their business models, making them, and other LLM-based companies, unsustainable.
We must open our eyes and not allow ourselves to be led blindly into the future, but participate in ushering it in. Consider the parable of the Thanksgiving turkey. Every day of its life on the farm, the turkey is fed, kept warm, and well cared for by its caretaker. With every passing day that the turkey fattens up, it becomes more gullible and trusting of whatever its caretaker says, believing that an amazing future is being laid out for it. The turkey grows increasingly lazy, never questioning why the farmer is being so incredibly nice, right up until the point the turkey sees the farmer coming with an axe for its neck! The turkey will now incur a revision of belief.
Not being aware of our surroundings and the actions and motivations of others around us can put us in peril. The turkey bought into the improbability of such a wonderful future at the hands of others. The warning, therefore, is that people do things primarily for their benefit, even if it may appear in the moment to be in yours, so be aware of the motives of others because you are ultimately responsible for your own self-preservation.
Skepticism would have served the turkey well; basic skepticism will undoubtedly serve us humans well, too.
The more the turkey relied on others, the more misinformed and delusional it became. As the wise aphorism goes, absence of evidence does not amount to evidence of absence. Had the turkey not been so lazy and gullible, and had it tried to understand the motivations of the farmer, it would've fled for its life. Similarly, if we make a habit of exercising skepticism, it will protect us against falling for the highly improbable. The moral of the story: don't be a damn turkey!
In the brilliant new book Prophecy: Prediction, Power, and the Fight for the Future, from Ancient Oracles to AI, Carissa Véliz, Associate Professor at the Institute for Ethics in AI at the University of Oxford, explains how putting too much stock in others' predictions makes us vulnerable to charlatans, con artists, dubious technology, and self-deception. Her research in computing and philosophy uncovers several insights: predictions in AI technology tend to be self-fulfilling; more data doesn't guarantee better outcomes; AI is more likely to increase risk and create problems for society than to decrease them; and a free and robust society requires not more prediction but better preparation to build a better future.
Trusting corporations to act in our best interest is ill-advised. Corporations want to influence your thinking so that you adopt their products and beliefs. Amazon, for example, conquered a bit of you when it persuaded you to adopt a Kindle book rather than a paper one. Palantir gains power over our freedoms when it is given government contracts for mass surveillance.
At no other time in history has a single industry had so much power over government and over us. The Magnificent Seven stocks are worth trillions and have amassed a gravitational pull in power and influence over everything. Palantir is the de facto policy maker for the US government because, via data, it shapes the critical decision-making of government officials and their leaders, greatly influencing outcomes. Governments no longer run their own infrastructure; big tech does. The President of the United States has come to kneel at the feet of AI CEOs: witness Trump's White House press conference announcing Sam Altman's $500 billion infrastructure grift, "Stargate." When a start-up uses Google Docs, it is handing Google access to its plans, and Google is in the business of taking out startups with the potential to challenge any part of its business. Microsoft Teams has a surveillance function, even though Microsoft says it doesn’t. Big tech runs the world, whether you choose to believe it or not.
Nevertheless, the future remains unwritten; nothing is predetermined, despite what the AI prophets are telling you. Keep your head while others are losing theirs, and resist being manipulated by bogus prophecies dressed up as knowledge, because there can be no knowledge about the future: facts about the future can't ever exist in the present. This is not truth, just manipulative, prescribed predictions for power over you. Don't fall for it.
Conclusions
We must fully grasp that predictions, particularly those based on hubristic manifestos, are not knowledge or genuine insight into the future. Rather, they’re a raw power play. Therefore, we must not surrender or squander our human gift of agency and walk through life with our eyes closed, giving in to the designs of a small group of men creating the AI that benefits their wealth and power.
Big tech billionaires have positioned themselves as prophets to accrue greater power, using predictions as a tool of manipulation; the more absurd the narratives become, the more the public swallows them whole. These narratives hide the many risks of AI, including the circumvention of privacy and the rise of mass surveillance. AI prophecy only increases our anxieties, and humans often make bad decisions when they feel vulnerable: they search for saviours who can offer the illusion of safety, which is precisely the game Dario Amodei plays.
In the end, AI is a robust autocomplete, a glorified search engine on steroids. The greater danger lies in human beings buying the hype of those who design the particular AI that can hurt us, while those designers take no responsibility for their actions, nor for the trouncing of democracy to which their machines contribute greatly.
There is a case to be made for a new applied intelligence alternative. We’ll get into it in part two of this article.