A Relentless Pursuit of a Narrow Vision of the Future
- perrydouglas9
- Aug 1
- 11 min read
The Emperor wannabe has no clothes

OpenAI is the world’s third most valuable private company, valued at $300 billion following its latest fundraising efforts, a record $40 billion tranche led by SoftBank in March 2025. It has raised more than $63 billion since its inception. However, its valuation has chiefly been built on raising capital and hype around AGI, rather than solid business fundamentals. JPMorgan recently warned that risks to OpenAI’s business model are increasing, as it faces growing uncertainty in preparing for a potential ‘OS wars’ against Google, Apple, and other Silicon Valley giants. Yet, OpenAI has failed to present a sustainable enterprise business model or a solution for customers.
JPMorgan states that OpenAI’s “frontier model innovation” is becoming a “more fragile moat,” and it faces a “window of risks” that is widening each day, including “rising talent and litigation risks, as well as strategic uncertainty related to OpenAI’s unconventional organizational structure.” The bank describes OpenAI’s “window” as “quite opaque,” and suggests that its leader, Sam Altman, might be exposing vulnerabilities and may indeed have no clothes.
Still, OpenAI continues to pursue a future that’s not about responding to demand or improving efficiency for human progress; instead, it’s focused on a narrow vision of the future crafted in its leader’s image.
Leadership and vision can inspire, motivate, and guide people toward future goals. Altman’s vision for an OpenAI world is to be “your interface to the internet, transforming the way humans interact with machines,” effectively overlaying control on humanity, with himself at the helm making the key decisions for everyone.
Leaders like Altman often aim to build businesses aligned with their own ambitions, rather than serving the best interests of the marketplace or human progress; they are often driven by hubris and the desire for conquest, control, fame, wealth, and empire. In Greek tragedies or Shakespearean stories, times and technologies may change, but human nature remains constant.
Bold visions attract followers — those with similar ideals but who lack leadership qualities. They may have visions for the future and possess strong technical skills, but lack confidence. Altman combines a compelling vision with confidence, which he leverages to gather a like-minded team willing to carry out his desires.
“Never doubt that a small group of thoughtful, committed citizens can change the world; indeed, it’s the only thing that ever has.”
— Margaret Mead
Skepticism remains a crucial tool; it must be central when evaluating OpenAI’s vision for the future and whether it truly benefits us. Altman has been remarkably successful so far in engaging those who are intellectually lazy, weak-minded, willfully ignorant, or unwilling to think critically or do some research — traits prevalent in much of the population.
Unfortunately, Altman’s masterful storytelling makes him appear slick and elusive, concealing the facts and making them difficult to pin down, which is likely by design. The hype, greed, and arrogance permeating the AI industry from Silicon Valley have overtaken rational thinking. This irrational exuberance clouds judgment, distracts us, and consolidates power in the hands of a few billionaires who dominate our lives.
The emerging AI aristocracy pushes a top-down agenda, steering technology away from democratization toward a digital dictatorship — concentrated power held by only a handful of wealthy elites.
The lack of proper scrutiny, meaningful regulation, and attention to AI safety and other important concerns allows hype and lies to drive OpenAI’s well-crafted AGI narrative: that artificial general intelligence (AGI) will match and then surpass human intelligence. None of this, of course, is backed by any scientific evidence or discovery, but the narrative Altman has crafted has captured imaginations, allowing him to raise enormous sums of money on unsubstantiated claims.
So, when people talk about AI today, they are often uninformed and see AI as a broad, vague category of technologies. While the term “AI” is catchy and seems suitable for systems mimicking human actions, these systems operate very differently from real human thinking and reasoning. Labels like machine learning, deep learning, and neural networks are theories, not discoveries grounded in science. AI right now is about raw power driven by a narrow ideology that involves the theft of intellectual property without proper credit or payment. And today it is more associated with fueling online fraud, deepfakes, and porn than anything else.
OpenAI still faces a real problem with its business model: it is not an enterprise solution and was never built to be one. Unlike platform giants such as Apple, Google, Microsoft, and Salesforce, OpenAI has no enterprise product or operating system to offer the market — just more hype with AGI.
JPMorgan’s research highlights that “The company never wanted ChatGPT to settle into the well-established grooves of the software-as-a-service (SaaS) sector; instead, it says that OpenAI aims for a much larger battle for control over user interaction itself.” So, Altman’s pursuit has always been about dominating, with a self-absorbed, narrow vision of the future.
However, this vision is proving problematic to achieve and is a strategic mistake because, unlike the enterprise offerings of Google, Microsoft, Apple, and others, ChatGPT is not an enterprise product! It’s a commodity product, abstract at best.
ChatGPT is, at the end of the day, a product for quick summaries: a glorified search engine. It is dangerous because it hallucinates; it doesn’t know the real world, so it can’t be trusted, it creates more work for users, and its output often must be fact-checked. Where is the efficiency and value in that?
JPMorgan adds “that any of those companies,” i.e., Apple, can develop their own AI when they see a strategic and competitive fit and an advantage in doing so. Apple is not falling behind on AI development; it is simply not rushing into anything, being smart and patient rather than getting caught up in the hype and an AI arms race. Apple has always been strategic and smarter than most, and with north of $63 billion in cash, it can afford to be patient even without an AI breakthrough of its own.
JPMorgan notes that AI/LLM is a “crowded space right now, with many varieties of generative AI models released in the past 18 months,” which further points to the commoditization of the industry.
Google’s Gemini 2.5 and China’s DeepSeek R1 (developed at a fraction of the cost) match or surpass OpenAI on benchmarks for ‘reasoning,’ coding and cost efficiency.
Price wars have erupted: OpenAI cut o3 model prices by 80% after Gemini’s Pro model outranked it in user rankings, highlighting how hard it is to differentiate core models. Again, all characteristics of a commodity industry.
OpenAI’s main challenge is fundamental: its need for ever-larger infrastructure and immense natural resources to fuel advancing models is simply unsustainable. It exacerbates global socioeconomic imbalances, and the cost to the planet is untenable. The trade-offs just don’t work or present value for human progress and the planet. Not even close!
The world is wasting a whole lot of time, resources, and capital on one man’s amour propre and his narrow view of a future world.
A Canary in the Coal Mine
Steve Jobs once said that the best way to build a technology business that people want is to start with the customer experience, seek to understand authentic demand or create it, and then work backward to the technology. OpenAI has done the opposite: it first developed the technology it wanted, in the image of its founders, and now it is trying to get everyone to adopt it with brute force.
As conflicts escalate and warning signs emerge, particularly around training data and copyright issues, this could be the canary in the coal mine. First, OpenAI is far from being profitable in a largely unregulated wild west environment that allows it to essentially steal content to train its models for free. What will happen to the business model when they have to pay?
That’s not an engineering problem; it’s a fundamental business model problem, and it’s not going away. And when your entire business model is dependent on copyrighted materials, and your risk mitigation strategy is hoping that nobody enforces copyright laws, you’re not a revolutionary business; you’re in denial.
It’s About Empire
Sam Altman is a well-known admirer of Napoleon. He mentioned this on stage at an event in 2019, saying that The Mind of Napoleon was one of the best books he’s ever read. Napoleon, of course, was the French military leader, narcissist, and self-proclaimed Emperor of France, who later sought to conquer Europe. But eventually, he went too far, self-destructed, and was exiled.
Altman has expressed fascination with Napoleon and human psychology, and with how Napoleon built himself up to become Emperor of France. Altman states that he studied Napoleon’s strategies for building systems and gaining people’s loyalty. Similarly, Altman aims to create his empire in his own image. It’s not about OpenAI improving humanity or about altruism.
The creation of OpenAI as a nonprofit at its founding (which included Elon Musk as a co-founder) was largely a distraction to mask the real objective: a narrow, self-absorbed, superficial worldview, with Altman as the top dog. As with Napoleon, the story was supposedly about the good of the people and about technology serving society — yet it’s really about ego, glory, and power!
In the insightful and detailed book by journalist Karen Hao, titled Empire of AI — Dreams and Nightmares in Sam Altman’s OpenAI, Hao outlines Altman’s approach to empire-building. She explains that Altman first seeks to centralize talent by offering a grand vision and making people feel part of achieving artificial general intelligence (AGI). The glory, power, fame, and fortune that come with that attract smart people. Altman laid the groundwork by gathering followers in a cult-like manner: “Successful people create companies. More successful people create countries. The most successful people create religions.”
That quote above is attributed to Qi Lu, but Altman said, “It got me thinking, though — the most successful founders do not set out to create companies. They are on a mission to create something closer to a religion, and at some point, it turns out that forming a company is the easiest way to do so.”
Secondly, achieving such big goals also involves consolidating capital and resources and removing regulations and dissent: “Who will control the future of AI?” Altman wrote in July 2024 in an op-ed for The Washington Post, asserting that advanced AI innovation must be led by ‘US technological superiority,’ in coordination with other Western nations. A typical Western supremacy narrative — the belief that they are the only ones fit to lead and make all the important decisions. Altman concludes that the world will benefit from Western scientists’ brilliance and the widespread advantages this will bring. Napoleon thought the same, seeking to conquer Europe — with France and himself as emperor at the helm.
Altman’s increasing political engagement also reveals his empire ambitions and a new level of corruption. Since Trump’s second term began, for example, Altman has been able to advance his political influence to further his expansion ambitions: the White House announcement of Stargate and travelling to the Middle East with Trump, for starters. Altman and other tech billionaires, like Alex Karp, the CEO of Palantir, a mass surveillance firm primarily working for the US government and CIA, orchestrated a major grift: first, Trump signed deals for his family business, then the tech billionaires negotiated deals in their own interests.
Hao points out in her book that the most remarkable move OpenAI has pulled off is the bait-and-switch. It was founded as a nonprofit meant to be “unconstrained by a need to generate financial return” and to ensure “that everyone should benefit from the fruits of AI.” Today, however, OpenAI is registered as a for-profit company. “We need to raise more capital than we’d imagine,” Altman proclaimed, pushing his grand vision even harder:
“AGI is just around the corner,” and “We are now confident we know how to build AGI as we have traditionally understood it,” he wrote in a blog post on January 6, 2025:
“The world is moving to build out a new infrastructure of energy, land use, chips, datacenters, data, AI models, and AI systems for the 21st-century economy. We seek to evolve to take the next step in our mission, helping to build the AGI economy and ensuring it benefits humanity.”
No! The world is not moving that way; Altman is trying to pull it there, to fulfill his Napoleon-like ambitions for empire.
AI Colonialism & Dictatorship
“In the 21st century, to conquer a country, you don’t need soldiers; you need data. There are no longer independent countries, but data colonies. Economically, the danger is that if you harvest the data of customers, you can monopolize an industry.”
— Yuval Noah Harari
Colonialism in the 21st century is no longer just about stealing resources like sugar, cotton, or minerals and extracting wealth from the Global South; a new form of colonialism, AI Colonialism, now exists, and it is accompanied by a big tech dictatorship. It looks like the free extraction of vast freshwater supplies to support Google’s large data centers in Chile, or OpenAI paying sweatshop wages to workers in Kenya for data annotation.
It includes the blatant theft of content from publications, artists, writers, and others — the data used to fuel these large language model (LLM) monsters! OpenAI disregards copyright laws and fair compensation. This is an AI dictatorship backed by the US government and controlled by a few big tech billionaires. It is a new form of oppression and exploitation, and the AI aristocracy behind it is now among the world’s biggest exploiters, harming both the planet and its people.
Historian and best-selling author Yuval Noah Harari warns that as corporations and governments rush to adopt AI, the result is an amassing of unchecked power, and that the concentration of data and knowledge ecosystems in the hands of just a few billionaires poses an existential threat to democracy and humanity!
Ultimately, he warns, a digital dictatorship led by big tech firms and their billionaires aims to first consolidate their power through influence peddling and then dominate the economy. Historically, colonizers seized land from Indigenous peoples and enacted laws to restrict their use and benefit from it.
Today, it’s not land but intellectual property and data — what some call cloud feudalism — where Big Tech harvests data at zero cost and sells it back as cloud services. This bears a chilling resemblance to the Triangular Trade, particularly the transatlantic slave trade, which has profoundly shaped global economics, politics, and social hierarchies — dynamics that persist to this day. It is arguably the most significant event of the last thousand years, maintaining the North-South power structures that define our world.
Be cautious of what is happening around you, because history is swift and fluid, repeating itself when you ignore it. What once seemed unimaginable can rapidly become your reality, and it can threaten your very existence, pushing us toward living under an AI dictatorship. While times and technologies change, human nature remains the same.
In the 21st century, Hao notes, “Controlling knowledge production fuels influence; growing influence accumulates resources; amassing resources secures knowledge production.”
Resistance against AI colonization should be a top priority. We must oppose the centralization of knowledge within a few corporations and billionaires and reject the idea that large language models (LLMs) are the inevitable future. That’s not true.
Instead, if AI is deemed useful, we should focus on smaller, task-specific models that have proven to be more practical, efficient, less prone to hallucinations, and more environmentally friendly than larger models.
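As a concrete illustration, here is a minimal sketch of what a small, task-specific model looks like in practice, assuming the Hugging Face transformers library and its publicly available DistilBERT sentiment checkpoint (neither is named in this article): a narrow classifier that does one well-defined job on modest hardware, rather than a general-purpose frontier model.

```python
# Minimal sketch of a small, task-specific model (illustrative assumption, not from the article).
# A DistilBERT classifier fine-tuned for sentiment has roughly 66M parameters and runs on a
# laptop CPU, in contrast to frontier LLMs that require massive data-center infrastructure.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("The product arrived late and support never replied."))
# Example output: [{'label': 'NEGATIVE', 'score': 0.99...}]
```

The design point is narrow scope: a model trained for one task can be evaluated, audited, and run cheaply, which is the trade-off the smaller-model argument above is pointing at.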
In 2023, MIT Technology Review Insights released a report titled The Great Acceleration: CIO Perspectives on Generative AI, which shows that smaller, focused language models can outperform larger ones. But the AI aristocracy doesn’t want you to know about that research.
OpenAI, Altman, and the Tech Bros don’t get to decide what’s best for us; we can and must choose and speak for ourselves. We need to oppose OpenAI’s narrow vision of AI. Nothing in this world is set in stone; everything depends on the choices we make and the effort we invest. The same goes for technology — it’s about making decisions that ensure long-term sustainability, not accepting the top-down push from big tech elites. We need to flip the script — empowering the people from the bottom up.
The brute-force approach to AI only succeeds if we allow it, turning it into an instrument of digital violence, asserts Professor Emeritus Charles Edward Wilson from Harvard Business School. “Altman is no genius and has little vision beyond his baseless rhetoric and the billions of dollars from greedy or guileless investors,” he states.
Nobel Laureate Daron Acemoglu, drawing from his research, The Simple Macroeconomics of AI, adds, “AI might automate only 5% of tasks and add just 1% to global GDP over the next decade.” While many are lured by promises that AI will rapidly transform everything, the macroeconomic facts tell a different story, he explains.
Conclusion
Dismantling the AI hype requires more transparency, education, the dispelling of myths and industry lies, and standing up for yourself in this economy.
MIT professor Joseph Weizenbaum said in the 1960s, “Once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away.”
Therefore, we must be highly skeptical of the AI promoters, pursue transparency and alternative research and systems, and be crystal clear about what AI truly is and what it is not — what we want it to be, not what the tech billionaires want us to believe.