The Socratic Philosophy in 6ai
- perrydouglas9
- Oct 20
- 7 min read
Updated: Oct 21
Amplifying the Socratic process through technology

“To be uncertain is to be uncomfortable, but to be certain is to be ridiculous,” said Socrates, and the only certain thing is that nothing is certain. We live in an age where many technology leaders try to convince us that their version of artificial intelligence and large language models can offer certainty. However, the only thing these models have delivered is certain hallucinations.
6ai Technologies’ my6ai software is based on the Socratic process, a form of dialogue developed by the ancient Greek philosopher Socrates in which probing questions guide people to critically examine their own beliefs and assumptions. It’s a collaborative process that emphasizes critical thinking and self-reflection rather than merely providing quick, summarized answers. By continually questioning and exploring the merit of ideas, the method helps individuals deepen their understanding of a topic, which often reveals inconsistencies in their logic.
The 6ai Socratic-process IP takes an augmenting approach, powered by six-step applied intelligence (ai), which uses generative AI as an amplifying tool, applied practically, purposefully, and responsibly, to enhance human intelligence capacity, capability, and ingenuity.
6ai’s underlying philosophy is to help anyone or any organization discover fact-based insights and build comprehensive strategies through a process of logic. Logic is the rules-based, structured examination of reasoning. 6ai uses algorithms/mathematics coupled with philosophy to turn the abstract into insight, and insight into strategy.
Our Empathetic Socratic Conversational Method (ESCM) of questioning, with its rigorously authentic analysis, provides a clear understanding of the subject matter for both the user and the software: the software must first learn before it advises. This allows for the fundamental exploration of reasoning: logic, truth, and the nature of reality, providing a framework for examining the complexity stemming from an inquiry.
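To make the idea concrete, here is a minimal, purely hypothetical sketch of a Socratic questioning loop in Python. The Inquiry class, the PROBES list, and the answer_fn callback are illustrative inventions for this post, not 6ai’s actual ESCM architecture; the point is only that the software gathers answers before it advises.

```python
# Hypothetical sketch of an ESCM-style questioning loop (illustrative only;
# the real 6ai architecture is proprietary and not described in this post).
from dataclasses import dataclass, field

@dataclass
class Inquiry:
    claim: str
    assumptions: list[str] = field(default_factory=list)
    evidence: list[str] = field(default_factory=list)

PROBES = [
    "What do you mean by the key terms in this claim?",
    "What assumptions does the claim rest on?",
    "What evidence would confirm or refute it?",
    "What follows if the claim is true? If it is false?",
]

def socratic_loop(inquiry: Inquiry, answer_fn) -> Inquiry:
    """Learn before advising: gather the user's answers to each probing
    question before any strategy is proposed."""
    for probe in PROBES:
        response = answer_fn(probe, inquiry.claim)  # the human answers each probe
        inquiry.evidence.append(response)
    return inquiry
```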
ESCM aligns with the 18th-century philosopher Immanuel Kant, who explored the limits of human reason and argued that metaphysics, knowledge beyond experience, is impossible. Kant built on the 17th-century mathematician and philosopher René Descartes, who famously said “I think, therefore I am,” confirming that the very act of thinking proves one’s own existence. Thinking and reasoning require consciousness, and without consciousness there can be no intelligence.
The 6ai process distinguishes between knowledge, opinions, and belief systems. It seeks to ascertain the practical conditions under which any given claim can be justified, offering thoughtful, clarifying, fact-based responses instead of quick, superficial summation. It examines the nature of the reality surrounding the inquiry, because understanding the nature of the question is a first principle of learning and of asking better clarifying questions.
The Fallacy of Neural Networks and LLMs
After the release of GPT-5, Cal Newport, professor of computer science at Georgetown University, asked in an article in The New Yorker what if this is all there is to AI, “what if A.I. doesn’t get much better than this,” and whether it has indeed hit the scaling wall (which it did, a while ago). Newport states that “once the bubble begins to burst, everyone runs away from the technology even if it has some valuable uses.”
The underlying narrative of the LLM hype has been the simplistic idea that if you make models larger and larger, they become more intelligent: superintelligence, then AGI.
This oversimplistic, cursory hypothesis coming out of OpenAI was the genesis of ChatGPT, from GPT-1 through GPT-5. The theory went: large language models (LLMs) would get better as they grew in size, following a power law resembling a hockey-stick trajectory. Keep building larger and larger models, train them on ever-larger data sets, and, like magic, they will perform amazingly well and we will get to AGI. This has not worked out so well, to say the least.
Professor Emeritus Charles Edward Wilson from Harvard Business School says, “Altman is no genius and has little vision beyond his baseless rhetoric and the billions of dollars from greedy or guileless investors.”
Gary Marcus, an AI entrepreneur and professor emeritus of psychology and neural science at N.Y.U., and a well-known truth-teller and AI critic, has been warning us for at least ten years that the AI narrative is all hype and no substance. Marcus notes that “the so-called scaling laws aren’t universal laws like gravity but rather mere observations,” a nice theory.
Deductive Logic
The applied intelligence Socratic process is built on formal deductive logic, or formal reasoning: a step-by-step process concerned with the form, structure, and premises of an inquiry. LLM-based models like ChatGPT, by contrast, are built on the fallacy of informal logic or reasoning, in which no rules of deductive reasoning apply; the model manipulates or makes up its own rules instead.
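For readers who want the distinction made concrete, here is a minimal Python sketch of formal deduction, forward-chaining with modus ponens. The function and the toy facts are illustrative, not part of the 6ai IP, but they show what rule-based provability means: the conclusion follows from the form of the premises, not from pattern similarity.

```python
# Minimal illustration of formal deduction (modus ponens): the conclusion
# is guaranteed by the *form* of the premises, not by learned patterns.
def modus_ponens(premises: set[str], rules: list[tuple[str, str]]) -> set[str]:
    """Derive everything that follows from 'if p then q' rules, step by step."""
    derived = set(premises)
    changed = True
    while changed:
        changed = False
        for p, q in rules:
            if p in derived and q not in derived:
                derived.add(q)  # q is now provable, not merely probable
                changed = True
    return derived

facts = {"Socrates is a man"}
rules = [("Socrates is a man", "Socrates is mortal")]
print(modus_ponens(facts, rules))  # {'Socrates is a man', 'Socrates is mortal'}
```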
The 20th-century philosopher and mathematician Bertrand Russell saw informal logic of the kind used in LLMs as akin to pure mathematics, which can produce elegance. He defined pure mathematics as the study of hypothetical objects that possess certain general properties, without regard to whether they exist empirically. For Russell, pure mathematics was the analysis of formal logic, where deductions are made from initial hypotheses and inference; it is distinct from applied mathematics (i.e., applied intelligence), which connects mathematical reasoning to the real world.
Einstein said that his work “falls into two parts. First discover the principles and then draw the conclusions which follow from them.” Hence it is reasonable to conclude that eking patterns out of data to make judgments is not very scientific; it’s misleading. Informal reasoning, therefore, always rests on conceptualized statements, which means they are, if only implicitly, theoretical constructions.
For applied intelligence (ai), therefore, when developing a strategy, the process remains the same in every case: break complexity down into many simpler pieces, then solve them separately, step by step. This is a straightforward step-by-step proof, as in calculus, i.e., applied intelligence proofs. These proof tests check whether a statement is true by logically moving from the given information to a conclusion.
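As a toy illustration of that decomposition (not 6ai code), the midpoint Riemann sum below answers a hard question, the area under a curve, by splitting it into many trivially simple slices and summing the solved pieces.

```python
# Sketch of "break complexity into simpler pieces," in the spirit of calculus:
# approximate an area by summing many simple rectangles (midpoint rule).
def riemann_sum(f, a: float, b: float, n: int = 100_000) -> float:
    """Integrate f over [a, b] by splitting it into n simple slices."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

# Each slice is trivial; the sum of the solved pieces answers the hard question.
print(riemann_sum(lambda x: x * x, 0.0, 1.0))  # ~ 1/3
```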
The main point here is that formal deductive-logic rules are embedded in the 6ai process. The scientific methodology is interested only in the facts, in finding the objective truth, identifying and validating the inquiry on the first principles of logic: if the premises are true, the conclusion will be true as well.
Informal logic, on the other hand, is based on inference, connecting the dots. This is the fundamental weakness of LLMs like ChatGPT and Claude, which excel at informal reasoning but are ill-structured and built on the fallacy of pattern recognition. It involves analogical thinking and narrative problem-solving rather than strict deductive logic and proof.
It gives the appearance of intelligence by drawing on patterns learned from vast text corpora, expressing summarizations rather than formal logic and rules. This is exactly why these systems hallucinate and can’t be trusted; they often have to be fact-checked, creating more work and aggravation.
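A toy bigram model makes the point visible. The sketch below (illustrative only, far simpler than a real LLM) generates fluent-looking continuations purely from observed word patterns, with no rule of deduction anywhere, which is why fluency alone proves nothing about truth.

```python
# A toy bigram model: fluent-looking continuations from observed word patterns,
# with no rules of deduction anywhere.
import random
from collections import defaultdict

def train_bigrams(corpus: str) -> dict[str, list[str]]:
    words = corpus.split()
    table = defaultdict(list)
    for w1, w2 in zip(words, words[1:]):
        table[w1].append(w2)  # record what tended to follow w1
    return table

def generate(table: dict[str, list[str]], start: str, length: int = 10) -> str:
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))  # the seen pattern wins, true or not
    return " ".join(out)

table = train_bigrams("the model predicts the next word the model saw most")
print(generate(table, "the"))
```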
By contrast, the 6ai IP involves provable, rule-based deduction: symbolic logic. LLMs struggle with this type of reasoning, with proofs and rigour, because they were not designed for formal logic. They mimic reasoning textually and lack the consistent validity and provability of genuine formal reasoning systems, which is intrinsic to the 6ai applied intelligence IP.
The following is the typical delusional nonsense that OpenAI/ChatGPT and its acolytes promote, and it illustrates perfectly the difference in efficacy between formal logic, i.e., applied intelligence (ai), and informal logic, artificial intelligence (AI).
OpenAI’s Sebastien Bubeck (earlier the first author of the oversold paper Sparks of Artificial General Intelligence, which dubiously alleged that GPT-4 “could reasonably be viewed as an early version of an artificial general intelligence (AGI) system”) made a HUGE claim on Friday: that he had solved a whole batch of unsolved Erdős problems (a famous set of mathematical conjectures). That would indeed be a big deal.
However, this was pure imagination, self-hallucination: the belief that the system had discovered original solutions to “open problems.” All that really happened is that GPT-5 crawled the web for solutions to already-solved problems.
Within hours, the math and AI communities revolted:

- Sir Demis Hassabis called it “embarrassing”.
- The next day, Bubeck tried to backtrack, deleting the original tweet and claiming he had been misunderstood.
The Impossible Language
Noam Chomsky, former MIT professor and the father of modern linguistics, calls LLMs the Impossible Language, intelligible only to themselves; they don’t understand meaning, which is necessary for intelligence.
There is nothing new, he says, in ChatGPT. It offers no insights; everything is pre-programmed, just a recipe. LLMs don’t understand natural human language, nor can they truly translate it. They are not reliable. The entire underlying concept of neural networks, for Chomsky, is replication and impersonation, the appearance of intelligence: a constructed illusion, a parlour trick, nice theories that don’t work in the real world.
In the New York Times bestseller Infinite Powers: How Calculus Reveals the Secrets of the Universe, Professor Steven Strogatz says that the central utility of calculus is to decipher the universe’s idioms and nuances. Harnessing calculus helps us make sense of the universe and shape our world.
When we come across elegant, hyped-up systems like LLMs that don’t adhere to form, structure, and mathematical rules, and instead make up their own rules, this is not science; we know better. It’s a fallacy.
LLMs can master text, but without meaning or self-awareness they remain disconnected from the real world. Lacking any meaningful, adaptive model of the real world, they are syntactic compressors of words, incapable of thinking and reasoning.
The LLM proposition asserts many things, but when we try to get inside it to understand and prove the validity of its claims through quantification, it can’t be done, because the entire system has been built on top-down hype, like a new religion.
GPT stands for Generative Pre-trained Transformer, a type of AI model that can produce human-like text. Generative means it can create content, Pre-trained means it was trained on a massive dataset, and Transformer refers to the specific neural-network architecture it uses to understand context and the relationships between words. This is an inescapable box. The 6ai applied intelligence (ai) Socratic process is the opposite of GPT.
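For the curious, here is a minimal NumPy sketch of the scaled dot-product attention at the heart of the Transformer. Real GPTs stack many such layers with learned weights; even this skeleton shows the mechanism is weighted pattern-mixing, not logical deduction.

```python
# Minimal scaled dot-product attention (the "T" in GPT), in plain NumPy.
# Real models add learned projections, many heads, and many layers.
import numpy as np

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Each position mixes the values V, weighted by query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                            # pairwise similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # softmax rows
    return weights @ V                                         # weighted mixture

# 3 tokens, 4-dimensional embeddings (self-attention: Q = K = V = x)
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
print(attention(x, x, x).shape)  # (3, 4)
```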
For Socrates, the best way to learn is through open-ended questioning and objective conversation, in a real-world context with meaning, which leads to better insight generation. The method tests what is being said, dissects claims, and establishes a mutual understanding of the topic. It extends to all fields of discourse, from business to law, politics, and personal development.
As Chomsky says, a machine doesn’t know meaning, so communication with ChatGPT, by its design, can never be authentic. Intelligence can only come from conscious humans. GPTs are essentially big memory banks; unlike humans, machines can’t pick up the idiolects or idiosyncrasies of language, and thus can’t align with the authentic meaning in language.
6ai employs scientific methodologies and self-awareness through human user engagement. AI has no consciousness, and intelligence requires consciousness. It doesn’t know that it doesn’t know, and it lacks the intelligence to understand that it doesn’t know. LLMs are doomed by their own training.
The 6ai Socratic process integrates and leverages customized AI agents within its Empathetic Socratic Conversational Method (ESCM), the core architecture, built to scale.
6ai authentic strategy development requires human consciousness, critical thinking, reflection, and analysis; it’s a conversational dance between human and software, a genuine partnership to develop and execute the user’s core vision. This is the 6ai mission.


