applied intelligence | ai
- perrydouglas9
- Oct 30
- 8 min read
Updated: Oct 31
Modern Philosophy - Ancient Wisdom

“…applied intelligence is not about brilliance, but the activity
of learning, not perfection but the art of getting there,
confidently, which provides the most fulfilment.”
—Perry C. Douglas | ai
What is applied intelligence | ai
…applied intelligence (ai) involves applying critical and independent thinking to transform information into knowledge, fostering insight and rational strategic decisioning.
It is the opposite of ChatGPT: ai is built on Formal Logic (facts and structure), whereas ChatGPT relies on Informal reasoning (guessing the next word), making it a glorified search engine. That’s why it hallucinates so often.
ai & ancient wisdom
The story of philosophy is epic, spanning civilizations and continents. It’s universal and not limited to one group or civilization. Epistemology, or the ‘theory of knowledge’, and Metaphysics, the enquiry into the nature of reality and existence, are central to philosophy and wisdom and are present everywhere and in everything. Logic, therefore, is the science of valid and sound reasoning, serving as the fundamental tool of philosophy, which also forms the basis of applied intelligence | ai.
To be intelligent, one must be conscious, and among animals, only humans possess consciousness. Therefore, language-based communication for authentic meaning can only be carried out through intelligent inquiry by humans. Accordingly, applied intelligence (ai) is fundamentally a Socratic process, named after the philosopher Socrates: a form of dialogue that utilizes a series of open-ended, probing questions to help one critically examine one’s beliefs, uncover assumptions, and develop a deeper understanding of the topic.
The Socratic conversational method (SCM) prevents unnecessary reinventing of the wheel and helps to avoid making needless errors in judgment when pursuing knowledge. The adherence to universal rules of nature is the bedrock of ai—ai provides a practical, disciplined framework for developing the right questions to ask—assumptions, methods, and claims of the inquiry itself—ensuring the scientific method underpins the entire ai process.
For ai, scientific reasoning is deductive, and all science must be open to refutation through the presentation of evidence. However, mathematics alone is not sufficient. Math requires philosophy to provide meaning—new understandings based on sound principles. So, ai, at its core, is about learning and applying that knowledge to the problem one is trying to solve, using scientific principles to examine complexes of empirical facts and certain general features that allow for the precise formulation of decision-making.
The ai process progresses similarly to a mathematician working through proofs to solve an equation and develop a strategy. Strategy formulation is a statement about phenomena; it does not exist independently; it is built on facts, conceptualized, and dependent on other variables being true. Therefore, ai-generated strategies stem from fact-based insights that can be transformed into comprehensive plans.
The foundational principles of ai are not new; they build upon the ideas of intellectual giants. Einstein, for example, emphasized that discovery principles are central to the free invention of human intellect, a new framework to take us beyond the boundaries that we ourselves set for our own minds. In brief, ai is a consistent intellectual framework that sharpens our dreams and aspirations, vividly, and transforms them into new, bold ideas that can transform our world.
Reflection on enquiries is part of the ai Socratic process, which aims to understand the right problem to solve or strategic question to answer. Enabling the user to gain understanding and perspective, eliminating doubt, obscurity, and ignorance, is the initial frontier to cross for strategy exploration and development. The Socratic method is embedded in ai, with a disciplined structure that optimizes both questions and answers.
Thales is regarded as the first philosopher of ancient Greece in the pre-Socratic philosophy era and was the first to apply the scientific method: non-reliance on legends, myths, ancient scriptures, traditions and other teachings. Thales relied instead on observation and reason; accordingly, he is associated with the beginning of philosophy—the very beginning of science.
The applied intelligence Socratic process is built on Formal Deductive Logic aligned with a step-by-step process which includes form, structure and premise. Conversely, LLMs/ChatGPT are based on the fallacy of Informal Logic, a form of reasoning where no rules of deduction apply and where cursory summarizations and hallucination are the norm.
One of the most concerning negative factors in the evolution of artificial intelligence (AI) today is the unbounded race to replace humanity. Eighteenth-century philosopher and mathematician Immanuel Kant believed in the fundamental right to dignity and individuality of every human. This, therefore, should never be taken away by technology or anything else. Kant wrote that we must actively protect and enforce our own individuality and humanity, especially when it becomes threatened.
Kant believed that enlightenment was fundamental to the human capacity for growth, necessary for generating the insight that shapes our world, which is a primary objective of ai.
We must “Have the courage to use your own understanding,” Kant said, urging people to “think for themselves,” and avoid bias, ignorance, intellectual laziness, and blind obedience to others.
Kant was staunch about our thinking for ourselves and having frameworks to help us do so. Adherence to science was not optional for Kant. In his critique of reason, he emphasized critical thinking, with a willingness to be self-critical, which is what the Socratic process endeavours to do. Kant believed that nothing was more important than acquiring knowledge, which provides societies with the best chance to build a better world.
According to Kant, all people seek happiness; for Aristotle, “Happiness depends upon ourselves,” and happiness is the central purpose of human life and an end in itself. Technology should not infringe on those rights, and a small group of people doesn’t get to decide what is or isn’t good for us. We can and should engage intellectually for ourselves rather than depend on others to craft our thinking.
Once given the right tools, there is no doubting the human capacity to do extraordinary things—ai is your partner in crafting your own vision and happiness, and the right to pursue your individuality!
6ai - a modern philosophy solution
Relying on historical data, processed through informal reasoning and communicated as text, reflects a fundamental misunderstanding of how thinking works and what authentic intelligence is. In mathematics and physics, which explain the universe, the laws of nature don’t change just because technology does. Informal reasoning is not concerned with what is true or how that information was obtained; it is argumentative: about speed in predicting the next word, sentence, and summarization for the appearance of intelligence.
Machine learning/LLM models are built on retrieving historical information that provides snapshots in time, telling us what happened but not why or what’s possible in the future. LLMs are limited by their own training, so they can’t tell us anything new. They’re an amazing memory bank, yes, but memorization is not intelligence.
Spitting out summarizations and hallucinating all day long is not what intelligent and productive people want; they want an authentic partner to help with critical decisioning and insight generation.
The symmetry in ai is the reliability of its framework; those governing principles are consistent with nature and remain the same, regardless of the nature of the inquiry. Without such form and structure in the pursuit of reasoning, you get hallucinations, i.e., ChatGPT.
The ai process begins by trying to understand the nature of the problem, then develops an architectural approach that addresses the structural aspects of the inquiry to generate insightful responses. By leveraging modern AI techniques and incorporating verification layers, confidence scoring, and hybrid approaches that blend symbolic reasoning, we focus on model interpretability, fact-checking modules, and external knowledge grounding to mitigate hallucinations.
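As an illustration of what a verification layer with confidence scoring might look like, here is a minimal sketch in Python. The names (`Claim`, `verify`, `EVIDENCE_DB`) and the scoring rule are assumptions for illustration, not 6ai's actual implementation: each claim is scored by how many of its cited sources can be found in a grounding store, and ungrounded claims are flagged rather than passed through.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    sources: list  # evidence identifiers cited in support of the claim

# Toy grounding store: source identifiers treated as verified.
EVIDENCE_DB = {"doc-1", "doc-2", "trial-A"}

def confidence(claim: Claim) -> float:
    """Fraction of the claim's cited sources found in the grounding store."""
    if not claim.sources:
        return 0.0  # a claim with no evidence scores zero
    hits = sum(1 for s in claim.sources if s in EVIDENCE_DB)
    return hits / len(claim.sources)

def verify(claim: Claim, threshold: float = 0.5):
    """Accept the claim only if its confidence clears the threshold."""
    score = confidence(claim)
    status = "accepted" if score >= threshold else "flagged"
    return status, score

print(verify(Claim("Drug X reduced symptoms", ["trial-A", "doc-1"])))  # ('accepted', 1.0)
print(verify(Claim("Drug X cures everything", ["blog-post"])))         # ('flagged', 0.0)
```

A real grounding layer would query documents rather than a set of identifiers, but the design point is the same: a claim that cannot be traced to evidence never reaches the user unmarked.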
We tackle the practical challenges of formal AI systems, such as brittleness, dependence on precisely defined problem spaces, and difficulties scaling to ambiguous, unstructured real-world problems by emphasizing the context for analysis and decision-making, where generative AI often fails.
The six steps in ai are designed around goal-driven, autonomous agents that drive strategic thinking as an integrated part of the process. They follow multi-step objectives (planning, gathering relevant data, analyzing, reflecting, and more), relying on structured reasoning-chain frameworks rather than free-form text guessing.
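A goal-driven agent loop of this kind can be sketched as follows. This is an illustrative toy, not the 6ai agents themselves; the step names (`plan`, `execute`, `reflect`) stand in for the plan/gather/analyze/reflect cycle described above, and every executed step is logged so the run can be audited afterwards.

```python
def plan(goal):
    """Break the goal into an ordered list of steps (toy planner)."""
    return [f"define '{goal}'", f"gather data on '{goal}'", f"analyze '{goal}'"]

def execute(step, log):
    log.append(step)  # record each step for later auditing
    return f"result of {step}"

def reflect(results):
    # A real system would check results against the goal; here we just
    # verify that every planned step produced an output.
    return all(r.startswith("result of") for r in results)

def run_agent(goal):
    log = []
    results = [execute(step, log) for step in plan(goal)]
    return {"goal": goal, "steps": log, "complete": reflect(results)}

outcome = run_agent("market entry risk")
print(outcome["complete"])  # True
```

The contrast with free-form generation is that the steps exist as explicit structure: they can be inspected, replayed, and checked individually rather than inferred from prose.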
The ai Socratic process remains human-centric: human interpretation, context-awareness, and human oversight stay critical in high-stakes decision-making alongside AI support, regardless of the AI’s internal logic style. It is highly adapted to the person using it, leveraging persistent context and memory to retain a long-term understanding of clients’ ontologies and internal taxonomies, with continuous learning underpinned by updates to the internal model from past projects, data and outcomes.
The process begins by framing the core question or problem using formal logic: turning ideas into a logical proposition ("If A, then B") and defining terms, assumptions, and required outputs very clearly, much as one would set up a mathematical proof or outline an experimental procedure.
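A minimal sketch of that framing step, under assumed field names: the "If A, then B" proposition is captured as an explicit object, with its definitions, assumptions, and required outputs written down before any reasoning happens.

```python
from dataclasses import dataclass, field

@dataclass
class Proposition:
    antecedent: str                                    # A: the condition
    consequent: str                                    # B: what follows if A holds
    definitions: dict = field(default_factory=dict)    # precise meanings of key terms
    assumptions: list = field(default_factory=list)    # what is taken as given
    required_outputs: list = field(default_factory=list)

    def statement(self) -> str:
        return f"If {self.antecedent}, then {self.consequent}."

# Hypothetical framing of a strategy question:
p = Proposition(
    antecedent="interest rates rise by 1%",
    consequent="refinancing demand falls",
    definitions={"refinancing demand": "monthly applications, industry-wide"},
    assumptions=["no regulatory change during the period"],
    required_outputs=["estimated demand change", "confidence interval"],
)
print(p.statement())
```

Because the terms and assumptions are explicit fields rather than implied context, later steps can be checked against them, which is exactly what an LLM's implicit "thesis" does not permit.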
In contrast, LLMs take a less explicit approach, drawing immediately from vast patterns they've seen before, which means their "thesis" is often implicit or inferred rather than explicit and structured.
The six-step (6ai) process-IP reasons through each step systematically, much like a mathematician: it applies rules, breaks the problem down, and checks each inference logically. For example, when answering a scientific question, it retrieves the relevant facts, cites data sources directly, and shows how it connects premises to the conclusion. And the reasoning in 6ai is domain-tuned, with access to domain-specific resources and templates to guide and inform its process.
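A reasoning chain of this kind can be sketched with a toy structure (assumed names, not 6ai's internals): every inference records its premises, conclusion, and cited source, and an inference is rejected if it rests on a premise that was never established earlier in the chain.

```python
class ReasoningChain:
    def __init__(self):
        self.steps = []

    def infer(self, premises, conclusion, source):
        """Add an inference, refusing premises not previously established."""
        known = {s["conclusion"] for s in self.steps}
        for p in premises:
            # A premise must be either a stated given or a prior conclusion.
            if p not in known and not p.startswith("given:"):
                raise ValueError(f"unsupported premise: {p}")
        self.steps.append(
            {"premises": premises, "conclusion": conclusion, "source": source}
        )
        return conclusion

chain = ReasoningChain()
chain.infer(["given: trial data shows 40% reduction"],
            "drug reduces symptoms", source="trial-A")
chain.infer(["drug reduces symptoms"],
            "drug meets efficacy bar", source="FDA guidance")
print(len(chain.steps))  # 2
```

The check in `infer` is the code-level analogue of "every conclusion is tied to its premises": an unsupported leap fails loudly instead of being smoothed over.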
6ai agents have access to a custom toolset for querying datasets, scraping available documents, running code for analysis (with potentially custom libraries for economic and market frameworks, heuristics, etc.), doing simulations and iterating.
We also use multi-modality to further keep the possibility of hallucinations at bay: checking through reflection, aggressive fact-checking, and collaborative reasoning, where different expert agents jointly debate and argue to converge on solutions, or Monte Carlo-style modelling to explore alternative futures and stress-test strategies.
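The Monte Carlo idea can be illustrated with a small sketch. The demand model, volatility, and breakeven figure below are invented for the example; the point is the technique of simulating many alternative futures and reporting how often a strategy survives them.

```python
import random

def simulate_future(base_demand=100.0, periods=12, volatility=0.1):
    """One possible demand path: a random shock applied each period."""
    demand = base_demand
    path = []
    for _ in range(periods):
        demand *= 1 + random.gauss(0, volatility)
        path.append(demand)
    return path

def stress_test(strategy_breakeven=80.0, runs=5000):
    """Fraction of simulated futures where demand never falls below breakeven."""
    random.seed(42)  # reproducible for the example
    survived = sum(
        1 for _ in range(runs)
        if min(simulate_future()) >= strategy_breakeven
    )
    return survived / runs

print(f"strategy survives in {stress_test():.0%} of simulated futures")
```

A production version would replace the toy random walk with a calibrated scenario model, but the structure (simulate many futures, score the strategy across all of them) is the same.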
When it comes to fact-checking, ai ensures that each step is supported by direct evidence from its gathered data. 6ai can cross-check data points (e.g., scientific papers or legal precedents) against the logical structure of its reasoning, essentially forming an argument that can be audited for correctness.
Unlike LLMs, which can sometimes generate plausible-sounding but unverifiable or incorrect statements, 6ai provides a traceable path where every conclusion is tied to its premises and supporting evidence. It's possible to inspect each logical step and challenge it if needed.
For example, if asked about the safety of a drug, 6ai would:
Define "safety" formally (using regulatory standards).
Gather direct supporting data (clinical trial results).
Use that data in a stepwise logical argument ("Data X supports premise Y; therefore, conclusion Z holds with probability P").
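The three steps above can be sketched as code. The safety standard, trial numbers, and threshold are invented stand-ins for the regulatory and clinical inputs; what matters is the explicit chain from formal definition, to gathered evidence, to a conclusion tied to that evidence.

```python
# Step 1: define "safety" formally (stand-in for a regulatory standard).
SAFETY_STANDARD = {"max_serious_adverse_rate": 0.01}

# Step 2: gather direct supporting data (stand-in for trial results).
trial = {"participants": 2000, "serious_adverse_events": 12, "source": "trial-A"}

# Step 3: stepwise argument: data X supports premise Y; therefore conclusion Z.
def assess_safety(standard, data):
    rate = data["serious_adverse_events"] / data["participants"]
    meets = rate <= standard["max_serious_adverse_rate"]
    return {
        "premise": f"observed adverse rate {rate:.3f} (per {data['source']})",
        "conclusion": ("drug meets the safety standard" if meets
                       else "drug fails the safety standard"),
        "evidence": data["source"],
    }

print(assess_safety(SAFETY_STANDARD, trial)["conclusion"])
```

Every field of the result points back to either the standard or the trial data, so the argument can be audited end to end.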
6ai's structure prevents it from relying solely on pattern matching; the logic enforces a check at every step. However, in practice, there are challenges to mitigate:
Data Quality: If 6ai's database contains errors or biased information, its "logical" conclusions are only as good as the input data.
Formal Errors: Mistakes in the logical structure, or gaps in how the problem is broken down, could still lead to inaccuracies.
Ambiguity Handling: Whenever the problem is under-defined or the data is insufficient, 6ai can either:
Admit uncertainty explicitly (“the answer cannot be proven”), or
Fall back on generalized reasoning, which does resemble pattern-matching.
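The two ambiguity-handling paths above can be sketched as a single decision, with illustrative names and an arbitrary evidence threshold: when the evidence is insufficient, the system either admits uncertainty explicitly or returns a clearly labelled generalized answer, never an unmarked guess.

```python
def answer(question, evidence, min_evidence=2, allow_fallback=False):
    """Return a proven, uncertain, or explicitly generalized answer."""
    if len(evidence) >= min_evidence:
        return {"mode": "proven", "basis": evidence}
    if allow_fallback:
        # Generalized reasoning, flagged as such rather than passed off as proof.
        return {"mode": "generalized", "basis": [],
                "note": "pattern-based, not proven"}
    return {"mode": "uncertain", "basis": [],
            "note": "the answer cannot be proven"}

print(answer("Is the market growing?", ["report-1"])["mode"])                       # uncertain
print(answer("Is the market growing?", ["report-1"], allow_fallback=True)["mode"])  # generalized
print(answer("Is the market growing?", ["report-1", "report-2"])["mode"])           # proven
```

The labelling is the point: a downstream reader can tell which answers rest on evidence and which are fallbacks.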
A critical advantage 6ai provides users is the requirement for auditable, step-by-step justification, so you can clearly see where its answer originates, what evidence supports it, and how each logical step proceeds. In contrast, LLMs may blend supporting information and reasoning seamlessly, but you cannot always trace each statement back to its source or verify the process explicitly. No system is flawless, even within a formal logic framework: bad data, vague definitions, or human mistakes in rules can still cause inaccuracies.
Acknowledging this and keeping it at the forefront, 6ai has incorporated a system of checks and balances, including initial validation with the user of its understanding of the problem statement, dedicated agents for independent verification of logic rules, and ongoing monitoring of a continuously enriched context to ensure responses stay relevant to the current conversation.