Knowledge Discovery in Hybrid Intelligence

AI is a form of knowledge discovery performed automatically by a machine. AI training algorithms discover knowledge by identifying statistical relationships in data, then apply these learned patterns to predict outcomes for new or unseen inputs.

Hybrid Intelligence (HI) is a substantively different approach that improves on current AI paradigms in the way it discovers knowledge in data. Its neuro-symbolic architecture combines the statistical pattern-matching power of neural networks with the analytical reasoning power of symbolic logic systems.

Forms of Reasoning 

There are three main forms of reasoning (a short code sketch follows this list): 

  • Deduction seeks certainty and formal correctness. It involves deriving specific conclusions from general principles or rules. If the premises are true and the logical structure is valid, the conclusion must also be true. 

  • Induction focuses on discovering patterns and forming generalizations or theories based on specific observations. Its conclusions are not guaranteed to be true but are considered probable. 

  • Abduction aims to find the most plausible explanation for a given observation and often involves causal reasoning to propose a likely explanation to be tested or explored further. 

Figure: the three main forms of symbolic reasoning strategies.
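To make the distinctions concrete, the minimal Python sketch below expresses each form of reasoning as a tiny function. It is illustrative only; the rule base, observations, and plausibility scores are invented for this example and imply nothing about HI's internal machinery.

```python
# Three forms of reasoning as toy functions (illustrative only).

RULES = {"bird": "can_fly"}  # a general principle: birds can fly

def deduce(entity_kind: str):
    """Deduction: apply a general rule to a specific case; the conclusion
    is guaranteed whenever the premise and the rule hold."""
    return RULES.get(entity_kind)

def induce(observations: list) -> dict:
    """Induction: generalise a rule from repeated specific observations.
    The result is probable, not certain (naively, the last observation wins)."""
    learned = {}
    for kind, property_ in observations:
        learned[kind] = property_
    return learned

def abduce(candidate_causes: dict) -> str:
    """Abduction: pick the most plausible explanation for an observation,
    to be tested further rather than accepted as proven."""
    return max(candidate_causes, key=candidate_causes.get)

print(deduce("bird"))                                  # can_fly
print(induce([("swan", "white"), ("swan", "white")]))  # {'swan': 'white'}
print(abduce({"rain": 0.7, "sprinkler": 0.3}))         # most plausible cause of wet grass: rain
```

Deduction is certain given its premises, induction generalises beyond them, and abduction merely ranks candidate explanations for further testing.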

LLMs and many deep learning models generate outputs by predicting the most plausible outcome based on patterns in their training data. This process resembles abduction in that it seeks the "most plausible explanation" for a given input, but it does so without explicitly testing hypotheses or establishing causal links. Deep learning models and LLMs lack mechanisms for deduction and induction; their abduction-like behaviour arises as a byproduct of probabilistic pattern matching and cannot be considered true reasoning. 

The knowledge discovery methods integral to Hybrid Intelligence fully and concurrently utilise all three forms of reasoning (deduction, induction and abduction) in a process of multi-hypothesis generation. 
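The sketch below suggests how the three forms might interlock in such a multi-hypothesis process: abduction proposes candidate explanations, deduction derives the prediction each entails, and induction scores those predictions against observed data. The hypotheses, predictions, and observations are invented for illustration.

```python
# Multi-hypothesis generation sketch (all data invented for illustration).

hypotheses = {
    # hypothesis -> the prediction it deductively entails
    "pump_failure": "pressure_drop",
    "sensor_fault": "flat_reading",
}
observations = ["pressure_drop", "pressure_drop", "normal_reading"]

def inductive_support(hypothesis: str) -> float:
    """Induction step: fraction of observations consistent with the
    prediction deduced from the hypothesis."""
    predicted = hypotheses[hypothesis]  # deduction step
    matches = sum(1 for o in observations if o == predicted)
    return matches / len(observations)

# Abduction step: rank candidate explanations by how well their deduced
# predictions are inductively supported, keeping all candidates in play
# rather than committing prematurely to one.
ranked = sorted(hypotheses, key=inductive_support, reverse=True)
print(ranked)  # ['pump_failure', 'sensor_fault']
```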

In applying a superior reasoning competence, Hybrid Intelligence ensures that outcomes are statistically sound, explainable, compliant, and hard to vary. (For more information on the quality of explanations and David Deutsch's principles, refer to the “Quality of Explanations in Hybrid Intelligence” document.) 

Knowledge Graphs and Ontologies 

Classical AI systems and Large Language Models (LLMs) typically store knowledge as statistical representations of sub-symbolic data within large, high-dimensional vector spaces. These models encode patterns and associations in distributed numerical weights that lack explicit structure, logical constraints, and causal relationships. As a result, they struggle with interpretability, controlled reasoning, and verifiable knowledge representation, making their decision processes opaque and difficult to audit. 

In contrast to relying on sub-symbolic statistical patterns, Hybrid Intelligence employs symbolic knowledge representation through fully integrated knowledge graphs and ontologies, storing and processing knowledge in an explicit, structured format, as sketched in code after the list below. 

  • Causal Graphs provide a flexible representation of suspected and verified cause-and-effect relationships. These models can also extend to concepts outside the available dataset (technically known as exogenous variables), utilising learnt knowledge about the world. 

  • Knowledge Graphs provide a flexible, graph-based structure that represents entities, concepts, and their relationships, enabling transparent and structured reasoning. 

  • Ontologies define formal constraints and logical rules, facilitating domain-specific reasoning, causal inference, and decision traceability. 
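As a deliberately simplified illustration, the sketch below represents a knowledge graph as subject-relation-object triples and an ontology as an executable constraint over them. The entities, relations, and the single rule are invented and imply nothing about HI's internal formats.

```python
# Knowledge graph as (subject, relation, object) triples; ontology as a
# checkable rule over those triples (all content invented for illustration).

triples = {
    ("aspirin", "treats", "headache"),
    ("aspirin", "is_a", "drug"),
    ("headache", "is_a", "symptom"),
    ("ibuprofen", "treats", "fever"),  # neither 'ibuprofen is_a drug' nor
                                       # 'fever is_a symptom' is asserted
}

def satisfies_ontology(subject, relation, obj, kg) -> bool:
    """Ontological rule (invented): only a 'drug' may 'treats' a 'symptom'."""
    if relation != "treats":
        return True
    return (subject, "is_a", "drug") in kg and (obj, "is_a", "symptom") in kg

# Every stored triple can be audited against the constraint.
for s, r, o in sorted(triples):
    status = "valid" if satisfies_ontology(s, r, o, triples) else "violates ontology"
    print(f"({s}, {r}, {o}) -> {status}")
```

Because each triple can be checked against an explicit rule, the representation is verifiable in a way that distributed numerical weights are not.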

By grounding decision-making in symbolic knowledge, HI ensures that every output is explainable, testable, auditable, and resistant to arbitrary modifications, addressing key limitations of black-box AI systems. This architecture enables controlled reasoning, verifiable decision-making, and the seamless integration of human-defined knowledge. 

Hybrid Intelligence builds on its knowledge graph and ontology integration by incorporating causal reasoning. The system constructs causal graphs by extracting meaningful relationships from the knowledge graph and validating them against ontological rules. This structured approach enables counterfactual testing, allowing HI to simulate hypothetical changes in variables and assess their impact on decision outcomes. 
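The toy structural causal model below illustrates the intervention semantics behind such counterfactual testing. Its variables and equations are invented, standing in for relationships HI would extract from its knowledge graph and validate against ontological rules.

```python
# Counterfactual testing on a toy structural causal model (invented equations).

def simulate(marketing_spend: float, do_price=None) -> float:
    """Each variable is a function of its causes; an intervention ('do')
    overrides a variable regardless of its usual causes."""
    price = do_price if do_price is not None else 10.0
    demand = 100 - 4.0 * price + 0.5 * marketing_spend
    revenue = price * demand
    return revenue

factual = simulate(marketing_spend=20.0)                        # world as observed
counterfactual = simulate(marketing_spend=20.0, do_price=8.0)   # "what if price were 8?"
print(factual, counterfactual, counterfactual - factual)        # 700.0 624.0 -76.0
```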

Hybrid Intelligence bridges symbolic reasoning with real-world data by aligning neural network-based data processing and pattern matching with a symbolic hierarchy of knowledge graphs and ontologies, while causal graphs and world models act as plausibility and sanity checks. 

Multi-modal data inputs such as numerical features, text, and images are mapped to knowledge graph nodes and ontological relationships, ensuring structured representation. This alignment enables HI to maintain explainability and logical consistency, even when processing unstructured data. 
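A minimal sketch of this grounding step, with invented node names and trivially simple matching rules, might look as follows: both a text note and a numeric feature land in the same symbolic space of knowledge graph nodes.

```python
# Mapping heterogeneous inputs onto shared knowledge-graph nodes
# (node names and matching rules invented for illustration).

KG_NODES = {"chest_pain", "fever", "ecg_anomaly"}

def map_text(note: str) -> set:
    """Map free text to graph nodes via (here, trivial) keyword matching."""
    return {node for node in KG_NODES if node.replace("_", " ") in note.lower()}

def map_numeric(temperature_c: float) -> set:
    """Map a numeric feature to a node via a threshold rule."""
    return {"fever"} if temperature_c >= 38.0 else set()

# Both modalities resolve to the same structured representation:
grounded = map_text("Patient reports chest pain.") | map_numeric(38.6)
print(grounded)  # {'chest_pain', 'fever'}
```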

When faced with incomplete or ambiguous data, Hybrid Intelligence applies causal inference, probabilistic reasoning, and imputation techniques to manage uncertainty while preserving interpretability. By leveraging causal relationships and logical constraints, HI can infer missing information without relying solely on statistical approximations, ensuring maximum robustness, explainability, and contextual grounding, even in uncertain scenarios. 
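The sketch below contrasts inference from a logical constraint with a purely statistical fallback, recording the basis of each imputation so that interpretability is preserved. The rule used (age implies retirement status) is an invented illustration, not a claim about HI's actual inference rules.

```python
# Imputing a missing value from an explicit rule before falling back to
# a statistical default (rule and data invented for illustration).

def impute_employment(record: dict) -> dict:
    """Fill a missing 'employment' field, recording the basis of the
    imputation so the decision remains auditable."""
    if record.get("employment") is None:
        if record["age"] >= 70:
            record["employment"] = "retired"   # inference from a logical constraint
            record["imputation_basis"] = "rule: age >= 70 implies retired"
        else:
            record["employment"] = "employed"  # statistical fallback (majority class)
            record["imputation_basis"] = "fallback: population majority"
    return record

print(impute_employment({"age": 74, "employment": None}))
```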

The Decision Optimization Agent Services in Hybrid Intelligence refine policy learning and selection by incorporating ontological constraints and knowledge graph insights. Ontologies enforce predefined rules to ensure decisions remain compliant with domain knowledge, while the knowledge graph provides traceable causal pathways for structured, human-readable explanations. This dual-layer approach enables optimised, explainable, and context-aware decision-making. The Hybrid Intelligence agent’s introspection and world modelling capabilities provide plausibility checks that eliminate most algorithmic hallucinations.
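This dual-layer idea can be sketched as a two-stage selection: ontological constraints first filter the candidate set for compliance, optimisation then ranks what remains, and the final choice carries a human-readable trace. All policy names, values, and the single constraint below are invented for illustration.

```python
# Two-stage, constraint-filtered policy selection (all values invented).

policies = [
    {"name": "aggressive_lending", "expected_value": 9.1, "max_ltv": 0.95},
    {"name": "balanced_lending",   "expected_value": 7.4, "max_ltv": 0.80},
    {"name": "conservative",       "expected_value": 5.2, "max_ltv": 0.60},
]

def compliant(policy: dict) -> bool:
    """Ontological constraint (invented): loan-to-value must not exceed 0.85."""
    return policy["max_ltv"] <= 0.85

admissible = [p for p in policies if compliant(p)]         # ontology layer
best = max(admissible, key=lambda p: p["expected_value"])  # optimisation layer

print(best["name"])  # balanced_lending
print(f"trace: chose {best['name']} (expected_value={best['expected_value']}); "
      f"rejected as non-compliant: {[p['name'] for p in policies if not compliant(p)]}")
```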

Summary of Benefits 

  • Causal Depth emphasises causality over simple correlations, enabling deeper reasoning and insight. 

  • Transparency & Explainability are assured via symbolic hierarchies that expose decision paths, making reasoning more interpretable. 

  • Logical Consistency is achieved through ontological constraints that prevent contradictions and ensure coherence in decision-making. 

  • Multi-Modal Data Integration aligns structured and unstructured data, enabling seamless knowledge representation. 

  • Counterfactual Analysis supports hypothesis testing and falsifiability, essential for high-stakes applications. 

  • Direct Integration with the Data Vector Space enhances causal depth, explainability, and consistency across decision tasks. 

Hybrid Intelligence is particularly suited to complex and consequential decision domains within high-impact applications such as finance, healthcare, and scientific discovery, where transparency, accountability, and causal reasoning are critical. 
