The Quality of Explanations

UMNAI’s Hybrid Intelligence (HI) is a groundbreaking approach to artificial intelligence, combining neuro-symbolic AI techniques to create solutions that are explainable, auditable, and high-performing. The explanations that HI systems produce must themselves be of high quality. One of the cornerstones of our methodology aligns closely with David Deutsch’s emphasis on “good explanations”, which prioritize insights that are robust, hard to vary, and universally applicable (i.e. generalised). According to Deutsch, good explanations are central to human understanding and progress.

David Deutsch’s theory of explanations suggests that effective explanations should extend beyond surface-level predictions, offering reach, depth, and simplicity. Good explanations, according to Deutsch, are those that are hard to vary while still accounting for the phenomena they explain, in accordance with the principle of parsimony often referred to as Occam’s Razor: “Everything should be kept as simple as possible, but no simpler”.

Good explanations reveal causal mechanisms, which are key to building a coherent understanding of reality. Hybrid Intelligence aligns with this theory by providing more than just predictive accuracy; it embeds causal, interpretable structures into its reasoning, making explanations transparent and scientifically robust. This alignment with Deutsch's principles allows HI to create models that can be tested, refined, and improved, mirroring the principles of the scientific method.  

The Hybrid Intelligence Explanation Process

Figure: The explanation process in Hybrid Intelligence.

Depth and Reach Through Causal Relationships 

Hybrid Intelligence embodies Deutsch’s focus on reach and depth by capturing underlying causal relationships rather than merely replicating observed data patterns. This approach enables HI to generalise across diverse situations, providing explanations that transcend specific instances to reveal foundational principles. Rather than simply fitting trends to data, HI captures causal relationships that explain why the data behaves the way it does, attempting to explain the underlying generative process that led to the data being observed or captured in the first place. This deeper understanding of modelled phenomena gives users insight into complex scenarios, with confidence that outcomes are causally grounded.

Causal explanations built by HI focus on uncovering the mechanisms and relationships that link causes to effects, providing a deeper understanding of why phenomena occur. This approach is central to many fields, including medicine, economics, and engineering, where identifying causality is crucial for intervention and prediction.  

Modern causal inference frameworks have revolutionized the study of causation, with Judea Pearl’s causal calculus at the forefront. Pearl’s framework, built on structural causal models (SCMs) and represented using directed acyclic graphs (DAGs), provides a mathematical foundation for understanding and quantifying causality. HI utilises Pearl’s do-calculus, enabling it to distinguish correlation from causation, simulate interventions, and answer counterfactual queries, thus achieving causal reasoning. 
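
To make the idea of an intervention concrete, the following minimal sketch builds a toy structural causal model and contrasts the observational association between two variables with the effect of a do() intervention. The variable names, structural equations, and coefficients are purely illustrative assumptions, not UMNAI’s implementation.

```python
# Illustrative sketch: a tiny structural causal model (SCM) and a do() intervention.
# The scenario (season -> marketing -> sales, season -> sales) is hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def simulate(do_marketing=None):
    """Simulate the SCM; passing do_marketing applies the do() operator."""
    season = rng.normal(size=n)                        # exogenous common cause
    if do_marketing is None:
        marketing = 0.8 * season + rng.normal(size=n)  # observational regime
    else:
        marketing = np.full(n, do_marketing)           # do(marketing = x): cut incoming edges
    sales = 1.5 * marketing + 2.0 * season + rng.normal(size=n)
    return marketing, sales

# Observational association (confounded by season): slope is inflated.
m_obs, s_obs = simulate()
print("observed slope:", np.polyfit(m_obs, s_obs, 1)[0])   # ~2.5, correlation != causation

# Interventional contrast: E[sales | do(marketing=1)] - E[sales | do(marketing=0)]
_, s1 = simulate(do_marketing=1.0)
_, s0 = simulate(do_marketing=0.0)
print("causal effect :", s1.mean() - s0.mean())            # ~1.5, the true structural coefficient
```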

HI also utilises other mainstream causal inference approaches including the Rubin Causal Model (RCM), which formalizes causality using potential outcomes to compare what would happen under different treatments or interventions, and Granger causality, widely used in time series analysis to infer directional relationships between variables. Complementing these, frameworks such as causal discovery algorithms (e.g., PC algorithm, FCI algorithm) aim to infer causal structures directly from observational data, emphasizing the need for robust, testable mechanisms rather than relying solely on statistical associations.  
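
As a hedged illustration of the potential-outcomes view underlying the Rubin Causal Model, the sketch below simulates both potential outcomes for each unit (something never observable in real data) and shows how randomized assignment lets a simple difference in means recover the average treatment effect. The data-generating process and effect size are invented for illustration only.

```python
# Illustrative sketch of the Rubin Causal Model (potential outcomes).
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Each unit has two potential outcomes, Y(0) and Y(1); only one is ever observed.
y0 = rng.normal(loc=10.0, scale=2.0, size=n)       # outcome without treatment
y1 = y0 + 3.0 + rng.normal(scale=0.5, size=n)      # outcome with treatment (true effect = 3)

treated = rng.integers(0, 2, size=n).astype(bool)   # randomized assignment
y_obs = np.where(treated, y1, y0)                   # the "fundamental problem of causal inference"

# Under randomization, a difference in means is an unbiased estimate of E[Y(1) - Y(0)].
ate_hat = y_obs[treated].mean() - y_obs[~treated].mean()
print("estimated average treatment effect:", round(ate_hat, 3))   # ~3.0
```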

Simplicity and Coherence in Reasoning 

Deutsch emphasizes simplicity and coherence in effective explanations, and Hybrid Intelligence meets this standard by focusing on interpretable, causally relevant features. It avoids extraneous complexity, offering clear, logical explanations by structuring decisions around core principles rather than opaque patterns. In contrast to opaque GenAI stacks, HI reduces cognitive load, making complex decisions easier to comprehend and trust, while also minimising the risk of spurious conclusions. This coherence aligns with the demand for transparent systems, supporting both user comprehension and decision integrity. 

Hybrid Intelligence prefers simple yet hard-to-vary explanations that are grounded in the real world using symbolic logic.

Alignment with the Scientific Method 

From a scientific perspective, Hybrid Intelligence resonates with the scientific method’s emphasis on hypothesis testing, repeatability, and refutability. Beyond detecting patterns, HI supports hypothesis generation by identifying and testing causal relationships within data. By focusing on causally significant features, HI enables the formulation and evaluation of hypotheses, fostering an exploratory, hypothesis-driven approach. This iterative process of testing and refinement allows errors to be detected and corrected as systems adapt to new data, mirroring the scientific cycle of continuous improvement.  

Technically, Deutsch explicitly builds on Karl Popper's epistemology. Both emphasise the fallibility of knowledge and the role of critical thinking in improving explanations. Deutsch expands on Popper by introducing the "hard-to-vary" criterion as a refinement for assessing explanatory quality, moving beyond falsifiability as the sole criterion. Hybrid Intelligence embodies both falsifiability and the “hard-to-vary” criterion, together with cause-and-effect modelling to achieve robust, generalisable models that are transparent. 

Better Probabilistic Reasoning 

Current machine learning systems are highly effective in tasks requiring probabilistic reasoning, such as decision-making under uncertainty and predictive analytics. This is generally achieved using Bayesian models, where the AI model updates its current “beliefs” based on prior information and new evidence. While Bayesian systems excel at optimizing predictions and handling noisy or incomplete data, they fall short in areas that require deep explanatory insight, creativity, and the ability to transcend established paradigms.
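
A minimal sketch of this kind of Bayesian belief updating is shown below, using a conjugate Beta-Binomial model in which a prior belief about a success probability is revised as batches of evidence arrive. The prior parameters and evidence counts are assumptions chosen purely for illustration.

```python
# Minimal sketch of Bayesian belief updating with a conjugate Beta-Binomial model.
from scipy import stats

# Prior belief about a success probability: Beta(2, 2), weakly centred on 0.5.
alpha, beta = 2.0, 2.0

# Evidence arrives in batches of (successes, failures); numbers are hypothetical.
evidence = [(3, 7), (10, 15), (40, 35)]

for successes, failures in evidence:
    alpha += successes            # conjugate update: the posterior is again a Beta
    beta += failures
    posterior = stats.beta(alpha, beta)
    low, high = posterior.interval(0.95)
    print(f"posterior mean = {posterior.mean():.3f}, 95% interval = ({low:.3f}, {high:.3f})")
```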

Deutsch’s emphasis on "good explanations" as those that are hard-to-vary provides a framework for designing AI systems that prioritize explanatory depth and universality over probabilistic coherence alone.  

Bayesian models evaluate hypotheses based on their likelihood given the evidence, but they do not inherently assess the robustness or transformative potential of an explanation. This limitation is significant for AI systems operating in domains that need open-ended discovery and adaptation to novel situations, where breakthroughs often require conceptual shifts that challenge existing frameworks. HI goes beyond probabilistic inference, incorporating mechanisms for generating creative hypotheses and critically evaluating them based on their explanatory structure. With Hybrid Intelligence, explanations are generated using an Explanation Structure Model (ESM), which combines Deutsch’s and Pearl’s theories into a coherent structure that describes how an explanation itself is composed.

In the context of interpretability and explainability, while Bayesian models do provide confidence scores and probabilistic insights, they do not inherently produce explanations that are universally meaningful or applicable across domains.  

Another challenge for purely probabilistic AI is handling paradigm shifts (often encountered as out-of-distribution (OOD) data or a sudden data shift), since such systems are constrained by predefined probabilistic models and priors. HI challenges foundational assumptions by integrating methods that allow it not only to update its beliefs, but also to creatively reframe its understanding of problems. The neuro-symbolic AI core of HI combines probabilistic reasoning with symbolic logic and allows for the seamless integration of exploration-driven mechanisms such as reinforcement learning, world models and goal-plan-action systems.
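
One simple, widely used way to flag such a data shift, shown here only as an illustrative sketch rather than HI’s actual mechanism, is to compare a reference window of a feature against the live window with a two-sample Kolmogorov-Smirnov test. The window sizes, shifted distribution, and decision threshold below are assumptions.

```python
# Hedged sketch: flagging a sudden data shift with a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)

reference = rng.normal(loc=0.0, scale=1.0, size=5_000)   # data the model was trained on
live      = rng.normal(loc=0.8, scale=1.3, size=1_000)   # incoming data after a shift

stat, p_value = ks_2samp(reference, live)
if p_value < 0.01:
    print(f"data shift detected (KS statistic = {stat:.3f}, p = {p_value:.2e})")
else:
    print("no significant shift detected")
```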

HI uses Bayesian models (where appropriate) in conjunction with Deutsch’s and Pearl’s frameworks to generate explanations that are not only predictive but also robust, interpretable, and universally applicable. This is a crucial differentiating feature that allows HI to support trust and usability in high-stakes applications like healthcare, autonomous systems, and finance, achieving a balance between predictive accuracy and explanatory depth.

Practical Benefits in Industrial and Commercial Applications

Aligning with scientific principles brings tangible benefits to industrial and commercial settings. In decision-making contexts, HI’s causal insights improve decision quality by providing a transparent view of potential risks and outcomes. The causal structure also helps reduce model bias, which is essential in applications where ethical sensitivity is required. Additionally, the focus on causally stable relationships makes HI highly adaptable to dynamic environments, reducing the need for frequent retraining and supporting agile responses to shifting data patterns.

Enhanced Human-AI Collaboration 

In collaborative settings, Hybrid Intelligence enhances human-system interaction by providing clear explanations that help users understand and guide the system’s reasoning. This transparency fosters a productive partnership, where human operators can question, adjust, or validate system choices, improving outcomes and building user confidence. HI’s explainable structure also supports regulatory compliance, meeting transparency standards in regulated industries. Verifiable explanations for each decision allow companies to adhere to regulatory requirements with confidence. 

Facilitating Research and Development 

Hybrid Intelligence is a valuable tool in R&D fields like pharmaceuticals and engineering. Its causal, grounded explanations function as a “research assistant,” allowing scientists to systematically test hypotheses and identify new relationships or potential innovations. By aligning closely with the scientific process, HI supports structured, transparent inquiry that drives innovation while preserving scientific rigor, offering a transformative tool for exploratory and hypothesis-driven research. 

In summary, Hybrid Intelligence aligns closely with David Deutsch’s criteria for deep, coherent explanations and the scientific method’s requirements for transparency, hypothesis testing, and iterative improvement. This alignment translates into practical benefits, including enhanced decision-making, reduced bias, increased model adaptability, and regulatory compliance.  


Technical Sidenote 

Read more about David Deutsch’s theory of explanations in his book The Beginning of Infinity: Explanations That Transform the World, first published in 2011. David Deutsch's theory of explanations is deeply compatible with Popper’s principle of falsifiability, which emphasizes the iterative refinement of knowledge through the rejection of errors. Both frameworks highlight the critical role of testing and improvement in epistemology. Additionally, Deutsch aligns partially with Inference to the Best Explanation (IBE), as both prioritize the selection of high-quality explanations, although Deutsch refines this with his hard-to-vary criterion. Causal explanations, too, are often congruent with Deutsch's framework, particularly when they uncover robust mechanisms that satisfy his standards for explanatory depth and universality. These shared foundations underscore a commitment to objectivity, creativity, and the advancement of knowledge through rigorous evaluation.
