NEURO-SYMBOLIC AI
Realise the full potential of Neuro-Symbolic AI with our suite of easy-to-use technologies.
HYBRID INTELLIGENCE
Hybrid Intelligence is a framework developed by UMNAI that drives better decisions from Artificial Intelligence (AI) and Machine Learning (ML) by seamlessly combining data- and model-driven insights with human knowledge. The framework improves overall performance with certainty and confidence, building trust in AI and ensuring compliance and safety in any application.
Our Hybrid Intelligence framework pushes performance past the frontier of current AI and is based upon Induction and eXplainable Neural Nets (XNNs).
UMNAI’s Induction process generates predictive models that perform as well as or better than the latest black-box techniques, and may be applied to any data type. UMNAI has also developed AutoXAI, a technology similar to (and better than) the latest AutoML offerings, which leverages the transparency of XNNs to optimise them to hitherto unachievable levels of performance.
XNNs are wholly and inherently interpretable, explainable, and actionable neuro-symbolic AI/ML models. XNNs communicate with each other and with the real world through Explanation Structure Models (ESMs) using the eXplanation Interchange Format (XIF).
UMNAI’s Hybrid Intelligence Framework includes a set of easy-to-use toolkits that enable our partners and customers to build better systems that leverage the powerful confluence of neural nets and symbolic logic.
UMNAI’s Hybrid Intelligence Framework is a practical, easy-to-use and easy-to-deploy alternative to current AI and ML approaches.


Model Induction
UMNAI’s Induction process generates and trains our neuro-symbolic XNN models directly from data. If you’ve already invested in a black-box AI model, our Induction process can interrogate that model to improve the performance of the induced XNN. The Induction process automatically analyses the data and any existing model to determine the optimal structure and training strategy.
The induction process examines and observes the data to:
- Identify how data features interact
- Analyse the importance of each feature in relation to the outcome
- Adapt to different situations and scenarios encoded in the data
- Structure the information hierarchically, taking relationships into account and optimising at scale
Induction will also leverage additional sources of information, including causal mapping, to improve performance beyond what is possible using only statistical correlation in the data.
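UMNAI’s Induction process itself is proprietary, but the general idea of inducing an interpretable model from data and an existing black-box can be conveyed with a generic sketch. The Python snippet below, a rough analogy only, trains a black-box model and then distils its behaviour into a simpler, interpretable surrogate; the model families and parameters are illustrative assumptions, not UMNAI’s actual technology.

```python
# Rough, generic illustration of induction-by-distillation:
# interrogate an existing black-box model and train a simpler,
# interpretable surrogate on its predictions. Purely illustrative;
# this is NOT UMNAI's Induction process or an XNN.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# An existing black-box model that the induction step can interrogate.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Induce an interpretable surrogate from the black-box's behaviour.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(X, black_box.predict(X))

print("black-box accuracy:", accuracy_score(y, black_box.predict(X)))
print("surrogate fidelity:", accuracy_score(black_box.predict(X), surrogate.predict(X)))
```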
AUTOXAI
UMNAI’s AutoXAI tools further refine XNNs by exploring variations and optimisations to improve their performance.
During Induction, you can specify a number of objectives and constraints that the XNN should be optimised for. The AutoXAI tools then carry out these optimisations automatically, ensuring that the best possible model is generated in the most efficient manner, given the stated objectives and constraints.
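AutoXAI’s actual interface is not public, so as a hedged sketch of the underlying pattern, the snippet below searches a family of candidate models, discards any that violate an interpretability constraint, and keeps the best performer among the rest. The constraint, model family, and scoring are all illustrative assumptions.

```python
# Illustrative-only sketch of optimisation under objectives and
# constraints: maximise cross-validated accuracy (objective) subject
# to a cap on model complexity (constraint).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

MAX_LEAVES = 16  # hypothetical interpretability constraint
best_model, best_score = None, -1.0
for depth in range(2, 9):
    model = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X, y)
    if model.get_n_leaves() > MAX_LEAVES:  # constraint violated: skip
        continue
    score = cross_val_score(model, X, y, cv=5).mean()  # objective
    if score > best_score:
        best_model, best_score = model, score

print("selected depth:", best_model.get_depth(), "score:", round(best_score, 3))
```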


Explainable Neural Networks
XNNs are novel, graph-based neural networks that are inherently neuro-symbolic. They combine neural network methods like Deep Learning with logic-based symbolic methods. XNNs are inherently interpretable and explainable, merging learning from a neural substrate with symbolic reasoning and knowledge.
XNNs incorporate multiple contexts and hierarchical concepts, implemented via partitions and groups of rules. It is through these rules that XNNs enable the embedding of human knowledge within the neural network. And it is through partition customisation that XNNs can be kept as simple, or made as complex, as the task demands.
The function and purpose of every component within an XNN is precisely known and identifiable. All the activations and attributions from a model are openly and precisely observable and identifiable, without the need for additional post-processing and its computational cost.
XNNs may also form part of causal logic induction systems, which extract high-level concepts and link them to XNNs through causal models. XNNs link to causal models both internally and at the output layer.
In practice, an XNN functions like a standard neural network, except that it computes and communicates both the answer and its explanation in a single feed-forward step.
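A minimal sketch can convey the flavour of this single-pass behaviour. The toy model below, which assumes a hypothetical two-partition design and is in no way a real XNN, selects a local linear model via a rule and returns both the answer and the per-feature contributions that produced it in one forward call.

```python
# Toy, illustrative-only model in the spirit of "answer plus
# explanation in one feed-forward step": a rule picks a partition,
# the partition's local linear model produces the answer, and the
# per-feature contributions double as the explanation.
import numpy as np

# Two partitions, each with its own local linear model (weights, bias).
partitions = [
    {"rule": lambda x: x[0] < 0.5,  "w": np.array([2.0, -1.0]), "b": 0.1},
    {"rule": lambda x: x[0] >= 0.5, "w": np.array([0.5, 3.0]),  "b": -0.2},
]

def forward(x):
    """Return the prediction and its explanation in a single pass."""
    for i, p in enumerate(partitions):
        if p["rule"](x):
            contributions = p["w"] * x  # per-feature attributions
            answer = contributions.sum() + p["b"]
            return answer, {"partition": i, "attributions": contributions}
    raise ValueError("no partition matched")

answer, explanation = forward(np.array([0.2, 1.0]))
print(answer, explanation)
```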
Explanation Structure Models
To maximise the benefits of XNNs and explanations, you need a ‘language’ and a structure within which to use them. Explanation Structure Models (ESMs) began life as this ‘language’ and soon evolved into a much more powerful system that enables UMNAI’s neuro-symbolic AI to reach its full potential.
An ESM is a hypergraph network that incorporates and connects XNNs, explanations, symbolic knowledge, and causal knowledge in a seamless manner. This enables information and explanation data to be analysed and structured at various levels of abstraction to support appropriate consumption by different target users or systems.
ESMs can also integrate multiple XNNs and fuse data from multiple models and datasets.
Within UMNAI’s Hybrid Intelligence framework, ESMs enable the development of explainable, fit-for-purpose decisions that interpret complex relationships between data and processes using human-friendly concepts tailored to the needs of each stakeholder.
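As a purely illustrative sketch of what such a hypergraph might look like in code (the real ESM and XIF formats are UMNAI’s own and are not reproduced here), the snippet below connects model, explanation, and concept nodes through labelled hyperedges and retrieves the neighbourhood of a node; every name in it is hypothetical.

```python
# Hypothetical ESM-like hypergraph: hyperedges connect any number of
# nodes (models, explanations, concepts), so linked knowledge can be
# traversed at different levels of abstraction. Illustrative only.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Node:
    name: str
    kind: str  # e.g. "xnn", "explanation", "concept", "causal-model"

@dataclass
class Hyperedge:
    label: str
    members: frozenset  # a hyperedge may connect any number of nodes

@dataclass
class ESM:
    nodes: set = field(default_factory=set)
    edges: list = field(default_factory=list)

    def connect(self, label, *nodes):
        self.nodes.update(nodes)
        self.edges.append(Hyperedge(label, frozenset(nodes)))

    def neighbourhood(self, node):
        """All nodes sharing at least one hyperedge with `node`."""
        return {m for e in self.edges if node in e.members
                for m in e.members} - {node}

esm = ESM()
risk = Node("credit-risk-model", "xnn")
why = Node("risk-explanation", "explanation")
income = Node("income-stability", "concept")
esm.connect("explains", risk, why, income)
print(esm.neighbourhood(risk))
```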
