Kamal Acharya

Ph.D. Candidate at UMBC, AI Researcher


Journal Article

A Comprehensive Review of Neuro-symbolic AI for Robustness, Uncertainty Quantification, and Intervenability

An open-access survey in the Arabian Journal for Science and Engineering that examines neuro-symbolic AI as a trustworthy-AI paradigm, organized around robustness, uncertainty quantification, and intervenability.

2026 · Arabian Journal for Science and Engineering · DOI: 10.1007/s13369-025-10887-3

Neurosymbolic AI · Trustworthy AI · Robustness


Abstract

As Artificial Intelligence (AI) systems are increasingly deployed in high-stakes domains such as healthcare, autonomous systems, finance, and critical infrastructure, ensuring their trustworthiness has become imperative. This paper presents a comprehensive survey of neuro-symbolic AI, a hybrid paradigm that combines the learning capabilities of neural networks with the reasoning strengths of symbolic AI, through the lens of three foundational dimensions: robustness, uncertainty quantification (UQ), and intervenability. We first establish the limitations of purely data-driven “black-box” models in handling distribution shifts, ambiguous inputs, and human oversight. In contrast, neuro-symbolic systems offer enhanced interpretability, verifiability, and control, making them promising candidates for real-world deployment. We systematically review state-of-the-art techniques for modeling robustness, quantifying uncertainty, and enabling intervenability. We further examine how logic, probability, and learning can be integrated into unified or modular architectures to support transparent, adaptive reasoning. Finally, we outline current challenges and identify key research opportunities for advancing neuro-symbolic AI as a trustworthy paradigm. This survey aims to equip researchers and practitioners with a structured understanding of how to build reliable, interpretable, and interactive AI systems by bridging statistical learning and symbolic reasoning.

Plain-Language Summary

This paper studies how AI systems can become more reliable and controllable by combining neural networks, which learn from data, with symbolic reasoning, which supports logic, rules, and explanation.

Why This Paper Matters

AI systems used in healthcare, autonomous systems, finance, cybersecurity, and critical infrastructure must do more than produce accurate predictions. They must remain reliable under unexpected conditions, communicate uncertainty, and allow meaningful human oversight. This paper frames neuro-symbolic AI as a practical route toward systems that combine statistical learning with explicit reasoning, making them more suitable for high-stakes deployment.

Research Summary

This paper studies neuro-symbolic AI through the lens of trustworthiness. The core motivation is that purely data-driven models can perform well in controlled settings but may fail under distribution shifts, ambiguous inputs, or high-stakes decision contexts where transparency and human oversight are necessary.

The survey organizes the field around three dimensions: robustness, uncertainty quantification, and intervenability. Robustness concerns whether AI systems remain reliable under perturbations and new conditions. Uncertainty quantification concerns whether systems can express confidence and ambiguity. Intervenability concerns whether humans can inspect, correct, or guide the reasoning process.

By connecting neural learning with symbolic reasoning, the paper explains how AI systems can become more interpretable, verifiable, and controllable. This makes the survey relevant to safety-critical applications such as autonomous systems, cybersecurity, healthcare, finance, and critical infrastructure.

Trustworthy Neuro-Symbolic AI Framework

1. Robustness

Studies whether AI systems maintain stable and reliable behavior under noisy inputs, distribution shifts, adversarial perturbations, and unexpected operating conditions.

2. Uncertainty Quantification

Examines how AI systems estimate confidence, represent aleatoric and epistemic uncertainty, and communicate ambiguity in predictions.
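The aleatoric/epistemic split can be made concrete with a toy deep-ensemble decomposition (a standard technique, not a method from the paper; the models and numbers below are purely illustrative): disagreement between ensemble members approximates epistemic uncertainty, while the average predicted noise level approximates aleatoric uncertainty.

```python
import statistics

def ensemble_predict(models, x):
    """Each model returns (mean, noise_variance) for input x.

    Spread of the member means approximates epistemic uncertainty;
    the averaged predicted noise variance approximates aleatoric
    uncertainty (a common deep-ensemble decomposition).
    """
    means = [m(x)[0] for m in models]
    noise_vars = [m(x)[1] for m in models]
    prediction = statistics.fmean(means)
    epistemic = statistics.pvariance(means)   # disagreement between members
    aleatoric = statistics.fmean(noise_vars)  # average predicted data noise
    return prediction, epistemic, aleatoric

# Toy members: slightly different slopes, same assumed noise level.
models = [lambda x, a=a: (a * x, 0.1) for a in (0.9, 1.0, 1.1)]
pred, epi, ale = ensemble_predict(models, 10.0)
# Far from the training data the slopes diverge, so epistemic
# uncertainty grows with |x| while aleatoric stays fixed.
```

Because the members disagree more as |x| grows, a system built this way can flag out-of-distribution queries even when each individual prediction looks confident.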

3. Intervenability

Focuses on whether humans can inspect, correct, steer, or constrain model behavior through concepts, rules, explanations, or symbolic interfaces.
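Concept-level intervention, one of the mechanisms this dimension covers, can be sketched in the style of a concept-bottleneck model (the concept names and decision rule below are hypothetical, chosen only for illustration): a human overrides a mis-predicted concept and the symbolic decision layer re-evaluates transparently.

```python
def decide(concepts):
    """Symbolic decision rule over named concepts (illustrative only)."""
    if concepts["tumor_visible"] and concepts["margin_irregular"]:
        return "refer"
    return "routine"

def intervene(concepts, corrections):
    """A human overrides specific concept values; the rule then reruns
    on the patched concepts, so the effect of the correction is exact
    and inspectable rather than buried in network weights."""
    patched = {**concepts, **corrections}
    return decide(patched)

predicted = {"tumor_visible": True, "margin_irregular": False}  # model output
baseline = decide(predicted)
# A clinician corrects one mis-detected concept; the decision updates.
corrected = intervene(predicted, {"margin_irregular": True})
```

The key property is that the intervention acts on an interpretable interface (named concepts) rather than on opaque activations, which is what makes the correction auditable.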

Key Contributions

  • Organizes neuro-symbolic AI around robustness, uncertainty quantification, and intervenability.
  • Reviews methods for making AI systems more reliable under distribution shifts and ambiguous inputs.
  • Connects symbolic reasoning with transparent, adaptive, and human-interpretable AI decision-making.
  • Identifies open research challenges for deploying trustworthy neuro-symbolic systems.

Modeling Approaches Reviewed

Learning for Reasoning

Uses neural models to support symbolic reasoning by extracting representations, reducing search spaces, or translating raw data into symbolic structures.
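A minimal sketch of this pattern, in the spirit of the well-known MNIST-addition benchmark (the lookup table below stands in for a neural classifier; a real system would use a CNN over pixel data):

```python
def perceive(image):
    """Stand-in for a neural classifier mapping raw input to a symbol.
    Here 'images' are labeled strings; in practice this is a trained
    network producing a digit label."""
    return {"img_three": 3, "img_five": 5}[image]

def symbolic_sum(img_a, img_b):
    """Symbolic reasoning (arithmetic) applied to neurally extracted
    symbols: perception handles raw data, logic handles composition."""
    return perceive(img_a) + perceive(img_b)

result = symbolic_sum("img_three", "img_five")
```

The division of labor is the point: the neural front end absorbs perceptual ambiguity, while the symbolic back end guarantees that the composition rule (here, addition) is always applied correctly.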

Reasoning for Learning

Uses symbolic rules, constraints, priors, and domain knowledge to guide neural learning and improve generalization.
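One concrete instance of rules guiding learning is a semantic-loss-style penalty (a known technique in this family, not claimed as the paper's own method; the probabilities are illustrative): the rule "A implies B" becomes a differentiable term that penalizes predictions which place probability mass on rule-violating outcomes.

```python
import math

def implication_loss(p_a, p_b):
    """Semantic-loss-style penalty for the rule 'A implies B'.

    Treating the two predictions as independent Bernoullis, the rule
    is violated only when A holds and B does not, so
    P(rule satisfied) = 1 - p_a * (1 - p_b), and the penalty is its
    negative log, added to the usual data-fitting loss.
    """
    p_satisfied = 1.0 - p_a * (1.0 - p_b)
    return -math.log(p_satisfied)

# Consistent predictions incur almost no penalty...
low = implication_loss(0.9, 0.95)
# ...while predictions that violate the rule are penalized heavily.
high = implication_loss(0.9, 0.05)
```

Because the penalty is smooth in the predicted probabilities, it can be backpropagated through the network like any other loss term, steering learning toward rule-consistent outputs.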

Learning-Reasoning (Bidirectional)

Integrates neural and symbolic modules bidirectionally so learning and reasoning refine each other in a unified architecture.

Probabilistic Neuro-Symbolic AI

Combines logic, probability, and learning to represent uncertainty while preserving structured reasoning.
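The possible-worlds semantics behind ProbLog-style systems can be sketched by exhaustive enumeration (the facts, rule, and probabilities below are illustrative; real systems use knowledge compilation rather than brute force): each probabilistic fact independently holds or fails, and the query's probability is the total weight of the worlds where the logic derives it.

```python
from itertools import product

# Probabilistic facts (ProbLog-style; numbers are illustrative).
facts = {"burglary": 0.1, "earthquake": 0.2}

def alarm(world):
    """Deterministic rules: alarm :- burglary.  alarm :- earthquake."""
    return world["burglary"] or world["earthquake"]

def query_prob(rule):
    """Sum the probability of every possible world where the rule holds."""
    total = 0.0
    names = list(facts)
    for values in product([True, False], repeat=len(names)):
        world = dict(zip(names, values))
        weight = 1.0
        for name, value in world.items():
            weight *= facts[name] if value else 1.0 - facts[name]
        if rule(world):
            total += weight
    return total

p = query_prob(alarm)  # noisy-or: 1 - 0.9 * 0.8 = 0.28
```

Enumeration is exponential in the number of facts, which is exactly why practical probabilistic neuro-symbolic systems compile the logic into tractable circuits; the semantics, however, is the one shown here.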

Human-in-the-Loop Neuro-Symbolic AI

Enables concept correction, rule injection, interactive debugging, and governance through interpretable symbolic components.

Research Gaps

  • Scalable benchmarks
  • Foundation model integration
  • Real-time control
  • Uncertainty tracking
  • Human-in-the-loop tools
  • Certifiable systems
  • Ethical alignment
  • Domain-constrained deployment

Publication Details

Type: Journal Article
Venue: Arabian Journal for Science and Engineering
Year: 2026
Published: December 9, 2025
Volume: 51
Pages: 35–67

Authors

Kamal Acharya, Houbing Song


Citation

@article{acharya2026comprehensive,
  author={Acharya, Kamal and Song, Houbing},
  title={A Comprehensive Review of Neuro-symbolic AI for Robustness, Uncertainty Quantification, and Intervenability},
  journal={Arabian Journal for Science and Engineering},
  year={2026},
  volume={51},
  pages={35--67},
  doi={10.1007/s13369-025-10887-3}
}