Enhancing AI's Reasoning with Active Retrieval and Monte Carlo Tree Search
A clear explanation of AR-MCTS, a framework that improves multimodal reasoning by combining active retrieval, Monte Carlo Tree Search, and process rewards.
Ph.D. Candidate in Information Systems, UMBC
Blog Topic
Articles on explainable AI, trustworthy AI, interpretability, cybersecurity, and transparent decision models.
A practical explanation of how Explainable AI and Neurosymbolic AI differ, where they overlap, and why both matter for trustworthy systems.
A practical explanation of the RVS framework for explainable AI, covering feature-based, design, representational, training-data, and stakeholder-aware explanations.
A practical explanation of the NeurIPS 2022 paper on decision trees that optimize for short rules by reducing the number of distinct attributes used in each explanation.
A practical explanation of what generative AI needs to become trustworthy AI, based on Lenat and Marcus's discussion of LLM limitations, reasoning, provenance, ethics, context, and Cyc-style symbolic knowledge.
A practical overview of explainable AI in cybersecurity, covering intrusion detection, malware analysis, anomaly detection, cyber-risk scoring, trust, debugging, and model improvement.