Akshat Dubey - Explainable AI Researcher

This is me. :)

🧪 Research & Project Updates

2025 June 01

New research work titled Surrogate Interpretable Graph for Random Decision Forest is now available on arXiv; see the Research section.

2024 December 13

New research work titled Persona Adaptable Strategies Make Large Language Models Tractable was published at the International Conference on Natural Language Processing and Information Retrieval (NLPIR 2024), held at Okayama University, Japan.

2024 October 29

Presented research work on Trustworthy AI in Public Health at the 151st Annual Meeting of the American Public Health Association in Minneapolis, United States.

2024 October 20

Presented the research article titled AI Readiness in Healthcare through Storytelling XAI at the First Workshop on Explainable Artificial Intelligence for the Medical Domain (EXPLIMED) at the European Conference on Artificial Intelligence (ECAI 2024) in Santiago de Compostela, Spain.

2024 April 01

Started working as a Teaching Assistant (TA) at Freie Universität Berlin, Germany.

Welcome! I'm building interpretable & explainable AI.

Hello! I’m Akshat Dubey, a Research Associate at the Robert Koch Institute and a PhD candidate at Freie Universität Berlin, working under the supervision of Dr. habil. Georges Hattab.

My research lies at the intersection of Explainable Artificial Intelligence (XAI), NLP, and human-centered AI. I focus on building interpretable, transparent, and efficient ML systems for critical domains like healthcare. My work draws on ensemble tree models, probabilistic modeling, optimization, statistics, and LLMs; current themes include:

  • 🔍 Uncertainty Quantification in XAI
    Designing principled methods to estimate and communicate the confidence of explanations, especially in high-stakes domains (a small sketch follows this list).

  • 🏥 XAI for Healthcare
    Embedding explainable and uncertainty-aware models into real-world clinical workflows to support informed decision-making.

  • 🧠 Persona-Adaptable LLM Strategies
    Tailoring large language models to align with users’ cognitive styles and mental models—enabling intuitive and adaptive interaction.

  • ⚖️ Regulatory-Compliant AI Design
    Implementing the “Nested Model for AI Design & Validation” to meet the EU AI Act’s transparency and safety requirements.

  • 🎲 Distributed Gaussian Process Learning
    Developing decentralized, trust-aware GP frameworks for collaborative and privacy-preserving learning (see the product-of-experts sketch after this list).

  • 🧠 LLMs with LoRA/QLoRA & Quantization
    Enabling efficient, on-device inference with quantized, low-rank-adapted LLMs for constrained environments (see the LoRA sketch after this list).

  • 🐳 Reproducible ML via Docker & Orchestration
    Building automated, scalable pipelines for training, deploying, and monitoring ML models with full reproducibility.

  • 📈 MILP for Interpretable Graph Design
    Using mixed-integer linear programming to optimize surrogate graph structures derived from decision forests for maximum interpretability (see the MILP sketch after this list).

  • 🤝 Human-in-the-Loop AI: GNNs, NLP, and Visual Analytics
    Fusing graph learning, language understanding, and interaction design to make AI decisions transparent, explorable, and trustworthy.
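
To make the uncertainty-quantification theme concrete, here is a minimal sketch (not the method from my papers): bootstrap the training data, refit a random forest on each resample, and report the spread of permutation importances so that an explanation ships with a confidence estimate. The dataset, model choice, and resample count are all illustrative.

```python
# Minimal sketch: bootstrap the training data, refit a random forest each time,
# and report mean +/- std of permutation importances as explanation confidence.
# Everything here (dataset, model, 20 resamples) is illustrative.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)

importances = []
for _ in range(20):
    idx = rng.integers(0, len(X), size=len(X))      # bootstrap resample
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X[idx], y[idx])
    result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
    importances.append(result.importances_mean)

importances = np.array(importances)
mean, std = importances.mean(axis=0), importances.std(axis=0)
for i in np.argsort(mean)[::-1][:5]:                # five most important features
    print(f"feature {i}: {mean[i]:.4f} +/- {std[i]:.4f}")
```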
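
A taste of the distributed-GP theme: in a product-of-experts scheme, each site fits a Gaussian process on its own data shard, and predictions are fused by precision (inverse-variance) weighting, so no raw data ever leaves a site. A minimal sketch assuming sklearn's GaussianProcessRegressor; the synthetic data, kernel, and three-site split are made up.

```python
# Minimal product-of-experts sketch for distributed GP regression:
# each "site" fits a GP on its own shard; predictions are fused by
# precision (inverse-variance) weighting. All settings are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(300)

shards = np.array_split(rng.permutation(300), 3)     # three local sites
X_test = np.linspace(-3, 3, 50).reshape(-1, 1)

mus, variances = [], []
for idx in shards:
    gp = GaussianProcessRegressor(kernel=RBF(), alpha=1e-2).fit(X[idx], y[idx])
    mu, sigma = gp.predict(X_test, return_std=True)
    mus.append(mu)
    variances.append(sigma**2)

# Product of experts: precisions add; the mean is precision-weighted.
precision = sum(1.0 / v for v in variances)
mu_poe = sum(m / v for m, v in zip(mus, variances)) / precision
var_poe = 1.0 / precision
print(mu_poe[:3], var_poe[:3])
```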
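
The LoRA idea itself fits in a few lines: freeze the pretrained weight and train only a rank-r update BA scaled by alpha/r, so fine-tuning touches a tiny fraction of the parameters. A minimal PyTorch sketch; the dimensions, rank, and scaling are illustrative, and in practice one would adapt a pretrained model through a library such as PEFT.

```python
# Minimal LoRA sketch in PyTorch: the pretrained layer is frozen and only
# a rank-r update (B @ A), scaled by alpha/r, is trained. Illustrative only.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():             # freeze pretrained weights
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(512, 512))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")              # 2 * r * 512 = 8192
```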
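
Finally, the MILP theme: deciding which candidate nodes survive into a surrogate graph is a discrete selection problem. The sketch below poses a toy version with scipy.optimize.milp; the relevance scores, unit costs, and budget are invented for illustration and are not the formulation from the paper.

```python
# Minimal MILP sketch: choose at most `budget` candidate nodes for a surrogate
# graph, maximizing total relevance. Scores, costs, and budget are made up;
# this shows only the shape of the problem, not the paper's formulation.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

relevance = np.array([0.9, 0.7, 0.6, 0.3, 0.2])      # per-node relevance scores
cost = np.ones(5)                                    # each node costs one "slot"
budget = 3                                           # surrogate size limit

res = milp(
    c=-relevance,                                    # milp minimizes, so negate
    constraints=LinearConstraint(cost[np.newaxis, :], ub=budget),
    integrality=np.ones(5),                          # x_i integer ...
    bounds=Bounds(0, 1),                             # ... and binary
)
print("selected nodes:", np.flatnonzero(res.x > 0.5))
```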

Outside of research, I’m a lifelong learner, “math nerd,” and “algorithm enthusiast” passionate about systems that empower humans through explainability. I enjoy building bridges between theory and practice.

Let’s connect and collaborate—reach out on LinkedIn.