I develop intelligent interactive AI systems as experimental platforms to study how humans understand, trust, and adapt to AI under uncertainty.
My work lies at the intersection of:
- 🤖 Human-AI Interaction — Understanding how people reason about, trust, and adapt to AI systems.
- 🧠 Computational Cognitive Modeling — Formalizing psychological theory to model human reasoning under uncertainty.
- ⚙️ Adaptive Intelligent Systems — Engineering AI systems that respond to and align with human cognition.
- 🔍 Explainable AI — Informing AI design to improve transparency, interpretability, and user confidence.
I pursue a cognition-driven, closed-loop research program to develop intelligent systems that adapt to human reasoning under uncertainty.
| 🧠 Question | 🏗️ Build | 🧪 Study | 🔢 Model | 🔄 Adapt |
|---|---|---|---|---|
| Formulate cognitive and interaction questions about how humans interpret, trust, and update beliefs about AI behavior. | Develop controlled, interactive AI systems as experimental platforms. | Conduct behavioral experiments to measure trust dynamics, belief updating, and mental model formation over time. | Construct computational models that infer latent cognitive states such as beliefs and uncertainty. | Embed these models into AI systems that dynamically adjust explanations, autonomy, and decision policies—enabling human–AI co-adaptation. |
This cycle moves from understanding human cognition to engineering human-adaptive intelligent systems grounded in empirical and computational insights.
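As an illustration of the "Model" step above, a minimal sketch of inferring a latent trust state is a Beta-Bernoulli model: the user's belief about the AI's reliability is a Beta distribution that is updated after each observed AI success or failure. This is a hypothetical, simplified example (the class name and interface are my own, not from any published system):

```python
import math

class BetaTrustModel:
    """Minimal Bayesian model of a user's latent belief about AI reliability.

    The belief is a Beta(alpha, beta) distribution over the probability
    that the AI's next output is correct; each observed success or failure
    performs a conjugate posterior update.
    """

    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        self.alpha = alpha  # pseudo-count of observed AI successes
        self.beta = beta    # pseudo-count of observed AI failures

    def observe(self, ai_was_correct: bool) -> None:
        """Update the posterior after one interaction with the AI."""
        if ai_was_correct:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def trust(self) -> float:
        """Posterior mean: expected probability the AI is correct."""
        return self.alpha / (self.alpha + self.beta)

    @property
    def uncertainty(self) -> float:
        """Posterior standard deviation of the reliability estimate."""
        n = self.alpha + self.beta
        var = (self.alpha * self.beta) / (n * n * (n + 1.0))
        return math.sqrt(var)


# Example: a user watches the AI succeed three times and fail once.
model = BetaTrustModel()
for outcome in [True, True, False, True]:
    model.observe(outcome)
print(round(model.trust, 3))  # posterior mean after 3 successes, 1 failure
```

An adaptive system could then condition its behavior on these estimates, for example offering richer explanations when `uncertainty` is high.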
| 📄 Publication | 📚 Venue | 🔗 Links |
|---|---|---|
| UNET-Based Segmentation for Diabetic Macular Edema Detection in OCT Images | ICCIS 2025, Springer LNNS | |