Explainable Artificial Intelligence
Explainable Artificial Intelligence (XAI) research focuses on making AI systems more transparent, interpretable, and understandable for users. The aim is to build AI models and interfaces that are not only accurate but also explain their decisions clearly and reliably. This is essential for fostering trust and accountability in AI applications across domains such as healthcare, industry, mobility, workplaces, and private life. As AI systems become increasingly integrated into our daily lives, it is critical to design strategies that ensure users can understand these systems and interact with them confidently.

In our research group, we use state-of-the-art techniques such as reinforcement learning, computational rationality, and generative AI to develop, implement, and evaluate explainable models and interfaces. Our primary focus is on enhancing the interpretability of AI, modeling human decision-making, and adapting explanations to align with users' contexts, goals, and tasks.
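The group's concrete methods are not detailed here, but as a minimal, self-contained sketch of what "explaining a model's decision" can mean in practice, the example below implements permutation feature importance, a common post-hoc interpretability technique: shuffle one input feature and measure how much the model's accuracy drops. The toy dataset, the hand-made classifier, and all names are hypothetical illustrations, not the group's actual models.

```python
import random

# Hypothetical toy dataset: each row is (feature_0, feature_1);
# the label depends only on feature_0.
random.seed(0)
data = [(random.random(), random.random()) for _ in range(200)]
labels = [1 if x0 > 0.5 else 0 for x0, _ in data]

def model(x0, x1):
    # A stand-in "trained" classifier that thresholds feature_0
    # and ignores feature_1 entirely.
    return 1 if x0 > 0.5 else 0

def accuracy(rows, ys):
    return sum(model(*r) == y for r, y in zip(rows, ys)) / len(rows)

baseline = accuracy(data, labels)

def permutation_importance(feature_idx):
    # Shuffle one feature column across rows, then measure the
    # accuracy drop relative to the unshuffled baseline.
    col = [row[feature_idx] for row in data]
    random.shuffle(col)
    permuted = [
        tuple(col[i] if j == feature_idx else v for j, v in enumerate(row))
        for i, row in enumerate(data)
    ]
    return baseline - accuracy(permuted, labels)

drop_0 = permutation_importance(0)  # large drop: feature_0 drives predictions
drop_1 = permutation_importance(1)  # zero drop: feature_1 is never used
print(drop_0, drop_1)
```

Shuffling feature_0 breaks the link between input and label, so accuracy falls sharply; shuffling feature_1 changes nothing because the model never reads it. The resulting importance scores are one simple form of explanation that can then be communicated and adapted to a user's context.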