XAI/ADVERSITY
This area explores how to make AI more transparent and trustworthy by developing methods that explain, test, and strengthen AI systems in complex or adversarial environments.
Towards Virtual Patient-Based Training
This project develops interactive virtual patients: human-like conversational agents whose verbal and non-verbal behaviors resemble face-to-face interaction. Built with Unity 3D, the system offers an active, cost-efficient, and personalized learning experience. Trainees engage with diverse virtual personas (e.g., varying gender and ethnicity) that exhibit different levels of agency (e.g., confidence, commitment, talkativeness), practicing targeted communication strategies. The virtual patients provide naturalistic feedback based on user input, simulating real-world interactions to support skill development and competency assessment.
- Plant C, Hubig N, Shao J, Ottley A, Gou L, Möller T, Perer A, Lex A, Crisan A
Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
- A Abhinav, A Amle, R Bhandari, M Riechert, Nina Hubig
55th Hawaii International Conference on System Sciences (HICSS 2022)
- Exploiting explainable features to enhance DNN robustness
Nina Hubig
under review
- Modeling Interaction Attribution Using Piece-wise Linear Approximation
M Iqbal, Nina Hubig
under review
- How to Defend Your Autonomous Vehicle Against Perturbed Traffic Signs
Nushrat Humaira, Nina Hubig
under review
- Quantifying Uncertainty in Model Agnostic Black-Box Explainability Methods
Brandon Walker, Nina Hubig
KDD under review
- A Robust Adversarial Ensemble with Causal (Feature Interaction) Interpretations
Chunheng Zhao, Pierluigi Pisu, Nina Hubig
- Explainable ≠ Secure: Membership Inference Attack Against Explainability
Farah Alshanik, Nina Hubig, Amy Apon