SCHOOL OF COMPUTER SCIENCE
TECHNICAL WORKSHOP SERIES: Introducing LIME and SHAP, Two Leading Techniques for Explaining Machine Learning Models
Presenter: Nasrin Tavakoli
Date: Friday, November 17th, 2023
Time: 1:00 PM – 2:00 PM
Location: 4th Floor (Workshop space) at 300 Ouellette Avenue (School of Computer Science Advanced Computing Hub)
LATECOMERS WILL NOT BE ADMITTED once the presentation has begun.
The session opens with a review of Explainable AI (XAI) and its role in making complex machine learning models interpretable. It then examines LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations), two leading XAI methods. Participants will learn how LIME builds locally faithful surrogate models around individual predictions, while SHAP uses cooperative game theory to attribute each prediction to its features, with per-feature attributions that aggregate into global feature importance. A comparison of LIME and SHAP will highlight their distinct strengths, helping attendees choose the right method for their own work.
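To make the game-theoretic idea behind SHAP concrete, here is a minimal sketch (pure Python, not the shap library; the function name and toy model are illustrative) that computes exact Shapley values for one prediction by enumerating feature coalitions, with absent features replaced by baseline values:

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for the features of a single input x.

    predict:  black-box model, callable on a feature vector
    baseline: reference values substituted for 'absent' features
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Model output with coalition S plus feature i, vs. S alone
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                # Shapley weight for a coalition of this size
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (predict(with_i) - predict(without_i))
    return phi

# Toy linear model: for f(x) = 2*x0 + 3*x1 the attributions are exact
f = lambda v: 2 * v[0] + 3 * v[1]
print(shapley_values(f, x=[1.0, 1.0], baseline=[0.0, 0.0]))  # [2.0, 3.0]
```

By the efficiency property, the attributions sum to `predict(x) - predict(baseline)`; the exponential cost of enumerating all coalitions is exactly what practical SHAP variants (e.g. Kernel SHAP, Tree SHAP) approximate or exploit structure to avoid.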
Topics:
- A review of Explainable AI (XAI)
- LIME: Local Interpretable Model-agnostic Explanations
- SHAP: SHapley Additive exPlanations
- Comparing LIME and SHAP

Prerequisites:
- Basic understanding of machine learning and AI
- Understanding of model training and evaluation
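LIME's local-surrogate idea can likewise be sketched in a few lines (assuming numpy is available; this is an illustrative simplification, not the lime library): perturb the instance, query the black box, weight samples by proximity, and read the explanation off a weighted linear fit.

```python
import numpy as np

def lime_explain(predict, x, num_samples=2000, kernel_width=0.75, rng=None):
    """Minimal LIME-style local explanation for one instance x.

    Perturbs x with Gaussian noise, weights samples by an exponential
    proximity kernel, and fits a weighted linear surrogate; the surrogate's
    coefficients are the local explanation.
    """
    rng = np.random.default_rng(rng)
    d = len(x)
    Z = x + rng.normal(scale=1.0, size=(num_samples, d))   # perturbed neighbours
    y = np.array([predict(z) for z in Z])                  # black-box outputs
    dist = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)           # proximity weights
    # Weighted least squares via the sqrt-weight trick
    A = np.hstack([Z, np.ones((num_samples, 1))])          # intercept column
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)
    return coef[:d]                                        # drop the intercept

# A black box that is nonlinear globally but nearly linear around x = (1, 0)
f = lambda v: v[0] ** 2 + 3 * v[1]
print(lime_explain(f, np.array([1.0, 0.0]), rng=0))
```

Near x = (1, 0) the local gradient of f is (2, 3), so the fitted coefficients should land close to those values; the real lime library adds interpretable feature representations, feature selection, and discretization on top of this core loop.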
Nasrin Tavakoli is a Ph.D. student in Computer Science at the University of Windsor, specializing in Artificial Intelligence and Machine Learning. During her master's program, she worked on breast cancer diagnosis based on deep features. Her Ph.D. research continues in Artificial Intelligence, with a focus on Explainable AI.