The School of Computer Science would like to present…
Date: Monday, May 5th, 2025
Time: 10:00 AM
Location: Essex Hall 122
Ensuring transparency and trust in machine learning models is critical for deploying automated inspection systems in industrial environments. Vision-based defect detection methods, often powered by deep learning models such as Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs), have demonstrated strong performance in identifying anomalies in manufacturing. However, these models are typically black boxes, making their decision-making processes difficult to interpret.
This research focuses on integrating explainable artificial intelligence (XAI) techniques into vision-based defect detection systems to enhance interpretability, reliability, and human trust. XAI methods can be categorized by their design (intrinsic vs. post-hoc), scope (local vs. global), and model dependency (model-specific vs. model-agnostic). In this study, we plan to apply and evaluate state-of-the-art post-hoc explainability methods, including Grad-CAM and SHAP, on CNN- and Transformer-based defect detection models. By analyzing how effectively these techniques highlight true defect regions and support transparent decision-making, this work aims to bridge the gap between high-performing vision models and the need for explainability in real-world industrial applications.
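To make the kind of post-hoc explanation concrete, the following is a minimal, illustrative Grad-CAM sketch in PyTorch. It is not the study's implementation: the ResNet-18 backbone (standing in for a CNN defect detector), the choice of layer4 as the target layer, and the 224x224 input size are assumptions made only for this example.

import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)  # placeholder for a CNN defect detector
model.eval()

_acts = {}

def _save_activation(_module, _inputs, output):
    # Keep the feature maps (with their autograd graph) from the hooked block.
    _acts["feat"] = output

model.layer4.register_forward_hook(_save_activation)

def grad_cam(image, class_idx=None):
    # Return an [H, W] heatmap of the regions driving the predicted class.
    logits = model(image)
    if class_idx is None:
        class_idx = int(logits.argmax(dim=1))
    score = logits[0, class_idx]
    feat = _acts["feat"]                              # [1, C, h, w] feature maps
    grads = torch.autograd.grad(score, feat)[0]       # d(score)/d(feature maps)
    weights = grads.mean(dim=(2, 3), keepdim=True)    # per-channel importance
    cam = F.relu((weights * feat).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # scale to [0, 1]
    return cam.squeeze()

# Usage: overlay the heatmap on the inspected image to check whether the
# highlighted pixels coincide with the annotated defect region.
heatmap = grad_cam(torch.randn(1, 3, 224, 224))

In the planned evaluation, heatmaps of this kind would be compared against annotated defect regions, with SHAP providing a complementary, model-agnostic attribution view.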
Keywords: Explainable Artificial Intelligence (XAI), Machine Learning, Deep Learning, Defect Detection, Convolutional Neural Networks (CNNs), Industrial Inspection, Model Interpretability
External Reader: Dr. Mohammad Hassanzadeh
Internal Reader: Dr. Dan Wu
Internal Reader: Dr. Hamidreza Koohi
Advisor(s): Dr. Ziad Kobti, Dr. Narayan Kar