MSc Thesis Defense Announcement of Chris Khalil: "Input Discretization: Defense Against First Order Adversarial Attacks on Machine Learning Classifiers"

Tuesday, December 14, 2021 - 16:30 to 18:00


The School of Computer Science is pleased to present… 

MSc Thesis Defense by: Chris Khalil 

Date: Tuesday December 14, 2021  
Time:  4:30pm to 6:00pm 
Passcode: If interested in attending this event, contact the Graduate Secretary with sufficient notice before the event to obtain the passcode.


Machine learning models play a critical role in solving large-scale data problems on programmable computers, and Artificial Neural Networks provide a powerful general framework for encoding the structure of such data. For Artificial Neural Networks, efficient algorithms such as stochastic gradient descent exist for finding good network weights. However, machine learning classifiers are vulnerable to adversarial attacks: because gradient descent is the key optimization algorithm used in these models, its gradients can be recomputed by an attacker to push inputs into different classification regions on the loss landscape, so it is necessary to develop defenses that do not expose useful gradients. In this thesis, we develop a novel input discretization algorithm for defending Artificial Neural Networks against first-order adversarial attacks. It operates by transforming continuous input values into a discrete form: we create a set of contiguous intervals (or bins) spanning the range of the input variable and map each value to its bin. In contrast to many other defenses against adversarial attacks, the discretization-based procedures we propose are faster to train and make it impossible for an attacker to recompute gradients. Although training on adversarial examples is an effective defense about fifty percent of the time on relatively simple datasets such as MNIST, input discretization is most effective on more complex datasets like CIFAR and ImageNet.
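The binning step described in the abstract can be sketched as follows. This is a minimal illustration of equal-width input discretization, not the thesis's actual implementation; the bin count, input range, and function name are illustrative assumptions.

```python
import numpy as np

def discretize(x, n_bins=8, lo=0.0, hi=1.0):
    """Map continuous inputs in [lo, hi] onto the centers of n_bins
    contiguous, equal-width intervals.

    n_bins, lo, and hi are illustrative choices, not values taken
    from the thesis.
    """
    x = np.clip(np.asarray(x, dtype=float), lo, hi)
    width = (hi - lo) / n_bins
    # Index of the bin each value falls into (last bin closed on the right).
    idx = np.minimum(((x - lo) / width).astype(int), n_bins - 1)
    # Replace each value with its bin center, removing the fine-grained
    # perturbations that first-order attacks rely on.
    return lo + (idx + 0.5) * width

# Example: four bins over [0, 1] collapse nearby pixel intensities
# onto the same representative value.
pixels = np.array([0.03, 0.49, 0.51, 0.97])
print(discretize(pixels, n_bins=4))  # → [0.125 0.375 0.625 0.875]
```

Because the mapping is piecewise constant, its gradient is zero almost everywhere, which is one way such a preprocessing step can frustrate first-order (gradient-based) attacks.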
Keywords: CNN, adversarial attack, discretization, linearity, high dimensionality 

MSc Thesis Committee:  

Internal Reader: Sherif Saad         
External Reader: Mohammad Hassanzadeh           
Advisor: Alioune Ngom 
Chair:    TBD 


5113 Lambton Tower 401 Sunset Ave. Windsor ON, N9B 3P4 (519) 253-3000 Ext. 3716 (working remotely)