MSc Thesis Defense Announcement of Vishakha Gautam: "Constructing Adversarial Examples in Question-Answering Systems"

Thursday, March 16, 2023 - 14:30 to 16:00


The School of Computer Science is pleased to present…

MSc Thesis Defense by: Vishakha Gautam

Date: Thursday March 16, 2023
Time: 2:30 PM – 4:00 PM
Location: Essex Hall, Room 122
Reminders: 1. Two-part attendance mandatory (sign-in sheet, QR Code)
2. Arrive at least 5-10 minutes before the event starts - LATECOMERS WILL NOT BE ADMITTED. Note that due to demand, admission is not guaranteed once the room has reached capacity, even if you arrive early.
3. Please be respectful of the presenter by NOT knocking on the door for admittance once the door has been closed, whether or not the presentation has begun. If the room is at capacity, overflow seating (i.e., sitting on the floor) is not permitted, as this violates the Fire Safety code.
4. Be respectful of the decision of the advisor/host of the event if you are not given admittance. The School of Computer Science has numerous events occurring in the near future.


As an NLP pre-trained model built on the transformer architecture and attention mechanism, BERT has proven highly effective in developing various applications such as text classification, machine translation, and question-answering systems. Despite these recent advances, however, the generalizability of such models remains a challenging issue. In this thesis, we study the generalizability of prediction models in question-answering systems, particularly on unanswerable examples. To gain insight into where the models fail to generalize, we construct adversarial examples that are challenging for the model to predict correctly. The adversarial examples are obtained by pairing each question with a different context from the same dataset. In constructing the adversarial examples, we ensure that the new context does not contain any answer to the question it is paired with. To maximally challenge the prediction models, among the many candidate contexts for a given question, we select the one with the highest text-similarity score to the question's original context. The proposed method is applied to SQuAD, a benchmark question-answering dataset, with three deep learning models: BERT, LSTM, and GRU. Our experiments show that the examples constructed by the proposed method drastically reduce the performance of the models, from a range of 3.19-6.4% to a range of 0.03-0.18%, demonstrating the effectiveness of the method. The experiments also show that the existing models are capable of learning from the constructed examples, leading to enhanced performance.

Keywords: BERT, Data augmentation, QA system, Robustness, Adversarial Example

MSc Thesis Committee:

Internal Reader: Dr. Jianguo Lu
External Reader: Dr. Ning Zhang
Advisor: Dr. Jessica Chen
Chair: Dr. Curtis Bright




5113 Lambton Tower 401 Sunset Ave. Windsor ON, N9B 3P4 (519) 253-3000 Ext. 3716