MSc Proposal "Feasibility of Adversarial Attacks against Machine Learning Models" by Kimia Tahayori


The School of Computer Science is pleased to present…

Feasibility of Adversarial Attacks against Machine Learning Models
MSc Thesis Proposal by: Kimia Tahayori

Date: Friday, January 12, 2024

Time: 1:00 pm - 2:00 pm

Location: Essex Hall Room 122

Abstract:

In an era where machine learning (ML) models are integral to sectors ranging from healthcare to finance, their security against adversarial attacks is crucial. This proposal describes a comprehensive investigation into the vulnerabilities of ML models exposed to adversarial attacks in both controlled and real-world scenarios. Our primary objectives are to analyze the intrinsic properties of datasets and ML model architectures that influence adversarial attack success, and to evaluate the practicality of such attacks in real-world deployments. By bridging the gap between controlled experiments and the unpredictability of production environments, we aim to offer a more complete understanding of ML model vulnerabilities. A dual-environment analysis will provide comparative insights into the feasibility and practical implications of these attacks. Through this approach, our study seeks to enhance model training processes, improving resilience against adversarial tactics and contributing vital insights for the development of more robust ML systems.
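As general background (not part of the proposal itself): a classic adversarial attack, the Fast Gradient Sign Method (FGSM), perturbs an input in the direction of the sign of the loss gradient so that a model misclassifies it. A minimal sketch on a logistic-regression model, where the weights, input, and perturbation budget are all hypothetical illustrative values:

```python
import numpy as np

# Hypothetical logistic-regression "model": weights, bias, and input are
# illustrative values, not taken from the proposal.
w = np.array([2.0, -1.0])
b = 0.0

def predict_proba(x):
    """Probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm(x, y, eps):
    """Fast Gradient Sign Method: step in the direction that increases
    the cross-entropy loss for the true label y."""
    grad_x = (predict_proba(x) - y) * w  # dL/dx for logistic regression
    return x + eps * np.sign(grad_x)

x = np.array([0.3, 0.1])  # clean input, true label 1
y = 1

x_adv = fgsm(x, y, eps=0.3)

print(predict_proba(x) > 0.5)      # True  -> clean input classified as 1
print(predict_proba(x_adv) > 0.5)  # False -> perturbed input flips to class 0
```

Even this toy example shows the core phenomenon the proposal studies: a small, structured perturbation (here, ±0.3 per feature) is enough to change the model's decision.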

Thesis Committee:

Internal Reader: Dr. Arunita Jaekel

External Reader: Dr. Mitra Mirhassani

Advisors: Dr. Sherif Saad and Dr. Saeed Samet
