Enhancing Accountability Against Poisoning Attacks - PhD Comprehensive Exam by: Safiia Mohammed

Thursday, August 21, 2025 - 14:00

The School of Computer Science would like to present… 

 

Enhancing Accountability Against Poisoning Attacks.

PhD Comprehensive Exam by: Safiia Mohammed

 

Date: Thursday, August 21st, 2025

Time: 2:00 pm

Location: Essex Hall, Room 122

 

Abstract:

The growing adoption of Machine Intelligence (MI) across critical sectors such as healthcare, finance, and public services raises concerns about trust, reliability, and resilience in real-world applications. This presentation discusses the threats posed by poisoning attacks, existing defences, and the role of accountability in Machine Intelligence. Among adversarial threats, poisoning attacks, which manipulate training data or model gradients, remain some of the most challenging to counter, particularly in Federated Learning (FL). While FL's decentralized architecture preserves privacy, it also expands the attack surface and complicates detection, introducing vulnerabilities such as model poisoning, backdoor injection, and gradient manipulation. Existing defences, from anomaly detection to Byzantine-robust aggregation, address specific threats but remain limited by challenges such as non-IID client data, insufficient logging, and the inherent trade-offs between privacy and security.
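As a concrete illustration of the kind of defence mentioned above, the following minimal Python sketch applies a coordinate-wise trimmed mean, one common form of Byzantine-robust aggregation. The function and parameter names (robust_aggregate, trim_ratio) are illustrative assumptions and are not taken from this work.

```python
# A minimal sketch of Byzantine-robust aggregation in federated learning,
# assuming client updates arrive as equally shaped, flattened NumPy arrays.
# Names (robust_aggregate, trim_ratio) are illustrative, not from the talk.
import numpy as np

def robust_aggregate(client_updates, trim_ratio=0.1):
    """Aggregate client updates with a coordinate-wise trimmed mean.

    client_updates: list of np.ndarray, one flattened update per client.
    trim_ratio: fraction of extreme values discarded at each end,
                which bounds the influence of poisoned updates.
    """
    stacked = np.stack(client_updates)          # shape: (n_clients, n_params)
    n_clients = stacked.shape[0]
    k = int(np.floor(trim_ratio * n_clients))   # number to trim per side
    sorted_vals = np.sort(stacked, axis=0)      # sort each coordinate independently
    trimmed = sorted_vals[k:n_clients - k]      # drop k smallest and k largest values
    return trimmed.mean(axis=0)                 # robust estimate of the global update

# Example: 10 honest clients plus 2 clients sending scaled (poisoned) updates.
rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 0.1, size=100) for _ in range(10)]
poisoned = [np.full(100, 50.0) for _ in range(2)]
global_update = robust_aggregate(honest + poisoned, trim_ratio=0.2)
```

Trimming a fixed fraction per coordinate bounds the influence a small group of poisoned clients can exert, though it may also discard some honest signal when client data are strongly non-IID.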

As a result, accountability is essential for trustworthy MI. Applying accountability mechanisms to poisoning defence strategies helps detect malicious behaviour and provides evidence of system integrity, making MI applications trustworthy and resilient in adversarial environments.
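To illustrate one way accountability mechanisms can provide evidence of system integrity, the sketch below maintains a hash-chained audit log of accepted client updates, so that later tampering with the record is detectable. The class and field names are hypothetical assumptions and do not describe the defence proposed in this work.

```python
# A minimal sketch of an accountability mechanism: a hash-chained audit log
# recording a digest of every client update the server accepts. Any edit to
# an earlier entry breaks the chain and is caught by verify().
import hashlib
import json
import time

class UpdateAuditLog:
    def __init__(self):
        self.entries = []
        self.last_hash = "0" * 64          # genesis value for the chain

    def record(self, client_id, round_num, update_digest):
        """Append a tamper-evident entry for one accepted client update."""
        entry = {
            "client_id": client_id,
            "round": round_num,
            "update_digest": update_digest,  # e.g. SHA-256 of the serialized update
            "timestamp": time.time(),
            "prev_hash": self.last_hash,
        }
        # Chain each entry to the previous one so later edits break verification.
        entry_bytes = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(entry_bytes).hexdigest()
        self.entries.append(entry)
        self.last_hash = entry["hash"]

    def verify(self):
        """Recompute the chain and confirm no entry was altered or removed."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev_hash"] != prev or recomputed != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```

Such a log does not itself block a poisoning attempt; it complements robust aggregation by preserving evidence of which client submitted which update in which round.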

 

Keywords:

 

Federated Learning, Poisoning Attacks, Accountability, Anomaly Detection.

 

Doctoral Committee:

External Reader: Dr. Abdulkadir Hussein

Internal Reader: Dr. Imran Ahmad

Internal Reader: Dr. Luis Rueda

Advisor(s): Dr. Alioune Ngom and Dr. Dima Alhadidi

 
