The School of Computer Science at the University of Windsor is pleased to present...
Identifying Bias in NLP Models - Part 1
Presenter: Reem Al-Saidi
Date: Tuesday, February 27th, 2024
Time: 2:30-3:30 PM
Location: 4th Floor (Workshop space) at 300 Ouellette Avenue (School of Computer Science Advanced Computing Hub)
Abstract:
Dive into the complex world of pre-trained language models with our workshop, "Identifying Bias in NLP Models." This workshop explores bias identification and mitigation, equipping participants with the knowledge and skills to work effectively with pre-trained BERT models. We'll explore the transformative power of attention mechanisms and the nuances of working with pre-trained models, setting the stage for a deeper dive into the complexities of bias detection. Different tools for fairness estimation will be discussed, including the Word Embedding Association Test (WEAT). Participants will learn how to apply the WEAT test to evaluate biases across different demographic groups and linguistic dimensions, gaining valuable insight into the ethical considerations surrounding AI technologies.
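As a small preview of the WEAT material, here is a minimal sketch of the WEAT effect size (the standard formulation: difference of mean target-attribute associations, normalized by the pooled standard deviation). The word vectors below are made-up toy examples, not real BERT embeddings, and the set names are illustrative only:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    """s(w, A, B): mean similarity of w to attribute set A minus to set B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """WEAT effect size for target sets X, Y and attribute sets A, B."""
    s_X = [association(x, A, B) for x in X]
    s_Y = [association(y, A, B) for y in Y]
    # Normalize the mean difference by the std. dev. over all target words
    return (np.mean(s_X) - np.mean(s_Y)) / np.std(s_X + s_Y, ddof=1)

if __name__ == "__main__":
    # Toy 2-D "embeddings" (hypothetical, for illustration only)
    X = np.array([[0.9, 0.1], [0.8, 0.2]])  # target set 1
    Y = np.array([[0.1, 0.9], [0.2, 0.8]])  # target set 2
    A = np.array([[1.0, 0.0]])              # attribute set 1
    B = np.array([[0.0, 1.0]])              # attribute set 2
    # Positive score: X is more associated with A than Y is
    print(weat_effect_size(X, Y, A, B))
```

In practice the vectors would come from a real embedding model, and a permutation test over the target sets is used to assess significance; the workshop walks through that full procedure.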
Workshop Outline:
- Overview of Transformer & attention mechanism (Background)
- Bias in pre-trained language models (Parts 1 & 2)
- How can we measure bias? (Part 3)
- Ways to mitigate bias
- BERT Model (Background)
Prerequisites:
- Familiarity with natural language processing (NLP) concepts
- Basic to intermediate Python programming
- Interest in emerging technologies and fairness in NLP models
About the Presenter:
Reem Al-Saidi is a Ph.D. student in the School of Computer Science at the University of Windsor. Her research focuses on applying privacy and security techniques in AI tools, building trust and reputation in various AI applications, and assessing bias and fairness in NLP models.