The School of Computer Science Presents...
Presenter: Reem Al-Saidi
Date: Monday, June 2nd, 2025
Time: 10:00 am
Location: 4th Floor (Lecture space) at 300 Ouellette Avenue (School of Computer Science Advanced Computing Hub)
Step into the complex world of pre-trained language models with our workshop, "Identifying BIAS in NLP models."
This workshop explores bias identification and mitigation, equipping participants with the knowledge and skills to work effectively with pre-trained BERT models.
We'll explore the transformative power of attention mechanisms and the nuances of working with pre-trained models, setting the stage for a deeper dive into the complexities of bias detection.
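The attention mechanism mentioned above can be sketched in a few lines. This is a minimal, illustrative implementation of scaled dot-product attention with toy 2-d vectors standing in for learned query, key, and value projections; a real Transformer uses learned projection matrices and multiple heads.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    # Scaled dot-product attention: each query attends to every key,
    # and the output is the attention-weighted average of the values.
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[i] for w, v in zip(weights, V))
                    for i in range(len(V[0]))])
    return out

# Toy example: the query is aligned with the first key,
# so the output leans toward the first value vector.
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
print(attention(Q, K, V))
```

The same weighting pattern, applied across all token pairs in a sentence, is what lets BERT build context-dependent word representations.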
Different tools for fairness estimation will be discussed, including the WEAT (Word Embedding Association Test). Participants will learn how to apply the WEAT test to evaluate biases across different demographic groups and linguistic dimensions, gaining valuable insights into the ethical considerations surrounding AI technologies.
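As a preview of the WEAT discussion, here is a hedged sketch of the WEAT effect size: the difference in mean association between two target word sets (X, Y) and two attribute sets (A, B), normalized by the standard deviation of associations. The 3-d vectors below are hypothetical hand-made embeddings for illustration; in the workshop the vectors would come from a real model.

```python
import math
import statistics

def cosine(u, v):
    # Cosine similarity between two embedding vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def association(w, A, B):
    # s(w, A, B): mean similarity to attribute set A minus to attribute set B
    return (sum(cosine(w, a) for a in A) / len(A)
            - sum(cosine(w, b) for b in B) / len(B))

def weat_effect_size(X, Y, A, B):
    # Difference of mean target-set associations, normalized by the
    # standard deviation of associations over all target words.
    s = [association(w, A, B) for w in X + Y]
    mean_x = sum(s[:len(X)]) / len(X)
    mean_y = sum(s[len(X):]) / len(Y)
    return (mean_x - mean_y) / statistics.stdev(s)

# Hypothetical toy embeddings: X leans toward A, Y leans toward B,
# so the effect size comes out positive (an association is present).
A = [(1.0, 0.1, 0.0), (0.9, 0.2, 0.1)]
B = [(0.0, 1.0, 0.1), (0.1, 0.9, 0.2)]
X = [(0.8, 0.1, 0.2), (1.0, 0.0, 0.1)]
Y = [(0.1, 0.8, 0.0), (0.2, 1.0, 0.1)]
print(weat_effect_size(X, Y, A, B))
```

An effect size near zero suggests no measured association; larger magnitudes indicate stronger associations between the target and attribute sets.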
Participants will explore the fundamentals of pre-trained language models using Google Colab, working through practical coding exercises: loading pre-trained BERT models and completing simple text-processing tasks.
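The kind of exercise described above might look like the following sketch, which assumes the Hugging Face `transformers` library (and PyTorch) are installed, as they are by default in Google Colab: load a pre-trained BERT, tokenize a sentence, and inspect the contextual embeddings.

```python
# Hedged sketch of a Colab warm-up exercise; assumes
# `pip install transformers torch` has been run.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentence = "Identifying bias in NLP models."
inputs = tokenizer(sentence, return_tensors="pt")

# WordPiece tokens, with BERT's special [CLS] and [SEP] markers
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(tokens)

with torch.no_grad():
    outputs = model(**inputs)

# One 768-dimensional contextual vector per token for bert-base
print(outputs.last_hidden_state.shape)
```

These per-token vectors are the raw material for the bias measurements covered later in the workshop.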
- BERT Model (Background)
- Overview of Transformer & attention mechanism (Background)
- Bias in pre-trained language models
- How can we measure bias?
- Ways to mitigate bias
- Hands-on exercises
- Familiarity with natural language processing (NLP) concepts
- Basic to intermediate Python programming
- Interest in emerging technologies and fairness in NLP models
Reem is a Ph.D. student in the School of Computer Science at the University of Windsor. Her research focuses on applying privacy and security techniques in AI tools, establishing trust and reputation in various AI applications, and assessing bias and fairness in NLP models.