EvoNorm Variants in Augmentation and GAN Training (2nd Offering)
Presenter: Reem Al-Saidi
Date: Tuesday, October 28th, 2025
Time: 10:00 AM
Location: Workshop Space, 4th Floor - 300 Ouellette Ave., School of Computer Science Advanced Computing Hub
Generative Adversarial Networks (GANs) are notoriously challenging to train due to their unstable adversarial dynamics and extreme sensitivity to architectural choices. This workshop explores how evolved normalization-activation layers (EvoNorms) address these challenges through automated neural architecture search. We'll examine the critical role of normalization in GAN training, understand how data augmentation interacts with normalization strategies, and analyze the performance differences between batch-dependent (EvoNorm-B0) and batch-independent (EvoNorm-S0) variants. Through the lens of the BigGAN-deep experiments on ImageNet generation, participants will gain practical insights into selecting appropriate normalization strategies for their own GAN projects.
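To make the batch-dependent vs. batch-independent distinction concrete, below is a minimal PyTorch sketch of the two layers following the published EvoNorm-B0 and EvoNorm-S0 formulas. The class names, group count, momentum, and epsilon values are illustrative choices, not the workshop's reference implementation.

```python
import torch
import torch.nn as nn


class EvoNormS0(nn.Module):
    """Batch-independent variant: x * sigmoid(v * x) / group_std(x) * gamma + beta."""

    def __init__(self, num_channels, groups=32, eps=1e-5):
        super().__init__()
        assert num_channels % groups == 0, "channels must be divisible by groups"
        self.groups, self.eps = groups, eps
        self.gamma = nn.Parameter(torch.ones(1, num_channels, 1, 1))
        self.beta = nn.Parameter(torch.zeros(1, num_channels, 1, 1))
        self.v = nn.Parameter(torch.ones(1, num_channels, 1, 1))

    def _group_std(self, x):
        # Standard deviation over channel groups within each sample (Group Norm style).
        n, c, h, w = x.shape
        grouped = x.view(n, self.groups, c // self.groups, h, w)
        std = torch.sqrt(grouped.var(dim=(2, 3, 4), keepdim=True, unbiased=False) + self.eps)
        return std.expand_as(grouped).reshape(n, c, h, w)

    def forward(self, x):
        # Statistics are per sample, so the layer behaves identically at any batch size.
        return x * torch.sigmoid(self.v * x) / self._group_std(x) * self.gamma + self.beta


class EvoNormB0(nn.Module):
    """Batch-dependent variant: x / max(sqrt(batch_var), v * x + instance_std) * gamma + beta."""

    def __init__(self, num_channels, momentum=0.9, eps=1e-5):
        super().__init__()
        self.momentum, self.eps = momentum, eps
        self.gamma = nn.Parameter(torch.ones(1, num_channels, 1, 1))
        self.beta = nn.Parameter(torch.zeros(1, num_channels, 1, 1))
        self.v = nn.Parameter(torch.ones(1, num_channels, 1, 1))
        self.register_buffer("running_var", torch.ones(1, num_channels, 1, 1))

    def forward(self, x):
        if self.training:
            # Per-channel variance over the batch and spatial dimensions.
            var = x.var(dim=(0, 2, 3), keepdim=True, unbiased=False)
            self.running_var.mul_(self.momentum).add_((1.0 - self.momentum) * var.detach())
        else:
            var = self.running_var
        instance_std = torch.sqrt(x.var(dim=(2, 3), keepdim=True, unbiased=False) + self.eps)
        denom = torch.maximum(torch.sqrt(var + self.eps), self.v * x + instance_std)
        return x / denom * self.gamma + self.beta
```

The practical difference is visible in the code: S0 computes its statistics within each sample, so small batches and heavy augmentation do not perturb them, whereas B0's batch variance couples each sample's normalization to everything else in the batch.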
Workshop Outline:
- GAN Training Fundamentals
- The Normalization Problem in GANs
- Data Augmentation
- Hands-on Implementation (a usage sketch follows this outline)
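For the hands-on portion, the hypothetical toy block below shows how a layer like EvoNormS0 (from the sketch above) slots in where a BatchNorm + ReLU pair would normally sit. It is purely illustrative and is not the BigGAN-deep architecture discussed in the workshop.

```python
import torch
import torch.nn as nn

# Hypothetical upsampling block for a toy generator; EvoNormS0 replaces the
# usual nn.BatchNorm2d(out_ch) + nn.ReLU() pair.
def up_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode="nearest"),
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        EvoNormS0(out_ch),
    )

block = up_block(128, 64)
x = torch.randn(4, 128, 8, 8)   # small batch, common when GAN memory budgets are tight
print(block(x).shape)           # torch.Size([4, 64, 16, 16])
```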
Prerequisites:
- Basic understanding of neural networks and deep learning concepts
- Familiarity with convolutional neural networks (CNNs)
- Basic knowledge of gradient descent and backpropagation
Presenter Bio:
Reem Al-Saidi is a PhD student in Computer Science at the University of Windsor. Her research focuses on privacy-preserving machine learning, with a particular emphasis on large language models (LLMs) for health and genomic data in cloud environments. Her current work explores secure data sharing and publishing through deep learning–based synthetic data generation.