EvoNorms: Revolutionizing Neural Networks with Unified Normalization-Activation Layers (1st Offering) - JLR Challenge #3, by Reem Al-Saidi

Monday, October 20, 2025 - 10:00
School of Computer Science – JLR Challenge #3 Workshop

 

EvoNorms: Revolutionizing Neural Networks with Unified Normalization-Activation Layers (1st Offering)

Presenter: Reem Al-Saidi

Date: Monday, October 20, 2025

Time: 10:00 am

Location: Workshop Space, 4th Floor - 300 Ouellette Ave., School of Computer Science Advanced Computing Hub

Abstract

This workshop explores the evolution of normalization and activation layers in deep neural networks, with a special focus on the innovative EvoNorm approach. Participants will learn how traditional sequential design patterns (BatchNorm followed by ReLU) can be replaced with unified computation graphs that combine normalization and activation into a single operation. We'll examine how EvoNorms were discovered through evolutionary search and why they outperform conventional approaches, especially in challenging scenarios like small-batch training. Through both theoretical explanations and practical code examples, attendees will gain a comprehensive understanding of these cutting-edge techniques and how to implement them in their own deep learning projects.
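To make the sequential baseline concrete, here is a minimal sketch of the conventional BatchNorm-then-ReLU pattern in PyTorch; the channel counts are illustrative and not taken from the workshop materials:

```python
import torch
import torch.nn as nn

# Conventional sequential design: normalization and activation are two
# separate layers applied one after the other (channel sizes illustrative).
block = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.BatchNorm2d(64),     # normalize using batch statistics
    nn.ReLU(inplace=True),  # then apply a fixed nonlinearity
)

x = torch.randn(8, 3, 32, 32)
y = block(x)  # shape: (8, 64, 32, 32)
```

EvoNorms replace this two-step composition with a single expression in which the normalization statistics and the nonlinearity interact directly.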

Workshop Outline:

1. CNN architecture fundamentals and the role of normalization/activation

2. Overview of normalization techniques (BatchNorm, GroupNorm)

3. Overview of activation functions (ReLU, SiLU/Swish)

4. Implementation of traditional sequential normalization-activation patterns

5. Introduction to unified normalization-activation through EvoNorms

6. EvoNorm-B0 (batch-dependent variant)

7. EvoNorm-S0 (sample-based variant)

8. Hands-on implementation
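As a preview of the sample-based variant in item 7, the following is a hedged sketch of EvoNorm-S0 in PyTorch, following the published formula y = x · sigmoid(v · x) / group_std(x) · γ + β. The group count and epsilon below are conventional defaults chosen for illustration, not values prescribed by the workshop:

```python
import torch
import torch.nn as nn

class EvoNormS0(nn.Module):
    """Sample-based EvoNorm-S0: fuses normalization and activation into one
    expression, y = x * sigmoid(v * x) / group_std(x) * gamma + beta,
    using per-sample group statistics (no dependence on the batch)."""

    def __init__(self, num_channels: int, groups: int = 8, eps: float = 1e-5):
        super().__init__()
        self.groups, self.eps = groups, eps
        shape = (1, num_channels, 1, 1)
        self.gamma = nn.Parameter(torch.ones(shape))
        self.beta = nn.Parameter(torch.zeros(shape))
        self.v = nn.Parameter(torch.ones(shape))

    def group_std(self, x: torch.Tensor) -> torch.Tensor:
        # Standard deviation over each group of channels, per sample.
        n, c, h, w = x.shape
        g = x.reshape(n, self.groups, c // self.groups, h, w)
        std = torch.sqrt(g.var(dim=(2, 3, 4), keepdim=True) + self.eps)
        return std.expand_as(g).reshape(n, c, h, w)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Normalization and activation computed as one fused expression.
        return x * torch.sigmoid(self.v * x) / self.group_std(x) * self.gamma + self.beta

layer = EvoNormS0(num_channels=16)
out = layer(torch.randn(2, 16, 8, 8))  # output has the same shape as the input
```

Because the statistics are computed per sample, this variant behaves identically at any batch size, which is the property that makes it attractive for small-batch training. The batch-dependent EvoNorm-B0 variant (item 6) is structured similarly but mixes batch and instance variance in its denominator.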

Prerequisites:

- Basic understanding of deep learning concepts

- Familiarity with neural network architectures (especially CNNs)

- Working knowledge of Python and PyTorch

- Experience with implementing and training neural networks

Biography

Reem Al-Saidi is a PhD student in Computer Science at the University of Windsor. Her research focuses on privacy-preserving machine learning, with a particular emphasis on large language models (LLMs) for health and genomic data in cloud environments. Her current work explores secure data sharing and publishing through deep learning–based synthetic data generation.

 

Registration Link (only MAC students need to pre-register)