School of Computer Science - Technical Workshop Series
Evaluation of GANBART for Lecture Summarization (2nd Offering)
Presenter: Thennavan Karuppaiah
Date: Thursday, July 31st, 2025
Time: 12:00 pm
Location: 4th Floor (Workshop space) at 300 Ouellette Avenue (School of Computer Science Advanced Computing Hub)
This report presents our investigation into GANBART, a novel adversarial framework designed for lecture summarization. We began with a publicly available Lecture Summarization dataset and addressed data scarcity by augmenting the corpus through translation and back-translation. Our model applies LoRA fine-tuning to BART to reduce the number of trainable parameters, paired with a GAN architecture in which a BART-based Generator produces candidate summaries and a BERT-based Discriminator refines them via adversarial feedback. We compare our approach against BART-Large-CNN as a baseline, evaluating system outputs with ROUGE. Results show that GANBART yields improved coverage (higher ROUGE) in generated summaries, indicating its efficacy in producing more coherent and informative summaries from a limited-resource lecture dataset.
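For attendees who want a feel for the augmentation step, the sketch below shows one common way to back-translate text with MarianMT checkpoints. The pivot language (French), the Helsinki-NLP model names, and the sample sentence are illustrative assumptions, not necessarily the setup used in this work.

```python
# A minimal back-translation sketch, assuming MarianMT checkpoints from
# Helsinki-NLP; the pivot language and sample text are illustrative only.
from transformers import MarianMTModel, MarianTokenizer

def translate(texts, model_name):
    """Translate a list of strings with a pretrained MarianMT model."""
    tok = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)
    batch = tok(texts, return_tensors="pt", padding=True, truncation=True)
    out = model.generate(**batch)
    return [tok.decode(t, skip_special_tokens=True) for t in out]

original = ["The lecture introduces attention and encoder-decoder models."]
pivot = translate(original, "Helsinki-NLP/opus-mt-en-fr")   # English -> French
augmented = translate(pivot, "Helsinki-NLP/opus-mt-fr-en")   # French -> English
print(augmented)  # a paraphrased variant to add to the training corpus
```

Likewise, the parameter-efficient fine-tuning and ROUGE comparison can be sketched with the Hugging Face PEFT and evaluate libraries. The model name, LoRA hyperparameters, and placeholder texts below are assumptions for illustration, not the talk's exact configuration.

```python
# A minimal sketch, assuming the transformers / peft / evaluate stack;
# hyperparameters and sample texts are illustrative only.
from transformers import BartForConditionalGeneration, BartTokenizer
from peft import LoraConfig, TaskType, get_peft_model
import evaluate

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

# LoRA freezes the base BART weights and trains small low-rank update
# matrices on the attention projections, so only a fraction of the
# parameters are updated during fine-tuning.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16,                                  # rank of the low-rank updates
    lora_alpha=32,                         # scaling factor
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],   # BART attention projections
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()         # typically well under 1% trainable

# Generate a candidate summary for a (placeholder) lecture transcript.
transcript = "Today we cover attention, encoder-decoder models, and ..."
inputs = tokenizer(transcript, return_tensors="pt",
                   truncation=True, max_length=1024)
summary_ids = model.generate(**inputs, num_beams=4, max_length=128)
candidate = tokenizer.decode(summary_ids[0], skip_special_tokens=True)

# ROUGE comparison against a reference summary (requires the `rouge_score`
# package behind `evaluate`).
rouge = evaluate.load("rouge")
print(rouge.compute(predictions=[candidate],
                    references=["A reference summary of the lecture."]))
```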
- Welcome & Goals
- Why Lecture Summarization is Hard
- Dataset Tour & Augmentation
- LoRA Fine-Tuning Primer
- Building GANBART
- Basic NLP pipeline & tokenization
- Transformer encoder–decoder anatomy (queries/keys/values)
- GAN intuition (generator vs. discriminator), illustrated in the sketch below
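The toy sketch below conveys only the generator-vs-discriminator intuition in the summarization setting: a BERT-based classifier plays the discriminator, scoring summaries as reference vs. generated, and the probability it assigns to the "reference" class serves as an adversarial reward for the generator. The model name, labels, and update scheme are assumptions for illustration, not the GANBART training recipe.

```python
# Toy sketch of adversarial feedback for summarization; illustrative only.
import torch
from transformers import BertForSequenceClassification, BertTokenizer

disc_tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
discriminator = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)    # class 1 = reference, class 0 = generated

def discriminator_step(reference_summary: str, generated_summary: str) -> float:
    """One discriminator update: learn to tell reference from generated text."""
    batch = disc_tokenizer([reference_summary, generated_summary],
                           return_tensors="pt", padding=True, truncation=True)
    labels = torch.tensor([1, 0])         # reference = 1, generated = 0
    out = discriminator(**batch, labels=labels)
    out.loss.backward()                   # gradients for the discriminator only
    return out.loss.item()

def generator_reward(generated_summary: str) -> float:
    """Adversarial signal for the generator: how 'reference-like' the
    discriminator judges a generated summary to be."""
    batch = disc_tokenizer(generated_summary, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = discriminator(**batch).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()
```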
Thennavan Karuppaiah is a Master of Applied Computing candidate at the University of Windsor. Their research focuses on low-resource text summarization and parameter-efficient fine-tuning of large language models.
Registration Link (only MAC students need to pre-register)