The School of Computer Science at the University of Windsor is pleased to present …
 

Reconstruction Attack in Large Language Model Embeddings: A Comparative Analysis of Full and Fine-tuned Embedding Vulnerabilities on Genomic Data

PhD Seminar by: Reem Al-Saidi

 

Date: Friday, November 7th, 2025

Time: 10:00 am

Location: Erie Hall, Room 3123

 

Abstract:

This study investigates embedding reconstruction attacks in large language models (LLMs) applied to genomic sequences, with a specific focus on how fine-tuning affects vulnerability to these attacks. Building upon Pan et al.'s seminal work demonstrating that embeddings from pretrained language models can leak sensitive information, we conduct a comprehensive analysis using the HS3D genomic dataset to determine whether task-specific optimization strengthens or weakens privacy protections. Our research extends Pan et al.'s work in three significant dimensions. First, we apply their reconstruction attack pipeline to both pretrained and fine-tuned model embeddings, addressing a critical gap in their methodology, which did not specify the embedding type under attack. Second, we implement specialized tokenization mechanisms tailored to DNA sequences, enhancing the models' ability to process genomic data, since these models are pretrained on natural language rather than DNA. Third, we perform a detailed comparative analysis examining position-specific, nucleotide-type, and privacy changes between pretrained and fine-tuned embeddings. We assess embedding vulnerabilities across different types and dimensions, providing deeper insight into how task adaptation shifts privacy risks throughout genomic sequences. Our findings show a clear distinction in reconstruction vulnerability between pretrained and fine-tuned embeddings. Notably, fine-tuning strengthens resistance to reconstruction attacks across multiple architectures, namely XLNet (+19.8%), GPT-2 (+9.8%), and BERT (+7.8%), pointing to task-specific optimization as a potential privacy-enhancement mechanism. These results underscore the need for advanced protective mechanisms for language models processing sensitive genomic data, while identifying fine-tuning as a privacy-enhancing technique worth further exploration.
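To make the attack surface concrete, the sketch below illustrates the general shape of an embedding-inversion probe in this setting: DNA is split into overlapping k-mers, each k-mer is embedded, and a classifier is trained to map embeddings back to tokens. This is a minimal illustration under stated assumptions, not the speaker's actual pipeline: the k-mer size (k = 3), the logistic-regression probe, and the random-projection stand-in for real model embeddings (which in the study would come from BERT, GPT-2, or XLNet) are all hypothetical choices.

import numpy as np
from itertools import product
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

K = 3  # k-mer size: an assumption for illustration, not the talk's setting
ALPHABET = "ACGT"

def kmer_tokenize(seq: str, k: int = K) -> list[str]:
    """Split a DNA sequence into overlapping k-mers, e.g. 'ACGTA' -> ['ACG', 'CGT', 'GTA']."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

# Stand-in for LLM embeddings: the real attack would query a pretrained or
# fine-tuned encoder; here each k-mer gets a fixed random vector plus noise,
# purely so the inversion probe has something to invert.
rng = np.random.default_rng(0)
vocab = ["".join(p) for p in product(ALPHABET, repeat=K)]
proj = {kmer: rng.normal(size=64) for kmer in vocab}

def embed(kmer: str) -> np.ndarray:
    return proj[kmer] + rng.normal(scale=0.1, size=64)

# Attack dataset: (embedding, k-mer label) pairs, which an adversary can
# collect by embedding sequences whose content they already know.
seqs = ["".join(rng.choice(list(ALPHABET), size=60)) for _ in range(100)]
labels = np.array([km for s in seqs for km in kmer_tokenize(s)])
X = np.stack([embed(km) for km in labels])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)

# The inversion "attack" is simply a classifier from embedding back to token.
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"token reconstruction accuracy: {probe.score(X_te, y_te):.2%}")

The test accuracy of such a probe, measured separately on pretrained and fine-tuned embeddings, is the kind of reconstruction rate behind the percentage differences reported in the abstract.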

 
Doctoral Committee:

Internal Reader: Dr. Pooya Moradian Zadeh

Internal Reader: Dr. Saeed Samet

External Reader: Dr. Mitra Mirhassani

Advisor: Dr. Ziad Kobti


 

Registration Link (Only MAC students need to pre-register)