
Intelligent Signal Processing Laboratory

Research Specialization

The Intelligent Signal Processing Laboratory (ISPLab) pursues original, innovative, and frontier research in the theory, design, and implementation of (a) one-dimensional and multi-dimensional digital filters, (b) discrete Gabor transforms, (c) fuzzy-neural networks, (d) deep learning and machine learning algorithms, and (e) evolutionary and gradient-based optimization algorithms, to build real-time intelligent signal processing systems for multimedia signals and data. Multimedia signals include text, sound, audio, speech, image, video, and radio frequency signals. Multimedia data include sensor, traffic, and stock data, as well as DNA sequences.
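As a small illustration of the first topic area (one-dimensional digital filters), the sketch below designs a simple FIR low-pass filter and applies it to a noisy tone. It is a minimal, self-contained example only; the filter length, cutoff frequency, and sample rate are arbitrary illustrative values and do not represent any specific ISPLab design.

# Illustrative sketch: a 1-D FIR low-pass filter applied to a noisy sine wave.
# All parameter values are example choices, not ISPLab design parameters.
import numpy as np
from scipy.signal import firwin, lfilter

fs = 8000                                       # sample rate in Hz (example value)
t = np.arange(0, 1.0, 1.0 / fs)
clean = np.sin(2 * np.pi * 50 * t)              # 50 Hz tone
noisy = clean + 0.5 * np.random.randn(t.size)   # additive white noise

# Design a 101-tap FIR low-pass filter with a 200 Hz cutoff (default Hamming window).
taps = firwin(numtaps=101, cutoff=200, fs=fs)

# Filter the noisy signal; components well above 200 Hz are attenuated.
filtered = lfilter(taps, 1.0, noisy)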

The ISPLab accepts Ph.D. and M.A.Sc. students under the supervision of Dr. H. K. Kwan. We offer financial support to a highly qualified and motivated graduate applicant with a suitable academic background to complete Ph.D. (or M.A.Sc.) research in our area of specialization. Our Ph.D. and M.A.Sc. students are regular full-time students in the Department of Electrical and Computer Engineering of the Faculty of Engineering, admitted under the University of Windsor's normal graduate admission procedures.

We train and support (with funding) Postdoctoral Fellows and welcome international and national collaborations with researchers who share our research interests. Prior successful journal publication experience in our area of specialization is required.

Our research is supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC). 

Demo 1: Qing Pan, Teng Gao, Jian Zhou, Huabin Wang, Liang Tao, Hon Keung Kwan, "CycleGAN with dual adversarial loss for bone-conducted speech enhancement," arXiv:2111.01430v1, November 2021. https://arxiv.org/pdf/2111.01430.pdf

Website: https://qpan77.github.io/Dadv_Cycle/demo.html

Demo 2: Teng Gao, Qing Pan, Jian Zhou, Huabin Wang, Liang Tao, Hon Keung Kwan, "A novel attention-guided generative adversarial network for whisper-to-normal speech conversion," Cognitive Computation, regular paper, 16 January 2023. https://doi.org/10.1007/s12559-023-10108-9

Website: https://mingze-sheep.github.io/b204_W2N.github.io/

Demo 3: Jian Zhou, Yuting Hu, Hailun Lian, Huabin Wang, Liang Tao, Hon Keung Kwan, "Multimodal voice conversion under adverse environment using a deep convolutional neural network," IEEE Access, volume 7, pages 170878-170887, 26 November 2019. https://doi.org/10.1109/ACCESS.2019.2955982

Website: https://jerry98998.github.io/hyt/

Demo 4: Huabin Wang, Rui Cheng, Jian Zhou, Liang Tao, Hon Keung Kwan, "Multistage model for robust face alignment using deep neural networks," Cognitive Computation, regular paper, volume 13, number 2, pages 1-17, 7 March 2022. https://doi.org/10.1007/s12559-021-09846-5

Website: https://jerry98998.github.io/MSM-face-alignment_files/