School of Computer Science Colloquium: "Towards Virtual Reality-based Therapy and Training using Machine Intelligence" by Dr. Naimul Khan

Friday, October 6, 2023 - 11:00

The School of Computer Science Presents...

 

Colloquium Presentation by Dr. Naimul Khan

Towards Virtual Reality-based Therapy and Training using Machine Intelligence 

 

Date: Friday, October 6, 2023

Time: 11:00am – 12:00pm

Location: Erie Hall Room 3123

 

Abstract:

Ever since the pandemic, there has been a sharp decline in mental health across the spectrum. The need for quality mental health therapy far outpaces the resources available in our healthcare system. The same applies to first responders, who have become the de facto mental health workers at the onset of a mental health crisis. There is a need for an intelligent system that is adaptive, immersive, and scalable. In this talk, such an AI-assisted Virtual Reality (VR) framework will be presented. The core of the system is a biofeedback-based interactive "gaming" platform. Novel multimodal machine learning algorithms accurately assess the engagement/stress level of a participant in real time and adjust the VR experience automatically. Such an interactive system can ease the burden on mental health practitioners, whose role can transition into that of a human-in-the-loop supervisor who adjusts the AI's decisions as necessary. Alongside the technical details of the multimodal algorithms, three case studies utilizing this framework will be presented: a VR therapy game for children with autism, a VR de-escalation training system for first responders, and a VR therapeutic video experience for refugees in Kampala, Uganda. The case studies demonstrate the immense potential of the proposed framework for rapidly scaling up available therapy and training resources.
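To make the closed-loop idea concrete, the minimal sketch below illustrates how a biofeedback loop might adjust a VR experience from a real-time stress estimate. This is an illustrative sketch only, not the speaker's implementation; the session object, the estimate_stress model, and the intensity parameter are hypothetical placeholders.

    import time

    def run_biofeedback_loop(session, estimate_stress, target=0.5, gain=0.1, period_s=1.0):
        # Hypothetical sketch: periodically estimate stress from physiological
        # signals (e.g., ECG features) and nudge the VR intensity toward a target.
        while session.is_active():
            stress = estimate_stress(session.latest_signals())  # assumed to return a value in [0, 1]
            # Ease off when the participant is over-stressed; ramp up when under-engaged.
            session.intensity = min(1.0, max(0.0, session.intensity - gain * (stress - target)))
            time.sleep(period_s)

In a design like this, the practitioner remains a human-in-the-loop supervisor who can override the automatic adjustment at any point.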

 

Keywords: virtual reality, multimodal machine learning, biofeedback, serious games, physiological sensing, ECG.
 
Biography:

Dr. Naimul Khan is an associate professor of electrical, computer, and biomedical engineering at Toronto Metropolitan University (TMU), where he directs the Multimedia Research Laboratory. He is also cross-appointed at The Creative School through the Master of Digital Media program. He obtained his PhD in Electrical and Computer Engineering, M.Sc., and B.Sc. in Computer Science from Toronto Metropolitan University, the University of Windsor, and Bangladesh University of Engineering & Technology, respectively. His research focuses on user-centric intelligent systems combining AI, computer vision, and Augmented/Virtual Reality (AR/VR). He has an extensive record of publications in premier multimedia, AI, and health informatics venues, and a rich history of industry collaboration with media and healthcare companies in Toronto and beyond, backed by over $3.5M in government funding from NSERC, OCE, MITACS, and SSHRC. Notable projects include physiological signal analysis for arrhythmia detection (with Mount Sinai Hospital and Dapasoft Inc.), non-pharmacological anxiety reduction with biofeedback-based AR/VR games (with Shaftesbury Inc.), computer-aided diagnosis of neonatal anomalies (with Mount Sinai Hospital and Dapasoft Inc.), outdoor AR storytelling at the Fort York National Historic Site (with AWE Company Ltd.), an AR exhibit at the Canada Science and Technology Museum (with SimentIT Inc.), and a VR intervention for refugee mental health literacy in Uganda. His research has been featured in the media multiple times due to the broad practical impact of these projects (the Toronto Star, the Toronto Sun, The Globe and Mail, TMU News). He is a recipient of the Best Paper Award at the IEEE International Symposium on Multimedia 2017, the Best Paper Runner-up at the IEEE CVPR Workshop 2021, the TMU Dean's Research Award 2021, the OCE TalentEdge Postdoctoral Fellowship, and several other awards. He is a Senior Member of the IEEE and a member of the IEEE Signal Processing Society and the IEEE Engineering in Medicine and Biology Society.