The School of Computer Science is pleased to present…
Date: Tuesday, May 6th, 2025
Time: 1:30 pm
Location: Memorial Hall, Room 109
With the rapid advancement of virtual reality (VR) headsets, users can now perform office tasks such as composing emails and documents using virtual keyboards, expanding VR's capabilities. However, typing on virtual keyboards remains inefficient due to the limitations of mid-air hand tracking or eye tracking. In this paper, we present a 4-stage method for recognizing keystrokes on VR devices, consisting of hand skeleton estimation, keystroke detection, keystroke classification, and prediction refinement. In the detection and classification stages, we introduce a novel deep learning model specifically designed to recognize keystrokes, employing convolution operations on hypergraphs to capture spatial features and temporal convolutions to model typing dynamics. Our model achieves strong performance, with 93.45% accuracy in classifying keystrokes at normal typing speed, while maintaining a lightweight architecture compared to existing models.
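To illustrate the kind of architecture the abstract describes, the following is a minimal sketch (not the presenter's actual model) of a spatial hypergraph convolution over hand-joint features followed by a temporal convolution over frames. The incidence matrix, layer sizes, class names, and overall composition are illustrative assumptions; for example, hyperedges might group the joints of each finger.

```python
# Hypothetical sketch: hypergraph convolution (spatial) + 1D temporal convolution.
# All shapes, names, and the incidence-matrix construction are assumptions.
import torch
import torch.nn as nn


class HypergraphConv(nn.Module):
    """Spatial hypergraph convolution: X' = Dv^-1/2 H De^-1 H^T Dv^-1/2 X W."""

    def __init__(self, in_feats, out_feats, incidence):
        super().__init__()
        self.weight = nn.Linear(in_feats, out_feats, bias=False)
        H = incidence.float()                       # (num_joints, num_hyperedges)
        dv = H.sum(dim=1).clamp(min=1).pow(-0.5)    # vertex degree^-1/2
        de = H.sum(dim=0).clamp(min=1).pow(-1.0)    # hyperedge degree^-1
        # Precompute the normalized propagation matrix (num_joints, num_joints).
        self.register_buffer(
            "prop", torch.diag(dv) @ H @ torch.diag(de) @ H.t() @ torch.diag(dv)
        )

    def forward(self, x):                           # x: (batch, frames, joints, feats)
        return torch.einsum("ij,btjf->btif", self.prop, self.weight(x))


class KeystrokeClassifier(nn.Module):
    """Hypergraph conv for spatial features + temporal conv for typing dynamics."""

    def __init__(self, incidence, num_joints=21, in_feats=3, hidden=64, num_keys=30):
        super().__init__()
        self.spatial = HypergraphConv(in_feats, hidden, incidence)
        self.temporal = nn.Conv1d(hidden * num_joints, hidden, kernel_size=3, padding=1)
        self.head = nn.Linear(hidden, num_keys)

    def forward(self, x):                           # x: (batch, frames, joints, feats)
        b, t, j, _ = x.shape
        h = torch.relu(self.spatial(x))             # (batch, frames, joints, hidden)
        h = h.reshape(b, t, -1).transpose(1, 2)     # (batch, joints*hidden, frames)
        h = torch.relu(self.temporal(h))            # (batch, hidden, frames)
        return self.head(h.mean(dim=-1))            # pool over time, predict key class
```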
Internal Reader: Dr. Imran Ahmad
External Reader: Dr. Esam Abdel-Raheem
Advisor: Dr. Dan Wu
Chair: Dr. Muhammad Asaduzzaman