Meghal Dani, Gaurav Garg, Ramakrishna Perla, and Ramya Hebbalaguppe.
Mid-air fingertip-based user interaction in mixed reality.
In Adjunct Proceedings of the IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR) 2018 (to appear), 2018.
With data growing at an enormous rate, there arises a need for advanced data visualization techniques. Visualizing such data sets in Mixed Reality (MR) provides users with an immersive experience in the context of real-world applications. Most existing works require inordinately priced devices such as the Microsoft HoloLens or Meta Glass, which rely on proprietary hardware for data visualization and hand-gesture interaction. In this paper, we demonstrate a cost-effective solution for data visualization in MR using frugal devices such as the Google Cardboard and VR Box. However, these devices offer only primitive modes of interaction, such as a magnetic trigger or a conductive lever, and have limited user-input capability. To interact with visualizations and provide a rich user experience, we propose an intuitive pointing-fingertip gestural interface operating in the user's Field of View (FoV). The proposed pointing-gesture recognition framework is a cascade of deep learning models: a state-of-the-art Faster R-CNN localizes the hand, followed by a proposed regression CNN that localizes the fingertip. We conduct both objective and subjective evaluations of the method: the objective metrics are fingertip recognition accuracy and computational time, while the subjective evaluation covers user comfort and the effectiveness of the proposed fingertip interaction.
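The two-stage cascade described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `detect_hand` and `regress_fingertip` are hypothetical stand-ins for the Faster R-CNN hand detector and the regression CNN, returning fixed values so the control flow of the cascade (detect, crop, regress, map back to frame coordinates) is visible.

```python
import numpy as np

def detect_hand(frame):
    """Stand-in for the Faster R-CNN hand detector.

    A real detector would return a learned bounding box; here we
    return a fixed central box (x, y, w, h) for illustration.
    """
    h, w = frame.shape[:2]
    return (w // 4, h // 4, w // 2, h // 2)

def regress_fingertip(crop):
    """Stand-in for the regression CNN.

    A real model would predict the fingertip from the hand crop;
    here we return fixed normalized coordinates (u, v) in [0, 1].
    """
    return (0.5, 0.1)

def locate_fingertip(frame):
    """Cascade: detect the hand, crop it, regress the fingertip
    inside the crop, then map the point back to frame coordinates."""
    x, y, w, h = detect_hand(frame)
    crop = frame[y:y + h, x:x + w]
    u, v = regress_fingertip(crop)
    return (x + int(u * w), y + int(v * h))

# A dummy 640x480 RGB frame stands in for a camera capture.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
print(locate_fingertip(frame))  # → (320, 144)
```

Cropping before regression is the key design choice: the second network only ever sees the hand region, which simplifies fingertip localization compared with regressing directly on the full frame.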