ISMAR 2018
IEEE · IEEE Computer Society · IEEE VGTC


Platinum: Apple
Silver: Mozilla, Intel, Daqri, PTC, Amazon
Bronze: Facebook, Umajin, Disney Research
SME: EnvisageAR
Academic: TUM, ETHZ

Umberto Fontana, Fabrizio Cutolo, Nadia Cattari, and Vincenzo Ferrari. Closed-loop calibration for optical see-through near eye display with infinity focus. In Adjunct Proceedings of the IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR 2018), to appear. 2018.


In wearable augmented reality systems, optical see-through near-eye displays (OST NEDs) based on waveguides are becoming a standard, as they are generally preferred over solutions based on semi-reflective curved mirrors. This is mostly due to their ability to ensure reduced image distortion and a sufficiently wide eye motion box without the need for bulky optical and electronic components placed in front of the user's face and/or on the user's line of sight. In OST head-mounted displays (HMDs), the user's own view is augmented by optically combining it with the virtual content rendered on a two-dimensional (2D) microdisplay. To achieve a perfect combination of the light field of the real 3D world and the computer-generated 2D graphics projected on the display, an accurate alignment between real and virtual content must be achieved at the level of the NED imaging plane. To this end, we must know the exact position of the user's eyes within the HMD reference system. State-of-the-art methods model the eye-NED system as an off-axis pinhole camera and therefore include the contribution of the eye's position in the modelling of the intrinsic matrix of the eye-NED. In this paper, we describe a method for robustly calibrating OST NEDs that explicitly ignores this assumption. To verify the accuracy of our method, we conducted a set of experiments in a setup comprising a commercial waveguide-based OST NED and a camera in place of the user's eye. We tested a set of different camera (i.e., eye) positions within the eye box of the NED. The obtained results demonstrate that the proposed method yields accurate real-to-virtual alignment regardless of the position of the eye within the eye box of the NED (Figure 1). The achieved viewing accuracy was 1.85 $\pm$ 1.37 pixels.
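To make the state-of-the-art formulation mentioned in the abstract concrete, the sketch below models the eye-NED system as an off-axis pinhole camera, in which a lateral shift of the eye within the eye box displaces the principal point of the intrinsic matrix. This is an illustrative sketch only, not the authors' calibration method: all numeric values (focal lengths, resolution, pixel pitch, eye offset) are hypothetical.

```python
import numpy as np

# Off-axis pinhole model of the eye-NED system (illustrative, hypothetical
# values). In this classical formulation the intrinsic matrix K depends on
# the eye position: a lateral eye shift within the eye box displaces the
# principal point (cx, cy), so K must be re-estimated per eye position.
fx, fy = 1000.0, 1000.0                  # focal lengths in pixels (assumed)
eye_offset = np.array([0.002, -0.001])   # eye shift in the eye box, metres (assumed)
pixel_pitch = 1e-5                       # display pixel pitch, metres (assumed)

# Principal point of a 1280x720 virtual image plane (assumed resolution),
# displaced by the eye offset expressed in pixels.
cx = 640.0 + eye_offset[0] / pixel_pitch
cy = 360.0 + eye_offset[1] / pixel_pitch
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

def project(K, p_eye):
    """Project a 3D point (eye coordinates, metres) to display pixels."""
    uvw = K @ p_eye
    return uvw[:2] / uvw[2]

# A point on the optical axis lands exactly on the (shifted) principal point.
print(project(K, np.array([0.0, 0.0, 1.0])))
```

The dependence of `cx` and `cy` on `eye_offset` is precisely why such off-axis models require knowing the eye's position; the paper's contribution is a calibration that remains accurate without this per-position intrinsic adjustment.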