THE CHALLENGE
All-day wearable augmented reality (AR) devices, particularly smart glasses, will become increasingly available in the near future, revolutionizing the way people live and work. Barriers to broader adoption are not only hardware-related; they also stem from limited understanding of how virtual information should be presented and interacted with.
OUR SOLUTION
The Bowman lab at Virginia Tech has developed glanceable methods for viewing and interacting with virtual information on wearable augmented reality displays. This glanceable AR technology supports information acquisition via quick glances at the periphery of the visual field, enabling unobtrusive display of information together with rapid intake. These methods do not interfere with the user's primary visual task, whether the user is stationary, moving, or engaged in activities that require both hands.
Using the Glanceable AR interface in three everyday scenarios: (a) working in front of a desktop computer with glanceable widgets residing at the edge of the physical monitor; (b) cooking, with recipe and timer widgets following the user for hands-free access to information; (c) walking outside, with music, fitness, and map widgets following the user.
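The "following" behavior in scenarios (b) and (c) amounts to leashing widgets to the user so they remain within a comfortable offset without jittering on every head movement. The sketch below shows one plausible way to implement such a policy; it is not the lab's implementation, and the class name, offsets, and thresholds are illustrative assumptions.

import math
from dataclasses import dataclass

@dataclass
class UserPose:
    x: float      # user position in world coordinates (metres)
    z: float
    yaw: float    # heading (radians)

@dataclass
class FollowingWidget:
    """A widget leashed to the user: it holds a fixed offset in the user's
    local frame and re-anchors only after the user has moved far enough."""
    offset_angle: float              # radians left/right of the heading
    offset_dist: float               # metres in front of the user
    reposition_dist: float = 0.75    # drift (metres) that triggers re-anchoring (assumed)
    world_x: float = 0.0
    world_z: float = 0.0

    def _target(self, user: UserPose) -> tuple:
        a = user.yaw + self.offset_angle
        return (user.x + self.offset_dist * math.sin(a),
                user.z + self.offset_dist * math.cos(a))

    def anchor(self, user: UserPose) -> None:
        self.world_x, self.world_z = self._target(user)

    def update(self, user: UserPose) -> None:
        # Stay put during small head movements; once the widget has drifted
        # too far from its intended offset, glide back rather than snapping.
        tx, tz = self._target(user)
        drift = math.hypot(tx - self.world_x, tz - self.world_z)
        if drift > self.reposition_dist:
            self.world_x += 0.1 * (tx - self.world_x)
            self.world_z += 0.1 * (tz - self.world_z)

Keeping widgets stable between re-anchorings is what lets the user glance at, say, a timer without the content swimming as they move around the kitchen.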
An illustration of head glance plus: (a) widgets are represented as small targets to avoid occluding the user’s view when the user is looking at the real-world environment behind the target; (b) when the user converges their gaze at the depth of the target, the widget expands and appears.
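The expansion trigger in (b) can be read as a comparison between the depth at which the user's eyes converge and the depth of the small target: glances that pass through the target toward the real world behind it leave the widget collapsed. Below is a minimal sketch of that logic, assuming a typical interpupillary distance and a hypothetical depth tolerance; the function names and values are illustrative, not the lab's system.

import math

IPD = 0.063  # assumed typical interpupillary distance (metres)

def convergence_depth(vergence_angle: float) -> float:
    """Estimate the depth at which the eyes converge from the angle
    between the left and right gaze rays (radians)."""
    if vergence_angle <= 0.0:
        return float("inf")   # rays (near-)parallel: focused far away
    return (IPD / 2.0) / math.tan(vergence_angle / 2.0)

def widget_state(widget_depth: float,
                 gaze_on_target: bool,
                 vergence_angle: float,
                 tolerance: float = 0.15) -> str:
    """Expand only when the user looks at the small target AND focuses at
    roughly its depth; looking 'through' it keeps the widget collapsed."""
    depth = convergence_depth(vergence_angle)
    if gaze_on_target and abs(depth - widget_depth) <= tolerance * widget_depth:
        return "expanded"
    return "collapsed"

# Example: a target 1.5 m away; converging at ~1.5 m expands the widget,
# while focusing on the scene about 6 m behind it does not.
near = 2.0 * math.atan((IPD / 2.0) / 1.5)
far = 2.0 * math.atan((IPD / 2.0) / 6.0)
assert widget_state(1.5, True, near) == "expanded"
assert widget_state(1.5, True, far) == "collapsed"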