Invention Summary:
Most existing Augmented Reality (AR) and Mixed Reality (MR) systems understand the 3D geometry of their surroundings but lack the ability to detect and classify complex objects in the real world—a capability considered essential for these types of applications. Such capabilities can be enabled with deep Convolutional Neural Networks (CNNs), but executing large neural networks on mobile devices is difficult, limiting the models that can run on-device. Offloading object detection to the edge or cloud is also a challenge due to stringent requirements for high detection accuracy and low end-to-end latency. Even detection latencies of less than 100 ms can significantly reduce detection accuracy because the user's view changes in the meantime—the frame locations where the object was originally detected may no longer match the object's current location.
Researchers at Rutgers University have developed technology for edge-assisted, real-time object detection for mobile augmented reality. The system operates on a mobile device, such as an AR device, and dynamically offloads computationally intensive object detection to an edge cloud server using an adaptive offloading process. It also includes dynamic RoI (region of interest) encoding and motion vector-based object tracking, which run in a tracking-and-rendering pipeline on the AR device. The system significantly improves detection accuracy while adding minimal latency on the AR device, leaving more time and computational resources to render virtual elements for the next frame and enabling higher-quality AR/MR experiences.
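To illustrate the motion vector-based tracking idea described above, the sketch below shifts a previously detected bounding box by the average motion vector of the blocks inside it, approximating the object's new location between detections without re-running the detector. This is a minimal illustration under assumed interfaces, not the patented implementation; the function name, the box and motion-vector representations, and the averaging strategy are all assumptions for the example.

```python
def shift_box(box, motion_vectors):
    """Shift a detected bounding box by the mean motion vector of the
    blocks that fall inside it (a simple stand-in for motion vector-based
    object tracking between detection results).

    box: (x, y, w, h) in pixels.
    motion_vectors: iterable of (bx, by, dx, dy) -- a block's position
    and its displacement relative to the previous frame, as exposed by
    a video encoder/decoder.
    """
    x, y, w, h = box
    inside = [(dx, dy) for bx, by, dx, dy in motion_vectors
              if x <= bx < x + w and y <= by < y + h]
    if not inside:
        return box  # no motion information for this region; keep the box
    mean_dx = sum(dx for dx, _ in inside) / len(inside)
    mean_dy = sum(dy for _, dy in inside) / len(inside)
    return (x + mean_dx, y + mean_dy, w, h)

# Example: a 50x50 box whose interior blocks moved right and slightly up.
box = (100, 100, 50, 50)
mvs = [(110, 110, 4, -2), (130, 120, 6, 0)]
print(shift_box(box, mvs))  # -> (105.0, 99.0, 50, 50)
```

Because this reuses motion vectors the video codec already computes, the per-frame tracking cost on the device stays small, which is what leaves headroom for rendering.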
Advantages:
Market Applications:
Intellectual Property & Development Status:
A patent has been issued, and the technology is currently available for licensing.