Background
Foreground-background segmentation is an important component of computer vision tasks such as surveillance, remote sensing, and environmental monitoring. In long-range imaging of outdoor environments, however, atmospheric turbulence can introduce substantial image distortions. These distortions limit the effectiveness of segmentation methods and obscure crucial details, including the boundary between foreground and background elements. Turbulence also blurs and warps moving objects in dynamic scenes, hindering reliable segmentation and tracking. New approaches to foreground-background segmentation are therefore needed for long-range outdoor computer vision applications.
Invention Description
Researchers at Arizona State University, Clemson University, and George Mason University have developed a two-stage unsupervised foreground object segmentation network designed for dynamic scenes affected by atmospheric turbulence. The network generates optimized optical flow feature maps that are resilient to the effects of turbulence; these maps are then used to produce an initial coarse mask for each object in every frame. The method requires no labeled training data and performs across varied turbulence strengths in long-range video.
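A minimal illustrative sketch, not the researchers' network: the Python snippet below shows the general idea of deriving a coarse per-frame foreground mask from optical flow, using OpenCV's Farneback flow, a median filter, and a simple magnitude threshold as stand-ins for the turbulence-resilient flow features and mask generation described above. The function name and threshold value are illustrative assumptions, not details from the publication.

```python
# Illustrative sketch only -- not the researchers' two-stage network.
# It mimics the general idea above: compute dense optical flow between
# consecutive frames, lightly smooth it to damp turbulence-induced jitter,
# and threshold the flow magnitude into a coarse foreground mask.
import cv2
import numpy as np

def coarse_foreground_masks(frames, flow_threshold=1.5):
    """Yield a rough binary foreground mask for each consecutive frame pair.

    `frames` is a list of BGR images; `flow_threshold` (pixels/frame) is an
    illustrative parameter, not one taken from the publication.
    """
    prev_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense Farneback optical flow (H x W x 2 displacement field).
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray, gray, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        magnitude = np.linalg.norm(flow, axis=2).astype(np.float32)
        # Median filtering crudely suppresses small turbulence-induced motion;
        # the invention instead learns turbulence-resilient flow features.
        smoothed = cv2.medianBlur(magnitude, 5)
        mask = (smoothed > flow_threshold).astype(np.uint8) * 255
        yield mask
        prev_gray = gray
```

In a full pipeline, these initial coarse masks would feed a second, refinement stage, consistent with the two-stage design described above.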
Potential Applications
- Surveillance in long-range outdoor environments
- Remote sensing
- Environmental monitoring
- Object segmentation and tracking in long-range video affected by atmospheric turbulence
Benefits and Advantages
- Unsupervised: requires no labeled training data
- Generates optical flow feature maps that are resilient to atmospheric turbulence
- Works across varied turbulence strengths in long-range video
- Produces an initial coarse mask for each object in every frame of dynamic scenes
Related Publication: Unsupervised Region-Growing Network for Object Segmentation in Atmospheric Turbulence