This technology introduces a new kind of camera sensor in which each pixel functions as a tiny interferometer rather than a standard light detector. This allows the sensor to capture not only the brightness of a scene but also information about the light's spatial coherence. By leveraging arrays of integrated interferometers, the system captures a richer dataset that enables more precise image reconstruction and feature detection, even under conditions where traditional resolution limits would otherwise apply. This approach enhances how scenes are captured and interpreted, particularly when fine detail or object separation is critical.

Background:
Traditional digital cameras use focal plane arrays composed of photodetectors that measure irradiance, or light intensity, at each pixel. While these systems are efficient, they discard valuable coherence information about the light field, limiting their ability to distinguish between closely spaced features. Attempts to overcome these limitations using post-processing or advanced optics still rely on irradiance data, which restricts their potential. This technology departs from that model by measuring the mutual intensity across the focal plane, capturing richer data that improves the estimation accuracy of image features. Unlike current systems, it extracts more degrees of freedom per sample, boosting information capacity and enabling sub-Rayleigh resolution under certain conditions.
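To make the distinction concrete, the following Python sketch (an illustration under assumed parameters, not an implementation of the sensor itself; the focal_plane_field helper, the 1-D sinc optics model, and all numerical values are hypothetical) simulates two incoherent point sources separated by roughly a third of the Rayleigh limit. Their irradiance pattern differs only slightly from that of a single source of equal power, while the mutual intensity J(x1, x2) retains the extra degrees of freedom needed to tell the two cases apart.

import numpy as np

def focal_plane_field(source_offset, x, wavelength=0.5e-6, f=0.05, aperture=0.01):
    """Coherent 1-D amplitude point-spread function (sinc) for a shifted point source."""
    u = aperture * (x - source_offset) / (wavelength * f)
    return np.sinc(u)  # np.sinc(u) = sin(pi*u) / (pi*u)

# Detector coordinates (metres) and a sub-Rayleigh source separation
x = np.linspace(-1e-5, 1e-5, 256)
rayleigh = 0.5e-6 * 0.05 / 0.01        # ~ wavelength * f / aperture  (2.5 um)
sep = 0.3 * rayleigh                   # well below the classical limit

e1 = focal_plane_field(-sep / 2, x)
e2 = focal_plane_field(+sep / 2, x)
e0 = focal_plane_field(0.0, x) * np.sqrt(2)   # single source with the same total power

# Irradiance: what a conventional photodetector pixel records (incoherent sum)
I_two = np.abs(e1) ** 2 + np.abs(e2) ** 2
I_one = np.abs(e0) ** 2

# Mutual intensity: J(x1, x2) = <E(x1) E*(x2)>, summed over independent sources
J_two = np.outer(e1, e1.conj()) + np.outer(e2, e2.conj())
J_one = np.outer(e0, e0.conj())

# The two irradiance patterns differ only slightly at this separation...
print("relative irradiance difference:",
      np.linalg.norm(I_two - I_one) / np.linalg.norm(I_one))

# ...while the mutual-intensity matrix cleanly separates the cases: the number
# of significant eigenvalues (coherence modes) counts the independent sources.
for name, J in (("one source ", J_one), ("two sources", J_two)):
    ev = np.linalg.eigvalsh(J)[::-1]
    print(name, "-> significant coherence modes:", int(np.sum(ev > 1e-3 * ev[0])))

In this toy model the irradiance alone cannot reliably distinguish one source from two below the Rayleigh separation, whereas the off-diagonal mutual-intensity samples an interferometric pixel array provides carry exactly the information that resolves them.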
Applications:

Advantages: