Summary: UCLA researchers in the Department of Electrical and Computer Engineering have developed a deep learning-based system for designing diffractive multispectral imagers that capture multiple spectral channels in a single snapshot without the need for powered equipment.
Background: Advanced imaging technologies have progressed significantly in recent years, yet important limitations remain. Traditional multispectral imaging systems, though sophisticated, often struggle with real-time data analysis because images are captured sequentially across different spectral bands. This sequential process not only lengthens data acquisition but also hampers applications that require instantaneous or real-time results, such as aerospace, AR/VR, and autonomous driving. Moreover, artificial intelligence is typically integrated with imaging only after capture, on the software side, adding a further layer of delay to the processing pipeline. These delays and inefficiencies underscore the need for a more streamlined approach that bridges the gap between capture and analysis, and the industry is actively seeking innovative solutions that transcend these limitations to deliver a faster, smarter, and more integrated imaging experience.
Innovation: UCLA researchers have developed a deep learning-based system to design diffractive multispectral imagers. Snapshot multispectral imaging using a diffractive optical network addresses a growing need in advanced imaging by overcoming the limitations of traditional multispectral imaging systems. Merging diffractive optical networks with multispectral imaging enables simultaneous acquisition and processing of traditionally complex multispectral data. The inventors leverage a novel diffractive network design that achieves snapshot multispectral imaging with 4, 9, and 16 unique spectral bands within the visible spectrum. This integration streamlines the capture-to-analysis pipeline, shaving critical seconds or even minutes from the process. The advancement has the potential to revolutionize a wide array of imaging and sensing applications, including defense, medical imaging, agriculture, and weather forecasting. Furthermore, by embedding artificial intelligence directly within the optical pathway, the inventors fuse hardware and software capabilities, resulting in faster, smarter, and more efficient imaging solutions. This technology presents a compelling answer to the limitations of prior systems, offering potential partners a cutting-edge solution for their advanced imaging needs.
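For technically minded readers, the sketch below illustrates, at a conceptual level, how deep learning can be used to optimize a diffractive structure so that different visible wavelengths are routed to different detector regions in a single snapshot. This is a minimal, illustrative sketch assuming PyTorch and an angular-spectrum propagation model; the grid size, wavelengths, layer count, refractive-index contrast, and quadrant-based routing scheme are all hypothetical choices for illustration and do not reproduce the inventors' actual design or training procedure.

import math
import torch

torch.manual_seed(0)

N, dx = 128, 4e-6                 # simulation grid size and pixel pitch (m); illustrative
z = 2e-3                          # propagation distance between planes (m); illustrative
wavelengths = [450e-9, 520e-9, 590e-9, 650e-9]   # four example bands in the visible spectrum
dn = 0.5                          # assumed refractive-index contrast of the diffractive material
n_layers = 2                      # number of trainable diffractive layers; illustrative

def propagate(field, lam, dist):
    # Angular-spectrum free-space propagation of a complex optical field.
    fx = torch.fft.fftfreq(N, d=dx)
    FX, FY = torch.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 / lam**2 - FX**2 - FY**2
    kz = 2 * math.pi * torch.sqrt(torch.clamp(arg, min=0.0))
    H = torch.exp(1j * kz * dist) * (arg > 0).to(torch.complex64)   # drop evanescent waves
    return torch.fft.ifft2(torch.fft.fft2(field) * H)

# Trainable surface-height maps; each wavelength sees a different phase delay
# through the same physical structure (phase = 2*pi*dn*height/lambda).
heights = [(200e-9 * torch.rand(N, N)).requires_grad_(True) for _ in range(n_layers)]

# Assign each band its own quadrant of the output plane (a 2x2 "spectral superpixel").
h = N // 2
targets = [(slice(0, h), slice(0, h)), (slice(0, h), slice(h, N)),
           (slice(h, N), slice(0, h)), (slice(h, N), slice(h, N))]

optimizer = torch.optim.Adam(heights, lr=5e-8)
for step in range(200):
    optimizer.zero_grad()
    loss = 0.0
    for lam, region in zip(wavelengths, targets):
        field = torch.ones(N, N, dtype=torch.complex64)      # uniform plane-wave illumination
        for height in heights:
            phase = 2 * math.pi * dn * height / lam           # wavelength-dependent phase delay
            field = propagate(field * torch.exp(1j * phase), lam, z)
        intensity = field.abs() ** 2
        # Penalize power that misses this band's designated detector quadrant.
        loss = loss - intensity[region].sum() / intensity.sum()
    loss.backward()
    optimizer.step()

print("average fraction of each band routed to its target quadrant:",
      -loss.item() / len(wavelengths))

Because the optimized layers are passive surfaces, the wavelength routing in the actual technology happens in the optical path itself, which is what allows spectral channels to be captured without powered equipment at the time of imaging.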
Potential Applications: • Environmental monitoring & agricultural sciences • Aerospace & defense • Biomedicine & diagnostics • Polarization-insensitive imaging • Climate monitoring
Advantages: • No power requirement • Reduced equipment complexity • No need for image reconstruction algorithms • Broadband imaging capabilities • Scalability • High contrast
State of Development: The inventors have developed a functional neural network model and a working physical prototype of the technology; both have been demonstrated, and the results have been published (see Related Papers below).
RELATED PAPERS: • Mengu, D., Tabassum, A., Jarrahi, M., & Ozcan, A. (2023). Snapshot multispectral imaging using a diffractive optical network. Light: Science & Applications, 12(1), 86. • Ballard, Z., Brown, C., Madni, A. M., & Ozcan, A. (2021). Machine learning and computation-enabled intelligent sensor design. Nature Machine Intelligence, 3(7), 556-565. • Mengu, D., & Ozcan, A. (2022). All-optical phase recovery: diffractive computing for quantitative phase imaging. Advanced Optical Materials, 10(15), 2200281. • Kulce, O., Mengu, D., Rivenson, Y., & Ozcan, A. (2021). All-optical synthesis of an arbitrary linear transformation using diffractive surfaces. Light: Science & Applications, 10(1), 196.
Reference: UCLA Case No. 2023-112
Lead Inventor: Aydogan Ozcan, UCLA Professor of Electrical and Computer Engineering