Author: YANG Linan
A study published in Optics Communications by researchers from the Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, introduced a novel deep learning-based approach for accurate wavefront sensing in optical systems that suffer from undersampling. This method addresses a critical limitation in modern compact optical devices, such as space telescopes and remote sensing cameras, where design trade-offs often lead to reduced image sampling.
Wavefront sensing is essential for measuring and correcting the optical aberrations that degrade image quality. Traditional methods such as Shack-Hartmann sensors require additional hardware, making them bulky and unsuitable for many space-constrained applications. Image-based techniques, particularly phase diversity, infer aberrations directly from captured images, but they assume the images are sampled at or above the Nyquist rate. Many high-performance optical systems, however, are deliberately undersampled to optimize other parameters such as field of view and data bandwidth; the resulting spectral aliasing renders conventional image-based wavefront sensing ineffective.
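To see concretely why sub-Nyquist sampling defeats frequency-domain analysis, consider a minimal one-dimensional sketch (illustrative only, not taken from the paper): a 60 Hz tone sampled at 100 Hz, below its 120 Hz Nyquist requirement, folds back and masquerades as a 40 Hz tone.

```python
import numpy as np

# A 60 Hz tone sampled at 100 Hz (below the 120 Hz Nyquist requirement)
# aliases to 40 Hz: the same spectral folding that corrupts the spectra
# of undersampled images and breaks conventional wavefront sensing.
fs = 100.0                    # sampling rate, Hz
t = np.arange(0, 1, 1 / fs)  # one second of samples
signal = np.sin(2 * np.pi * 60.0 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
print(f"Apparent frequency: {freqs[spectrum.argmax()]:.1f} Hz")  # 40.0, not 60.0
```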
To overcome this challenge, the research team developed a hybrid neural network that combines a Convolutional Neural Network (CNN) with a Long Short-Term Memory (LSTM) network. They recognized that while aliasing corrupts high-frequency image detail, the low-frequency components in the Fourier domain still retain robust information about the underlying wavefront aberrations. Their strategy was to capture multiple images at different defocus positions and extract low-frequency Fourier coefficients from them to serve as input features for the neural network.
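The paper's exact preprocessing is not reproduced here, but a minimal NumPy sketch of that feature-extraction step, assuming an arbitrary 16 x 16 crop of the shifted spectrum and a simple per-channel normalization, might look like this:

```python
import numpy as np

def low_freq_features(images, keep=16):
    """Extract central (low-frequency) Fourier coefficients from a stack
    of defocused PSF frames of shape (n_defocus, H, W). Aliasing folds
    energy into the high frequencies, so only the central keep x keep
    block of each shifted spectrum is retained."""
    feats = []
    for img in images:
        spec = np.fft.fftshift(np.fft.fft2(img))
        cy, cx = spec.shape[0] // 2, spec.shape[1] // 2
        h = keep // 2
        block = spec[cy - h:cy + h, cx - h:cx + h]
        block = block / (np.abs(block).max() + 1e-12)      # per-channel scale
        feats.append(np.stack([block.real, block.imag]))   # 2 x keep x keep
    return np.stack(feats)                                 # (n_defocus, 2, keep, keep)

# Example: a stack of five defocused 64 x 64 frames
psf_stack = np.random.rand(5, 64, 64)
print(low_freq_features(psf_stack).shape)  # (5, 2, 16, 16)
```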
The CNN component was designed to extract spatial features from the frequency-domain representations of the images. The LSTM network then modeled the sequential relationship between these features across the different focus channels, leveraging the physical continuity of the wavefront as it evolves through focus. This architecture effectively learned the complex, ill-posed mapping from aliased images to wavefront aberration coefficients. The team trained and validated the model using a large dataset of simulated point spread function (PSF) images generated under various aberration conditions and undersampling factors.
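The authors' network code is not published with the article; the following PyTorch sketch shows one plausible shape for such a hybrid, with the layer sizes, five-channel focus stack, and nine-term Zernike output chosen purely for illustration:

```python
import torch
import torch.nn as nn

class CNNLSTMWavefront(nn.Module):
    """Illustrative hybrid: a shared CNN encodes the low-frequency
    Fourier features of each defocus channel, an LSTM models the
    through-focus sequence, and a linear head regresses Zernike
    aberration coefficients. All sizes are assumptions."""

    def __init__(self, n_zernike=9, feat_dim=128):
        super().__init__()
        self.cnn = nn.Sequential(                   # per-channel spatial encoder
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(64 * 4 * 4, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim, n_zernike)

    def forward(self, x):                             # x: (B, n_defocus, 2, k, k)
        b, s = x.shape[:2]
        f = self.cnn(x.flatten(0, 1)).view(b, s, -1)  # encode each focus channel
        out, _ = self.lstm(f)                         # sequence over focus positions
        return self.head(out[:, -1])                  # coefficients from final state

model = CNNLSTMWavefront()
pred = model(torch.randn(8, 5, 2, 16, 16))  # batch of 8 five-frame focus stacks
print(pred.shape)                           # torch.Size([8, 9])
```

Treating the focus positions as a sequence is what lets the recurrent half of the network exploit the physical continuity of the wavefront through focus that the authors highlight.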
Simulation results demonstrated high accuracy, with a root-mean-square error (RMSE) of 0.0082 wavelengths in reconstructing key aberrations, significantly outperforming standalone CNN and LSTM models. The researchers also built an experimental off-axis three-mirror imaging system to validate the approach. By introducing known misalignments and comparing the network's predictions against measurements from a precision interferometer, they confirmed the method's effectiveness in a real-world setting, achieving an RMSE below 0.018 wavelengths.
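The large simulated training set described above can be produced with standard Fourier optics. The sketch below, an illustrative assumption rather than the authors' pipeline, applies a defocus and an astigmatism phase term to a circular pupil, propagates to a PSF with an FFT, and mimics an undersampled detector by pixel binning:

```python
import numpy as np

def simulated_psf(coeffs, n=256, undersample=2):
    """One aliased PSF frame: Zernike-like phase terms on a circular
    pupil, FFT propagation, then pixel binning to mimic an undersampled
    detector. Terms, sizes, and coefficients are illustrative."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    r2, theta = x**2 + y**2, np.arctan2(y, x)
    pupil = (r2 <= 1).astype(float)
    # defocus (2r^2 - 1) plus astigmatism (r^2 cos 2theta), in waves
    phase = coeffs[0] * (2 * r2 - 1) + coeffs[1] * r2 * np.cos(2 * theta)
    field = pupil * np.exp(2j * np.pi * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    k = undersample                                 # detector binning = aliasing
    psf = psf.reshape(n // k, k, n // k, k).sum(axis=(1, 3))
    return psf / psf.sum()

frame = simulated_psf([0.3, 0.1])  # 0.3 waves defocus, 0.1 waves astigmatism
print(frame.shape)                 # (128, 128)
```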
This work provides a practical, software-based solution to a critical problem in advanced optical engineering. By enabling precise wavefront sensing in systems where hardware-based solutions are impractical, the technique facilitates better alignment and potential real-time correction for compact, high-performance optics used in space exploration and Earth observation, ultimately yielding sharper and more accurate images from these instruments.
XUE Donglin
Changchun Institute of Optics, Fine Mechanics and Physics
E-mail: xuedl@ciomp.ac.cn