
Complex-Valued Network Restores High-Resolution Images From Sparse LiDAR Data

Author: FENG Jiahao

Researchers from the Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, published a paper titled "Complex Primal–Dual Network for Sparse Aperture Inverse Synthetic Aperture LiDAR Imaging" in IEEE Transactions on Geoscience and Remote Sensing.

In this paper, the researchers report a significant advancement in computational imaging and remote sensing. The team has successfully developed a novel framework, the Complex-valued Primal-Dual Network, to address the persistent challenge of image defocusing in Inverse Synthetic Aperture LiDAR systems. By accurately reconstructing high-fidelity images from incomplete and non-uniform data, this research marks a crucial step toward practical, high-resolution observation of non-cooperative targets.

Inverse Synthetic Aperture LiDAR is a cutting-edge active imaging technology that merges the benefits of synthetic aperture radar with the high precision of laser illumination. It is capable of capturing images with resolution beyond the diffraction limit, making it indispensable for applications such as aircraft monitoring and space debris detection. 

However, in real-world scenarios, the technology often encounters "sparse aperture" conditions. Factors such as atmospheric turbulence, high signal repetition frequencies, and complex system dynamics result in the loss of echo signals. This data incompleteness disrupts the phase coherence required for image formation, causing traditional algorithms to produce blurred or unfocused results that obscure vital target details.

To overcome these obstacles, the research team introduced the Complex-valued Primal-Dual Network. This approach bridges the gap between conventional optimization theory and modern network architectures. Instead of relying on a fixed, hand-tuned iterative solver, the method "unrolls" the optimization process into a learnable structure, so that each iteration becomes a trainable stage of the network.
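The unrolling idea can be illustrated with a minimal sketch (not the authors' code): a classical primal-dual solver for a sparse, complex-valued inverse problem, where each iteration is a stage carrying its own step-size and threshold parameters. In this sketch those per-stage parameters are fixed constants; in the paper's network they would be learned end-to-end from training data. All function names here are hypothetical.

```python
# Hypothetical sketch of "unrolling" a primal-dual solver for
# min_x 0.5*||A x - y||^2 + lam*||x||_1 over complex x.
# Each of K stages gets its own (sigma, tau, theta, lam) parameters --
# in an unrolled network these would be the learnable weights.

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def adjvec(A, p):
    # conjugate-transpose product A^H p
    n = len(A[0])
    return [sum(A[i][j].conjugate() * p[i] for i in range(len(A)))
            for j in range(n)]

def soft_threshold(v, t):
    # complex soft-thresholding: shrink the magnitude, keep the phase
    m = abs(v)
    return v * max(m - t, 0.0) / m if m > 0 else 0j

def unrolled_primal_dual(A, y, stages):
    """Each stage carries its own step sizes and threshold --
    the 'unrolled' analogue of learnable per-layer parameters."""
    n = len(A[0])
    x = [0j] * n            # primal variable (the image)
    x_bar = list(x)         # extrapolated primal
    p = [0j] * len(y)       # dual variable
    for sigma, tau, theta, lam in stages:
        # dual ascent on the data-fidelity term
        Ax = matvec(A, x_bar)
        p = [(pi + sigma * (axi - yi)) / (1.0 + sigma)
             for pi, axi, yi in zip(p, Ax, y)]
        # primal descent plus sparsity proximal step
        grad = adjvec(A, p)
        x_new = [soft_threshold(xi - tau * gi, tau * lam)
                 for xi, gi in zip(x, grad)]
        # over-relaxation / extrapolation
        x_bar = [xn + theta * (xn - xo) for xn, xo in zip(x_new, x)]
        x = x_new
    return x
```

With an identity measurement matrix, the iterates converge to the soft-thresholded data, which is the known closed-form solution of this toy problem; replacing the fixed stage parameters with trained ones is exactly the "unrolled network" step described above.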

A key component of this system is the Complex-valued Fusion Inception Module, which is specifically designed to process the complex-valued nature of laser signals. Unlike standard real-valued networks, this module can effectively manipulate phase information and extract features at multiple scales, ensuring that even subtle structural details of the target are preserved during reconstruction.
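To illustrate how complex-valued layers differ from real-valued ones, here is a generic sketch (not the paper's module, and all names are hypothetical): a complex convolution built from four real convolutions. Because the real and imaginary parts are mixed with the correct signs, the operation acts on the signal's phase as well as its magnitude, and an Inception-style block simply runs several kernel sizes in parallel.

```python
# Sketch: complex-valued 1-D convolution via real arithmetic, the
# standard trick for building complex layers on real-valued frameworks:
#   (Wr + i*Wi) * (xr + i*xi) = (Wr*xr - Wi*xi) + i*(Wr*xi + Wi*xr)

def conv1d(signal, kernel):
    """Real-valued 'valid' 1-D correlation."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def complex_conv1d(x, w):
    """Complex convolution via four real convolutions; mixing real and
    imaginary parts lets the layer act on phase, not just magnitude."""
    xr, xi = [v.real for v in x], [v.imag for v in x]
    wr, wi = [v.real for v in w], [v.imag for v in w]
    rr, ii = conv1d(xr, wr), conv1d(xi, wi)
    ri, ir = conv1d(xi, wr), conv1d(xr, wi)
    return [complex(a - b, c + d) for a, b, c, d in zip(rr, ii, ri, ir)]

def multi_scale_features(x, kernels):
    """Inception-style multi-scale extraction: apply several complex
    kernels of different sizes in parallel and collect the feature maps."""
    return [complex_conv1d(x, w) for w in kernels]
```

As a sanity check, convolving with the single-tap kernel `[1j]` should rotate every sample's phase by 90 degrees while leaving magnitudes unchanged, which is behavior a purely real-valued layer cannot express on the raw signal.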

Experimental validations demonstrate the superior performance of this new method. When tested against standard reconstruction techniques, the network consistently delivers well-focused images with clear backgrounds, even under conditions of low signal-to-noise ratio and severe data loss. It effectively mitigates the noise and artifacts that typically plague sparse aperture imaging. 

Beyond image quality, the method also exhibits remarkable computational efficiency. By learning the optimal parameters for reconstruction, it significantly reduces the processing time required to generate a final image, making real-time applications more feasible.

This work provides a solution for the deployment of Inverse Synthetic Aperture LiDAR in complex environments. By enabling accurate imaging with limited data, the Complex-valued Primal-Dual Network enhances the reliability of remote sensing systems. 

This advancement holds promise for the future of deep-space exploration, ensuring that next-generation optical instruments can deliver precise, high-resolution data regardless of environmental constraints or hardware limitations.

Contact

WANG Bin

Changchun Institute of Optics, Fine Mechanics and Physics

E-mail:



