
Hybrid Framework Enhances Image Quality for Rotating Sparse Aperture Systems

Author: FENG Jiahao

Researchers from the Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, report a significant advancement in the field of optical remote sensing. The team has developed a novel image restoration framework designed specifically for one-dimensional rotating sparse aperture imaging systems. By integrating mathematical transformations with deep learning techniques, this research successfully addresses the inherent blurring and data loss issues in lightweight space telescopes, achieving high-resolution imaging capabilities that were previously difficult to attain with such hardware.

In the domain of space observation, the pursuit of higher resolution typically demands telescopes with larger apertures. However, traditional large-aperture mirrors are heavy, bulky, and exorbitantly expensive to launch. The one-dimensional rotating sparse aperture system offers a compelling solution to this "weight versus resolution" trade-off. By using a rectangular or slit-like aperture that rotates to sweep out a circular area, these systems significantly reduce payload weight. Yet this hardware advantage comes at a cost: the resulting images suffer from modulation transfer function (MTF) degradation and severe blurring, because the sparse aperture captures only a strip of the scene's frequency spectrum in any single snapshot.
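The coverage argument can be sketched numerically: a single slit-like aperture passes only a narrow strip of the frequency plane, while the union of those strips over many rotation angles approaches full circular coverage. The geometry below (strip length, width, and 15-degree angular step) is purely illustrative and not the paper's actual system parameters.

```python
import numpy as np

def slit_support(shape, angle_deg, length=0.9, width=0.15):
    """Binary mask of the frequency strip passed by a slit-like aperture
    rotated by angle_deg (an illustrative model, not the system's real OTF)."""
    h, w = shape
    y, x = np.mgrid[-1:1:h * 1j, -1:1:w * 1j]
    t = np.deg2rad(angle_deg)
    # rotate frequency coordinates into the slit's own frame
    u = x * np.cos(t) + y * np.sin(t)
    v = -x * np.sin(t) + y * np.cos(t)
    return (np.abs(u) <= length / 2) & (np.abs(v) <= width / 2)

shape = (256, 256)
angles = np.arange(0, 180, 15)           # snapshots over a half rotation
coverage = np.zeros(shape, dtype=bool)
for a in angles:
    coverage |= slit_support(shape, a)   # union of per-angle strips

single = slit_support(shape, 0).mean()
print(f"single-angle frequency coverage: {single:.1%}")
print(f"union over {len(angles)} angles: {coverage.mean():.1%}")
```

The single strip covers only a few percent of the frequency plane; the union over a half rotation fills most of a disc, which is exactly the gap the fusion stage must stitch together.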

To overcome these optical limitations without adding physical mass, the research team introduces a sophisticated "coarse-to-fine" restoration strategy. The method begins by addressing the frequency gaps caused by the sparse design. The researchers utilize a multi-scale transform approach to fuse images captured at various rotation angles. This process effectively stitches together the missing spectral information, creating a synthesized image that contains the full range of spatial data. Unlike simple superposition, this transformation ensures that the complementary information from different angles is optimally combined to form a complete, albeit initially blurry, representation of the target.
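As a rough illustration of this spectral stitching (the paper's actual method is a multi-scale transform; a simple per-frequency maximum-magnitude selection rule is substituted here), multi-angle captures can be fused in the Fourier domain:

```python
import numpy as np

def fuse_spectra(captures):
    """At each spatial frequency, keep the coefficient with the largest
    magnitude across captures, i.e. the rotation angle whose aperture
    passed that frequency best. Illustrative stand-in for the paper's
    multi-scale transform fusion."""
    spectra = np.stack([np.fft.fft2(c) for c in captures])
    winner = np.argmax(np.abs(spectra), axis=0)                  # (H, W)
    fused = np.take_along_axis(spectra, winner[None], axis=0)[0]
    return np.real(np.fft.ifft2(fused))

# toy demo: two orthogonal slit passbands see complementary frequencies
n = 64
rng = np.random.default_rng(0)
scene = rng.random((n, n))
f = np.fft.fftfreq(n)
pass_rows = np.abs(f)[:, None] < 0.15    # horizontal-slit passband
pass_cols = np.abs(f)[None, :] < 0.15    # vertical-slit passband
F = np.fft.fft2(scene)
cap1 = np.real(np.fft.ifft2(F * pass_rows))
cap2 = np.real(np.fft.ifft2(F * pass_cols))
fused = fuse_spectra([cap1, cap2])
for name, img in [("capture 1", cap1), ("capture 2", cap2), ("fused", fused)]:
    print(name, "MSE:", np.mean((img - scene) ** 2))
```

Because each capture retains the frequencies its orientation passed, the fused image covers the union of both passbands and sits closer to the true scene than either capture alone, matching the article's point that the result is complete but still blurry.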

Following this data fusion, the framework employs a specialized neural network to perform the final deblurring and detail enhancement. This deep learning model is trained on the complex point spread function (PSF) unique to rotating apertures, allowing it to distinguish actual object detail from system-induced artifacts in the fused image. By reversing the blur and suppressing noise, the network restores fine textures and sharp edges that traditional filtering methods tend to smooth over or distort. This hybrid approach leverages the strengths of both analytical mathematical models and data-driven artificial intelligence.
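The learned deblurring model itself cannot be reproduced here, but the classical baseline it improves upon, Wiener deconvolution with a known point spread function, shows what "reversing the blur" means analytically. The Gaussian PSF, square test scene, and SNR value below are toy assumptions, not the paper's setup.

```python
import numpy as np

def wiener_deblur(blurred, psf, snr=100.0):
    """Classical Wiener deconvolution. `psf` is an image-sized point spread
    function centred in the array; `snr` is an assumed signal-to-noise ratio."""
    H = np.fft.fft2(np.fft.ifftshift(psf))            # transfer function
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)     # regularized inverse
    return np.real(np.fft.ifft2(W * G))

# toy demo: blur a test scene with a Gaussian PSF, then invert the blur
n = 64
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
psf = np.exp(-(x ** 2 + y ** 2) / 0.01)
psf /= psf.sum()
scene = np.zeros((n, n))
scene[24:40, 24:40] = 1.0                             # bright square target
H = np.fft.fft2(np.fft.ifftshift(psf))
blurred = np.real(np.fft.ifft2(H * np.fft.fft2(scene)))
restored = wiener_deblur(blurred, psf)
print("blurred  MSE:", np.mean((blurred - scene) ** 2))
print("restored MSE:", np.mean((restored - scene) ** 2))
```

A learned restoration network targets exactly the failure modes of this filter: frequencies the aperture barely passed, noise amplification, and the ringing that linear inverses introduce around sharp edges.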

Experimental results demonstrate the robustness of this new framework. The restored images exhibit exceptional clarity, with significant improvements in contrast and structural fidelity compared to conventional methods. The system effectively recovers visual information that would otherwise be lost to optical diffraction and sparse sampling. This work holds immense promise for the future of aerospace technology. By validating that high-quality imagery can be computationally reconstructed from lightweight, sparse-aperture hardware, the research paves the way for the next generation of agile, cost-effective remote sensing satellites capable of delivering precise observations of Earth and deep space.

Contact

ZHAO Yang

Changchun Institute of Optics, Fine Mechanics and Physics




