Optical remote sensing images are corrupted by various noise sources during acquisition and transmission, which blurs details such as edge texture and degrades image quality. Remote sensing images that are not clear and of high quality must be pre-processed with noise reduction. After denoising, accuracy in downstream tasks such as feature recognition and scene semantic segmentation is greatly improved.
In a study published in Remote Sensing, a research group led by Prof. LV Hengyi from the Changchun Institute of Optics, Fine Mechanics and Physics (CIOMP) of the Chinese Academy of Sciences (CAS) proposed a remote sensing image denoising method based on deep and shallow feature fusion and attention mechanism.
Early on, researchers used filters to reduce noise in images. These filters worked well at moderate noise levels, but applying them blurs the image: if the input is too noisy, the denoised result becomes very blurry and most of the key details are lost. The research team addressed this problem with a deep learning architecture; deep learning methods now far surpass traditional denoising methods.
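The blur trade-off of filter-based denoising can be seen with a simple mean filter on a noisy step edge. This is an illustrative sketch, not from the paper; the filter, test image, and noise level are chosen only to show the effect:

```python
import numpy as np

def mean_filter(img, k=3):
    """Replace each pixel with the mean of its k x k neighborhood (edge-padded)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

# A sharp vertical edge corrupted by Gaussian noise.
rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0
noisy = clean + rng.normal(0.0, 0.2, clean.shape)

denoised = mean_filter(noisy, k=5)
# Noise in flat regions is suppressed, but the edge is smeared
# over roughly k pixels -- exactly the detail loss described above.
```

Increasing `k` suppresses more noise but spreads the edge further, which is why heavier noise forces stronger filtering and blurrier results.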
The proposed network, a new remote sensing image denoising network (RSIDNet), is based on deep learning methods. Extensive experiments on synthetic Gaussian noise datasets and real-noise datasets showed that RSIDNet achieves good results, and ablation experiments verified the effectiveness of each module.
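The two ideas named in the paper's title, fusing deep and shallow features and applying an attention mechanism, can be sketched in miniature. The sketch below is an assumption about what such a block might look like (channel concatenation followed by squeeze-and-excitation-style channel attention); it is not the paper's actual RSIDNet architecture, and all shapes and weights are toy values:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Reweight channels of feat (C, H, W) with a squeeze-and-excitation-style gate."""
    s = feat.mean(axis=(1, 2))              # squeeze: global average pool -> (C,)
    h = np.maximum(w1 @ s, 0.0)             # excite: small bottleneck layer + ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ h)))  # sigmoid gate in (0, 1), shape (C,)
    return feat * gate[:, None, None]       # scale each channel by its gate

# Toy feature maps standing in for shallow (edge/texture) and deep features.
rng = np.random.default_rng(0)
shallow = rng.normal(size=(4, 8, 8))
deep = rng.normal(size=(4, 8, 8))

fused = np.concatenate([shallow, deep], axis=0)  # fuse by channel concatenation
C = fused.shape[0]
w1 = rng.normal(size=(C // 2, C)) * 0.1          # toy attention weights
w2 = rng.normal(size=(C, C // 2)) * 0.1
attended = channel_attention(fused, w1, w2)
```

Keeping the shallow branch alongside the deep one preserves high-frequency detail that deep layers tend to discard, and the attention gate lets the network emphasize the more informative channels of the fused tensor.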
RSIDNet reduces the loss of detail that denoised images suffer under traditional denoising methods and retains more high-frequency components, which improves the performance of subsequent image processing.