
Researchers Propose Novel HHDNet Approach for Enhancing Underwater Image Clarity

Author: YANG Linan

Underwater imaging has gained attention in recent years due to its applications in marine exploration, archaeology, and underwater robot operations. However, images captured underwater often suffer from noise, blurring, and color distortion caused by the scattering and absorption of light by water molecules and suspended particles. These issues significantly degrade image quality, hindering effective analysis and operations.

To address this challenge, scientists from the Changchun Institute of Optics, Fine Mechanics and Physics of the Chinese Academy of Sciences have developed a novel underwater image denoising method named HHDNet. In an article published in the journal Sensors, they show how its hybrid dual-branch network architecture significantly improves image clarity and quality.

Traditional model-based denoising methods, such as bilateral filters, wavelet transforms, and non-local means, have been widely used for image enhancement. However, these methods often struggle to remove complex noise patterns effectively while preserving image details. Convolutional neural networks (CNNs) have emerged as powerful tools for image denoising, demonstrating remarkable performance.

Yet CNNs may fail to capture long-range pixel interactions and can lack flexibility in adapting to varying noise types and intensities. Recently, Transformer-based networks have shown promising results in capturing long-range dependencies, but they are computationally intensive. Inspired by both lines of work, the proposed HHDNet combines the strengths of CNNs and Transformers to address underwater image denoising challenges.

The HHDNet algorithm adopts a Gaussian blur-based frequency domain decomposition strategy to separate input images into high-frequency and low-frequency components. This separation facilitates targeted noise removal. The high-frequency branch utilizes a Global Context Extractor (GCE) module that incorporates depthwise separable convolutions and a mixed attention mechanism to capture both local details and global dependencies, enabling effective removal of high-frequency abrupt noise.
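The decomposition step described above can be sketched in a few lines of NumPy. This is a minimal illustration of the general Gaussian-blur idea for a grayscale float image, not the paper's implementation; the function names, the sigma value, and the 3-sigma kernel truncation are all assumptions chosen for clarity.

```python
import numpy as np

def gaussian_kernel(sigma: float) -> np.ndarray:
    """Normalized 1-D Gaussian kernel, truncated at 3 sigma (illustrative choice)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_blur(img: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Separable Gaussian blur: filter each row, then each column."""
    k = gaussian_kernel(sigma)
    rows = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, k, mode="same")

def decompose(img: np.ndarray, sigma: float = 2.0):
    """Split an image into high- and low-frequency components.

    The blurred image is the low-frequency component; subtracting it
    from the input leaves the high-frequency residual, so the two
    components always sum back to the original image.
    """
    low = gaussian_blur(img, sigma)
    high = img - low
    return high, low

rng = np.random.default_rng(0)
img = rng.random((32, 32))
high, low = decompose(img)
print(np.allclose(high + low, img))  # True: decomposition is lossless
```

Because the split is defined by subtraction, the two branches can be denoised independently and their outputs recombined by simple addition.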

In contrast, the low-frequency branch employs efficient residual convolutional units, leveraging the minimal noise information present in this frequency band. During training, the GCE module leverages the inductive bias of convolutions to assist the mixed attention mechanism in rapid convergence, ensuring robust denoising capabilities. This hybrid approach ensures that HHDNet can efficiently handle noise while preserving important image details.
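One reason the high-frequency branch stays lightweight is the use of depthwise separable convolutions in the GCE module. The back-of-envelope parameter count below illustrates the savings over a standard convolution; the 3x3 kernel and 64-channel sizes are example values, not figures from the paper.

```python
def standard_conv_params(k: int, c_in: int, c_out: int) -> int:
    """Weight count of a standard k x k convolution (bias omitted)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k: int, c_in: int, c_out: int) -> int:
    """Weight count of a depthwise k x k conv followed by a pointwise 1 x 1 conv."""
    return k * k * c_in + c_in * c_out

standard = standard_conv_params(3, 64, 64)         # 36864 weights
separable = depthwise_separable_params(3, 64, 64)  # 4672 weights
print(round(standard / separable, 1))  # roughly an 8x reduction
```

The savings grow with the channel count, which is why architectures aiming at computational efficiency often factor convolutions this way.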

Extensive experiments conducted on various underwater image datasets demonstrate that HHDNet outperforms existing methods in terms of both denoising effectiveness and computational efficiency. The dual-branch architecture enables HHDNet to independently process high- and low-frequency components, resulting in a more precise and tailored denoising strategy. Moreover, the GCE module with its mixed attention mechanism exhibits strong capabilities in removing high-frequency noise while maintaining critical image features.

The proposed HHDNet algorithm has significant practical implications for underwater vision systems. Improved image clarity and quality can significantly enhance the performance of underwater robots in tasks such as exploration, salvage, and imaging. Furthermore, denoised underwater images provide more reliable data for scientific observations and marine engineering projects, enabling more accurate and efficient decision-making.

In conclusion, the HHDNet algorithm represents an important step forward in underwater image denoising, offering a hybrid approach that combines the strengths of CNNs and Transformers. Its effective separation and targeted removal of noise in the frequency domain, coupled with the use of advanced attention mechanisms, make it a valuable tool for enhancing underwater image clarity and quality.

Contact

ZHANG Da

Changchun Institute of Optics, Fine Mechanics and Physics

E-mail:



