  • A Dynamically Adaptive Image Restoration Method Based on Weather Degradation Perception

    Dynamically Adaptive Image Restoration based on Perceived Weather-related Degradation

    • Abstract: Under complex weather conditions such as rain, haze, snow, and low illumination, image and video quality degrades significantly, impairing visual perception and reducing the value of visual information. Effectively removing the impact of complex weather on image quality has therefore long been an active research topic. To address this problem, this paper proposes DegRestorNet, an all-in-one dynamically adaptive image restoration network based on weather degradation perception. The method uses a weather degradation perception mechanism to generate a degradation-aware scene descriptor containing multidimensional information, which in turn guides the model to adaptively adjust its restoration strategy via dynamic convolution, dynamically adapting to different types and severities of quality degradation caused by rain, haze, snow, and low illumination, and thereby achieving all-in-one image restoration under complex weather. First, a degradation-aware scene descriptor generator (DASDG) is designed, comprising degradation-type recognition and degradation-severity estimation modules, to produce the degradation-aware scene descriptor. Second, a dynamic convolution-based degradation-adaptive restoration network (DCDRN) is designed: built on a U-Net backbone, it introduces a degradation-aware dynamic convolutional Transformer module whose parameters are dynamically adjusted according to the scene descriptor, adapting to the degradation characteristics of different weather and enabling adaptive restoration of weather-degraded images. Finally, comprehensive experimental evaluations are conducted on the public complex-weather dataset CDD and on DSD, a complex-weather dataset built from the RAISE database using the CDD image-generation strategy. Subjective and objective comparisons with mainstream algorithms such as OKNet and RestorNet show that DegRestorNet achieves better overall performance on restoring images degraded by rain, haze, snow, and low illumination. Ablation studies further verify the effectiveness of the designed scene descriptor generator and degradation-adaptive restoration network, demonstrating the superiority of the proposed method.
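The abstract describes the DASDG as fusing a degradation-type recognition output with a degradation-severity estimate into a single scene descriptor. A minimal sketch of that fusion step is shown below; the descriptor layout (type probabilities concatenated with probability-gated severities) and the function name `build_descriptor` are illustrative assumptions, as the abstract does not specify the exact fusion scheme.

```python
def build_descriptor(type_probs, severities):
    """Fuse per-type degradation probabilities (e.g., rain, haze, snow,
    low illumination) with per-type severity estimates into one flat
    degradation-aware scene descriptor.

    Hypothetical layout: [p_1, ..., p_K, p_1*s_1, ..., p_K*s_K], where
    gating each severity by its type probability suppresses severity
    estimates for degradation types judged absent.
    """
    assert len(type_probs) == len(severities)
    gated = [p * s for p, s in zip(type_probs, severities)]
    return list(type_probs) + gated


# Example: an image judged 80% rain / 20% haze, with rain at
# moderate severity and haze negligible.
descriptor = build_descriptor([0.8, 0.2], [0.5, 0.0])
```

In a real network both inputs would come from learned heads (a classifier and a regressor) and the fusion would itself be learnable; the point here is only the decoupled type/severity parsing that the paper attributes to the DASDG.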

       

      Abstract: Images and videos often suffer severe quality degradation under complex weather conditions such as rain, haze, snow, and low illumination. This degradation significantly reduces the utility and reliability of visual information and impairs human understanding of scenes. More critically, it poses a substantial challenge to intelligent vision systems, such as those used in autonomous driving, surveillance, and robot perception, which rely heavily on high-quality visual input for accurate decision-making. Robust and generalizable image restoration techniques are therefore urgently needed to ensure reliable visual understanding under adverse environmental conditions. Recently, all-in-one image restoration approaches have attracted considerable research attention due to their ability to handle multiple types of degradation within a single unified framework; these methods aim to reduce redundancy and improve generalization under diverse and complex conditions. Existing all-in-one restoration approaches can be broadly divided into two categories. The first employs a unified network architecture to process multiple types of degradation simultaneously. Although architecturally simple, these methods typically use fixed parameters, which limits their adaptability to the dynamically changing degradation patterns of real-world weather. The second adopts prompt-based learning mechanisms for degradation-aware restoration. Although more flexible, such methods often fail to effectively model the complex, nonlinear relationships among degradation types, severity levels, and content-specific features. To overcome these limitations, we propose DegRestorNet, a novel all-in-one dynamically adaptive image restoration model designed for complex and mixed-weather scenarios.
DegRestorNet introduces a weather degradation-aware mechanism that generates multidimensional scene descriptors capturing the type and severity of the degradation observed in a given image. These descriptors guide the restoration strategy by dynamically adjusting convolutional operations, enabling the model to adapt flexibly to different visual degradations such as rain streaks, haze, snow, and low illumination. DegRestorNet consists of two major modules: a degradation-aware scene descriptor generator (DASDG) and a dynamic convolution-based degradation-adaptive restoration network (DCDRN). The DASDG contains two sub-modules: the former identifies the type of degradation (e.g., rain, haze, snow, or low illumination), whereas the latter quantifies its severity. Their outputs are fused into a unified degradation-aware scene descriptor that establishes a hierarchical representation of the degradation characteristics through a decoupled, layered parsing mechanism. This descriptor offers fine-grained prior guidance that improves the quality and accuracy of restoration. The DCDRN adopts an encoder-decoder architecture. In the encoder, a cross-attention mechanism integrates scene descriptor semantics with visual features at the feature extraction stage. In the decoder, a degradation-aware dynamic convolutional Transformer module adaptively generates convolution kernels and attention weights conditioned on the scene descriptor, allowing the network to adapt dynamically to various degradation scenarios throughout the restoration process.
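The core mechanism described above, convolution kernels generated conditioned on a scene descriptor, follows the general pattern of dynamic convolution: a small routing function maps the descriptor to mixing weights over a bank of candidate kernels, and the blended kernel is then applied to the features. The sketch below illustrates that pattern in plain Python with a 1-D convolution; the function names, the linear routing matrix `proj`, and the use of softmax mixing are illustrative assumptions, not the paper's exact design.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def mix_kernels(descriptor, candidate_kernels, proj):
    """Blend K candidate kernels into one input-conditioned kernel.

    descriptor:        degradation-aware scene descriptor, length D
    candidate_kernels: K kernels, each a list of taps
    proj:              K x D routing matrix mapping descriptor -> K logits
    """
    logits = [sum(w * d for w, d in zip(row, descriptor)) for row in proj]
    weights = softmax(logits)
    taps = len(candidate_kernels[0])
    return [sum(weights[k] * candidate_kernels[k][i]
                for k in range(len(candidate_kernels)))
            for i in range(taps)]

def conv1d(signal, kernel):
    """'Valid' 1-D correlation used to apply the mixed kernel."""
    n, k = len(signal), len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(n - k + 1)]
```

A descriptor dominated by one degradation type drives the routing logits toward the kernel specialized for that type, which is how a single network can behave differently on rain, haze, snow, or low-light input without switching architectures.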
Finally, comprehensive experimental evaluations were conducted on the public CDD dataset of weather images and a synthesized dataset of complex weather conditions named DSD, which was constructed based on the public RAISE dataset using the CDD image generation strategy. Compared with state-of-the-art image restoration methods, DegRestorNet achieved superior performance in restoring images affected by rain, haze, snow, and low-light conditions with significantly fewer parameters. The results of ablation studies further verify the effectiveness of the proposed degradation-aware scene descriptor and dynamic convolutional architecture. Overall, our results demonstrate the superiority and practical applicability of the proposed method in real-world scenarios.

       
