Monte Carlo Denoising via Multi-scale Auxiliary Feature Fusion Guided Transformer
SIGGRAPH Asia 2023 Technical Communications, 2023•dl.acm.org
Deep learning-based single-frame Monte Carlo denoising techniques have demonstrated remarkable results in photo-realistic rendering research. However, the current state-of-the-art methods relying on self-attention mechanisms underutilize auxiliary features and struggle to preserve intricate high-frequency details in complex scenes. Employing a generative adversarial architecture, we present a transformer-based denoising network guided by multi-scale auxiliary features. The proposed U-shaped denoising network extracts multi-scale texture and geometric features from auxiliaries, modulating them to guide the improved transformer module's denoising process. The improved transformer module employs cross-channel self-attention to capture non-local relationships with near-linear computational complexity. Additionally, a gating mechanism is introduced in the transformer module's feed-forward network, enhancing information flow. Extensive experiments on noisy images with varied per-pixel sampling rates demonstrate the method's superiority in quantitative metrics and visual perception compared with state-of-the-art methods. Our method excels notably in intricate scenes with complex hair and texture details, which are historically challenging to denoise.
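The abstract's key efficiency claim is that attention is computed across channels rather than across pixels, so the attention map is C×C instead of N×N and the cost grows linearly with pixel count. The sketch below illustrates that idea in NumPy, loosely in the spirit of channel-wise (transposed) self-attention, together with a simple gated feed-forward branch. All function names, the sigmoid gate, and the L2-normalized query/key formulation are illustrative assumptions, not the paper's exact modules:

```python
import numpy as np

def cross_channel_attention(x, temperature=1.0):
    """Channel-wise self-attention sketch (assumption: simplified form).

    x: array of shape (C, N), a feature map flattened over N = H*W pixels.
    The affinity matrix is (C, C), so cost scales linearly in N,
    unlike spatial self-attention whose map is (N, N).
    """
    # L2-normalize each channel so dot products act like cosine similarity
    q = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
    k = q
    attn = (q @ k.T) / temperature                      # (C, C) channel affinities
    attn = np.exp(attn - attn.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)             # softmax over channels
    return attn @ x                                     # (C, N) re-attended features

def gated_ffn(x, w_gate, w_val):
    """Gated feed-forward sketch: one linear branch gates the other.

    A sigmoid gate is used here for simplicity -- an assumption,
    not necessarily the gating used in the paper.
    """
    gate = 1.0 / (1.0 + np.exp(-(w_gate @ x)))          # (C', N) gate in (0, 1)
    return gate * (w_val @ x)                           # element-wise gated output

# Minimal usage on a toy feature map
rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 64))                     # C=8 channels, N=64 pixels
out = cross_channel_attention(feat)
gated = gated_ffn(feat, rng.standard_normal((16, 8)), rng.standard_normal((16, 8)))
```

Doubling the pixel count N doubles the work in `cross_channel_attention`, matching the near-linear complexity the abstract attributes to cross-channel attention.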
