CAVC: Cosine Attention Video Colorization
Proceedings, 2024 • scitepress.org
Abstract
Video colorization is a challenging task: deep learning models must employ diverse abstractions to grasp the task comprehensively and ultimately yield high-quality results. Currently, in example-based colorization, combining attention mechanisms with convolutional layers has proven the most effective way to produce good results. Following this line, in this paper we propose Cosine Attention Video Colorization (CAVC), an approach that uses a single attention head with shared weights to produce a refinement of the monochromatic frame, together with the cosine similarity between this sample and the other channels present in the image. This entire process acts as a pre-processing stage for our autoencoder, which performs a feature fusion with the latent space extracted from the reference frame and with its histogram. The architecture was trained on the DAVIS, UVO, and LDV datasets and achieved results superior to state-of-the-art models in terms of the FID metric on all three datasets.
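The core mechanism named in the title, cosine attention, can be illustrated with a minimal sketch. The abstract does not give the exact formulation, so the following is an assumption-laden illustration of the general idea: a single attention head whose similarity scores are cosine similarities (L2-normalized dot products) rather than raw scaled dot products. The function name `cosine_attention` and the temperature `scale` are hypothetical, not taken from the paper.

```python
import numpy as np

def cosine_attention(query, key, value, scale=10.0):
    """Single-head attention whose scores are cosine similarities.

    query, key: (n, d) feature arrays; value: (n, d) array.
    `scale` is a hypothetical temperature sharpening the softmax;
    the paper does not specify one.
    """
    # L2-normalize rows so each dot product is a cosine similarity in [-1, 1]
    q = query / np.linalg.norm(query, axis=-1, keepdims=True)
    k = key / np.linalg.norm(key, axis=-1, keepdims=True)
    scores = scale * (q @ k.T)                       # (n, n) similarity matrix
    # Row-wise softmax (subtract the max for numerical stability)
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = e / e.sum(axis=-1, keepdims=True)
    return weights @ value                           # (n, d) attended features

# Toy usage: attend grayscale-frame features over reference-frame features
rng = np.random.default_rng(0)
mono = rng.standard_normal((16, 8))   # features of the monochromatic frame
ref = rng.standard_normal((16, 8))    # features of the reference frame
out = cosine_attention(mono, ref, ref)
print(out.shape)  # (16, 8)
```

Because the scores are bounded in [-scale, scale], cosine attention is less sensitive to feature magnitude than dot-product attention, which is one common motivation for using it with a single shared-weight head.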