Contrastive Conditional Latent Diffusion for Audio-visual Segmentation. (arXiv:2307.16579v1 [cs.CV])
We propose a latent diffusion model with contrastive learning for
audio-visual segmentation (AVS) that fully exploits the contribution of
audio. We interpret AVS as a conditional generation task, where the audio
serves as the conditional variable for segmenting the sound producer(s).
Under this interpretation, it becomes essential to model the correlation
between the audio and the final segmentation map so that the audio genuinely
contributes. We introduce a latent diffusion model into our framework to
achieve semantically correlated representation learning. Specifically, the
diffusion model learns the conditional generation process of the ground-truth
segmentation map, leading to ground-truth-aware inference when performing the
denoising process at test time. For a conditional diffusion model, we argue
it is essential to ensure that the conditional variable contributes to the
model output. We therefore introduce contrastive learning into our framework
to learn audio-visual correspondence, which we show to be consistent with
maximizing the mutual information between the model prediction and the audio
data. In this way, our contrastive latent diffusion model explicitly
maximizes the contribution of audio to AVS. Experimental results on the
benchmark dataset verify the effectiveness of our solution. Code and results
are available on our project page: https://github.com/OpenNLPLab/DiffusionAVS.
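
To make the two ingredients concrete, below is a minimal, hypothetical PyTorch sketch of (i) an InfoNCE-style contrastive loss between audio embeddings and segmentation-feature embeddings, whose minimization maximizes a lower bound on their mutual information, and (ii) an epsilon-prediction denoising loss for a latent diffusion model conditioned on audio. The module name `denoiser`, the tensor shapes, and the noise schedule are illustrative assumptions, not the authors' released implementation (see the project page for that).

```python
# Hypothetical sketch: contrastive audio-mask alignment + conditional
# latent-diffusion denoising loss. Shapes, schedule, and module interfaces
# are assumptions for illustration only.
import torch
import torch.nn.functional as F

# Linear noise schedule (illustrative) and cumulative alphas.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)  # (T,)

def info_nce(audio_emb, mask_emb, temperature=0.07):
    """InfoNCE loss: matched audio/mask pairs are positives, other items in
    the batch are negatives. Minimizing it maximizes a lower bound on the
    mutual information between audio and segmentation features."""
    a = F.normalize(audio_emb, dim=-1)                 # (B, D)
    m = F.normalize(mask_emb, dim=-1)                  # (B, D)
    logits = a @ m.t() / temperature                   # (B, B) similarities
    targets = torch.arange(a.size(0), device=a.device)
    # Symmetric cross-entropy over both matching directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def denoising_loss(denoiser, z_mask, audio_cond):
    """Epsilon-prediction objective: the denoiser receives a noised latent of
    the ground-truth mask plus the audio condition and predicts the noise."""
    b = z_mask.size(0)
    t = torch.randint(0, T, (b,), device=z_mask.device)
    noise = torch.randn_like(z_mask)
    ab = alpha_bar.to(z_mask.device)[t].view(b, *([1] * (z_mask.dim() - 1)))
    z_t = ab.sqrt() * z_mask + (1.0 - ab).sqrt() * noise
    pred_noise = denoiser(z_t, t, audio_cond)
    return F.mse_loss(pred_noise, noise)
```

In a setup of this kind, the overall training objective would combine the two terms, e.g. `loss = denoising_loss(...) + lam * info_nce(...)`, where the weight `lam` is a tuning choice.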