Variational self-supervised learning


Date

2025-04-06

Publisher

Cornell Univ

Access Right

info:eu-repo/semantics/openAccess

Abstract

We present Variational Self-Supervised Learning (VSSL), a novel framework that combines variational inference with self-supervised learning to enable efficient, decoder-free representation learning. Unlike traditional VAEs that rely on input reconstruction via a decoder, VSSL symmetrically couples two encoders with Gaussian outputs. A momentum-updated teacher network defines a dynamic, data-dependent prior, while the student encoder produces an approximate posterior from augmented views. The reconstruction term in the ELBO is replaced with a cross-view denoising objective, preserving the analytical tractability of Gaussian KL divergence. We further introduce cosine-based formulations of KL and log-likelihood terms to enhance semantic alignment in high-dimensional latent spaces. Experiments on CIFAR-10, CIFAR-100, and ImageNet-100 show that VSSL achieves competitive or superior performance to leading self-supervised methods, including BYOL and MoCo V3. VSSL offers a scalable, probabilistically grounded approach to learning transferable representations without generative reconstruction, bridging the gap between variational modeling and modern self-supervised techniques.
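The abstract's core mechanics (a momentum-updated teacher defining a Gaussian prior, a student posterior, and a closed-form Gaussian KL in place of reconstruction) can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the scalar weight lists standing in for network parameters, and the momentum value are illustrative assumptions.

```python
import math

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    """Closed-form KL(q || p) between two diagonal Gaussians, summed over
    latent dimensions. q is the student's approximate posterior; p is the
    teacher-defined, data-dependent prior."""
    return sum(
        0.5 * (math.log(vp / vq) + (vq + (mq - mp) ** 2) / vp - 1.0)
        for mq, vq, mp, vp in zip(mu_q, var_q, mu_p, var_p)
    )

def momentum_update(teacher_w, student_w, tau=0.996):
    """EMA update of teacher weights from student weights, so the prior
    tracks the student slowly (tau is an illustrative momentum value)."""
    return [tau * t + (1.0 - tau) * s for t, s in zip(teacher_w, student_w)]

# Identical Gaussians give zero KL; a unit mean shift at unit variance
# contributes 0.5 per dimension.
same = gaussian_kl([0.0, 0.0], [1.0, 1.0], [0.0, 0.0], [1.0, 1.0])
shifted = gaussian_kl([1.0, 0.0], [1.0, 1.0], [0.0, 0.0], [1.0, 1.0])
```

Because both distributions are diagonal Gaussians, the KL term stays analytically tractable with no sampling or decoder pass, which is the property the abstract highlights.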

Keywords

Self-supervised learning, Variational inference, Representation learning, Encoder-only models

Source

Arxiv

WoS Q Value

N/A

Citation

Yavuz, M. C. & Yanıkoğlu, B. (2025). Variational self-supervised learning. Arxiv, 1-6. https://doi.org/10.48550/arXiv.2504.04318