CSE5519 Advances in Computer Vision (Topic F: 2025: Representation Learning)
Can Generative Models Improve Self-Supervised Representation Learning?
Novelty in SSL with Generative Models
- Use generative models to synthesize data for training self-supervised representation learning models.
- Use generative augmentation: produce new variants of each original image with a generative model (e.g., seeded with Gaussian noise or other perturbations of the original); see the sketch after this list.
- Combining standard augmentations such as flipping, cropping, and color jittering with generative augmentation further improves the performance of self-supervised representation learning models.
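A minimal sketch of how the two kinds of augmentation might be combined in an SSL view-generation step. The generative model is not specified here, so `generative_augment` is a hypothetical stand-in (stubbed with Gaussian noise) for an image-conditioned generator; the standard augmentations use the usual torchvision transforms.

```python
# Sketch (assumptions): `generative_augment` stands in for an image-conditioned
# generative model; here it only adds Gaussian noise so the code runs standalone.
import torch
import torchvision.transforms as T

# Standard SSL augmentations (crop, flip, color jitter), as in the notes above.
# Real pipelines usually apply these per sample inside the dataloader.
standard_augment = T.Compose([
    T.RandomResizedCrop(224, antialias=True),
    T.RandomHorizontalFlip(),
    T.ColorJitter(0.4, 0.4, 0.4, 0.1),
])

def generative_augment(x: torch.Tensor, noise_std: float = 0.05) -> torch.Tensor:
    """Stand-in for a generative model that synthesizes a new sample
    conditioned on the original image. Replace the noise perturbation
    with a call to an actual generator."""
    return (x + noise_std * torch.randn_like(x)).clamp(0.0, 1.0)

def make_views(x: torch.Tensor):
    """Two views per image: one via standard augmentation only,
    one via generative augmentation followed by standard augmentation."""
    view_a = standard_augment(x)
    view_b = standard_augment(generative_augment(x))
    return view_a, view_b

if __name__ == "__main__":
    images = torch.rand(8, 3, 256, 256)   # dummy batch of images in [0, 1]
    v1, v2 = make_views(images)
    print(v1.shape, v2.shape)              # each: torch.Size([8, 3, 224, 224])
```

The two views would then be fed to whatever contrastive or self-distillation objective the SSL method uses (e.g., an InfoNCE-style loss over the paired views).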
Tip
This paper shows that synthetic data from generative models can improve the performance of self-supervised representation learning models. The key appears to be generative augmentation: generating new data from the original data with a generative model.
However, both representation learning and generative modeling suffer from hallucinations. I wonder whether these hallucinations would be reinforced, and whether the biases of the generative model would propagate into the representation learning model through the generative augmentation process.