How robust are unsupervised representation learning methods (e.g. SSL) to distribution shift compared to supervised learning?
𝐒𝐡𝐨𝐫𝐭 𝐚𝐧𝐬𝐰𝐞𝐫: Quite!
𝐋𝐨𝐧𝐠 𝐚𝐧𝐬𝐰𝐞𝐫: Our #ICLR2023 paper http://arxiv.org/pdf/2206.08871.pdf
Joint work with Imant Daunhawer & Amartya Sanyal @amartya
We find that unsupervised learning methods significantly outperform supervised ones on synthetic domain generalisation datasets (e.g. CdSprites, MNIST-CIFAR)
The curse of simplicity bias observed in supervised learning is successfully avoided by SSL/autoencoders! (2/n)
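To see what simplicity bias looks like in the supervised case, here is a minimal toy sketch (not from the paper): in the spirit of MNIST-CIFAR, both a "simple" high-margin feature and a "complex" low-margin feature predict the label at train time, a logistic-regression learner latches onto the simple one, and it collapses once the simple cue is randomised at test time. All numbers and feature choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Train set: a "simple" feature (large margin, like the MNIST digit) and a
# "complex" feature (small margin, like the CIFAR object) are BOTH fully
# predictive of the label -- analogous to the MNIST-CIFAR construction.
y = rng.integers(0, 2, n) * 2 - 1            # labels in {-1, +1}
simple = y * 3.0 + rng.normal(0, 0.1, n)     # easy, high-margin cue
complex_ = y * 0.5 + rng.normal(0, 0.5, n)   # harder, low-margin cue
X = np.stack([simple, complex_], axis=1)

# Logistic regression by gradient descent (the supervised learner).
w = np.zeros(2)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w) * y))       # P(correct) per example
    w += 0.1 * ((1 - p) * y) @ X / n         # ascend the log-likelihood

# Simplicity bias: nearly all the weight lands on the simple feature.
print("weights:", w)

# Shifted test set: the simple cue is now pure noise, so a classifier
# that ignored the complex cue drops toward chance accuracy.
y_t = rng.integers(0, 2, 500) * 2 - 1
X_t = np.stack([rng.normal(0, 3.0, 500),
                y_t * 0.5 + rng.normal(0, 0.5, 500)], axis=1)
acc = np.mean(np.sign(X_t @ w) == y_t)
print(f"shifted-test accuracy: {acc:.2f}")
```

An SSL or autoencoding objective has no label to shortcut, so it is pushed to encode both features, which is the intuition behind the robustness gap above.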