How robust are unsupervised representation learning methods (e.g. SSL) to distribution shift compared to supervised learning?
𝐒𝐡𝐨𝐫𝐭 𝐚𝐧𝐬𝐰𝐞𝐫: Quite!
𝐋𝐨𝐧𝐠 𝐚𝐧𝐬𝐰𝐞𝐫: Our #ICLR2023 paper http://arxiv.org/pdf/2206.08871.pdf
Joint work with Imant Daunhawer & Amartya Sanyal @amartya
We verify this by subsampling realistic DG datasets to create extreme distribution shifts between train and test splits, and find that unsupervised learning performs much better on these extreme versions of the datasets! (5/n)
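A minimal sketch of the subsampling idea (a hypothetical illustration, not the paper's exact protocol): restrict each class to a single domain at train time, so every (domain, class) combination seen at test time is unseen during training. The function name `extreme_split` and the toy data are my own for illustration.

```python
import random

def extreme_split(samples, seed=0):
    """Split (domain, label, x) tuples so train and test share no
    (domain, label) pair -- an extreme train/test distribution shift.

    Hypothetical sketch: each label is assigned one random 'train domain';
    all its examples from other domains go to the test split.
    """
    rng = random.Random(seed)
    labels = sorted({lab for _, lab, _ in samples})
    domains = sorted({dom for dom, _, _ in samples})
    # Assign each label a single domain that it may appear in at train time.
    train_domain = {lab: rng.choice(domains) for lab in labels}
    train = [s for s in samples if s[0] == train_domain[s[1]]]
    test = [s for s in samples if s[0] != train_domain[s[1]]]
    return train, test

# Toy usage: 2 domains x 2 classes x 3 examples each.
data = [(d, l, f"img_{d}_{l}_{i}")
        for d in ("photo", "sketch")
        for l in ("cat", "dog")
        for i in range(3)]
train, test = extreme_split(data)
# No (domain, label) pair appears in both splits.
assert {(d, l) for d, l, _ in train}.isdisjoint(
       {(d, l) for d, l, _ in test})
```

Under this split, a supervised model can exploit domain-specific shortcuts for each class, while representations learned without labels have less opportunity to latch onto the label-domain correlation.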