Adversarial target-invariant representation learning for domain generalization
In contrast to standard assumptions within the empirical risk minimization setting, several applications of machine learning exhibit distribution shifts between training and test data. As such, a number of domain generalization strategies have been introduced with the goal of achieving good performance on out-of-distribution samples. In this work, we are interested in finding a set of target distributions for which it is possible to guarantee generalization. We show that pairwise invariance across training distributions ensures invariance to any target domain that can be expressed as a mixture of the available training domains. We thus present an upper bound on the risk on the target distribution that depends on a discrepancy measure between pairs of source domains. Following this insight, we introduce an adversarial approach in which pairwise divergences are estimated and minimized. Experiments on two domain generalization benchmarks for object recognition show that the proposed method yields higher average accuracy on the target domains than previously introduced adversarial strategies, as well as recently proposed methods based on learning invariant representations.
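The pairwise scheme described above can be illustrated with a minimal sketch (this is not the authors' implementation; all names and the choice of discriminator are hypothetical): for every pair of source domains, a small discriminator is trained to distinguish their feature distributions, and its advantage over chance accuracy serves as a proxy for the pairwise divergence. The sum of these estimates over all domain pairs is the quantity an adversarially trained feature extractor would drive toward zero.

```python
import numpy as np
from itertools import combinations

def discriminator_discrepancy(feats_a, feats_b, lr=0.1, steps=200):
    """Proxy divergence between two feature sets: fit a logistic-regression
    discriminator to tell them apart; its accuracy above chance is the
    discrepancy estimate (0 = indistinguishable, 0.5 = fully separable)."""
    X = np.vstack([feats_a, feats_b])
    y = np.concatenate([np.zeros(len(feats_a)), np.ones(len(feats_b))])
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        w -= lr * (X.T @ (p - y)) / len(y)      # cross-entropy gradient step
        b -= lr * np.mean(p - y)
    acc = np.mean(((X @ w + b) > 0) == y)
    return max(acc - 0.5, 0.0)

def pairwise_invariance_penalty(domain_feats):
    """Sum of estimated divergences over all pairs of source domains --
    the pairwise term the adversarial approach would minimize."""
    return sum(discriminator_discrepancy(domain_feats[i], domain_feats[j])
               for i, j in combinations(range(len(domain_feats)), 2))

# Toy check: features already invariant across domains vs. domain-shifted ones.
rng = np.random.default_rng(0)
aligned = [rng.normal(0.0, 1.0, (100, 4)) for _ in range(3)]
shifted = [rng.normal(m, 1.0, (100, 4)) for m in (0.0, 2.0, 4.0)]
print(pairwise_invariance_penalty(aligned) < pairwise_invariance_penalty(shifted))
```

In a full adversarial setup, the feature extractor would receive the negated gradient of this penalty, so that representations become indistinguishable across every pair of source domains; under the mixture argument in the abstract, this invariance then transfers to any target expressible as a mixture of the sources.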