Feature Stylization and Domain-aware Contrastive Learning for Domain Generalization

Abstract

Domain generalization aims to enhance model robustness against domain shift without access to the target domain. Since the number of available domains is limited during training, recent approaches focus on generating samples of novel domains. Nevertheless, they either struggle with optimization when synthesizing many domains or distort the original semantics. To address these issues, we propose a novel domain generalization framework in which feature statistics are utilized to transfer original features into features with novel domain properties. To preserve the original semantics during stylization, we first decompose the features into high- and low-frequency components. Afterwards, only the texture cues in the low-frequency components are stylized according to manipulated domain statistics, while the shape cues in the high-frequency components are preserved. Finally, we re-merge the two components to synthesize novel-domain features. To enhance domain robustness, we use the stylized features to enforce model consistency in terms of both features and outputs. Feature consistency is achieved with a novel domain-aware supervised contrastive loss, which encourages domain invariance while increasing class discriminability. Output consistency is enforced with a consistency loss that minimizes the disagreement between the outputs for the original and stylized features. Experimental results demonstrate the effectiveness of the proposed feature stylization and losses. Quantitative comparisons verify that our method outperforms existing state-of-the-art methods on two benchmarks, PACS and Office-Home.
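The stylization step can be sketched in code. The following is a minimal illustration, not the paper's exact implementation: the average-pooling low-pass filter, the noise_scale jitter of the channel statistics, and the function name stylize_features are all assumptions made for this example.

import torch
import torch.nn.functional as F

def stylize_features(feat, noise_scale=0.1, eps=1e-6):
    """Hypothetical sketch: decompose, restylize texture, re-merge.

    feat: (B, C, H, W) intermediate CNN features.
    """
    # Frequency decomposition (assumption: a simple average-pooling
    # low-pass filter; the paper's decomposition may differ).
    low = F.avg_pool2d(feat, kernel_size=3, stride=1, padding=1)
    high = feat - low  # high-frequency residual carries the shape cues

    # Channel-wise domain statistics of the low-frequency (texture) part.
    mu = low.mean(dim=(2, 3), keepdim=True)
    sigma = low.std(dim=(2, 3), keepdim=True) + eps
    normalized = (low - mu) / sigma

    # Manipulated statistics (assumption: random jitter emulating a
    # novel domain; the paper may manipulate statistics differently).
    new_mu = mu * (1.0 + noise_scale * torch.randn_like(mu))
    new_sigma = sigma * (1.0 + noise_scale * torch.randn_like(sigma))
    stylized_low = normalized * new_sigma + new_mu

    # Re-merge: novel-domain texture plus preserved original shape.
    return stylized_low + high

Restylizing only the low-frequency component is the key design choice here: the class-defining shape information in the high-frequency residual passes through untouched, while the domain-specific texture statistics are varied.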
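The feature-level consistency can likewise be sketched as a supervised contrastive loss computed over the original and stylized views. The sketch below is a standard supervised contrastive formulation in which same-class pairs across the two views are positives; the paper's domain-aware variant may weight or select pairs by domain, and the function name domain_aware_supcon_loss and temperature value are assumptions.

import torch
import torch.nn.functional as F

def domain_aware_supcon_loss(z_orig, z_styl, labels, tau=0.07):
    """Sketch: supervised contrastive loss over original + stylized views.

    z_orig, z_styl: (B, D) projected embeddings; labels: (B,) class labels.
    Same-class pairs across the two views are positives, pushing the
    model to be invariant to the synthetic domain change.
    """
    z = F.normalize(torch.cat([z_orig, z_styl], dim=0), dim=1)  # (2B, D)
    y = torch.cat([labels, labels], dim=0)                      # (2B,)
    n = y.size(0)

    sim = z @ z.t() / tau  # cosine similarities (z is L2-normalized)
    self_mask = torch.eye(n, device=z.device)
    pos_mask = (y.unsqueeze(0) == y.unsqueeze(1)).float() - self_mask

    # Log-softmax over all non-self pairs (numerically stabilized).
    logits = sim - sim.max(dim=1, keepdim=True).values.detach()
    exp_logits = torch.exp(logits) * (1.0 - self_mask)
    log_prob = logits - torch.log(exp_logits.sum(dim=1, keepdim=True))

    # Average log-probability over each anchor's positive pairs.
    mean_log_prob_pos = (pos_mask * log_prob).sum(1) / pos_mask.sum(1).clamp(min=1.0)
    return -mean_log_prob_pos.mean()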

Publication
The 29th ACM International Conference on Multimedia
(MM 2021)