Abstract:
Traditional image-to-image translation studies typically assume that an image can be decomposed into independent content and style representations. However, effectively using the style representation to guide the translation result remains challenging. To address exemplar-based image translation, we introduce a texture co-occurrence discriminator into the framework to disentangle style from content while controlling the specific translation style. We further establish a memory-guided image-patch comparison mechanism to enhance semantic understanding. Experiments demonstrate that the proposed method successfully accomplishes exemplar-based translation tasks and surpasses state-of-the-art techniques in generation quality on conventional translation objectives.