TY - JOUR
T1 - Zero-Shot Image Harmonization With Generative Model Prior
AU - Chen, Jianqi
AU - Zhang, Yilan
AU - Zou, Zhengxia
AU - Chen, Keyan
AU - Shi, Zhenwei
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
AB - We propose a zero-shot approach to image harmonization, aiming to overcome the reliance on large amounts of synthetic composite images in existing methods. These methods, while showing promising results, involve significant training expenses and often struggle with generalization to unseen images. To this end, we introduce a fully modularized framework inspired by human behavior. Leveraging the reasoning capabilities of recent foundation models in language and vision, our approach comprises three main stages. Initially, we employ a pretrained vision-language model (VLM) to generate descriptions for the composite image. Subsequently, these descriptions guide the foreground harmonization direction of a text-to-image generative model (T2I). We refine text embeddings for enhanced representation of imaging conditions and employ self-attention and edge maps for structure preservation. Following each harmonization iteration, an evaluator determines whether to conclude or modify the harmonization direction. The resulting framework, mirroring human behavior, achieves harmonious results without the need for extensive training. We present compelling visual results across diverse scenes and objects, along with quantitative comparisons validating the effectiveness of our approach.
KW - Image harmonization
KW - diffusion model
KW - image composition
KW - zero-shot method
UR - https://www.scopus.com/pages/publications/85217015401
U2 - 10.1109/TMM.2025.3535343
DO - 10.1109/TMM.2025.3535343
M3 - Article
AN - SCOPUS:85217015401
SN - 1520-9210
VL - 27
SP - 4494
EP - 4507
JO - IEEE Transactions on Multimedia
JF - IEEE Transactions on Multimedia
ER -