Compositional Text-to-Image Synthesis with Attention Map Control of Diffusion Models

Authors

  • Ruichen Wang, OPPO Research Institute
  • Zekang Chen, South China University of Technology
  • Chen Chen, OPPO Research Institute
  • Jian Ma, OPPO Research Institute
  • Haonan Lu, OPPO Research Institute
  • Xiaodong Lin, Rutgers University

DOI:

https://doi.org/10.1609/aaai.v38i6.28364

Keywords:

CV: Computational Photography, Image & Video Synthesis, ML: Deep Generative Models & Autoencoders

Abstract

Recent text-to-image (T2I) diffusion models show outstanding performance in generating high-quality images conditioned on textual prompts. However, their limited compositional capabilities cause them to fail to semantically align the generated images with the prompts, leading to attribute leakage, entity leakage, and missing entities. In this paper, we propose a novel attention mask control strategy based on predicted object boxes to address these issues. In particular, we first train a BoxNet to predict a box for each entity that possesses the attribute specified in the prompt. Then, conditioned on the predicted boxes, a unique mask control is applied to the cross- and self-attention maps. By constraining the image region that each prompt token may attend to, our approach produces semantically more accurate syntheses. The proposed method is straightforward and effective, and it can be readily integrated into existing cross-attention-based T2I generators. We compare our approach to competing methods and demonstrate that it faithfully conveys the semantics of the original text to the generated content and works well as a ready-to-use plugin. Code is available at https://github.com/OPPO-Mente-Lab/attention-mask-control.
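The cross-attention masking described in the abstract can be sketched in a few lines of PyTorch. This is a minimal illustration under stated assumptions, not the authors' implementation: the BoxNet itself, the self-attention variant, and any mask scheduling across denoising steps are omitted, and the names `boxes_to_masks`, `mask_cross_attention`, and `entity_token_ids` are hypothetical.

```python
import torch

def boxes_to_masks(boxes, h, w):
    """Rasterize normalized (x0, y0, x1, y1) boxes into binary masks of shape (N, h, w)."""
    masks = torch.zeros(len(boxes), h, w)
    for i, (x0, y0, x1, y1) in enumerate(boxes):
        masks[i, int(y0 * h):int(y1 * h), int(x0 * w):int(x1 * w)] = 1.0
    return masks

def mask_cross_attention(logits, masks, entity_token_ids, neg=-1e4):
    """Restrict each entity/attribute token to attend only inside its predicted box.

    logits:           (batch*heads, h*w, seq_len) pre-softmax cross-attention scores,
                      where queries are image patches and keys are prompt tokens.
    masks:            (num_entities, h, w) binary box masks from boxes_to_masks.
    entity_token_ids: prompt position of the token tied to each box.
    """
    flat = masks.flatten(1)  # (num_entities, h*w)
    for mask, tok in zip(flat, entity_token_ids):
        # Outside the box, push this token's score toward -inf so that
        # softmax assigns it (near-)zero attention there.
        logits[:, :, tok] = logits[:, :, tok].masked_fill(mask[None] == 0, neg)
    return logits

# Toy usage: two entities on a 16x16 latent grid (boxes and token positions made up).
boxes = torch.tensor([[0.0, 0.0, 0.5, 1.0],   # first entity on the left half
                      [0.5, 0.0, 1.0, 1.0]])  # second entity on the right half
masks = boxes_to_masks(boxes, 16, 16)
logits = torch.randn(8, 16 * 16, 77)          # e.g. a 77-token CLIP prompt
logits = mask_cross_attention(logits, masks, entity_token_ids=[2, 5])
attn = logits.softmax(dim=-1)
```

In a cross-attention layer, each column of the attention map corresponds to one prompt token; masking an entity or attribute token's column outside its box keeps, for example, an attribute like "red" from leaking onto the wrong entity.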

Published

2024-03-24

How to Cite

Wang, R., Chen, Z., Chen, C., Ma, J., Lu, H., & Lin, X. (2024). Compositional Text-to-Image Synthesis with Attention Map Control of Diffusion Models. Proceedings of the AAAI Conference on Artificial Intelligence, 38(6), 5544-5552. https://doi.org/10.1609/aaai.v38i6.28364

Issue

Vol. 38 No. 6 (2024)

Section

AAAI Technical Track on Computer Vision V