Anywhere: A Multi-Agent Framework for Reliable and Diverse Foreground-Conditioned Image Inpainting

T Xie, R Ma, Q Wang, X Ye, F Liu, Y Tai, Z Zhang, Z Yi
arXiv preprint arXiv:2404.18598, 2024
Recent advancements in image inpainting, particularly through diffusion modeling, have yielded promising outcomes. However, when completing images conditioned on foreground objects, current methods that inpaint in an end-to-end manner encounter challenges such as "over-imagination", inconsistency between foreground and background, and limited diversity. In response, we introduce Anywhere, a pioneering multi-agent framework designed to address these issues. Anywhere employs a pipeline of agents, including a Visual Language Model (VLM), a Large Language Model (LLM), and image generation models, organized into three principal components: the prompt generation module, the image generation module, and the outcome analyzer. The prompt generation module performs semantic analysis of the input foreground image, leveraging the VLM to predict relevant language descriptions and the LLM to recommend optimal language prompts. In the image generation module, a text-guided canny-to-image generation model creates a template image from the edge map of the foreground image and the language prompts, and an image refiner produces the outcome by blending the input foreground with the template image. The outcome analyzer employs the VLM to evaluate image content rationality, aesthetic score, and foreground-background relevance, triggering prompt and image regeneration as needed. Extensive experiments demonstrate that our Anywhere framework excels in foreground-conditioned image inpainting, mitigating "over-imagination", resolving foreground-background discrepancies, and enhancing diversity. It elevates foreground-conditioned image inpainting to produce more reliable and diverse results.
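The abstract describes a regenerate-until-accepted loop over three modules. The Python below is a minimal sketch of how such a loop could be wired up, with each agent passed in as a callable; every name here (inpaint_anywhere, vlm_describe, Verdict, and so on) is a hypothetical placeholder, not the authors' released code or API.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Verdict:
    acceptable: bool  # True if rationality, aesthetic, and relevance checks all pass

def inpaint_anywhere(
    foreground: Any,
    vlm_describe: Callable[[Any], str],            # VLM: foreground image -> description
    llm_suggest_prompt: Callable[[str], str],      # LLM: description -> language prompt
    extract_edges: Callable[[Any], Any],           # e.g. a Canny edge detector
    generate_template: Callable[[Any, str], Any],  # text-guided canny-to-image model
    refine: Callable[[Any, Any], Any],             # blends the input foreground into the template
    analyze: Callable[[Any, Any], Verdict],        # VLM-based outcome analyzer
    max_rounds: int = 3,
) -> Any:
    """Run the three-module loop until the outcome analyzer accepts a result."""
    candidate = None
    for _ in range(max_rounds):
        # 1. Prompt generation module: semantic analysis of the foreground image.
        prompt = llm_suggest_prompt(vlm_describe(foreground))

        # 2. Image generation module: build a template from the foreground's edge
        #    map and the prompt, then blend the original foreground back in.
        template = generate_template(extract_edges(foreground), prompt)
        candidate = refine(foreground, template)

        # 3. Outcome analyzer: regenerate prompt and image unless the candidate
        #    passes the rationality / aesthetic / relevance checks.
        if analyze(candidate, foreground).acceptable:
            break
    return candidate
```

Passing the agents as callables keeps the sketch self-contained and mirrors the multi-agent decomposition: swapping in a different VLM, LLM, or generator changes no control flow.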