CoCoG: Controllable Visual Stimuli Generation Based on Human Concept Representations
Chen Wei, Jiachen Zou, Dietmar Heinke, Quanying Liu
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence
Main Track. Pages 3178-3186.
https://doi.org/10.24963/ijcai.2024/352
A central question in cognitive science is how humans process visual scenes, i.e., how to uncover the low-dimensional concept representation space underlying human perception of high-dimensional visual stimuli. Generating visual stimuli with controlled concepts is key to answering this question. However, no existing generative models in AI address this problem. Here, we present the Concept based Controllable Generation (CoCoG) framework. CoCoG consists of two components: a simple yet efficient AI agent that extracts interpretable concepts and predicts human decision-making in visual similarity judgment tasks, and a conditional generation model that generates visual stimuli given the concepts. We quantify the performance of CoCoG from two aspects: the accuracy of human behavior prediction and the ability to control generation. Experiments with CoCoG indicate that 1) the reliable concept embeddings in CoCoG allow us to predict human behavior with 64.07% accuracy on the THINGS-similarity dataset; 2) CoCoG can generate diverse stimuli through the control of concepts; 3) CoCoG can manipulate human similarity judgment behavior by intervening on key concepts.
CoCoG offers visual objects with controllable concepts, advancing our understanding of causality in human cognition. The code of the CoCoG framework is available at https://github.com/ncclab-sustech/CoCoG.
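To make the two-component design concrete, below is a minimal, illustrative PyTorch sketch. It is not the authors' released implementation; the class names, dimensions (visual_dim=512, num_concepts=42), and the dot-product similarity rule for odd-one-out prediction are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class ConceptEncoder(nn.Module):
    """Maps a high-dimensional visual embedding (e.g., from a pretrained
    vision model) to a low-dimensional, non-negative concept embedding.
    Dimensions here are illustrative assumptions."""
    def __init__(self, visual_dim=512, num_concepts=42):
        super().__init__()
        self.proj = nn.Linear(visual_dim, num_concepts)

    def forward(self, visual_embedding):
        # Non-negative activations make each concept dimension easier to interpret.
        return torch.relu(self.proj(visual_embedding))

def predict_odd_one_out(concepts_a, concepts_b, concepts_c):
    """Predicts a triplet similarity judgment from concept embeddings:
    the pair with the highest dot-product similarity is kept, and the
    remaining item is predicted as the odd one out (0 = a, 1 = b, 2 = c)."""
    sim_ab = (concepts_a * concepts_b).sum(-1)
    sim_ac = (concepts_a * concepts_c).sum(-1)
    sim_bc = (concepts_b * concepts_c).sum(-1)
    # If b and c are most similar, a is the odd one out, and so on.
    sims = torch.stack([sim_bc, sim_ac, sim_ab], dim=-1)
    return sims.argmax(dim=-1)

# Usage with random stand-ins for visual embeddings of three stimuli.
encoder = ConceptEncoder()
a, b, c = (encoder(torch.randn(1, 512)) for _ in range(3))
print(predict_odd_one_out(a, b, c))
```

In the full framework, the predicted concept embedding would additionally serve as the conditioning signal for a conditional generation model, so that stimuli can be synthesized with specified concept activations.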
Keywords:
Humans and AI: HAI: Cognitive systems
Humans and AI: HAI: Cognitive modeling
Humans and AI: HAI: Human-computer interaction