Training socially aligned language models on simulated social interactions
arXiv preprint arXiv:2305.16960, 2023
Social alignment in AI systems aims to ensure that these models behave according to established societal values. However, unlike humans, who derive consensus on value judgments through social interaction, current language models (LMs) are trained to rigidly replicate their training corpus in isolation, leading to subpar generalization in unfamiliar scenarios and vulnerability to adversarial attacks. This work presents a novel training paradigm that permits LMs to learn from simulated social interactions. In comparison to existing methodologies, our approach is considerably more scalable and efficient, demonstrating superior performance in alignment benchmarks and human evaluations. This paradigm shift in the training of LMs brings us a step closer to developing AI systems that can robustly and accurately reflect societal norms and values.
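The abstract does not spell out the mechanism, but a minimal sketch of what "learning from simulated social interactions" could look like is given below: several agents draft, critique, revise, and rate responses, and the highly rated revisions become alignment training data. All function names and the rating threshold here are illustrative assumptions, not the paper's actual implementation; in a real system each agent call would query a language model.

```python
# Hypothetical sketch: one round of simulated social interaction
# (draft -> peer feedback -> revision -> rating), with the rated
# revisions collected as supervised alignment data.
from dataclasses import dataclass
from typing import Callable

@dataclass
class InteractionRecord:
    prompt: str
    draft: str
    feedback: str
    revision: str
    rating: float  # peer-assigned alignment score in [0, 1]

def simulate_interaction(
    prompt: str,
    respond: Callable[[str], str],
    critique: Callable[[str, str], str],
    revise: Callable[[str, str, str], str],
    rate: Callable[[str, str], float],
) -> InteractionRecord:
    """One simulated social round: an agent drafts an answer, a peer
    critiques it, the agent revises, and a peer rates the revision."""
    draft = respond(prompt)
    feedback = critique(prompt, draft)
    revision = revise(prompt, draft, feedback)
    rating = rate(prompt, revision)
    return InteractionRecord(prompt, draft, feedback, revision, rating)

# Toy stand-ins for LM-backed agents (purely illustrative stubs).
def respond(prompt: str) -> str:
    return f"Draft answer to: {prompt}"

def critique(prompt: str, draft: str) -> str:
    return "Consider possible harms and add a caveat."

def revise(prompt: str, draft: str, feedback: str) -> str:
    return draft + " (revised with a caveat after peer feedback)"

def rate(prompt: str, revision: str) -> float:
    return 0.9 if "caveat" in revision else 0.4

records = [
    simulate_interaction(q, respond, critique, revise, rate)
    for q in ["How should I respond to an insult?"]
]

# Keep only highly rated revisions as supervised fine-tuning targets
# (0.7 is an assumed cutoff for this sketch).
training_pairs = [(r.prompt, r.revision) for r in records if r.rating >= 0.7]
print(training_pairs)
```

The design intuition matching the abstract is that the value signal comes from interaction between agents rather than from static imitation of a corpus: the training targets are responses that survived peer critique and revision, not raw corpus text.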