Large human language models: A need and the challenges

N Soni, HA Schwartz, J Sedoc, et al. - arXiv preprint arXiv:2312.07751, 2023 - arxiv.org
As research in human-centered NLP advances, there is a growing recognition of the importance of incorporating human and social factors into NLP models. At the same time, our NLP systems have become heavily reliant on LLMs, most of which do not model authors. To build NLP systems that can truly understand human language, we must better integrate human contexts into LLMs. This brings to the fore a range of design considerations and challenges in terms of what human aspects to capture, how to represent them, and what modeling strategies to pursue. To address these, we advocate for three positions toward creating large human language models (LHLMs), drawing on concepts from the psychological and behavioral sciences: first, LM training should include the human context; second, LHLMs should recognize that people are more than their group(s); third, LHLMs should account for the dynamic, temporally dependent nature of the human context. We point to relevant advances and lay out the open challenges, along with possible solutions, that must be addressed to realize these goals.
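
To make the first position concrete, the following is a minimal, illustrative sketch (not the authors' proposed method) of one way human context could enter LM training: adding a learned per-author embedding to the token embeddings of a small causal transformer. All names and sizes here (UserConditionedLM, n_users, d_model, and so on) are hypothetical choices for the example.

import torch
import torch.nn as nn

class UserConditionedLM(nn.Module):
    """Toy causal LM that conditions every token on a learned author embedding."""
    def __init__(self, vocab_size=1000, n_users=100, d_model=128, n_layers=2, n_heads=4):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.user_emb = nn.Embedding(n_users, d_model)  # static "human context" per author
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, token_ids, user_ids):
        # token_ids: (batch, seq_len); user_ids: (batch,)
        x = self.tok_emb(token_ids) + self.user_emb(user_ids).unsqueeze(1)
        causal_mask = nn.Transformer.generate_square_subsequent_mask(token_ids.size(1))
        h = self.encoder(x, mask=causal_mask)
        return self.lm_head(h)  # next-token logits

# Toy usage: the same prefix written by two different authors yields different predictions.
model = UserConditionedLM()
tokens = torch.randint(0, 1000, (2, 16))
users = torch.tensor([3, 7])
print(model(tokens, users).shape)  # torch.Size([2, 16, 1000])

A single static embedding like this illustrates the idea but also the limits the abstract points to: it reduces a person to one fixed vector, whereas the second and third positions call for representations that go beyond group membership and evolve over time (for example, temporally indexed or recurrent user states).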