TY - JOUR
A1 - Triantafyllopoulos, Andreas
A1 - Schuller, Björn W.
A1 - Iymen, Gokce
A1 - Sezgin, Metin
A1 - He, Xiangheng
A1 - Yang, Zijiang
A1 - Tzirakis, Panagiotis
A1 - Liu, Shuo
A1 - Mertes, Silvan
A1 - André, Elisabeth
A1 - Fu, Ruibo
A1 - Tao, Jianhua
T1 - An overview of affective speech synthesis and conversion in the deep learning era
T2 - Proceedings of the IEEE
N2 - Speech is the fundamental mode of human communication, and its synthesis has long been a core priority in human–computer interaction research. In recent years, machines have managed to master the art of generating speech that is understandable by humans. However, the linguistic content of an utterance encompasses only a part of its meaning. Affect, or expressivity, has the capacity to turn speech into a medium capable of conveying intimate thoughts, feelings, and emotions—aspects that are essential for engaging and naturalistic interpersonal communication. While the goal of imparting expressivity to synthesized utterances has so far remained elusive, following recent advances in text-to-speech synthesis, a paradigm shift is well under way in the fields of affective speech synthesis and conversion as well. Deep learning, as the technology that underlies most of the recent advances in artificial intelligence, is spearheading these efforts. In this overview, we outline ongoing trends and summarize state-of-the-art approaches in an attempt to provide a broad overview of this exciting field.
KW - Electrical and Electronic Engineering
Y1 - 2023
UR - https://opus.bibliothek.uni-augsburg.de/opus4/frontdoor/index/index/docId/104403
UR - https://nbn-resolving.org/urn:nbn:de:bvb:384-opus4-1044036
DO - 10.1109/jproc.2023.3250266
SN - 0018-9219
SN - 1558-2256
VL - 111
IS - 10
SP - 1355
EP - 1381
PB - IEEE
CY - Piscataway, NJ
ER -