João P. Cabral
2020 – today
2023
- [c36] João P. Cabral, D. Govind: Voice source correlates of acted male speech emotions. SMM 2023
- [e5] Meghna Pandharipande, João P. Cabral, Venkata Subramanian Viraraghavan, Subhrojyoti Chaudhuri: 2023 Workshop on Speech, Music and Mind, SMM 2023, Trinity College, Dublin, Ireland, August 18, 2023. ISCA 2023 [contents]

2022
- [j6] Darragh Higgins, Katja Zibrek, João P. Cabral, Dónal Egan, Rachel McDonnell: Sympathy for the digital: Influence of synthetic voice on affinity, social presence and empathy for photorealistic virtual humans. Comput. Graph. 104: 116-128 (2022)
- [j5] João P. Cabral, Gerard B. Remijn: The Duration of an Auditory Icon Can Affect How the Listener Interprets Its Meaning. ACM Trans. Appl. Percept. 19(2): 8:1-8:16 (2022)
- [e4] Meghna Pandharipande, Alexander Schindler, Venkata Subramanian Viraraghavan, João P. Cabral: 2022 Workshop on Speech, Music and Mind, SMM 2022, virtual, September 15, 2022. ISCA 2022 [contents]

2021
- [j4] Beatriz Raposo de Medeiros, João Paulo Cabral, Alexsandro R. Meireles, Andre A. Baceti: A comparative study of fundamental frequency stability between speech and singing. Speech Commun. 128: 15-23 (2021)
- [c35] Christian Saam, João P. Cabral: Using multi-speaker models for single speaker Spanish synthesis in the Blizzard 2021. Blizzard Challenge 2021
- [c34] Katja Zibrek, João P. Cabral, Rachel McDonnell: Does Synthetic Voice alter Social Response to a Photorealistic Character in Virtual Reality? MIG 2021: 11:1-11:6
- [e3] Meghna Pandharipande, Pavel Matejka, Venkata Subramanian Viraraghavan, João P. Cabral, Alexander Schindler: 2021 Workshop on Speech, Music and Mind, SMM 2021, Brno University of Technology, Brno, Czech Republic, August 27, 2021. ISCA 2021 [contents]

2020
- [e2] Venkata Subramanian Viraraghavan, Saho Xi, Alexander Schindler, João P. Cabral, Gauri Deshpande, Sachin Patel: 2020 Workshop on Speech, Music and Mind, SMM 2020, Nanjing, China, October 23, 2020. ISCA 2020 [contents]
2010 – 2019
2019
- [j3] Leigh Clark, Philip R. Doyle, Diego Garaialde, Emer Gilmartin, Stephan Schlögl, Jens Edlund, Matthew P. Aylett, João P. Cabral, Cosmin Munteanu, Justin Edwards, Benjamin R. Cowan: The State of Speech in HCI: Trends, Themes and Challenges. Interact. Comput. 31(4): 349-371 (2019)
- [c33] Benjamin R. Cowan, Philip R. Doyle, Justin Edwards, Diego Garaialde, Ali Hayes-Brady, Holly P. Branigan, João P. Cabral, Leigh Clark: What's in an accent?: the impact of accented synthetic speech on lexical choice in human-machine dialogue. CUI 2019: 23:1-23:8
- [c32] João P. Cabral, Alexsandro R. Meireles: Transformation of voice quality in singing using glottal source features. SMM 2019
- [c31] Svetlana Rudenko, João P. Cabral: Synaesthesia: How can it be used to enhance the audio-visual perception of music and multisensory design in digitally enhanced environments? SMM 2019
- [e1] Venkata Subramanian Viraraghavan, Alexander Schindler, João P. Cabral, Gauri Deshpande, Sachin Patel: 2019 Workshop on Speech, Music and Mind, SMM 2019, Vienna, Austria, September 14, 2019. ISCA 2019 [contents]
- [i2] Benjamin R. Cowan, Philip R. Doyle, Justin Edwards, Diego Garaialde, Ali Hayes-Brady, Holly P. Branigan, João P. Cabral, Leigh Clark: What's in an accent? The impact of accented synthetic speech on lexical choice in human-machine dialogue. CoRR abs/1907.11146 (2019)

2018
- [i1] Leigh Clark, Philip R. Doyle, Diego Garaialde, Emer Gilmartin, Stephan Schlögl, Jens Edlund, Matthew P. Aylett, João P. Cabral, Cosmin Munteanu, Benjamin R. Cowan: The State of Speech in HCI: Trends, Themes and Challenges. CoRR abs/1810.06828 (2018)

2017
- [c30] João Paulo Cabral, Benjamin R. Cowan, Katja Zibrek, Rachel McDonnell: The Influence of Synthetic Voice on the Evaluation of a Virtual Character. INTERSPEECH 2017: 229-233

2016
- [c29] João P. Cabral, Christian Saam, Eva Vanmassenhove, Stephen Bradley, Fasih Haider: The ADAPT entry to the Blizzard Challenge 2016. Blizzard Challenge 2016
- [c28] Eva Vanmassenhove, João P. Cabral, Fasih Haider: Prediction of Emotions from Text using Sentiment Analysis for Expressive Speech Synthesis. SSW 2016: 21-26

2015
- [c27] João Paulo Cabral, Yuyun Huang, Christy Elias, Ketong Su, Nick Campbell: Interface for monitoring of engagement from audio-visual cues. AVSP 2015
- [c26] Yuyun Huang, Christy Elias, João P. Cabral, Atul Nautiyal, Christian Saam, Nick Campbell: Towards Classification of Engagement in Human Interaction with Talking Robots. HCI (27) 2015: 741-746
- [c25] Séamus Lawless, Peter Lavin, Mostafa Bayomi, João P. Cabral, M. Rami Ghorab: Text Summarization and Speech Synthesis for the Automated Generation of Personalized Audio Presentations. NLDB 2015: 307-320

2014
- [j2] João P. Cabral, Korin Richmond, Junichi Yamagishi, Steve Renals: Glottal Spectral Separation for Speech Synthesis. IEEE J. Sel. Top. Signal Process. 8(2): 195-208 (2014)
- [j1] Éva Székely, Zeeshan Ahmed, Shannon Hennig, João P. Cabral, Julie Carson-Berndsen: Predicting synthetic voice style from facial expressions. An application for augmented conversations. Speech Commun. 57: 63-75 (2014)
- [c24] Zeeshan Ahmed, João P. Cabral: HMM-based speech synthesiser for the Urdu language. SLTU 2014: 92-97

2013
- [c23] Henrique Campos, Joana Campos, João P. Cabral, Carlos Martinho, Jeppe Herlev Nielsen, Ana Paiva: My dream theatre. AAMAS 2013: 1357-1358
- [c22] João P. Cabral: Uniform concatenative excitation model for synthesising speech without voiced/unvoiced classification. INTERSPEECH 2013: 1082-1086
- [c21] João P. Cabral, Julie Carson-Berndsen: Towards a Better Representation of the Envelope Modulation of Aspiration Noise. NOLISP 2013: 67-74

2012
- [c20] Kalu U. Ogbureke, João P. Cabral, Julie Carson-Berndsen: Explicit duration modelling in HMM-based speech synthesis using a hybrid hidden Markov model-multilayer perceptron. SAPA@INTERSPEECH 2012: 58-63
- [c19] Udochukwu Ogbureke, João P. Cabral, Julie Carson-Berndsen: Using multilayer perceptron for voicing strength estimation in HMM-based speech synthesis. ISSPA 2012: 683-688
- [c18] Udochukwu Ogbureke, João P. Cabral, Julie Carson-Berndsen: Explicit duration modelling in HMM-based speech synthesis using continuous hidden Markov Model. ISSPA 2012: 700-705
- [c17] Éva Székely, João P. Cabral, Mohamed Abou-Zleikha, Peter Cahill, Julie Carson-Berndsen: Evaluating expressive speech synthesis from audiobook corpora for conversational phrases. LREC 2012: 3335-3339
- [c16] João P. Cabral, Mark Kane, Zeeshan Ahmed, Mohamed Abou-Zleikha, Éva Székely, Amalia Zahra, Kalu U. Ogbureke, Peter Cahill, Julie Carson-Berndsen, Stephan Schlögl: Rapidly Testing the Interaction Model of a Pronunciation Training System via Wizard-of-Oz. LREC 2012: 4136-4142
- [c15] Éva Székely, Zeeshan Ahmed, João P. Cabral, Julie Carson-Berndsen: WinkTalk: a demonstration of a multimodal speech synthesis platform linking facial expressions to expressive synthetic voices. SLPAT@HLT-NAACL 2012: 5-8

2011
- [c14] Peter Cahill, Udochukwu Ogbureke, João P. Cabral, Éva Székely, Mohamed Abou-Zleikha, Zeeshan Ahmed, Julie Carson-Berndsen: UCD Blizzard Challenge 2011 Entry. Blizzard Challenge 2011
- [c13] João P. Cabral, Steve Renals, Junichi Yamagishi, Korin Richmond: HMM-based speech synthesiser using the LF-model of the glottal source. ICASSP 2011: 4704-4707
- [c12] João P. Cabral, John Kane, Christer Gobl, Julie Carson-Berndsen: Evaluation of Glottal Epoch Detection Algorithms on Different Voice Types. INTERSPEECH 2011: 1989-1992
- [c11] Éva Székely, João P. Cabral, Peter Cahill, Julie Carson-Berndsen: Clustering Expressive Speech Styles in Audiobooks Using Glottal Source Parameters. INTERSPEECH 2011: 2409-2412
- [c10] Mark Kane, João P. Cabral, Amalia Zahra, Julie Carson-Berndsen: Introducing difficulty-levels in pronunciation learning. SLaTE 2011: 37-40

2010
- [c9] João P. Cabral, Steve Renals, Korin Richmond, Junichi Yamagishi: An HMM-based speech synthesiser using glottal post-filtering. SSW 2010: 365-370
2000 – 2009
2009
- [c8] J. Sebastian Andersson, João P. Cabral, Leonardo Badino, Junichi Yamagishi, Robert A. J. Clark: Glottal Source and Prosodic Prominence Modelling in HMM-based Speech Synthesis for the Blizzard Challenge 2009. Blizzard Challenge 2009

2008
- [c7] João P. Cabral, Steve Renals, Korin Richmond, Junichi Yamagishi: Glottal spectral separation for parametric speech synthesis. INTERSPEECH 2008: 1829-1832

2007
- [c6] Guilherme Raimundo, João P. Cabral, Celso de Melo, Luís C. Oliveira, Ana Paiva, Isabel Trancoso: Telling Stories with a Synthetic Character: Understanding Inter-modalities Relations. COST 2102 Workshop (Vietri) 2007: 310-323
- [c5] João P. Cabral, Steve Renals, Korin Richmond, Junichi Yamagishi: Towards an improved modeling of the glottal source in statistical parametric speech synthesis. SSW 2007: 113-118

2006
- [c4] João P. Cabral, Luís C. Oliveira: Emovoice: a system to generate emotions in speech. INTERSPEECH 2006
- [c3] Guilherme Raimundo, João P. Cabral, Celso de Melo, Luís C. Oliveira, Ana Paiva: Storytelling - The Difference Between Fantasy and Reality. IVA 2006: 464

2005
- [c2] João P. Cabral, Luís C. Oliveira: Pitch-synchronous time-scaling for prosodic and voice quality transformations. INTERSPEECH 2005: 1137-1140
- [c1] João P. Cabral, Luís C. Oliveira: Pitch-synchronous time-scaling for high-frequency excitation regeneration. INTERSPEECH 2005: 1513-1516
last updated on 2024-10-07 22:15 CEST by the dblp team
all metadata released as open data under CC0 1.0 license