EUROSPEECH 1993: Berlin, Germany
- Third European Conference on Speech Communication and Technology, EUROSPEECH 1993, Berlin, Germany, September 22-25, 1993. ISCA 1993
Keynotes
- Janet M. Baker:
Dictation, directories, and data bases; emerging PC applications for large vocabulary speech recognition. 3-10 - William J. Barry, Paul Dalsgaard:
Speech database annotation: the importance of a multi-lingual approach. 13-20 - Lori Lamel, Jean-Luc Gauvain:
Identifying non-linguistic speech features. 23-30 - Jeremy Peckham:
A new generation of spoken dialogue systems: results and lessons from the SUNDIAL project. 33-40 - Roger K. Moore:
Whither a theory of speech pattern processing? 43-47 - Peter Noll:
Speech coding for communications. 479-488 - Hermann Ney:
Modeling and search in continuous speech recognition. 491-498 - Maxine Eskénazi:
Trends in speaking styles research. 501-509 - John S. Bridle:
Models of speech recognition; personal perspectives on particular approaches. 513-515 - Kai-Fu Lee:
The conversational computer: an Apple perspective. 1377-1384 - Ute Jekosch:
Speech quality assessment and evaluation. 1387-1394 - Jan P. H. van Santen:
Timing in text-to-speech systems. 1397-1404 - Roberto Pieraccini, Esther Levin, Enrique Vidal:
Learning how to understand language. 1407-1412
Speech Coding
- Kazunori Ozawa, Masahiro Serizawa, Toshiki Miyano, Toshiyuki Nomura:
M-LCELP speech coding at bit-rates below 4kbps. 51-54 - Eduardo López Gonzalo, Luis A. Hernández Gómez:
Fast vector quantization using neural maps for CELP at 2400bps. 55-58 - Ulrich Balss, U. Kipper, Herbert Reininger, Dietrich Wolf:
Improving the speech quality of CELP-coders by optimizing the long-term delay determination. 59-62 - Carmen García-Mateo, José Luis Alba-Castro, Luis A. Hernández Gómez:
A stochastic speech coder with multi-band long-term prediction. 63-66 - B. W. M. Wery, Herman J. M. Steeneken:
Intelligibility evaluation of 4-5 kbps CELP and MBE vocoders: the HERMES program experiment. 67-70 - Przemyslaw Dymarski, Nicolas Moreau:
Algorithms for the CELP coder with ternary excitation. 241-244 - Michel Mauc, Geneviève Baudoin, Milan Jelinek:
Complexity reduction for federal standard 1016 CELP coder. 245-248 - Friedhelm Wuppermann, Christiane Antweiler, M. Kappelan:
Objective analysis of the GSM half rate speech codec candidates. 249-252 - Ira A. Gerson, Mark A. Jasiuk:
A 5600 BPS VSELP speech coder candidate for half-rate GSM. 253-256 - Ahmet M. Kondoz, Barry G. Evans, M. R. Suddle:
A speech coder for TV programme description. 257-260 - Satoshi Miki, Kazunori Mano, Hitoshi Ohmuro, Takehiro Moriya:
Pitch synchronous innovation CELP (PSI-CELP). 261-264 - Asunción Moreno, José A. R. Fonollosa, Josep Vidal:
Vocoder design based on HOS. 519-522 - Nigel Sedgwick:
Emulation of a formant vocoder at 600 and 800 bps. 523-526 - W. Ma, Ahmet M. Kondoz, Barry G. Evans:
A pitch synchronized synthesizer for the IMBE vocoder. 527-530 - Thierry Dutoit, Henri Leich:
An analysis of the performances of the MBE model when used in the context of a text-to-speech system. 531-534 - C. F. Chan:
High-quality synthesis of LPC speech using multiband excitation model. 535-538 - Yair Shoham:
High-quality speech coding at 2.4 kbps based on time-frequency interpolation. 741-744 - Luca Marcato, Enzo Mumolo:
Coding of speech signal by fractal techniques. 745-748 - Naomi Asanuma, Hiromi Nagabuchi:
A new reference signal for evaluating the quality of speech coded at low bit rates. 749-752 - Changxue Ma, Douglas D. O'Shaughnessy:
A psychophysical study of Fourier phase and amplitude coding of speech. 753-756
Articulatory Modelling
- Denis Beautemps, Pierre Badin, Rafael Laboissière:
Recovery of vocal tract midsagittal and area functions from speech signal for vowels and fricative consonants. 73-76 - Shrikanth S. Narayanan, Abeer A. Alwan:
Strange attractors and chaotic dynamics in the production of voiced and voiceless fricatives. 77-80 - Noël Nguyen, Philip Hoole:
Frequency variations of the lowest main spectral peak in sibilant clusters. 81-84 - Hélène Loevenbruck, Pascal Perrier:
Vocalic reduction: prediction of acoustic and articulatory variabilities with invariant motor commands. 85-88 - Christophe Savariaux, Pascal Perrier, Jean Pierre Orliaguet:
Compensating for labial perturbation in a rounded vowel: an acoustic and articulatory study. 89-92 - Rudolph Sock, Anders Löfqvist:
Resistance of bilabials /p, b/ to anticipatory labial and mandibular coarticulation from vowel types /i, a, u/. 541-544 - Mounir Jomaa, Christian Abry:
Jaw phasings and velocity profiles in Arabic. 545-548 - Morten Olesen:
Derivation of the transfer function for a speech production model including the nasal cavity. 549-552 - Mats Båvegård, Jesper Högberg:
Using artificial neural nets to compare different vocal tract models. 553-556 - Arne Kjell Foldvik, Ulf Kristiansen, Jorn Kvaerness:
A time-evolving three-dimensional vocal tract model by means of magnetic resonance imaging (MRI). 557-558
Voice Source Analysis and Modelling
- Juergen Schroeter, Bert Cranen:
Physiologically-motivated modeling of the voice source in articulatory analysis/synthesis. 95-98 - Luís C. Oliveira:
Estimation of source parameters by frequency analysis. 99-102 - Helmer Strik, Bert Cranen, Lou Boves:
Fitting a LF-model to inverse filter signals. 103-106 - Jean Schoentgen:
Modelling the glottal pulse with a self-excited threshold auto-regressive model. 107-110 - Joachim Denzler, Ralf Kompe, Andreas Kießling, Heinrich Niemann, Elmar Nöth:
Going back to the source: inverse filtering of the speech signal with ANNs. 111-114
HMM-Based Recognition System
- Manuel A. Leandro, José Manuel Pardo:
Low cost speaker dependent isolated word speech preselection system using static phoneme pattern recognition. 117-120 - Lori Lamel, Jean-Luc Gauvain:
High performance speaker-independent phone recognition using CDHMM. 121-124 - Jean-Luc Gauvain, Lori Lamel, Gilles Adda, Martine Adda-Decker:
Speaker-independent continuous speech dictation. 125-128 - Ernst Günter Schukat-Talamazzini, Heinrich Niemann, Wieland Eckert, Thomas Kuhn, Stefan Rieck:
Automatic speech recognition without phonemes. 129-132 - Takashi Seino, Seiichi Nakagawa:
Spoken language identification using ergodic HMM with emphasized state transition. 133-136
Speech Signal Processing
- Bruno Apolloni, Dario Crivelli, Marco Amato:
Neural time warping. 139-142 - Philippe Le Cerf, Dirk Van Compernolle:
Speaker independent small vocabulary speech recognition using MLPs for phonetic labeling. 143-146 - Andrzej Drygajlo:
Multiresolution time-frequency speech processing based on orthogonal wavelet packet pulse forms. 147-150 - Eliathamby Ambikairajah, M. Keane, Liam Kilmartin, Graham Tattersall:
The application of the wavelet transform for speech processing. 151-154 - Naoto Iwahashi, Yoshinori Sagisaka:
Duration modelling with multiple split regression. 329-332 - Gerry Altmann, Duncan Young:
Factors affecting adaptation to time-compressed speech. 333-336 - Marc Roelands, Werner Verhelst:
Waveform similarity based overlap-add (WSOLA) for time-scale modification of speech: structures and evaluation. 337-340 - Hsiao-Chuan Wang, Hsiao-Fen Pai:
A study on the weighting factors of two-dimensional cepstral distance measure. 341-344 - Yves Kamp, Changxue Ma:
Connection between weighted LPC and higher-order statistics for AR model estimation. 345-347
Speaker Recognition
- Claude C. Chibelushi, John S. Mason, R. Deravi:
Integration of acoustic and visual speech for speaker recognition. 157-160 - Claude Montacié, Jean-Luc Le Floch:
Discriminant AR-vector models for free-text speaker verification. 161-164 - J. Thompson, John S. Mason:
Within class optimization of cepstra for speaker recognition. 165-168 - Frédéric Bimbot, Luc Mathan:
Text-free speaker recognition using an arithmetic-harmonic sphericity measure. 169-172
Data Bases, Speech Assessment, Noisy Speech
- Asunción Moreno, Dolors Poch, Antonio Bonafonte, Eduardo Lleida, Joaquim Llisterri, José B. Mariño, Climent Nadeu:
Albayzin speech database: design of the phonetic corpus. 175-178 - Carlos M. Ribeiro, Isabel Trancoso, António Joaquim Serralheiro:
A software tool for speech collection, recognition and reproduction. 179-182 - Matti Karjalainen, Toomas Altosaar:
An object-oriented database for speech processing. 183-186 - Dominic S. F. Chan, Adrian Fourcin:
Automatic annotation using multi-sensor data. 187-190 - Christoph Draxler, Hans G. Tillmann, Barbara Eisen:
Prolog tools for accessing the PHONDAT database of spoken German. 191-194 - Ute Jekosch:
Cluster-similarity: a useful database for speech processing. 195-198 - Giuseppe Castagneri, Giuseppe Di Fabbrizio, Antonio Massone, Mario Oreglia:
SIRVA - a large speech database collected on the Italian telephone network. 199-201 - Herman J. M. Steeneken, Jan A. Verhave, Tammo Houtgast:
Objective assessment of speech communication systems; introduction of a software based procedure. 203-206 - Sven W. Danielsen:
Enhanced direct assessment of speech input systems within the SAM-A ESPRIT project. 207-210 - Pascale Nicolas, Pascal Romeas:
Evaluation of prosody in the French version of multilingual text-to-speech synthesis: neutralising segmental information in preliminary tests. 211-214 - Sokol Saliu, Hideki Kasuya, Yasuo Endo, Yoshinobu Kikuchi:
A clinical voice evaluation system. 215-218 - Alan Wrench, Mary S. Jackson, Mervyn A. Jack, David S. Soutar, A. Gerry Robertson, Janet MacKenzie, John Laver:
A speech therapy workstation for the assessment of segmental quality: voiceless fricatives. 219-222 - Josep M. Salavedra, Enrique Masgrau, Asunción Moreno, Xavier Jove:
A speech enhancement system using higher-order AR estimation in real environments. 223-226 - Régine Le Bouquin, Gérard Faucon, A. Akbariazirani:
Proposal of a composite measure for the evaluation of noise cancelling methods in speech processing. 227-230 - Philip Mark Crozier, Barry M. G. Cheetham, C. Holt, E. Munday:
The use of linear prediction and spectral scaling for improving speech enhancement. 231-234 - Helge B. D. Sørensen, Uwe Hartmann:
Robust speaker-independent speech recognition using non-linear spectral subtraction based IMELDA. 235-238
Phonetics
- Willem H. Vieregge, A. P. A. Broeders:
Intra- and interspeaker variation of /r/ in Dutch. 267-270 - Mechtild Tronnier, Masatake Dantsuji:
An acoustic approach to fricatives in Japanese and German. 271-274 - Céu Viana, Isabel Trancoso, Carlos M. Ribeiro, Amalia Andrade, Ernesto d'Andrade:
The relationship between spelled and spoken Portuguese: implications for speech synthesis and recognition. 275-278 - Mark S. Schmidt, S. Fitt, C. Scott, Mervyn A. Jack:
Phonetic transcription standards for European names (ONOMASTICA). 279-282 - Ove Andersen, Paul Dalsgaard, William J. Barry:
Data-driven identification of poly- and mono-phonemes for four European languages. 759-762 - Sheri Hunnicutt, Helen M. Meng, Stephanie Seneff, Victor W. Zue:
Reversible letter-to-sound sound-to-letter generation based on parsing word morphology. 763-766 - Jan Moore, Peter Roach:
The role of context in the automatic recognition of stressed syllables. 767-770 - Duncan Young, Gerry Altmann, Anne Cutler, Dennis Norris:
Metrical structure and the perception of time-compressed speech. 771-774 - Valerie Pasdeloup, José Morais, Régine Kolinsky:
Are stress and phonemic string processed separately? Evidence from speech illusions. 775-778
Phoneme Classification and Labelling
- R. J. J. H. van Son, Louis C. W. Pols:
Vowel identification as influenced by vowel duration and formant track shape. 285-288 - William Goldenthal, James R. Glass:
Modelling spectral dynamics for vowel classification. 289-292 - Milan Stamenkovic, Juraj Bakran, Peter Tancig, Marijan Miletic:
Perceptive and spectral volumes of synthesized and natural vowels. 293-296 - Ryszard Gubrynowicz, Adam Wrzoskowicz:
Labeller - a system for automatic labelling of continuous speech signal. 297-300 - Ake Andersson, Holger Broman:
Towards automatic speech-to-text alignment. 301-304
Duration Modelling in HMMs
- Nelly Suaudeau, Régine André-Obrecht:
Sound duration modelling and time-variable speaking rate in a speech recognition system. 307-310 - M. Jones, Philip C. Woodland:
Using relative duration in large vocabulary speech recognition. 311-314 - Yifan Gong, William C. Treurniet:
Duration of phones as function of utterance length and its use in automatic speech recognition. 315-318 - Mark E. Forsyth, Mervyn A. Jack:
Duration modelling and multiple codebooks in semi-continuous HMMs for speaker verification. 319-322 - Mike Hochberg, Harvey F. Silverman:
Constraining model duration variance in HMM-based connected-speech recognition. 323-326
Speaker Adaptation and Normalization
- Christine Tuerk, Tony Robinson:
A new frequency shift function for reducing inter-speaker variance. 351-354 - Yoshio Ono, Hisashi Wakita, Yunxin Zhao:
Speaker normalization using constrained spectra shifts in auditory filter domain. 355-358 - Yunxin Zhao:
Self-learning speaker adaptation based on spectral variation source decomposition. 359-362 - Tetsuo Kosaka, Edward Willems, Jun-ichi Takami, Shigeki Sagayama:
A dynamic approach to speaker adaptation of hidden Markov networks for speech recognition. 363-366 - Lars Knohl, Ansgar Rinscheid:
Speaker normalization and adaptation based on feature-map projection. 367-370
Speech Analysis, Articulatory Modelling
- Marcel de Leeuw, Jean Caelen:
Pitch synchronous calculation of acoustic cues using a cochlea model. 373-376 - Steve McLaughlin, Andrew Lowry:
Nonlinear dynamical systems concepts in speech analysis. 377-380 - Arno J. Klaassen:
Grouping of acoustical events using cable neurons and the theory of neuronal group selection. 381-384 - I. R. Gransden, Steve W. Beet:
Computationally efficient methods of calculating instantaneous frequency for auditory analysis. 385-388 - Francesco Cutugno, Pietro Maturi:
Analysing connected speech with wavelets: some Italian data. 389-392 - Krzysztof Marasek:
Speech transients analysis using AR-smoothed Wigner-Ville distribution. 393-396 - Michel Pitermann, Jean Caelen:
Comparison of the variability of formants and formant targets using dynamic modeling. 397-400 - Jean Schoentgen, Zoubir Azami:
Pitch-synchronous formant extraction by means of a compound auto-regressive model. 401-404 - Bernard Teston:
A new air flowmeter design for the investigation of speech production. 405-408 - Emanuela Magno Caldognetto, Kyriaki Vagges, Giancarlo Ferrigno, Claudio Zmarich:
Articulatory dynamics of lips in Italian /'vpv/ and /'vbv/ sequences. 409-412 - Ahmed M. Elgendy:
Restricted distribution of pharyngeal segments: acoustical or mechanical constraints? 413-416 - Yohan Payan, Pascal Perrier:
Vowel normalization by articulatory normalization: first attempts for vowel transitions. 417-420 - Nobuhiro Miki, Naohisa Kamiyama, Nobuo Nagai:
Synthesis and analysis of vocal source with vibration of larynx. 421-424 - Imad Znagui, Sami Boudelaa:
Towards an acoustic-phonetic classification of Modern Standard Arabic vowels. 425-427 - Alain Marchal, Christine Meunier:
Divers' speech: variable encoding strategies. 429-432 - Lourdes Aguilar, Beatriz Blecua, Maria Machuca, Rafael Marín:
Phonetic reduction processes in spontaneous speech. 433-436 - N. R. Ganguli:
Spectral characteristics of fricative sound. 437-440 - Jean-François Bonastre, Henri Meloni:
Automatic speaker recognition and analytic process. 441-444 - Danielle Duez:
Second formant locus-nucleus patterns in French and Swedish. 445-447 - Christine Meunier:
Temporal organisation of segments and sub-segments in consonant clusters. 449-452 - Abdelkader Bétari, Rémy Bulot:
Automatic recognition of Arabic stop consonants. 453-456 - M. Inés Torres, P. Iparraguirre:
Acoustic-phonetic decoding of Spanish occlusive consonants. 457-460 - Philip Christov:
Normalized vowel system representation for comparative phonetic studies. 461-464 - Cécile Thilly:
Influence of prevocalic consonant on vowel duration in French CV[p] utterances. 465-468 - Peter E. Czigler:
Temporal variation in consonant clusters in Swedish. 469-471 - Wiktor Jassem:
Discriminant analysis of continuous consonantal spectra. 473-476
Prosody: Rhythm, Style, Emotion
- Edmund Rooney, Miriam Eckert, Steven M. Hiller, Rebecca Vaughan, John Laver:
Training consonants in a computer-aided system for pronunciation teaching. 561-564 - Andrej Miksic, Bogomir Horvat:
Rhythm analysis of speech and music signals. 565-568 - Gitta P. M. Laan, Dick R. van Bergem:
The contribution of pitch contour, phoneme durations and spectral features to the character of spontaneous and read aloud speech. 569-572 - Juan María Garrido, Joaquim Llisterri, Carme de la Mota, Antonio Rios:
Prosodic differences in reading style: isolated vs. contextualized sentences. 573-576 - Jean Vroomen, René Collier, Sylvie J. L. Mozziconacci:
Duration and intonation in emotional speech. 577-580
Improved Algorithms for HMMs
- Colin M. Ayer, Melvyn J. Hunt, David Michael Brookes:
A discriminatively derived linear transform for improved speech recognition. 583-586 - Marco Saerens:
Hidden Markov models assuming a continuous-time dynamic emission of acoustic vectors. 587-590 - Saeed Vaseghi, P. N. Conner, Ben P. Milner:
Speech modelling using cepstral-time feature matrices. 591-594 - Yoshiharu Abe, Kunio Nakajima:
A bounded transition hidden Markov model for continuous speech recognition. 595-598 - Ami Moyal, Arnon Cohen:
Speaker independent phoneme recognition using a heuristic search. 599-602 - Fritz Class, Alfred Kaltenmeier, Peter Regel-Brietzmann:
Optimization of an HMM - based continuous speech recognizer. 803-806 - Marco Saerens, Hervé Bourlard:
Linear and nonlinear prediction for speech recognition with hidden Markov models. 807-810 - M. N. Lokbani, Denis Jouvet, Jean Monné:
Segmental post-processing of the n-best solutions in a speech recognition system. 811-814 - Tatsuo Matsuoka, Chin-Hui Lee:
A study of on-line Bayesian adaptation for HMM-based speech recognition. 815-818 - B. A. Maxwell, Philip C. Woodland:
Hidden Markov models using shared vector linear predictors. 819-822
Noisy Speech and Enhancement
- Maurizio Omologo, Piergiorgio Svaizer:
Talker localization and speech enhancement in a noisy environment using a microphone array based acquisition system. 605-608 - Takao Kobayashi, Toshio Kanno, Satoshi Imai:
Generalized cepstral modeling of speech degraded by additive noise. 609-612 - Stelios Bakamidis, George Carayannis:
Noise quality improvement through SVD equalization. 613-616 - Fei Xie, Dirk Van Compernolle:
Speech enhancement by nonlinear spectral estimation - a unifying approach. 617-620 - Kristian Kroschel, Keld Lange:
Subband array processing for speech enhancement. 621-624
Speaker Variability
- Vincent Pean, Sheila M. Williams, Maxine Eskénazi:
The design and recording of ICY, a corpus for the study of intraspeaker variability and the characterisation of speaking styles. 627-630 - Andrej Ljolje:
Speaker clustering for improved speech recognition. 631-634 - Henk van den Heuvel, Bert Cranen, A. C. M. Rietveld:
Speaker-variability in spectral bands of Dutch vowel segments. 635-638 - Shuichi Itahashi, Kimihito Tanaka:
A method of classification among Japanese dialects. 639-642 - Juan Antonio Hernández-Méndez, Aníbal R. Figueiras-Vidal:
Measuring similarities among speakers by means of neural networks. 643-646
Segmentation and Labelling
- Maria Rangoussi, Stylianos Bakamidis, George Carayannis:
Robust endpoint detection of speech in the presence of noise. 649-652 - Bianca Angelini, Fabio Brugnara, Daniele Falavigna, Diego Giuliani, Roberto Gretter, Maurizio Omologo:
Automatic segmentation and labeling of English and Italian speech databases. 653-656 - Azarshid Farhat, Guy Perennou, Régine André-Obrecht:
A segmental approach versus a centisecond one for automatic phonetic time-alignment. 657-660 - I. Heroaez, J. Barandiaran, Enrique Monte, Borja Etxebarria:
A segmentation algorithm based on acoustical features using a self organizing neural network. 661-663 - Piero Cosi:
SLAM: segmentation and labelling automatic module. 665-668 - Christian Heise, Hans-Heinrich Bothe:
Phone and syllable segmentation by concurrent window modules. 669-672 - Barbara Eisen:
Reliability of speech segmentation and labelling at different levels of transcription. 673-676 - Dick R. van Bergem:
On the perception of acoustic and lexical vowel reduction. 677-680 - Brit van Ooyen, Anne Cutler, Pier Marco Bertinetto:
Click detection in Italian and English. 681-684 - Andrew R. Nix, M. Gareth Gaskell, William D. Marslen-Wilson:
Phonological variation and mismatch in lexical access. 685-688 - Monique van Zon, Béatrice de Gelder:
Perception of word boundaries by Dutch listeners. 689-692 - Anne Bonneau, Linda Djezzar, Yves Laprie:
Perception of French stop bursts, implications for stop identification. 693-696 - Zdravko Kacic, Bogomir Horvat:
Using isofrequency neural column for harmonic sound scene decomposition. 697-700 - A. K. Datta:
Do ear perceive vowel through formants? 701-704 - Trupti Vyas, Michael J. Pont, Seyed J. Mashari:
Speech recognition using auditory models and neural networks. 705-708 - Changxue Ma, Armin Kohlrausch:
The influence of temporal processes on spectral masking patterns of harmonic complex tones and vowels. 709-712 - Hisao Kuwabara:
Temporal effect on the perception of continuous speech and a possible mechanism in the human auditory system. 713-716 - Edward Jones, Eliathamby Ambikairajah:
Comparison of various adaptation mechanisms in an auditory model for the purpose of speech processing. 717-720 - I. A. Vartanian, Tatiana V. Chernigovskaya:
Sensory-motor manifestations of speech-hearing interaction. 721-723 - Tatiana V. Chernigovskaya, I. A. Vartanian, T. I. Tokareva:
Syllable perception: lateralization of native and foreign languages. 725-726 - Michael J. Pont:
Simulation of short-latency auditory evoked potentials: a pilot study. 727-730 - Régine Kolinsky, José Morais:
Intermediate representations in spoken word recognition: a cross-linguistic study of word illusions. 731-734 - Jianfen Cao:
Time-varying manner on formant trajectories of Chinese diphthongs. 735-738 - Yifan Gong, Jean Paul Haton:
Iterative transformation and alignment for speech labeling. 1759-1762 - Kai Hübener, Andreas Hauenstein:
Controlling search in segmentation lattices of speech signals. 1763-1766 - Hiroshi Shimodaira, Mitsuru Nakai:
Accent phrase segmentation using transition probabilities between pitch pattern templates. 1767-1770 - Wolfgang Reichl, Günther Ruske:
Syllable segmentation of continuous speech with artificial neural networks. 1771-1774 - Mats Blomberg, Rolf Carlson:
Labelling of speech given its text representation. 1775-1778
Prosody: Analysis and Modelling of F0 Contours
- Louis ten Bosch:
On the automatic classification of pitch movements. 781-784 - U. Jensen, Roger K. Moore, Paul Dalsgaard, Børge Lindberg:
Modelling of intonation contours at the sentence level using CHMMs and the 1961 O'Connor and Arnold scheme. 785-788 - Paul Taylor:
Automatic recognition of intonation from F0 contours using the rise/fall/connection model. 789-792 - Edouard Geoffrois:
A pitch contour analysis guided by prosodic event detection. 793-796 - Grazyna Demenko, Ignacy Nowak, Janusz Imiolczyk:
Analysis and synthesis of pitch movements in a read Polish text. 797-800
Speech Recognition in Noise
- William A. Ainsworth, Georg F. Meyer:
Noise adaptation: speech recognition by auditory models and human listeners. 825-828 - Juan Arturo Nolazco-Flores, Steve J. Young:
Adapting a HMM-based recogniser for noisy speech enhanced by spectral subtraction. 829-832 - Tetsunori Kobayashi, Ryuji Mine, Katsuhiko Shirai:
Speech recognition under the unstationary noise based on the noise Markov model and spectral-subtraction. 833-836 - Mark J. F. Gales, Steve J. Young:
HMM recognition in noise using parallel model combination. 837-840 - Laurent Buniet, Dominique Fohr, Yolande Anglade, Jean-Claude Junqua, Jean-Marie Pierrel:
Selectively trained neural networks for connected word recognition in noisy environments. 841-844
Speaker Independency
- Bianca Angelini, Fabio Brugnara, Daniele Falavigna, Diego Giuliani, Roberto Gretter, Maurizio Omologo:
A baseline of a speaker independent continuous speech recognizer of Italian. 847-850 - Lalit R. Bahl, Peter V. de Souza, P. S. Gopalakrishnan, David Nahamoo, Michael Picheny:
Word lookahead scheme for cross-word right context models in a stack decoder. 851-854 - David B. Grayden, Michael S. Scordilis:
Recognition of obstruent phonemes in speaker-independent fluent speech using a hierarchical approach. 855-858 - Bernd Plannerer, Günther Ruske:
A continuous speech recognition system using phonotactic constraints. 859-862
Speech Synthesis
- M. Ouadou, A. Rajouani, M. Zyoute, J. Rosenfeld, Mohamed Najim:
Joint Arabic-Hebrew speech synthesis system. 865-868 - Eduardo López Gonzalo, Gábor Olaszy, Géza Németh:
Improvements of the Spanish version of the Multivox text-to-speech system. 869-872 - Mats Ljungqvist, Hiroya Fujisaki:
Generating intonation for Swedish text-to-speech conversion using a quantitative model for the F0 contour. 873-876 - Peter Meyer, Hans-Wilhelm Rühl, Regina Krüger, Marianne Kugler, L. Vogten, A. Dirksen, Karim Belhoula:
PHRITTS - a text-to-speech synthesizer for the German language. 877-880 - Karim Belhoula:
Rule-based grapheme-to-phoneme conversion of names. 881-884 - Iain R. Murray, Morag M. Black:
A prototype text-to-speech system for Scottish Gaelic. 885-887 - Janusz Imiolczyk, Ignacy Nowak, Grazyna Demenko:
A text-to-speech system for Polish. 889-892 - Marian J. Macchi, Mary Jo Altom, Dan Kahn, Sharad Singhal, Murray F. Spiegel:
Intelligibility as a function of speech coding method for template-based speech synthesis. 893-896 - Maggie Gaved:
Pronunciation and text normalisation in applied text-to-speech systems. 897-900 - Jill House, Catriona MacDermid, Scott McGlashan, Andrew Simpson, Nick J. Youd:
Evaluating synthesised prosody in simulations of an automated telephone enquiry service. 901-904 - Katherine Morton, Marcel Tatham:
Speech synthesis in dialogue systems. 905-908 - Elissaveta Abadjieva, Iain R. Murray, John L. Arnott:
Applying analysis of human emotional speech to enhance synthetic speech. 909-912 - Eric Lewis, Marcel Tatham:
A generic front end for text-to-speech synthesis systems. 913-916 - Robert W. P. Luk, Robert I. Damper:
Experiments with silent-e and affix correspondences in stochastic phonographic transduction. 917-920 - Georg Fries:
Phoneme-dependent speech synthesis in the time and frequency domains. 921-924 - Inger Karlsson, Lennart Neovius:
Speech synthesis experiments with the GLOVE synthesiser. 925-928 - Volker Kraft:
Auditory detection of discontinuities in synthesis-by-concatenation. 929-932 - Yun-Keun Lee, Seung-Kwon Ahn:
Effects of the phase jitters on naturalness of synthesized speech. 933-936 - Briony Williams:
Letter-to-sound rules for the Welsh language. 937-940
Dialogue Structure
- Christel Müller, Fred Runge:
Dialogue design principles - key for usability of voice processing. 943-946 - Hans Dybkjær, Niels Ole Bernsen, Laila Dybkjær:
Wizard-of-Oz and the trade-off between naturalness and recogniser constraints. 947-950 - Cerian E. Jones, Roberto Garigliano:
Dialogue analysis and generation: a theory for modelling natural English dialogue. 951-954 - Catriona MacDermid:
Features of naive callers' dialogues with a simulated speech understanding and dialogue system. 955-958 - Fabrice Duermael, Bertrand Gaiffe:
Referring to actions in man-machine command dialogues. 959-962 - Yoichi Yamashita, Riichiro Mizoguchi:
Next utterance prediction based on two kinds of dialog models. 1161-1164 - T. Andernach, G. Deville, Luc Mortier:
The design of a real-world Wizard of Oz experiment for a speech driven telephone directory information system. 1165-1168 - Sheryl R. Young:
Dialog structure and plan recognition in spontaneous spoken dialog. 1169-1172 - Julia Hirschberg, Christine H. Nakatani:
A speech-first model for repair identification in spoken language systems. 1173-1176 - Sheryl R. Young, Wayne H. Ward:
Recognition confidence measures for spontaneous spoken dialog. 1177-1179
Language Modelling
- R. Zhao, Patrick Kenny, Paul Labute, Douglas D. O'Shaughnessy:
Issues in large scale statistical language modeling. 965-968 - Roberto Garigliano, Kevin Johnson, Russell James Collingham:
A data-driven case for a spontaneous speech grammar. 969-972 - Reinhard Kneser, Hermann Ney:
Improved clustering techniques for class-based statistical language modelling. 973-976 - Jeremy H. Wright, Gareth J. F. Jones, Harvey Lloyd-Thomas:
A consolidated language model for speech recognition. 977-980 - Michael K. McCandless, James R. Glass:
Empirical acquisition of word and phrase classes in the ATIS domain. 981-984 - Tung-Hui Chiang, Keh-Yih Su:
The effects of parameter smoothing on robust learning in syntactic ambiguity resolution. 1183-1186 - Enrique Vidal, Roberto Pieraccini, Esther Levin:
Learning associations between grammars: a new approach to natural language understanding. 1187-1190 - Michèle Jardino, Gilles Adda:
Language modelling for CSR of large corpus using automatic classification of words. 1191-1194 - Helmut Lucke:
Inference of stochastic context-free grammar rules from example data using the theory of Bayesian belief propagation. 1195-1198 - Petra Witschel:
Constructing linguistic oriented language models for large vocabulary speech recognition. 1199-1202
Prosody: Prosodic Parameter Manipulation
- Eduardo Rodríguez Banga, Carmen García-Mateo:
New frequency domain prosodic modification techniques. 987-990 - Hai-Dong Wang, D. Degryse, Fabrizio Carrara:
A prosody modification approach for auditory user feedback in the SPELL pronunciation teaching system. 991-994 - Tohru Takagi, Eiichi Miyasaka:
A speech prosody conversion system with a high quality speech analysis-synthesis method. 995-998 - Marc Swerts, René Collier:
On the perceived serial position of discourse units. 999-1002 - Paul C. Bagshaw, Steven M. Hiller, Mervyn A. Jack:
Enhanced pitch tracking and the processing of F0 contours for computer aided intonation teaching. 1003-1006
New Architectures for Neural Networks
- Chakib Tadj, Franck Poirier:
Improved DVQ algorithm for speech recognition: a new adaptive learning rule with neurons annihilation. 1009-1012 - Taro Sasaki, Tadashi Kitamura, Akira Iwata:
Speaker-independent 212 word recognition using combNET-II. 1013-1016 - M. Asunción Castaño, Enrique Vidal, Francisco Casacuberta:
Learning direct acoustic-to-semantic mappings through simple recurrent networks. 1017-1020
Noise Reduction and Channel Adaptation
- Saeed Vaseghi, Ben P. Milner:
Noise-adaptive hidden Markov models based on Wiener filters. 1023-1026 - K. F. Wong, S. H. Leung, H. C. Ng:
Noisy speech recognition using singular value decomposition and two-sided linear prediction. 1027-1030 - Franck Martin, Kiyohiro Shikano, Yasuhiro Minami:
Recognition of noisy speech by composition of hidden Markov models. 1031-1034 - Yuqing Gao, Jean Paul Haton:
Noise reduction and speech recognition in noise conditions tested on LPNN-based continuous speech recognition system. 1035-1038 - Michael Trompf, Ralf Richter, Harald Eckhardt, Heidi Hackbarth:
Combination of distortion-robust feature extraction and neural noise reduction for ASR. 1039-1042 - Chafic Mokbel, Jean Monné, Denis Jouvet:
On-line adaptation of a speech recognizer to variations in telephone line conditions. 1247-1250 - Matthias Wittmann, Otto Schmidbauer, Abdulmesih Aktas:
Online channel compensation for robust speech recognition. 1251-1254 - Patrice Alexandre, Jérôme Boudy, Philip Lockwood:
Evaluation of car noise reduction/compensation techniques for digit recognition in a speaker-independent context. 1255-1258 - A. Brancaccio, C. Pelaez:
Experiments on noise reduction techniques with robust voice detector in car environment. 1259-1262
Word Spotting
- Satoshi Nakamura, Toshio Akabane, Seiji Hamaguchi:
Robust word spotting in adverse car environments. 1045-1048 - Richard C. Rose:
Definition of subword acoustic units for wordspotting. 1049-1052 - Jiro Kiyama, Yoshiaki Itoh, Ryuichi Oka:
Spontaneous speech recognition by sentence spotting. 1053-1056 - Philippe Jeanrenaud, Kenney Ng, Man-Hung Siu, Jan Robin Rohlicek, Herbert Gish:
Phonetic-based word spotter: various configurations and application to event spotting. 1057-1060 - Akihiro Imamura, Mikio Kitai:
An application of word-spotting in a voice activated service entry system. 1061-1064 - Eduardo Lleida, José B. Mariño, Josep M. Salavedra, Antonio Bonafonte, Enrique Monte, A. Martinez:
Out-of-vocabulary word modelling and rejection for keyword spotting. 1265-1268 - Mary O'Kane, P. E. Kenne:
Word and phrase spotting with limited training. 1269-1272 - Jean-Marc Boite, Hervé Bourlard, Bart D'hoore, Marc Haesen:
A new approach towards keyword spotting. 1273-1276 - Jorge Alvarez-Cercadillo, Luis A. Hernández Gómez:
Grammar learning and word spotting using recurrent neural networks. 1277-1280 - Shigeki Okawa, Tetsunori Kobayashi, Katsuhiko Shirai:
Word spotting in conversational speech based on phonemic unit likelihood by mutual information criterion. 1281-1284
Speech Processing and Coding
- F. Dohnal:
Generalized frequency domain adaptive filter for acoustic echo canceller. - Joel Crestel, Michel Guitton:
Estimation of speech signal classification features in a simulated hyperbaric environment. 1069-1072 - Petr Pollák, Pavel Sovka, Jan Uhlír:
Noise suppression system for a car. 1073-1076 - Peter Heitkämper, Michael Walker II:
Adaptive gain control and echo cancellation for hands-free telephone systems. 1077-1080 - W. Nick Campbell:
Predicting segmental durations for accommodation within a syllable-level timing framework. 1081-1084 - Tore Fjällbrant, Fisseha Mekuria, Shahrokh Amirijoo:
A filterbank based on physiologically measured characteristics in an auditory model for speech signal processing. 1085-1088 - Fu-Rong Jean, Chih-Chung Kuo, Hsiao-Chuan Wang:
Spectral sensitivity weighted transform coding for LSP parameters. 1089-1092 - Rainer Martin:
An efficient algorithm to estimate the instantaneous SNR of speech signals. 1093-1096 - Laurent Mauuary, Jean Monné:
Speech/non-speech detection for voice response systems. 1097-1100 - Alexander Osipov, Vladimir Zentsov:
Time-spectral approach to compiling speech reconstruction. 1101-1104 - J. A. Haigh, John S. Mason:
A voice activity detector based on cepstral analysis. 1105-1106 - Jürgen W. Paulus, Christiane Antweiler, Christian G. Gerlach:
High quality coding of wideband speech at 24 kbit/s. 1107-1110 - H. Dia, Gang Feng, Yannick Mahieux:
A 32 kbit/s wideband speech coder based on transform coding. 1111-1114 - Oded Gottesman, Yair Shoham:
Realtime implementation of high-quality 32 kbps wideband LD-CELP coder. 1115-1118 - A. Popescu, D. Vicard, François Bill Druilhe:
A fixed-point implementation of the 16 kb/s LD-CELP speech coding algorithm. 1119-1122 - Christian G. Gerlach:
Optimality of sequential quantization in analysis-by-synthesis speech codecs. 1123-1126 - Radwan Kastantin, Gang Feng:
A sub-band MPLPC coder for high quality speech coding at 16 kbit/s. 1127-1130 - Enzo Mumolo, Alessio Rebelli:
Optimal multipulse excitation determination by simulated annealing. 1131-1134 - K. W. Law, C. F. Chan:
Split vector quantization of the LPC parameters using weighted lattice structure. 1135-1138 - Stefan Bruhn:
A new approach to noiseless interframe coding of LPC parameters in vector quantizer applications. 1139-1142 - Torbjørn Svendsen:
Efficient quantization of speech spectral information. 1143-1146 - Stefan Feldes:
Enhancing robustness of coded LPC-spectra to channel errors by use of residual redundancy. 1147-1150 - S. A. Atungsiri, Ahmet M. Kondoz, Barry G. Evans:
Multi-rate source and channel coding for mobile communication systems. 1151-1154 - Takehiro Moriya, Satoshi Miki, Kazunori Mano, Hitoshi Ohmuro:
Training method of the excitation codebook for CELP. 1155-1158
Prosody: Phrasing
- Gösta Bruce, Björn Granström, Kjell Gustafson, David House:
Phrasing strategies in prosodic parsing and speech synthesis. 1205-1208 - Eva Strangert, Bo Strangert:
Prosody in the perception of syntactic boundaries. 1209-1210 - Jan-Roelof de Pijper, Angelien Sanderman:
Prosodic cues to the perception of constituent boundaries. 1211-1214 - Esther Grabe, Tara Holst, Francis Nolan, Paul Warren:
Acoustic cues to syntactic structure - evidence from prosodic and segmental effects. 1215-1218 - Frédéric Beaugendre, Anne Lacheret-Dujour:
Automatic generation of French intonation based on a perceptual study and morpho-syntactic information. 1219-1222
MLPs and TDNNs for Speech Recognition
- Stephen A. Zahorian, Zaki B. Nossair, Claude A. Norton III:
A partitioned neural network approach for vowel classification using smoothed time/frequency features. 1225-1228 - Tadashi Kitamura:
Speaker-independent 100 word recognition using dynamic spectral features of speech and a neural network. 1229-1232 - Ming Zhu, Klaus Fellbaum:
Speaker independent isolated word recognition using vector quantization and neural networks. 1233-1236 - Kjell Elenius, Hans G. C. Tråvén:
Multi-layer perceptrons and probabilistic neural networks for phoneme recognition. 1237-1240 - C. Simon Blackburn, Julie Vonwiller, Robin W. King:
Automatic accent classification using artificial neural networks. 1241-1244 - Mark A. Huckvale:
The benefits of tiered segmentation for the recognition of phonetic properties. 1473-1476 - David M. Lubensky:
Generalized context-dependent phone modeling using artificial neural networks. 1477-1480 - Hermann Hild, Alex Waibel:
Speaker-independent connected letter recognition with a multi-state time delay neural network. 1481-1484 - Ulrich Bodenhausen, Alex Waibel:
Tuning by doing: flexibility through automatic structure optimization. 1485-1488 - Christoph Windheuser, Frédéric Bimbot:
Phonetic features for spelled letter recognition with a time delay neural network. 1489-1492 - Veronika Bappert, Matthias Jobst:
Training of a time-delay neural network for speech recognition by solving stiff differential equations. 1493-1496
Speech Translation, Language Identification, Parsers
- Shigeki Sagayama, Jun-ichi Takami, Akito Nagai, Harald Singer, Kouichi Yamaguchi, Kazumi Ohkura, Kenji Kita, Akira Kurematsu:
ATREUS: a speech recognition front-end for a speech translation system. 1287-1290 - Tsuyoshi Morimoto, Toshiyuki Takezawa, Fumihiro Yato, Shigeki Sagayama, Toshihisa Tashiro, Masaaki Nagata, Akira Kurematsu:
ATR's speech translation system: ASURA. 1291-1294 - Monika Woszczyna, Noah Coccaro, Andreas Eisele, Alon Lavie, Arthur E. McNair, Thomas Polzin, Ivica Rogina, Carolyn P. Rosé, Tilo Sloboda, Masaru Tomita, J. Tsutsumi, Naomi Aoki-Waibel, Alex Waibel, Wayne H. Ward:
Recent advances in JANUS: a speech translation system. 1295-1298 - Manny Rayner, Ivan Bretan, David M. Carter, Michael Collins, Vassilios Digalakis, Björn Gambäck, Jaan Kaja, Jussi Karlgren, Bertil Lyberg, Stephen G. Pulman, Patti Price, Christer Samuelsson:
Spoken language translation with MID-90's technology: a case study. 1299-1302 - Timothy J. Hazen, Victor W. Zue:
Automatic language identification using a segment-based approach. 1303-1306 - Yeshwant K. Muthusamy, Kay M. Berkling, Takayuki Arai, Ronald A. Cole, Etienne Barnard:
A comparison of approaches to automatic language identification using telephone speech. 1307-1310 - Ying Cheng, Yves Normandin, Paul Fortier:
Integration of neural networks and robust parsers in natural language understanding. 1311-1314 - Pierre Dauchy, Christophe Mignot, Claude Valot:
Joint speech and gesture analysis: some experimental results on multimodal interface. 1315-1318 - Keikichi Hirose, Yasuharu Asano:
Generation of speech reply in the speech response system. 1319-1322 - Evangelos Dermatas, George Kokkinakis:
A fast multilingual probabilistic tagger. 1323-1326 - Jin'ichi Murakami, Hiroki Yamamoto, Shigeki Sagayama:
The possibility for acquisition of statistical network grammar using ergodic HMM. 1327-1330 - Evelyne Millien, Roland Kuhn:
A robust analyzer for spoken language understanding. 1331-1334 - R. T. Dutton, John C. Foster, Mervyn A. Jack, F. W. M. Stentiford:
Identifying usability attributes of automated telephone services. 1335-1338 - Andrew Hunt:
Utilising prosody to perform syntactic disambiguation. 1339-1342 - Steven M. Hiller, Edmund Rooney, Jean-Paul Lefèvre, Mervyn A. Jack:
Spell: an automated system for computer-aided pronunciation teaching. 1343-1346 - Edmund Rooney, Rebecca Vaughan, Steven M. Hiller, Fabrizio Carraro, John Laver:
Training vowel pronunciation using a computer-aided teaching system. 1347-1350 - Mary Zajicek, Ken Brownsey:
Methods for traversing a pre-recorded speech message network to optimise dialogue in telephone answering systems. 1351-1354 - Roger Hanes, Jo Salter, Paul Popay, Frances Hedley:
Service creation tools for creating speech interactive services. 1355-1358 - Julia Hirschberg, Jacques M. B. Terken:
Deaccentuation and persistence of grammatical function and surface position. 1359-1362 - Stefan Euler, K. Riedel:
Design and implementation of a speech server for unix based multimedia applications. 1363-1366 - David Goodine, Victor W. Zue:
Romaine: a lattice based approach to lexical access. 1367-1370 - Toffee A. Albina, Erica G. Bernstein, David M. Goblirsch, Douglas E. Lake:
A system for clustering spoken documents. 1371-1374
Dialogue Evaluation
- Nathalie A. Vergeynst, Keith Edwards, John C. Foster, Mervyn A. Jack:
Spoken dialogues for human-computer interaction over the telephone: complexity measures. 1415-1418 - Lynette Hirschman, Christine Pao:
The cost of errors in a spoken language system. 1419-1422 - Andrew Simpson, Norman M. Fraser:
Black box and glass box evaluation of the SUNDIAL system. 1423-1426 - Cristina Delogu, Andrea Di Carlo, Ciro Sementina, Silvia Stecconi:
A methodology for evaluating human-machine spoken language interaction. 1427-1430 - Philippe Morin, Jean-Claude Junqua:
Error correction and ambiguity resolution in multimodal man-machine dialogue. 1431-1434
Data Bases
- Marie-Françoise Castaing, Dominique True-Martini:
Analysis of the speaker and operator behaviours. 1437-1439 - Alix de Ginestel-Mailland, Martine de Calmès, Guy Perennou:
Multi-level transcription of speech corpora from orthographic forms. 1441-1444 - Andrej Ljolje, Michael D. Riley:
Automatic segmentation of speech for TTS. 1445-1448 - Olivier Boëffard, B. Cherbonnel, Françoise Emerard, S. White:
Automatic segmentation and quality evaluation of speech unit inventories for concatenation-based, multilingual PSOLA text-to-speech systems. 1449-1452
Letter to Sound and Architecture for TTS
- Bert Van Coile:
On the development of pronunciation rules for text-to-speech synthesis. 1455-1458 - Walter Daelemans, Antal van den Bosch:
Tabtalk: reusability in data-oriented grapheme-to-phoneme conversion. 1459-1462 - Anders Lindström, Mats Ljungqvist, Kjell Gustafson:
A modular architecture supporting multiple hypotheses for conversion of text to phonetic and linguistic entities. 1463-1466 - Jon P. Iles, William H. Edmondson:
The use of a non-linear model for text-to-speech conversion. 1467-1470
Perception
- Astrid van Wieringen, John K. Cullen, Louis C. W. Pols:
The perceptual relevance of CV- and VC- transitions in identifying stop consonants: cross-language results. 1499-1502 - Vincent J. van Heuven, Willy Jongenburger:
Perceptual effects of place and voicing assimilation in Dutch consonants. 1503-1506 - Brit van Ooyen:
Detection of vowels and consonants by human listeners: effects of minimising auditory memory load. 1507-1510 - Gérard Bailly:
Resonances as possible representation of speech in the auditory-to-articulatory transform. 1511-1514 - Rob Goedemans, Vincent J. van Heuven:
A perceptual explanation of the weightlessness of the syllable onset. 1515-1518
Search Algorithms
- Enrico Bocchieri:
A study of the beam-search algorithm for large vocabulary continuous speech recognition and methods for improved efficiency. 1521-1524 - Lorenzo Fissore, Egidio P. Giachin, Pietro Laface, P. Massafra:
Using grammars in forward and backward search. 1525-1528 - Gernot A. Fink, Franz Kummert, Gerhard Sagerer, Bernd Seestaedt:
Robust interpretation of speech. 1529-1532 - I. Lee Hetherington, Michael S. Phillips, James R. Glass, Victor W. Zue:
A* word network search for continuous speech recognition. 1533-1536 - Roxane Lacouture, Yves Normandin:
Efficient lexical access strategies. 1537-1540
Speech Recognition, HMMs, NNs
- M. Inés Torres, Francisco Casacuberta:
Multiple codebook Spanish phone recognition using semicontinuous hidden Markov models. 1543-1546 - Antonio Bonafonte, Xavier Ros, José B. Mariño:
An efficient algorithm to find the best state sequence in HSMM. 1547-1550 - Alex Acero, Carlos Crespo-Casas, Celinda de la Torre, Juan Carlos Torrecilla:
Robust HMM-based endpoint detector. 1551-1554 - Isabel Galiano, Francisco Casacuberta:
Experiments on Spanish phone recognition using automatically derived phonemic baseforms. 1555-1558 - Seiichi Nakagawa, Hideyuki Suzuki, Li Zhao:
Evaluation of VQ-distortion based HMM. 1559-1562 - Jianming Song:
Continuous HMM for word spotting and rejection of non vocabulary word in speech recognition over telephone networks. 1563-1566 - Qiang Huo, Chorkin Chan, Chin-Hui Lee:
Bayesian learning of the parameters of discrete and tied mixture HMMs for speech recognition. 1567-1570 - Gernot A. Fink, Franz Kummert, Gerhard Sagerer, Ernst Günter Schukat-Talamazzini:
Speech recognition using semantic hidden Markov networks. 1571-1574 - Simon Downey, Martin J. Russell, Peter Nowell, David Bijl, Kirsta Galloway, Keith Ponting:
Experiments in vocabulary independent speech recognition using phoneme decision trees. 1575-1578 - Mark J. F. Gales, Steve J. Young:
Segmental hidden Markov models. 1579-1582 - Xue Wang, Louis ten Bosch, Louis C. W. Pols:
Impact of dimensionality and correlation of observation vectors in HMM-based speech recognition. 1583-1586 - Fritz Class, Alfred Kaltenmeier, Peter Regel-Brietzmann:
Evaluation of an HMM speech recognizer with various continuous speech databases. 1587-1590 - Adam Wrzoskowicz:
Hidden Markov models for noisy speech recognition. 1591-1594 - Dionysis E. Tsoukalas, John Mourjopoulos, George Kokkinakis:
Neural network speech enhancer utilizing masking properties. 1595-1598 - Maria J. Castro, Juan C. Perez:
Comparison of geometric, connectionist and structural techniques on a difficult isolated word recognition task. 1599-1602 - Abdelhamid Mellouk, Patrick Gallinari, F. Rauscher:
Prediction and discrimination in neural networks for continuous speech recognition. 1603-1606 - Shuping Ran, J. Bruce Millar:
Two schemes of phonetic feature extraction using artificial neural networks. 1607-1610 - Bojan Petek, Anuska Ferligoj:
On use of discriminant analysis in predictive connectionist speech recognition. 1611-1614 - N. H. Russell, Frank Fallside, Richard W. Prager:
Non-linear time compression for lexical access. 1615-1618 - Richard Brierton, Nigel Sedgwick:
Talker enrollment for speech recognition by synthesis. 1619-1622 - Kazuya Takeda, Naomi Inoue, Shingo Kuroiwa, Tomohiro Konuma, Seiichi Yamamoto:
Improving robustness of network grammar by using class HMM. 1623-1626 - J. A. Elliott, Mark E. Forsyth, Fergus R. McInnes, N. W. Ramsey:
Parallelising k-means clustering on distributed memory MIMD computers. 1627-1630 - P. Berenyi, Klára Vicsi:
On the proper sub-word unit inventory for CSR. 1631-1634 - Li Deng, Don X. Sun:
Speech recognition using the atomic speech units constructed from overlapping articulatory features. 1635-1638 - Olivier Siohan, Yifan Gong, Jean Paul Haton:
A Bayesian approach to phone duration adaptation for lombard speech recognition. 1639-1642 - Javier Hernando, José B. Mariño, Climent Nadeu:
Multiple multilabeling to improve HMM-based speech recognition in noise. 1643-1646 - Lutoslawa Richter, Piotr Domagała:
Discrimination of Polish stop consonants based on mapped techniques. 1647-1650
Spoken Language Dialogue
- Wieland Eckert, Scott McGlashan:
Managing spoken dialogues for information services. 1653-1656 - Paul Heisterkamp:
Ambiguity and uncertainty in spoken dialogue. 1657-1660 - Elisabetta Gerbino, Morena Danieli:
Managing dialogue in a continuous speech understanding system. 1661-1664 - Pierre Lefebvre, G. Duncan, Franck Poirier:
Speaking with computers: a multimodal approach. 1665-1668 - Philippe Morin, Jean-Claude Junqua:
Habitable interaction in goal-oriented multimodal dialogue systems. 1669-1672
Speech Input/Output Assessment
- Jorn Stern Nielsen, Bo Baungaard:
Test of voice quality on ATM based equipment. 1675-1678 - Harald Klaus, H. Klix, Jochem Sotscheck, Klaus Fellbaum:
An evaluation system for ascertaining the quality of synthetic speech based on subjective category rating tests. 1679-1682 - Arnd Mariniak:
A global framework for the assessment of synthetic speech without subjects. 1683-1686 - Lennart Neovius, Parimala Raghavendra:
Comprehension of KTH text-to-speech with "listening speed" paradigm. 1687-1690 - Hans G. Tillmann, Bernd Pompino-Marschall:
Theoretical principles concerning segmentation, labelling strategies and levels of categorical annotation for spoken language database systems. 1691-1694 - Peter J. Wyard:
The comparative assessment of commercial speech recognisers. 1881-1884 - A. Riccio, F. Ceglie, A. Brancaccio:
Reliable assessment of speech recognisers for telephone environment. 1885-1888 - Martine Garnier-Rizet:
Evaluation of a rule-based text-to-speech system for French at the segmental level. 1889-1892 - Cristina Delogu, Andrea Paoloni, Paola Ridolfi, Kyriaki Vagges:
Intelligibility of speech produced by text-to-speech synthesizers over the orthophonic and telephonic channel. 1893-1896 - Murray F. Spiegel:
Using the ORATOR® synthesizer for a public reverse-directory service: design, lessons, and recommendations. 1897-1900
Synthesis: Sound Generation
- Sophie Grau, Christophe d'Alessandro, Gaël Richard:
A speech formant synthesizer based on harmonic + random formant-waveforms representations. 1697-1700 - Alexander G. Hauptmann:
SPEAKEZ: a first experiment in concatenation synthesis from a large corpus. 1701-1704 - Joop Kerkhoff, Lou Boves:
Designing control rules for a serial pole-zero vocal tract model. 1705-1708 - Shin'ya Nakajima:
English speech synthesis based on multi-layered context oriented clustering; towards multi-lingual speech synthesis. 1709-1712 - Christine Tuerk, Tony Robinson:
Speech synthesis using artificial neural networks trained on cepstral coefficients. 1713-1716
Hybrid HMMs/ANNs for Speech Recognition
- Steve Renals, David J. C. MacKay:
Bayesian regularisation methods in a hybrid MLP-HMM system. 1719-1722 - Philipp Schmid, Ronald A. Cole, Mark A. Fanty, Hervé Bourlard, M. Haessen:
Real-time, neural network-based, French alphabet recognition with telephone speech. 1723-1726 - Gerhard Rigoll:
Joint optimization of multiple neural codebooks in a hybrid connectionist-HMM speech recognition system. 1727-1729 - Mikko Kurimo:
Using LVQ to enhance semi-continuous hidden Markov models for phonemes. 1731-1734 - Pablo Aibar, Francisco Casacuberta:
An improvement of the two-level DP matching algorithm using k-NN techniques for acoustic-phonetic decoding. 1735-1738 - Hervé Bourlard, Jean-Marc Boite, Bart D'hoore, Marco Saerens:
Performance comparison of hidden Markov models and neural networks for task dependent and independent isolated word recognition. 1925-1928 - Patrick Haffner:
Connectionist speech recognition with a global MMI algorithm. 1929-1932 - Denys Boiteau, Patrick Haffner:
Connectionist segmental post-processing of the n-best solutions in isolated and connected word recognition task. 1933-1936 - Jean-Pierre Martens, Annemie Vorstermans, Nick Cremelie:
A new dynamic programming/multi-layer perceptron hybrid for continuous speech recognition. 1937-1940 - Tony Robinson, Luís B. Almeida, Jean-Marc Boite, Hervé Bourlard, Frank Fallside, Mike Hochberg, Dan J. Kershaw, Phil Kohn, Yochai Konig, Nelson Morgan, João Paulo Neto, Steve Renals, Marco Saerens, Chuck Wooters:
A neural network based, speaker independent, large vocabulary, continuous speech recognition system: the WERNICKE project. 1941-1944
Visual Cues
- Hans-Heinrich Bothe, Frauke Rieger, Robert Tackmann:
Visual coarticulation effects in syllable environment. 1741-1744 - Christine H. Shadle, John N. Carter, T. P. Monks, J. Field:
Depth measurement of face and palate by structured light. 1745-1748 - Louis-Jean Boë, Sonia Kandel, Annie Chappelet, Tahar Lallouache:
Visiolab: a multimedia environment for the study of bimodal speech perception. 1749-1752 - Jordi Robert-Ribes, Tahar Lallouache, Pierre Escudier, Jean-Luc Schwartz:
Integrating auditory and visual representations for audiovisual vowel recognition. 1753-1756
Telecommunication, Application Aspects
- Bo Baungaard, Jorn Stern Nielsen:
Speech recognition over packetized voice systems. 1781-1784 - I. W. G. Jenkins:
Voice applications on BT's derived services network. 1785-1788 - Jean-Yves Magadur, Frédéric Gavignet, François Andry, Francis Charpentier:
A French oral dialogue system for flight reservations over the telephone. 1789-1792 - Shingo Kuroiwa, Kazuya Takeda, Naomi Inoue, Izuru Nogaito, Seiichi Yamamoto, Makoto Shozakai, Kunihiko Owa, Masahiko Takahashi, Ryuuji Matsumoto:
A voice-activated extension telephone exchange system. 1793-1796 - William C. G. Ortel, Dina Yashchin:
The VOIS project in retrospect. 1797-1800 - Eduardo Lleida, José B. Mariño, Arturo Moreno:
TELEMACO - a real time keyword spotting application for voice dialling. 1801-1804 - Peter J. Wyard:
The relative importance of the factors affecting recogniser performance with telephone speech. 1805-1808 - Thomas Burger, Ulrich Schultheiß:
A robust acoustic echo canceller for a hands-free voice-controlled telecommunication terminal. 1809-1812 - J. E. Hart, Patrick A. Naylor, Oguz Tanrikulu:
Polyphase allpass IIR structures for sub-band acoustic echo cancellation. 1813-1816 - James Monaghan, Christine Cheepen:
Speech input systems and their effect on written language skills. 1817-1820 - Gábor Olaszy, Géza Németh:
Voxaid: an interactive speaking communication aid software for the speech impaired. 1821-1824 - U. Hartmann, K. Hermansen, F. K. Fink:
Feature extraction for profoundly deaf people. 1825-1828 - Alfred Hauenstein:
Architecture of a 10,000 word real time speech recognizer. 1829-1832 - Thomas Hermann, Harald Eckhardt, Michael Trompf, Heidi Hackbarth:
A noise-robust real-time word recognition hardware module. 1833-1836 - Myoung-Wan Koo:
KARS: a speaker-independent, vocabulary-independent speech recognition system. 1837-1840 - Fergus R. McInnes, J. A. Elliott, N. W. Ramsey, Mark E. Forsyth, Andrew M. Sutherland, Mervyn A. Jack:
A parallel processing keyword recogniser for police national computer enquiries. 1841-1844 - Andrea Paoloni, Torbjørn Svendsen, Bernhard Kaspar, Denis Johnston, Gunnar Hult:
Cost232: speech recognition over the telephone line. 1845-1848 - Valérie Hazan, Bo Shi:
Individual variability in the perception of synthetic speech. 1849-1852 - Ye. K. Ludovic, V. V. Pilipenko, G. E. Tseitlin, L. I. Nagornaya, T. Terzian:
Speech recognition system and its application for blind PC users. 1853-1855
Spoken Language Dialogue Application
- Bradley Music, Claus Povlsen:
The NLP module of a spoken language dialogue system for Danish flight reservations. 1859-1862 - Davide Clementino, Lorenzo Fissore:
A man-machine dialogue system for speech access to train timetable information. 1863-1866 - Mats Blomberg, Rolf Carlson, Kjell Elenius, Björn Granström, Joakim Gustafson, Sheri Hunnicutt, Roger Lindell, Lennart Neovius:
An experimental dialogue system: Waxholm. 1867-1870 - Wieland Eckert, Thomas Kuhn, Heinrich Niemann, Stefan Rieck, A. Scheuer, Ernst Günter Schukat-Talamazzini:
A spoken dialogue system for German intercity train timetable inquiries. 1871-1874 - Kyriaki Labropoulou, Nikos Fakotakis:
A telephone banking system based on HMM keyword recognition. 1875-1878 - Ian Lewin, Martin J. Russell, David M. Carter, Sue Browning, Keith Ponting, Stephen G. Pulman:
A speech-based route enquiry system built from general-purpose components. 2047-2050 - Changwen Yang, Douglas D. O'Shaughnessy:
The INRS ATIS system and its n-best interface. 2051-2054 - Tsuneo Nitta, Yasuyuki Masai, Jun'ichi Iwasaki, Shin'ichi Tanaka, Bi Karwo, Hiroshi Matsu'ura:
A multimodal directory guidance system with an interactive mechanism. 2055-2058 - Hélène Bonneau-Maynard, Jean-Luc Gauvain, David Goodine, Lori Lamel, Joseph Polifroni, Stephanie Seneff:
A French version of the MIT-ATIS system: portability issues. 2059-2062 - James R. Glass, David Goodine, Michael S. Phillips, Shinsuke Sakai, Stephanie Seneff, Victor W. Zue:
A bilingual Voyager system. 2063-2066
Synthesis: Articulatory and Source Modelling
- Bernd J. Kröger:
A gestural approach for controlling an articulatory speech synthesizer. 1903-1906 - Paul Boersma:
An articulatory synthesizer for the simulation of consonants. 1907-1910 - Rolf Carlson, Lennart Nord:
Vowel dynamics in a text-to-speech system: some considerations. 1911-1914 - Ida Frehr, Marianne Elmlund, Henrik Nielsen:
Improving the spectral balance of digital speech synthesis applied to a female, synthetic voice. 1915-1918 - Yasushi Ishikawa, Takashi Ebihara, Kunio Nakajima:
A new model of excitation for text-to-speech synthesis. 1919-1922
Syntactical Constraints
- François Charpillet, Joseph Di Martino:
A level-building top-down parsing algorithm for context-free grammars in continuous speech recognition. 1947-1949 - Russell James Collingham, Roberto Garigliano:
Using anti-grammar and semantic categories for the recognition of spontaneous speech. 1951-1954 - Ryosuke Isotani, Shigeki Sagayama:
Speech recognition using particle n-grams and content-word n-grams. 1955-1958 - Pierre Dupont:
Dynamic use of syntactical knowledge in continuous speech recognition. 1959-1962
Pathological Voice Analysis
- Fabrice Plante, Jocelyne Borel, Christian Berger-Vachon, Isabelle Kauffmann:
Acoustic detection of laryngeal diseases in children. 1965-1968 - Dimitar D. Deliyski:
Acoustic model and evaluation of pathological voice production. 1969-1972 - Hideki Kasuya, Yasuo Endo, Sokol Saliu:
Novel acoustic measurements of jitter and shimmer characteristics from pathological voice. 1973-1976 - Guus de Krom:
An experiment involving the consistency and reliability of voice quality ratings for different types of speech fragments. 1977-1980 - Lennart Nord, Britta Hammarberg, Elisabet Lundstrom:
Laryngectomee speech in noise - voice effort and intelligibility. 1981-1984
Speech Analysis: Pitch and Prosody
- Berit Horvei, Georg Ottesen, Sverre Stensby:
Analysing prosody by means of a double tree structure. 1987-1990 - Geneviève Caelen-Haumont:
Prosody and discourse interpretation. 1991-1994 - George Epitropakis, Dimitris Tambakas, Nikos Fakotakis, George Kokkinakis:
Duration modelling for the Greek language. 1995-1998 - George Epitropakis, Nickolas Yiourgalis, George Kokkinakis:
Prosody control of TTS-systems based on linguistic analysis. 1999-2002 - Ralf Kompe, Andreas Kießling, Thomas Kuhn, Marion Mast, Heinrich Niemann, Elmar Nöth, K. Ott, Anton Batliner:
Prosody takes over: a prosodically guided dialog system. 2003-2006 - Philippe Langlais, Henri Meloni:
Integration of a prosodic component in an automatic speech recognition system. 2007-2010 - Merle Horne, Marcus Filipsson, Mats Ljungqvist, Anders Lindström:
Referent tracking in restricted texts using a lemmatized lexicon: implications for generation of intonation. 2011-2014 - Robert Bannert:
Perceptual significance of focus accent in spoken Swedish. 2015-2016 - Silvio Montrésor, Marc Baudry:
Pitch estimation of speech signal with the wavelet transform. 2017-2020 - JaeYeol Rheem, Myung Jin Bae, SouGuil Ann:
A spectral AMDF method for pitch extraction of noise-corrupted speech. 2021-2024 - Gao Yang, Henri Leich:
A reliable postprocessor for pitch determination algorithms. 2025-2028 - Georg F. Meyer, William A. Ainsworth:
Vowel pitch period extraction by models of neurones in the mammalian brain-stem. 2029-2032 - Jean Schoentgen, Raoul De Guchteneere:
Auto-regressive linear models of jitter. 2033-2036 - Jianing Wei, David Howells, Andrew Faulkner, Adrian Fourcin:
Larynx period detection methods in speech pattern hearing aids. 2037-2040 - Renée van Bezooijen:
Fundamental frequency of Dutch women: an evaluative study. 2041-2044
Applications
- Hiroya Fujisaki, Sumio Ohno, Hideki Nasuno, Keikichi Hirose:
Proposal and implementation of a spoken word recognizer using utterance normalization and multiple templates on a single VLSI chip. 2069-2072 - Robert Strong:
CASPER: a speech interface for the Macintosh. 2073-2076 - Claudia Ellermann, Stijn Van Even, Caroline B. Huang, Linda Manganaro:
Dragon systems' experiences in small to large vocabulary multi-lingual speech recognition applications. 2077-2080 - Denis Jouvet, M. N. Lokbani, Jean Monné:
Application of the n-best solutions algorithm to speaker-independent spelling recognition over the telephone. 2081-2084 - Jerome Braun, Baruch Mazor:
Language based approach to system control in speech recognition systems. 2085-2088
Synthesis: Systems, Syntax, Prosody
- Marcello Balestri, Stefano Lazzaretto, Pier Luigi Salza, Stefano Sandri:
The CSELT system for Italian text-to-speech synthesis. 2091-2094 - Mamoun Alissali, Gérard Bailly:
COMPOST: a client-server model for applications using text-to-speech systems. 2095-2098 - Christof Traber:
Syntactic processing and prosody control in the SVOX TTS system for German. 2099-2102 - Scott Prevost, Mark Steedman:
Using context to specify intonation in speech synthesis. 2103-2106 - Masanobu Abe, Hirokazu Sato:
Statistical analysis of the acoustic and prosodic characteristics of different speaking styles. 2107-2110
Large Vocabulary Systems
- Satoru Hayamizu, Katunobu Itou, Kazuyo Tanaka:
Detection of unknown words in large vocabulary speech recognition. 2113-2116 - Patrick Kenny, Paul Labute, Zhishun Li, Rene Hollan, Matthew Lennig, Douglas D. O'Shaughnessy:
A very fast method for scoring phonetic transcriptions. 2117-2120 - I. Lee Hetherington, Victor W. Zue:
New words: implications for continuous speech recognition. 2121-2124 - Volker Steinbiss, Hermann Ney, Reinhold Haeb-Umbach, B.-H. Tran, Ute Essen, Reinhard Kneser, Martin Oerder, Hans-Günter Meier, Xavier L. Aubert, Christian Dugast, Dieter Geller, W. Höllerbauer, H. Bartosik:
The Philips research system for large-vocabulary continuous-speech recognition. 2125-2128 - Yasuhiro Minami, Kiyohiro Shikano, Tomokazu Yamada, Tatsuo Matsuoka:
Very-large-vocabulary continuous speech recognition algorithm for telephone directory assistance. 2129-2132
Continuous Speech Recognition Systems
- Shoichi Matsunaga, Tomokazu Yamada, Kiyohiro Shikano:
Dictation system using inductively auto-generated syntax. 2135-2138 - Jean-Yves Antoine, Bertrand Caillaud, Jean Caelen:
Syntax-semantics cooperation in micro: a multi-agent speech understanding system. 2139-2142 - Mei-Yuh Hwang, Fil Alleva, Xuedong Huang:
Senones, multi-pass search, and unified stochastic modeling in Sphinx-II. 2143-2146 - Sunil Issar, Wayne H. Ward:
CMU's robust spoken language understanding system. 2147-2150 - Shinsuke Sakai, Michael S. Phillips:
J-SUMMIT: Japanese spontaneous speech recognition. 2151-2154
Human Factors
- Jerome R. Bellegarda, Dimitri Kanevsky:
A new interface paradigm: automatic recognition of integrated speech and handwriting information. 2157-2160 - Alexander I. Rudnicky:
Factors affecting choice of speech over keyboard and mouse in a simple data-retrieval task. 2161-2164 - Sara Basson, Dina Yashchin, Ashok Kalyanswamy, Kim E. A. Silverman:
Comparing synthesizers for name and address provision: field trial results. 2165-2168 - Kim E. A. Silverman, Ashok Kalyanswamy, Julie Silverman, Sara Basson, Dina Yashchin:
Synthesiser intelligibility in the context of a name-and-address information service. 2169-2172 - Ruth Marzi:
Enhancing user acceptance at the managerial workplace. 2173-2175
Complex Forms of Speech & Speaker Recognition
- Bernhard Suhm, Monika Woszczyna, Alex Waibel:
Detection and transcription of new words. 2179-2182 - Víctor M. Jiménez, Andrés Marzal, Enrique Vidal:
Efficient enumeration of sentence hypotheses in connected word recognition. 2183-2186 - Douglas D. O'Shaughnessy:
Locating disfluencies in spontaneous speech: an acoustical analysis. 2187-2190 - Roselyne Nguyen, Kamel Smaïli, Jean Paul Haton, Guy Perennou:
Integration of phonological knowledge in a continuous speech recognition system. 2191-2194 - Pierre Dumouchel, Douglas D. O'Shaughnessy:
Prosody and continuous speech recognition. 2195-2198 - Henning Bergmann, Hans-Hermann Hamer, Andreas Noll, Annedore Paeseler, Horst Tomaschewski:
Spoken-language processing for restricted domains: a sublanguage approach. 2199-2202 - Steve J. Young, Philip C. Woodland:
The use of state tying in continuous speech recognition. 2203-2206 - Philip C. Woodland, Steve J. Young:
The HTK tied-state continuous speech recogniser. 2207-2210 - Laurence Devillers, Christian Dugast:
Combination of training criteria to improve continuous speech recognition. 2211-2214 - Igor Zlokarnik:
Experiments with an articulatory speech recognizer. 2215-2218 - Giuliano Antoniol, Mauro Cettolo, Marcello Federico:
Techniques for robust recognition in restricted domains. 2219-2221 - Feriel Mouria, Yifan Gong, Jean Paul Haton:
Use of explicit context-dependent phonemic model in continuous speech recognition. 2223-2226 - Yifan Gong:
Base transformation for environment adaptation in continuous speech recognition. 2227-2230 - Baruch Mazor, Ming-Whei Feng:
Improved a-posteriori processing for keyword spotting. 2231-2234 - Javier Ortega-Garcia, José Manuel Páez-Borrallo, Luis A. Hernández Gómez:
Single and multi-channel speech enhancement for a word spotting system. 2235-2238 - Hermann Ney, Ute Essen:
Estimating 'small' probabilities by leaving-one-out. 2239-2242 - Sheryl R. Young, Wayne H. Ward:
Semantic and pragmatically based re-recognition of spontaneous speech. 2243-2246 - Bernd Hildebrandt, Gernot A. Fink, Franz Kummert, Gerhard Sagerer:
Modeling of time constituents for speech understanding. 2247-2250 - Václav Matousek:
Phonetic segmentation method for the continuous Czech speech recognition. 2251-2254 - Alexander G. Hauptmann, Lin Lawrence Chase, Jack Mostow:
Speech recognition applied to reading assistance for children: a baseline language model. 2255-2258 - David Weenink, Louis C. W. Pols:
Modelling speaker normalization by adapting the BIAS in a neural net. 2259-2262 - Thierry Artières, Patrick Gallinari:
Neural models for extracting speaker characteristics in speech modelization systems. 2263-2266 - Joachim Zinke:
Influence of pattern compression on speaker verification. 2267-2270 - Florian Schiel:
A comparative study of speaker adaptation under realistic conditions. 2271-2274 - D. A. Irvine, F. J. Owens:
A comparison of speaker recognition techniques for telephone speech. 2275-2278 - Johan de Veth, Guido Gallopyn, Hervé Bourlard:
Speaker verification over telephone channels based on concatenated phonemic hidden Markov models. 2279-2282 - Stephen Cox:
Speaker adaptation using a predictive model. 2283-2286 - Z. P. Sun, John S. Mason:
Combining features via LDA in speaker recognition. 2287-2290 - J. M. Elvira, Rolando A. Carrasco:
Neural networks for speech and speaker recognition through a digital telephone exchange. 2291-2294 - Mohammad Mehdi Homayounpour, Jean-Philippe Goldman, Gérard Chollet, Jacqueline Vaissière:
Performance comparison of machine and human speaker verification. 2295-2298 - M. I. Hannah, Andrew T. Sapeluk, Robert I. Damper, I. M. Roger:
The effect of utterance length and content on speaker-verifier performance. 2299-2302 - Antanas Lipeika, Joana Lipeikiene:
The use of pseudostationary segments for speaker identification. 2303-2306 - A. Federico, Andrea Paoloni:
Bayesian decision in the speaker recognition by acoustic parametrization of voice samples over telephone lines. 2307-2310