Matt Huenerfauth
Person information
- affiliation: Rochester Institute of Technology, NY, USA
- affiliation (2006 - 2014): City University of New York, Queens College, Flushing, NY, USA
- affiliation (PhD 2006): University of Pennsylvania, Philadelphia, PA, USA
2020 – today
2024
- [j31] Elahe Vahdani, Longlong Jing, Matt Huenerfauth, Yingli Tian: Multi-Modal Multi-Channel American Sign Language Recognition. Int. J. Artif. Intell. Robotics Res. 1(1): 2450001:1-2450001:25 (2024)
- [j30] Saad Hassan, Caluã de Lacerda Pataca, Akhter Al Amin, Laleh Nourian, Diego Navarro, Sooyeon Lee, Alexis Gordon, Matthew Watkins, Garreth W. Tigwell, Matt Huenerfauth: Exploring the Benefits and Applications of Video-Span Selection and Search for Real-Time Support in Sign Language Video Comprehension among ASL Learners. ACM Trans. Access. Comput. 17(3): 14:1-14:35 (2024)
- [c85] Oliver Alonzo, Sooyeon Lee, Akhter Al Amin, Mounica Maddela, Wei Xu, Matt Huenerfauth: Design and Evaluation of an Automatic Text Simplification Prototype with Deaf and Hard-of-hearing Readers. ASSETS 2024: 40:1-40:18
- [c84] Caluã de Lacerda Pataca, Saad Hassan, Nathan Tinker, Roshan Lalintha Peiris, Matt Huenerfauth: Caption Royale: Exploring the Design Space of Affective Captions from the Perspective of Deaf and Hard-of-Hearing Individuals. CHI 2024: 899:1-899:17

2023
- [c83] Caluã de Lacerda Pataca, Matthew Watkins, Roshan L. Peiris, Sooyeon Lee, Matt Huenerfauth: Visualization of Speech Prosody and Emotion in Captions: Accessibility for Deaf and Hard-of-Hearing Users. CHI 2023: 831:1-831:15
- [c82] Akhter Al Amin, Joseph Mendis, Raja S. Kushalnagar, Christian Vogler, Matt Huenerfauth: Who is speaking: Unpacking In-text Speaker Identification Preference of Viewers who are Deaf and Hard of Hearing while Watching Live Captioned Television Program. W4A 2023: 44-53
- [c81] Akhter Al Amin, Saad Hassan, Sooyeon Lee, Matt Huenerfauth: Understanding How Deaf and Hard of Hearing Viewers Visually Explore Captioned Live TV News. W4A 2023: 54-65
- [c80] Akhter Al Amin, Saad Hassan, Matt Huenerfauth, Cecilia Ovesdotter Alm: Modeling Word Importance in Conversational Transcripts: Toward improved live captioning for Deaf and hard of hearing viewers. W4A 2023: 79-83

2022
- [j29] Oliver Alonzo, Lisa B. Elliot, Becca Dingman, Sooyeon Lee, Akhter Al Amin, Matt Huenerfauth: Reading-Assistance Tools Among Deaf and Hard-of-Hearing Computing Professionals in the U.S.: Their Reading Experiences, Interests and Perceptions of Social Accessibility. ACM Trans. Access. Comput. 15(2): 16:1-16:31 (2022)
- [c79] Saad Hassan, Akhter Al Amin, Caluã de Lacerda Pataca, Diego Navarro, Alexis Gordon, Sooyeon Lee, Matt Huenerfauth: Support in the Moment: Benefits and use of video-span selection and search for sign-language video comprehension among ASL learners. ASSETS 2022: 29:1-29:14
- [c78] Saad Hassan, Sooyeon Lee, Dimitris N. Metaxas, Carol Neidle, Matt Huenerfauth: Understanding ASL Learners' Preferences for a Sign Language Recording and Automatic Feedback System to Support Self-Study. ASSETS 2022: 85:1-85:5
- [c77] Saad Hassan, Akhter Al Amin, Alexis Gordon, Sooyeon Lee, Matt Huenerfauth: Design and Evaluation of Hybrid Search for American Sign Language to English Dictionaries: Making the Most of Imperfect Sign Recognition. CHI 2022: 195:1-195:13
- [c76] Oliver Alonzo, Jessica Trussell, Matthew Watkins, Sooyeon Lee, Matt Huenerfauth: Methods for Evaluating the Fluency of Automatically Simplified Texts with Deaf and Hard-of-Hearing Adults at Various Literacy Levels. CHI 2022: 267:1-267:10
- [c75] Abraham Glasser, Matthew Watkins, Kira Hart, Sooyeon Lee, Matt Huenerfauth: Analyzing Deaf and Hard-of-Hearing Users' Behavior, Usage, and Interaction with a Personal Assistant Device that Understands Sign-Language Input. CHI 2022: 306:1-306:12
- [c74] Akhter Al Amin, Saad Hassan, Sooyeon Lee, Matt Huenerfauth: Watch It, Don't Imagine It: Creating a Better Caption-Occlusion Metric by Collecting More Ecologically Valid Judgments from DHH Viewers. CHI 2022: 459:1-459:14
- [c73] Matthew Seita, Sooyeon Lee, Sarah Andrew, Kristen Shinohara, Matt Huenerfauth: Remotely Co-Designing Features for Communication Applications using Automatic Captioning with Deaf and Hearing Pairs. CHI 2022: 460:1-460:13
- [c72] Akhter Al Amin, Joseph Mendis, Raja S. Kushalnagar, Christian Vogler, Sooyeon Lee, Matt Huenerfauth: Deaf and Hard of Hearing Viewers' Preference for Speaker Identifier Type in Live TV Programming. HCI (7) 2022: 200-211
- [c71] Akhter Al Amin, Saad Hassan, Cecilia O. Alm, Matt Huenerfauth: Using BERT Embeddings to Model Word Importance in Conversational Transcripts for Deaf and Hard of Hearing Users. LT-EDI 2022: 35-40
- [i10] Akhter Al Amin, Saad Hassan, Cecilia O. Alm, Matt Huenerfauth: Using BERT Embeddings to Model Word Importance in Conversational Transcripts for Deaf and Hard of Hearing Users. CoRR abs/2206.12368 (2022)
- [i9] Saad Hassan, Matthew Seita, Larwan Berke, Yingli Tian, Elaine Gale, Sooyeon Lee, Matt Huenerfauth: ASL-Homework-RGBD Dataset: An annotated dataset of 45 fluent and non-fluent signers performing American Sign Language homeworks. CoRR abs/2207.04021 (2022)

2021
- [j28] Danielle Bragg, Naomi Caselli, Julie A. Hochgesang, Matt Huenerfauth, Leah Katz-Hernandez, Oscar Koller, Raja S. Kushalnagar, Christian Vogler, Richard E. Ladner: The FATE Landscape of Sign Language AI Datasets: An Interdisciplinary Perspective. ACM Trans. Access. Comput. 14(2): 7:1-7:45 (2021)
- [j27] Saad Hassan, Oliver Alonzo, Abraham Glasser, Matt Huenerfauth: Effect of Sign-recognition Performance on the Usability of Sign-language Dictionary Search. ACM Trans. Access. Comput. 14(4): 18:1-18:33 (2021)
- [j26] Sushant Kafle, Becca Dingman, Matt Huenerfauth: Deaf and Hard-of-hearing Users Evaluating Designs for Highlighting Key Words in Educational Lecture Videos. ACM Trans. Access. Comput. 14(4): 20:1-20:24 (2021)
- [c70] Sooyeon Lee, Abraham Glasser, Becca Dingman, Zhaoyang Xia, Dimitris N. Metaxas, Carol Neidle, Matt Huenerfauth: American Sign Language Video Anonymization to Support Online Participation of Deaf and Hard of Hearing Users. ASSETS 2021: 22:1-22:13
- [c69] Sedeeq Al-khazraji, Becca Dingman, Sooyeon Lee, Matt Huenerfauth: At a Different Pace: Evaluating Whether Users Prefer Timing Parameters in American Sign Language Animations to Differ from Human Signers' Timing. ASSETS 2021: 40:1-40:12
- [c68] Oliver Alonzo, Jessica Trussell, Becca Dingman, Matt Huenerfauth: Comparison of Methods for Evaluating Complexity of Simplified Texts among Deaf and Hard-of-Hearing Adults at Different Literacy Levels. CHI 2021: 279:1-279:12
- [c67] Vaishnavi Mande, Abraham Glasser, Becca Dingman, Matt Huenerfauth: Deaf Users' Preferences Among Wake-Up Approaches during Sign-Language Interaction with Personal Assistant Devices. CHI Extended Abstracts 2021: 370:1-370:6
- [c66] Saad Hassan, Matt Huenerfauth, Cecilia Ovesdotter Alm: Unpacking the Interdependent Systems of Discrimination: Ableist Bias in NLP Systems through an Intersectional Lens. EMNLP (Findings) 2021: 3116-3123
- [c65] Akhter Al Amin, Abraham Glasser, Raja S. Kushalnagar, Christian Vogler, Matt Huenerfauth: Preferences of Deaf or Hard of Hearing Users for Live-TV Caption Appearance. HCI (8) 2021: 189-201
- [c64] Akhter Al Amin, Saad Hassan, Matt Huenerfauth: Effect of Occlusion on Deaf and Hard of Hearing Users' Perception of Captioned Video Quality. HCI (8) 2021: 202-220
- [c63] Matt Huenerfauth: Human-Computer Interaction and Automatic Text Simplification: Understanding the Perspective of Deaf and Hard of Hearing Users (abstract). CTTS@SEPLN 2021
- [c62] Abraham Glasser, Vaishnavi Mande, Matt Huenerfauth: Understanding deaf and hard-of-hearing users' interest in sign-language interaction with personal-assistant devices. W4A 2021: 24:1-24:11
- [c61] Matthew Seita, Sarah Andrew, Matt Huenerfauth: Deaf and hard-of-hearing users' preferences for hearing speakers' behavior during technology-mediated in-person and remote conversations. W4A 2021: 25:1-25:12
- [c60] Akhter Al Amin, Saad Hassan, Matt Huenerfauth: Caption-occlusion severity judgments across live-television genres from deaf and hard-of-hearing viewers. W4A 2021: 26:1-26:12
- [i8] Saad Hassan, Matt Huenerfauth, Cecilia Ovesdotter Alm: Unpacking the Interdependent Systems of Discrimination: Ableist Bias in NLP Systems through an Intersectional Lens. CoRR abs/2110.00521 (2021)

2020
- [j25] Paula Conn, Taylor Gotfrid, Qiwen Zhao, Rachel Celestine, Vaishnavi Mande, Kristen Shinohara, Stephanie Ludi, Matt Huenerfauth: Understanding the Motivations of Final-year Computing Undergraduates for Considering Accessibility. ACM Trans. Comput. Educ. 20(2): 15:1-15:22 (2020)
- [j24] Sushant Kafle, Matt Huenerfauth: Usability evaluation of captions for people who are deaf or hard of hearing. ACM SIGACCESS Access. Comput. 122: 1 (2020)
- [j23] Sushant Kafle, Abraham Glasser, Sedeeq Al-khazraji, Larwan Berke, Matthew Seita, Matt Huenerfauth: Artificial intelligence fairness in the context of accessibility research on intelligent systems for people who are deaf or hard of hearing. ACM SIGACCESS Access. Comput. 125: 4 (2020)
- [c59] Qiwen Zhao, Vaishnavi Mande, Paula Conn, Sedeeq Al-khazraji, Kristen Shinohara, Stephanie Ludi, Matt Huenerfauth: Comparison of Methods for Teaching Accessibility in University Computing Courses. ASSETS 2020: 6:1-6:12
- [c58] Oliver Alonzo, Lisa B. Elliot, Becca Dingman, Matt Huenerfauth: Reading Experiences and Interest in Reading-Assistance Tools Among Deaf and Hard-of-Hearing Computing Professionals. ASSETS 2020: 45:1-45:13
- [c57] Sedeeq Al-khazraji, Becca Dingman, Matt Huenerfauth: Empirical Investigation of Users' Preferred Timing Parameters for American Sign Language Animations. CHI Extended Abstracts 2020: 1-7
- [c56] Oliver Alonzo, Matthew Seita, Abraham Glasser, Matt Huenerfauth: Automatic Text Simplification Tools for Deaf and Hard of Hearing Adults: Benefits of Lexical Simplification and Providing Users with Autonomy. CHI 2020: 1-13
- [c55] Danielle Bragg, Meredith Ringel Morris, Christian Vogler, Raja S. Kushalnagar, Matt Huenerfauth, Hernisa Kacorri: Sign Language Interfaces: Discussing the Field's Biggest Challenges. CHI Extended Abstracts 2020: 1-5
- [c54] Matthew Seita, Matt Huenerfauth: Deaf Individuals' Views on Speaking Behaviors of Hearing Peers when Using an Automatic Captioning App. CHI Extended Abstracts 2020: 1-8
- [c53] Abraham Glasser, Vaishnavi Mande, Matt Huenerfauth: Accessibility for Deaf and Hard of Hearing Users: Sign Language Conversational User Interfaces. CUI 2020: 55:1-55:3
- [c52] Peter Yeung, Oliver Alonzo, Matt Huenerfauth: Interest and Requirements for Sound-Awareness Technologies Among Deaf and Hard-of-Hearing Users of Assistive Listening Devices. HCI (9) 2020: 147-158
- [c51] Elahe Vahdani, Longlong Jing, Yingli Tian, Matt Huenerfauth: Recognizing American Sign Language Nonmanual Signal Grammar Errors in Continuous Videos. ICPR 2020: 6043-6050
- [c50] Larwan Berke, Matthew Seita, Matt Huenerfauth: Deaf and hard-of-hearing users' prioritization of genres of online video content requiring accurate captions. W4A 2020: 3:1-3:12
- [i7] Elahe Vahdani, Longlong Jing, Yingli Tian, Matt Huenerfauth: Recognizing American Sign Language Nonmanual Signal Grammar Errors in Continuous Videos. CoRR abs/2005.00253 (2020)
2010 – 2019
2019
- [j22] Larwan Berke, Matt Huenerfauth, Kasmira Patel: Design and Psychometric Evaluation of American Sign Language Translations of Usability Questionnaires. ACM Trans. Access. Comput. 12(2): 6:1-6:43 (2019)
- [j21] Sushant Kafle, Matt Huenerfauth: Predicting the Understandability of Imperfect English Captions for People Who Are Deaf or Hard of Hearing. ACM Trans. Access. Comput. 12(2): 7:1-7:32 (2019)
- [j20] Matt Huenerfauth, Kathleen F. McCoy: Editorial: A Message from the Outgoing Editors-in-Chief. ACM Trans. Access. Comput. 12(3): 10:1-10:2 (2019)
- [c49] Danielle Bragg, Oscar Koller, Mary Bellard, Larwan Berke, Patrick Boudreault, Annelies Braffort, Naomi Caselli, Matt Huenerfauth, Hernisa Kacorri, Tessa Verhoef, Christian Vogler, Meredith Ringel Morris: Sign Language Recognition, Generation, and Translation: An Interdisciplinary Perspective. ASSETS 2019: 16-31
- [c48] Sushant Kafle, Peter Yeung, Matt Huenerfauth: Evaluating the Benefit of Highlighting Key Words in Captions for People who are Deaf or Hard of Hearing. ASSETS 2019: 43-55
- [c47] Oliver Alonzo, Abraham Glasser, Matt Huenerfauth: Effect of Automatic Sign Recognition Performance on the Usability of Video-Based Search Interfaces for Sign Language Dictionaries. ASSETS 2019: 56-67
- [c46] Larwan Berke, Khaled Albusays, Matthew Seita, Matt Huenerfauth: Preferred Appearance of Captions Generated by Automatic Speech Recognition for Deaf and Hard-of-Hearing Viewers. CHI Extended Abstracts 2019
- [c45] Abhishek Kannekanti, Sedeeq Al-khazraji, Matt Huenerfauth: Design and Evaluation of a User-Interface for Authoring Sentences of American Sign Language Animation. HCI (7) 2019: 258-267
- [c44] Utsav Shah, Matthew Seita, Matt Huenerfauth: Evaluation of User-Interface Designs for Educational Feedback Software for ASL Students. HCI (7) 2019: 507-525
- [c43] Sushant Kafle, Cecilia Ovesdotter Alm, Matt Huenerfauth: Fusion Strategy for Prosodic and Lexical Representations of Word Importance. INTERSPEECH 2019: 1313-1317
- [c42] Sushant Kafle, Cissi Ovesdotter Alm, Matt Huenerfauth: Modeling Acoustic-Prosodic Cues for Word Importance Prediction in Spoken Dialogues. SLPAT@NAACL-HLT 2019: 9-16
- [i6] Sushant Kafle, Cecilia O. Alm, Matt Huenerfauth: Modeling Acoustic-Prosodic Cues for Word Importance Prediction in Spoken Dialogues. CoRR abs/1903.12238 (2019)
- [i5] Longlong Jing, Elahe Vahdani, Matt Huenerfauth, Yingli Tian: Recognizing American Sign Language Manual Signs from RGB-D Videos. CoRR abs/1906.02851 (2019)
- [i4] Danielle Bragg, Oscar Koller, Mary Bellard, Larwan Berke, Patrick Boudreault, Annelies Braffort, Naomi Caselli, Matt Huenerfauth, Hernisa Kacorri, Tessa Verhoef, Christian Vogler, Meredith Ringel Morris: Sign Language Recognition, Generation, and Translation: An Interdisciplinary Perspective. CoRR abs/1908.08597 (2019)
- [i3] Sushant Kafle, Abraham Glasser, Sedeeq Al-khazraji, Larwan Berke, Matthew Seita, Matt Huenerfauth: Artificial Intelligence Fairness in the Context of Accessibility Research on Intelligent Systems for People who are Deaf or Hard of Hearing. CoRR abs/1908.10414 (2019)

2018
- [j19] Matt Huenerfauth, Kathleen F. McCoy: Introduction to the ASSETS'16 Special Issue. ACM Trans. Access. Comput. 11(1): 1:1 (2018)
- [j18] Matt Huenerfauth, Kathleen F. McCoy: Introduction to this Issue. ACM Trans. Access. Comput. 11(2): 7:1 (2018)
- [j17] Matt Huenerfauth, Kathleen F. McCoy: Introduction to the Special Issue on Fabrication Technologies and Do-It-Yourself Accessibility. ACM Trans. Access. Comput. 11(3): 12:1 (2018)
- [c41] Matthew Seita, Khaled Albusays, Sushant Kafle, Michael Stinson, Matt Huenerfauth: Behavioral Changes in Speakers who are Automatically Captioned in Meetings with Deaf or Hard-of-Hearing Peers. ASSETS 2018: 68-80
- [c40] Sedeeq Al-khazraji, Larwan Berke, Sushant Kafle, Peter Yeung, Matt Huenerfauth: Modeling the Speed and Timing of American Sign Language to Generate Realistic Animations. ASSETS 2018: 259-270
- [c39] Larwan Berke, Sushant Kafle, Matt Huenerfauth: Methods for Evaluation of Imperfect Captioning Tools by Deaf or Hard-of-Hearing Users at Different Reading Literacy Levels. CHI 2018: 91
- [c38] Yuancheng Ye, Yingli Tian, Matt Huenerfauth, Jingya Liu: Recognizing American Sign Language Gestures From Within Continuous Videos. CVPR Workshops 2018: 2064-2073
- [c37] Dhananjai Hariharan, Sedeeq Al-khazraji, Matt Huenerfauth: Evaluation of an English Word Look-Up Tool for Web-Browsing with Sign Language Video for Deaf Readers. HCI (7) 2018: 205-215
- [c36] Sushant Kafle, Matt Huenerfauth: A Corpus for Modeling Word Importance in Spoken Dialogue Transcripts. LREC 2018
- [c35] Stephanie Ludi, Matt Huenerfauth, Vicki L. Hanson, Nidhi Rajendra Palan, Paula Garcia: Teaching Inclusive Thinking to Undergraduate Students in Computing Programs. SIGCSE 2018: 717-722
- [i2] Sushant Kafle, Matt Huenerfauth: A Corpus for Modeling Word Importance in Spoken Dialogue Transcripts. CoRR abs/1801.09746 (2018)

2017
- [j16] Jinjuan Heidi Feng, Matt Huenerfauth: Overview of the ASSETS 2016 conference. ACM SIGACCESS Access. Comput. 117: 3-8 (2017)
- [j15] Matt Huenerfauth, Elaine Gale, Brian Penly, Sree Pillutla, Mackenzie Willard, Dhananjai Hariharan: Evaluation of Language Feedback Methods for Student Videos of American Sign Language. ACM Trans. Access. Comput. 10(1): 2:1-2:30 (2017)
- [j14] Hernisa Kacorri, Matt Huenerfauth, Sarah Ebling, Kasmira Patel, Kellie Menzies, Mackenzie Willard: Regression Analysis of Demographic and Technology-Experience Factors Influencing Acceptance of Sign Language Animation. ACM Trans. Access. Comput. 10(1): 3:1-3:33 (2017)
- [c34] Khaled Albusays, Stephanie Ludi, Matt Huenerfauth: Interviews and Observation of Blind Software Developers at Work to Understand Code Navigation Challenges. ASSETS 2017: 91-100
- [c33] Larwan Berke, Christopher Caulfield, Matt Huenerfauth: Deaf and Hard-of-Hearing Perspectives on Imperfect Automatic Speech Recognition for Captioning One-on-One Meetings. ASSETS 2017: 155-164
- [c32] Sushant Kafle, Matt Huenerfauth: Evaluating the Usability of Automatically Generated Captions for People who are Deaf or Hard of Hearing. ASSETS 2017: 165-174
- [c31] Matt Huenerfauth, Kasmira Patel, Larwan Berke: Design and Psychometric Evaluation of an American Sign Language Translation of the System Usability Scale. ASSETS 2017: 175-184
- [c30] Nidhi Rajendra Palan, Vicki L. Hanson, Matt Huenerfauth, Stephanie Ludi: Teaching Inclusive Thinking in Undergraduate Computing. ASSETS 2017: 399-400
- [i1] Sushant Kafle, Matt Huenerfauth: Evaluating the Usability of Automatically Generated Captions for People who are Deaf or Hard of Hearing. CoRR abs/1712.02033 (2017)

2016
- [c29] Hernisa Kacorri, Matt Huenerfauth: Continuous Profile Models in ASL Syntactic Facial Expression Synthesis. ACL (1) 2016
- [c28] Lisa B. Elliot, Michael Stinson, James R. Mallory, Donna Easton, Matt Huenerfauth: Deaf and Hard of Hearing Individuals' Perceptions of Communication with Hearing Colleagues in Small Groups. ASSETS 2016: 271-272
- [c27] Chenyang Zhang, Yingli Tian, Matt Huenerfauth: Multi-modality American Sign Language recognition. ICIP 2016: 2881-2885
- [e2] Jinjuan Heidi Feng, Matt Huenerfauth: Proceedings of the 18th International ACM SIGACCESS Conference on Computers and Accessibility, ASSETS 2016, Reno, NV, USA, October 23-26, 2016. ACM 2016, ISBN 978-1-4503-4124-0 [contents]

2015
- [c26] Matt Huenerfauth, Elaine Gale, Brian Penly, Mackenzie Willard, Dhananjai Hariharan: Comparing Methods of Displaying Language Feedback for Student Videos of American Sign Language. ASSETS 2015: 139-146
- [c25] Hernisa Kacorri, Matt Huenerfauth, Sarah Ebling, Kasmira Patel, Mackenzie Willard: Demographic and Experiential Factors Influencing Acceptance of Sign Language Animation by Deaf Users. ASSETS 2015: 147-154
- [c24] Hernisa Kacorri, Matt Huenerfauth: Comparison of Finite-Repertoire and Data-Driven Facial Expressions for Sign Language Avatars. HCI (8) 2015: 393-403
- [c23] Sarah Ebling, Matt Huenerfauth: Bridging the gap between sign language machine translation and sign language animation using sequence classification. SLPAT@Interspeech 2015: 2-9
- [c22] Matt Huenerfauth, Pengfei Lu, Hernisa Kacorri: Synthesizing and Evaluating Animations of American Sign Language Verbs Modeled from Motion-Capture Data. SLPAT@Interspeech 2015: 22-28
- [c21] Hernisa Kacorri, Matt Huenerfauth: Evaluating a Dynamic Time Warping Based Scoring Algorithm for Facial Expressions in ASL Animations. SLPAT@Interspeech 2015: 29-35

2014
- [j13] Pengfei Lu, Matt Huenerfauth: Collecting and evaluating the CUNY ASL corpus for research on American Sign Language animation. Comput. Speech Lang. 28(3): 812-831 (2014)
- [j12] Matt Huenerfauth, Kathy McCoy: Greetings from the New Editors-in-Chief. ACM Trans. Access. Comput. 5(3): 6:1 (2014)
- [c20] Hernisa Kacorri, Matt Huenerfauth: Implementation and evaluation of animation controls sufficient for conveying ASL facial expressions. ASSETS 2014: 261-262
- [c19] Hernisa Kacorri, Allen Harper, Matt Huenerfauth: Measuring the Perception of Facial Expressions in American Sign Language Animations with Eye Tracking. HCI (7) 2014: 553-563

2013
- [j11] Hernisa Kacorri, Pengfei Lu, Matt Huenerfauth: Effect of Displaying Human Videos During an Evaluation Study of American Sign Language Animation. ACM Trans. Access. Comput. 5(2): 4:1-4:31 (2013)
- [c18] Hernisa Kacorri, Allen Harper, Matt Huenerfauth: Comparing native signers' perception of American Sign Language animations and videos via eye tracking. ASSETS 2013: 9:1-9:8
- [c17] Hernisa Kacorri, Pengfei Lu, Matt Huenerfauth: Evaluating Facial Expressions in American Sign Language Animations for Accessible Online Information. HCI (6) 2013: 510-519

2012
- [j10] Matt Huenerfauth, Pengfei Lu: Effect of spatial reference and verb inflection on the usability of sign language animations. Univers. Access Inf. Soc. 11(2): 169-184 (2012)
- [c16] Pengfei Lu, Matt Huenerfauth: Learning a Vector-Based Model of American Sign Language Inflecting Verbs from Motion-Capture Data. SLPAT@HLT-NAACL 2012: 66-74
- [e1] Matt Huenerfauth, Sri Kurniawan: The 14th International ACM SIGACCESS Conference on Computers and Accessibility, ASSETS '12, Boulder, CO, USA, October 22-24, 2012. ACM 2012, ISBN 978-1-4503-1321-6 [contents]

2011
- [j9] Pengfei Lu, Matt Huenerfauth: Data-Driven Synthesis of Spatially Inflected Verbs for American Sign Language Animation. ACM Trans. Access. Comput. 4(1): 4:1-4:29 (2011)
- [c15] Matt Huenerfauth, Pengfei Lu, Andrew Rosenberg: Evaluating importance of facial expression in american sign language and pidgin signed english animations. ASSETS 2011: 99-106
- [c14] Pengfei Lu, Matt Huenerfauth: Collecting an American Sign Language Corpus through the Participation of Native Signers. HCI (8) 2011: 81-90

2010
- [j8] Matt Huenerfauth: Participation of high school and undergraduate students who are deaf in research on American Sign Language animation. ACM SIGACCESS Access. Comput. 97: 14-24 (2010)
- [j7] Matt Huenerfauth, Pengfei Lu: Accurate and Accessible Motion-Capture Glove Calibration for Sign Language Data Collection. ACM Trans. Access. Comput. 3(1): 2:1-2:32 (2010)
- [c13] Matt Huenerfauth, Pengfei Lu: Modeling and synthesizing spatially inflected verbs for American sign language animations. ASSETS 2010: 99-106
- [c12] Martin Jansche, Lijun Feng, Matt Huenerfauth: Reading difficulty in adults with intellectual disabilities: analysis with a hierarchical latent trait model. ASSETS 2010: 277-278
- [c11] Lijun Feng, Martin Jansche, Matt Huenerfauth, Noemie Elhadad: A Comparison of Features for Automatic Readability Assessment. COLING (Posters) 2010: 276-284
- [c10] Pengfei Lu, Matt Huenerfauth: Collecting a Motion-Capture Corpus of American Sign Language for Data-Driven Generation Research. SLPAT@NAACL 2010: 89-97
2000 – 2009
2009
- [j6] Matt Huenerfauth: A Linguistically Motivated Model for Speed and Pausing in Animations of American Sign Language. ACM Trans. Access. Comput. 2(2): 9:1-9:31 (2009)
- [c9] Matt Huenerfauth, Lijun Feng, Noemie Elhadad: Comparing evaluation techniques for text readability software for adults with intellectual disabilities. ASSETS 2009: 3-10
- [c8] Pengfei Lu, Matt Huenerfauth: Accessible motion-capture glove calibration protocol for recording sign language data from deaf subjects. ASSETS 2009: 83-90
- [c7] Lijun Feng, Noemie Elhadad, Matt Huenerfauth: Cognitively Motivated Features for Readability Assessment. EACL 2009: 229-237
- [c6] Matt Huenerfauth: Improving Spatial Reference in American Sign Language Animation through Data Collection from Native ASL Signers. HCI (7) 2009: 530-539
- [r1] Matt Huenerfauth, Vicki L. Hanson: Sign Language in the Interface. The Universal Access Handbook 2009: 1-18

2008
- [j5] Matt Huenerfauth: Spatial, Temporal, and Semantic Models for American Sign Language Generation: Implications for Gesture Generation. Int. J. Semantic Comput. 2(1): 21-45 (2008)
- [j4] Matt Huenerfauth, Liming Zhao, Erdan Gu, Jan M. Allbeck: Evaluation of American Sign Language Generation by Native ASL Signers. ACM Trans. Access. Comput. 1(1): 3:1-3:27 (2008)
- [j3] Matt Huenerfauth: Generating American Sign Language animation: overcoming misconceptions and technical challenges. Univers. Access Inf. Soc. 6(4): 419-434 (2008)
- [c5] Matt Huenerfauth: Evaluation of a psycholinguistically motivated timing model for animations of american sign language. ASSETS 2008: 129-136

2007
- [c4] Matt Huenerfauth, Liming Zhao, Erdan Gu, Jan M. Allbeck: Evaluating American Sign Language generation through the participation of native ASL signers. ASSETS 2007: 211-218

2006
- [j2] Matt Huenerfauth: Representing coordination and non-coordination in American Sign Language animations. Behav. Inf. Technol. 25(4): 285-295 (2006)

2005
- [j1] Matt Huenerfauth: American Sign Language natural language generation and machine translation. ACM SIGACCESS Access. Comput. 81: 12-15 (2005)
- [c3] Matt Huenerfauth: American Sign Language Generation: Multimodal NLG with Multiple Linguistic Channels. ACL 2005: 37-42
- [c2] Matt Huenerfauth: Representing coordination and non-coordination in an american sign language animation. ASSETS 2005: 44-51

2004
- [c1] Matt Huenerfauth: A Multi-Path Architecture for Machine Translation of English Text into American Sign Language Animation. HLT-NAACL (Student Research Workshop) 2004
last updated on 2024-11-08 20:33 CET by the dblp team
all metadata released as open data under CC0 1.0 license