24th IUI 2019: Marina del Ray, CA, USA
- Wai-Tat Fu, Shimei Pan, Oliver Brdiczka, Polo Chau, Gaelle Calvary:
Proceedings of the 24th International Conference on Intelligent User Interfaces, IUI 2019, Marina del Ray, CA, USA, March 17-20, 2019. ACM 2019, ISBN 978-1-4503-6272-6
Keynotes
- Michelle X. Zhou:
Getting virtually personal: making responsible and empathetic "her" for everyone.
- David Gunning:
DARPA's explainable artificial intelligence (XAI) program.
- Ashwin Ram:
Innovating with AI.
Recommender systems
- Lukas Eberhard, Simon Walk, Lisa Posch, Denis Helic:
Evaluating narrative-driven movie recommendations on Reddit. 1-11
- Shudan Zhong, Hong Xu:
Intelligently recommending key bindings on physical keyboards with demonstrations in Emacs. 12-17
- Hanna Schäfer, Martijn C. Willemsen:
Rasch-based tailored goals for nutrition assistance systems. 18-29
Natural language and speech
- Viktor Schlegel, Benedikt Lang, Siegfried Handschuh, André Freitas:
Vajra: step-by-step programming with natural language. 30-39
- Vidya Setlur, Melanie Tory, Alex Djalali:
Inferencing underspecified natural language utterances in visual analysis. 40-51
- Shunichi Tahara, Kazushi Ikeda, Keiichiro Hoashi:
Empathic dialogue system based on emotions extracted from tweets. 52-56
IUI for wearable and mobile
- Granit Luzhnica, Eduardo E. Veas:
Background perception and comprehension of symbols conveyed through vibrotactile wearable displays. 57-64
- Yen-Chia Hsu, Jennifer L. Cross, Paul Dille, Michael Tasota, Beatrice Dias, Randy Sargent, Ting-Hao (Kenneth) Huang, Illah R. Nourbakhsh:
Smell Pittsburgh: community-empowered mobile smell reporting system. 65-79
- Keum San Chun, Ashley B. Sanders, Rebecca Adaimi, Necole Streeper, David E. Conroy, Edison Thomaz:
Towards a generalizable method for detecting fluid intake with wrist-mounted sensors and adaptive segmentation. 80-85
- Qian Zhang, Dong Wang, Run Zhao, Yufeng Deng, Yinggang Yu:
ShopEye: fusing RFID and smartwatch for multi-relation excavation in physical stores. 86-95
Evaluation of IUI
- Malin Eiband, Sarah Theres Völkel, Daniel Buschek, Sophia Cook, Heinrich Hussmann:
When people and algorithms meet: user-reported problems in intelligent everyday applications. 96-106
- Aaron Springer, Steve Whittaker:
Progressive disclosure: empirically motivated approaches to designing effective transparency. 107-120
- Sven Charleer, Francisco Gutiérrez, Katrien Verbert:
Supporting job mediator and job seeker through an actionable dashboard. 121-131
Affective and aesthetic IUI
- Tugba Kulahcioglu, Gerard de Melo:
Paralinguistic recommendations for affective word clouds. 132-143
- Surjya Ghosh, Kaustubh Hiware, Niloy Ganguly, Bivas Mitra, Pradipta De:
Does emotion influence the use of auto-suggest during smartphone typing? 144-149
- Marko Tkalcic, Nima Maleki, Matevz Pesek, Mehdi Elahi, Francesco Ricci, Matija Marolt:
Prediction of music pairwise preferences from facial expressions. 150-159
- Chris Donahue, Ian Simon, Sander Dieleman:
Piano Genie. 160-164
Big data and analytics
- James Abello, Fred Hohman, Varun Bezzam, Duen Horng Chau:
Atlas: local graph exploration in a global context. 165-176
- Meraj Ahmed Khan, Arnab Nandi:
Flux capacitors for JavaScript deloreans: approximate caching for physics-based data interaction. 177-185
- Doris Jung Lin Lee, Himel Dev, Huizi Hu, Hazem Elmeleegy, Aditya G. Parameswaran:
Avoiding drill-down fallacies with VisPilot: assisted exploration of data subsets. 186-196
Assistive IUIs
- Leo Neat, Peng Ren, Siyang Qin, Roberto Manduchi:
Scene text access: a comparison of mobile OCR modalities for blind users. 197-207
- Cong Chen, Ajay Chander, Kanji Uchino:
Guided play: digital sensing and coaching for stereotypical play behavior in children with autism. 208-217
- Min Hun Lee, Daniel P. Siewiorek, Asim Smailagic, Alexandre Bernardino, Sergi Bermúdez i Badia:
Learning to assess the quality of stroke rehabilitation exercises. 218-228
Explainable AI
- Shi Feng, Jordan L. Boyd-Graber:
What can AI do for me?: evaluating machine learning interpretations in cooperative play. 229-239
- James Schaffer, John O'Donovan, James Michaelis, Adrienne Raglin, Tobias Höllerer:
I can do better than your AI: expertise and explanations. 240-251
- Christine T. Wolf:
Explainability scenarios: towards scenario-based XAI design. 252-257
- Carrie J. Cai, Jonas Jongejan, Jess Holbrook:
The effects of example-based explanations in a machine learning interface. 258-262
- Upol Ehsan, Pradyumna Tambwekar, Larry Chan, Brent Harrison, Mark O. Riedl:
Automated rationale generation: a technique for explainable AI and its effects on human perceptions. 263-274
- Jonathan Dodge, Q. Vera Liao, Yunfeng Zhang, Rachel K. E. Bellamy, Casey Dugan:
Explaining models: an empirical study of how explanations impact fairness judgment. 275-285
- Tianyi Li, Gregorio Convertino, Ranjeet Kumar Tayi, Shima Kazerooni:
What data should I protect?: recommender and planning support for data security analysts. 286-297
- Ludovik Coba, Laurens Rook, Markus Zanker, Panagiotis Symeonidis:
Decision making strategies differ in the presence of collaborative explanations: two conjoint studies. 291-302
Intelligent visualization
- Katie Watson, Samuel S. Sohn, Sasha Schriber, Markus Gross, Carlos Manuel Muñiz, Mubbasir Kapadia:
StoryPrint: an interactive visualization of stories. 303-311
- Alireza Karduni, Isaac Cho, Ryan Wesslen, Sashank Santhanam, Svitlana Volkova, Dustin Lockhart Arendt, Samira Shaikh, Wenwen Dou:
Vulnerable to misinformation?: Verifi! 312-323
- Johnny Torres, Sixto García, Enrique Peláez:
Visualizing authorship and contribution of collaborative writing in e-learning environments. 324-328
- Sébastien Lallé, Cristina Conati:
The role of user differences in customization: a case study in personalization for infovis-based content. 329-339
Augmented and mixed reality
- Styliani Kleanthous, Elena Matsi:
Analyzing user's task-driven interaction in mixed reality. 340-344
- Yuxiang Gao, Chien-Ming Huang:
PATI: a projection-based augmented table-top interface for robot programming. 345-355
- Wallace Santos Lages, Doug A. Bowman:
Walking with adaptive augmented reality workspaces: design and usage patterns. 356-366
- Willis Fulmer, Tahir Mahmood, Zhongyu Li, Shaoting Zhang, Jian Huang, Aidong Lu:
ImWeb: cross-platform immersive web browsing for online 3D neuron database exploration. 367-378
Explanations in recommender systems
- Pigi Kouki, James Schaffer, Jay Pujara, John O'Donovan, Lise Getoor:
Personalized explanations for hybrid recommender systems. 379-390
- Chun-Hua Tsai, Peter Brusilovsky:
Explaining recommendations in an interactive hybrid social recommender. 391-396
- Martijn Millecamp, Nyi Nyi Htun, Cristina Conati, Katrien Verbert:
To explain or not to explain: the effects of personal characteristics when explaining music recommendations. 397-407
- Vicente Dominguez, Pablo Messina, Ivania Donoso-Guzmán, Denis Parra:
The effect of explanations and algorithmic accuracy on visual recommender systems of artistic images. 408-416
Agent-based IUIs
- Ron Artstein, Carla Gordon, Usman Sohail, Chirag Merchant, Andrew Jones, Julia Campbell, Matthew Trimmer, Jeffrey Bevington, Christopher Engen, David R. Traum:
Digital survivor of sexual assault. 417-425
- Yoshimasa Ohmoto, So Kumano, Toyoaki Nishida:
Induction of an active attitude by short speech reaction time toward interaction for decision-making with multiple agents. 426-431
- Chi-Hsien (Eric) Yen, Yu-Chun (Grace) Yen, Wai-Tat Fu:
An intelligent assistant for mediation analysis in visual analytics. 432-436
- Ziang Xiao, Michelle X. Zhou, Wai-Tat Fu:
Who should be my teammates: using a conversational agent to understand individuals and help teaming. 437-447
- Justin D. Weisz, Mohit Jain, Narendra Nath Joshi, James Johnson, Ingrid Lange:
BigBlueBot: teaching strategies for successful human-agent interactions. 448-459
Trust in automation
- Kun Yu, Shlomo Berkovsky, Ronnie Taib, Jianlong Zhou, Fang Chen:
Do I trust my machine teammate?: an investigation from perception to decision. 460-468
- Robin M. Richter, Maria Jose Valladares, Steven C. Sutherland:
Effects of the source of advice and decision task on decisions to request expert advice. 469-475
User-adaptive IUIs
- Katri Leino, Antti Oulasvirta, Mikko Kurimo:
RL-KLM: automating keystroke-level modeling with reinforcement learning. 476-480
- Camille Gobert, Kashyap Todi, Gilles Bailly, Antti Oulasvirta:
SAM: a modular framework for self-adapting web menus. 481-484
- Protiva Rahman, Arnab Nandi:
Transformer: a database-driven approach to generating forms for constrained interaction. 485-496
- Joseph Chee Chang, Nathan Hahn, Adam Perer, Aniket Kittur:
SearchLens: composing and capturing complex user interests for exploratory search. 498-509
Automated driving
- Thomas Köhn, Matthias Gottlieb, Michael Schermann, Helmut Krcmar:
Improving take-over quality in automated driving by interrupting non-driving tasks. 510-517
- Charlie Hewitt, Ioannis Politis, Theocharis Amanatidis, Advait Sarkar:
Assessing public perception of self-driving cars: the autonomous vehicle acceptance model. 518-527
- Anna-Katharina Frison, Philipp Wintersberger, Tianjia Liu, Andreas Riener:
Why do you like to drive automated?: a context-dependent analysis of highly automated driving to elaborate requirements for intelligent user interfaces. 528-537
- Philipp Wintersberger, Dmitrijs Dmitrenko, Clemens Schartmüller, Anna-Katharina Frison, Emanuela Maggioni, Marianna Obrist, Andreas Riener:
S(C)ENTINEL: monitoring automated vehicles with olfactory reliability displays. 538-546
Collaborative interfaces
- Vikram Mohanty, David Thames, Sneha Mehta, Kurt Luther:
Photo sleuth: combining human expertise and face recognition to identify historical portraits. 547-557
- Jean Y. Song, Stephan J. Lemmer, Michael Xieyang Liu, Shiyan Yan, Juho Kim, Jason J. Corso, Walter S. Lasecki:
Popup: reconstructing 3D video using particle filtering to aggregate crowd responses. 558-569
- Nathawan Charoenkulvanich, Rie Kamikubo, Ryo Yonetani, Yoichi Sato:
Assisting group activity analysis through hand detection and identification in multiple egocentric videos. 570-574
- An T. Nguyen, Matthew Lease, Byron C. Wallace:
Explainable modeling of annotations in crowdsourcing. 575-579
- Irshad Abibouraguimane, Kakeru Hagihara, Keita Higuchi, Yuta Itoh, Yoichi Sato, Tetsu Hayashida, Maki Sugimoto:
CoSummary: adaptive fast-forwarding for surgical videos by detecting collaborative scenes using hand regions and gaze positions. 580-590
Interactive machine learning
- Dustin Arendt, Emily Saldanha, Ryan Wesslen, Svitlana Volkova, Wenwen Dou:
Towards rapid interactive machine learning: evaluating tradeoffs of classification without representation. 591-602
- Oznur Alkan, Elizabeth M. Daly, Adi Botea, Abel N. Valente, Pablo Pedemonte:
Where can my career take me?: harnessing dialogue for interactive career goal recommendations. 603-613
- Yolanda Gil, James Honaker, Shikhar Gupta, Yibo Ma, Vito D'Orazio, Daniel Garijo, Shruti Gadewar, Qifan Yang, Neda Jahanshad:
Towards human-guided machine learning. 614-624
- Isha Chaturvedi, Farshid Hassani Bijarbooneh, Tristan Braud, Pan Hui:
Peripheral vision: a new killer app for smart glasses. 625-636
- Huy Viet Le, Sven Mayer, Niels Henze:
Investigating the feasibility of finger identification on capacitive touchscreens using deep learning. 637-649
Multi-modal interfaces & experience transfer
- Qian Zhang, Dong Wang, Run Zhao, Yinggang Yu:
MyoSign: enabling end-to-end sign language recognition with wearables. 650-660
- Arjun Srinivasan, Mira Dontcheva, Eytan Adar, Seth Walker:
Discovering natural language commands in multimodal interfaces. 661-672
- Paridhi Maheshwari, Nitish Bansal, Surya Dwivedi, Rohan Kumar, Pranav Manerikar, Balaji Vasan Srinivasan:
Exemplar based experience transfer. 673-680