15th HCI 2013: Las Vegas, Nevada, USA
- Masaaki Kurosu: Human-Computer Interaction. Interaction Modalities and Techniques - 15th International Conference, HCI International 2013, Las Vegas, NV, USA, July 21-26, 2013, Proceedings, Part IV. Lecture Notes in Computer Science 8007, Springer 2013, ISBN 978-3-642-39329-7
Speech, Natural Language and Auditory Interfaces
- Christina Alexandris: Controlling Interaction in Multilingual Conversation. 3-12
- Christina Alexandris, Ioanna Malagardi: Linguistic Processing of Implied Information and Connotative Features in Multilingual HCI Applications. 13-22
- Dimitrios Rigas, Badr Almutairi: Investigating the Impact of Combining Speech and Earcons to Communicate Information in E-government Interfaces. 23-31
- Dimitra Anastasiou, Kristiina Jokinen, Graham Wilcock: Evaluation of WikiTalk - User Studies of Human-Robot Interaction. 32-42
- Georgios Galatas, Gerasimos Potamianos, Fillia Makedon: Robust Multi-Modal Speech Recognition in Two Languages Utilizing Video and Distance Information from the Kinect. 43-48
- Myounghoon Jeon, Ju-Hwan Lee: The Ecological AUI (Auditory User Interface) Design and Evaluation of User Acceptance for Various Tasks on Smartphones. 49-58
- Ladislav Kunc, Tomás Macek, Martin Labský, Jan Kleindienst: Speech-Based Text Correction Patterns in Noisy Environment. 59-66
- Hoang-An Le, Khoi-Nguyen C. Mac, Truong-An Pham, Vinh-Tiep Nguyen, Minh-Triet Tran: Multimodal Smart Interactive Presentation System. 67-76
- Sofiane Medjkoune, Harold Mouchère, Simon Petitrenaud, Christian Viard-Gaudin: Multimodal Mathematical Expressions Recognition: Case of Speech and Handwriting. 77-86
- Kellie Morrissey, Jurek Kirakowski: 'Realness' in Chatbots: Establishing Quantifiable Criteria. 87-96
- David G. Novick, Iván Gris: Grounding and Turn-Taking in Multimodal Multiparty Conversation. 97-106
- Aasish Pappu, Ming Sun, Seshadri Sridharan, Alexander I. Rudnicky: Situated Multiparty Interaction between Humans and Agents. 107-116
- Felix Rabe, Ipke Wachsmuth: Enhancing Human Computer Interaction with Episodic Memory in a Virtual Guide. 117-125
- Yuichiro Shimizu, Tetsuaki Nakamura, Maki Sakamoto: System of Generating Japanese Sound Symbolic Expressions Using Genetic Algorithm. 126-134
- Hongmei Wang, Ava Gailliot, Douglas Hyden, Ryan Lietzenmayer: A Knowledge Elicitation Study for Collaborative Dialogue Strategies Used to Handle Uncertainties in Speech Communication While Using GIS. 135-144
Gesture and Eye-Gaze Based Interaction
- Andreas Braun, Arthur Fischer, Alexander Marinc, Carsten Stocklöw, Martin Majewski: Context-Based Bounding Volume Morphing in Pointing Gesture Application. 147-156
- Francesco Carrino, Antonio Ridi, Rolf Ingold, Omar Abou Khaled, Elena Mugellini: Gesture vs. Gesticulation: A Test Protocol. 157-166
- Stefano Carrino, Maurizio Caon, Omar Abou Khaled, Rolf Ingold, Elena Mugellini: Functional Gestures for Human-Environment Interaction. 167-176
- Hsien-Tsung Chang, Yu-Wen Li, Huan-Ting Chen, Shih-Yi Feng, Tsung-Tien Chien: A Dynamic Fitting Room Based on Microsoft Kinect and Augmented Reality Technologies. 177-185
- Weiqin Chen: Gesture-Based Applications for Elderly People. 186-195
- Enkhbat Davaasuren, Jiro Tanaka: MOBAJES: Multi-user Gesture Interaction System with Wearable Mobile Device. 196-204
- Yoshinobu Ebisawa, Kiyotaka Fukumoto: Head-Free, Remote Gaze Detection System Based on Pupil-Corneal Reflection Method with Using Two Video Cameras - One-Point and Nonlinear Calibrations. 205-214
- Farzin Farhadi-Niaki, S. Ali Etemad, Ali Arya: Design and Usability Analysis of Gesture-Based Control for Common Desktop Tasks. 215-224
- Dekun Gao, Naoaki Itakura, Tota Mizuno, Kazuyuki Mito: Study of Eye-Glance Input Interface. 225-234
- Tomomi Gotoh, Takahiro Kida, Munehiro Takimoto, Yasushi Kambayashi: Multi-User Interaction with Shadows. 235-242
- Weimin Guo, Cheng Cheng, Mingkai Cheng, Yonghan Jiang, Honglin Tang: Intent Capturing through Multimodal Inputs. 243-251
- Jan Hendrik Hammer, Jürgen Beyerer: Robust Hand Tracking in Realtime Using a Single Head-Mounted RGB Camera. 252-261
- Kristiina Jokinen: Multimodal Feedback in First Encounter Interactions. 262-271
- Toshifumi Kurosawa, Buntarou Shizuki, Jiro Tanaka: Keyboard Clawing: Input Method by Clawing Key Tops. 272-280
- Unseok Lee, Jiro Tanaka: Finger Controller: Natural User Interaction Using Finger Gestures. 281-290
- Tao Ma, William G. Wee, Chia Yung Han, Xuefu Zhou: A Method for Single Hand Fist Gesture Input to Enhance Human Computer Interaction. 291-300
- Amilcar Meneses Viveros, Erika Hernandez Rubio: Kinect©, as Interaction Device with a Tiled Display. 301-311
- Atsuo Murata, Raku Uetsugi, Takehito Hayami: Study on Cursor Shape Suitable for Eye-gaze Input System. 312-319
- Atsuo Murata, Kazuya Hayashi, Makoto Moriwaka, Takehito Hayami: Study on Character Input Methods Using Eye-gaze Input Interface. 320-329
- Atsuo Murata, Takehito Hayami, Keita Ochi: Proposal of Estimation Method of Stable Fixation Points for Eye-gaze Input Interface. 330-339
- Keita Okuuchi, Koh Kakusho, Takatsugu Kojima, Daisuke Katagami: Modeling Situation-Dependent Nonverbal Expressions for a Pair of Embodied Agent in a Dialogue Based on Conversations in TV Programs. 340-347
- Mariano Perez Pelaez, Ryo Suzuki, Ikuro Choh: Research on a Large Digital Desktop Integrated in a Traditional Environment for Informal Collaboration. 348-357
- Alexandros Pino, Evangelos Tzemis, Nikolaos Ioannou, Georgios Kouroupetroglou: Using Kinect for 2D and 3D Pointing Tasks: Performance Evaluation. 358-367
- Taebeum Ryu, Jaehong Lee, Myung Hwan Yun, Ji Hyoun Lim: Conditions of Applications, Situations and Functions Applicable to Gesture Interface. 368-377
- Nobuchika Sakata, Tomoyuki Kobayashi, Shogo Nishida: Communication Analysis of Remote Collaboration System with Arm Scaling Function. 378-387
- Matthias Schwaller, Simon Brunner, Denis Lalanne: Two Handed Mid-Air Gestural HCI: Point + Command. 388-397
- Yoshimasa Tawatsuji, Kazuaki Kojima, Tatsunori Matsui: Experimental Study Toward Modeling of the Uncanny Valley Based on Eye Movements on Human/Non-human Faces. 398-407
- Tony Tung, Randy Gomez, Tatsuya Kawahara, Takashi Matsuyama: Multi-party Human-Machine Interaction Using a Smart Multimodal Digital Signage. 408-415
- Takuto Yoshikawa, Yuusaku Mita, Takuro Kuribara, Buntarou Shizuki, Jiro Tanaka: A Remote Pointing Technique Using Pull-out. 416-426
Touch-Based Interaction
- Jérôme Barbé, Marion Wolff, Régis Mollard: Human Centered Design Approach to Integrate Touch Screen in Future Aircraft Cockpits. 429-438
- Marcela Câmara, Priscilla Fonseca de Abreu Braz, Ingrid Teixeira Monteiro, Alberto Raposo, Simone Diniz Junqueira Barbosa: Evaluating Devices and Navigation Tools in 3D Environments. 439-448
- Kristen K. Greene, Franklin P. Tamborello II, Ross J. Micheals: Computational Cognitive Modeling of Touch and Gesture on Mobile Multitouch Devices: Applications and Challenges for Existing Theory. 449-455
- Yuichiro Kinoshita, Masayuki Sugiyama, Kentaro Go: A Page Navigation Technique for Overlooking Content in a Digital Magazine. 456-461
- Kazuyoshi Murata, Masatsugu Hattori, Yu Shibuya: Effect of Unresponsive Time for User's Touch Action of Selecting an Icon on the Video Mirror Interface. 462-468
- Anna Noguchi, Toshifumi Kurosawa, Ayaka Suzuki, Yuichiro Sakamoto, Tatsuhito Oe, Takuto Yoshikawa, Buntarou Shizuki, Jiro Tanaka: Evaluation of a Soft-Surfaced Multi-touch Interface. 469-478
- Michael Schmidt, Gerhard Weber: Recognition of Multi-touch Drawn Sketches. 479-490
- Yu Shibuya, Hikaru Kawakatsu, Kazuyoshi Murata: A Web Browsing Method on Handheld Touch Screen Devices for Preventing from Tapping Unintended Links. 491-496
- Sumit Srivastava, Ramesh Chandra Tripathi: Real Time Mono-vision Based Customizable Virtual Keyboard Using Finger Tip Speed Analysis. 497-505
- Chih-Hung Ting, Teng-Yao Tsai, Yi-Pai Huang, Wen-Jun Zeng, Ming-Hui Lin: Human Factor Research of User Interface for 3D Display. 506-512
- Chau Thai Truong, Duy-Hung Nguyen-Huynh, Minh-Triet Tran, Anh Duc Duong: Collaborative Smart Virtual Keyboard with Word Predicting Function. 513-522
- Norshahriah Wahab, Halimah Badioze Zaman: The Implementation of Multi-touch Table to Support the Military Decision Making through Critical Success Factors (CSFs). 523-529
- Siju Wu, Samir Otmane, Guillaume Moreau, Myriam Servières: Design of a Visual Query Language for Geographic Information System on a Touch Screen. 530-539
- Yugang Zhang, Bifeng Song, Wensheng Min: Target Orientation Effects on Movement Time in Rapid Aiming Tasks. 540-548
Haptic Interaction
- Michael P. Clamann, Wenqi Ma, David B. Kaber: Comparison of Enhanced Visual and Haptic Features in a Virtual Reality-Based Haptic Simulation. 551-560
- Brendan Corbett, Takehiko Yamaguchi, Shijing Liu, Lixiao Huang, Sangwoo Bahn, Chang S. Nam: Influence of Haptic Feedback on a Pointing Task in a Haptically Enhanced 3D Virtual Environment. 561-567
- Anak Agung Gede Dharma, Takuma Oami, Yuhki Obata, Li Yan, Kiyoshi Tomimatsu: Design of a Wearable Haptic Vest as a Supportive Tool for Navigation. 568-577
- Anak Agung Gede Dharma, Kiyoshi Tomimatsu: Mapping Texture Phase Diagram of Artificial Haptic Stimuli Generated by Vibrotactile Actuators. 578-586
- Wonil Hwang, Dongsoo Kim: Preliminary Design of Haptic Icons from Users. 587-593
- Wooram Jeon, Yueqing Li, Sangwoo Bahn, Chang S. Nam: Assessing the Effectiveness of Vibrotactile Feedback on a 2D Navigation Task. 594-600
- Kasun Karunanayaka, Sanath Siriwardana, Chamari Priyange Edirisinghe, Ryohei Nakatsu, Gopalakrishnakone Ponnampalam: Magnetic Field Based Near Surface Haptic and Pointing Interface. 601-609
- Ja Young Lee, Sangwoo Bahn, Chang S. Nam: Use of Reference Frame in Haptic Virtual Environments: Implications for Users with Visual Impairments. 610-617
- Shijing Liu, Sangwoo Bahn, Heesun Choi, Chang S. Nam: Behavioral Characteristics of Users with Visual Impairment in Haptically Enhanced Virtual Environments. 618-625
Graphical User Interfaces and Visualisation
- Guy Andre Boy, Donald Platt: A Situation Awareness Assistant for Human Deep Space Exploration. 629-636
- Mario Caruso, Febo Cincotti, Francesco Leotta, Massimo Mecella, Angela Riccio, Francesca Schettini, Luca Simione, Tiziana Catarci: My-World-in-My-Tablet: An Architecture for People with Physical Impairment. 637-647
- Leonardo L. B. V. Cruciol, Li Weigang: AHPM as a Proposal to Improve Interaction with Air Traffic Controllers. 648-657
- Jill L. Drury, Mark S. Pfaff, Gary L. Klein, Yikun Liu: Decision Space Visualization: Lessons Learned and Design Principles. 658-667
- Elaine Froehlich, Brian Lucid, Heather Shaw: The Language of Motion: A Taxonomy for Interface. 668-677
- Christian Fuchs, Sérgio Ferreira, João B. Sousa, Gil Manuel Gonçalves: Adaptive Consoles for Supervisory Control of Multiple Unmanned Aerial Vehicles. 678-687
- Lawrence J. Henschen, Julia C. Lee: A Web-Based Interface for a System That Designs Sensor Networks. 688-697
- Romina Kühn, Diana Lemme, Thomas Schlegel: An Interaction Concept for Public Displays and Mobile Devices in Public Transport. 698-705
- Vera Oblaender, Maximilian Eibl: Study of Interaction Concepts in 3D Virtual Environment. 706-711
- Tatsuhito Oe, Buntarou Shizuki, Jiro Tanaka: Undo/Redo by Trajectory. 712-721
- Takeo Sakairi, Takashi Tamada, Katsuyuki Kamei, Yukio Goto: Multi-layer Control and Graphical Feature Editing Using Server-Side Rendering on Ajax-GIS. 722-729
- Takehiko Sakamoto, Shin Takahashi, Jiro Tanaka: A Method for Discussing Musical Expression between Music Ensemble Players Using a Web-Based System. 730-739
- Kazuki Tamura, Tomohiro Yoshikawa, Takeshi Furuhashi: A Study on Document Retrieval System Based on Visualization to Manage OCR Documents. 740-749
- Chui Yin Wong, Chee-Weng Khong, Kimberly Chu, Muhammad Asyraf Mhd Pauzi, Man Leong Wong: Audio-Visual Documentation Method for Digital Storytelling for a Multimedia Art Project. 750-758