An Eye-Tracking System based on Inner Corner-Pupil Center Vector and Deep Neural Network
Abstract
1. Introduction
- When a user looks at the screen, the system accurately estimates the coordinates of the point of gaze.
- The proposed system does not require a head-fixing apparatus and still estimates the point of gaze accurately when users move their heads.
- Our system is a low-cost eye tracker that runs on an ordinary PC with a webcam, with no additional commercial equipment.
- The system is easy to set up and operate, and is more comfortable for users with disabilities.
2. Related Works
3. The Proposed Method
3.1. Data Collection
3.2. Eye Image Extraction
3.3. Pupil Center Extraction
3.4. Capturing Eye Corners
3.5. Feature Extraction
3.5.1. Pupil Center-Eye Corner Vector
3.5.2. Inner Corner-Pupil Center Vector
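As a minimal sketch of this feature (an assumption-based illustration, not the paper's exact implementation): if the ICPCV is taken to be the 2-D pixel offset from the inner eye corner to the pupil center, with the vectors from both eyes concatenated, it can be computed as below. The landmark coordinates and the function name `icpcv` are hypothetical, and the ICPCV-6D variant appearing in the result tables is presumably a higher-dimensional extension not covered here.

```python
import numpy as np

def icpcv(pupil_center, inner_corner):
    """2-D offset (pixels) from the inner eye corner to the pupil center."""
    return np.asarray(pupil_center, dtype=float) - np.asarray(inner_corner, dtype=float)

# Hypothetical landmark positions (pixels) detected in the eye images.
left_eye = icpcv(pupil_center=(312.4, 188.1), inner_corner=(298.0, 190.5))
right_eye = icpcv(pupil_center=(402.7, 187.3), inner_corner=(418.2, 189.9))

# Concatenating both eyes gives a 4-D gaze feature for the regressor.
feature = np.concatenate([left_eye, right_eye])
print(feature)  # -> [14.4, -2.4, -15.5, -2.6]
```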
3.6. Deep Neural Network
- The DNN relies on the deep structure of the network: several stacked hidden layers rather than a single one.
- Between the hidden layers, the features are successively transformed into new feature spaces, which helps prediction accuracy (a sketch of such a network follows this list).
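A minimal sketch of such a deep regressor, assuming a Keras-style implementation: the hidden-layer widths (5-10-10-10-5) are one of the configurations listed in the result tables, while the ReLU activations, Adam optimizer, and mean-squared-error loss are assumptions rather than details taken from this paper. As the tables suggest, one network is trained per screen coordinate.

```python
import tensorflow as tf

def build_gaze_dnn(input_dim, hidden=(5, 10, 10, 10, 5)):
    """Deep regressor mapping an eye-feature vector to one screen coordinate."""
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=(input_dim,)))
    for units in hidden:  # each hidden layer re-embeds the feature
        model.add(tf.keras.layers.Dense(units, activation="relu"))
    model.add(tf.keras.layers.Dense(1))  # linear output: one pixel coordinate
    model.compile(optimizer="adam", loss="mse")
    return model

# One network per screen coordinate, as the result tables suggest.
model_x = build_gaze_dnn(input_dim=4)  # e.g., a two-eye ICPCV feature
model_y = build_gaze_dnn(input_dim=4)
model_x.summary()
```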
4. Experimental Results
4.1. Multilayer Perceptron Experiment Results
4.2. Radial Basis Function Network Experiment Results
4.3. Deep Neural Network Experiment Results
4.4. Eye Tracking Experiment
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
1. The University of Bradford: Video Eye Tracker. Available online: http://web.dmz.brad.ac.uk/research/rkt-centres/visual-computing/facilities/eye-tracking/ (accessed on 15 June 2019).
2. EYESO Eye Tracker Based System. Available online: http://www.eyeso.net/ (accessed on 11 June 2019).
3. SMI Eye Tracking Glasses. Available online: https://www.smivision.com/eye-tracking/products/mobile-eye-tracking/ (accessed on 6 May 2019).
4. Tobii Pro Glasses 2. Available online: https://www.tobiipro.com/product-listing/tobii-pro-glasses-2/ (accessed on 20 June 2019).
5. The Eye Tribe. Available online: https://theeyetribe.com/theeyetribe.com/about/index.html (accessed on 22 July 2019).
6. Tobii Pro X3-120. Available online: https://www.tobiipro.com/product-listing/tobii-pro-x3-120/ (accessed on 20 June 2019).
7. What Brands of Eye Tracker Are Available and the Corresponding Price Range. Available online: https://kknews.cc/tech/2a4mpez.html (accessed on 12 January 2019).
8. Zhang, X.; Yuan, S.; Chen, M.; Liu, X. A Complete System for Analysis of Video Lecture Based on Eye Tracking. IEEE Access 2018, 6, 49056–49066.
9. Kar, A.; Corcoran, P. GazeVisual: A Practical Software Tool and Web Application for Performance Evaluation of Eye Tracking Systems. IEEE Trans. Consum. Electron. 2019, 65, 293–302.
10. Kurzhals, K.; Hlawatsch, M.; Seeger, C.; Weiskopf, D. Visual Analytics for Mobile Eye Tracking. IEEE Trans. Vis. Comput. Graph. 2017, 23, 301–310.
11. Zhang, X.; Yuan, S. An Eye Tracking Analysis for Video Advertising: Relationship Between Advertisement Elements and Effectiveness. IEEE Access 2018, 6, 10699–10707.
12. Moacdieh, N.M.; Sarter, N. The Effects of Data Density, Display Organization, and Stress on Search Performance: An Eye Tracking Study of Clutter. IEEE Trans. Hum.-Mach. Syst. 2017, 47, 886–895.
13. Yan, B.; Pei, T.; Wang, X. Wavelet Method for Automatic Detection of Eye-Movement Behaviors. IEEE Sens. J. 2019, 19, 3085–3091.
14. Wu, T.; Wang, P.; Lin, Y.; Zhou, C. A Robust Noninvasive Eye Control Approach for Disabled People Based on Kinect 2.0 Sensor. IEEE Sens. Lett. 2017, 1, 1–4.
15. Cornia, M.; Baraldi, L.; Serra, G.; Cucchiara, R. Predicting Human Eye Fixations via an LSTM-Based Saliency Attentive Model. IEEE Trans. Image Process. 2018, 27, 5142–5154.
16. Kümmerer, M.; Wallis, T.S.A.; Gatys, L.A.; Bethge, M. Understanding Low- and High-Level Contributions to Fixation Prediction. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 4789–4798.
17. Guestrin, E.D.; Eizenman, M. General Theory of Remote Gaze Estimation Using the Pupil Center and Corneal Reflections. IEEE Trans. Biomed. Eng. 2006, 53, 1124–1133.
18. Li, J.; Li, S. Gaze Estimation from Color Image Based on the Eye Model with Known Head Pose. IEEE Trans. Hum.-Mach. Syst. 2016, 46, 414–423.
19. Ni, Y.; Sun, B. A Remote Free-Head Pupillometry Based on Deep Learning and Binocular System. IEEE Sens. J. 2019, 19, 2362–2369.
20. Sesma, L.; Villanueva, A.; Cabeza, R. Evaluation of Pupil Center-Eye Corner Vector for Gaze Estimation Using a Web Cam. In Proceedings of the Symposium on Eye Tracking Research and Applications, Santa Barbara, CA, USA, 28–30 March 2012; pp. 217–220.
21. Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767.
22. Kasprowski, P.; Harezlak, K. Cheap and Easy PIN Entering Using Eye Gaze. Ann. UMCS Inf. 2014, 14, 75–83.
23. Kumar, M.; Garfinkel, T.; Boneh, D.; Winograd, T. Reducing Shoulder-Surfing by Using Gaze-Based Password Entry. In Proceedings of the 3rd Symposium on Usable Privacy and Security, Pittsburgh, PA, USA, 18–20 July 2007; pp. 13–19.
24. Zhai, S.; Morimoto, C.; Ihde, S. Manual and Gaze Input Cascaded (MAGIC) Pointing. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Pittsburgh, PA, USA, 15–20 May 1999; pp. 246–253.
25. Agustin, J.; Mateo, J.; Hansen, J.; Villanueva, A. Evaluation of the Potential of Gaze Input for Game Interaction. PsychNology J. 2009, 7, 213–236.
Feature | Coordinate | Number of Neurons | Training Average Error | Test Average Error
---|---|---|---|---
ICPCV-6D | x | 3 | 47.34 | 59.39
ICPCV-6D | y | 9 | 52.87 | 78.43
ICPCV | x | 6 | 33.61 | 41.43
ICPCV | y | 25 | 41.82 | 58.42
PCECV | x | 4 | 55.43 | 75.83
PCECV | y | 50 | 48.17 | 72.37

Feature | Coordinate | Number of Neurons | Training Average Error | Test Average Error
---|---|---|---|---
ICPCV-6D | x | 5 | 77.86 | 75.81
ICPCV-6D | y | 3 | 49.95 | 70.21
ICPCV | x | 15 | 70.15 | 60.47
ICPCV | y | 8 | 29.88 | 48.94
PCECV | x | 9 | 55.74 | 65.96
PCECV | y | 6 | 39.91 | 49.58

Feature | Coordinate | Number of Neurons | Training Average Error | Test Average Error
---|---|---|---|---
ICPCV-6D | x | 10 | 47.66 | 103.20
ICPCV-6D | y | 2 | 72.45 | 94.40
ICPCV | x | 2 | 59.90 | 57.12
ICPCV | y | 2 | 62.11 | 78.93
PCECV | x | 5 | 50.81 | 77.44
PCECV | y | 10 | 41.25 | 73.31

Feature | Coordinate | Number of Neurons | Training Average Error | Test Average Error
---|---|---|---|---
ICPCV-6D | x | 7 | 79.99 | 95.42
ICPCV-6D | y | 9 | 62.40 | 84.23
ICPCV | x | 7 | 53.54 | 68.60
ICPCV | y | 15 | 37.02 | 53.92
PCECV | x | 10 | 45.24 | 66.26
PCECV | y | 5 | 44.14 | 50.46

Feature | Coordinate | Number of Neurons | Training Average Error | Test Average Error
---|---|---|---|---
ICPCV-6D | x | 10,20,20,20,10 | 25.81 | 43.28
ICPCV-6D | y | 5,10,10,10,5 | 12.96 | 104.66
ICPCV | x | 5,10,10,10,5 | 5.27 | 41.33
ICPCV | y | 5,5,5,5,5 | 20.02 | 63.65
PCECV | x | 10,20,20,20,10 | 25.81 | 43.28
PCECV | y | 5,10,10,10,5 | 12.96 | 104.66

Feature | Coordinate | Number of Neurons | Training Average Error | Test Average Error
---|---|---|---|---
ICPCV-6D | x | 5,5,5,5,5 | 68.57 | 79.98
ICPCV-6D | y | 5,10,10,10,5 | 35.56 | 60.35
ICPCV | x | 10,20,20,20,10 | 11.39 | 54.71
ICPCV | y | 10,20,20,20,10 | 15.01 | 51.76
PCECV | x | 5,10,10,10,5 | 20.60 | 57.41
PCECV | y | 5,5,5,5,5 | 18.29 | 50.16

Measure | ICPCV-6D | ICPCV | PCECV
---|---|---|---
Average error distance of experiment 1 | 105.92 | 81.65 | 75.21
Average error distance of experiment 2 | 106.64 | 84.38 | 102.45
Average error distance of experiment 3 | 124.40 | 66.36 | 110.13
Average of the average error distances of the three experiments | 112.32 | 77.46 | 95.93

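For reference, the average error distance reported above can be read as the mean Euclidean distance between the predicted and actual gaze points (pixel units are assumed). A minimal sketch with hypothetical gaze points:

```python
import numpy as np

def average_error_distance(predicted, actual):
    """Mean Euclidean distance between predicted and true gaze points."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return float(np.mean(np.linalg.norm(predicted - actual, axis=1)))

# Hypothetical predictions against three on-screen targets (pixels).
pred = [[110, 95], [640, 360], [1170, 640]]
true = [[100, 100], [640, 360], [1180, 620]]
print(average_error_distance(pred, true))  # ~11.18
```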
Number of Neurons | 5,5,5,5,5 | 5,10,10,10,5 | 10,20,20,20,10
---|---|---|---
Training average error | 18.39 | 8.34 | 25.81
Testing average error | 67.66 | 44.06 | 43.28

Number of Neurons | 5,5,5,5,5 | 5,10,10,10,5 | 10,20,20,20,10
---|---|---|---
Training average error | 15.46 | 12.96 | 25.12
Testing average error | 118.38 | 104.66 | 127.54

Number of Neurons | 5,5,5,5,5 | 5,10,10,10,5 | 10,20,20,20,10
---|---|---|---
Training average error | 59.26 | 5.27 | 14.10
Testing average error | 63.50 | 41.33 | 41.41

Number of Neurons | 5,5,5,5,5 | 5,10,10,10,5 | 10,20,20,20,10
---|---|---|---
Training average error | 20.02 | 13.97 | 12.75
Testing average error | 63.65 | 64.92 | 67.92

Number of Neurons | 5,5,5,5,5 | 5,10,10,10,5 | 10,20,20,20,10
---|---|---|---
Training average error | 11.38 | 12.17 | 43.76
Testing average error | 62.29 | 62.75 | 82.22

Number of Neurons | 5,5,5,5,5 | 5,10,10,10,5 | 10,20,20,20,10
---|---|---|---
Training average error | 7.53 | 18.05 | 8.38
Testing average error | 68.23 | 74.24 | 69.97

Number of Neurons | 5,5,5,5,5 | 5,10,10,10,5 | 10,20,20,20,10
---|---|---|---
Training average error | 37.56 | 25.47 | 36.12
Testing average error | 62.93 | 65.18 | 70.22

Number of Neurons | 5,5,5,5,5 | 5,10,10,10,5 | 10,20,20,20,10
---|---|---|---
Training average error | 42.62 | 37.58 | 50.11
Testing average error | 65.17 | 57.81 | 55.71

Number of Neurons | 5,5,5,5,5 | 5,10,10,10,5 | 10,20,20,20,10
---|---|---|---
Training average error | 62.38 | 36.25 | 58.16
Testing average error | 73.16 | 80.40 | 84.21

Number of Neurons | 5,5,5,5,5 | 5,10,10,10,5 | 10,20,20,20,10
---|---|---|---
Training average error | 74.01 | 35.56 | 38.93
Testing average error | 71.89 | 60.35 | 79.16

Number of Neurons | 5,5,5,5,5 | 5,10,10,10,5 | 10,20,20,20,10
---|---|---|---
Training average error | 26.09 | 15.15 | 11.39
Testing average error | 55.21 | 60.40 | 54.71

Number of Neurons | 5,5,5,5,5 | 5,10,10,10,5 | 10,20,20,20,10
---|---|---|---
Training average error | 26.65 | 9.72 | 15.01
Testing average error | 53.77 | 53.78 | 51.76

User | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
---|---|---|---|---|---|---|---|---|---
User 1 | × | ◯ | ◯ | ◯ | ◯ | ◯ | ◯ | × | ◯
User 2 | × | ◯ | ◯ | ◯ | ◯ | ◯ | × | ◯ | ×
User 3 | ◯ | × | ◯ | ◯ | ◯ | ◯ | ◯ | ◯ | ×
User 4 | ◯ | ◯ | ◯ | ◯ | ◯ | ◯ | ◯ | × | ◯
User 5 | × | × | ◯ | ◯ | ◯ | ◯ | ◯ | ◯ | ×
User 6 | × | ◯ | ◯ | ◯ | ◯ | ◯ | ◯ | ◯ | ◯
User 7 | × | ◯ | ◯ | ◯ | ◯ | ◯ | × | × | ×
User 8 | ◯ | × | × | ◯ | ◯ | ◯ | ◯ | ◯ | ×
User 9 | ◯ | ◯ | ◯ | ◯ | ◯ | ◯ | ◯ | × | ×
User 10 | × | ◯ | ◯ | ◯ | ◯ | ◯ | ◯ | ◯ | ◯
Success rate | 40% | 70% | 90% | 100% | 100% | 100% | 80% | 60% | 40%

Paper Reference | Setup (Camera, LED) | Accuracy/Metrics | Operating Condition |
---|---|---|---|
[22] | Commercial tracker, 1 camera | 61.1% | User dependent |
[23] | Commercial tracker, 1 camera | Error rate 15% | None |
[24] | Commercial tracker, 1 camera | Completion time, no. of hits/misses | None |
[25] | 1 camera | Mean error rate 22.5% | None |
Our system | 1 camera | 100% | None
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).