Article

Understanding Emotions in Children with Developmental Disabilities during Robot Therapy Using EDA

Department of Robotic Science and Technology, Chubu University, Kasugai 487-8501, Japan
* Author to whom correspondence should be addressed.
Sensors 2022, 22(14), 5116; https://doi.org/10.3390/s22145116
Submission received: 23 May 2022 / Revised: 19 June 2022 / Accepted: 4 July 2022 / Published: 7 July 2022
(This article belongs to the Special Issue Sensors Technology for Medical Robotics)

Abstract
Recent technological advancements have led to the emergence of supportive robotics to help children with developmental disabilities become independent. In conventional robot-therapy research, experiments are often conducted by operating the robot out of the subject’s sight. In this paper, a robot-therapy system that can autonomously recognize the emotions of a child with developmental disabilities and provide feedback was developed. The aim was to quantitatively infer emotional changes in children using electrodermal activity (EDA) during robot therapy. It was demonstrated that the robot could recognize emotions autonomously and provide feedback to the subjects. Additionally, a quantitative evaluation was conducted using EDA. By analyzing the symptoms related to developmental disorders, it may be possible to improve the recognition rate and tailor therapy to each symptom.

1. Introduction

With the development of robot technology in recent years, research on educational support robots, which support or replace the education provided by teachers and parents, has been attracting substantial attention in educational facilities such as nursery schools and schools, as well as in homes. One in seven people worldwide lives with some form of disability. Among them, although estimates vary depending on the definition, about 5.1 percent of children aged 0–14 years were estimated to have moderate or severe disabilities [1]. However, according to the World Health Organization, people with disabilities are particularly vulnerable to a lack of services such as healthcare, rehabilitation, support, and assistance [2]. Recent technological developments may offer the potential to alleviate these problems, meet user needs, and provide cheaper support systems. Therefore, assistive robotics that can help children with developmental disabilities to become independent have been introduced [3]. As one form of such support, robot therapy, in which robots and children with developmental disabilities interact with one another, has emerged. Robot therapy is effective for autistic children because, although they are not good at reading complicated human facial expressions, robots have a simpler structure than humans, which makes them easier to interpret and enables more active communication [4]. A study by Wood et al. compared the responses of children with disabilities interviewed by either a robot or a human [5]. Prior to this study, a similar experiment was conducted on healthy children, and it was found that the children’s responses to the human and robot interviewers were very similar [6]. The results of the experiment with children with disabilities were the same: they interacted with the human and robot interviewers in a very similar way. These results suggest that robot therapy may be effective for children with developmental disabilities.

1.1. Human–Robot Interaction and Robot Therapy

Human–robot interaction (HRI) is the field of understanding, designing, and evaluating robot systems that humans actually use or with which they coexist [7]. In recent years, robots have become more complex and sophisticated, and they have been integrated into everyday life, including workplaces, homes, hospitals, remote areas, dangerous environments, and battlefields [7,8,9]. As robots continue to enter people’s daily lives, it is of great importance to investigate the interactions between humans and robots. However, the structures, natures, and types of these interactions are dependent on the problems that are addressed in HRI research and development [7]. Moreover, in such interactions, robots communicate in the same manner as humans, using voice dialogue, gestures, and gazes. Thus, the term interaction in HRI is used in a fairly broad sense. In addition, effective analysis methods for HRI have not yet been elucidated or defined. Knowledge of human-to-human communication is often applied in this domain, and experiments may be designed or analyzed with reference to behavioral and clinical psychology [10].
In this context, the field of robot therapy has been widely studied. Robot therapy is a psychotherapy technique mediated by robots that is aimed at conditions such as autism and dementia. Regarding the therapeutic effect, research based on various verification methods has been conducted, and quantitative knowledge is being accumulated [11]. Robot therapy is aimed at various people. In the field of elderly care, experiments on the psychological and social effects of animal-type robots have been conducted with elderly people in nursing care facilities, and this approach was confirmed to improve the mood of the elderly and alleviate depression [12]. Moreover, such robots have been given different appearances and functions to take elderly people’s preferences into account [13].

1.2. Robots for Children with Autism

Autism spectrum disorder (ASD) is diagnosed when the following symptoms cause problems in daily life: difficulty with communication and delayed speech, difficulty with interpersonal interactions (communication with others), and obsession with particular objects [14]. ASD is a group of lifelong disorders that affect communication skills and the ability to understand social cues. It may also be associated with an intellectual disability or with other disorders or characteristics, such as hyperesthesia. The concept of ASD as a continuum has been redefined by including all of the various autism disorders and eliminating the boundaries with normality [15]. ASD is thought to be a congenital brain disorder. Other features include non-responsiveness to calls, hyperesthesia and bluntness, difficulty in sustaining concentration, poor attention and memory, difficulty in controlling emotions, immediate panic, and a dislike of environmental changes. Symptoms include those listed in [16].
Numerous studies on robots for children with autism report effective and positive results of robot-mediated therapy [17]. The reason for using robots in this field is that, although it is difficult for autistic patients to read the complex facial expressions of humans, robots have a simpler structure than humans, which makes it easier for autistic children to interpret them and communicate actively [4]. For example, Rudovic et al. examined the effectiveness of robot therapy using three modalities (audio, visual, and autonomic physiology) [18]. Therefore, communication between a robot and an autistic child may help to improve the communication ability of the child. Besides this study, there are many studies and types of robots. Below, we list the robots that have been used in robot therapy for children with autism, which is the focus of this study; various designs have been implemented for robots aimed at patients with autism.
  • KASPAR is a child-sized robot that was developed by the Adaptive Systems Group of the University of Hertfordshire, which can produce human-like facial expressions and gestures. KASPAR was recently updated to its fifth version to work more effectively in autism therapy [19]. With this fifth version, four children with autism participated in a game about emotions.
  • Keepon is a robot with a yellow, round-shaped minimal body that was developed by the CareBots Project. The upper body has a head with left and right eyes equipped with wide-angle video cameras and a microphone-attached nose. With four degrees of freedom, Keepon expresses a state of mind through gestures [20]. Twenty-five children participated, interacting with Keepon for 10 to 20 min, and their emotional communication was observed. Children made eye contact with, touched, and talked to Keepon [21].
  • Muu is another well-known, simply designed robot that has only one eye on its body. It is covered with a soft material and moves irregularly due to an internal spring. In the experiment with Muu [22], children with autism played with Muu for 5 to 10 min. This study was run once a month and observations continued for six months. The case study showed that Muu, described in the study as an inanimate robot, works as a social agent for children with autism.
  • QTrobot, which was created by LuxAI, is an artificial-intelligence robot with an LCD face and a robot arm. It improves the emotion-recognition ability of autistic children by imitating their emotions with expressive faces and moving arms. The behavior of the robot is limited to a certain extent and the child is never denied. By using the attached tablet terminal, an individual can interact with QTrobot to increase the level of interest in the other party and to reduce confusion [23].
  • CASTOR is a small humanoid that was produced by Casas-Bocanegra et al. It offers a replicable robot platform that integrates the concepts of soft actuators and compliant mechanisms, and is aimed at real-world therapies, including physical interactions between children and robots [24].
  • NAO is a small humanoid robot developed by Aldebaran Robotics (SoftBank Robotics) that walks independently on two legs and has verbal and nonverbal communication functions [25]. This robot has been widely used, and one of its application fields is autism therapy, as in the study of Shamsuddin et al. [26]. They conducted an observational case study of interaction between an autistic child and the robot and compared the child’s reactions with those in a normal class with people. The results show that the child interacted more actively when playing with the robot than in the normal class.
These studies show that, although many different types of robots have been used, the results consistently indicate positive effects for the children. They suggest that a robot can be an effective tool or agent in therapy sessions for children with a developmental disability.

1.3. EDA in HRI

Electrodermal activity (EDA) is an electrical phenomenon in the skin [27] that has psychological meaning. When the eccrine sweat glands are emotionally stimulated, they produce sweat, which has high electrical conductivity [28]; therefore, the electrical properties of the skin change. Because emotional stimuli change the skin conductance through the activation of the eccrine sweat glands, EDA measurements can provide information for inferring emotions. To quantify the degree of arousal caused by a stimulus, it is therefore necessary to measure the amount of sweating, as indicated by an increase in the skin conductance value [29]. EDA measurements involve determining the electrical conductivity, resistance, impedance, and admittance of the skin [27]. When stimuli are recognized as important and cause an emotional response, the brain signals to the eccrine sweat glands through the sympathetic branches of the autonomic nervous system, thereby activating sweating [30]. When sweat is secreted from the eccrine sweat glands, the skin conductance increases sharply; this increase occurs within seconds of the stimulation [31]. A sharp increase in skin conductance is known as the skin conductance response (SCR), electrocutaneous activity response, or peak; previously, the SCR was commonly referred to as the galvanic skin response [32]. The arousal value is evaluated using Russell’s circumplex model, which arranges emotions in a circle defined by the axes of comfort–discomfort and arousal–drowsiness [33]. In this study, we use surprise, anger, joy, and sadness as the four emotions with different degrees of arousal. In the experiment, the discrete wavelet transform and an SVM classifier were used for the EDA analysis. The discrete wavelet transform was chosen because the EDA signal is not periodic and its amplitude, phase, and frequency change; the wavelet transform is generally considered effective for such unsteady signals [34], is not easily affected by noise, and can easily be applied to unsteady signals [35]. The features used in this experiment are those that Rudovic et al. demonstrated to be necessary for EDA analysis (z-normalized values, mean, SD, maximum, minimum, integral, slope, number of peaks, and number of zero crossings) [18]. The SVM is a supervised learning technique that can handle classification and regression and can be applied to the wavelet-transformed EDA features; Feng et al. showed that using such features with an SVM classifier significantly improves emotion-classification performance [36].
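To make this feature set concrete, the following is a minimal Python sketch of how these nine statistics could be computed from one EDA segment; the function name eda_features, the default sampling rate, and the use of NumPy/SciPy are illustrative assumptions rather than the authors’ implementation.

```python
import numpy as np
from scipy.signal import find_peaks

def eda_features(eda, fs=4.0):
    """Compute the EDA statistics used as features for one segment.

    eda: 1-D array of skin-conductance samples [uS]
    fs:  sampling rate in Hz (the E4 wristband samples EDA at 4 Hz)
    """
    eda = np.asarray(eda, dtype=float)
    t = np.arange(len(eda)) / fs

    z = (eda - eda.mean()) / (eda.std() + 1e-12)                # z-normalized signal
    slope = np.polyfit(t, eda, 1)[0]                            # overall linear trend [uS/s]
    integral = float(np.sum((eda[1:] + eda[:-1]) / 2.0) / fs)   # trapezoidal area under the curve [uS*s]
    n_peaks = len(find_peaks(eda)[0])                           # number of local maxima (SCR-like peaks)
    n_zero_cross = int(np.sum(np.diff(np.sign(z)) != 0))        # zero crossings of the z-normalized signal

    return {"mean": eda.mean(), "sd": eda.std(), "max": eda.max(), "min": eda.min(),
            "integral": integral, "slope": slope, "peaks": n_peaks,
            "zero_crossings": n_zero_cross}
```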
As mentioned in Section 1, children with autism are not good at reading complex facial expressions. However, when a robot joins the therapist, it becomes possible for the child to communicate more actively than when communicating with the therapist alone, and emotional expression and communication abilities can be improved. In HRI, the robot is often operated from a location that is invisible to the subject (the WoZ method [37]). In this work, the robot itself recognizes the emotion of the subject and feeds the recognized emotion back to the subject. Furthermore, studies specializing in autism have produced quantified results using biometric data, whereas experiments covering children with developmental disabilities more broadly have typically relied on questionnaires; many experiments have inferred emotions in this way, and few have provided quantified results. Therefore, in this study, the interaction of the robot was autonomous, so that the emotions of the subject that were recognized could be fed back to the subject. Subsequently, the characteristics of the EDA for the emotions exhibited by a child with developmental disabilities during the child–robot interaction were extracted, and the emotions at that time were inferred from the EDA.

1.4. Contributions and Paper Overview

This work aims to develop a robot-assisted therapy system and to analyze the emotions of children with autism during the therapy. A system was created in which a robot autonomously recognizes the emotions of a child with developmental disabilities and provides feedback to the therapist. During the therapy, the system infers the children’s emotions by recognizing their faces. Afterwards, we estimated their emotions from their EDA data. Besides the robot, various devices were needed for precise measurement and for conducting the therapy. By integrating these devices with the robot, the therapist was able to lead the robot-assisted therapy session. Moreover, the system was capable of providing feedback to the therapist so that he or she could understand the status of the children at the time. By analyzing their EDA, it was possible to quantitatively infer emotional changes during robot therapy. To the best of our knowledge, this is the first study to integrate all of these devices in the context of robot-assisted therapy for children with developmental disabilities. Previous work on robot-assisted therapy usually used observation [26] as an assessment and needed an operator to control the system [38]. In this work, by integrating the different devices into one system, the children’s emotional states are recognized automatically and assessed quantitatively. In addition, the system became easier to use, as the therapist was able to run it. With this, we suggest a future direction for robot-assisted-therapy research: the therapist’s workload should be considered when developing such systems, as should ways to effectively use the analyzed data on children’s emotional status during therapy.

2. Methods

In previous studies, robot interactions were often manipulated by an operator hidden out of sight of the subject (the WoZ method [37]). Many studies have used the WoZ method in robot-assisted autism therapy. In our previous study [39], we used the WoZ method, and the robot did not recognize the emotions of the subject or provide feedback. WoZ can help the robot interact with humans more naturally than a fully autonomous approach. The therapy session, however, differs from general HRI because of the presence of therapists. They are not the subjects of the experiment, but they are still the end users of the robot system and, at the same time, they are required to lead the sessions instead of robot operators. In this experiment, the robot recognized emotions autonomously and performed motions. The recognition result was indicated by the color of the robot’s eyes so that the therapist could check whether the robot was working correctly. If the therapist judged that the robot’s recognition was wrong, or that it was better to move on to another step of the session, they could control the robot directly through a numeric keypad.
As the robot interacting with the children, we used the NAO robot in this work. The NAO is a small humanoid capable of having a conversation with people and expressing emotions. This robot is used in therapy for the elderly and for children with autism, and as a programming tool in many educational institutions and research facilities. In this experiment, the NAO moved only its upper body while sitting to express the four emotions.
Only the subject child and the therapist entered the room, and the NAO was enclosed in a partition to make it invisible at the start of the experiment. Each experiment was conducted for 10 min in three parts (the break-in, facial-expression-game, and free parts), as shown in Figure 1. A photograph of the experimental scene in the break-in and other parts obtained from camera 3 is presented in Figure 2. In both the facial-expression-game and free parts, the child faced the NAO.
  • Break-in part: The therapist showed the child a facial expression card for each of the four emotions and the child practiced expressing the emotions.
  • Facial-expression-game part: The NAO showed the child the facial expressions that were practiced in the break-in part. Thereafter, the therapist showed the child the facial-expression card corresponding to the expression performed by the NAO and paired it with that expression. The therapist then asked the child to perform the facial expressions specified by the NAO. The NAO instructed the child, “Show me one of the four facial expressions,” and the therapist encouraged the child to exhibit one. The NAO recognized the facial expression and conveyed it to the subject. This was repeated until the correct answer had been provided for all four facial expressions.
  • Free-structure-play part: When the facial-expression-game part was over, the therapist could freely operate the NAO using the numeric keypad. At this time, the NAO could indicate four facial expressions and compliment the subject.
Approximately 10 min were required in the break-in part to stabilize the biological-signal data of the subject. Because only the subject and therapist used the facial-expression cards in this part, the subject could learn which facial expression each card represented. When the break-in part was over, the therapist was asked to move the partition and chair, and the child and therapist played the facial-expression game facing the NAO. When the facial-expression game was over, the therapist could move the NAO by operating the numeric keypad; the NAO could thus be moved freely and played with the subject. The subjects of the experiment were 11 children with developmental disabilities (6 males, 5 females; mean age: 6.9 years; standard deviation: 1.1).

2.1. Facial Expression Game Part

The facial-expression game is a method used for emotional-expression education for autistic children with robotic support. In the facial-expression game in this experiment, the therapist used cards with basic facial expressions (sad, anger, surprise, and happy), as shown in Figure 3 [40].
The NAO showed the expressions of these emotions to the child, and the therapist asked the child which emotion the robot had expressed to make them aware of the emotions. Thereafter, the NAO asked the child to perform one of the four facial expressions (sadness, joy, anger, and surprise). The NAO read the child’s facial expression and told the child, “You have this facial expression out of the four facial expressions.” The NAO showed the recognized emotion to the child, asked if it was correct, and asked the therapist to answer whether it was correct or incorrect using the numeric keypad provided to the therapist. If the answer was correct, the facial expression that had been answered correctly was removed, and the process was repeated with the remaining facial expressions until the final one was completed. If the recognized facial expression was different from the facial expression performed by the child, the process was repeated with all four facial expressions. These steps were derived from the theory-of-mind concept and designed to foster “social imagination”, which is a key challenge faced by many children with autism [41]. The NAO movements for each of the four emotions were created using Choregraphe.
For “sad,” the NAO brings its arm in front of its face from the sitting position and makes a crying sound while mimicking a human crying movement. For “joy,” the NAO waves its hands and expresses joy in a loud voice. For “anger,” the NAO raises its arm and swings it back and forth while speaking in a low voice. For “surprise,” the robot raises its hand slightly and tilts its face slightly backward, making a human hiccup-like sound.

2.2. Devices

We used multiple devices in the experiment, and the data were acquired on multiple laptops rather than a single one. Therefore, all E4, video, and audio data were recorded with coordinated universal time (UTC) timestamps, making it easier to align them with each other for later analysis. The E4 wristband that was used to acquire the EDA in this experiment was connected to the E4 streaming server using a Bluetooth connection, and the acquired data were uploaded online to E4 Connect. Subsequently, the uploaded data were extracted in real time, and the UTC and Japan Standard Time (JST) were sequentially assigned to the acquired data in a text file. The network flow of the E4 wristband is described later. Furthermore, the NAO was connected to a personal computer with a wired cable to prevent the connection from being disrupted during the experiment.
In this experiment, an E4 wristband was used to obtain the EDA. The E4 wristband is a wristwatch-type device developed by Empatica that can acquire biological signals in real time. Sweating increases as the arousal value increases; the E4 wristband measures the resulting change in the skin’s electrical properties and records it as electrodermal activity. The unit of the acquired data is microsiemens (μS). In addition to the EDA, the blood volume pulse, acceleration, heart rate, and body temperature can also be measured. In this experiment, the EDA was the biological signal that was used. It is also possible to view the EDA in real time via a Bluetooth connection using a smartphone. However, this makes it difficult to match the time with other data such as video; thus, in the experiment, the data were written to a text file using a program.
We used three web cameras (Logitech C922) in the experiment. The first was used to photograph the entire laboratory, the second captured the facial expressions of the subject, and the third captured the state and sound of the experiment from the front.

2.3. Integration of Systems

The robot-therapy system included an app that controls the NAO, a mood mirror for facial recognition, a numeric keypad with which the therapist operated the NAO, a recording app, and an app that acquires biological signals from the E4 wristband.

2.3.1. NAO Interactive Application

The NAO app was created in Python using naoqi. First, the NAO introduced itself to the child and therapist. Thereafter, the NAO showed the child four facial expressions (sadness, joy, anger, and surprise). When the NAO had finished showing all of the emotions, the child was asked to show their facial expressions one by one. At that time, if the NAO could not recognize a facial expression and could not proceed, it was possible to proceed to the next expression by operating the numeric keypad. When the NAO had shown the series of facial expressions, the facial-expression game started. The NAO asked the child to express one of the facial expressions. At that time, the facial-expression label of the subject output by the mood-mirror facial-recognition software was acquired, and the child was asked, “Is this the facial expression you are doing now?” Correct and incorrect answers were transmitted to the NAO by the therapist using the numeric keypad. If the answer was correct, the game continued with the three remaining facial expressions; if the answer was incorrect, the round was repeated with all four facial expressions. The game therefore ended when the correct answer had been provided four times (the NAO had recognized all facial expressions). When the game was over, the NAO told the child, “From here, play with your teacher (the therapist),” and the NAO could then be operated with the numeric keypad. Thereafter, when the therapist pressed the end key, the program ended with the NAO saying, “Thank you for playing with us today.”
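To illustrate the interaction flow described above, the following is a minimal sketch of the facial-expression-game loop, assuming the Python naoqi SDK (ALProxy with the ALTextToSpeech module) and two hypothetical placeholders, get_recognized_expression() for the mood-mirror output and read_keypad() for the therapist’s numeric keypad; it sketches the game logic only and is not the authors’ application.

```python
from naoqi import ALProxy

NAO_IP, NAO_PORT = "nao.local", 9559       # assumed robot address (9559 is the default NAOqi port)
EMOTIONS = ["sadness", "joy", "anger", "surprise"]

tts = ALProxy("ALTextToSpeech", NAO_IP, NAO_PORT)

def get_recognized_expression():
    """Placeholder: label produced by the mood-mirror facial-recognition software."""
    raise NotImplementedError

def read_keypad():
    """Placeholder: therapist's numeric-keypad judgment, 'correct' or 'incorrect'."""
    raise NotImplementedError

def facial_expression_game():
    remaining = list(EMOTIONS)
    while remaining:
        tts.say("Show me one of the four facial expressions.")
        guess = get_recognized_expression()            # robot's guess of the child's expression
        tts.say("Is this the facial expression you are doing now? " + guess)
        if read_keypad() == "correct":
            if guess in remaining:
                remaining.remove(guess)                # correctly answered expressions are removed
        else:
            remaining = list(EMOTIONS)                 # on a miss, the round restarts with all four
    tts.say("Thank you for playing with us today.")
```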

2.3.2. Recording Systems

The recording app was a Python app created using Tkinter, OpenCV, and other tools. When the app was first run, a subject-information input screen appeared. When the subject information was entered and the OK button was clicked, a GUI for displaying the video appeared, and the video was shown in real time. If the camera was not connected, a dummy image was output. Rather than recording a video file directly, the frames captured by the camera through OpenCV were displayed in succession, similar to a flip book. All captured images were saved in PNG format, and the image number and capture time (UTC and JST) were recorded in a log file. Audio data were recorded using PyAudio. The times when the recording started and ended were recorded in JST in the log file.
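The frame-capture and time-logging idea can be sketched as follows, assuming OpenCV and Python’s standard csv and datetime modules; the file names, output directory, and the fixed UTC+9 offset used for JST are illustrative choices rather than the authors’ code.

```python
import csv
import datetime as dt
import cv2

JST = dt.timezone(dt.timedelta(hours=9))        # Japan Standard Time = UTC+9

def record_frames(camera_index=0, out_dir=".", n_frames=100):
    """Grab frames with OpenCV, save each as a PNG, and log UTC/JST capture times."""
    cap = cv2.VideoCapture(camera_index)
    with open(f"{out_dir}/frame_log.csv", "w", newline="") as f:
        log = csv.writer(f)
        log.writerow(["frame", "utc", "jst"])
        for i in range(n_frames):
            ok, frame = cap.read()
            if not ok:
                break                            # camera not connected or stream ended
            now_utc = dt.datetime.now(dt.timezone.utc)
            cv2.imwrite(f"{out_dir}/frame_{i:06d}.png", frame)
            log.writerow([i, now_utc.isoformat(), now_utc.astimezone(JST).isoformat()])
    cap.release()
```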

2.3.3. E4 Wristband System

The E4 app was also written in Python. A computer and an E4 connected via Bluetooth were required to run the app. According to the specifications, when the E4 was connected to the E4 streaming server, the acquisition of biological signals started, and the acquired biological-signal data were saved to E4 Connect via the Internet. The E4 program connected to the streaming server over a telnet (TCP) connection, constantly acquired the biological-signal data, and wrote them to a text file, separated by signal type. By visualizing the biological signals acquired at that time in a GUI, it was possible to confirm whether the data were being acquired correctly. The acquired biological-signal data were also recorded together with their acquisition times (UTC and JST).
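A minimal sketch of such an acquisition loop is shown below as a plain TCP client; the host, port, and command strings (e.g., device_subscribe gsr ON) follow the publicly documented E4 streaming-server interface but should be treated as assumptions that may differ by server version, and the line parsing is illustrative rather than the authors’ implementation.

```python
import datetime as dt
import socket

JST = dt.timezone(dt.timedelta(hours=9))          # Japan Standard Time = UTC+9
HOST, PORT = "127.0.0.1", 28000                   # assumed default address of the E4 streaming server
DEVICE_ID = "XXXXXX"                              # placeholder wristband ID

def stream_eda(outfile="eda.txt"):
    """Subscribe to the EDA (GSR) stream and write UTC/JST-timestamped samples to a text file."""
    with socket.create_connection((HOST, PORT)) as sock, open(outfile, "w") as out:
        sock.sendall(b"device_connect " + DEVICE_ID.encode() + b"\r\n")
        sock.sendall(b"device_subscribe gsr ON\r\n")
        buf = b""
        while True:
            buf += sock.recv(4096)
            while b"\n" in buf:
                line, buf = buf.split(b"\n", 1)
                if line.startswith(b"E4_Gsr"):    # assumed prefix of streamed EDA samples
                    value = line.split()[-1].decode()
                    now_utc = dt.datetime.now(dt.timezone.utc)
                    out.write(now_utc.isoformat() + "\t"
                              + now_utc.astimezone(JST).isoformat() + "\t"
                              + value + "\n")
```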

2.4. Analysis

Feng et al. showed that the frequency analysis of raw EDA signals can provide a feature space for recognizing different emotions. To obtain a feature space for recognizing emotions, a wavelet transform was applied to the raw data with given center-frequency and bandwidth parameters to output the features, after which emotion classification was performed using an SVM classifier. It was found that the emotion-classification performance was significantly improved by using the wavelet-transformed features and the SVM classifier [36]. Ayata et al. performed discrete wavelet transforms of EDA and brain waves, extracted the features, performed emotion recognition using machine learning, and compared machine-learning algorithms and feature-extraction methods [34]. Rudovic et al. demonstrated the features (z-normalized values, mean, SD, maximum, minimum, integral, slope, number of peaks, and number of zero crossings) that are required for EDA analysis [18].
Therefore, in this experiment, two feature sets were subjected to machine learning: features extracted directly from the raw EDA data and features extracted after applying the discrete wavelet transform. The reason for performing the wavelet transform is that biological signals are unsteady and change with time; therefore, the Fourier transform is inconvenient for analyzing EDA signals. EDA signals are nonperiodic and exhibit changes in amplitude, phase, and frequency. Wavelet transforms are generally considered to be effective in dealing with such unsteady signals [34]. The discrete wavelet transform is a technique developed to overcome the shortcomings of the Fourier transform on unsteady signals; it is less susceptible to noise and can easily be applied to unsteady signals [35]. The machine-learning method used in this work is the SVM, a supervised learning technique that can handle classification and regression and can deal with nonlinear data through margin maximization and the kernel method.
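The two analysis branches (features from the raw EDA versus features extracted after the discrete wavelet transform, each fed to an SVM) can be sketched as follows, assuming PyWavelets and scikit-learn and reusing the eda_features() helper sketched in Section 1.3; the wavelet family (db4), the single-level decomposition, and the SVM settings are illustrative assumptions, not the configuration used here.

```python
import numpy as np
import pywt
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def feature_vector(segment, use_wavelet=False, wavelet="db4"):
    """Build one feature vector, optionally from the DWT approximation of the segment."""
    if use_wavelet:
        approx, _detail = pywt.dwt(segment, wavelet)   # single-level discrete wavelet transform
        segment = approx                               # smoother, less noise-sensitive representation
    feats = eda_features(segment)                      # helper sketched in Section 1.3
    return np.array(list(feats.values()))

def evaluate(segments, labels, use_wavelet=False):
    """Cross-validated SVM accuracy for raw-data or wavelet-based feature sets."""
    X = np.vstack([feature_vector(s, use_wavelet) for s in segments])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    return cross_val_score(clf, X, labels, cv=5).mean()
```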

Features in EDA

The raw EDA data that were measured in this experiment and the data following the discrete wavelet transform were graphed. In this experiment, the data in the horizontal-axis (time-axis) direction were not constant because the time until the robot recognized the emotion of the subject varied. Noise was removed from the data that were subjected to the discrete wavelet transform, and the number of peaks was reduced. The horizontal axis of the graph represents the time (s) and the vertical axis represents the measured EDA data [μS]. Moreover, the raw EDA data acquired in this experiment and the Z-values obtained from the data following the wavelet transform were graphed; this graph shows the time (s) on the horizontal axis and the Z-values on the vertical axis.

2.5. Hypothesis

As mentioned, Feng et al. extracted features following the wavelet transform and demonstrated that the emotion-classification performance was significantly improved by using an SVM classifier. Subsequently, Ayata et al. performed the discrete wavelet transform of EDA and brain waves, extracted features, and performed emotion recognition using machine learning, showing the effectiveness of the discrete wavelet transform. Therefore, for these experimental data, we hypothesized that the emotion-recognition performance would be higher for features extracted after the wavelet transform than for features extracted from the raw data.

3. Results

The EDA data and Z-values of the subjects are presented in Figure 4. The mean, standard deviation, maximum, minimum, integral, slope, and zero crossings obtained from the EDA data are listed in Table 1 and Table 2. For each subject, the top four rows are the features calculated from the raw data, whereas the bottom four rows are the features obtained following the wavelet transform.

3.1. Features in Each Subject

  • When the average values of the raw data of subject 1 and the EDA data following wavelet transform were arranged in descending order, both were in the order of “joy,” “anger,” “sadness,” and “surprise.” As a result, in Russell’s circumplex model, “surprise,” with the highest arousal value, was the lowest.
  • When the average values of the EDA data from the raw data of subject 2 were arranged in descending order, the order was “surprise,” “anger,” “sadness,” and “joy.” However, when the EDA data following wavelet transform were sorted, the order was “surprise,” “sadness,” “joy,” and “anger.” Neither result was in the descending order of the arousal values in Russell’s circumplex model, and the order of the mean EDA values changed between before and after the wavelet transform was applied.
  • When the average values of the raw data of subject 3 and the EDA data following wavelet transform were arranged in descending order, both were in the order of “joy,” “surprise,” “sadness,” and “anger.” This result was not in descending order of the arousal value in Russell’s model.
  • When the average values of the raw data of subject 4 and the EDA data following wavelet transform were arranged in descending order, both were in the order of “sadness,” “anger,” “joy,” and “surprise.” As a result, “sad,” which is the lowest on the arousal axis of Russell’s model, was the highest, and “surprise,” which is the highest on that axis, was the lowest.
  • The EDA data of subject 5 had a higher EDA value for all data than the other subjects. When the average values of the raw data and the EDA data following wavelet transform were arranged in descending order, they were both in the order of “sad,” “surprise,” “joy,” and “anger.” In this result, “sad,” which should have the lowest arousal value in Russell’s model, was the highest.
  • When the average values of the raw data of subject 6 and the EDA data following wavelet transform were arranged in descending order, they were both in the order of “surprise,” “sadness,” “joy,” and “anger.” This result demonstrates that “sad,” which had the lowest arousal value in Russell’s model, was second highest.
  • The EDA data of subject 7 did not visually differ for each emotion. When the average values of the raw data and the EDA data following wavelet transform were arranged in descending order, they were both in the order of “joy,” “anger,” “sadness,” and “surprise.” This result indicates that “surprise,” which has the highest arousal value in Russell’s model, was the lowest.
  • When the average values of the raw data of subject 8 and the EDA data following wavelet transform were arranged in descending order, both were in the order of “surprise,” “sadness,” “anger,” and “joy.” As a result, “sadness,” which was the lowest arousal value in Russell’s model, was the second highest.
  • When the average values of the raw data of subject 9 and the EDA data following wavelet transform were arranged in descending order, both were in the order of “sadness,” “anger,” “surprise,” and “joy.” In this result, “sadness,” which has the lowest arousal value in Russell’s model, was the highest.
  • When the average values of the raw data of subject 10 and the EDA data following wavelet transform were arranged in descending order, both were in the order of “anger,” “joy,” “surprise,” and “sadness.” According to this result, “sadness,” which has the lowest arousal value in Russell’s model, was the only emotion whose position was consistent with its arousal value (the lowest).
  • When the average values of the raw data of subject 11 and the EDA data following wavelet transform were arranged in descending order, both were in the order of “surprise,” “sadness,” “joy,” and “anger.” This result was not in descending order of the arousal value in Russell’s model. Moreover, in this experiment, the time that was required to acquire the emotions was not constant, but subject 11 exhibited a significant change for each emotion according to the Z-value.

3.2. Average of EDA Data

The average values of the data acquired in this experiment for each emotion are depicted in Figure 5. Because one of the subjects had much larger values than the other subjects, the data are shown in a log–log graph so that the distribution can be understood. When the EDA data were sorted in descending order according to the mean value of the raw data and the mean value of the data after the wavelet transform, the orders for the raw data and the wavelet-transformed data were the same, except for those of one person. In Russell’s circumplex model, when the emotional arousal values are arranged in descending order, the order is “surprise,” “anger,” “joy,” and “sadness.” None of the EDA data were arranged in the order of the arousal value.

3.3. Recognition Rate

The results of the machine learning (SVM) using the features extracted from the raw data indicate that the emotion-recognition rate was 38.6%. The confusion matrix is presented in Table 3 and the recognition rate for each emotion is shown in Figure 6. The results of performing the wavelet transform, extracting the features, and performing machine learning indicate that the emotion-recognition rate was 34.0%; the corresponding confusion matrix is also presented in Table 3 and the recognition rate for each emotion is shown in Figure 6. In the confusion matrix, the identified emotion appears in the horizontal direction and the actual emotion in the vertical direction, and the heat-map values are rounded to the nearest whole number. As illustrated in the left of Figure 6, for the analysis with the raw data, the recognition rate of “anger” was higher than that of the other emotions. As can be observed from the right of Figure 6, when the features were extracted following the wavelet transform and emotion identification was performed, the identification rate of “surprise” was approximately 82%, which was higher than that of the other emotions. However, the recognition rate of “sad” was less than 10% in both cases.
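As a quick check on these figures, the per-emotion and overall recognition rates can be recomputed directly from the confusion matrices in Table 3; the short NumPy sketch below does this (the matrix values are taken from Table 3, everything else is illustrative).

```python
import numpy as np

# Rows = actual emotion, columns = identified emotion (order: surprise, anger, happy, sad).
raw = np.array([[4, 5, 2, 0],
                [0, 7, 4, 0],
                [1, 4, 5, 1],
                [1, 8, 1, 1]])
wavelet = np.array([[9, 0, 2, 0],
                    [7, 1, 3, 0],
                    [6, 0, 5, 0],
                    [10, 1, 0, 0]])

for name, cm in [("raw", raw), ("wavelet", wavelet)]:
    per_emotion = cm.diagonal() / cm.sum(axis=1)   # recognition rate for each emotion
    overall = cm.diagonal().sum() / cm.sum()       # overall recognition rate
    print(name, np.round(per_emotion, 2), round(overall, 3))
# raw: overall ~0.386; wavelet: overall ~0.34, with "surprise" ~0.82 and "sad" 0.0
```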

4. Discussion

In this experiment, a robot autonomously recognized emotions and could feed the recognized emotions back to the subject using the color of its eyes. However, the emotion-recognition rate was extremely low. This is because the number of subjects was small and the data of children with various developmental disabilities were identified collectively, without distinction. Furthermore, according to the emotion-recognition results, the recognition of “sadness” was less than 10% in both cases. “Sadness” had the lowest arousal value among the emotions used in Russell’s circumplex model, whereas “surprise” had the highest. The “sad” samples in the confusion matrix (Table 3) were mostly classified as “anger” in the raw data and as “surprise” after the discrete wavelet transform. When the features were extracted from the data subjected to the discrete wavelet transform and used for identification, over 50% of the samples of all four emotions were recognized as “surprise.” This may be related to the fact that children with developmental disabilities are not good at expressing their emotions. According to the experimental results, when the mean values of the EDA data for each of the four emotions were sorted in descending order, none of the subjects’ data were arranged in descending order of the arousal value in Russell’s model. “Sad,” which has the lowest arousal value in Russell’s circumplex model, often appeared in the top two when the average EDA values were arranged in descending order, even though the subjects were expressing “sad.” This could be because the subjects could not express this emotion well, even if they believed they were doing so.

5. Conclusions

According to the experimental results, it was possible for the robot to recognize emotions autonomously and to feed the recognized emotions back to the subjects. However, emotion recognition using the EDA data exhibited a low recognition rate, which remains a problem. In this experiment, the number of subjects was small; therefore, it was not possible to distinguish between the symptoms of the children with developmental disorders. Furthermore, “sad” has the lowest arousal value among the emotions used in Russell’s circumplex model, but it was associated with the highest arousal in this analysis. Based on these results, it is necessary to analyze the symptoms of children with developmental disabilities.
Differences exist in the strengths and weaknesses of emotional expression depending on the symptoms of children with developmental disorders. As the number of subjects in this experiment was small, the analysis was performed on the group of children with developmental disabilities as a whole rather than by symptom. This is considered to be one reason why the recognition rate of emotions was low. According to the study of Robins et al. [42], the type of developmental disability affects interactions with a robot, which supports taking symptom-based analysis into account. In future experiments, by increasing the number of subjects and analyzing the symptoms of children with developmental disabilities, the characteristics of each symptom should appear in the EDA data, and it should be possible to identify emotions with a higher recognition rate than that obtained in this experiment. Moreover, in this experiment, the recognition rate was higher when the raw data were analyzed without the discrete wavelet transform; this result may also change as the number of subjects increases. Thus, it is necessary to investigate whether the recognition rate can be improved by symptom-based analysis.

Author Contributions

J.L. is the lead author of this work. T.N. and J.L. conducted the method and interpreted the results. T.N. supported the data collection and analysis of the results. J.L. and T.N. contributed to the writing of the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Grant-in-Aid for Early-Career Scientists, grant no. JP 20K19327.

Institutional Review Board Statement

This research was approved by the institutional review board (IRB) of Chubu University, Japan.

Informed Consent Statement

Informed consent was obtained from all participants involved in the study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. WHO. World Report on Disability 2011, Chapter 2 Disability—A Global Picture; WHO: Geneva, Switzerland, 2011.
  2. WHO. World Report on Disability 2011, Chapter 5 Assistance and Support; WHO: Geneva, Switzerland, 2011.
  3. Martinez-Martin, E.; Escalona, F.; Cazorla, M. Socially assistive robots for older adults and people with autism: An overview. Electronics 2020, 9, 367.
  4. Lee, J.; Takehashi, H.; Nagai, C.; Obinata, G.; Stefanov, D. Which robot features can stimulate better responses from children with autism in robot-assisted therapy? Int. J. Adv. Robot. Syst. 2012, 9, 72.
  5. Wood, L.J.; Dautenhahn, K.; Lehmann, H.; Robins, B.; Rainer, A.; Syrdal, D.S. Robot-Mediated Interviews: Do robots possess advantages over human interviewers when talking to children with special needs? In Social Robotics, Proceedings of the 5th International Conference on Social Robotics, ICSR 2013, Bristol, UK, 27–29 October 2013; Springer: Cham, Switzerland, 2013; pp. 54–63.
  6. Wood, L.J.; Dautenhahn, K.; Rainer, A.; Robins, B.; Lehmann, H.; Syrdal, D.S. Robot-mediated interviews-how effective is a humanoid robot as a tool for interviewing young children? PLoS ONE 2013, 8, e59448.
  7. Goodrich, M.A.; Schultz, A.C. Human-robot interaction: A survey. Found. Trends® Hum. Comput. Interact. 2008, 1, 203–275.
  8. Hentout, A.; Aouache, M.; Maoudj, A.; Akli, I. Human–robot interaction in industrial collaborative robotics: A literature review of the decade 2008–2017. Adv. Robot. 2019, 33, 764–799.
  9. Vasco, V.; Antunes, A.G.P.; Tikhanoff, V.; Pattacini, U.; Natale, L.; Gower, V.; Maggiali, M. HR1 Robot: An Assistant for Healthcare Applications. Front. Robot. AI 2022, 9, 12.
  10. Imai, M.; Ono, T. Human-Robot Interaction. J. Soc. Instrum. Control. Eng. 2005, 44, 846–852. (In Japanese)
  11. Kawashima, K. A Trial of Case-Study Classification and Extraction of Therapeutic Effects of Robot-Therapy: Literature Review with Descriptive-Analysis. Clin. Psychol. Dep. Res. Rep. 2013, 6, 155–167.
  12. Wada, K.; Shibata, T.; Tanie, K. Robot Therapy at a Health Service Facility for the Aged. Trans. Soc. Instrum. Control. Eng. 2006, 42, 386–392.
  13. Troncone, A.; Saturno, R.; Buonanno, M.; Pugliese, L.; Cordasco, G.; Vogel, C.; Esposito, A. Advanced Assistive Technologies for Elderly People: A Psychological Perspective on Older Users’ Needs and Preferences (Part B). Acta Polytech. Hung 2021, 18, 29–44.
  14. Kanner, L. Autistic disturbances of affective contact. Nervous Child 1943, 2, 217–250.
  15. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders, 5th ed. (DSM-5®); American Psychiatric Association: Washington, DC, USA, 2013.
  16. Sadock, B.J. Kaplan and Sadock’s Synopsis of Psychiatry: Behavioral Sciences/Clinical Psychiatry; Lippincott Williams & Wilkins: Baltimore, MD, USA, 2007.
  17. Alabdulkareem, A.; Alhakbani, N.; Al-Nafjan, A. A Systematic Review of Research on Robot-Assisted Therapy for Children with Autism. Sensors 2022, 22, 944.
  18. Rudovic, O.; Lee, J.; Dai, M.; Schuller, B.; Picard, R.W. Personalized machine learning for robot perception of affect and engagement in autism therapy. Sci. Robot. 2018, 3, 19.
  19. Wood, L.J.; Zaraki, A.; Robins, B.; Dautenhahn, K. Developing kaspar: A humanoid robot for children with autism. Int. J. Soc. Robot. 2021, 13, 491–508.
  20. Kozima, H.; Nakagawa, C.; Yasuda, Y. Children–robot interaction: A pilot study in autism therapy. Prog. Brain Res. 2007, 164, 385–400.
  21. Kozima, H.; Michalowski, M.P.; Nakagawa, C. Keepon. Int. J. Soc. Robot. 2009, 1, 3–18.
  22. Miyamoto, E.; Lee, M.; Fujii, H.; Okada, M. How can robots facilitate social interaction of children with autism? Possible implications for educational environments. In Proceedings of the Fifth International Workshop on Epigenetic Robotics: Modeling Cognitive Development in Robotic Systems, Nara, Japan, 22–24 July 2005.
  23. Costa, A.P.; Steffgen, G.; Lera, F.R.; Nazarikhorram, A.; Ziafati, P. Socially assistive robots for teaching emotional abilities to children with autism spectrum disorder. In Proceedings of the 3rd Workshop on Child-Robot Interaction at HRI2017, Vienna, Austria, 6 March 2017.
  24. Casas-Bocanegra, D.; Gomez-Vargas, D.; Pinto-Bernal, M.J.; Maldonado, J.; Munera, M.; Villa-Moreno, A.; Stoelen, M.F.; Belpaeme, T.; Cifuentes, C.A. An Open-Source Social Robot Based on Compliant Soft Robotics for Therapy with Children with ASD. Actuators 2020, 9, 91.
  25. Shamsuddin, S.; Ismail, L.I.; Yussof, H.; Zahari, N.I.; Bahari, S.; Hashim, H.; Jaffar, A. Humanoid robot NAO: Review of control and motion exploration. In Proceedings of the 2011 IEEE International Conference on Control System, Computing and Engineering, Penang, Malaysia, 25–27 November 2011; pp. 511–516.
  26. Shamsuddin, S.; Yussof, H.; Ismail, L.I.; Mohamed, S.; Hanapiah, F.A.; Zahari, N.I. Initial response in HRI-a case study on evaluation of child with autism spectrum disorders interacting with a humanoid robot Nao. Procedia Eng. 2012, 41, 1448–1455.
  27. Boucsein, W. Electrodermal Activity; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012.
  28. Stern, R.M.; Ray, W.J.; Quigley, K.S. Psychophysiological Recording; Oxford University Press: Oxford, UK, 2001.
  29. Bradley, M.M.; Miccoli, L.; Escrig, M.A.; Lang, P.J. The pupil as a measure of emotional arousal and autonomic activation. Psychophysiology 2008, 45, 602–607.
  30. Dawson, M.E.; Schell, A.M.; Filion, D.L. The electrodermal system. In Handbook of Psychophysiology; Cacioppo, J.T., Tassinary, L.G., Berntson, G.G., Eds.; Cambridge University Press: Cambridge, UK, 2017; pp. 217–243.
  31. Dawson, M.E.; Schell, A.M.; Courtney, C.G. The skin conductance response, anticipation, and decision-making. J. Neurosci. Psychol. Econ. 2011, 4, 111.
  32. Caruelle, D.; Gustafsson, A.; Shams, P.; Lervik-Olsen, L. The use of electrodermal activity (EDA) measurement to understand consumer emotions–A literature review and a call for action. J. Bus. Res. 2019, 104, 146–160.
  33. Russell, J.A. A circumplex model of affect. J. Personal. Soc. Psychol. 1980, 39, 1161.
  34. Ayata, D.; Yaslan, Y.; Kamaşak, M. Emotion recognition via galvanic skin response: Comparison of machine learning algorithms and feature extraction methods. J. Electr. Electron. Eng. 2017, 17, 3147–3156.
  35. Akansu, A.N.; Serdijn, W.A.; Selesnick, I.W. Emerging applications of wavelets: A review. Phys. Commun. 2010, 3, 1–18.
  36. Feng, H.; Golshan, H.M.; Mahoor, M.H. A wavelet-based approach to emotion classification using EDA signals. Expert Syst. Appl. 2018, 112, 77–86.
  37. Kelley, J.F. An iterative design methodology for user-friendly natural language office information applications. ACM Trans. Inf. Syst. 1984, 2, 26–41.
  38. Taheri, A.; Meghdari, A.; Alemi, M.; Pouretemad, H.; Poorgoldooz, P.; Roohbakhsh, M. Social robots and teaching music to autistic children: Myth or reality. In Social Robotics, Proceedings of the 8th International Conference on Social Robotics, ICSR 2016, Kansas City, MO, USA, 1–3 November 2016; Springer: Cham, Switzerland, 2016; pp. 541–550.
  39. Rudovic, O.; Lee, J.; Mascarell-Maricic, L.; Schuller, B.W.; Picard, R.W. Measuring engagement in robot-assisted autism therapy: A cross-cultural study. Front. Robot. AI 2017, 4, 36.
  40. Gross, T.F. The perception of four basic emotions in human and nonhuman faces by children with autism and other developmental disabilities. J. Abnorm. Child Psychol. 2004, 32, 469–480.
  41. Baron-Cohen, S.; Leslie, A.M.; Frith, U. Does the autistic child have a “theory of mind”? Cognition 1985, 21, 37–46.
  42. Robins, B.; Ferrari, E.; Dautenhahn, K.; Kronreif, G.; Prazak-Aram, B.; Gelderblom, G.j.; Tanja, B.; Caprino, F.; Laudanna, E.; Marti, P. Human-centred design methods: Developing scenarios for robot assisted play informed by user panels and field trials. Int. J. Hum. Comput. Stud. 2010, 68, 873–898.
Figure 1. The procedure of the interaction between child and NAO robot.
Figure 2. (Left) the scene in camera 1, C: child, T: therapist; (center) the scene in camera 3, C: child, T: therapist; (right) the experiment setup, C: child, T: therapist. At the very beginning of the session, the child and therapist sat in front of a desk for around 10 min to relax before the child was exposed to the robot. CAM 1 records only the face and upper body of the participants. CAM 2 is used for recognizing facial expressions. CAM 3 records the whole scene of the session for reference.
Figure 3. (Left) facial-expression cards, from left, sad, anger, surprise, happy; (right) the expressions by NAO, from left, sad, anger, surprise, happy.
Figure 4. Raw and wavelet-transformed data, and Z-values of all subjects.
Figure 5. Average of the EDA data for each emotion in all subjects.
Figure 6. (Left) recognition rate for each emotion in the raw data; (right) recognition rate for each emotion in the wavelet-transformed data.
Table 1. Features in EDA data of all subjects (1–8).
S1  Emotion  Mean  SD  Max.  Min.  Integral  Slope  Peak  Zero c.
RawAnger0.431490.002030.436540.427584.20634−0.0005070
Joy0.461360.011700.523600.437826.79693−0.00096100
Sadness0.414640.008170.440390.3981410.26080.00036170
Surprise0.352910.010880.387900.3277316.13930.00035420
WaveletAnger0.431500.001970.436690.427614.20636−0.000560
Joy0.461170.009490.494020.439016.79779−0.0010380
Sadness0.414650.008070.442040.3998410.26100.00036120
Surprise0.352910.010750.388820.3350416.13950.00034200
S2  Emotion  Mean  SD  Max.  Min.  Integral  Slope  Peak  Zero c.
RawAnger0.529040.043890.576080.43916.21269−0.0094770
Joy0.479700.022360.509510.387903.25197−0.0015560
Sadness0.505820.237130.6605804.47646−0.0576560
Surprise0.684450.028830.733810.618605.305360.0030850
WaveletAnger0.431500.001970.436690.427614.20636−0.0005060
Joy0.479250.021330.503000.402573.24552−0.0009340
Sadness0.505910.232010.68548−0.004234.47763−0.057762
Surprise0.684450.028510.735360.617315.304900.0031440
S3  Emotion  Mean  SD  Max.  Min.  Integral  Slope  Peak  Zero c.
RawAnger0.010570.003420.015370.001290.08179−0.0006580
Joy1.34860−0.052131.526680.323008.54974−0.0521360
Sadness0.139250.010500.162620.120360.66230−0.0007940
Surprise0.197590.024130.258650.167742.169170.00464120
WaveletAnger0.010580.003070.014950.003150.08181−0.0006330
Joy1.351710.215111.598360.507618.54695−0.0491450
Sadness0.139160.009980.155750.118710.66207−0.0007620
Surprise0.198870.024370.255800.169112.233380.0049660
S4  Emotion  Mean  SD  Max.  Min.  Integral  Slope  Peak  Zero c.
RawAnger2.842520.191243.301772.6797124.83880.0579960
Joy2.511120.114412.76192.3664019.4837−0.0149950
Sadness3.132110.097463.376413.0231524.24890.0274650
Surprise2.259340.169822.591582.0294819.7752−0.0547840
WaveletAnger2.842700.191623.292002.6775224.83800.0581550
Joy2.510670.112612.746372.3721319.48200−0.0150230
Sadness3.132100.096333.374723.0508424.24870.0274620
Surprise2.259340.169822.591582.0294819.7752−0.0547840
S5  Emotion  Mean  SD  Max.  Min.  Integral  Slope  Peak  Zero c.
RawAnger33.97710.8865136.045332.9277127.196−0.6272420
Joy34.35200.3432534.839433.4283163.1520.1648740
Sadness41.12730.4490941.630839.7299359.940−0.0825260
Surprise37.51901.5968139.741834.0201178.6220.7780620
WaveletAnger33.97070.8751236.063033.0540127.000−0.6478120
Joy34.35710.3370034.780733.5023163.0000.1590920
Sadness41.12720.4359341.529239.8615360.000−0.0842550
Surprise37.51621.6044939.525834.0479179.0000.7843110
S6  Emotion  Mean  SD  Max.  Min.  Integral  Slope  Peak  Zero c.
RawAnger1.117671.056554.3018809.37177−0.3222450
Joy4.503571.627586.067180.8043939.17860.4767730
Sadness6.140010.106946.249705.8246923.0507−0.0733900
Surprise7.875000.308946.474274.7989492.4318−0.0355390
WaveletAnger1.117601.058804.29840−0.160299.38515−0.3242044
Joy4.504901.626746.033910.9692439.19200.4757830
Sadness6.141430.100926.249105.8737423.0504−0.0715210
Surprise5.871190.294146.474825.2558792.4337−0.0354960
S7  Emotion  Mean  SD  Max.  Min.  Integral  Slope  Peak  Zero c.
RawAnger0.111540.003960.119040.097283.42962−0.00034230
Joy0.115400.016260.148480.078084.58717−0.00020270
Sadness0.102860.011970.125440.044803.789280.00033290
Surprise0.096540.013400.129280.055043.83536−0.00054290
WaveletAnger0.111530.003830.119880.098263.42974−0.00034180
Joy0.115400.016120.149510.081614.58725−0.00020220
Sadness0.102860.011760.118500.044233.789260.00033250
Surprise0.096540.013100.121130.058893.83521−0.00054220
S8  Emotion  Mean  SD  Max.  Min.  Integral  Slope  Peak  Zero c.
RawAnger5.911030.178736.259885.60335116.7140.0031490
Joy4.682590.113624.847514.3535169.0545−0.01744120
Sadness6.439580.128886.689356.1940794.9965−0.0058870
Surprise6.700390.343907.778546.17104219.5470.01438210
WaveletAnger5.911380.176956.197665.61145117.0000.00329120
Joy4.682620.113024.847654.3986969.1000−0.0174590
Sadness6.439410.128536.687546.2115595.0000−0.0059060
Surprise6.700310.343007.695056.15898220.0000.01438180
Table 2. Features in EDA data of all subjects (9–11).
S9  Emotion  Mean  SD  Max.  Min.  Integral  Slope  Peak  Zero c.
RawAnger12.80380.5048113.610611.5710137.666−0.0928280
Joy12.46840.8240713.63759.82250155.7230.0446370
Sadness13.01960.6521514.032411.8252153.1470.0176890
Surprise12.52250.3353613.203012.0830134.6190.0871570
WaveletAnger12.80470.4967213.595311.6240138.000−0.0921060
Joy12.47110.8085313.68759.95330159.0000.0441550
Sadness13.02180.6370214.020311.9910153.0000.0187860
Surprise12.52250.3354313.210912.0873135.0000.0874150
S10  Emotion  Mean  SD  Max.  Min.  Integral  Slope  Peak  Zero c.
RawAnger4.629880.540545.565413.5087268.39350.0909150
Joy4.394840.555535.060213.4895073.66480.09339120
Sadness2.429790.160782.693102.1259423.6907−0.0477930
Surprise3.633770.226594.041693.1276157.29700.0189170
WaveletAnger4.629500.538325.514743.4982268.39010.0910740
Joy4.395190.554965.053783.5131973.66760.0933770
Sadness2.429850.159342.694422.1608223.6916−0.0478820
Surprise3.633730.226054.020523.1182857.29660.0189440
S11  Emotion  Mean  SD  Max.  Min.  Integral  Slope  Peak  Zero c.
Raw dataAnger4.776080.116095.004404.5806975.25650.0034880
Joy4.986690.142465.293704.78678118.4640.00435130
Sadness5.828860.185906.202505.6217562.68570.0262850
Surprise6.323820.262726.991166.0317193.27780.0023340
WaveletAnger4.776120.115785.003774.5827575.25600.0035070
Joy4.986700.142405.296044.78298118.0000.0043590
Sadness5.828870.185816.197855.6220962.68560.0262720
Surprise6.323890.262786.989306.0343893.27930.0022830
Table 3. Confusion matrix of raw data and wavelet data.
Confusion Matrix    Emotion     Surprise    Anger    Happy    Sad
Raw data            Surprise    4           5        2        0
                    Anger       0           7        4        0
                    Happy       1           4        5        1
                    Sad         1           8        1        1
Wavelet             Surprise    9           0        2        0
                    Anger       7           1        3        0
                    Happy       6           0        5        0
                    Sad         10          1        0        0
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
