Robot Assisted Feeding With A General-Purpose Manipulator
Seminar Report submitted in partial fulfillment of the requirements for
award of the Degree of
Master of Technology
in
Signal Processing
by
Ms. ANISHA ANTEX J
LBT19ECSP02
This is to certify that the Seminar report entitled “ROBOT-ASSISTED FEEDING WITH A
GENERAL-PURPOSE MOBILE MANIPULATOR ” submitted by Ms. ANISHA ANTEX J to
the Department of Electronics and Communication Engineering, LBS Institute of Technology for Women,
Poojappura, Thiruvananthapuram, Kerala, India, in partial fulfillment of the requirements for award of the
Degree of Master of Technology in Signal Processing is a bonafide record of the work carried out by
her under my supervision. The seminar report has not been submitted to any other university or
institution for any degree or diploma.
I would like to express my deep sense of gratitude to our Director Prof. (Dr) Abdul Rahiman and
Principal Dr. Jayamohan J for rendering all the facilities for the completion of my seminar work.
I am gratefully indebted to the Head of the Department, Seminar Coordinator Prof. Dr. Lizy
Abraham, Assistant Professor, Electronics and Communication Engineering department and my
seminar guide Prof. Dr. Deepambika V A, Assistant Professor, Electronics and Communication
Engineering Department, for their valuable advice and encouragement which enabled me to
complete this seminar work successfully.
I also thank all other Faculty members and Staff of Electronics and Communication Engineering
Department for their timely help and advice.
I wish to express my deep sense of gratitude to all my family members, well-wishers and friends
for their constant support throughout the course of this seminar work.
Above all I praise and thank the God Almighty for the support, guidance and blessings bestowed
on me throughout my seminar work.
ABSTRACT
A meal assistance system serves as a feeding agent for people with motor impairments who
are unable to take food on their own. Caregivers play a vital role as feeding assistants, but
care recipients may still feel embarrassed. For autonomous eating, the assistive system uses
a robotic arm and facial recognition to transfer food from a plate to a person's mouth. The system
uses a general-purpose manipulator, the Willow Garage PR2, which can perform tasks such as
scooping, spoon wiping, and delivery. A user commands a task through a web-based
graphical user interface. Willingness to eat is determined by detecting mouth opening.
The face recognition system uses machine learning based on a Haar cascade classifier to
recognize the face of a patient. The robot is equipped with an RGB-D sensor to detect and
collect bites of food for people with disabilities. In the present pandemic situation, a contact-free
alternative for feeding assistance is essential.
Keywords: Mobile manipulator, Graphical User Interface (GUI), Histogram of Oriented Gradients (HOG).
Contents
LIST OF FIGURES
CHAPTER 1  INTRODUCTION
CHAPTER 2  BACKGROUND
CHAPTER 3  FEEDING SYSTEM
CHAPTER 4  METHODOLOGY
CHAPTER 5  IMPLEMENTATION
CHAPTER 6  CONCLUSION AND RESEARCH SCOPE
REFERENCES
LIST OF FIGURES
Figure 5.2.1 The number of daily successful and failed executions in [1].
LIST OF TABLES
Table 5.2.1 Performance of the camera-based mouth open detection method in [8].
LIST OF ABBREVIATIONS
CHAPTER 1
INTRODUCTION
Individuals with motor impairments, such as those with paraplegia, spinal cord injuries, wartime
injuries, or amputations, rely on others to assist them in their daily activities. Self-feeding is one
of the most important activities of daily living that a person needs to perform in order to feel
independent. This task can be impossible for the subset of older adults and people with
disabilities who have lost upper-limb function. As a result, there is a need for assistive devices
capable of helping a person self-feed. Such devices are called meal assistance robots.
Today, care recipients face increasing labour costs and a shortage of care workers, and are thus
turning to robotics. Robots can perform tasks that most humans could not, such as working in
challenging conditions and performing demanding feats with great precision. What makes an
organization ready for the challenges of the future is not just technology but the management of
human resources, especially the impact of robots on the employment and motivation of
employees in the health care sector. Assistive robots are a class of devices that can provide
physical, mental, or social assistance to people with disabilities. As autonomous service robots
become more affordable and thus available to the general public, there is a growing need for
user-friendly interfaces that support basic needs such as walking, feeding, and shaving.
Consequently, this system focuses on the development of a meal assistance robot capable of
naturally and autonomously feeding a user. To achieve this, the robot detects the world state
through visual inputs and the user's requests through visual stimuli. A meal assistance robot is
an assistive device that aids individuals who cannot independently bring food to their mouths.
With the shortage of caretakers and the pace of modern society, finding alternative solutions for
the self-feeding of disabled individuals is of paramount importance. The meal feeder robot is
controlled by user intentions inferred from eye blinking, combined with camera-based
mouth-open detection.
The meal assistance system uses a general-purpose mobile manipulator, a Willow Garage
PR2, which has the potential to serve as a versatile form of assistive technology. The system
also provides a graphical user interface (GUI) that lets people with various motor impairments
easily command three independent subtasks: scooping/stabbing (food acquisition), spoon
wiping (removing excess food), and delivery (feeding).
The system provides active feeding assistance by delivering food from a bowl to a user's mouth.
A user can select a preferred subtask via the graphical user interface. In this active feeding
assistance, the PR2 uses visually guided movements to autonomously scoop/stab food and
deliver it inside the user's mouth. A real-time algorithm detects eye opening in a video sequence
from a standard camera. The robot then automatically estimates the location of food, scoops or
stabs it depending on the current feeding tool, and places it inside the user's mouth. The system
supports five utensils and two types of bowls so that its behaviour can be adapted to the
user-selected food, and it provides software and hardware interfaces that allow caregivers to
replace the utensil and bowl depending on the type of food. Users can also access the interface
from a web browser, enabling the use of a variety of devices and increasing accessibility. The
robot-assisted feeding hardware consists of a PR2 robot, tool holders, and additional sensors.
While providing visually guided feeding assistance, the PR2 holds a bowl in its right end
effector and a utensil in its left end effector. The user commands a task through the graphical
user interface (GUI).
The report is organized as follows. Following this introduction, Chapter 2 reviews related work
and existing systems. Chapter 3 describes the feeding system, covering the vision system and the
control system. Chapter 4 presents the methodology, including the block diagram, the finite state
machine, and the working of the system. Chapter 5 covers the implementation and evaluates the
results. Chapter 6 concludes the report and discusses directions for future work.
CHAPTER 2
BACKGROUND
The feeding method is an important factor to consider when designing a meal assistance robot.
Different feeding methods are used for patients with different disabilities. Oral feeding and tube
feeding are the two most commonly used methods for disabled patients.
Conscious patients generally prefer oral feeding. However, disabled patients need external
assistive feeding methods in place of their own hands. The spoon is the most common external
assistive feeding device used in oral feeding, and spoon-based feeding is the approach used by
most existing meal assistance robots. Handy1, the Winsford feeder, Meal Buddy, My Spoon,
Neater Eater, and the Mealtime Partner dining system are examples of spoon-based feeding
systems. A few meal assistance robots, however, were designed with grippers that can hold a
bottle or glass. The self-feeding assistive robot designed by the Korea National Rehabilitation
Research Institute has both a gripper and a spoon for feeding.
There is a wide variety of assistive manipulators, such as 7-DoF arms mounted on a wheelchair
or desk, that provide general assistance near the human. Manipulators can thus be categorized by
mobility into fixed-base and mobile-base manipulators. The professional vocational assistive
robot (ProVAR) is a representative desktop manipulator placed in an office workspace. Handy-1
is another adjustable table-mounted manipulator for activities of daily living (ADLs) such as
eating, drinking, and washing. These mounted robots were designed to perform various ADLs
using a general-purpose manipulator, but their limited workspaces restrict the range of available
activities. A mobile base can increase the workspace of a robot and the number of tasks it can
perform. In feeding, a fixed-base robot often requires caregivers to relocate the robot or the user
at the beginning of, or during, the task. Without mobility, robots are restricted to a narrow set of
tasks.
The use of meal assistance robots is an alternative solution for people with various levels of
motor impairment. Many such robots follow predefined trajectories in which food and mouth
locations are hard coded. A recently released robot, Obi, uses kinesthetic teaching from
caregivers, but it still provides passive assistance in that it only moves to a specified location
rather than adapting to the location of the user's mouth; it also allows each user to select a
desired food-taking location via a graphical user interface (GUI). Kobayashi et al. introduced an
automatic remnant-food scooping method using a laser range finder. Similarly, the present
system selects a scooping location using an RGB-D camera and autonomously retrieves the
food. For delivering food, most robotic systems use passive feeding executions in which the
robot conveys food to a predefined location, typically in front of the user's mouth, and depend
on the user's upper-body movement to reach the food.
Despite many attempts at both the commercial and research level, controlling meal assistance
robots according to user intentions remains a challenging task. A user can command a sequence
of scooping and delivery motions via a joystick or a button. Several other control methods have
therefore been explored, such as chin-movement-based control, eye tracking, head-movement
tracking, electrooculography (EOG) signal-based control, and electromyography (EMG)
signal-based control. These techniques are expensive, and for bio-signal approaches it is
difficult to determine where the measured electrical activity originates.
CHAPTER 3
FEEDING SYSTEM
The vision system is responsible for tracking the person's face using an RGB-D sensor and
detecting the food location; the scooping and stabbing subtasks aim to pick up and hold food. A
face recognition system may use either offline or real-time input depending on the application.
In this work, a real-time face recognition system using a Convolutional Neural Network (CNN)
is designed and evaluated. A CNN is a deep learning architecture commonly recommended for
image applications because it performs feature extraction and classification jointly. The input
layer holds the raw pixel values of the images. The convolution layer consists of kernels (filters)
of a fixed size that slide over the image in a window fashion, performing the convolution
operation to extract features. Padding is applied to the input image to avoid uneven mapping
with the filter size. ReLU (rectified linear unit) is an element-wise activation function that sets
negative values to zero. The pooling layer performs down-sampling and dimensionality
reduction, which in turn reduces the computational power required to process the data; it also
uses a kernel that slides like a window over the input to extract dominant features that are
rotation- and position-invariant. Max pooling and average pooling are the two common pooling
functions. In the fully connected layer, each neuron in the input is connected to each neuron in
the output; this layer computes the score of each class, producing N outputs, where N is the
number of classes to be classified. The class with the maximum score is taken as the prediction
of the CNN.
The real-time image captured from the camera is first fed to the Viola-Jones algorithm for face
detection. The cropped face image is then converted to grayscale, resized to 120×120 pixels, and
fed to the first convolution layer, comprising 32 filters of size 3×3 pixels. The weights of these
filters are initialized to random values and updated by the backpropagation algorithm over a
number of epochs to yield the final weights, which are later used during the classification phase.
The output of the convolution + ReLU layer is fed to a pooling layer using a max-pooling
function with a 4×4 window; max pooling gave better accuracy than average pooling. The
pooling output is fed to a dropout layer, then flattened and fed to a dense layer for classification.
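A minimal Keras sketch of such a network is shown below. The number of classes, dropout rate, and optimizer are illustrative assumptions; only the input size, filter count, kernel size, and pooling window follow the description above.

# Sketch of the face-recognition CNN described above (class count, dropout
# rate, and optimizer are placeholders, not values from the report).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # hypothetical number of enrolled users

model = models.Sequential([
    layers.Input(shape=(120, 120, 1)),                 # 120x120 grayscale face crop
    layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
    layers.MaxPooling2D(pool_size=(4, 4)),             # max pooling, 4x4 window
    layers.Dropout(0.25),                              # dropout layer
    layers.Flatten(),
    layers.Dense(NUM_CLASSES, activation="softmax"),   # one score per class
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()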
The manipulator executes planned motion primitives by generating joint torque commands τ ∈ R^n,
where n is the number of joints. To avoid having a stiff arm make contact with the human body,
the system uses a 50 Hz mid-level model predictive controller (MPC) followed by a 1 kHz
low-level PID controller with low gains. The MPC outputs a desired change of joint angles Δθ*
that minimizes a quadratic objective function including contact force and position error:

$$\Delta\theta^{*} = \arg\min_{\Delta\theta}\;\Bigl\|\begin{pmatrix}\Delta p_d\\ \Delta q_d\end{pmatrix} - J_{ee}\Bigl(K + \sum_{i=1}^{m} J_{c_i}^{T} K_{c_i} J_{c_i}\Bigr)^{-1} K\,\Delta\theta\Bigr\|^{2} \qquad (1)$$
subject to Δθ_min ≤ Δθ ≤ Δθ_max and θ_d = θ + Δθ, where Δθ, Δp_d, and Δq_d are the changes of
joint angles, end-effector position, and end-effector orientation, respectively. The orientation
changes are obtained via spherical linear interpolation when planning an end-effector trajectory.
J_ee ∈ R^(6×n), K ∈ R^(n×n), K_ci ∈ R^(3×3), and J_ci ∈ R^(3×n) are the Jacobian matrix at the
end effector, the joint stiffness matrix, the contact stiffness matrices, and the Jacobian matrices
at the i-th contact point on the robot's skin sensor, respectively. K_ci is set to zero to achieve
consistent motions.
This prevents the arm's motion from being altered by a false-positive contact detection from the
tactile sensing sleeve, and results in the following simplified form of Eq. (1):

$$\Delta\theta^{*} = \arg\min_{\Delta\theta}\;\Bigl\|\begin{pmatrix}\Delta p_d\\ \Delta q_d\end{pmatrix} - J_{ee}\,\Delta\theta\Bigr\|^{2} \qquad (2)$$
The low-level PID controller enables the robot to track the desired joint angles θ_d ∈ R^n:

$$\tau = K(\theta_d - \theta) - D\dot{\theta} + \hat{\tau}_g \qquad (3)$$

where D, θ, θ̇, and τ̂_g are the damping matrix, the current joint angles, the current joint
velocities, and a gravity-compensation torque vector, respectively.
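A small numerical sketch of this control scheme follows, with illustrative dimensions and gain values (none are given in the report): the simplified MPC step of Eq. (2) is solved as a linear least-squares problem, and the torque of Eq. (3) is computed from the resulting joint-angle target.

import numpy as np

def mpc_step(J_ee, dp_d, dq_d, dtheta_min, dtheta_max):
    # Eq. (2): least-squares joint-angle change tracking the desired
    # end-effector position/orientation change, clipped to joint limits.
    target = np.concatenate([dp_d, dq_d])            # 6-vector
    dtheta, *_ = np.linalg.lstsq(J_ee, target, rcond=None)
    return np.clip(dtheta, dtheta_min, dtheta_max)

def joint_torque(theta_d, theta, theta_dot, K, D, tau_g):
    # Eq. (3): low-gain PD tracking with gravity compensation.
    return K @ (theta_d - theta) - D @ theta_dot + tau_g

# Toy example with n = 7 joints (all values are placeholders).
n = 7
J_ee = np.random.randn(6, n)
dtheta = mpc_step(J_ee, dp_d=np.array([0.01, 0.0, 0.0]), dq_d=np.zeros(3),
                  dtheta_min=-0.05 * np.ones(n), dtheta_max=0.05 * np.ones(n))
theta = np.zeros(n)
tau = joint_torque(theta + dtheta, theta, np.zeros(n),
                   K=5.0 * np.eye(n), D=0.5 * np.eye(n), tau_g=np.zeros(n))
print(tau)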
CHAPTER 4
METHODOLOGY
A robot is an integrated system made up of sensors, manipulators, a control system, and a power
supply. The present system consists of a vision system and a control system. Actuators are used
to control motion: the arm joint controller controls joint angles or positions to achieve the
desired end-effector motion, encoders report the number of rotations, and the gripper controller
holds utensils in the end effector.
The robot-assisted feeding hardware consists of a PR2 robot, tool holders, and additional
sensors. A user commands a subtask through the GUI; the three subtasks are scooping, spoon
wiping, and delivery. For scooping, the head-mounted RGB-D camera and the joint encoders are
used to find the highest food location in the bowl. While running the subtasks, the system runs a
multimodal execution monitor in parallel with the scooping and delivery subtasks to detect
anomalous behaviour using different sensors, as sketched below.
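A toy sketch of such an execution monitor is given here; the threshold rule and the feature layout are illustrative assumptions, since the report does not specify the detection method.

import numpy as np

class ExecutionMonitor:
    # Flags an execution step as anomalous when any sensor channel deviates
    # from the nominal mean by more than k standard deviations.
    def __init__(self, nominal_runs, k=3.0):
        # nominal_runs: (runs, timesteps, channels) recordings of successful runs.
        self.mean = nominal_runs.mean(axis=0)
        self.std = nominal_runs.std(axis=0) + 1e-6
        self.k = k

    def is_anomalous(self, t, observation):
        # observation: current multimodal reading (e.g. force, sound, vision).
        z = np.abs(observation - self.mean[t]) / self.std[t]
        return bool(np.any(z > self.k))

# Toy usage with simulated data: 20 nominal runs, 100 steps, 4 channels.
nominal = np.random.randn(20, 100, 4)
monitor = ExecutionMonitor(nominal)
print(monitor.is_anomalous(10, np.array([0.1, 0.0, 5.0, 0.2])))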
4.1.1 Task Management using FSM
A finite state machine (FSM) is a deterministic finite graph consisting of states, transitions, and
events. FSMs are frequently used in robotic manipulation tasks because of their clear and easily
implementable structure. Fig. 4.1.1.1 shows the system's FSM. The GUI is a device-agnostic
interface used via a web browser. It consists of a live video screen that displays the output of the
head- or wrist-mounted cameras and a task tab with buttons and sliders to command a subtask or
adjust internal parameters of the system. In the feeding task tab, the user can select one of three
subtask buttons: Scooping/Stabbing, Clean Spoon, and Feeding (Delivery). The system then
executes the selected subtask until it finishes or receives a stop command. The user can force the
robot to stop at any moment by clicking a full-screen stop button; the stop command is treated as
an anomalous event, which triggers a corrective action following transition TA. In addition, users
can adjust the delivery location where the robot places the utensil with food inside the user's
mouth.
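A minimal FSM sketch in Python follows; the state and event names are illustrative only and are not taken from the report.

# Minimal finite-state-machine sketch for subtask management.
TRANSITIONS = {
    ("IDLE", "scoop_clicked"):  "SCOOPING",
    ("IDLE", "clean_clicked"):  "WIPING",
    ("IDLE", "feed_clicked"):   "DELIVERY",
    ("SCOOPING", "done"):       "IDLE",
    ("WIPING", "done"):         "IDLE",
    ("DELIVERY", "done"):       "IDLE",
    # An anomaly or stop command triggers the recovery transition (T_A).
    ("SCOOPING", "anomaly"):    "RECOVERY",
    ("DELIVERY", "anomaly"):    "RECOVERY",
    ("RECOVERY", "recovered"):  "IDLE",
}

class FeedingFSM:
    def __init__(self):
        self.state = "IDLE"

    def handle(self, event):
        # Look up the next state; ignore events with no defined transition.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

fsm = FeedingFSM()
for e in ["scoop_clicked", "done", "feed_clicked", "anomaly", "recovered"]:
    print(e, "->", fsm.handle(e))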
4.2 Eye and Mouth Opening Detection
The robot is placed at a location from which it can reach the user's mouth while holding a
utensil and a bowl. Willingness to eat is determined by eye tracking and mouth-open detection
using the RGB-D camera. Features are extracted using the Histogram of Oriented Gradients
(HOG). Eye opening is detected from the Eye Aspect Ratio (EAR), which is fast and easy to
compute as a ratio of distances between the facial landmarks of the eyes. Mouth opening is
detected using the Mouth Aspect Ratio (MAR), which measures the ratio of the vertical opening
of the mouth to its width.
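A small sketch of the EAR and MAR computations from 68-point facial landmarks is shown below; the landmark indexing follows the common dlib convention, which is an assumption rather than a detail given in the report.

import numpy as np

def aspect_ratio(pts):
    # pts: 6 (x, y) points; pts[0]/pts[3] are the horizontal corners,
    # (pts[1], pts[5]) and (pts[2], pts[4]) the vertical pairs.
    a = np.linalg.norm(pts[1] - pts[5])
    b = np.linalg.norm(pts[2] - pts[4])
    c = np.linalg.norm(pts[0] - pts[3])
    return (a + b) / (2.0 * c)

def eye_aspect_ratio(landmarks):
    # Left eye: landmarks 36-41, right eye: 42-47 (dlib convention).
    left = aspect_ratio(landmarks[36:42])
    right = aspect_ratio(landmarks[42:48])
    return (left + right) / 2.0

def mouth_aspect_ratio(landmarks):
    # Inner mouth: landmarks 60-67 (dlib convention); use 6 of them.
    pts = landmarks[[60, 61, 63, 64, 65, 67]]
    return aspect_ratio(pts)

# Example with random landmarks just to show the call pattern.
landmarks = np.random.rand(68, 2) * 100
print(eye_aspect_ratio(landmarks), mouth_aspect_ratio(landmarks))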
Figure 4.2.2 Calculation of MAR
Of the 68 facial landmarks shown in Fig. 4.2.3, only the landmarks for the eyes and the mouth
are kept; these points are used to extract the features for the model.
4.3 Food Location Estimation
The food-location estimator finds a location where the robot can scoop or stab food in the bowl,
which reduces the chance of the robot making several scooping attempts without getting any
food. The estimator attempts to find the best scooping or stabbing location s* ∈ R^3 given a food
point cloud Υ from the head-mounted RGB-D camera and, as prior knowledge, the geometry of
the bowl, such as its centre and diameter. The space of locations is discretized into five
candidates, S = {s1, s2, s3, s4, s5}, displayed as yellow markers in Fig. 4.3.1. To exclude
irrelevant parts of the point cloud, such as the curvature of the bowl, a binary decision function
Ψ: x → {0, 1} is defined that returns 0 if a point x ∈ R^3 is outside an ellipsoidal area in the
bowl. The score of each scooping or stabbing location is then computed as a weighted sum of
food point counts, where the weight is the multivariate Gaussian probability density centred at
each location s_i with the sample covariance matrix Σ̄(Υ) of the food point cloud. The system
selects the location with the highest score:
$$s^{*} = \arg\max_{s_i \in S}\;\sum_{x \in \Upsilon} \frac{\exp\!\bigl(-\tfrac{1}{2}(x - s_i)^{T}\,\bar{\Sigma}(\Upsilon)^{-1}(x - s_i)\bigr)}{\sqrt{(2\pi)^{k}\,\lvert\bar{\Sigma}(\Upsilon)\rvert}}\;\Psi(x) \qquad (4)$$
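A compact NumPy sketch of this scoring rule follows; the candidate locations, point cloud, and ellipsoid test are placeholders, and only the Gaussian-weighted count of Eq. (4) follows the text.

import numpy as np

def best_scoop_location(points, candidates, inside_bowl):
    # points: (N, 3) food point cloud; candidates: (5, 3) locations s1..s5;
    # inside_bowl: callable implementing the binary decision function Psi(x).
    cov = np.cov(points.T)                      # sample covariance of the cloud
    cov_inv = np.linalg.inv(cov)
    norm = np.sqrt((2 * np.pi) ** 3 * np.linalg.det(cov))
    mask = np.array([inside_bowl(x) for x in points])   # Psi(x) in {0, 1}

    scores = []
    for s in candidates:
        d = points - s                          # offsets from candidate s_i
        w = np.exp(-0.5 * np.einsum("ij,jk,ik->i", d, cov_inv, d)) / norm
        scores.append(np.sum(w * mask))         # weighted food-point count
    return candidates[int(np.argmax(scores))]

# Toy example: random food points and candidates along the bowl centre line.
pts = np.random.randn(200, 3) * 0.02 + np.array([0.5, 0.0, 0.1])
cands = np.array([[0.48, 0, 0.1], [0.49, 0, 0.1], [0.50, 0, 0.1],
                  [0.51, 0, 0.1], [0.52, 0, 0.1]])
print(best_scoop_location(pts, cands, inside_bowl=lambda x: 1.0))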
For scooping, the system finds the highest food location in the bowl using the head-mounted
camera, a Microsoft Kinect v2, and then scoops a spoonful of food. The user can command a
preferred subtask using the GUI, which uses an HTTP-based Motion JPEG (MJPEG) server to
transfer the live video stream from the robot's cameras. The multimodal execution monitor runs
in parallel with the scooping and delivery subtasks; when it detects an execution that differs
greatly from typical non-anomalous executions, the system pauses the current task and moves
the arm back to the starting pose of the current subtask. The scooping and stabbing subtasks aim
to pick up and hold food. The robot can translate the goal pose with respect to the estimated
food location and can vary the target scooping location by visually estimating food locations
within the bowl.
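A sketch of a minimal MJPEG streaming endpoint is shown here using Flask and OpenCV; the camera index, route, and port are illustrative assumptions, not details from the report.

# Minimal HTTP MJPEG video-stream endpoint (Flask + OpenCV).
import cv2
from flask import Flask, Response

app = Flask(__name__)
camera = cv2.VideoCapture(0)   # stand-in for the robot's camera stream

def frames():
    while True:
        ok, frame = camera.read()
        if not ok:
            break
        ok, jpg = cv2.imencode(".jpg", frame)
        if not ok:
            continue
        # Each frame is sent as one part of a multipart HTTP response.
        yield (b"--frame\r\nContent-Type: image/jpeg\r\n\r\n"
               + jpg.tobytes() + b"\r\n")

@app.route("/video")
def video():
    return Response(frames(),
                    mimetype="multipart/x-mixed-replace; boundary=frame")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)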
Fig. 4.3.2 (right) shows the five representative utensils used in the evaluation: a silicone spoon,
small and large plastic spoons, a plastic fork, and a metal fork. A person can mount a preferred
utensil into the 3D-printed tool changer held by the robot and register the transformation from
the tool to the utensil tip; the five utensils are preregistered in the system. Fig. 4.3.2 (left) shows
the bowl the robot typically held during the evaluation; a handle is attached to the bowl so that
the PR2 can easily grasp and hold it.
Figure 4.3.2 A bowl with a handle and wiping bar, and utensils
Food spilling can occur during scooping or stabbing due to excessive amounts of food and
imperfect manipulation. To prevent spilling from the bowl, a 3D-printed food guard is mounted
on the bowl, together with a wiping bar used to wipe off excess food. The wiping bar is 13.5 cm
long and removes excess food from the bottom of the spoon. The user activates the wiping
subtask by clicking the Clean Spoon button on the GUI, and the robot drags the bottom surface
of the spoon along the bar following a predefined linear trajectory. To put food inside the mouth,
the robot uses a 3D mouth-pose estimator together with a wrist-mounted RGB-D camera that
allows it to observe the user's face. For safety, the system observes the user's face and the
utensil during feeding. However, the robot may not fully observe the face due to occlusion by
the other end effector and the utensil it holds; to reduce these occlusions, the system lifts the
camera while feeding so that the landmark estimator can keep the face in view.
CHAPTER 5
IMPLEMENTATION
Willingness to eat is determined by Haar-cascade-based real-time eye tracking and mouth-open
detection through a webcam, implemented in Python 3.7.9. The webcam provides the input
images: an infinite loop captures each frame, and Haar cascade classifiers are used to detect
faces. The detector returns an array of detections with the x, y coordinates, height, and width of
each bounding box. The program then iterates over the detected faces and draws boundaries
around the eyes and mouth.
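A minimal sketch of this detection loop with OpenCV follows; the cascade file names are the ones shipped with OpenCV, and the eye/mouth handling is simplified to face and eye detection only.

# Minimal webcam loop with Haar cascade face and eye detection (OpenCV).
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Each detection is (x, y, w, h) of a bounding box.
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        roi = gray[y:y + h, x:x + w]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
            cv2.rectangle(frame, (x + ex, y + ey),
                          (x + ex + ew, y + ey + eh), (0, 255, 0), 2)
    cv2.imshow("detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()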
5.2 Evaluation and Results
To evaluate the accuracy of the tested landmark detectors, the 300-VW dataset was used. It
contains 50 videos in which every frame has an associated precise annotation of facial
landmarks. The purpose of using this dataset is to demonstrate that recent landmark detectors are
particularly robust and precise in detecting the eyes and mouth, i.e. the eye corners, the contours
of the eyelids, and the contour of the lips. Quantitatively, the accuracy of landmark detection for
a face image is measured by the average relative landmark localization error, defined as
$$\epsilon = \frac{100}{\kappa N}\sum_{i=1}^{N}\bigl\lVert x_i - \hat{x}_i \bigr\rVert_2 \qquad (5)$$
where x_i is the ground-truth location of landmark i in the image, x̂_i is the landmark location
estimated by the detector, N is the number of landmarks, and the normalization factor κ is the
inter-ocular distance (IOD), i.e. the Euclidean distance between the eye centres in the image.
The contours of the eyes are then summarized by the EAR, computed from these landmark
distances: the EAR is larger and relatively constant over time when the eyes are open, and drops
to almost zero when a blink occurs.
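A short sketch of Eq. (5) as a function is given below; the inter-ocular distance is approximated here from the mean of each eye's ground-truth landmarks, which is an assumption about how the eye centres are defined.

import numpy as np

def landmark_error(gt, pred):
    # gt, pred: (68, 2) arrays of ground-truth and predicted landmarks.
    # Normalization: inter-ocular distance between the two eye centres
    # (eye centres approximated by the mean of each eye's landmarks).
    left_eye = gt[36:42].mean(axis=0)
    right_eye = gt[42:48].mean(axis=0)
    iod = np.linalg.norm(left_eye - right_eye)
    per_point = np.linalg.norm(gt - pred, axis=1)    # ||x_i - x_hat_i||_2
    return 100.0 * per_point.sum() / (iod * len(gt))

gt = np.random.rand(68, 2) * 100
pred = gt + np.random.randn(68, 2)                   # small simulated error
print(landmark_error(gt, pred))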
To validate the automatic mouth-position tracking stage, the algorithm was executed and the
results were analyzed. A sample image sequence shows the motion of the spoon, captured by the
end-effector-mounted camera, from its fixed position at the beginning of the stage to being
correctly centred on the mouth at the end. For all subjects, the proposed method was able to
track the mouth position effectively during all trials.
The time taken to correctly track and centre on the mouth was recorded, and Table 5.2.1
summarizes the results for all subjects. The proposed method was able to track the mouth
position of every user with an average time below 12 seconds, and it successfully identified the
open-mouth state.
The 9 able-bodied participants believed that they were able to feed themselves successfully
using the system, with an average score of 4.67 (out of 5) and a 7.14% relative standard error
(RSE). They also rated the system as safe and easy to use, with scores of 4.22 and 4.0,
respectively.
Table 5.2.1 Performance of the camera-based mouth open detection method in [8]. (Columns:
Subject; mouth-open detection accuracy; time to identify mouth opening (s), mean (μ) and SD (±σ).)
Figure 5.2.1 The number of daily successful and failed executions in [1].
Fig. 5.2.1 shows the number of daily successful and failed executions during the long-term
self-evaluation. Each day, the participant selected one of the five utensils and a type of food to
be fed: yogurt (6 days), rice (6 days), fruit (6 days), cereal (1 day), or cheese (3 days). The
participant also recorded the different kinds of failures that occurred during the experiment.
Over the whole evaluation, the system produced 16 feeding failures (3.6%) out of a total of 444
executions, caused by a camera fault, false alarms from the execution monitor, tool collisions
due to a system fault, a system freeze, and unknown reasons.
CHAPTER 6
CONCLUSION AND RESEARCH SCOPE
Any assistive technology designed for disabled people increases their independence and can
improve their mental well-being. Commercially available meal assistance robots cover only a
subgroup of disabled people, and research needs to focus on designing new assistive feeder
robots that also cover patients with severe disabilities. Although research at the academic level
has aimed to improve the usability of meal assistance robots, these technologies have not yet
reached the commercial stage for use with real end users, such as people with quadriplegia. It is
also desirable to develop a feeder arm that can administer medicines as well.
The system can perform three independent subtasks: scooping/stabbing, spoon wiping, and
delivery, with the help of a PR2 robot manipulator with 32 DOFs, and a user can command a
preferred subtask via a web-based GUI. Compared with existing systems, this system provides
visually guided active feeding assistance that autonomously delivers food inside a user's mouth.
The robot-assisted feeding system has the potential to reduce self-feeding limitations in people
with motor impairments by providing the support they need to perform feeding tasks. Overall,
the design improves the accessibility and usability of meal assistance for people whose motor
impairments lead to self-feeding disability.
Technologies such as the Kinect can be integrated into meal assistance robots as motion tracking
systems. These systems can achieve a considerable level of sensitivity and can even be used to
track a subject's heartbeat, which could serve as an additional command channel for an assistive
device such as a meal assistance robot. For example, a project at Assiut University, Egypt,
describes the use of a Kinect sensor to control the motion of a robot. Moreover, sensor fusion
can be used to increase the reliability of meal assistance robots: accelerometers,
electrocardiogram sensors, blow switches, 3D mapping and motion tracking sensors such as the
Kinect, and vibration sensors can be combined to increase the reliability and effectiveness of
meal assistance robots.
REFERENCES
11. Kostavelis, Ioannis, et al. "Understanding of human behavior with a robotic agent through daily
activity analysis." International Journal of Social Robotics 11.3 (2019): 437-462.
12. Phuc, Le Tran Huu, et al. "Applying the Haar-cascade Algorithm for Detecting Safety
Equipment in Safety Management Systems for Multiple Working
Environments." Electronics 8.10 (2019): 1079.
13. Shim, Kyung-Hwan, et al. "Assistive robotic arm control based on brain-machine interface with
vision guidance using convolution neural network." 2019 IEEE International Conference on
Systems, Man and Cybernetics (SMC). IEEE, 2019.
14. Fattal, Charles, et al. "SAM, an assistive robotic device dedicated to helping persons with
quadriplegia: Usability study." International Journal of Social Robotics 11.1 (2019): 89-103.
15. Candeias, Alexandre, et al. "Vision augmented robot feeding." Proceedings of the
European Conference on Computer Vision (ECCV). 2018.
16. Schwarz, Max, et al. "NimbRo Rescue: Solving disaster‐response tasks with the mobile
manipulation robot Momaro." Journal of Field Robotics 34.2 (2017): 400-425.
17. Perera, Chamika Janith, Thilina Dulantha Lalitharatne, and Kazuo Kiguchi. "EEG-controlled
meal assistance robot with camera-based automatic mouth position tracking and mouth open
detection." 2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE,
2017.