Abstract
This paper describes a system based on a robot, called KuBo, which relies on cloud resources to extend its capabilities for human interaction and environmental sensing in order to provide services for independent living. The robot uses text-to-speech and speech recognition services as well as an electronic agenda and web resources to perform several tasks. Moreover, it retrieves smart environment data from a database to be aware of the context. In this paper, the cloud robotics approach is used to increase the skills of a robot, endowing the system with abilities for human–robot interaction and environmental sensing. The robotic services were defined with a focus group involving 19 elderly volunteers, and the system was tested in a real environment with a couple of elderly users for five days. The aim of the experiment was to test the technical feasibility of the proposed cloud services using quantitative tools. The technical results show a success rate of 86.2 % for the navigation task and more than 90 % for the speech capabilities. Furthermore, the robustness of the system was also confirmed by the users’ qualitative feedback.
1 Introduction
Robot companions are becoming more common and familiar in our lives (Dario 2011). According to the International Federation of Robotics’ statistics, in 2013, about 4 million service robots for personal and domestic use were sold, 28 % more than in 2012, increasing the value of sales to US$ 1.7 billion (IFR 2015). In particular, service robotics has received a lot of attention from industry and academia, in order to face societal and demographic challenges (Zaidi et al. 2006; Stula 2012).
According to ABI research (Solis and Carlaw 2013), service robots represent the second potential big market opportunity.
In this context, Ambient Assisted Living (AAL) solutions aim to meet users’ and stakeholders’ needs, providing ICT and robotic services able to assist the user during daily activities (van den Broek et al. 2010; Moschetti et al. 2014). The main benefits that can be achieved by service robots are:
-
support senior citizens during daily activities (e.g. participation in social events, reminders, surveillance), enhancing their independent living;
-
enhance the quality of life, compensating for motion and cognitive deficits;
-
improve the quality of health services, reducing the costs for society and the public health system.
In general, domestic robots need to be friendly, interact with users, and autonomously move inside the house without revolutionizing the familiar environment. On the other hand, the solutions provided by robotic services should guarantee continuity of service throughout the day. Over the last years, two main robotics paradigms have been adopted. Stand-alone robots are designed according to a robot-centred approach, where the robot alone is responsible for all sensing, planning, and acting capabilities (Cesta et al. 2010; RP-Vita; Aldebaran). This approach has some limitations due to the limited extent of the robot’s sensing area, its payload, battery life, and computational capabilities. Even as robots improve their individual abilities, they remain insufficient for adequately supporting daily activities on a continuous basis. To better understand this turning point, just think of your daily life: you perform several activities in different contexts and rarely in one place. Now the question is: are stand-alone robots sufficient for continuously supporting a person during the day in a wide range of activities and in different environments? User requests are very diverse and depend on the specific needs of a specific moment.
The networked robot paradigm (Sanfeliu et al. 2008) distributes sensors and computational capabilities over a smart environment and intelligent agents, such as wearable and personal devices, extending the effective sensing range of stand-alone robots and improving their ability to plan and cooperate (Coradeschi et al. 2014; Volkhardt et al. 2011; Simonov et al. 2012). Nevertheless, the problems of service continuity and computational limitations still remain (Kamei and Nishio 2012).
In this context, the cloud robotics paradigm tries to overcome the limitations of the stand-alone and networked robotics paradigms by integrating robots with cloud computing resources (Kuffner 2010). Recently, the cloud robotics paradigm has been defined as “any robot or automation system that relies on either data or code from a network to support its operation, where not all sensing, computation and memory is integrated into a single standalone system” (Kehoe et al. 2015). This new generation of low-cost robots (Kuffner 2010) can use wireless networking, big data, machine learning techniques, and the Internet of Things to improve the quality of their assistance services (Lorencik and Sincak 2013). Robots with different capabilities can share data, knowledge, and skills, exchanging information with other agents connected to the network and thus reducing the overall costs (Tenorth et al. 2011).
Cloud robotics is not a completely new idea. Indeed, during the 90s, Prof. Inaba (Inaba 1997) conceptualized the remote brain paradigm, in which hardware agents can access a remote “intelligence” with high computational abilities. However, only in the last few years has the rise of mobile technologies, together with the growth of internet resources and the global penetration of smartphones [Mobile Planet], made the idea of cloud robotics concrete.
2 Related research
Over the last few years, several research groups have focused their efforts on the challenges of cloud robotics, and some recent examples of its application in service robotics can be found in the literature. First of all, additional concepts related to the cloud robotics paradigm need to be cited. Kamei et al. (Kamei and Nishio 2012) expand the concept of networked robotics, proposing a new research field called Cloud Networked Robotics. They describe “The Life Support Robot Technology”, a Japanese project started in 2009 and focused on the development of six robotic services with high safety, reliability, and adaptability. Furthermore, Chen et al. (Chen et al. 2010) introduced the concept of Robot as a Service (RaaS), which reinforces the idea of a robot that uses services from remote resources: “this all-in-one design gives the robot unit much more power and capacity, so that it can qualify as a fully self-contained cloud unit in the cloud computing environment.” Bonaccorsi et al. (Bonaccorsi et al. 2015) extend cloud robotics by introducing the concept of Cloud Service Robotics, defined as “The integration of different agents that allow an efficient, effective and robust cooperation between robots, smart environments and citizens.”
As stated by Kehoe (Kehoe et al. 2015), the cloud robotics field can be divided according to its benefits. Some research has focused on the use of large datasets, including video, images, and vast sensor networks, which are difficult to manage with on-board capacity. In particular, the ODUfinder software [Odufinder] is able to perform object recognition by exploiting external databases containing over 3500 pictures, whereas in Kehoe et al. (2013) a robot uses the Google object recognition engine. RoboEarth was the first EU project in the cloud robotics paradigm [RoboEarth]. It allows robots to share and store information as well as manipulation strategies and object recognition models. Robots can use the cloud to offload computation and collaborate to achieve a common task. Other publications report the use of external cloud computing resources to speed up computationally intensive tasks such as SLAM algorithms (Benavidez et al. 2015; Riazuelo et al. 2014), object recognition (Oliveira and Isler 2013), and video and image analysis (Nister and Stewenius 2006). In particular, Quintas et al. (Quintas et al. 2011) proposed a context-aware cloud robotics approach for an automated system composed of mobile robots and a smart home. In order to enhance the scalability of the system, this approach relied on cloud computing services. Additional research has focused on the sharing of knowledge and on the use of crowd-sourcing as a resource for the robot.
Among commercial solutions, Gostai [Gostai] has developed a cloud robotics infrastructure called GostaiNET. The robot intelligence is no longer embedded in the robot but executed in the cloud, allowing the remote execution of tasks such as voice recognition, face detection, and speech algorithms on any compatible robot. Engineers at Romotive [Romotive] have developed a companion robot which learns while you play. Thanks to the cloud, anyone can control Romo from anywhere in the world.
In this context, the aim of the present paper is to design and develop an innovative cloud-based robotic system with a user-centred approach and to evaluate its technical feasibility for supporting senior citizens in daily activities at home. The system, called KuBo, is based on a mobile robot, shaped as a piece of domestic furniture, with low on-board abilities, which relies on cloud resources to extend its capabilities for interacting with humans and sensing the environment. A meticulous methodology was followed: firstly, to define the technical specifications and to design and develop KuBo according to the end-users’ requirements; and secondly, to assess the technical reliability and safety of the robot in performing navigation and speech tasks in a real environment. In particular, a specific domestic use case was designed to test the system in an apartment inhabited by an elderly couple. The goal of the use case was to demonstrate the technical feasibility of the system when used by real users. Therefore, the system was left in the apartment for 5 consecutive days and the elderly users, after appropriate training, were free to ask KuBo to provide the designed services.
The rest of this paper is structured as follows. In Sect. 3, the authors detail the applied methodology used in this research. Section 4 describes the proposed system. Section 5 summarizes the results and Sect. 6 discusses the results. Section 7 concludes the paper.
3 Methodology
This section focuses on the methodology used to build and test the KuBo robotic system to support senior citizens. The methodology is based on four phases. Phase I was dedicated to the definition of the services, starting from an analysis of the needs of the elderly. Phase II comprised the development and integration of the KuBo system. In Phase III, the experimental protocol was defined. Lastly, during Phase IV, the system was tested in a real environment with real users for five days.
3.1 Phase I: KuBo service definition
The services of the system were studied and designed by applying the User-Centred Design approach (Heerink et al. 2009) in order to identify a concept responding to usability and acceptability criteria. A focus group, involving 19 elderly volunteers, aged from 64 to 85 years (\(\mu \) = 73.05, \(\sigma \) = 6.55), was organized in order to define the capabilities of KuBo according to the end-users’ requirements, such as their needs and lifestyles. The focus group produced a set of services grouped into three main areas: the use of the robot to get help with some activities, the need to have information in appropriate situations, and remote user assistance (see Table 1). The first group includes the (I) Carrying Object service, which allows the user to call KuBo to get an object stored on its shelf, such as a tablet, book or prescription glasses, and the (II) Internet Access service, which is used to access web resources, such as weather forecasts, by means of a speech interface. This service is also used autonomously by the robot in order to modulate its interaction with the user. For instance, if the user has to go outside for an appointment and the weather forecast states that it is going to rain, KuBo suggests taking an umbrella. The second group contains the (III) Reminder Service, which recalls commitments and appointments, and the (IV) Monitoring Service, which alerts the user when dangerous situations are recognized by the smart environment. With the (V) Telepresence service, a caregiver can use KuBo for remote assistance. Another outcome of the focus group is that the participants preferred to control the robot through a combination of a GUI (tablet) and vocal commands.
3.2 Phase II: system architecture design and implementation
The hardware and software architecture of the entire KuBo system was developed in order to meet the users’ needs identified in Phase I. It is described in detail in Sect. 4. This phase led to the development of a small-sized robot connected to cloud resources and endowed with environmental sensing abilities.
3.3 Phase III: definition of the experimental protocol
The KuBo system experimentation was conducted as a case study in which the trials took place in a real private house for 5 days. The case study was designed to allow elderly people to interact with the KuBo robot through a sequence of simple tasks according to the services defined in Phase I. During this period of testing, quantitative data were collected in order to evaluate the performance of two KuBo system modules: the navigation tasks and the speech capabilities. The navigation module was chosen in order to understand whether on-board functionality was a successful strategy, while the speech capabilities were selected as an example of a cloud service.
3.3.1 Technical performance evaluation tool
The evaluation metrics for assessing the technical performance are related to KuBo navigation tasks and speech recognition abilities. The navigation of the robot has three possible states: RUNNING, SUCCEEDED and FAILED. A navigation task begins when the robot enters the RUNNING state (i.e. it receives a goal) and ends when it passes to SUCCEEDED or FAILED. The speech functions were evaluated considering each single sentence and the correctness of its processing. The following parameters have been employed to globally evaluate the system (an illustrative computation is sketched after this list):
-
Success rate This is the percentage of the total tasks that succeeded. It gives information about the reliability of the system.
$$\begin{aligned} Success\;rate\,(\%) = \frac{Succeeded\;tasks}{Total\;tasks}\cdot 100 \end{aligned}$$ (1)
-
Failure rate It is computed as
$$\begin{aligned} Failure\;rate\,(\%) = 100 - Success\;rate\,(\%) \end{aligned}$$ (2)
-
Effective robot velocity This is applied only to the navigation tasks: it represents the average velocity of KuBo. It is an important parameter that influences the robot’s acceptability (Salvini et al. 2010) and is also related to the safety of the system. For the experiments, the velocity of KuBo was limited to 0.2 m/s.
-
Confidence This is applied only to the speech recognition functions: it represents the probability that the recognition is correct.
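As an illustration of how these metrics can be derived, the following Python sketch computes the success rate, the failure rate, and the effective robot velocity from a list of navigation log entries. The log format assumed here (timestamp, state, cumulative travelled distance) is a hypothetical example, not the actual structure of the KuBo log files.

```python
# Minimal sketch, assuming log entries of the form (timestamp_s, state, distance_m)
# appended every 4 s or on any change of the navigation state.

def evaluate_navigation(log_entries):
    succeeded = failed = 0
    running_time = travelled = 0.0
    prev = None
    for t, state, dist in log_entries:
        if prev is not None:
            prev_t, prev_state, prev_dist = prev
            if prev_state == 'RUNNING':
                running_time += t - prev_t
                travelled += dist - prev_dist
                if state == 'SUCCEEDED':
                    succeeded += 1          # RUNNING -> SUCCEEDED transition
                elif state == 'FAILED':
                    failed += 1             # RUNNING -> FAILED transition
        prev = (t, state, dist)

    total = succeeded + failed
    success_rate = 100.0 * succeeded / total if total else 0.0   # Eq. (1)
    failure_rate = 100.0 - success_rate                          # Eq. (2)
    effective_velocity = travelled / running_time if running_time else 0.0
    return success_rate, failure_rate, effective_velocity
```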
In addition, at the end of the test period, a researcher asked the elderly volunteers to use the KuBo services again, applying the “think aloud” (TAL) method (Lewis 1982). With the TAL method, the user is encouraged to report aloud any action or thought while carrying out the tasks. All the users’ verbalizations were transcribed and then analysed. In this study, a general inductive approach for the analysis of qualitative evaluation data (Thomas 2006) was applied. This approach consists of three phases: creating categories from the raw data, establishing the pertinence of the categories to the research objective, and developing a theory. In this study, a model could not be developed because the sample is too small and not statistically significant; therefore, the aim of this analysis is not to investigate usability and acceptability, but rather to give holistic information about the robustness of the system from a user’s point of view.
3.4 Phase IV: case study setup
The experiment was conducted as a case study in which a couple of Italian elderly volunteers tested the system in their home for 5 days. In this case study, the smart environment described in Sect. 4.2 is a simplified version comprising gas sensors, used as a proof of concept. During the 5 days, all the services were tested, including a simulated gas leak. The developers provided their assistance in the house during the first two days of the experiment; for the remaining days, the system was accessed remotely and the users were left free to use it. At the end of the experiment, the elderly users were more confident with the KuBo system and they performed the services again, applying the TAL method, while a researcher transcribed their opinions.
4 System architecture
The architecture of the system is based on three components: the KuBo robot, the Smart Environment, and the Cloud Software as a Service (SaaS) (see Fig. 1). The high-level software layer of the robot relies on cloud resources to endow KuBo with additional functions in a modular way. The robot extends its sensing capabilities by using the smart environment, accessing web resources, and exploiting powerful voice and speech recognition services. The Smart Environment is composed of several devices providing the user’s position within the house as well as sensors to monitor the temperature, human presence, and water/gas leaks. For example, when a sensor triggers a gas leak event, the robot retrieves the user’s position, moves towards him/her, and warns the person.
4.1 KuBo robot
In this section, the design process and the software architecture of the prototype are detailed. The robot complies with several requirements of a domestic robot.
4.1.1 Design
The role of user acceptability for a companion robot is crucial; therefore, during the design phase of the prototype, two key points were primarily considered: reduced dimensions, to move easily in a domestic environment, and a modern design style, to improve the appearance and favour the friendliness of the platform. Hence, KuBo is based on the youBot [Kuka], commercialized by KUKA, a small-sized holonomic mobile base, and it is equipped with a laser scanner for navigation purposes, a depth camera, speakers, a microphone, and a tablet for human–robot interaction.
In order to favour the acceptance of the robot (Salvini et al. 2010), the original platform was modified with a design inspired by a typical “coffee table”, a common piece of furniture in homes. Figure 2 shows the design process of the robot. The overall height was extended by about 30 cm and a new cover, made of black opal methacrylate, was mounted on an internal frame. Some adhesive tape was used to personalize the prototype. The final dimensions of the prototype are 40\(\,\times \,\)40\(\,\times \,\)60 cm.
4.1.2 Software
KuBo is conceived as a platform with low computational capabilities that has to exploit cloud resources to carry out its tasks. Figure 1 depicts the architecture of the system and the software layers of the KuBo robot. The Service Manager executes the tasks, making proper use of the robot’s functions. All the software modules are implemented in ROS (Quigley 2009) and the only on-board ability is autonomous indoor navigation, which relies on the ROS navigation stack and uses the Dynamic Window Approach (Fox et al. 1997) for local planning and Adaptive Monte Carlo Localization (Thrun et al. 2005) for indoor localization. It uses a 2-D static map of the environment to navigate and a laser scanner for obstacle avoidance and self-localization.
The Service Manager is able to use all the modules in order to accomplish a particular task. For example, when the Reminder Module triggers an event (e.g. a doctor appointment), the Manager retrieves the user’s position through the Smart Environment Module, moves KuBo to him/her, notifies the user of the event using text-to-speech (TTS), and waits for user confirmation (Speech Recognition). If the user has to go outside, for an appointment in this case, the Service Manager retrieves the weather forecast by means of the Internet Module and uses the TTS to communicate the downloaded information. All these modules, with the exception of the Navigation Stack, rely on cloud resources.
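To make this orchestration concrete, the following Python/ROS sketch outlines how a service manager of this kind could handle a reminder event. The move_base action interface is the standard entry point of the ROS navigation stack, while the smart_env, tts, and speech helpers, the room-to-pose mapping, and the event object are hypothetical placeholders rather than the actual KuBo implementation.

```python
# Minimal sketch (assumes an initialized ROS node and hypothetical helper
# modules smart_env, tts and speech provided by the caller).
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal
from geometry_msgs.msg import Quaternion

def handle_reminder(event, smart_env, tts, speech, room_poses):
    """Notify the user of a calendar event and wait for a vocal confirmation."""
    # 1. Ask the Smart Environment Module for the room the user is in.
    room = smart_env.get_user_position()            # e.g. "kitchen" (hypothetical call)

    # 2. Send a navigation goal to the on-board ROS navigation stack (move_base).
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x, goal.target_pose.pose.position.y = room_poses[room]
    goal.target_pose.pose.orientation = Quaternion(0.0, 0.0, 0.0, 1.0)
    client.send_goal(goal)                          # navigation enters RUNNING
    client.wait_for_result()                        # ends in SUCCEEDED or FAILED

    # 3. Notify the user via the cloud text-to-speech module and wait for confirmation.
    tts.say(u"You have an appointment: %s" % event.summary)
    return speech.wait_for_keyword(['thank you', 'grazie'], timeout=30.0)
```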
4.1.3 Robot cloud modules
The system is implemented with five cloud modules:
-
Smart environment module This connects KuBo with the DataBase Management Software (DBMS), allowing the retrieval of information from the database after an authentication phase. It runs two TCP clients: the first requests the user’s position any time the robot needs to reach him/her, while the second makes polling requests (1 Hz) to identify any changes in the database concerning environmental alarms (an illustrative sketch of this polling client is given after this list). All the communications follow a string protocol based on JSON encoding.
-
Reminder module This is able to link the Google calendar service with KuBo. It is based on Google Calendar API v3 [Google Calendar] with JSON data object. Using this API, it is possible to search and retrieve calendar events, as well as create, edit, and delete events. The user can set appointments through a web browser (or the mobile App synchronized with the calendar) and this module is able to activate the reminder service at the proper time.
-
Text-to-speech module This connects the robot with the Acapela Voice as a Service API [Acapela] using HTTP connections. This module stores the already-converted sentences locally to reduce the response time. It sends a text string and plays the audio files received from the service.
-
Internet access module This is used to retrieve generic information from web sites. In this implementation, a weather forecast service is provided as an example. The robot retrieves an HTML file from a specific web site using the HTTP protocol. The file is then parsed to find the proper information to communicate.
-
Speech recognition module This connects the robot with the Google Speech Recognition API through HTTP connections. This module also implements a dictionary of keywords to elaborate user requests, such as move to a particular room, request the weather forecast, or ask for the current time.
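As an illustration of the polling behaviour of the smart environment module, the sketch below shows a minimal alarm-polling client in Python. The JSON field names, port, and authentication token are assumptions used only to illustrate the 1 Hz polling pattern; the actual string protocol used by KuBo is not reproduced here.

```python
# Minimal sketch of a 1 Hz alarm-polling client (hypothetical protocol fields).
import json
import socket
import time

def poll_alarms(host, port, token, on_alarm, period_s=1.0):
    """Poll the DBMS at ~1 Hz and invoke on_alarm() for every new alarm."""
    last_alarm_id = 0
    while True:
        with socket.create_connection((host, port), timeout=5.0) as sock:
            request = {'auth': token, 'cmd': 'get_alarms', 'since': last_alarm_id}
            sock.sendall((json.dumps(request) + '\n').encode('utf-8'))
            reply = json.loads(sock.makefile().readline())
        for alarm in reply.get('alarms', []):
            last_alarm_id = max(last_alarm_id, alarm['id'])
            on_alarm(alarm)                 # e.g. gas leak -> move to the user and warn
        time.sleep(period_s)                # polling request every second
```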
4.2 Smart environment
The Smart Environment is composed of two ZigBee-based wireless sensor networks (WSNs), one for user localization and the other for environmental monitoring. The user localization network is designed to locate multiple users at the same time, using the received signal strength (RSS) (Esposito et al. 2015; Cavallo 2014). The WSN for environmental monitoring is composed of several sensors able to monitor the temperature, human presence, and water/gas leaks, and to control the lights. These sensors are distributed in the house in order to have a real-time measurement of the environment’s conditions. The information is processed and stored in the Smart Environment DataBase (see Sect. 4.3). This system manages several alarm procedures, such as a door opening during the night, a water or gas leak, and doors/windows left open when the user is outside. The performance and the accuracy of this kind of system were presented in Bonaccorsi et al. (2015).
As already explained in Sect. 3.4, during the 5-day experiment a simplified version of the WSNs was used, in order not to be too intrusive in the users’ daily life and compromise the overall experiment. A gas sensor was used as a proof of concept.
4.3 Cloud SaaS
The software modules described in Sect. 4.1.3 are connected with specific Cloud SaaS. In more detail, they comprise four modules:
-
Smart environment database This stores the data collected from the WSN, while the DBMS administers entries and queries avoiding direct connection between the hardware agents (WSN and robot) and personal data. It is implemented as a relational database, based on MySQL, which has several tables: one for each sensor type containing their outputs, one with a list of the installed sensors (typology and unique identification number), one collecting environmental alarms, and another table recording the user’s estimated position. The outputs from the physical agents and the estimated user position are sent to the DBMS and recorded in the DataBase.
-
Acapela VaaS This takes as input the text string to translate, a language, and a voice type, and produces an MP3 audio file. This service allows using several languages and several voices (male, female). The robot uses this service only for unknown sentences, to reduce the response time during the interaction phase.
-
Google services In this research, the developed system uses two Google Services. The first is the calendar API, while the second is the speech recognition API.
-
Web resources The Web is full of information that can be retrieved by the robot to improve the user interaction experience. In this implementation, a weather forecast service is provided as an example. The robot uses the HTTP protocol to retrieve the proper information from a dedicated web site.
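The caching behaviour of the text-to-speech module described above can be sketched as follows: sentences already converted are stored locally and only unknown sentences trigger a request to the cloud service. The endpoint URL, request parameters, and local audio player are placeholders and do not reproduce the real Acapela VaaS interface.

```python
# Minimal caching sketch for cloud TTS (placeholder endpoint and parameters).
import hashlib
import os
import requests

CACHE_DIR = '/tmp/kubo_tts_cache'
TTS_URL = 'https://example.com/vaas/synthesize'      # placeholder, not the Acapela URL

def speak(text, lang='ita', voice='female', play=os.system):
    os.makedirs(CACHE_DIR, exist_ok=True)
    key = hashlib.sha1((lang + voice + text).encode('utf-8')).hexdigest()
    mp3_path = os.path.join(CACHE_DIR, key + '.mp3')
    if not os.path.exists(mp3_path):                  # unknown sentence: query the cloud
        resp = requests.post(TTS_URL, data={'text': text, 'lang': lang, 'voice': voice})
        resp.raise_for_status()
        with open(mp3_path, 'wb') as f:
            f.write(resp.content)                     # store the MP3 for later reuse
    play('mpg123 -q %s' % mp3_path)                   # hypothetical local player
```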
5 Results
The authors provide here some technical results about the performance of the navigation and speech interactions. During the experiments, KuBo performed 94 navigation tasks. The success and the failure rates are computed by analysing the navigation log files, which were updated every 4 s or on any change in the robot’s state. The number of successes counts the transitions between the RUNNING state and the SUCCEEDED state, while a transition to the FAILED state increments the total number of failed navigation tasks. Figure 3 shows the positions of KuBo during the navigation tasks that succeeded, for one day of the experimental sessions.
The results, presented in Table 2, show that the success rate of the navigation task is 86.2 % while the failure rate is 13.8 %.
For safety reasons, the velocity of KuBo is limited to 0.2 m/s and the effective robot velocity, computed as the mean value during the RUNNING state, is about 0.13 m/s. The velocity is not constant within a single navigation task, due to the complexity of the route and the presence of obstacles on the way.
By means of speech interaction, performed in the Italian language, the user can activate four robot activities. The user can move the robot between rooms by saying “move” or “go” plus the name of the room, can confirm a reminder event with the phrase “thank you”, and can ask for information about the time, the day, and the weather forecast. The robot is also able to react to general greetings like “hello” or “what’s your name”. The Google Speech Recognition API produced good results during the use case (see Table 3). It has a perfect recognition rate for words like “thanks” or sentences like “what time/day is it”. Interesting considerations arise from the failure cases. Since this service is intended to be used by smartphone and tablet applications, several utterances are translated into web entities, business names, and geographical locations. The utterance “Ciao KuBo” (84 % success rate) often produced the output “Yahoo”, while asking for the weather forecast produced, on rare occasions, a URL. Some words, like “KuBo” or “Casa” (home), are turned into locations like Cuba, Cannes or Cagliari.
Taking these results into account, a speech recognition module that uses such a cloud resource has to handle these cases in order to provide a better interaction experience with the user.
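One simple mitigation, sketched below, is to map the misrecognized outputs observed during the trial back to the intended keywords before matching them against the command dictionary. The substitution table and the command dictionary shown here are illustrative examples rather than the deployed KuBo module.

```python
# Illustrative sketch: normalizing cloud speech-recognition output before
# keyword matching (substitutions reflect the misrecognitions observed in the trial).
SUBSTITUTIONS = {'yahoo': 'ciao kubo', 'cuba': 'kubo', 'cannes': 'kubo', 'cagliari': 'casa'}
COMMANDS = {
    'vai in cucina': 'GOTO_KITCHEN',      # "go to the kitchen"
    'che tempo fa': 'WEATHER_FORECAST',   # "what is the weather like"
    'che ore sono': 'TELL_TIME',          # "what time is it"
    'grazie': 'CONFIRM_REMINDER',         # "thank you"
}

def interpret(transcript):
    text = transcript.lower().strip()
    for wrong, right in SUBSTITUTIONS.items():
        text = text.replace(wrong, right)  # undo known misrecognitions
    for phrase, command in COMMANDS.items():
        if phrase in text:
            return command
    return None                            # no keyword matched
```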
In addition, some qualitative results emerged from the TAL method: the users’ verbalizations were transcribed and then analysed. The first step was to group the raw text data in order to identify some categories. The outcome of this analysis is the definition of five categories:
-
Aesthetics (the aesthetic attractiveness of the robot to the user). Regarding KuBo’s aesthetics, the general impression is positive: according to the elderly participants’ answers, the colours are judged enjoyable and the size of the robot is small enough for use in an indoor environment. Furthermore, the shape of KuBo reminds older users of a coffee table, a piece of furniture that fits well in interior design.
-
Anxiety (negative emotional reactions evoked when the person uses the robot). The participants reported being relaxed during the interaction with KuBo, because it is judged easy to use. Furthermore, the users had no anxious reactions, since they perceived themselves as being in control of the robot, which looks like a small piece of furniture. This outcome also suggests that the Effective Robot Velocity is adequate for domestic use by the elderly.
-
Reliability (the user’s feeling about the robustness of the system). The KuBo robot is judged sufficiently reliable because it appears robust, as confirmed by the high success rate. Moreover, according to the elderly participants, the appearance of the robot is able to communicate its functions.
-
Ease of use (the ease of the interaction modalities). Concerning the interaction modalities, the elderly perceive the interface based on vocal commands as easier to use than the graphical one. In fact, they understood well which vocal commands to use in order to interact with KuBo. The tablet introduces some difficulties, since they are not used to it and, in addition, they need to wear prescription glasses. The high confidence value of the speech capabilities may positively influence the perceived ease of use.
-
Utility (usefulness of the services in daily life). The participants consider the robotic capabilities very useful for improving their independence and safety in daily life, because physical and cognitive impairments can arise with ageing. Lastly, the participants claim to have fun using KuBo and are well disposed to using it in the future, because they think that the KuBo services could help overcome loneliness. The high number of activations of the robotic services shows the willingness of the users to use the system.
6 Discussion
Considering the performance in the navigation tasks, the quantitative results show a failure rate of 13.8 %. Although this value is not very high, the failures are concentrated in certain areas, because they are strictly correlated with KuBo’s self-localization errors. In particular, the robot positions during FAILED states are concentrated in the kitchen (41.67 %) and in the dining area (25 %), which is contiguous to the kitchen (see Fig. 4).
The experimental phase in the real environment highlights some computational limitations affecting the navigation performance. These high error values are due to the presence of tables and chairs, typical furniture in these rooms, and to the low computational power of the robot’s PC.
The possibility of exploiting cloud solutions allowing the use of more computational power for this task will be investigated in future research. Such a solution is more advisable than replacing the hardware of the robot because the cloud resource is cheaper, shareable, and reliable.
Concerning the cloud functionalities, one of the main constraints depends on the delay sensitivity of the tasks (Hu et al. 2012). The choice between a stand-alone and a cloud architecture should rely on the maximum acceptable delay in the service delivery, which is also strictly correlated with the computational abilities and with the data rate of the communication technology involved (e.g. the average value for LTE is 45 Mb/s, whereas for home ADSL it is 10 Mb/s). Therefore, a cloud robotics architecture should be designed taking into account the optimal trade-off between the distribution of the resources, the computational capabilities, and the performance of the tasks.
In this use case, the users expressed neither positive nor negative comments about delays in the tasks. This suggests that the technical performance is acceptable from the users’ point of view. Indeed, the speech capabilities have a high success rate, as reported in Sect. 5. Since the Google Speech Recognition service is designed to be used mainly in mobile applications, the outputs are sometimes related to web resources, business names, and geographical locations. The development of a recognition module for assisted living has to take this outcome into account to provide a better interaction experience with the user.
In addition, the KuBo services meet the users’ needs because they were defined with 19 elderly people in order to develop a robotic system according to the final users’ requirements; in fact, the two participants reported that the KuBo system was really useful for them. Regarding the characteristics of the robot, the Aesthetics category is an important acceptability factor, and the appearance of KuBo is aesthetically pleasing for the participants. Furthermore, according to the elderly persons’ comments, the prototype could be easily integrated in a domestic environment, since it is small and its shape reminds them of a coffee table. Moreover, the users’ feelings about the robot’s capabilities, defined as Reliability, were evaluated positively by the older users: in fact, they might not use a robot if its functionalities and capabilities are perceived as useless, dangerous, or not well performing (Klamer and Ben Allouch 2010). Concerning the Anxiety category, the participants reported not feeling any negative emotional reactions when using the KuBo system, and according to other studies (Heerink et al. 2009), a high degree of acceptability is correlated with a low level of anxiety. The KuBo system is judged easy to use, and the elderly participants say they are disposed to use it in the future because they think that the services could help them overcome loneliness.
7 Conclusion
In this paper, the authors described a robotic platform which provides cloud robotics services in a domestic environment.
From a technical point of view, the experiments demonstrate the feasibility of a cloud robotics solution to extend the interaction abilities of the robot.
In effect, the cloud robotics approach gives the possibility of increasing the skills of a robot in a modular way and endows the system with text-to-speech and speech recognition abilities for human interactions, smart environments for additional sensing, and access to internet resources.
Additionally, the experience with the couple of elderly users suggests a promising acceptance of the KuBo system. This encourages us to perform future tests, promoting the use-case approach to a pilot-site methodology, which will involve more users.
A video about the use case experiment is available at: https://youtu.be/uMjp8vN4MF8.
References
Acapela Voice As a Service web site http://www.acapela-vaas.com.
Aldebaran official web site available at: https://www.aldebaran.com/en.
Benavidez, P., Muppidi, M., Rad, P., Prevost, J. J., Jamshidi, M., & Brown, P. D. L. (2015). Cloud-based realtime robotic visual SLAM. In 9th IEEE International Systems Conference (pp. 773–777).
Bonaccorsi, M., Fiorini, L., Sathyakeerthy, S., Saffiotti, A., Cavallo, F., & Dario, P. (2015). Design of cloud robotic services for senior citizens to improve independent living in multiple environments. Intelligenza Artificiale, 9, 63–72.
Cavallo, F., et al. (2014). Development of a socially believable multi-robot solution from town to home. Cognitive Computation, 6(4), 954–967.
Cesta, A., Coradeschi, S., Cortellessa, G., Gonzalez, J., Tiberio, L., & Von Rump, S. (2010). Enabling social interaction through embodiment in ExCITE. ForItAAL: Second Italian Forum on Ambient Assisted Living.
Chen, Y., Du, Z., & García-Acosta, M. (2010). Robot as a service in cloud computing. In Fifth IEEE International Symposium on Service Oriented System Engineering (SOSE).
Coradeschi, S., et al. (2014). A system for monitoring activities and physiological parameters and promoting social interaction for elderly. In Z. S. Hippe, J. L. Kulikowski, T. Mroczek, & J. Wtorek (Eds.), Human-computer systems interaction: backgrounds and applications 3 advances in intelligent systems and computing (pp. 261–271). New York: Springer.
Dario, P., et al. (2011). Robot companions for citizens. Procedia Computer Science, 7, 47–51.
Esposito, R., et al. (2015). Supporting active and healthy aging with advanced robotics integrated in smart environment. In Y. Morsi, A. Shukla, & C. Rathore (Eds.), Optimizing assistive technologies for aging populations (pp. 46–77). Hershey: Medical Information Science Reference. doi:10.4018/978-1-4666-9530-6.ch003.
Fox, D., Burgard, W., & Thrun, S. (1997). The dynamic window approach to collision avoidance. IEEE Robotics & Automation Magazine, 4, 23–33.
Google Calendar for developers official website: https://developers.google.com/google-apps/calendar/.
Gostai official website available at: http://www.gostai.com/activities/consumer.
Heerink, M., Kröse, B., Evers, V., & Wielinga, B. (2009). Measuring acceptance of an assistive social robot: A suggested toolkit. In The 18th IEEE International Symposium on Robot and Human Interactive Communication, RO-MAN.
Hu, G., Tay, W. P., & Wen, Y. (2012). Cloud robotics: Architecture, challenges and applications. IEEE Network, 26, 21–28.
International Federation of Robotics: http://www.ifr.org, visited in February 2015.
Inaba, M. (1997). Remote-brained robots. In IJCAI (pp. 1593–1606).
Kamei, K., Nishio, S., Hagita, N., & Sato, M. (2012). Cloud networked robotics. IEEE Network, 26, 28–34.
Kehoe, B., Matsukawa, A., Candido, S., Kuffner, J., & Goldberg, K. (2013). Cloud-based robot grasping with the Google object recognition engine. In IEEE International Conference on Robotics and Automation (ICRA) (pp. 4263–4270).
Kehoe, B., Patil, S., Abbeel, P., & Goldberg, K. (2015). A survey of research on cloud robotics and automation. IEEE Transactions on Automation Science and Engineering, 12, 398–409.
Klamer, T., & Ben Allouch, S. (2010). Acceptance and use of a social robot by elderly users in a domestic environment. In Proceedings of the 4th International ICST Conference on Pervasive Computing Technologies for Healthcare (PervasiveHealth), Munchen, Germany.
Kuffner, J.J. (2010). Cloud-enabled robots. In IEEE-RAS International Conference on Humanoid Robotics, Nashville, TN.
KuKa youBot official website: http://www.youbot-store.com.
Lewis, C. (1982). Using the “thinking-aloud” method in cognitive interface design. Yorktown Heights: IBM TJ Watson Research Center.
Lorencik, D., & Sincak, P. (2013). Cloud Robotics: Current trends and possible use as a service. In IEEE 11th International Symposium on Applied Machine Intelligence and Informatics (SAMI), 2013.
Mobile Planet. Think with Google, our mobile planet, official website: http://think.withgoogle.com/mobileplanet.
Moschetti, A., Fiorini, L., Aquilano, M., Cavallo, F., & Dario, P. (2014). Preliminary findings of the AALIANCE2 ambient assisted living roadmap, ambient assisted living. Berlin: Springer.
Nister, D., & Stewenius, H. (2006). Scalable recognition with a vocabulary tree. IEEE Computer Society Conference Computer Vision and Pattern Recognition, 2, (pp. 2161–2168).
ODUFinder ROS package, available at: http://wiki.ros.org.
Oliveira, G., & Isler, V. (2013). View planning for cloud-based active object recognition, Department of Computer Science, University of Minnesota. Rep: Tech.
Quigley, M., et al. (2009). ROS: An open-source robot operating system. In ICRA Workshop on Open Source Software., Vol. 3. No. 3.2.
Quintas, J., Menezes, P., & Dias, J. (2011). Cloud robotics: Towards context aware robotic networks. In International Conference on Robotics.
Riazuelo, L., Civera, J., & Montiel, J. M. M. (2014). C2TAM: A Cloud framework for cooperative tracking and mapping. Robotics and Autonomous Systems, 62(4), 401–413.
RoboEarth project official website: http://roboearth.org.
Romotive official website: http://romotive.com.
RP-VITA: Remote telepresence, available at: http://www.intouchhealth.com/products-and-services/products/rp-vita-robot.
Salvini, P., Laschi, C., & Dario, P. (2010). Design for acceptability: Improving robots’ coexistence in man society. International Journal of Social Robotics, 2, 451–460.
Sanfeliu, A., Norihiro, H., & Saffiotti, A. (2008). Network robot systems. Robotics and Autonomous Systems, 56(10), 793–797.
Simonov, M., Bazzani, M., & Frisiello, A. (2012). Ubiquitous monitoring & service robot for care. In Proceedings of KI-2012 conference, Germany.
Solis, P., & Carlaw, S. (2013). Consumer and personal robotics, ABI Research.
Stula, S. (2012). Living in old age in Europe—current developments and challenges, No. 7, Working Paper.
Tenorth, M., Klank, U., Pangercic, D., & Beetz, M. (2011). Web enabled robots: Robots that used the web as an information resource. IEEE Robotics and Automation Magazine, 18(2), 58–68.
Thomas, D. R. (2006). A general inductive approach for analyzing qualitative evaluation data. American Journal of Evaluation, 27(2), 237–246.
Thrun, S., Burgard, W., & Fox, D. (2005). Probabilistic robotics. Cambridge: MIT Press.
van den Broek, G., Cavallo, F., & Wehrmann, C. (2010). AALIANCE ambient assisted living roadmap. Amsterdam: IOS press.
Volkhardt, M., et al. (2011). Playing hide and seek with a mobile companion robot, In 11th IEEE-RAS International Conference on Humanoid Robots (Humanoids).
Zaidi, A., Makovec, M., Fuchs, M., Lipszyc, B., Lelkes, O., Rummel, M., & de Vos, K. (2006). Poverty of elderly people in EU25, Policy Brief, August.
Acknowledgments
This work was supported by the European Community’s 7th Framework Program (FP7-ICT-2011) under Grant agreement No. 288899 (Robot-Era Project) and grant agreement No. 601116 (Echord++ project). Additionally, this work was supported by OmniaRoboCare Project Programma Operativo Regionale Tuscany Por CReO Fesr 2007-2013.
Additional information
This is one of several papers published in Autonomous Robots comprising the “Special Issue on Assistive and Rehabilitation Robotics”.
Alessandro Manzi and Laura Fiorini have contributed equally to this work.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.