Preprint
Article

Compassionate Care with Autonomous Humanoid Robots in Future Healthcare Delivery: A Multisensory Simulation of Next-Generation Models

A peer-reviewed article of this preprint also exists.

This version is not peer-reviewed

Submitted: 30 August 2024
Posted: 02 September 2024

Abstract
This paper investigates the integration of artificial intelligence (AI) and robotics in healthcare, highlighting the potential of autonomous humanoid robots to replicate compassionate care while centering on ethical and safety concerns related to reliability, quality, and empathy. It reviews the benefits, challenges, and ethical considerations of deploying humanoid robots in care settings, including user acceptance and technological limitations, and questions their ability to genuinely simulate compassion. Through multisensory simulations of human-robot interactions (HRIs) using agent-based modeling, the study reveals complex dynamics among patients, robots, and healthcare systems, suggesting that advanced robot designs can enhance personalized sensing with quantum and neuromorphic information processing. Additionally, it analyzes the philosophical implications of robotic nursing care, advocating for a posthumanist perspective that recognizes nonhuman caring agencies, and concludes with recommendations for future research to improve communication and humanize the capabilities of robots in caregiving.
Keywords: 
Subject: Public Health and Healthcare – Nursing

1. Introduction

The integration of artificial intelligence (AI) and robotics in healthcare raises ethical and safety concerns regarding reliability, quality, and empathy [1]. Effective communication is crucial, as demonstrated by the Pepper robot, which utilizes advanced signal processing for speech and emotional recognition [2]. In Japan, nurses act as intermediaries between humanoid robots and patients, reflecting a conservative stance that may hinder the full acceptance of robotic care [3]. Traditional nursing philosophies often exclude robotics, challenging existing care paradigms, as “caring” involves complex cognitive functions and emotional validation [4]. For humanoid robots to be perceived as caring, they must incorporate advanced mechatronic design and natural language processing [5]. As robots evolve within society, they have the potential to become integral to human experiences [6], with future AI capable of organizing information to acquire knowledge [7,8]. This integration emphasizes the need for intuitive models of robots in nursing practices and a clear definition of ‘humanness’ in robots to enhance user acceptance, supported by theories such as Technological Competency as Caring in Nursing (TCCN) by Locsin [9] and the Transactive Relationship Theory of Nursing (TRETON) by Tanioka [10]. Despite skepticism about robots in caregiving, nurse and robot caring as a system could promote tolerance and resilience, offering an opportunity to reframe nursing within a posthumanist context that embraces nonhuman caring agencies [11].

1.1. Aims

This paper discusses the application of robot technology in healthcare by identifying key concerns from the literature, including the technological advancements of robotics through artificial intelligence, the impact of robotics on human caring and nursing philosophy, and the potential outcomes of robot-led nursing as a progression in contemporary practice using multisensory simulation models.

2. Salient Discussions in Humanoid Robotics

Thirty-four studies collectively underscore the potential benefits, challenges, and ethical considerations of deploying social robots in care settings. While the benefits are promising—such as enhanced patient engagement, support for healthcare workers, and potential solutions to workforce shortages—the challenges and ethical concerns are substantial. These include user acceptance, technological limitations, and the fundamental question of whether robots can deliver genuine care. The future of social robots in healthcare hinges on addressing these challenges through rigorous research, ethical scrutiny, and policy development to ensure their effective and responsible integration into care settings.

2.1. Potential Benefits and Applications

Tanioka et al. [2,12,13,14,15] provided a foundational exploration of humanoid nursing robots, emphasizing their potential to augment nursing care amidst an aging population and nursing shortages. Their work underscores the importance of designing robots that can express empathy and engage in meaningful interactions with patients. Similarly, Cano et al. [16] and Trainum et al. [17] focused on the design and implementation of social robots for children with autism spectrum disorder (ASD), highlighting the potential for these robots to facilitate communication and engagement, thereby improving therapeutic outcomes.
The studies by Osaka et al. [18] and Miyagawa et al. [19] explored the use of robots in dementia care, demonstrating their ability to provide cognitive stimulation and emotional support to patients. Hung et al. [20,21] investigated the potential of social robots in pediatric care, where they can serve as companions and motivators for children undergoing medical treatments. Abdi et al. [22] and Guemghar et al. [23] examined the use of socially assistive robots for older adults, highlighting their role in promoting independence and reducing feelings of loneliness.

2.2. User Acceptance and Interaction

User acceptance is a critical factor in the successful implementation of social robots. David et al. [24] found that the acceptability of social robots is influenced by factors such as the intended use, degree of interaction, and user characteristics. Their review indicates a generally positive attitude towards social robots, although ambivalence and resistance are also noted. Betriana et al. [25,26] emphasized the importance of understanding generational differences in the appreciation and utilization of healthcare robots, with Generation Z showing more engagement compared to Baby Boomers. Hurst et al. [27] and Triantafyllidis et al. [28] explored the acceptance of social robots in different healthcare settings, including hospitals and home care, highlighting the need for tailored approaches to foster acceptance across diverse user groups.

2.3. Ethical and Policy Considerations

Ethical and policy considerations are at the forefront of many studies. Sætra [29] proposed policy frameworks to evaluate the use of social robots, focusing on the structure, process, and outcome of care. Locsin and Ito [30] and Locsin et al. [31] delved into the philosophical and ethical implications of using robots in nursing care, questioning whether robots can truly provide “caring” in the human sense. Kipnis et al. [32] and Persson et al. [33] addressed the ethical challenges of human-robot interaction (HRI) in healthcare settings, emphasizing the need to respect the autonomy and dignity of care recipients and ensure that robots do not replace necessary human contact and relationships.

2.4. Challenges and Limitations

Despite the potential benefits, the deployment of social robots faces several challenges. González-González et al. [34] pointed out technological limitations, user acceptance issues, integration challenges with existing healthcare workflows, and scalability concerns. They also noted the lack of a standardized terminology and consolidated research community around social robots in healthcare, suggesting a need for more extensive collaboration and consensus-building. Many studies, including those by Sætra [29] and Locsin and Ito [30], are theoretical and lack empirical data, limiting their applicability in real-world settings.

2.5. Impact on Healthcare Professionals

The impact of social robots on healthcare professionals is another important theme found in the literature. Dawe et al. [35] and Hernandez [36] explored how the introduction of robots affects the work of nurses and other healthcare providers, highlighting both the potential for support and the risk of job displacement. Morgan et al. [37] and Soriano et al. [38] investigated the potential of robots to support healthcare workers and improve patient care, emphasizing the need for careful integration to enhance rather than replace human labor.

2.6. Future Directions

Looking to the future, several studies highlighted the need for further research and development. Ohneberg et al. [39] called for more interdisciplinary collaboration in the development of social robots for healthcare, emphasizing the importance of integrating insights from fields such as artificial intelligence, human-computer interaction, and healthcare. Trainum et al. [17] emphasized the importance of user-centered design in creating effective social robots for specific populations, while Kyrarini et al. [40] and Kitt et al. [41] discussed the technical challenges in developing social robots for healthcare applications.

3. Methods

The research design is a computational simulation that uses agent-based modeling of patient-robot-healthcare system interactions. It incorporates stochastic elements and quantum-inspired concepts to simulate complex behaviors over 100 iterations, enabling the analysis of dynamic healthcare interactions without the need for extensive real-world data.

3.1. Conceptualizations of Humanoid Robots and Healthcare Systems

The potential of using empathetic robotic systems in healthcare is conceptualized in this paper by linking quantum mechanics and communication theory to propose a closed-loop feedback system, represented by the wave function (Ψ) (Figure 5). This system suggests that humanoid robots and humans create a detectable “spooky energetic field” during significant engagement, emphasizing concepts like “entanglement” and “superpositioning” to promote multimodal communication and “posthumanistic intelligence” [42]. The adaptive capabilities of robots, including emotion-based and memory-based adaptations, are essential for effective HRI [43]. The ongoing debate about validating robots as autonomous agents capable of emotional communication [44] is complemented by the development of “pseudo-empathic” robots, which address caregiving limitations and enhance social capabilities through anthropomorphism [45,46,47]. Overall, compassionate AI in healthcare robots aims to improve communication and understanding, ultimately humanizing robotic interactions in caregiving contexts (Figure 1).
Healthcare robots can embody ‘humanness’ in caring by moving beyond mechanistic responses to provide explicit, anticipatory care through anthropomorphic form, logical thinking, and nonlinear learning (Figure 9). This type of humanoid robot emphasizes the importance of ‘intentionality’ (interpreting the environment to formulate desired actions [48]) and of using consciousness to influence well-being [49]. Authentic caring in robots involves a dynamic interplay between cognition and action, with the potential for developing consciousness through information entropy integration [50]. To emulate humanness, robots must be self-determining, with intent and conscious decision-making, respecting human dignity and displaying compassionate care [51,52]. The challenge is for robots to help patients gain an implicit sense of being, guided by healing energies, akin to the trust and empowerment provided by human nurses.

3.2. Simulations

The computational process was executed using Python 3.11.3 in Microsoft Visual Studio Code 1.91.0. Key libraries included random for stochastic processes, NumPy for numerical operations, matplotlib.pyplot for visualization, and pandas for data manipulation. pydub handled audio manipulation, while scikit-learn supported machine learning through its metrics, model selection, and ensemble modules, including the random forest regressor. scipy.stats was used for statistical analysis and collections.deque for circular memory structures. Functions such as "random.uniform", "np.clip", "stats.pearsonr", "mean_squared_error", and "cross_val_score" facilitated simulation, data processing, and model evaluation.
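A minimal setup sketch consistent with these dependencies is shown below; the variable names, seed, and memory size are illustrative assumptions rather than the author's original code.

```python
# Illustrative setup only; names and values are assumptions, not the source code.
import random
from collections import deque

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pydub import AudioSegment
from scipy import stats
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import cross_val_score

random.seed(42)                # make the stochastic elements reproducible
np.random.seed(42)

N_ITERATIONS = 100             # each simulation runs for 100 iterations
memory = deque(maxlen=100)     # circular memory structure for interaction histories
```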
Figure 2, Figure 3 and Figure 4 simulate the information flow among the healthcare system, robot, and patient over 100 iterations, using Matplotlib for visualization. The validate_model function ensured simulation integrity by checking state ranges, correlations, trends, and variability, as shown in Figure 3, while sonification mapped state values to audio frequencies in Figure 4. Figure 6, Figure 7, Figure 8, and Figure 10 introduce quantum concepts such as entanglement and superpositioning to simulate the caring process, covering sensor input, perception, cognition, intentionality, action, and learning, with features such as wave function collapse. Figure 11 focuses on machine learning validation, training a random forest regressor on simulation data and validating it with mean squared error (MSE), R-squared score, and cross-validation, along with feature importance analysis. Figure 12 presents sonified data points. Overall, extensive data visualization supported a thorough exploration and validation of all models.
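As a hedged illustration, the integrity checks described for validate_model might look like the following sketch; the exact thresholds and return format are assumptions.

```python
# Sketch of the described integrity checks; thresholds and output format are assumed.
import numpy as np
from scipy import stats

def validate_model(patient, robot, system):
    """Check state ranges, robot-system correlation, trends, and variability."""
    series = {"patient": np.asarray(patient),
              "robot": np.asarray(robot),
              "system": np.asarray(system)}

    # 1. Every state value must stay within the normalized range [0, 1].
    in_range = all(((s >= 0) & (s <= 1)).all() for s in series.values())

    # 2. Pearson correlation between robot and system states.
    corr, _ = stats.pearsonr(series["robot"], series["system"])

    # 3. Linear trend (slope per iteration) of each series.
    t = np.arange(len(series["patient"]))
    trends = {name: float(np.polyfit(t, s, 1)[0]) for name, s in series.items()}

    # 4. Variability of each series as its standard deviation.
    variability = {name: float(s.std()) for name, s in series.items()}

    return {"in_range": in_range, "robot_system_correlation": float(corr),
            "trends": trends, "variability": variability}
```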
Figure 2. Communication in Level 3 HRI.
Figure 3. Model validation for Level 3 HRI.
Figure 4. Representation of dissonance with Level 3 HRI.
Figure 5. Representation of Level 4 HRI.
Figure 6. Communication, entanglement, and superpositioning of the three states.
Figure 7. Model validation involving overlapping states.
Figure 8. Sonification of frequencies between states exhibiting quantum relationships.
Figure 9. Envisioned (intuitive) robot architecture. Note. (a) Information processing must be dynamic, symbolically instantiated (unsupervised), and evolving (unbounded materially) based on second-order cybernetics [53] to enable artificial cognition that is fluid and capable of co-creating knowledge within the healthcare network. (b) Alternatively, it can involve the construction and decomposition of information granules [54], applicable to both algorithmic (deductive) and non-algorithmic (inductive and abductive) computing using quantum logic. This process evolves through machine learning with quantum logic.
Figure 10. Care actions and intentionality construed from wave function collapse.
Figure 11. Model validation using machine learning.
Figure 12. Data sonification of simulated care actions.
Figure 13 uses pydub, numpy, and matplotlib.pyplot to analyze the sonification audio files. Audio files are loaded with AudioSegment.from_mp3, converted to numpy arrays using get_array_of_samples, and then normalized. Waveforms are visualized with plt.plot in separate subplots for comparison. The numpy functions np.mean and np.std calculate the mean and standard deviation, quantifying central tendency and variability. This combines visual and statistical analysis to understand trends and differences in the sonification data.
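A sketch of this waveform comparison is given below; the file names are placeholders for the three exported sonification files, not the original paths.

```python
# Waveform comparison sketch; file names are placeholders, not the original paths.
import numpy as np
import matplotlib.pyplot as plt
from pydub import AudioSegment

files = ["figure4_sonification.mp3", "figure8_sonification.mp3", "figure12_sonification.mp3"]

fig, axes = plt.subplots(len(files), 1, figsize=(10, 8))
for ax, path in zip(axes, files):
    audio = AudioSegment.from_mp3(path)
    samples = np.array(audio.get_array_of_samples(), dtype=float)
    samples /= np.abs(samples).max()                      # normalize amplitudes
    ax.plot(samples, linewidth=0.5)
    ax.set_title(f"{path}: mean={samples.mean():.4f}, std={samples.std():.4f}")
    ax.set_ylabel("Amplitude")
axes[-1].set_xlabel("Sample index")
plt.tight_layout()
plt.show()
```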

4. Results

The model illustrates a three-level healthcare interaction framework that utilizes "cybernetic communication" (i.e., the Shannon-Weaver model of communication in a circular, self-regulating process) to improve healthcare delivery (Figure 1). It involves three entities: the end-user (patient/client), the intermediary (humanoid robot), and the provider (healthcare system). Communication flows are depicted with arrows, indicating the direction of information exchange. A solid black arrow represents the flow from the healthcare system to the humanoid robot and then to the patient/client. Feedback loops are shown by a red dashed arrow ("Feedback 1") from the patient/client to the robot and a black dashed arrow ("Feedback 2") from the robot to the healthcare system. This model emphasizes the humanoid robot's role as an intermediary, facilitating two-way feedback and improving communication between patients and providers. By integrating technology, the model aims to offer more personalized and accessible healthcare services, highlighting the potential of humanoid robots in modern healthcare systems.
This model can be represented by the equation $X_{t+1} = \mathrm{clip}\left(\frac{X_t + Y_t}{2} + \epsilon\right)$, with $\epsilon \sim U(-\alpha, \alpha)$. Here, X and Y are interacting states (patient, robot, or system), t is the time step, and ϵ is a random variable. The parameter α is 0.1 for state updates and 0.2 for feedback. The clip function, $\mathrm{clip}(x) = \max(0, \min(1, x))$, keeps state values within [0, 1]. This equation captures the dynamics of state updates and feedback, combining deterministic interactions and stochastic variability to simulate healthcare interactions.
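A minimal sketch of one simulation pass under this update rule follows; the ordering of the forward and feedback updates is an assumption based on the communication flows described above.

```python
# Sketch only: the order of forward/feedback updates is assumed from Figure 1's flows.
import random

def clip01(x):
    """clip(x) = max(0, min(1, x))."""
    return max(0.0, min(1.0, x))

def update(x, y, alpha):
    """X_{t+1} = clip((X_t + Y_t) / 2 + eps), with eps ~ U(-alpha, alpha)."""
    return clip01((x + y) / 2 + random.uniform(-alpha, alpha))

patient, robot, system = 0.5, 0.5, 0.5
history = []
for t in range(100):
    robot = update(robot, system, alpha=0.1)     # healthcare system -> robot
    patient = update(patient, robot, alpha=0.1)  # robot -> patient
    robot = update(robot, patient, alpha=0.2)    # Feedback 1: patient -> robot
    system = update(system, robot, alpha=0.2)    # Feedback 2: robot -> system
    history.append((patient, robot, system))
```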
This healthcare interaction model simulation (Figure 2), based on a three-level cybernetic communication system (Figure 1), was executed over 100 iterations to explore the dynamics between a patient, a humanoid robot intermediary, and a healthcare system. The model simulates information flow from the healthcare system to the robot, communication from the robot to the patient, and feedback loops between the patient and the robot, as well as from the robot back to the healthcare system. After 100 iterations, the final states were: patient at 0.4929, robot at 0.2875, and healthcare system at 0.8390. These results indicate an improvement in the healthcare system's state, likely due to effective feedback, while the robot's state decreased, suggesting potential stress or demand on the intermediary. The patient's state remained relatively stable. The accompanying graph visualizes these changes, highlighting the complex interactions within the system. This simulation reveals the effects of cybernetic communication and feedback loops in a technology-enhanced healthcare model.
This validation process (Figure 3) spanned 100 iterations and was based on the interactions between a patient, a humanoid robot intermediary, and a healthcare system, including information flow and feedback loops (Figure 2). The final states after simulation were as follows: patient (0.4627), robot (0.7121), and healthcare system (0.4120). The model’s accuracy was assessed using MSE against target states of 0.5 for all entities, yielding MSE values of: patient (0.0817), robot (0.0806), and system (0.0845). These low MSE values indicate that the model's behavior aligns well with the expected targets. The simulation graph visualizes state evolutions, demonstrating dynamic interactions and feedback effects. This validated model provides a quantitative measure of the stability and effectiveness of the cybernetic communication system in healthcare, enhancing its potential for real-world application assessment.
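The accuracy check amounts to a mean squared error against a constant target of 0.5 for each entity; a minimal sketch, with placeholder state histories standing in for the simulated series, is:

```python
# MSE against the 0.5 target states; the histories below are placeholders.
import numpy as np

target = 0.5
histories = {"patient": np.random.rand(100),
             "robot": np.random.rand(100),
             "system": np.random.rand(100)}
mse = {name: float(np.mean((series - target) ** 2)) for name, series in histories.items()}
print(mse)   # e.g., {'patient': ..., 'robot': ..., 'system': ...}
```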
The sonification process (Figure 4) maps entity states to specific frequency ranges: patient (200-800 Hz), robot (400-1000 Hz), and healthcare system (600-1200 Hz). A dissonance tone (100-300 Hz) highlights misalignment. Tones for each entity and the dissonance are created, overlaid, and concatenated. Final states are: patient at 0.1684 (301.07 Hz), robot at 0.4441 (666.46 Hz), healthcare system at 0.4011 (840.67 Hz), and dissonance at 210.26 Hz. The dissonance tone is emphasized at -5 dB, while entity tones are at -10 dB. This method enhances the audible representation of system dynamics, allowing listeners to perceive alignment and misalignment through harmonious and dissonant sounds.
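A sketch of the tone generation under this mapping is shown below; the per-iteration tone length and the dissonance measure (taken here as the patient-system gap) are assumptions not specified in the text.

```python
# Sonification sketch; tone length and the dissonance measure are assumptions.
import numpy as np
from pydub import AudioSegment
from pydub.generators import Sine

RANGES = {"patient": (200, 800), "robot": (400, 1000),
          "system": (600, 1200), "dissonance": (100, 300)}

def to_freq(value, lo, hi):
    """Map a state value in [0, 1] to a frequency in [lo, hi] Hz."""
    return lo + value * (hi - lo)

def iteration_tone(patient, robot, system, ms=100):
    """Overlay one tone per entity plus a dissonance tone for a single iteration."""
    dissonance = abs(patient - system)   # assumed misalignment measure
    mix = Sine(to_freq(patient, *RANGES["patient"])).to_audio_segment(duration=ms).apply_gain(-10)
    for name, value, gain in [("robot", robot, -10), ("system", system, -10),
                              ("dissonance", dissonance, -5)]:
        tone = Sine(to_freq(value, *RANGES[name])).to_audio_segment(duration=ms).apply_gain(gain)
        mix = mix.overlay(tone)
    return mix

states = np.random.rand(100, 3)          # placeholder (patient, robot, system) history
audio = sum((iteration_tone(*row) for row in states), AudioSegment.silent(duration=0))
audio.export("level3_sonification.mp3", format="mp3")
```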
This illustrates a Level 4 relationship in healthcare (Figure 5), depicting dynamic communication among a patient/client, a humanoid robot, and the healthcare system. The central diagram features three interconnected circles labeled "Patient/Client," "Humanoid Robot," and "Healthcare System," linked by lines that symbolize interactions. A green squiggly line between the patient/client and the humanoid robot represents dynamic communication, while the connection between the robot and the healthcare system is labeled "Entanglement," indicating a deep interaction. The concept of "superpositioning" is shown by a line looping from the humanoid robot, suggesting overlapping roles or states. The accompanying text explains that this relationship is based on "intra-actions" within a compassionate network, analogous to variable quantum energy states during communication. It emphasizes shared capacities and adaptive interactions, with roles shifting without loss of domain. Annotations in the diagram provide additional details. Overall, the diagram serves to illustrate advanced concepts of interaction and communication in a technology-integrated healthcare setting.
This model can be represented by the equation $X_{t+1} = \mathrm{clip}\left(X_t + C\,\epsilon_1 + E\,X_t + O\right)$, where X represents the state (patient, robot, or system) at time step t. This equation incorporates three key elements of the simulation: “dynamic communication,” “entanglement,” and “superpositioning.” The communication intensity C and its effect ϵ1 are drawn from the uniform distributions U(0, 1) and U(−0.1, 0.1), respectively. The entanglement factor E is sampled from U(0.8, 1.2), while the superpositioning overlap O comes from U(0, 0.2). The clip function, $\mathrm{clip}(x) = \max(0, \min(1, x))$, ensures that all state values remain within the normalized range [0, 1]. This formulation captures the complex interactions and stochastic nature of the healthcare simulation, providing a framework for analyzing the dynamic relationships between patients, robotic assistants, and the healthcare system.
The graph (Figure 6) illustrates the states of the patient, humanoid robot, and healthcare system over 100 iterations, derived from a conceptual Level 4 system (Figure 5). This simulation captures the interactions among components, with each state evolving based on dynamic communication, entanglement, and superpositioning. The initial states for the patient, robot, and system are set to 0.5, reflecting their health and performance levels. Communication intensity ranges from 0 to 1, the entanglement factor varies between 0.8 and 1.2, and the overlap factor ranges from 0 to 0.2, simulating the effects of superpositioning. The simulation tracks the history of states to analyze their dynamics, providing a framework for understanding complex interactions in a technology-integrated healthcare setting.
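Under the printed Level 4 rule, a single state update could be sketched as follows. This is a literal transcription of the formula; any normalization or cross-entity coupling used in the original code is not specified in the text and is therefore omitted here.

```python
# Literal sketch of the printed Level 4 update; any additional normalization or
# cross-entity coupling in the original implementation is not given in the text.
import random

def clip01(x):
    return max(0.0, min(1.0, x))

def level4_step(x):
    C = random.uniform(0.0, 1.0)      # communication intensity, U(0, 1)
    eps1 = random.uniform(-0.1, 0.1)  # communication effect, U(-0.1, 0.1)
    E = random.uniform(0.8, 1.2)      # entanglement factor, U(0.8, 1.2)
    O = random.uniform(0.0, 0.2)      # superpositioning overlap, U(0, 0.2)
    return clip01(x + C * eps1 + E * x + O)

patient = robot = system = 0.5        # initial states, as described above
history = [(patient, robot, system)]
for _ in range(100):
    patient, robot, system = level4_step(patient), level4_step(robot), level4_step(system)
    history.append((patient, robot, system))
```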
This model validation (Figure 7) demonstrates the robustness and coherence of the simulation of the healthcare model, incorporating dynamic communication, entanglement, and superpositioning (Figure 6). All states remain within the expected range of [0, 1], with a state constraint value of 1.0000, confirming the model's adherence to logical constraints. A high correlation of 0.9801 between the robot and healthcare system states indicates strong entanglement, aligning with the theoretical framework. The minimal trends observed for patient (0.0009), robot (0.0008), and system (0.0008) states suggest stability without significant divergence over time. Variability in states is evident, with standard deviations of 0.0749 for the patient, 0.1038 for the robot, and 0.0979 for the system, indicating dynamic behavior without constant values. These results confirm that the model behaves as expected, maintaining state constraints, showing strong entanglement, and exhibiting appropriate variability and stability. The simulation thus provides a conceptual framework for understanding complex interactions in a healthcare setting that integrates technology and human care, capturing the essence of dynamic communication, quantum-like entanglement, and superpositioning of roles as described in the original diagram.
This multisensory representation illustrates the complex interactions between a patient, a robot, and a healthcare system (Figure 8) over 100 iterations, incorporating dynamic communication, entanglement, and superposition. The resulting time series data was then transformed into an auditory experience by mapping state values to specific frequency ranges: patient (200-800 Hz), robot (300-900 Hz), system (400-1000 Hz), entanglement (100-300 Hz), and superposition (1000-1500 Hz). For each iteration, tones were generated and overlaid, creating a 10.00-second audio file that represents the system's evolving state. This audio was normalized to maximum volume for clarity. Complementing the sonification, a visual line plot was generated to illustrate the changing states over time. The process yielded key statistics: an average entanglement strength of 0.5457 and an average superposition strength of 0.4909, indicating significant quantum-like effects throughout the simulation. These auditory and visual outputs offer an intuitive understanding of the healthcare model's dynamics, allowing users to both “hear” and “see” the complex relationships and their evolution over time.
Figure 9 can serve as the foundation for developing autonomous humanoid robots capable of providing compassionate care. The integration of these components—“Sensor Input and Perception,” “Cognition and Intentionality,” “Memory and Learning,” “Wave Function Collapse and Quantum-like Behavior,” and “Action and Adaptation”—along with ethical frameworks, enables the robot to handle complex decision-making processes, navigate the intricacies of human emotions and social interactions, and provide personalized and compassionate care that adapts to the changing needs and emotions of individuals. A minimal code sketch of this caring loop is given after the list below.
  • Sensor Input and Perception: The system detects signals from both external and internal environments through sensors, filters out noise, and interprets these signals to understand the context and needs of the environment.
  • Cognition and Intentionality: The system applies cognitive algorithms to process the information stochastically and assemble it into a meaningful form that reflects real-world understanding. This stage is crucial for forming a prior intention to execute a program. The intentionality phase is where the system’s response is valorized and adjusted based on the intention, or it may proceed to further cognitive processing if there is significant realization or uncertainty.
  • Memory and Learning: The system incorporates a contiguous and circular memory, storing and retrieving interaction histories as unique experiences, which are used to inform future responses. The system also incorporates reinforcement learning with granular mapping to refine its responses.
  • Wave Function Collapse and Quantum-like Behavior: A key feature of the system is the wave function collapse, occurring at the entanglement stage, where the synthetic equivalent of human consciousness interacts with human agents. This interaction is contingent upon the complexity of the entanglement and involves cognitive algorithms that may be reversible or incidental to intentionality.
  • Action and Adaptation: The system’s (re)actions are driven by the intention, leading to the execution of the response. The system also outputs signals through effectors, recoding them with perturbations to simulate emergent exigencies. The process is dynamic and iterative, with feedback loops between perception, cognition, intentionality, and actions, allowing the system to continuously learn and adapt to provide compassionate care.
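A minimal sketch of this loop, assuming the collapse probability (0.2) and decaying learning rate reported in the Results, is shown below; the individual function bodies and noise levels are illustrative assumptions, not the author's original code.

```python
# Minimal caring-loop sketch for Figure 9; bodies and noise levels are assumptions.
import random
from collections import deque

random.seed(1)
memory = deque(maxlen=100)        # circular memory of past care actions
learning_rate = 0.1
COLLAPSE_PROB = 0.2               # probability of a wave-function-collapse event

care_actions = []
for t in range(100):
    # Sensor input and perception: noisy reading of the environment.
    signal = random.uniform(-1.0, 1.0)
    perception = signal + random.gauss(0.0, 0.1)

    # Cognition and intentionality: form an intention, shaped by past experience.
    experience = sum(memory) / len(memory) if memory else 0.0
    intention = max(-1.0, min(1.0, perception + learning_rate * experience))

    # Wave function collapse: occasionally flip the intention.
    collapsed = random.random() < COLLAPSE_PROB
    if collapsed:
        intention = -intention

    # Action and adaptation: execute the bounded care action and remember it.
    action = max(-1.0, min(1.0, intention))
    memory.append(action)
    care_actions.append(action)

    # Learning: gradually reduce the learning rate to reflect growing stability.
    learning_rate *= 0.99
```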
This model can be represented by the equation $X_{t+1} = \mathrm{clip}\left(X_t + C\,\epsilon + E\,X_t + O\right)$, where X denotes the state (patient, robot, or system) at time step t. This equation integrates three primary components: dynamic communication, entanglement, and superpositioning. The communication intensity C and its effect ϵ are drawn from the uniform distributions U(0, 1) and U(−0.1, 0.1), respectively. The entanglement factor E is sampled from U(0.8, 1.2), while the superpositioning overlap O comes from U(0, 0.2). The clip function, $\mathrm{clip}(x) = \max(0, \min(1, x))$, ensures that all state values remain within the normalized range [0, 1]. This formulation captures the complex interactions and stochastic nature of the simulation, providing a framework for analyzing the dynamic relationships between patients, robotic assistants, and the healthcare system.
The simulation (Figure 10) ran for 100 iterations, each representing a cycle of care provision. During each iteration, the system received signals from the environment, interpreted them with some noise, processed them through cognitive algorithms, and formed an intention. This intention may require further cognitive processing if uncertainty is high. The system then converted the intention into an action, ensuring it remains within reasonable bounds. A unique feature is the wave function collapse, which occasionally flips the intention, simulating quantum-like behavior. The system also updated its memory with each action and learned from past experiences, gradually decreasing its learning rate to reflect increased stability over time.
The output of the simulation shows a range of care actions, from strongly positive to strongly negative, reflecting the system's ability to respond to different situations with varied caring behaviors. This variability indicates the system's adaptability to changing circumstances or needs. The decreasing learning rate and full memory suggest that the system is accumulating experiences and becoming more stable, while still maintaining adaptability. The actions are distributed around zero, indicating a balance between different types of caring behaviors over time. This simulation demonstrates how an autonomous humanoid robot could potentially provide care with compassion, adapting its responses based on environmental inputs, cognitive processing, and learned experiences. The variability and adaptability shown in the simulation reflect the complex decision-making process outlined in the original diagram, incorporating elements of perception, cognition, intentionality, and learning.
The blue line in the graph (Figure 10) represents the care actions taken by the system over 100 iterations, with values ranging from -1 to 1, indicating various caring behaviors. Red dots mark instances of Wave Function Collapse, where the system's intention abruptly flips, reflecting quantum-like behavior. The simulation experienced 17 collapses, consistent with a set probability of 0.2, highlighting the system's unpredictability. The care actions show significant variability, demonstrating the system's adaptability and ability to provide a range of caring behaviors. The final learning rate of 0.0366 indicates increased stability over time, while the average care action of 0.3084 suggests a slight bias towards positive, supportive care. The standard deviation of 0.3894 underscores the diversity in responses, showcasing the system's capacity to adapt to various scenarios. The wave function collapses introduce unpredictability, enabling the system to explore new caring strategies and avoid local optima.
Values from -1 to 1 represent different caring behaviors. Positive values (0 to 1) indicate supportive actions, with higher values reflecting more active support, such as encouragement or comfort, essential for enhancing well-being. Negative values (-1 to 0) signify corrective behaviors, with values closer to -1 indicating assertive actions like setting boundaries or preventing harm. Neutral values near zero suggest balanced behaviors, such as monitoring without intervention, allowing individuals to maintain autonomy while having support available. Caring behaviors can be categorized by several dimensions: intensity (magnitude of the value), type (supportive vs. corrective), context (specific caregiving situations), and intended outcomes (enhancing well-being vs. preventing harm). Understanding these dimensions allows for more targeted and effective caregiving strategies in the proposed system design.
Model validation (Figure 11) reveals its complex and unpredictable nature, as evidenced by the poor performance of a random forest regressor, which yields a negative R-squared score of -0.1584 and a high MSE of 0.2355. Cross-validation R-squared scores range from -0.56 to -0.20, averaging -0.3372, indicating unreliable predictions across data subsets. The simulation exhibits significant variability, with 195 wave function collapses in 1000 iterations, an average care action of 0.2679, and a standard deviation of 0.4109, suggesting that care actions emerge from intricate interactions among multiple components, including quantum-like collapses and learning from past experiences. This complexity mirrors the nuanced, context-dependent nature of human caregiving, positioning the system as more akin to a real-world caring entity than a deterministic model. Future research could enhance understanding by incorporating additional features or advanced analytical approaches, as the unpredictability in care actions, while challenging to model, may reflect the system's ability to provide diverse and adaptive care responses.
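A hedged sketch of this validation step is shown below; the feature set is an assumption, since the text does not list which simulation variables served as predictors.

```python
# Validation sketch; the predictors and placeholder data are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
care_actions = rng.uniform(-1, 1, 1000)                # placeholder simulation output
X = np.column_stack([np.arange(1000),                  # iteration index
                     rng.random(1000),                 # e.g., perception signal
                     rng.integers(0, 2, 1000)])        # e.g., collapse indicator
y = care_actions

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)
pred = model.predict(X)

print("MSE:", mean_squared_error(y, pred))
print("R^2:", r2_score(y, pred))
print("CV R^2:", cross_val_score(model, X, y, cv=5, scoring="r2").mean())
print("Feature importances:", model.feature_importances_)
```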
In this sonification (Figure 12), care actions are mapped to frequencies, with higher actions corresponding to higher pitches and each data point represented by a 50-millisecond tone. Wave function collapses are marked by distinct 200-millisecond tones at 3000 Hz for easy identification. The accompanying visual representation shows care actions over time, with red dots indicating wave function collapses. The 50-second sonification includes 195 collapses, averaging 3.90 per second, while the average care action is 0.2679 with a standard deviation of 0.4111, reflecting the system's variability. This auditory representation provides a unique perspective on data patterns, particularly the quantum-like collapses, and, combined with the visual elements, offers a multisensory approach to data interpretation that may reveal insights not readily apparent through traditional analysis methods.
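A sketch of this sonification procedure follows; the pitch range for care actions and the placeholder data are assumptions.

```python
# Care-action sonification sketch; pitch range and placeholder data are assumptions.
import random
from pydub import AudioSegment
from pydub.generators import Sine

def action_to_freq(action, lo=200.0, hi=2000.0):
    """Map a care action in [-1, 1] to a pitch in [lo, hi] Hz."""
    return lo + (action + 1.0) / 2.0 * (hi - lo)

care_actions = [random.uniform(-1, 1) for _ in range(1000)]   # placeholder data
collapses = [random.random() < 0.2 for _ in range(1000)]      # placeholder markers

audio = AudioSegment.silent(duration=0)
for action, collapsed in zip(care_actions, collapses):
    audio += Sine(action_to_freq(action)).to_audio_segment(duration=50)   # 50 ms tone
    if collapsed:
        audio += Sine(3000).to_audio_segment(duration=200)                # collapse marker
audio.export("care_action_sonification.mp3", format="mp3")
```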
The waveform of Figure 4 displays larger amplitude variations and more frequent changes, reflecting a complex and varied sound pattern. In contrast, Figure 8 shows reduced amplitude variations, indicating a more stabilized sound pattern. Finally, the waveform of Figure 12 exhibits the smallest amplitude variations and appears more uniform, representing the most stable and consistent sound pattern among the three.

5. Discussion

Caring practice is rooted in caring science, which focuses on theory development in nursing. However, there is a lack of recognition regarding permissible interactions between humans and nonhumans. Nursing robotics should be re-conceptualized within caring science, viewing robots as intelligent machines that can enhance caring rather than compete with human caregivers. A symmetrical perspective is proposed, emphasizing the positive contributions of robotics to nursing and moving away from the distrustful view of robots as mere intelligent machines that spread misinformation. The focus should shift towards a non-chimeric relationship, acknowledging that robot sophistication evolves with technology, creating new possibilities for the future.
The TRETON model by Tanioka [10] asserts that compassionate care in healthcare robots is intentional when it facilitates communication between agents, human or nonhuman, arising from their encounters. ‘Agent’ refers to any entity capable of purposeful communication, with caring expressions prompting further interaction. Robots demonstrate intentionality through advanced cognitive processing, adapting programming based on signal processing to exhibit an “intent to care” through emergent learning behaviors. This allows them to transform unstructured data into meaningful knowledge about individuals within their social and cultural contexts, preventing them from being seen merely as mechanical entities and emphasizing the role of AI in their development.
Human cognition consists of 70% recognition of emergent phenomena, 20% representation/modeling of connectivity, and 10% data [55]. This indicates that AI should be developed with minimalist coding and robust signal processing for autonomous behavior. The next generation of AI must perform deductive, inductive, and abductive reasoning, allowing robots to function as compassionate agents in healthcare rather than mere extensions of physical care. In this context, nurses will transition to roles as technology operators and developers, while patients will actively participate in the evolution of robotics. The interaction between humans and robots should cultivate compassion through “intra-action” [42], which also supports effective communication. For robots to be perceived as caring rather than mechanistic, healthcare systems must integrate them within human networks, focusing on care outcomes instead of treating robots as tools for repetitive tasks [56].
Figure 2 and Figure 6 present distinct approaches to healthcare communication and management, each with implications for effective communication, system agility, and compassionate caring. In this analysis, “effective communication” is defined as the ability to convey information clearly and accurately, ensuring all parties (sender and receiver) understand the message as intended. “System agility” refers to the healthcare system’s capacity to adapt quickly and efficiently to changes, challenges, and new information while maintaining operational effectiveness. “Compassionate caring” is the provision of healthcare that is empathetic, patient-centered, and responsive to patients' emotional and psychological needs. Figure 2 features a structured and streamlined approach, offering advantages such as enhanced clarity in communication and quick access to essential information, potentially improving decision-making speed and system agility. However, it may have disadvantages like limited flexibility and difficulty in conveying complex information, which might impact compassionate care. In contrast, Figure 6 presents a more dynamic and interactive design, facilitating richer information exchange and greater adaptability to various scenarios, potentially enhancing system agility and supporting personalized, compassionate care. Nevertheless, its complexity could lead to information overload and workflow inconsistency. The effectiveness of each approach ultimately depends on the specific needs and context of the healthcare setting.
Compassionate care is emerging with humanoid robots, which have been conceptualized as caring entities from a philosophical perspective [56]. These robots possess physical and cognitive embodiment, characterized by various attributes [57]. Tanioka [10] highlighted the role of intermediation in robot caring, guiding their adoption and reinvention [58]. The proposal for Level 4 robotics in healthcare aims for autonomy through advanced AI and futuristic microsystems [59,60]. As robots develop intuitive signal processing, they may achieve cognitive capabilities akin to human intelligence. This shift encourages healthcare researchers to view nursing robots as vital components of compassionate care rather than mere extensions of human caregivers [61].
Figure 10 optimizes previous communication models by integrating several advanced features. It incorporates adaptive learning, as indicated by its final learning rate, allowing it to improve performance over time based on interactions and outcomes. With a substantial memory capacity, the model can retain and utilize past information, leading to more personalized and context-aware care. Its probabilistic approach, demonstrated by the collapse probability, enables the model to handle uncertainty and probabilistic outcomes, reflecting the realistic nature of healthcare scenarios. The dynamic behavior of care actions, which fluctuate over time, shows the model's ability to adapt responses based on changing conditions or needs. Additionally, the model appears to be inspired by quantum concepts, as suggested by the presence of wave function collapse points, allowing for more complex state representations and decision-making processes. Continuous optimization is evident as the model operates over extended periods, adjusting its actions iteratively. Furthermore, the model’s ability to handle negative outcomes, as seen in the occasional negative care action values, indicates its capacity to account for and respond to adverse outcomes or setbacks in the caring process. These features collectively create a more flexible, responsive, and realistic simulation of the caring process in healthcare settings, addressing limitations of previous models.
Data sonification reveals a clear convergence in tones and patterns from Figure 4 to Figure 12. Figure 4 shows the highest mean (0.0070) and standard deviation (0.5075), indicating complex, varied sound patterns. Figure 8 exhibits reduced mean (0.0027) and standard deviation (0.3239), suggesting stabilization. Figure 12 has the lowest mean (0.0007) and standard deviation (0.2552), reflecting a consistent, uniform sound pattern. The 90% decrease in mean value and 50% reduction in standard deviation from Figure 4 to Figure 12 indicate convergence towards lower frequencies and more stable patterns. This progression suggests that data sonification represents an increasingly coherent and effective system, possibly mirroring the evolution towards optimization. (Available at https://github.com/jphernandezrn/Data-Sonification-Human-Robot-Interaction.)
Healthcare professionals will eventually recognize that ‘bot’ technologies can simulate compassionate care through repetitive human interactions, with the effectiveness of this approach depending on the motives behind their implementation. It is crucial to acknowledge the dimensions of compassionate care provided by healthcare robots, viewing them not merely as ‘automatons’ or mechanical ‘automations,’ but as integral to the continuity of nursing practice, grounded in the science and art of nursing. This perspective is particularly relevant in long-term and collaborative care settings, where robots can serve as compassionate companions. In palliative care, fostering trust in robots to express care poses challenges, especially when restructuring conservative views on robot autonomy. The future of living with humanoid robots necessitates rethinking their roles in healthcare, emphasizing their human-like expressions of compassion. As Locsin et al. [56] noted, genuine expressions of humanness from intelligent, caring nonhuman entities can be realized, legitimizing compassionate care and enhancing the role of humanoid robots in nursing practice.

6. Conclusions

The emergence of advanced robotics in healthcare focuses on the feasibility of robot-replicated care and fostering interdisciplinary knowledge for the advancement of human health and medical technologies. This requires translating design concepts from communication science and quantum computing to represent compassionate care in robotics meaningfully. A balanced approach that combines traditional humanistic values with modern posthumanist healthcare is recommended while research continues to further advance humanoid robotics in enhancing communication and humanizing robot capabilities in caregiving contexts. The deterministic nature of “caring” in healthcare technology must be redefined to reflect its complexities. Evaluating the impact of these technologies on implementation norms is essential. To address skepticism about ethical concerns, reliability, and the safety of humanoid robots in caregiving, harmonization through pluralistic agreements is crucial. Open discussions among stakeholders will ensure these innovations align with the values of compassionate care and effectively meet the needs of those they serve.

Author Contributions

Conceptualization, J.P.T.H.; methodology, J.P.T.H.; formal analysis, J.P.T.H.; investigation, J.P.T.H.; writing—original draft preparation, J.P.T.H.; writing—review and editing, J.P.T.H.; project administration, J.P.T.H. The author has read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Code Availability Statement

The computer codes will be made available by the author on request.

Acknowledgments

The author would like to thank Dr. Rozzano C. Locsin (RN, PhD, FAAN), Professor Emeritus at the Christine E. Lynn College of Nursing at Florida Atlantic University in Boca Raton, Florida, USA, for mentoring the author back in 2018, as well as Philip Van Peel for formatting the manuscript.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Nakano, Y.; Tanioka, T.; Yokotani, T.; Ito, H.; Miyagawa, M.; Yasuhara, Y.; Betriana, F.; Locsin, R. Nurses' perception regarding patient safety climate and quality of health care in general hospitals in Japan. J. Nurs. Manag. 2021, 29, 749–758. [Google Scholar] [CrossRef]
  2. Tanioka, T.; Yokotani, T.; Tanioka, R.; Betriana, F.; Matsumoto, K.; Locsin, R.; Zhao, Y.; Osaka, K.; Miyagawa, M.; Schoenhofer, S. Development issues of healthcare robots: Compassionate communication for older adults with dementia. Int. J. Environ. Res. Public Health 2021, 18, 4538. [Google Scholar] [CrossRef]
  3. Osaka, K.; Tanioka, R.; Betriana, F.; Tanioka, T.; Kai, Y.; Locsin, R.C. Robot therapy program for patients with dementia: Its framework and effectiveness. In Information Systems-Intelligent Information Processing Systems; IntechOpen, 2021. [Google Scholar] [CrossRef]
  4. Griffith, T.D.; Hubbard Jr, J.E. System identification methods for dynamic models of brain activity. Biomed. Signal Process. Control 2021, 68, 1–10. [Google Scholar] [CrossRef]
  5. Li, A.X.; Florendo, M.; Miller, L.E.; Ishiguro, H.; Saygin, A.P. Robot Form and Motion Influences Social Attention. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction; Association for Computing Machinery (ACM) Digital Library, 2015; pp. 43–50. [Google Scholar] [CrossRef]
  6. Fields, C. The bicentennial man; 2011. https://sites.google.com/a/depauw.edu/the-bicentennial-man/movie-analysis.
  7. Rogers, M.E. Nursing science and the space age. Nurs. Sci. Q. 1992, 5, 27–34. https://moscow.sci-hub.st/3017/1e138e0d0fdaa8d9173640a218905cc2/rogers1992.pdf. [CrossRef] [PubMed]
  8. Baumann, S.L.; Wright, S.G.; Settecase-Wu, C. A science of unitary human beings perspective of global health nursing. Nurs. Sci. Q. 2014, 27, 324–328. https://www.researchgate.net/publication/266086499_A_Science_of_Unitary_Human_Beings_Perspective_of_Global_Health_Nursing. [CrossRef] [PubMed]
  9. Locsin, R.C. The co-existence of technology and caring in the theory of technological competency as caring in nursing. J Med Invest 2017, 64, 160–164. [Google Scholar] [CrossRef]
  10. Tanioka, T. The development of the transactive relationship theory of nursing (TRETON): A nursing engagement model for persons and humanoid nursing robots. Int. J. Nurs. Pract. 2017, 4, 1–8. [Google Scholar] [CrossRef]
  11. Pepito, J.A.; Locsin, R. Can nurses remain relevant in a technologically advanced future? Int. J. Nurs. Sci. 2019, 6, 106–110. [Google Scholar] [CrossRef] [PubMed]
  12. Tanioka, T.; Osaka, K.; Locsin, R.; Yasuhara, Y.; Ito, H. Recommended design and direction of development for humanoid nursing robots’ perspective from nursing researchers. ICA 2017, 8, 96–110. [Google Scholar] [CrossRef]
  13. Tanioka, R.; Sugimoto, H.; Yasuhara, Y.; Ito, H.; Osaka, K.; Zhao, Y.; Kai, Y.; Locsin, R.; Tanioka, T. Characteristics of transactive relationship phenomena among older adults, care workers as intermediaries, and the pepper robot with care prevention gymnastics exercises. J Med Invest 2019, 66, 46–49. [Google Scholar] [CrossRef]
  14. Tanioka, R.; Sugimoto, H.; Yasuhara, Y.; Ito, H.; Osaka, K.; Zhao, Y.; Kai, Y.; Locsin, R.; Tanioka, T. Characteristics of transactive relationship phenomena among older adults, care workers as intermediaries, and the Pepper robot with care prevention gymnastics exercises. J Med Invest 2019, 66, 46–49. [Google Scholar] [CrossRef]
  15. Tanioka, T.; Yasuhara, Y.; Dino, M.J.S.; Kai, Y.; Locsin, R.C.; Schoenhofer, S.O. Disruptive engagements with technologies, robotics, and caring: Advancing the transactive relationship theory of nursing. Nurs. Adm. Q. 2019, 43, 313–321. [Google Scholar] [CrossRef]
  16. Cano, S.; Díaz-Arancibia, J.; Arango-López, J.; Libreros, J.E.; García, M. Design path for a social robot for emotional communication for children with autism spectrum disorder (ASD). J. Sens. 2023, 23, 5291. [Google Scholar] [CrossRef] [PubMed]
  17. Trainum, K.; Tunis, R.; Xie, B.; Hauser, E. Robots in assisted living facilities: Scoping review. JMIR Aging 2023, 6, e42652. [Google Scholar] [CrossRef] [PubMed]
  18. Osaka, K.; Sugimoto, H.; Tanioka, T.; Yasuhara, Y.; Locsin, R.; Zhao, Y.; Okuda, K.; Saito, K. Characteristics of a transactive phenomenon in relationships among older adults with dementia, nurses as intermediaries, and communication robot. ICA 2017, 8, 111. http://www.scirp.org/journal/PaperInformation.aspx?PaperID=76520&#abstract. [CrossRef]
  19. Miyagawa, M.; Yasuhara, Y.; Tanioka, T.; Locsin, R.; Kongsuwan, W.; Catangui, E.; Matsumoto, K. The optimization of humanoid robot’s dialog in improving communication between humanoid robot and older adults. ICA 2019, 10, 118–127. [Google Scholar] [CrossRef]
  20. Hung, L.; Liu, C.; Woldum, E.; Au-Yeung, A.; Berndt, A.; Wallsworth, C.; Horne, N.; Gregorio, M.; Mann, J.; Chaudhury, H. The benefits of and barriers to using a social robot PARO in care settings: A scoping review. BMC Geriatr. 2019, 19, 1–10. [Google Scholar] [CrossRef] [PubMed]
  21. Hung, L.; Wong, J.; Smith, C.; Berndt, A.; Gregorio, M.; Horne, N.; Jackson, L.; Mann, J.; Wada, M.; Young, E. Facilitators and barriers to using telepresence robots in aged care settings: A scoping review. JRATE 2022, 9, 20556683211072385. [Google Scholar] [CrossRef]
  22. Abdi, J.; Al-Hindawi, A.; Ng, T.; Vizcaychipi, M.P. Scoping review on the use of socially assistive robot technology in elderly care. BMJ Open 2018, 8, e018815. https://bmjopen.bmj.com/content/8/2/e018815.
  23. Guemghar, I.; Pires de Oliveira Padilha, P.; Abdel-Baki, A.; Jutras-Aswad, D.; Paquette, J.; Pomey, M.P. Social robot interventions in mental health care and their outcomes, barriers, and facilitators: scoping review. JMIR Ment. Health 2022, 9, e36094. [Google Scholar] [CrossRef]
  24. David, D.; Thérouanne, P.; Milhabet, I. The acceptability of social robots: A scoping review of the recent literature. Comput. Hum. Behav. 2022, 107419. [Google Scholar] [CrossRef]
  25. Betriana, F.; Tanioka, T.; Osaka, K.; Kawai, C.; Yasuhara, Y.; Locsin, R.C. Interactions between healthcare robots and older people in Japan: A qualitative descriptive analysis study. Japan J. Nurs. Sci. 2021, 18, e12409. [Google Scholar] [CrossRef]
  26. Betriana, F.; Tanioka, R.; Gunawan, J.; Locsin, R.C. Healthcare robots and human generations: Consequences for nursing and healthcare. Collegian 2022, 29, 767–773. [Google Scholar] [CrossRef]
  27. Hurst, N.; Clabaugh, C.; Baynes, R.; Cohn, J.; Mitroff, D.; Scherer, S. Social and emotional skills training with embodied Moxie. arXiv 2020, arXiv:2004.12962. [Google Scholar] [CrossRef]
  28. Triantafyllidis, A.; Alexiadis, A.; Votis, K.; Tzovaras, D. Social robot interventions for child healthcare: A systematic review of the literature. Comput. Methods Programs Biomed. Update 2023, 100108. [Google Scholar] [CrossRef]
  29. Sætra, H.S. The foundations of a policy for the use of social robots in care. Technol. Soc. J 2020, 63, 101383. [Google Scholar] [CrossRef]
  30. Locsin, R.C.; Ito, H. Can humanoid nurse robots replace human nurses. J. Nurs. 2018, 5, 1–6. [Google Scholar] [CrossRef]
  31. Locsin, R.C.; Soriano, G.P.; Juntasopeepun, P.; Kunaviktikul, W.; Evangelista, L.S. Social transformation and social isolation of older adults: Digital technologies, nursing, healthcare. Collegian 2021, 28, 551–558. [Google Scholar] [CrossRef]
  32. Kipnis, E.; McLeay, F.; Grimes, A.; de Saille, S.; Potter, S. Service robots in long-term care: A consumer-centric view. J. Serv. Res. 2022, 25, 667–685. [Google Scholar] [CrossRef]
  33. Persson, M.; Redmalm, D.; Iversen, C. Caregivers’ use of robots and their effect on work environment–A scoping review. JTHS 2022, 40, 251–277. [Google Scholar] [CrossRef]
  34. González-González, C.S.; Violant-Holz, V.; Gil-Iranzo, R.M. Social robots in hospitals: A systematic review. Appl. Sci. 2021, 11, 5976. [Google Scholar] [CrossRef]
  35. Dawe, J.; Sutherland, C.; Barco, A.; Broadbent, E. Can social robots help children in healthcare contexts? A scoping review. BMJ Paediatr. Open 2019, 3. [Google Scholar] [CrossRef]
  36. Hernandez, J.P.T. Network diffusion and technology acceptance of a nurse chatbot for chronic disease self-management support: A theoretical perspective. J. Med. Invest. 2019, 66(1.2), 24–30. [Google Scholar] [CrossRef]
  37. Morgan, A.A.; Abdi, J.; Syed, M.A.; Kohen, G.E.; Barlow, P.; Vizcaychipi, M.P. Robots in healthcare: A scoping review. Current Robotics Reports 2022, 3, 271–280. [Google Scholar] [CrossRef]
  38. Soriano, G.P.; Yasuhara, Y.; Ito, H.; Matsumoto, K.; Osaka, K.; Kai, Y.; Locsin, R.; Schoenhofer, S.; Tanioka, T. Robots and robotics in nursing. Healthcare 2022, 10, 1571. [Google Scholar] [CrossRef] [PubMed]
  39. Ohneberg, C.; Stöbich, N.; Warmbein, A.; Rathgeber, I.; Mehler-Klamt, A.C.; Fischer, U.; Eberl, I. Assistive robotic systems in nursing care: A scoping review. BMC Nurs. 2023, 22, 1–15. [Google Scholar] [CrossRef] [PubMed]
  40. Kyrarini, M.; Lygerakis, F.; Rajavenkatanarayanan, A.; Sevastopoulos, C.; Nambiappan, H.R.; Chaitanya, K.K.; Ashwin Ramesh Babu, A.R.; Mathew, J.; Makedon, F. A survey of robots in healthcare. Technologies 2021, 9, 8. [Google Scholar] [CrossRef]
  41. Kitt, E.R.; Crossman, M.K.; Matijczak, A.; Burns, G.B.; Kazdin, A.E. Evaluating the role of a socially assistive robot in children’s mental health care. J. Child Fam. Stud. 2021, 30, 1722–1735. [Google Scholar] [CrossRef] [PubMed]
  42. Revelles-Benavente, B. Material knowledge: Intra-acting van der Tuin’s new materialism with Barad’s agential realism. Enrahonar. An International Journal of Theoretical and Practical Reason 2018, 60, 75–91. https://ddd.uab.cat/pub/enrahonar/enrahonar_a2018v60/enrahonar_a2018v60p75.pdf.
  43. Ahmad, M.; Mubin, O.; Orlando, J. A systematic review of adaptivity in human-robot interaction. MTI 2017, 3, 1–25. [Google Scholar] [CrossRef]
  44. Campa, R. The rise of social robots: A review of the recent literature. J. Evol. Tech. 2016, 26, 106–113. https://jetpress.org/v26.1/campa.pdf. [CrossRef]
  45. Papadopoulos, I.; Koulouglioti, C.; Lazzarino, R.; Ali, S. Enablers and barriers to the implementation of socially assistive humanoid robots in health and social care: A systematic review. BMJ Open 2020, 10, 1–13. [Google Scholar] [CrossRef] [PubMed]
  46. Mou, Y.; Shi, C.; Shen, T.; Xu, K. A systematic review of the personality of robot: Mapping its conceptualization, operationalization, contextualization and effects. Int. J. Hum. Comput. 2019, 36, 591–605. [Google Scholar] [CrossRef]
  47. Rossi, S.; Conti, D.; Garramone, F.; Santangelo, G.; Staffa, M.; Varrasi, S.; Di Nuovo, A. The role of personality factors and empathy in the acceptance and performance of a social robot for psychometric evaluations. Robotics 2020, 9, 1–19. [Google Scholar] [CrossRef]
  48. Turner, C.K. A principle of intentionality. Front. Psychol. 2017, 8, 1–10. [Google Scholar] [CrossRef] [PubMed]
  49. Clark, C.S. Watson’s human caring theory: Pertinent transpersonal and humanities concepts for educators. Humanities 2016, 5, 1–12. [Google Scholar] [CrossRef]
  50. Tononi, G. Consciousness as integrated information: A provisional manifesto. Biol. Bull. 2008, 215, 216–242. [Google Scholar] [CrossRef]
  51. Betriana, F.; Osaka, K.; Matsumoto, K.; Tanioka, T.; Locsin, R. Relating Mori’s Uncanny Valley in generating conversations with artificial affective communication and lateral language processing. Nurs. Philos. 2020, 22, e12322. [Google Scholar] [CrossRef]
  52. Pepito, J.; Ito, H.; Betriana, F.; Tanioka, T.; Locsin, R. Intelligent humanoid robots expressing artificial humanlike empathy in nursing situations. Nurs. Philos. 2020, 21, e12318. [Google Scholar] [CrossRef]
  53. Bishop, J.M.; Nasuto, J.S. Second-order cybernetics and enactive perception. Kybernetes 2005, 34, 1309–1320. http://www.doc.gold.ac.uk/~mas02mb/Selected%20Papers/2005%20Kybernetes.pdf. [CrossRef]
  54. Yao, J.T.; Vasilakos, A.V.; Pedrycz, W. Granular computing: Perspectives and challenges. IEEE Trans. Cybern. 2013, 43, 1977–1989. https://ieeexplore.ieee.org/document/6479257. [CrossRef]
  55. Downes, S. Becoming Connected [Video]. 25 Oct 2017. https://youtu.be/n69kCVWn2D8.
  56. Locsin, R.C.; Ito, H.; Tanioka, T.; Yasuhara, Y.; Osaka, K.; Schoenhofer, S.O. Humanoid nurse robots as caring entities: A revolutionary probability. Int. J. Nurs. Stud. 2018, 3, 146–154. [Google Scholar] [CrossRef]
  57. Duffy, B. Robots social embodiment in autonomous mobile robotics. IJARS 2004, 1, 155–170. [Google Scholar] [CrossRef]
  58. van Wynsberghe, A.; Li, S. A paradigm shift for robot ethics: From HRI to human-robot-system interaction (HRSI). Medicolegal and Bioethics 2019, 9, 11–21. [Google Scholar] [CrossRef]
  59. Goi, E.; Zhang, Q.; Chen, X.; Luan, H.; Gu, M. Perspective on photonic memristive neuromorphic computing. PhotoniX 2020, 1, 1–26. [Google Scholar] [CrossRef]
  60. Artificial Intelligence Board of America. Neuromorphic computing: The next-level artificial intelligence [Blog post]. 2020. Available online: https://www.artiba.org/blog/neuromorphic-computing-the-next-level-artificial-intelligence (accessed on 27 May 2024).
  61. Giger, J.C.; Picarra, N.; Alves-Oliveira, P.R.; Arriaga, P. Humanization of robots: Is it really such a good idea? Hum. Behav. Emerg. 2019, 1, 111–123. [Google Scholar] [CrossRef]
Figure 1. Interpretation of Tanioka’s [10] model according to cybernetic communication.
Figure 13. Spectrogram comparison of the three audio files.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.