4. Results
The model illustrates a three-level healthcare interaction framework that utilizes "cybernetic communication" (i.e., the Shannon–Weaver model of communication in a circular, self-regulating process) to improve healthcare delivery (Figure 1). It involves three entities: the end-user (patient/client), the intermediary (humanoid robot), and the provider (healthcare system). Communication flows are depicted with arrows, indicating the direction of information exchange. A solid black arrow represents the flow from the healthcare system to the humanoid robot and then to the patient/client. Feedback loops are shown by a red dashed arrow ("Feedback 1") from the patient/client to the robot and a black dashed arrow ("Feedback 2") from the robot to the healthcare system. This model emphasizes the humanoid robot's role as an intermediary, facilitating two-way feedback and improving communication between patients and providers. By integrating technology, the model aims to offer more personalized and accessible healthcare services, highlighting the potential of humanoid robots in modern healthcare systems.
This model can be represented by the equation X_{t+1} = clip(X_t + α(Y_t − X_t) + ϵ). Here, X and Y are interacting states (patient, robot, or system), t is the time step, and ϵ is a random variable. The parameter α is 0.1 for state updates and 0.2 for feedback. The clip function, clip(x) = min(max(x, 0), 1), keeps state values within [0, 1]. This equation captures the dynamics of state updates and feedback, combining deterministic interactions and stochastic variability to simulate healthcare interactions.
This healthcare interaction model simulation (Figure 2), based on a three-level cybernetic communication system (Figure 1), was executed over 100 iterations to explore the dynamics between a patient, a humanoid robot intermediary, and a healthcare system. The model simulates information flow from the healthcare system to the robot, communication from the robot to the patient, and feedback loops between the patient and the robot, as well as from the robot back to the healthcare system. After 100 iterations, the final states were: patient at 0.4929, robot at 0.2875, and healthcare system at 0.8390. These results indicate an improvement in the healthcare system's state, likely due to effective feedback, while the robot's state decreased, suggesting potential stress or demand on the intermediary. The patient's state remained relatively stable. The accompanying graph visualizes these changes, highlighting the complex interactions within the system. This simulation reveals the effects of cybernetic communication and feedback loops in a technology-enhanced healthcare model.
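The update rule and simulation loop described above can be sketched as follows. This is a minimal illustration, not the authors' code: the noise scale (±0.05), the initial states (0.5), and the exact ordering of forward and feedback updates are assumptions not fixed by the text.

```python
import random

def clip(x):
    """Keep a state value inside the normalized range [0, 1]."""
    return min(max(x, 0.0), 1.0)

def update(x, y, alpha, eps_scale=0.05):
    """One interaction step: x moves toward y with rate alpha, plus noise."""
    eps = random.uniform(-eps_scale, eps_scale)
    return clip(x + alpha * (y - x) + eps)

def simulate(iterations=100, seed=0):
    """Run the three-entity cybernetic communication loop."""
    random.seed(seed)
    patient, robot, system = 0.5, 0.5, 0.5   # assumed initial states
    for _ in range(iterations):
        # Forward flow: system -> robot -> patient (alpha = 0.1)
        robot = update(robot, system, 0.1)
        patient = update(patient, robot, 0.1)
        # Feedback: patient -> robot, robot -> system (alpha = 0.2)
        robot = update(robot, patient, 0.2)
        system = update(system, robot, 0.2)
    return patient, robot, system

patient, robot, system = simulate()
```

Because the noise and ordering are assumed, final values will differ from the reported 0.4929/0.2875/0.8390; the sketch only shows the mechanism.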
This validation process (Figure 3) spanned 100 iterations and was based on the interactions between a patient, a humanoid robot intermediary, and a healthcare system, including information flow and feedback loops (Figure 2). The final states after simulation were as follows: patient (0.4627), robot (0.7121), and healthcare system (0.4120). The model's accuracy was assessed using MSE against target states of 0.5 for all entities, yielding MSE values of patient (0.0817), robot (0.0806), and system (0.0845). These low MSE values indicate that the model's behavior aligns well with the expected targets. The simulation graph visualizes state evolutions, demonstrating dynamic interactions and feedback effects. This validated model provides a quantitative measure of the stability and effectiveness of the cybernetic communication system in healthcare, enhancing its potential for real-world application assessment.
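The MSE check against a fixed target of 0.5 can be computed directly from a state trajectory. A sketch, assuming the error is averaged over the full history of each entity's state (the trajectory values below are illustrative, not the paper's data):

```python
def mse(history, target=0.5):
    """Mean squared error of a state trajectory against a fixed target."""
    return sum((s - target) ** 2 for s in history) / len(history)

# Illustrative trajectory only
trajectory = [0.5, 0.55, 0.45, 0.6, 0.4]
print(round(mse(trajectory), 4))  # → 0.005
```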
The sonification process (Figure 4) maps entity states to specific frequency ranges: patient (200-800 Hz), robot (400-1000 Hz), and healthcare system (600-1200 Hz). A dissonance tone (100-300 Hz) highlights misalignment. Tones for each entity and the dissonance are created, overlaid, and concatenated. Final states are: patient at 0.1684 (301.07 Hz), robot at 0.4441 (666.46 Hz), healthcare system at 0.4011 (840.67 Hz), and dissonance at 210.26 Hz. The dissonance tone is emphasized at -5 dB, while entity tones are at -10 dB. This method enhances the audible representation of system dynamics, allowing listeners to perceive alignment and misalignment through harmonious and dissonant sounds.
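The reported frequencies are consistent with a linear mapping of each state onto its band (e.g., the robot's 0.4441 maps to 400 + 0.4441 × 600 = 666.46 Hz). A sketch of the mapping and tone generation, where the sample rate and tone duration are assumptions:

```python
import numpy as np

def state_to_freq(state, low, high):
    """Linearly map a normalized state in [0, 1] to a frequency band (Hz)."""
    return low + state * (high - low)

def tone(freq, duration=0.1, sr=22050, gain_db=-10.0):
    """Generate a sine tone at the given frequency with a simple dB gain."""
    t = np.linspace(0, duration, int(sr * duration), endpoint=False)
    amp = 10 ** (gain_db / 20)
    return amp * np.sin(2 * np.pi * freq * t)

print(round(state_to_freq(0.4441, 400, 1000), 2))  # → 666.46 (robot band)
```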
This illustrates a Level 4 relationship in healthcare (Figure 5), depicting dynamic communication among a patient/client, a humanoid robot, and the healthcare system. The central diagram features three interconnected circles labeled "Patient/Client," "Humanoid Robot," and "Healthcare System," linked by lines that symbolize interactions. A green squiggly line between the patient/client and the humanoid robot represents dynamic communication, while the connection between the robot and the healthcare system is labeled "Entanglement," indicating a deep interaction. The concept of "superpositioning" is shown by a line looping from the humanoid robot, suggesting overlapping roles or states. The accompanying text explains that this relationship is based on "intra-actions" within a compassionate network, analogous to variable quantum energy states during communication. It emphasizes shared capacities and adaptive interactions, with roles shifting without loss of domain. Annotations in the diagram provide additional details. Overall, the diagram serves to illustrate advanced concepts of interaction and communication in a technology-integrated healthcare setting.
This model can be represented by the equation X_{t+1} = clip(E·X_t + C·ϵ1 + O·(Y_t − X_t)), where X represents the state (patient, robot, or system) at time step t and Y the state of an interacting entity. This equation incorporates three key elements of the simulation: "dynamic communication," "entanglement," and "superpositioning." The communication intensity C and its effect ϵ1 are drawn from uniform distributions U(0, 1) and U(−0.1, 0.1), respectively. The entanglement factor E is sampled from U(0.8, 1.2), while the superpositioning overlap O comes from U(0, 0.2). The clip function, defined as clip(x) = min(max(x, 0), 1), ensures that all state values remain within the normalized range [0, 1]. This mathematical formulation captures the complex interactions and stochastic nature of the healthcare simulation, providing a framework for analyzing the dynamic relationships between patients, robotic assistants, and the healthcare system.
The graph (Figure 6) illustrates the states of the patient, humanoid robot, and healthcare system over 100 iterations, derived from a conceptual Level 4 system (Figure 5). This simulation captures the interactions among components, with each state evolving based on dynamic communication, entanglement, and superpositioning. The initial states for the patient, robot, and system are set to 0.5, reflecting their health and performance levels. Communication intensity ranges from 0 to 1, the entanglement factor varies between 0.8 and 1.2, and the overlap factor ranges from 0 to 0.2, simulating the effects of superpositioning. The simulation tracks the history of states to analyze their dynamics, providing a framework for understanding complex interactions in a technology-integrated healthcare setting.
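The Level 4 loop can be sketched from the stated parameter ranges. This is an assumed reconstruction: each state is scaled by an entanglement factor, perturbed by communication noise, and mixed with a partner state through the overlap term; the partner assignment (patient↔robot, robot↔system, system↔robot) is an assumption.

```python
import random

def clip(x):
    return min(max(x, 0.0), 1.0)

def level4_step(states, rng):
    """One iteration: communication noise, entanglement scaling, and
    superposition-style mixing with a partner state (assumed form)."""
    patient, robot, system = states
    partners = [robot, system, robot]  # assumed interaction partners
    new = []
    for x, y in zip(states, partners):
        C = rng.uniform(0, 1)          # communication intensity
        eps = rng.uniform(-0.1, 0.1)   # communication effect
        E = rng.uniform(0.8, 1.2)      # entanglement factor
        O = rng.uniform(0, 0.2)        # superpositioning overlap
        new.append(clip(E * x + C * eps + O * (y - x)))
    return new

rng = random.Random(42)
states = [0.5, 0.5, 0.5]               # stated initial states
history = [states]
for _ in range(100):
    states = level4_step(states, rng)
    history.append(states)
```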
This model validation (Figure 7) demonstrates the robustness and coherence of the healthcare model simulation, incorporating dynamic communication, entanglement, and superpositioning (Figure 6). All states remain within the expected range of [0, 1], with a state constraint value of 1.0000, confirming the model's adherence to logical constraints. A high correlation of 0.9801 between the robot and healthcare system states indicates strong entanglement, aligning with the theoretical framework. The minimal trends observed for patient (0.0009), robot (0.0008), and system (0.0008) states suggest stability without significant divergence over time. Variability in states is evident, with standard deviations of 0.0749 for the patient, 0.1038 for the robot, and 0.0979 for the system, indicating dynamic behavior without constant values. These results confirm that the model behaves as expected, maintaining state constraints, showing strong entanglement, and exhibiting appropriate variability and stability. The simulation thus provides a conceptual framework for understanding complex interactions in a healthcare setting that integrates technology and human care, capturing the essence of dynamic communication, quantum-like entanglement, and superpositioning of roles as described in the original diagram.
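The four reported checks (range constraint, robot-system correlation, per-state linear trend, and standard deviation) can be computed from a state history. A sketch, with a tiny illustrative trajectory in place of the paper's data:

```python
import numpy as np

def validate(history):
    """Range constraint, robot-system correlation, linear trends, and
    per-state standard deviations for a (iterations x 3) state history."""
    h = np.asarray(history)  # columns: patient, robot, system
    in_range = float(((h >= 0) & (h <= 1)).mean())        # 1.0 if all valid
    corr = np.corrcoef(h[:, 1], h[:, 2])[0, 1]            # robot vs. system
    t = np.arange(len(h))
    trends = [np.polyfit(t, h[:, i], 1)[0] for i in range(3)]  # slopes
    stds = h.std(axis=0)
    return in_range, corr, trends, stds

# Illustrative history: robot and system move together (perfect correlation)
history = [[0.5, 0.2, 0.2], [0.5, 0.4, 0.4], [0.5, 0.6, 0.6]]
in_range, corr, trends, stds = validate(history)
```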
This multisensory representation illustrates the complex interactions between a patient, a robot, and a healthcare system (Figure 8) over 100 iterations, incorporating dynamic communication, entanglement, and superposition. The resulting time series data was then transformed into an auditory experience by mapping state values to specific frequency ranges: patient (200-800 Hz), robot (300-900 Hz), system (400-1000 Hz), entanglement (100-300 Hz), and superposition (1000-1500 Hz). For each iteration, tones were generated and overlaid, creating a 10.00-second audio file that represents the system's evolving state. This audio was normalized to maximum volume for clarity. Complementing the sonification, a visual line plot was generated to illustrate the changing states over time. The process yielded key statistics: an average entanglement strength of 0.5457 and an average superposition strength of 0.4909, indicating significant quantum-like effects throughout the simulation. These auditory and visual outputs offer an intuitive understanding of the healthcare model's dynamics, allowing users to both "hear" and "see" the complex relationships and their evolution over time.
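The overlay-and-normalize step can be sketched as follows: 100 iterations of 0.1 s each yield the reported 10-second clip. The sample rate and per-iteration tone duration are assumptions; the frequency list per frame would come from the state-to-frequency mapping.

```python
import numpy as np

SR = 22050  # sample rate (assumed)

def overlay_iteration(freqs, duration=0.1):
    """Sum one sine tone per mapped frequency for a single iteration."""
    t = np.linspace(0, duration, int(SR * duration), endpoint=False)
    mix = np.zeros_like(t)
    for f in freqs:
        mix += np.sin(2 * np.pi * f * t)
    return mix

def render(frames):
    """Concatenate per-iteration mixes and normalize to peak amplitude 1."""
    audio = np.concatenate([overlay_iteration(f) for f in frames])
    peak = np.abs(audio).max()
    return audio / peak if peak > 0 else audio

# 100 iterations of 0.1 s each -> a 10-second clip (illustrative frequencies)
frames = [[300.0, 600.0, 800.0, 200.0, 1200.0]] * 100
audio = render(frames)
```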
Figure 9 can serve as the foundation for developing autonomous humanoid robots capable of providing compassionate care. The integration of these components—"Sensor Input and Perception,” “Cognition and Intentionality,” “Memory and Learning,” “Wave Function Collapse and Quantum-like Behavior,” and “Action and Adaptation,” along with ethical frameworks—enables the robot to handle complex decision-making processes, navigate the intricacies of human emotions and social interactions, and provide personalized and compassionate care capable of adapting to the changing needs and emotions of individuals.
Sensor Input and Perception: The system detects signals from both external and internal environments through sensors, filters out noise, and interprets these signals to understand the context and needs of the environment.
Cognition and Intentionality: The system applies cognitive algorithms to process the information stochastically and assemble it into a meaningful form that reflects real-world understanding. This stage is crucial for forming a prior intention to execute a program. In the intentionality phase, the system's response is valorized and adjusted based on the intention, or it is routed back for further cognitive processing when significant uncertainty remains.
Memory and Learning: The system incorporates a contiguous and circular memory, storing and retrieving interaction histories as unique experiences, which are used to inform future responses. The system also incorporates reinforcement learning with granular mapping to refine its responses.
Wave Function Collapse and Quantum-like Behavior: A key feature of the system is the wave function collapse, occurring at the entanglement stage, where the synthetic equivalent of human consciousness interacts with human agents. This interaction is contingent upon the complexity of the entanglement and involves cognitive algorithms that may be reversible or incidental to intentionality.
Action and Adaptation: The system’s (re)actions are driven by the intention, leading to the execution of the response. The system also outputs signals through effectors, recoding them with perturbations to simulate emergent exigencies. The process is dynamic and iterative, with feedback loops between perception, cognition, intentionality, and actions, allowing the system to continuously learn and adapt to provide compassionate care.
This model can be represented by the equation X_{t+1} = clip(E·X_t + C·ϵ + O·(Y_t − X_t)), where X denotes the state (patient, robot, or system) at time step t and Y the state of an interacting entity. This equation integrates three primary components: dynamic communication, entanglement, and superpositioning. The communication intensity C and its effect ϵ are drawn from uniform distributions U(0, 1) and U(−0.1, 0.1), respectively. The entanglement factor E is sampled from U(0.8, 1.2), while the superpositioning overlap O comes from U(0, 0.2). The clip function, defined as clip(x) = min(max(x, 0), 1), ensures that all state values remain within the normalized range [0, 1]. This formulation captures the complex interactions and stochastic nature of the simulation, providing a framework for analyzing the dynamic relationships between patients, robotic assistants, and the healthcare system.
The simulation (Figure 10) ran for 100 iterations, each representing a cycle of care provision. During each iteration, the system received signals from the environment, interpreted them with some noise, processed them through cognitive algorithms, and formed an intention. This intention could require further cognitive processing when uncertainty was high. The system then converted the intention into an action, ensuring it remained within reasonable bounds. A unique feature is the wave function collapse, which occasionally flips the intention, simulating quantum-like behavior. The system also updated its memory with each action and learned from past experiences, gradually decreasing its learning rate to reflect increased stability over time.
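The perception-cognition-intention-action loop can be sketched as below. The noise levels and the action-tempering form are assumptions; notably, a per-iteration learning-rate decay of 0.99 from an initial 0.1 reproduces the final learning rate of 0.0366 reported later (0.1 × 0.99¹⁰⁰ ≈ 0.0366).

```python
import random

def simulate_care(iterations=100, collapse_p=0.2, seed=1):
    """Sketch of the care cycle: noisy perception, bounded intention,
    probabilistic wave-function collapse, and a decaying learning rate."""
    rng = random.Random(seed)
    memory, actions, collapses = [], [], 0
    learning_rate = 0.1                              # assumed initial value
    for _ in range(iterations):
        signal = rng.uniform(-1, 1)                  # environment input
        perceived = signal + rng.gauss(0, 0.1)       # perception with noise
        intention = max(-1.0, min(1.0, perceived))   # cognitive bounding
        if rng.random() < collapse_p:                # quantum-like collapse
            intention = -intention                   # flip the intention
            collapses += 1
        action = intention * (1 - learning_rate)     # assumed tempering form
        memory.append(action)                        # update memory
        actions.append(action)
        learning_rate *= 0.99                        # gradual stabilization
    return actions, collapses, learning_rate

actions, collapses, lr = simulate_care()
```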
The output of the simulation shows a range of care actions, from strongly positive to strongly negative, reflecting the system's ability to respond to different situations with varied caring behaviors. This variability indicates the system's adaptability to changing circumstances or needs. The decreasing learning rate and full memory suggest that the system is accumulating experiences and becoming more stable, while still maintaining adaptability. The actions are distributed around zero, indicating a balance between different types of caring behaviors over time. This simulation demonstrates how an autonomous humanoid robot could potentially provide care with compassion, adapting its responses based on environmental inputs, cognitive processing, and learned experiences. The variability and adaptability shown in the simulation reflect the complex decision-making process outlined in the original diagram, incorporating elements of perception, cognition, intentionality, and learning.
The blue line in the graph (Figure 10) represents the care actions taken by the system over 100 iterations, with values ranging from -1 to 1, indicating various caring behaviors. Red dots mark instances of Wave Function Collapse, where the system's intention abruptly flips, reflecting quantum-like behavior. The simulation experienced 17 collapses, consistent with a set probability of 0.2, highlighting the system's unpredictability. The care actions show significant variability, demonstrating the system's adaptability and ability to provide a range of caring behaviors. The final learning rate of 0.0366 indicates increased stability over time, while the average care action of 0.3084 suggests a slight bias towards positive, supportive care. The standard deviation of 0.3894 underscores the diversity in responses, showcasing the system's capacity to adapt to various scenarios. The wave function collapses introduce unpredictability, enabling the system to explore new caring strategies and avoid local optima.
Values from -1 to 1 represent different caring behaviors. Positive values (0 to 1) indicate supportive actions, with higher values reflecting more active support, such as encouragement or comfort, essential for enhancing well-being. Negative values (-1 to 0) signify corrective behaviors, with values closer to -1 indicating assertive actions like setting boundaries or preventing harm. Neutral values near zero suggest balanced behaviors, such as monitoring without intervention, allowing individuals to maintain autonomy while having support available. Caring behaviors can be categorized by several dimensions: intensity (magnitude of the value), type (supportive vs. corrective), context (specific caregiving situations), and intended outcomes (enhancing well-being vs. preventing harm). Understanding these dimensions allows for more targeted and effective caregiving strategies in the proposed system design.
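These dimensions can be operationalized with a simple classifier over the action value. The neutral threshold of ±0.05 is a hypothetical choice for illustration, not a value given in the text:

```python
def categorize(action):
    """Classify a care action in [-1, 1] by type and intensity.
    The +/-0.05 neutral band is an assumed threshold."""
    if action > 0.05:
        kind = "supportive"      # encouragement, comfort
    elif action < -0.05:
        kind = "corrective"      # boundary-setting, harm prevention
    else:
        kind = "neutral"         # monitoring without intervention
    return kind, abs(action)     # (type, intensity)

print(categorize(0.8))   # → ('supportive', 0.8)
print(categorize(-0.6))  # → ('corrective', 0.6)
```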
Model validation (Figure 11) reveals the model's complex and unpredictable nature, as evidenced by the poor performance of a random forest regressor, which yields a negative R-squared score of -0.1584 and a high MSE of 0.2355. Cross-validation R-squared scores range from -0.56 to -0.20, averaging -0.3372, indicating unreliable predictions across data subsets. The simulation exhibits significant variability, with 195 wave function collapses in 1000 iterations, an average care action of 0.2679, and a standard deviation of 0.4109, suggesting that care actions emerge from intricate interactions among multiple components, including quantum-like collapses and learning from past experiences. This complexity mirrors the nuanced, context-dependent nature of human caregiving, positioning the system as more akin to a real-world caring entity than a deterministic model. Future research could enhance understanding by incorporating additional features or advanced analytical approaches, as the unpredictability in care actions, while challenging to model, may reflect the system's ability to provide diverse and adaptive care responses.
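A negative R-squared means the regressor predicts worse than simply using the mean of the observations, which is why it signals unpredictability here. The definition can be computed directly (the data below is illustrative, not the simulation's):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot.
    Negative when predictions are worse than the mean of y_true."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# A predictor that systematically misses the pattern scores below zero
y_true = [0.1, 0.9, 0.2, 0.8]
y_pred = [0.8, 0.2, 0.9, 0.1]
print(r_squared(y_true, y_pred))  # negative: worse than the mean predictor
```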
In this sonification (Figure 12), care actions are mapped to frequencies, with higher actions corresponding to higher pitches, and each data point represented by a 50-millisecond tone. Wave function collapses are marked by distinct 200-millisecond sounds at 3000 Hz for easy identification. An accompanying visual representation shows care actions over time, with red dots indicating wave function collapses. The 50-second sonification includes 195 collapses, averaging 3.90 per second, while the average care action is 0.2679 with a standard deviation of 0.4111, reflecting the system's variability. This auditory representation provides a unique perspective on data patterns, particularly the quantum-like collapses, and, combined with visual elements, offers a multi-sensory approach to data interpretation that may reveal insights not readily apparent through traditional analysis methods.
The waveform of Figure 4 displays larger amplitude variations and more frequent changes, reflecting a complex and varied sound pattern. In contrast, Figure 8 shows reduced amplitude variations, indicating a more stabilized sound pattern. Finally, the waveform of Figure 12 exhibits the smallest amplitude variations and appears more uniform, representing the most stable and consistent sound pattern among the three.
5. Discussion
Caring practice is rooted in caring science, which focuses on theory development in nursing. However, there is a lack of recognition regarding permissible interactions between humans and nonhumans. Nursing robotics should be re-conceptualized within caring science, viewing robots as intelligent machines that can enhance caring rather than compete with human caregivers. A symmetrical perspective is proposed, emphasizing the positive contributions of robotics to nursing and moving away from the distrustful view of robots as mere intelligent machines that spread misinformation. The focus should shift towards a non-chimeric relationship, acknowledging that robot sophistication evolves with technology, creating new possibilities for the future.
The TRETON model by Tanioka [10] asserts that compassionate care in healthcare robots is intentional when it facilitates communication between agents, human or nonhuman, arising from their encounters. 'Agent' refers to any entity capable of purposeful communication, with caring expressions prompting further interaction. Robots demonstrate intentionality through advanced cognitive processing, adapting programming based on signal processing to exhibit an "intent to care" through emergent learning behaviors. This allows them to transform unstructured data into meaningful knowledge about individuals within their social and cultural contexts, preventing them from being seen merely as mechanical entities and emphasizing the role of AI in their development.
Human cognition consists of 70% recognition of emergent phenomena, 20% representation/modeling of connectivity, and 10% data [55]. This indicates that AI should be developed with minimalist coding and robust signal processing for autonomous behavior. The next generation of AI must perform deductive, inductive, and abductive reasoning, allowing robots to function as compassionate agents in healthcare rather than mere extensions of physical care. In this context, nurses will transition to roles as technology operators and developers, while patients will actively participate in the evolution of robotics. The interaction between humans and robots should cultivate compassion through "intra-action" [42], which also supports effective communication. For robots to be perceived as caring rather than mechanistic, healthcare systems must integrate them within human networks, focusing on care outcomes instead of treating robots as tools for repetitive tasks [56].
Figure 2 and Figure 6 present distinct approaches to healthcare communication and management, each with implications for effective communication, system agility, and compassionate caring. In this analysis, "effective communication" is defined as the ability to convey information clearly and accurately, ensuring all parties (sender and receiver) understand the message as intended. "System agility" refers to the healthcare system's capacity to adapt quickly and efficiently to changes, challenges, and new information while maintaining operational effectiveness. "Compassionate caring" is the provision of healthcare that is empathetic, patient-centered, and responsive to patients' emotional and psychological needs.
Figure 2 features a structured and streamlined approach, offering advantages such as enhanced clarity in communication and quick access to essential information, potentially improving decision-making speed and system agility. However, it may have disadvantages like limited flexibility and difficulty in conveying complex information, which might impact compassionate care. In contrast, Figure 6 presents a more dynamic and interactive design, facilitating richer information exchange and greater adaptability to various scenarios, potentially enhancing system agility and supporting personalized, compassionate care. Nevertheless, its complexity could lead to information overload and workflow inconsistency. The effectiveness of each approach ultimately depends on the specific needs and context of the healthcare setting.
Compassionate care is emerging with humanoid robots, which have been conceptualized as caring entities from a philosophical perspective [56]. These robots possess physical and cognitive embodiment, characterized by various attributes [57]. Tanioka [10] highlighted the role of intermediation in robot caring, guiding their adoption and reinvention [58]. The proposal for Level 4 robotics in healthcare aims for autonomy through advanced AI and futuristic microsystems [59,60]. As robots develop intuitive signal processing, they may achieve cognitive capabilities akin to human intelligence. This shift encourages healthcare researchers to view nursing robots as vital components of compassionate care rather than mere extensions of human caregivers [61].
Figure 10 optimizes previous communication models by integrating several advanced features. It incorporates adaptive learning, as indicated by its final learning rate, allowing it to improve performance over time based on interactions and outcomes. With a substantial memory capacity, the model can retain and utilize past information, leading to more personalized and context-aware care. Its probabilistic approach, demonstrated by the collapse probability, enables the model to handle uncertainty and probabilistic outcomes, reflecting the realistic nature of healthcare scenarios. The dynamic behavior of care actions, which fluctuate over time, shows the model's ability to adapt responses based on changing conditions or needs. Additionally, the model appears to be inspired by quantum concepts, as suggested by the presence of wave function collapse points, allowing for more complex state representations and decision-making processes. Continuous optimization is evident as the model operates over extended periods, adjusting its actions iteratively. Furthermore, the model’s ability to handle negative outcomes, as seen in the occasional negative care action values, indicates its capacity to account for and respond to adverse outcomes or setbacks in the caring process. These features collectively create a more flexible, responsive, and realistic simulation of the caring process in healthcare settings, addressing limitations of previous models.
Data sonification reveals a clear convergence in tones and patterns from Figure 4 to Figure 12. Figure 4 shows the highest mean (0.0070) and standard deviation (0.5075), indicating complex, varied sound patterns. Figure 8 exhibits reduced mean (0.0027) and standard deviation (0.3239), suggesting stabilization. Figure 12 has the lowest mean (0.0007) and standard deviation (0.2552), reflecting a consistent, uniform sound pattern. The 90% decrease in mean value and 50% reduction in standard deviation from Figure 4 to Figure 12 indicate convergence towards lower frequencies and more stable patterns. This progression suggests that data sonification represents an increasingly coherent and effective system, possibly mirroring the evolution towards optimization. (Available at https://github.com/jphernandezrn/Data-Sonification-Human-Robot-Interaction.)
Healthcare professionals will eventually recognize that 'bot' technologies can simulate compassionate care through repetitive human interactions, with the effectiveness of this approach depending on the motives behind their implementation. It is crucial to acknowledge the dimensions of compassionate care provided by healthcare robots, viewing them not merely as 'automatons' or mechanical 'automations,' but as integral to the continuity of nursing practice, grounded in the science and art of nursing. This perspective is particularly relevant in long-term and collaborative care settings, where robots can serve as compassionate companions. In palliative care, fostering trust in robots to express care poses challenges, especially when restructuring conservative views on robot autonomy. The future of living with humanoid robots necessitates rethinking their roles in healthcare, emphasizing their human-like expressions of compassion. As Locsin et al. [56] noted, genuine expressions of humanness from intelligent, caring nonhuman entities can be realized, legitimizing compassionate care and enhancing the role of humanoid robots in nursing practice.