Achieving Fluid, Trustworthy, and Contextually Aware Human-Robot Collaboration in Unstructured Environments

The integration of robots into diverse sectors beyond traditional industrial applications is rapidly increasing, encompassing service robotics, home assistance, and critical areas such as disaster relief.1 This expansion brings with it the distinctive challenges of unstructured environments, which require robots to exhibit a high degree of adaptability and robustness and to interact seamlessly with humans.4 To fully realize the potential of robots as collaborative partners rather than mere tools, the development of fluid, trustworthy, and contextually aware collaboration is of paramount importance.7 The transition from robots performing isolated tasks to working alongside humans in unpredictable settings requires a fundamental shift in their design, focusing on intelligence and interaction capabilities that can understand and respond to the complexities of human behavior and dynamic environments. This means moving beyond pre-programmed routines towards more nuanced forms of artificial intelligence capable of adapting to unforeseen circumstances and building genuine partnerships with human counterparts.
1. The Foundation: Advancements in Multimodal Human-Robot Interaction
The rapid progress in artificial intelligence has enabled machines to move beyond basic information processing to the interpretation and generation of diverse data formats, including language, images, and video.10 This capability forms the bedrock of multimodal human-robot interaction (HRI), allowing robots to engage with humans through a variety of sensory channels. Computer vision plays a critical role in this domain, equipping robots with the ability to recognize and track human faces, movements, gestures, and gaze, thereby enabling responses to visual cues.12 This visual perception allows an autonomous vehicle, for example, to infer a pedestrian's intention to cross the street based on their gesture.10 Natural language processing (NLP) further enhances HRI by empowering robots to understand spoken commands, participate in dialogues, and generate human-like responses.11 This allows for more intuitive and natural communication, moving beyond simple button presses or pre-defined commands. Gesture recognition serves as another vital control modality, proving particularly useful in noisy environments where speech communication might be challenging.12 Beyond explicit commands, robots are also increasingly capable of analyzing facial expressions and body language to infer human emotions, a field known as affective computing.10 This allows a service robot to recognize a user's emotional state and respond accordingly.10 Furthermore, the potential of physiological signals, such as heart rate, electroencephalography (EEG), and electrocardiography (ECG), is being explored to gain an even deeper understanding of human emotional and cognitive states.15
Nonverbal elements constitute a significant portion of human communication 18, underscoring the necessity of multimodal approaches for achieving nuanced and effective HRI. Relying solely on a single modality like speech often proves insufficient to capture the full spectrum of human expression and intent. Multimodal feedback, including speech synthesis and visual cues such as lights and displays, further enhances the user experience by making interactions with robots more natural and intuitive.11 For instance, a robot can use text-to-speech technology to generate spoken responses to humans, mimicking different voices and even creating virtual robot voices.11 The convergence of advancements across AI domains, including computer vision, NLP, and affective computing, is establishing a robust foundation for robots to engage in interactions that more closely resemble human-to-human communication. This multimodality facilitates richer communication and a deeper comprehension of human intent and emotional states, which are indispensable for fluid collaboration. Behavioral AI, which predicts future behaviors and supports automated decision-making based on various forms of digital behavioral data, including multimodal cues, holds significant promise for enabling proactive and context-aware HRC in unstructured environments.10
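As a concrete illustration of the kind of multimodal fusion described above, the following sketch combines per-modality confidence scores over a small set of candidate intents with a simple weighted late-fusion rule. The intent labels, modality weights, and scores are hypothetical placeholders, not outputs of any particular perception system.

```python
# Minimal late-fusion sketch: combine per-modality confidence scores over a
# small set of candidate intents. All intent names, weights, and scores are
# illustrative placeholders.

CANDIDATE_INTENTS = ["hand_over_tool", "stop", "continue_task"]

def fuse_intent_scores(speech_scores, gesture_scores, gaze_scores,
                       weights=(0.5, 0.3, 0.2)):
    """Weighted late fusion of per-modality scores (each a dict intent -> [0, 1])."""
    fused = {}
    for intent in CANDIDATE_INTENTS:
        fused[intent] = (weights[0] * speech_scores.get(intent, 0.0)
                         + weights[1] * gesture_scores.get(intent, 0.0)
                         + weights[2] * gaze_scores.get(intent, 0.0))
    total = sum(fused.values()) or 1.0
    return {intent: score / total for intent, score in fused.items()}

# Example: speech strongly suggests a handover, gesture and gaze weakly agree.
fused = fuse_intent_scores(
    speech_scores={"hand_over_tool": 0.8, "continue_task": 0.2},
    gesture_scores={"hand_over_tool": 0.5, "stop": 0.1},
    gaze_scores={"hand_over_tool": 0.6},
)
print(max(fused, key=fused.get), fused)
```

A weighted sum of normalized scores is only one fusion choice; learned fusion models can capture cross-modal dependencies that a fixed weighting misses.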
2. Cognitive Alignment: Building and Maintaining Shared Mental Models
Effective collaboration between humans and robots hinges on the establishment of shared mental models (SMMs), which represent a team's collective understanding of a task and the strategies required for successful teamwork.19 Forming accurate mental models of teammates, whether human or artificial, is a critical aspect of collaborative endeavors.19 These SMMs encompass shared knowledge concerning the task at hand, the equipment being used, the roles and responsibilities of team members, and their respective goals and intentions.7
Several AI techniques are being explored to facilitate the development of SMMs in HRI. Explainable AI (xAI) techniques offer a pathway for humans to develop mental models of AI agents by providing insights into their otherwise opaque decision-making processes.19 Tools like the "After-Action Explanation (AAE)" have been proposed as formalisms to aid humans and AI agents in building a shared understanding of events, decisions, and thought processes through a review mechanism.19 Computational frameworks are also being developed to enable robots to estimate the mental models of human operators and trigger selective communication updates based on any discrepancies identified.23 Query-Feature Graphs can be utilized to map user goals to the features of an interface, thereby assisting in the formation of mental models of AI agents.24 Research is also focused on enabling robots to model the behavior of their human collaborators to infer their beliefs, intentions, and goals, representing what is known as first-order mental modeling.25 Furthermore, the concept of second-order mental modeling is emerging, where robots attempt to estimate a human's mental model of the robot itself.23
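The following is a minimal sketch of first-order mental modeling framed as Bayesian goal inference: the robot maintains a belief over which goal the human is pursuing and updates it from observed actions. The goal set, observable actions, and likelihood table are invented for illustration.

```python
# Illustrative first-order mental model: the robot maintains a belief over the
# human's goal and updates it from observed actions via Bayes' rule.
# Goals, actions, and the likelihood table are hypothetical.

GOALS = ["assemble_part_A", "assemble_part_B"]

# P(observed action | goal): how likely each human action is under each goal.
LIKELIHOOD = {
    "reach_for_screwdriver": {"assemble_part_A": 0.7, "assemble_part_B": 0.2},
    "reach_for_wrench":      {"assemble_part_A": 0.2, "assemble_part_B": 0.7},
}

def update_belief(belief, observed_action):
    """One Bayesian update of the belief over the human's goal."""
    posterior = {g: belief[g] * LIKELIHOOD[observed_action][g] for g in GOALS}
    norm = sum(posterior.values()) or 1.0
    return {g: p / norm for g, p in posterior.items()}

belief = {g: 1.0 / len(GOALS) for g in GOALS}        # uniform prior
for action in ["reach_for_screwdriver", "reach_for_screwdriver"]:
    belief = update_belief(belief, action)
print(belief)  # belief shifts toward assemble_part_A
```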
Maintaining these shared mental models requires dynamic updates based on real-time observations, fluctuations in performance, and changes in the environment.20 Mechanisms are needed for robots to track the progress of humans towards their assigned goals and adjust their own behavior accordingly.20 Maintaining consistency within SMMs presents a challenge due to potential differences in perception, knowledge states, and the asynchronous nature of information.21 To address the issue of information overload, information-theoretic update triggers are being explored to determine when to send updates to human users without overwhelming them.26 The ability of robots to reason about a human's understanding and tailor their communication accordingly signifies a move towards more intelligent and considerate HRI.23 Building and maintaining shared mental models is thus a cornerstone of achieving genuine human-robot teaming, with AI techniques playing a crucial role in enabling robots to comprehend human intentions and communicate their own understanding, fostering a mutual cognitive alignment that underpins fluid and trustworthy collaboration.
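One way such an update trigger could work, sketched here under simple assumptions, is to measure the divergence between the robot's current task belief and its estimate of the human's belief, and to communicate only when that divergence crosses a threshold. The distributions and the threshold value are illustrative.

```python
import math

# Sketch of an information-theoretic update trigger: the robot compares its own
# task belief with its estimate of the human's belief and only sends an update
# when the divergence exceeds a threshold, to avoid overloading the human.

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q) for two distributions given as dicts over the same keys."""
    return sum(p[k] * math.log((p[k] + eps) / (q[k] + eps)) for k in p)

def should_send_update(robot_belief, estimated_human_belief, threshold=0.3):
    return kl_divergence(robot_belief, estimated_human_belief) > threshold

robot_belief = {"path_clear": 0.1, "path_blocked": 0.9}        # robot has new info
human_belief_estimate = {"path_clear": 0.8, "path_blocked": 0.2}
if should_send_update(robot_belief, human_belief_estimate):
    print("Notify human: the planned path is now blocked.")
```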
3. Intelligent Action: Proactive, Adaptive, and Legible Robot Behavior
For robots to effectively collaborate with humans in unstructured environments, they must exhibit intelligent action characterized by proactivity, adaptability, and legibility. Proactive behavior involves robots acting on their own initiative in an anticipatory manner to benefit humans.27 One AI approach to achieving this is intention-based proactivity, where the robot recognizes what a human intends to do and acts to facilitate the achievement of that goal.27 Another approach, prediction-based proactivity, involves the robot reasoning about potential future threats or opportunities and taking action to prevent undesirable outcomes or foster positive ones.27 Integrated systems that combine both intention-based and prediction-based reasoning are being developed to enable robots to exhibit more comprehensive proactive behaviors.27 Probabilistic models that exploit the temporal coherence and dynamic characteristics of tasks are also utilized to enable robots to anticipate human needs and act proactively.4 In collaborative tasks, robots can even take the lead when a human hesitates, based on their learned models of the task progression.4
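A prediction-based proactive behavior can be as simple as projecting the near-term future of a shared task and acting before a shortfall occurs. The sketch below assumes a hypothetical kitting task in which the robot restocks parts before the human runs out; the rates and lookahead horizon are made up for illustration.

```python
# Illustrative prediction-based proactivity: project the near future of a shared
# task and act before an undesirable state occurs. The task model (consumption
# rate, stock level, lookahead horizon) is hypothetical.

def predict_shortfall(parts_in_tray, human_rate_per_min, lookahead_min=5):
    """Return how many parts will be missing over the lookahead horizon."""
    needed = human_rate_per_min * lookahead_min
    return max(0, needed - parts_in_tray)

def choose_proactive_action(parts_in_tray, human_rate_per_min):
    shortfall = predict_shortfall(parts_in_tray, human_rate_per_min)
    if shortfall > 0:
        return f"fetch {shortfall} parts before the human runs out"
    return "continue current task"

print(choose_proactive_action(parts_in_tray=6, human_rate_per_min=2))  # fetch 4 parts
```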
Adaptive behavior is equally crucial, requiring robots to seamlessly adjust their actions to cooperate effectively with users in diverse and novel situations within unstructured environments.4 AI techniques such as reinforcement learning (RL) enable robots to learn and adapt their behavior based on interactions and feedback received from humans.31 Adaptive duration hidden semi-Markov models (ADHSMM) represent another approach, allowing robots to modify their behavior online based on the actions of the user.4 Furthermore, Theory of Mind (ToM) plays a significant role in enabling robots to adapt to users' preferences and infer their underlying beliefs and intentions.31
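To make the reinforcement-learning idea concrete, the sketch below uses a contextual-bandit-style simplification of tabular learning in which scalar human feedback serves as the reward signal. The states, actions, and simulated feedback function are illustrative stand-ins for a real interaction.

```python
import random

# Minimal tabular learning sketch: the robot adapts its choice of assistance
# behaviour from scalar human feedback (+1 approval, -1 disapproval).
# States, actions, and the simulated feedback function are all illustrative.

STATES = ["human_idle", "human_busy"]
ACTIONS = ["offer_help", "wait"]
q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, epsilon = 0.1, 0.2

def simulated_human_feedback(state, action):
    # Pretend this particular human prefers help only when busy.
    return 1.0 if (state == "human_busy") == (action == "offer_help") else -1.0

for _ in range(500):
    state = random.choice(STATES)
    if random.random() < epsilon:                        # explore
        action = random.choice(ACTIONS)
    else:                                                # exploit
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    reward = simulated_human_feedback(state, action)
    q[(state, action)] += alpha * (reward - q[(state, action)])  # bandit-style update

print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES})
```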
Legible behavior ensures that the robot's intentions and the reasoning behind its actions are clear and easily understandable to humans.32 AI frameworks that integrate inverse optimal control, intention recognition, and interactive planning are being developed to achieve both legibility and proactivity in robot behavior.32 Proactive actions themselves can often serve to implicitly communicate the robot's intent to the user, guiding them through the collaborative task.4 Additionally, the use of explainable AI (xAI) is vital for making the robot's decision-making processes more transparent to its human partners.19 The ability of robots to anticipate needs, adapt to dynamic situations, and clearly communicate their intentions, facilitated by advancements in AI, is essential for achieving truly collaborative partnerships in unstructured environments.
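A rough way to operationalize legibility, sketched below under simple assumptions, is to score candidate trajectories by how strongly an observer running distance-based goal inference would favor the robot's true goal along the path; motions that disambiguate the goal early score higher. Goal positions and trajectories are hypothetical.

```python
import math

# Illustrative legibility check: among candidate trajectories, prefer the one
# under which an observer's goal inference concentrates on the robot's true
# goal earliest. Goal positions and trajectories are hypothetical.

GOALS = {"goal_A": (1.0, 0.0), "goal_B": (1.0, 1.0)}
TRUE_GOAL = "goal_A"

def observer_goal_probs(point, beta=5.0):
    """Soft goal inference: closer goals are judged more likely."""
    scores = {g: math.exp(-beta * math.dist(point, pos)) for g, pos in GOALS.items()}
    norm = sum(scores.values())
    return {g: s / norm for g, s in scores.items()}

def legibility(trajectory):
    """Average probability the observer assigns to the true goal along the path."""
    return sum(observer_goal_probs(p)[TRUE_GOAL] for p in trajectory) / len(trajectory)

direct = [(0.2, 0.3), (0.5, 0.4), (0.8, 0.3)]          # drifts toward the midline between goals
exaggerated = [(0.2, -0.2), (0.5, -0.3), (0.8, -0.1)]  # swings away from goal_B early
print(legibility(direct), legibility(exaggerated))     # exaggerated path scores higher
```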
4. Establishing Confidence: Fostering Trust in Human-Robot Relationships
Trust is a cornerstone of successful human-robot interaction, significantly influencing decision-making processes and the efficiency of collaborative operations.9 In hybrid teams comprising both humans and robots, trust is essential for ensuring smooth collaboration and mitigating potential risks.9 Moreover, the level of trust that humans place in robots directly impacts their acceptance and willingness to utilize them.9
Several factors contribute to the development of trust in HRI. These include the robot's perceived trustworthiness, its degree of human-likeness, its apparent intelligence, and the affective responses it elicits in humans.33 The robot's ability to perform tasks effectively, its perceived benevolence or intention to act in the user's best interest, and its integrity or adherence to acceptable principles also play crucial roles.8 Predictable and natural movements on the part of the robot enhance user acceptance and foster trust.35 Transparent and ethical design principles in AI are also vital for maintaining user confidence.35 Positive initial interactions and sustained engagement over time contribute significantly to building trust.36 Interestingly, the level of trust in robots can dynamically vary depending on the complexity of the tasks involved.34 The user's perception of safety during interaction is another key factor influencing trust.37 Conversely, the "Uncanny Valley" effect, where robots that appear almost but not quite human can evoke feelings of unease, can negatively impact trust.35
Strategies for building and maintaining trust in HRI include designing robots that clearly communicate their capabilities and limitations to users.35 Engaging in genuine two-way interaction and reciprocal conversation is also crucial.36 Explainable AI plays a vital role by improving the transparency of the robot's decision-making processes, thereby enhancing trust.22 When robots make mistakes, employing effective trust repair strategies, such as offering apologies and providing explanations, can help to rebuild user confidence.37 Finally, the user's inherent propensity to trust automation and their general attitudes towards robots can influence their initial levels of trust.40 Ultimately, fostering trust in robots requires a comprehensive approach that prioritizes transparency, reliability, ethical design, and the ability to engage in meaningful and predictable interactions, while also carefully considering the robot's design to avoid negative responses associated with the Uncanny Valley.
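The trust dynamics described above can be illustrated with a toy update model in which trust grows slowly with successes, drops sharply after failures, and is partially restored by a repair action such as an apology with an explanation. The update rates are arbitrary illustration values, not empirically measured parameters.

```python
# Toy trust-dynamics model: asymmetric gains and losses plus a partial recovery
# from a repair action. All rates are assumptions chosen for illustration.

def update_trust(trust, outcome, gain=0.05, loss=0.25, repair=0.10):
    if outcome == "success":
        trust += gain * (1.0 - trust)
    elif outcome == "failure":
        trust -= loss * trust
    elif outcome == "failure_with_apology":
        trust -= loss * trust
        trust += repair * (1.0 - trust)       # partial repair via apology/explanation
    return min(max(trust, 0.0), 1.0)

trust = 0.5
for outcome in ["success", "success", "failure", "failure_with_apology", "success"]:
    trust = update_trust(trust, outcome)
    print(f"{outcome:22s} -> trust = {trust:.2f}")
```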
5. Navigating Complexity: Addressing Challenges and Ensuring Safety in Unstructured Environments
Deploying collaborative robots in unstructured environments presents a unique set of challenges, particularly in ensuring safety and adaptability. Unlike the controlled settings of industrial environments, unstructured spaces such as homes and disaster sites demand that robots adapt to a wide variety of tasks and unpredictable situations.4 Ensuring the safety of humans who share these dynamic and unpredictable workspaces with robots is a paramount concern.43 Moreover, robots operating in these complex environments must possess the capability to effectively perceive and understand their surroundings.6
Several safety risks are commonly associated with industrial robots, including unexpected movements, system failures, collision hazards, and electrical hazards.43 To mitigate these risks, various preventive measures are employed, such as the implementation of safety guards, emergency stop buttons, physical barriers, and safety sensors.43 Collaborative robots, or cobots, are specifically designed with built-in safety mechanisms like force limiters, enabling them to work alongside humans without the need for extensive safety barriers.5 Comprehensive safety protocols, thorough employee training on robot operation and emergency response, and regular inspections and maintenance are essential for minimizing risks.43 Advanced software plays a crucial role in continuously monitoring robot performance and predicting potential failures or hazards through predictive maintenance.43 The increasing prevalence of HRC in diverse domains necessitates the development of new approaches to safety assessment and verification that cater to a broader user audience beyond robotics experts.42 The concept of "safety skills," representing a robot's ability to mitigate specific risks, along with standardized testing protocols, offers a promising cross-domain approach to safety validation.42
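As a simplified illustration of continuous safety monitoring, the sketch below caps the robot's commanded speed as a function of the measured human-robot separation and triggers a protective stop below a threshold. The distances and speed limits are placeholders, not values taken from any safety standard.

```python
# Simplified speed-and-separation style check: commanded speed is limited by
# the measured human-robot distance. Thresholds and limits are placeholders.

def allowed_speed(distance_m, stop_dist=0.5, slow_dist=1.5,
                  slow_speed=0.25, full_speed=1.0):
    """Return the maximum allowed end-effector speed (m/s) for a given separation."""
    if distance_m <= stop_dist:
        return 0.0                      # protective stop
    if distance_m <= slow_dist:
        # Linear ramp between the protective-stop and full-speed zones.
        frac = (distance_m - stop_dist) / (slow_dist - stop_dist)
        return slow_speed + frac * (full_speed - slow_speed)
    return full_speed

for d in (0.3, 0.8, 1.2, 2.0):
    print(f"human at {d:.1f} m -> speed cap {allowed_speed(d):.2f} m/s")
```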
In specific unstructured environments like disaster relief zones, collaborative robots (cobots) can play a vital role in cooperating with human responders to mitigate risks and improve the chances of rescuing individuals in distress.46 However, deploying multi-robot systems in complex and dangerous environments such as tunnel disasters presents significant challenges in coordination, control system development, and real-world deployment.47 In in-home assistance, robots are being explored as aids for daily tasks and sources of everyday support; in these scenarios, ensuring safe and intuitive interaction is crucial for user acceptance and well-being.13 Addressing the safety challenges inherent in deploying collaborative robots in unstructured environments requires a multifaceted strategy that includes designing intrinsically safe robots, implementing robust safety protocols, utilizing advanced monitoring and prediction systems, and developing standardized validation methods applicable across diverse applications.
6. The Physical Dimension: The Role of Embodied AI in Collaboration
Embodied AI, which refers to the integration of artificial intelligence into physical systems such as robots, is increasingly recognized as a critical paradigm for achieving truly intelligent and collaborative robots.49 This approach posits that intelligence is not solely a function of cognitive processes but is deeply intertwined with our physical interactions with the world.49 Physical embodiment significantly influences a robot's decision-making, learning processes, and interactions within real-world environments.52
Key aspects of embodied AI include the robot's ability to interact with the physical world through sensing and acting upon it.49 Learning through experience and adapting to dynamically changing conditions are fundamental to this paradigm.49 Embodied AI emphasizes contextual awareness, enabling robots to make decisions informed by their immediate surroundings, and often involves the integration of multiple sensory modalities, such as vision, touch, and sound, for a more effective perception of the environment.49 Furthermore, the ability to transfer knowledge acquired in virtual environments to physical robots operating in the real world is a crucial aspect of this field.50
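At its core, an embodied agent runs a sense-think-act loop over its physical interface to the world. The following minimal sketch shows that control structure; the sensor fields, world-state variables, and actions are placeholders rather than any specific robot's API.

```python
import time

# Minimal sense-think-act loop, the basic control structure behind the embodied
# view of intelligence. Sensor readings, state fields, and actions are placeholders.

def read_sensors():
    # In a real system: camera frames, tactile readings, audio, joint states.
    return {"object_visible": True, "contact_force_n": 0.0}

def update_world_state(state, obs):
    state["holding_object"] = obs["contact_force_n"] > 1.0
    state["object_in_view"] = obs["object_visible"]
    return state

def choose_action(state):
    if state["object_in_view"] and not state["holding_object"]:
        return "reach_and_grasp"
    return "idle"

state = {}
for _ in range(3):                      # a few iterations of the control loop
    obs = read_sensors()
    state = update_world_state(state, obs)
    print(choose_action(state))
    time.sleep(0.01)                    # stand-in for the real control period
```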
Numerous research projects are currently exploring the potential of embodied AI in collaborative robotics. For instance, A*STAR's Human-Robot Collaborative AI (Collab AI) program focuses on creating robots that can learn tasks by observing humans and respond using a combination of vision, touch, and speech.53 Researchers at Aalborg University are investigating advanced AI for multimodal human-robot interaction within the framework of embodied AI, aiming to develop more intuitive and effective communication methods.52 The development of general-purpose robotics models, such as those being explored by Google, represents another avenue in embodied AI research, with the goal of creating robots capable of reasoning about and performing a wide range of complex tasks.54 The increasing interest in embodied multi-agent systems (EMAS) highlights the potential of this approach for tackling complex, real-world challenges.55 Advancements in tactile sensing technologies, such as those being made by Meta AI, are enhancing robots' ability to perceive and interact with the physical world through touch.56 Benchmarks and frameworks, like Meta AI's PARTNR, are also being developed to facilitate the evaluation of planning and reasoning capabilities in human-robot collaboration scenarios.56 By grounding AI in the physical world, embodied intelligence enables robots to learn and interact in a manner that more closely mirrors human cognition and behavior, leading to more effective and intuitive collaborative partnerships.
7. Collaborative Problem Solving: Negotiation and Error Recovery in Human-Robot Teams
In the context of human-robot teams operating in unstructured environments, the ability to engage in collaborative problem solving, including negotiation and error recovery, is essential for fluid and effective interaction. In shared spaces, goal conflicts between humans and autonomous robots are often unavoidable, necessitating mechanisms for negotiation and conflict resolution.57 There is a growing recognition that robots may need to exhibit a degree of assertiveness in these situations while still maintaining user acceptance.57 Logical argumentation is being explored as a means to facilitate true collaboration and shared decision-making between human and robot partners, moving beyond traditional models where the robot primarily acts as a subordinate.58
AI mechanisms are being developed to enable robots to participate in negotiation processes. This includes drawing upon psychological concepts such as negotiation and persuasion to equip robots with effective strategies for resolving conflicts.57 These strategies can be categorized based on their anticipated emotional impact (positive, neutral, negative) and their implementation modalities (auditory, visual, physical).57 Value-based practical reasoning mechanisms and dialogue protocols are also being investigated to allow robots to reflect on potential decisions and provide justifications for their chosen actions.58 The advent of Large Language Models (LLMs) presents new possibilities, with the potential for these models to serve as the "brain" for collaborative AI agents capable of engaging in negotiation and making informed decisions.59
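A very simple way to organize such strategies in software, sketched below, is as a table indexed by anticipated emotional impact and modality, with an escalation policy that moves from low-impact to higher-impact strategies across repeated conflicts. The specific strategies and the escalation order are illustrative.

```python
# Sketch of conflict-resolution strategy selection organised along the two axes
# mentioned above: anticipated emotional impact and modality. The strategies
# and escalation order are illustrative examples only.

STRATEGIES = [
    {"name": "polite_verbal_request",   "impact": "positive", "modality": "auditory"},
    {"name": "signal_light_warning",    "impact": "neutral",  "modality": "visual"},
    {"name": "assertive_verbal_insist", "impact": "negative", "modality": "auditory"},
]

ESCALATION_ORDER = ["positive", "neutral", "negative"]

def next_strategy(attempt):
    """Escalate from low-impact to higher-impact strategies across attempts."""
    impact = ESCALATION_ORDER[min(attempt, len(ESCALATION_ORDER) - 1)]
    return next(s for s in STRATEGIES if s["impact"] == impact)

for attempt in range(3):
    print(attempt, next_strategy(attempt)["name"])
```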
Furthermore, the ability for robots to recover from errors gracefully is crucial for building trust and maintaining effective collaboration.39 Error recovery strategies are being developed where robots take responsibility for dialogue breakdowns and actively work to elicit cooperative intentions from their human partners.60 Dialogue modeling techniques that tightly integrate the flow of conversation with the task at hand, including specific provisions for error handling and repair activities, are also being implemented.61 During collaborative tasks, the ability to adjust plans and utilize feedback mechanisms is vital for effective error recovery.61 The way in which a robot handles mistakes can significantly influence user cooperation and the overall quality of the interaction.60 Equipping robots with the capacity to negotiate conflicting goals and recover from errors through AI-driven mechanisms is a critical step towards creating truly collaborative and reliable human-robot teams.
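The sketch below illustrates one possible error-recovery flow of this kind: detect a likely dialogue breakdown, take responsibility, explain the robot's intent, and re-elicit the human's instruction before resuming the task. The detection rule and utterances are placeholders.

```python
# Illustrative error-recovery flow for a collaborative dialogue. The breakdown
# detection rule and the utterances are placeholders.

def detect_breakdown(confidence, human_response):
    return confidence < 0.4 or human_response in ("", "what?", "that's wrong")

def recover(task_step):
    yield "Sorry, I think I misunderstood that."                 # take responsibility
    yield f"I was trying to {task_step}."                        # explain intent
    yield "Could you tell me again what you'd like me to do?"    # re-elicit input

if detect_breakdown(confidence=0.3, human_response="that's wrong"):
    for utterance in recover("pass you the small wrench"):
        print(utterance)
```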
8. Conclusion and Future Outlook
Achieving fluid, trustworthy, and contextually aware human-robot collaboration in unstructured environments represents a complex yet increasingly attainable goal. Recent advancements in multimodal interaction, shared mental models, proactive and adaptive behavior, legibility, and the cultivation of trust are collectively paving the way for seamless HRC. Addressing the unique safety challenges posed by unstructured environments and leveraging the principles of embodied AI are also critical for realizing the full potential of robots as collaborative partners. Ongoing research into negotiation and error recovery mechanisms will further enhance the robustness and reliability of human-robot teams.
Looking ahead, several key research directions hold promise for advancing this field. Further exploration of advanced AI models, particularly large language models, is needed to enhance robots' understanding of human intent and the surrounding context. Developing more sophisticated techniques for building and dynamically updating shared mental models in complex, unstructured settings remains a crucial area of focus. Research aimed at endowing robots with more nuanced emotional intelligence and the ability to respond appropriately to a wider spectrum of human emotions is also essential. Longitudinal studies will be vital for gaining a deeper understanding of the evolution of trust and social dynamics in human-robot relationships over extended periods. The development of more robust and adaptable safety protocols specifically tailored for deploying collaborative robots in diverse unstructured environments is paramount. Continued research in embodied AI will be crucial for creating robots that can learn and interact with the physical world in a more natural and intuitive manner. Finally, exploring more advanced negotiation strategies and error recovery mechanisms that align with human social norms and expectations will be key to creating truly collaborative human-robot teams. A continued focus on human-centric design principles will ensure that these technologies are developed and deployed ethically, respecting human values and promoting well-being. The future of human-robot collaboration holds immense potential to revolutionize various aspects of our lives, transforming the way we work, live, and interact with technology.
Works cited
Are friends electric? The benefits and risks of human-robot relationships - PMC, accessed on May 16, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC7809509/
Human–robot interaction: What changes in the workplace? - Eurofound - European Union, accessed on May 16, 2025, https://www.eurofound.europa.eu/en/publications/2024/human-robot-interaction-what-changes-workplace
AI and human-robot interaction: A review of recent advances and challenges - GSC Online Press, accessed on May 16, 2025, https://gsconlinepress.com/journals/gscarr/sites/default/files/GSCARR-2024-0070.pdf
Learning Controllers for Reactive and Proactive ... - Frontiers, accessed on May 16, 2025, https://www.frontiersin.org/journals/robotics-and-ai/articles/10.3389/frobt.2016.00030/full
COBOT Applications—Recent Advances and Challenges - MDPI, accessed on May 16, 2025, https://www.mdpi.com/2218-6581/12/3/79
Chapter 5 Robotics as an Enabler of Resiliency to Disasters: Promises and Pitfalls - Rutgers Computer Science, accessed on May 16, 2025, https://www.cs.rutgers.edu/~kb572/pubs/Robotics_Enabler_Resiliency_Disasters.pdf
The Importance of Shared Mental Models and Shared Situation Awareness for Transforming Robots from Tools to Teammates | Request PDF - ResearchGate, accessed on May 16, 2025, https://www.researchgate.net/publication/258716519_The_Importance_of_Shared_Mental_Models_and_Shared_Situation_Awareness_for_Transforming_Robots_from_Tools_to_Teammates
Exploring Human-Robot Interaction: Trust, Ethics, and the Path Ahead - AZoAi, accessed on May 16, 2025, https://www.azoai.com/article/Exploring-Human-Robot-Interaction-Trust-Ethics-and-the-Path-Ahead.aspx
Trust dynamics in human interaction with an industrial robot - Taylor & Francis Online, accessed on May 16, 2025, https://www.tandfonline.com/doi/full/10.1080/0144929X.2024.2316284
Artificial Behavior Intelligence: Technology, Challenges, and Future Directions - arXiv, accessed on May 16, 2025, https://arxiv.org/html/2505.03315v1
Recent advancements in multimodal human–robot ... - Frontiers, accessed on May 16, 2025, https://www.frontiersin.org/journals/neurorobotics/articles/10.3389/fnbot.2023.1084000/full
Recent advancements in multimodal human–robot interaction - PMC - PubMed Central, accessed on May 16, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC10210148/
Future of Humanoid Robots in North America: A New Era Driven by AI Innovation, accessed on May 16, 2025, https://www.marketsandmarkets.com/blog/SE/humanoid-robot-north-america
Multimodal Human–Robot Interaction Using Gestures and Speech: A Case Study for Printed Circuit Board Manufacturing - MDPI, accessed on May 16, 2025, https://www.mdpi.com/2504-4494/8/6/274
Emotion Recognition in AI: Bridging Human Expressions and Machine Learning - IJFMR, accessed on May 16, 2025, https://ijfmr.com/papers/2025/1/34795.pdf
A Multimodal Facial Emotion Recognition Framework through the Fusion of Speech with Visible and Infrared Images - MDPI, accessed on May 16, 2025, https://www.mdpi.com/2414-4088/4/3/46
Emotion Recognition for Human-Robot Interaction – News - ai.umbc.edu, accessed on May 16, 2025, https://ai.umbc.edu/news/post/138871/
(PDF) Emerging Frontiers in Human–Robot Interaction - ResearchGate, accessed on May 16, 2025, https://www.researchgate.net/publication/379051087_Emerging_Frontiers_in_Human-Robot_Interaction
Enabling Rapid Shared Human-AI Mental Model Alignment via the After-Action Review, accessed on May 16, 2025, https://arxiv.org/html/2503.19607
cdn.aaai.org, accessed on May 16, 2025, https://cdn.aaai.org/ocs/9109/9109-40038-1-PB.pdf
A framework for developing and using shared mental models in human-agent teams, accessed on May 16, 2025, https://hrilab.tufts.edu/publications/scheutzetal17smm.pdf
The Utility of Explainable AI in Ad Hoc Human-Machine Teaming - Rohan Paleja, accessed on May 16, 2025, https://www.rohanpaleja.com/assets/pdf/the_utility_of_explainable_ai_.pdf
Use of Simulated Mental Models and Real-time Planning for Human-Robot Interaction, accessed on May 16, 2025, https://coogan.ece.gatech.edu/papers/pdf/ren2025scitech.pdf
Building Shared Mental Models between Humans and AI for Effective Collaboration - Harman Kaur, accessed on May 16, 2025, https://harmanpk.github.io/Papers/CHI2019_MentalModels_HAI.pdf
A Survey of Mental Modeling Techniques in Human–Robot Teaming, accessed on May 16, 2025, https://www.cairo-lab.com/papers/survey-mental-models.pdf
Establishing Shared Mental Models for Improved Human-Robot Team Performance -- Matthew Luebbers - YouTube, accessed on May 16, 2025, https://www.youtube.com/watch?v=wzQ8lD8DZPA
Two ways to make your robot proactive: Reasoning about human ..., accessed on May 16, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC9420872/
Two ways to make your robot proactive: reasoning about human intentions, or reasoning about possible futures - arXiv, accessed on May 16, 2025, https://arxiv.org/pdf/2205.05492
Proactive Robot Assistance via Spatio-Temporal Object Modeling - Proceedings of Machine Learning Research, accessed on May 16, 2025, https://proceedings.mlr.press/v205/patel23a/patel23a.pdf
Learning Controllers for Reactive and Proactive Behaviors in Human–Robot Collaboration - Infoscience, accessed on May 16, 2025, https://infoscience.epfl.ch/record/220908/files/articles-10-3389-frobt-2016-00030.pdf
Enhancing Robot Assistive Behaviour with Reinforcement Learning and Theory of Mind - arXiv, accessed on May 16, 2025, https://arxiv.org/html/2411.07003v1
Legible and Proactive Robot Planning for Prosocial Human-Robot Interactions - AIModels.fyi, accessed on May 16, 2025, https://www.aimodels.fyi/papers/arxiv/legible-proactive-robot-planning-prosocial-human-robot
Factors affecting trust in high-vulnerability human-robot interaction contexts: A structural equation modelling approach | Request PDF - ResearchGate, accessed on May 16, 2025, https://www.researchgate.net/publication/339984302_Factors_affecting_trust_in_high-vulnerability_human-robot_interaction_contexts_A_structural_equation_modelling_approach
Complexity-Driven Trust Dynamics in Human–Robot Interactions: Insights from AI-Enhanced Collaborative Engagements - MDPI, accessed on May 16, 2025, https://www.mdpi.com/2076-3417/13/24/12989
The Uncanny Valley And Designing Trust in Human-Robot Interaction - iMotions, accessed on May 16, 2025, https://imotions.com/blog/insights/thought-leadership/the-uncanny-valley/
Human-robot dynamics: a psychological insight into the ethics of ..., accessed on May 16, 2025, https://www.emerald.com/insight/content/doi/10.1108/ijoes-01-2024-0034/full/html
Trust and Trustworthiness from Human-Centered Perspective in HRI -- A Systematic Literature Review | Request PDF - ResearchGate, accessed on May 16, 2025, https://www.researchgate.net/publication/388634088_Trust_and_Trustworthiness_from_Human-Centered_Perspective_in_HRI_--_A_Systematic_Literature_Review
(PDF) Complexity-Driven Trust Dynamics in Human–Robot Interactions: Insights from AI-Enhanced Collaborative Engagements - ResearchGate, accessed on May 16, 2025, https://www.researchgate.net/publication/376252604_Complexity-Driven_Trust_Dynamics_in_Human-Robot_Interactions_Insights_from_AI-Enhanced_Collaborative_Engagements
TOC | HRI2024 - Human-Robot Interaction, accessed on May 16, 2025, https://humanrobotinteraction.org/2024/toc/index.html
More Than a Feeling—Interrelation of Trust Layers in Human-Robot Interaction and the Role of User Dispositions and State Anxiety - Frontiers, accessed on May 16, 2025, https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2021.592711/full
More Than a Feeling—Interrelation of Trust Layers in Human-Robot Interaction and the Role of User Dispositions and State Anxiety - PubMed Central, accessed on May 16, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC8074795/
Validating Safety in Human–Robot Collaboration: Standards and ..., accessed on May 16, 2025, https://www.mdpi.com/2218-6581/10/2/65
Industrial robot safety considerations, standards and best practices to consider, accessed on May 16, 2025, https://www.controleng.com/industrial-robot-safety-considerations-standards-and-best-practices-to-consider/
Working Safely with Robot Workers: Recommendations for the New Workplace - PMC, accessed on May 16, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC4779796/
11 Crucial Safety Considerations for Implementing Robotics and AI in Your Warehouse, accessed on May 16, 2025, https://ohsonline.com/Articles/2023/06/23/11-Crucial-Safety-Considerations-for-Implementing-Robotics-and-AI-in-Your-Warehouse.aspx
Collaborative robots (cobots) for disaster risk resilience: a framework ..., accessed on May 16, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC10944857/
Disaster Rescue via Multi-Robot Collaboration: Development ..., accessed on May 16, 2025, https://www.researchgate.net/publication/368662074_Disaster_Rescue_via_Multi-Robot_Collaboration_Development_Control_and_Deployment
The Future of Humanoid Robots: Trends, Applications, and Companies - Digitopia, accessed on May 16, 2025, https://digitopia.co/blog/future-of-humanoid-robots/
Embodied AI Explained: Principles, Applications, and Future Perspectives, accessed on May 16, 2025, https://lamarr-institute.org/blog/embodied-ai-explained/
Embodied AI - Microsoft Research, accessed on May 16, 2025, https://www.microsoft.com/en-us/research/collaboration/embodied-ai/
Embodied AI: The race to build robots that think, move - and earn | Portfolio Adviser, accessed on May 16, 2025, https://portfolio-adviser.com/embodied-ai-the-race-to-build-robots-that-think-move-and-earn/
Robotics and Embodied AI - Aalborg University, accessed on May 16, 2025, https://www.mp.aau.dk/research/research-areas/robotics-and-automation/robotics-and-embodied-ai
Robotics/Embodied AI, accessed on May 16, 2025, https://www.a-star.edu.sg/htco/ai3/embodied-AI
When will we get the ChatGPT of robotics? The future of embodied AI is bright, accessed on May 16, 2025, https://www.therobotreport.com/embodied-ai-when-will-we-get-chatgpt-robotics/
Generative Multi-Agent Collaboration in Embodied AI: A Systematic Review - arXiv, accessed on May 16, 2025, https://arxiv.org/html/2502.11518v1
Advancing embodied AI through progress in touch perception, dexterity, and human-robot interaction - Meta AI, accessed on May 16, 2025, https://ai.meta.com/blog/fair-robotics-open-source/
Development and Testing of Psychological Conflict Resolution ..., accessed on May 16, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC7945950/
Enabling human-robot collaboration via argumentation | Request PDF - ResearchGate, accessed on May 16, 2025, https://www.researchgate.net/publication/262401867_Enabling_human-robot_collaboration_via_argumentation
Multi-Agent Collaboration Mechanisms: A Survey of LLMs - arXiv, accessed on May 16, 2025, https://arxiv.org/html/2501.06322v1
(PDF) Who Is Responsible for a Dialogue Breakdown? An Error Recovery Strategy That Promotes Cooperative Intentions From Humans by Mutual Attribution of Responsibility in Human-Robot Dialogues - ResearchGate, accessed on May 16, 2025, https://www.researchgate.net/publication/332624424_Who_Is_Responsible_for_a_Dialogue_Breakdown_An_Error_Recovery_Strategy_That_Promotes_Cooperative_Intentions_From_Humans_by_Mutual_Attribution_of_Responsibility_in_Human-Robot_Dialogues
The Curious Robot as a Case Study for Comparing Dialogue Systems - AI Magazine, accessed on May 16, 2025, https://ojs.aaai.org/aimagazine/index.php/aimagazine/article/view/2382/2242
Improving Human-Robot Teaching by Quantifying and Reducing Mental Model Mismatch - arXiv, accessed on May 16, 2025, https://arxiv.org/pdf/2501.04755