Abstract
Multi-modal human–machine interfaces have recently undergone a remarkable transformation, progressing from simple human–robot interaction (HRI) to more advanced human–robot collaboration (HRC) and, ultimately, evolving into the concept of human–robot teaming (HRT). The aim of this work is to delineate a progressive path through this evolving transition. Rather than aiming for an exhaustive survey, we propose a structured, position-oriented review of a field whose literature offers diverse and sometimes divergent definitions of HRI/C/T. This conceptual review seeks to establish a unified and systematic framework for understanding these paradigms, offering clarity and coherence amidst their evolving complexities. We focus on the integration of multiple sensory modalities, such as visual, aural, and tactile inputs, within human–machine interfaces. Central to our approach is a running use case of a warehouse workflow, which illustrates key aspects including modelling, control, communication, and technological integration. Additionally, we investigate recent advancements in machine learning and sensing technologies, emphasising robot perception, human intention recognition, and collaborative task engagement. Current challenges and future directions, including ethical considerations, user acceptance, and the need for explainable systems, are also addressed. By providing a structured pathway from HRI to HRT, this work aims to foster a deeper understanding and facilitate further advancements in human–machine interaction paradigms.