
Call for Special Session Papers
Special session papers follow the same instructions and deadlines as regular papers. However, make sure to assign your paper to a special session in the submission system using the corresponding session code.
Manuscripts will be reviewed through a rigorous single-blind peer-review process. The maximum length of the final submission is 6 pages, with 2 additional pages allowed at an extra charge. Papers presented in special sessions will be included in the official conference proceedings.
RO-MAN 2025 Special Sessions
Social Human-Robot Interaction of Human-care Service Robots (Code: 66f62)
Service robots with social intelligence are increasingly being integrated into our daily lives, with the aim of enhancing both quality of life and operational efficiency. This session will bring together participants from diverse backgrounds, including Human-Robot Interaction design, social intelligence, decision-making, social psychology, and robotic social skills. The goal of the session is to explore how social robots can interact with humans in a socially meaningful way and facilitate the integration of service robots into society. The session will focus on three key social aspects of human-robot interaction: (1) the technical implementation of social robots and robotic products, (2) the design of form, function, and behavior, and (3) human behavior and expectations regarding interactions with these robots. The session will also cover the latest advances in the field of social Human-Robot Interaction, social intelligence, and social skills, along with their applications, including clinical evaluations.
Organisers:
Ho Seok Ahn (University of Auckland, New Zealand) - hs.ahn@auckland.ac.nz
Minsu Jang (Electronics and Telecommunications Research Institute, South Korea) - minsu@etri.re.kr
Sonya S. Kwak (Korea Institute of Science and Technology, South Korea) - sonakwak@kist.re.kr
Min-Gyu Kim (Korea Institute of Robotics and Technology Convergence, South Korea) - mingyukim@kiro.re.kr
Youngwoo Yoon (Electronics and Telecommunications Research Institute, South Korea) - youngwoo@etri.re.kr
Explainable Human-Robot Interaction (Code: 1i354)
As the field of human-robot interaction (HRI) grows and socially aware robots work more closely with people in a number of critical domains, it is increasingly important to ensure that these robots are explainable, interpretable and transparent. Robots that can communicate the reasons and justifications for their beliefs and actions and that can act in a transparent manner may aid in people’s understanding of the robot, and can potentially improve trust in, acceptance of and engagement with the robot. This special session focuses on the growing intersection between explainability and HRI, addressing the challenges of designing and deploying robots that are capable of generating explanations and justifications for their decisions and communicating them effectively to humans. We invite researchers of all backgrounds and disciplines to submit their work in this area, on topics such as interactive explainability, multimodal explanations, interpretable human-robot interfaces, the impact of explanations on factors such as trust and engagement, and predictable robot behaviour in HRI settings, as well as the social and ethical implications of explainability. In this way, this session aims to advance the integration of explainable robotics into society, ensuring robots remain effective and accountable partners in a shared human-robot future.
Organisers:
Tamlin Love - tlove@iri.upc.edu
Pradip Pramanick - pradip.pramanick@unina.it
Jauwairia Nasir - jauwairia.nasir@uni-a.de
Antonio Andriella - aandriella@iri.upc.edu
Elmira Yadollahi - e.yadollahi@lancaster.ac.uk
Stefan Wermter - stefan.wermter@uni-hamburg.de
Adaptive and Adaptable Robots in Social Interactions (Code: 48wkt)
The success of personal robots depends on their ability to tailor their behaviour to meet individual needs, enhancing human-robot interaction (HRI) by increasing engagement, trust, and task performance. Achieving this requires robots to be both adaptive — autonomously learning and adjusting to user preferences — and adaptable, empowering users to customise behaviours through intuitive interfaces. Adaptive robots leverage advanced capabilities like Theory of Mind and proactive decision-making, while adaptable robots foster usability by enabling non-experts to modify behaviours through demonstrations and feedback. However, achieving personalisation introduces critical ethical and societal challenges that must be addressed. These include privacy risks, algorithmic biases, and reduced trust in sensitive domains like healthcare and education. Addressing these challenges demands human-centred design principles emphasising transparency, inclusivity, and user agency while balancing safety and societal values. This special session aligns with the conference theme, “Shaping our hybrid future with robots together,” by focusing on robots capable of adapting and personalising their behaviour to meet human needs. By fostering shared autonomy and empowering users, the session promotes the development of ethical, effective, and human-centred robotic systems that enable harmonious and collaborative human-robot relationships.
Organisers:
Antonio Andriella - aandriella@iri.upc.edu
Wing-Yue Geoffrey Louie - louie@oakland.edu
Barbara Bruno - barbara.bruno@kit.edu
Alessandro di Nuovo - a.dinuovo@shu.ac.uk
Silvia Rossi - silvia.rossi@unina.it
LLM/GenAI-based Multimodal, Multilingual and Multitask Modeling Technologies for Robotic Systems (Code: 86gd7)
In the Natural Language Processing (NLP) field, we have seen significant progress in Large Language Models (LLMs) and Generative Artificial Intelligence (GenAI), with advancements in multimodal capabilities, reasoning, efficiency, and accessibility. The integration of LLMs and GenAI with robotics can improve task planning, human-robot interaction, and real-world applications. This special session focuses on LLM/GenAI-powered robots for autonomous tasks, tackling challenges to drive the future of intelligent robotics.
Organisers:
Sheng Li (School of Engineering, Institute of Science Tokyo, Japan) - sheng.li@ieee.org
Takahiro Shinozaki (School of Engineering, Institute of Science Tokyo, Japan) - shinot@ict.e.titech.ac.jp
Chenhui Chu (Department of Informatics, Kyoto University, Kyoto, Japan) - chu@i.kyoto-u.ac.jp
Huck C.-H. Yang (NVIDIA Research, Taiwan) - hucky@nvidia.com
Jiyi Li (University of Yamanashi, Kofu, Japan) - jyli@yamanashi.ac.jp
Bridging Trust and Context: Dynamic Interactions in HAI (Code: w793d)
The rapid advancement of artificial intelligence (AI) is transforming human-robot interaction, enabling unprecedented levels of collaboration and adaptability in hybrid environments. Building trust in dynamic contexts is a pivotal challenge for successful integration of humans and robots, as we shape a hybrid future where humans and robots collaborate seamlessly. This special session aims to explore the intricate relationship between trust and dynamic contexts in hybrid systems. By focusing on how intelligent agents and humans can collaborate with trust in the face of ever-changing contexts, this session aligns with the conference theme, "Shaping our hybrid future with robots together." Building on the success of the ten talks in the previous edition, this session emphasizes the evolution of trust in hybrid environments, where robots, humans, and AI systems interact across diverse contexts. We invite a broad spectrum of research that examines how dynamic contexts influence trust in HAI. Topics of interest include, but are not limited to, adaptive trust models in dynamic human-robot/AI interaction, context-aware communication strategies in HAI, the impact of social and cultural contexts on trust dynamics, and empirical studies on trust evolution in long-term human-agent interactions. Through this session, we aim to foster discussions on shaping hybrid futures that emphasize inclusion, adaptability, and trust as central to human-robot collaboration.
Organisers:
Yosuke Fukuchi - fukuchi@tmu.ac.jp
Kazunori Terada - terada.kazunori.u8@f.gifu-u.ac.jp
Michita Imai - michita@keio.jp
Seiji Yamada - seiji@nii.ac.jp
Social Robots for Mental Health and Well-Being (Code: j2ke7)
The integration of social robots into mental health care and well-being has emerged as a promising field of research, bridging robotics, psychology, cognitive science, and human-robot interaction (HRI). Indeed, social robots have the potential to redefine mental health care by combining human empathy with robotic consistency, accessibility, and adaptability. For instance, social robots can provide emotional support, companionship, and therapeutic interventions for diverse populations, including children, elderly individuals, and people with mental health challenges.
This special session aims to showcase cutting-edge research, innovative applications, and interdisciplinary approaches to designing and evaluating social robots for mental health and well-being. The goal is to explore how socially interactive robots can be designed, developed, and deployed to shape a hybrid future in which they work alongside mental health professionals and individuals to foster well-being and resilience.
Organisers:
Hanan Salam (New York University Abu Dhabi, UAE) - hanan.salam@nyu.edu
Oya Celiktutan (King's College London, UK) - oya.celiktutan@kcl.ac.uk
Marwa Mahmoud (University of Glasgow, UK) - Marwa.Mahmoud@glasgow.ac.uk
Human Modeling for Hybrid Interactions with Robots (Code: a3848)
The growing reliability, efficiency, and computational capabilities of robotic platforms are driving the design of innovative services in which robots act and interact with humans in both everyday and working scenarios. The effective use of robots in ecological environments is strictly connected to the capability of synthesizing safe and socially compliant behaviors. Furthermore, different types of robots (e.g., mobile, humanoid, desktop, pet) with different interaction skills and scenarios require hybrid interaction mechanisms (e.g., gesture, voice, touch, or implicit mechanisms such as gaze and other social cues). Robots need models of the behavioral dynamics, mental states, expectations, and intentions of humans to realize smooth and acceptable interactions. The design of effective models capable of characterizing the physical, cognitive, and behavioral features of humans, and of combining them with the control and interaction strategies of robots, still poses open research challenges. This special session fosters a multidisciplinary dialogue on the design of novel human-mediated robots by discussing aspects such as: a) co-design of behaviors; b) modeling of human cognition and physical states as well as social dynamics; c) metrics and benchmarks to evaluate the acceptability and efficacy of human-robot interactions; d) ethical regulations; e) hybrid approaches to improve the legibility of robot behaviors and communication with humans in general.
Organisers:
Dr. Rachid Alami - rachid.alami@laas.fr
Dr. Phani-Teja Singamaneni - phani-teja.singamaneni@laas.fr
Dr. Gloria Beraldo - gloria.beraldo@istc.cnr.it
Dr. Riccardo De Benedictis - riccardo.debenedictis@istc.cnr.it
Dr. Francesca Fracasso - francesca.fracasso@istc.cnr.it
Prof. Masaki Takahashi - makahashi@sd.keio.ac.jp
Dr. Alessandro Umbrico - alessandro.umbrico@istc.cnr.it
Sustainable Autonomy: Connecting Awareness and Ethics in Human-Robot Interaction (Code: w98u9)
Human cognition handles uncertainty with appropriate situational awareness (SA), risk awareness, coordination, and decision making. However, current robotic agents need to make decisions using information beyond what is incorporated in human-based SA models. These agents have to fulfill increasingly complex autonomous operations via multi-layer computational reasoning and learning-enabled components for decision-making and perception of the environment, agents, and dynamics. The EIC Pathfinder project SymAware aims to address this problem by designing a novel architecture for SA in multi-agent systems, enabling safe collaboration of autonomous vehicles and drones. However, as these agents will function in practical real-life applications, in addition to communicating with each other, they will also need to interact with human users - drivers, pedestrians, drone operators, etc. Thus, modeling and implementing an SA architecture also requires an understanding of how to build ethical and trustworthy human-agent interaction. How can agents balance their autonomy and human interests in high-risk scenarios to build trust? How can agents take full advantage of their knowledge awareness while also safeguarding users' privacy and data? How can we have agents with spatio-temporal awareness that also respect the social norms and personal boundaries of humans? In this RO-MAN 2025 special session, we welcome all submissions that seek to investigate the various facets of artificial awareness and its implementation in single- and multi-agent systems, as well as submissions that are more centered around the ethical dimension and implications of artificial awareness in agents that interact with humans.
Organisers:
Ana Tanevska - ana.tanevska@it.uu.se
Arabinda Ghosh - arabinda@mpi-sws.org
Ginevra Castellano - ginevra.castellano@it.uu.se
Sadegh Soudjani - sadegh@mpi-sws.org
Theory of Mind in Human-Robot Interaction (Code: g19y9)
The ability to understand and acknowledge others' mental states is known as Theory of Mind (ToM). Theory of Mind is a multi-modal system people use to communicate and understand each other naturally. A growing body of Human-Robot Interaction (HRI) research focuses on investigating whether people form a ToM towards robots, and what level of ToM a robot should have to communicate transparently with the humans in its shared environment in a sociable and accepted way. Robots that encounter humans should be able to perform transparent motions and behaviours, and, at the same time, be able to clearly recognise humans' intentions and behaviours. In this session, we want to explore which cognitive skills are needed by a robot, and how ToM affects communication in all aspects of human-robot interaction. We further want to investigate the principal components that can contribute to this research direction. In particular, we aim to define and explore the level of shared mental models needed between people and robots for effectively planning, navigating, manipulating objects and the environment, and transparently communicating.
Organisers:
Patrick Holthaus - p.holthaus@herts.ac.uk
Alessandra Rossi - alessandra.rossi@ieee.org
Towards Meaningful Human-Robot Interactions Using an Interdisciplinary Approach (Code: 3acnv)
Recent advances in machine learning and generative AI have enabled artificial agents to exhibit remarkable social and communicative skills. However, their impact on human cognition, behavior, and emotions remains unclear. To fully understand these effects, research must combine subjective (e.g., motivation, emotions, attitudes) and objective (e.g., behavioral performance, eye tracking, neural responses) measures grounded in well-documented theoretical models. Thus, this special session explores human-artificial agent interaction by integrating insights from cognitive neuroscience, philosophy, and robotics. It examines how cognitive and neural processes unfold during interactions with artificial agents, considering factors like agent appearance and behavior, participants' expectations, and biases. Methods such as EEG, fNIRS, eye tracking, or physiological measures reveal implicit cognitive and emotional responses beyond self-reports. Additionally, philosophical perspectives on mind attribution and ethical considerations, alongside research implementing human-inspired cognitive models, provide a deeper understanding of interactions with artificial agents. Key topics include attributing human-like capabilities to artificial agents in collaborative and competitive settings, implementation of human-inspired cognitive architectures in AI, implicit and explicit measures of trust in human-agent interaction, and real-world human-machine collaboration. By integrating neuroscience, philosophy, robotics, and AI, we aim to establish guidelines for meaningful, ethical, and effective human-agent interactions.
Organisers:
Jairo Perez-Osorio, Ph.D. - j.perez-osorio@tu-berlin.de
Prof. Eva Wiese, Ph.D. - eva.wiese@tu-berlin.de
Cognitive Architectures for Social Robots: Recurring Interactions, Contexts and Continual Learning (Code: e657i)
Social robots are increasingly becoming part of daily life in roles like caretakers, home assistants, and autonomous vehicles. However, creating general-purpose social robots is difficult due to the need for complex cognitive architectures that integrate multiple knowledge sources to perform real-world tasks. For instance, robots in households must adapt to specific contexts and patterns of interaction unique to the people living there. These interactions require flexibility, as they cannot be pre-programmed in a fixed manner but must evolve with personal habits and preferences. Inspired by Wittgenstein's language games, we propose that robots learn from recurring patterns of human activity, known as interaction games. These games involve both verbal and non-verbal communication, and success is measured by the robot’s ability to engage meaningfully in these patterns. While much research in Human-Robot Interaction (HRI) and machine learning (ML) exists, it is often fragmented, with HRI focusing on human-centered models and ML on static algorithms. This session will focus on continual learning robots that adapt to long-term human interactions in dynamic contexts, aiming to provide personalized social assistance. Our goal is to bring together researchers in multidisciplinary fields (HRI, Artificial Intelligence, ML, cognitive science) to present and discuss theoretical foundations, real-world applications, and HRI studies with social robots.
Organisers:
Ali Ayub - ali.ayub@concordia.ca
Chrystopher Nehaniv - chrystopher.nehaniv@uwaterloo.ca
Fluidity in Human-Robot Interaction (Code: j3xpu)
A key problem for current human-robot interaction (HRI) is a lack of fluidity. Despite significant recent advances in computer vision, motion planning, manipulation, and automatic speech recognition, state-of-the-art HRI can be slow, laboured, and fragile. The contrast with the speed, fluency, and error tolerance of human-human interaction is substantial. Building on a successful workshop on Fluidity at the HAI 2024 conference, and on growing interest following the increased use of Large Language Model (LLM)-driven HRI, the principal goal of this special session is to attract researchers interested in defining and improving the fluidity of HRI. We welcome submissions on the following topics:
• Definitions of fluidity in HRI
• Measurements of fluidity and perception of fluid interaction in HRI
• Incremental and predictive speech and action processing to facilitate fluid interaction
• Legibility models of robot motion for improving and modelling fluidity
• Live communicative grounding models for robots
• Rapid error recovery and repair mechanisms for robots
• Challenges for adapting LLMs for fluid HRI
Organisers:
Julian Hough - julian.hough@swansea.ac.uk
Carlos Baptista de Lima - c.v.baptistadelima@swansea.ac.uk
Frank Foerster - f.foerster@herts.ac.uk
Patrick Holthaus - p.holthaus@herts.ac.uk
Yongjun Zheng - y.zheng20@herts.ac.uk
Real-world human-assistive mechatronics systems (Code: g8d77)
Human-assistive technologies based on mechatronics are in high demand for overcoming the challenges of an aging society. The goal of these assistive mechatronics systems is to be of practical use to their target users, which may include handicapped or elderly people in their daily lives. However, many reports have presented only concepts or technical achievements in the laboratory rather than evaluations of these mechatronics systems in actual use. Bringing these technologies into the real world requires complex procedures, for example, clearing safety reviews to fulfill governmental safety standards and obtaining informed consent. Completing these procedures yields many benefits: through demonstrations in actual situations, we sometimes find technical problems that go unnoticed in the laboratory, and the people we are actually trying to help can provide important feedback. The organizers therefore propose a special session to discuss case studies of human-assistive mechatronics systems in the "real world." Our goal is to share findings concerning real-world problems, drawn from the real voices of people in actual situations, and to promote research on human-assistive technology. This special session is organized by the Technical Committee on Human Factors and the Technical Committee on Control, Robotics, and Mechatronics of the IEEE Industrial Electronics Society (IES).
Organisers:
Daisuke Chugo - chugo@kwansei.ac.jp
Koji Makino - kohjim@yamanashi.ac.jp
Mihoko Niitsuma - niitsuma@mech.chuo-u.ac.jp
Important Dates
Submission Deadline: March 20, 2025
Notification of Acceptance: May 10, 2025
Camera-ready Deadline: June 6, 2025
Submission
All submissions to RO-MAN 2025 should be made via https://ras.papercept.net in the corresponding conference section.
Important: Please always check the correct time on the ras.papercept.net server!
The countdown on this page may differ from the server time depending on your region. The time to take into consideration is the one on the server.