IROS 2023 Workshop

One of the key elements of sustainable work is ergonomics, which aims to design workplace processes that promote efficiency and safety. The recent emergence of collaborative and wearable robots in workplaces presents a potentially powerful new tool for the field of ergonomics. By incorporating human models into its control, a robot can become aware of the human co-worker’s ergonomic status, anticipate and mitigate physical risk factors related to work-related musculoskeletal disorders, and actively reconfigure the work process accordingly.

The workshop tackles major challenges and opportunities that the community has identified through a series of successful workshops we organised at IROS (IROS2019, IROS2020, IROS2021, and IROS2022) and ICRA (ICRA2018) in the past. So far, our focus has been primarily on traditional human physical and cognitive models; while these models tend to be reliable, they are often difficult to personalise. On the other hand, the robotics community has made substantial progress on robot learning in the past decade, which offers great potential for model personalisation and enhanced adaptiveness of robots to facilitate ergonomics.

The objective of the proposed workshop is to review the progress achieved since our last workshop and then focus on how robot learning methods can help with ergonomic human-robot collaboration. This agenda requires experts from various research fields and interdisciplinary discussions, so we have assembled a diverse set of organisers and speakers who are leading experts in areas highly relevant to the workshop topic.



We encourage contributions to the workshop in the form of extended abstracts (max 4 pages double-column IEEE conference template) to be presented as posters in the poster session. Important dates for the extended abstract submission:

  • Submission deadline for extended abstracts: 15 August 2023
  • Notification of acceptance: 17 August 2023


Submission: please send your PDF file via email to l.peternel@tudelft.nl with subject [IROS-EPHRC 2023] Workshop Contribution.

Selected Papers

Speaker instructions

Authors of all accepted papers are asked to prepare two components for the interactive poster session: a 3–4 minute spotlight pitch and an interactive presentation (A0 portrait poster format) during the poster session.


Time Talk Title / Comments
08:30 – 08:40 Introduction by the Organizers -
08:40 – 09:10 Eiichi Yoshida Learning-based Understanding and Prediction of Human Motion for Symbiotic Robot Interaction
09:10 – 09:40 Bram Vanderborght Industry 5.0: a multidisciplinary research approach
09:40 – 10:10 Dongheui Lee Decision Making in Physical Human-Robot Joint Actions
10:40 – 11:00 Coffee Break -
11:00 – 11:30 Dorsa Sadigh Learning representations for human-robot collaboration
12:00 – 12:30 Xu Xu Promote workers’ safety and health through ubiquitous sensing during human-robot collaborative assembly tasks
12:30 – 13:30 Lunch -
13:30 – 14:00 Tadej Petrič Enhancing human-robot collaboration through investigating human dyads
14:00 – 14:30 Serena Ivaldi Adaptation in human-robot collaboration
14:30 – 15:00 Luis Figueredo Planning for humans: Leveraging comfort-based manipulability for improved pHRC
15:00 – 15:30 Sylvain Calinon Learning from Demonstration for Collaborative Tasks with Physical Contacts
15:30 – 15:50 Coffee Break The poster session will already commence during the coffee break so attendees can get coffee and then see the posters
15:50 – 16:30 Poster session Contributors present posters to discuss the ideas and ongoing work
16:30 – 17:00 João Silvério Exploiting prior task knowledge for assistance during learning and shared control
17:00 – 17:30 Round-table Discussion -


Luka Peternel, Assistant Professor

Delft University of Technology, Netherlands


Luka Peternel received a Ph.D. in robotics from the Faculty of Electrical Engineering, University of Ljubljana, Slovenia in 2015. He conducted his Ph.D. studies at the Department of Automation, Biocybernetics and Robotics, Jožef Stefan Institute in Ljubljana from 2011 to 2015, and at the Department of Brain-Robot Interface, ATR Computational Neuroscience Laboratories in Kyoto, Japan in 2013 and 2014. He was with the Human-Robot Interfaces and Physical Interaction Lab, Advanced Robotics, Italian Institute of Technology in Genoa, Italy from 2015 to 2018. Since 2019, he has been an Assistant Professor at the Department of Cognitive Robotics, Delft University of Technology in the Netherlands.

Wansoo Kim, Assistant Professor

Hanyang University, Republic of Korea


Wansoo Kim is an assistant professor at Hanyang University ERICA, Republic of Korea. He received the B.S. degree in mechanical engineering from Hanyang University, Korea in 2008 and a Ph.D. degree in mechanical engineering from Hanyang University, Korea in 2015 (integrated M.S./Ph.D. program). He was with the Human-Robot Interfaces and Physical Interaction Lab, Italian Institute of Technology in Genoa, Italy from 2016 to 2020. He has developed several exoskeleton systems, such as HEXAR (Hanyang Exoskeleton Assistive Robot), and has conducted research on the control of powered exoskeleton robots through physical human-robot interaction (pHRI) forces. He is currently involved in the Horizon 2020 project SOPHIA. He has contributed to several Korean projects in the field of exoskeleton robots (High responsive control technology of a lower-limb exoskeleton under rough terrain-1415144732, Development of Wearable Robot for Industrial Labor Support-1415135223, etc.) and to joint R&D projects with companies (DSME and LIG Nex1). He was the winner of the Solution Award 2019 (Premio Innovazione Robotica at MECSPE2019), the KUKA Innovation Award 2018, the HYU best PhD paper award 2015, and the ICCAS best presentation award 2014. His research interests include physical human-robot interaction (pHRI), human-robot collaboration, shared control, ergonomics, human modelling, feedback devices, and powered exoskeleton robots.

Heni Ben Amor, Associate Professor

Arizona State University, USA


Heni Ben Amor received the Ph.D. degree in computer science from the Technical University Freiberg, Freiberg, Germany, in 2010, focusing on artificial intelligence and machine learning. He is an Associate Professor of Robotics with Arizona State University, Tempe, AZ, USA, where he is the Director of the ASU Interactive Robotics Laboratory. He was a Research Scientist with Georgia Tech, Atlanta, GA, USA, a Postdoctoral Researcher with the Technical University Darmstadt, Darmstadt, Germany, and a Visiting Research Scientist with the Intelligent Robotics Lab, University of Osaka, Osaka, Japan. His research interests include machine learning, robotics, human-robot interaction, and virtual reality. Dr. Amor was the recipient of the NSF CAREER Award, the Fulton Outstanding Assistant Professor Award, and the Daimler-and-Benz Fellowship.

Arash Ajoudani, Principal Investigator

Italian Institute of Technology, Italy


Arash Ajoudani received his PhD degree in Robotics and Automation from Centro ‘E Piaggio’, University of Pisa, and Advanced Robotics Department (ADVR), Italian Institute of Technology (IIT), Italy (July 2014). His PhD thesis was a finalist for the Georges Giralt PhD award 2015 - best European PhD thesis award in robotics. He is currently a tenure-track scientist and the leader of the Human-Robot Interfaces and Physical Interaction (HRI2) lab of the IIT. He was a winner of the Amazon Research Awards 2019, the winner of the Werob best poster award 2018, winner of the KUKA Innovation Award 2018, a finalist for the best conference paper award at Humanoids 2018, a finalist for the best interactive paper award at Humanoids 2016, a finalist for the best oral presentation award at Automatica (SIDRA) 2014, the winner of the best student paper award and a finalist for the best conference paper award at ROBIO 2013, and a finalist for the best manipulation paper award at ICRA 2012. He is the author of the book ‘Transferring Human Impedance Regulation Skills to Robots’ in the Springer Tracts in Advanced Robotics (STAR), and several publications in journals, international conferences, and book chapters. He is currently serving as the executive manager of the IEEE-RAS Young Reviewers’ Program (YRP), chair and representative of the IEEE-RAS Young Professionals Committee, and co-chair of the IEEE-RAS Member Services Committee. He has been serving as a member of scientific advisory committee and as an associate editor for several international journals and conferences such as IEEE RAL, Biorob, ICORR, etc. His main research interests are in physical human-robot interaction and cooperation, robotic manipulation, robust and adaptive control, rehabilitation robotics, and tele-robotics.

Eiichi Yoshida, Professor

Tokyo University of Science, Japan


Eiichi Yoshida received M.E. and Ph.D. degrees in Precision Machinery Engineering from the Graduate School of Engineering, the University of Tokyo in 1996. He then joined the former Mechanical Engineering Laboratory, which was reorganized in 2001 into the National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan. He served as Co-Director of the AIST-CNRS JRL (Joint Robotics Laboratory) at LAAS-CNRS, Toulouse, France, from 2004 to 2008, and at AIST, Tsukuba, Japan from 2009 to 2021. He was also Deputy Director of the Industrial Cyber-Physical Systems Research Center and of the TICO-AIST Cooperative Research Laboratory for Advanced Logistics at AIST from 2020 to 2021. Since 2022, he has been a Professor at the Tokyo University of Science, Department of Applied Electronics, Faculty of Advanced Engineering. He was previously an invited visiting professor at the Karlsruhe Institute of Technology and the University of Tsukuba. He was awarded Chevalier, l’Ordre National du Mérite by the French Government in 2016 for his long-term contributions to French-Japanese collaboration on robotics. He is an IEEE Fellow and a member of RSJ and JSME. His research interests include robot task and motion planning, human modeling, humanoid robots, and advanced logistics.

Invited Speakers

- Eiichi Yoshida (Tokyo University of Science, Japan)

“ Learning-based Understanding and Prediction of Human Motion for Symbiotic Robot Interaction ”

Human-symbiotic robot behavior requires not only understanding and interpreting human behaviors, but also predicting human intentions. We have been addressing anthropomorphic whole-body motion understanding, based mainly on model-based dynamic or musculoskeletal analysis. While this is very useful, we have also realized the difficulty of understanding the underlying motion strategy and of synthesizing anthropomorphic motions by this approach alone. Recent advances in machine learning techniques can be one important key to addressing those challenges. In this talk, we introduce some recent research activities that leverage learning methodology, on contact detection and on a human-following robot. We first address contact detection and estimation from motions. Physical human-robot interaction (pHRI) needs to deal with various contacts. We started tackling this issue using a variational autoencoder (VAE) to detect contacts and estimate their forces at the same time from motion input only. We then introduce another study on a human-following mobile robot, predicting motions friendly to both the human and the robot by combining optimization and machine learning towards smooth interaction.

- Serena Ivaldi (INRIA, France)

“ Adaptation in human-robot collaboration ”

In this talk, I will present our experimental findings concerning the adaptation mechanisms that humans exhibit when physically interacting with a cobot during co-manipulation tasks. This knowledge is important to inform robot controllers that reason about human ergonomics during collaboration. I will also show some software tools that we developed to assess and visualise ergonomics criteria online based on Digital Human Models.

- Tadej Petrič (Jožef Stefan Institute, Slovenia)

“ Enhancing human-robot collaboration through investigating human dyads ”

In this talk, I will focus on improving human-robot collaboration through investigating human dyads in workspaces. The interaction between humans and robots in shared workspaces presents numerous challenges that must be addressed to optimize their collaboration. In this context, studying human dyads in workspaces provides an effective means to understand and improve human-robot interaction. By exploring the dynamics of human dyads and the factors that influence their collaboration in shared workspaces, we can develop strategies to improve the effectiveness of human-robot collaboration. This talk will provide an overview of my current research on human dyads in workspaces and highlight the potential benefits of this approach for optimizing human-robot collaboration.

- Bram Vanderborght (Vrije Universiteit Brussel, Belgium)

“ Industry 5.0: a multidisciplinary research approach? ”

Human-robot collaboration has great potential to address societal challenges (such as the ageing population and the need for better, healthier work), sustainability requirements, and economic needs (small lot sizes, reshoring, etc.). In this talk we will focus on how we achieve a human-centered approach by combining expertise not only from engineering and AI, but also from the human and social sciences. In this way we achieve higher acceptance of the technology by the user. We provide examples on collaborative robots and exoskeletons for improved ergonomics, and on sustainable self-healing soft grippers. At the VUB this work was performed in the Brussels Human Robotics Research Center, BruBotics, a joint initiative of 8 research groups of the Vrije Universiteit Brussel (VUB) sharing a common vision: improving our quality of life through human-centered robotics.

- Dorsa Sadigh (Stanford University, USA)

“ Learning Representations for Human-Robot Collaboration ”

There have been significant advances in the field of robot learning in the past decade. However, many challenges still remain when considering how robot learning can advance interactive agents such as robots that collaborate with humans. In this talk, I will discuss the role of learning representations for robots that interact with humans and robots that interactively learn from humans through a few different vignettes. I will first discuss how the bounded rationality of humans guided us towards developing learned latent action spaces for shared autonomy. It turns out this “bounded rationality” is not a bug but a feature: we can develop extremely efficient coordination algorithms by learning latent representations of partner strategies and operating in this low-dimensional space. I will then discuss how we can actively learn such representations capturing human preferences, including our recent work on how large language models can help design human preference reward functions. Finally, I will end the talk with a discussion of the type of representations useful for learning a robotics foundation model and some preliminary results on a new model that leverages language supervision to shape representations.

- Xu Xu (North Carolina State University, USA)

“ Promote workers’ safety and health through ubiquitous sensing during human-robot collaborative assembly tasks ”

In recent years, the topic of ubiquitous sensing has gained significant attention due to the availability of low-cost portable sensors, such as cameras and inertial measurement units. Through the use of ubiquitous sensing, human-centric intelligent systems can collect signals from sensors and perform context-aware computing based on human physical and cognitive activities. This talk will explore various applications of ubiquitous sensing, with a particular focus on promoting safety and health during human-robot collaborative assembly tasks. Specifically, we will discuss three applications: collision avoidance between human workers and collaborative robots, finding an optimal point of operation for robots to minimize the risk of musculoskeletal disorders for workers, and examining the impact of robot factors (such as end-effector movement path) on workers’ mental stress.


- Dongheui Lee (Technische Universität Wien (TU Wien), Austria)

“ Decision Making in Physical Human Robot Joint Actions ”

In this talk, I will present our recent research on collaborative robots in joint assembly tasks. In particular, I will discuss human decision making in joint actions, considering human ergonomics and overall task performance. I will also present human user study results in this context.

- Sylvain Calinon (Idiap Research Institute, Switzerland)

“ Learning from Demonstration for Collaborative Tasks with Physical Contacts ”

Collaborative tasks with physical contacts require robot controllers that can swiftly adapt to the ongoing situation. Efficient representations at the crossroads of control, planning and perception are required to address this challenge. Learning from demonstration (LfD) can be exploited to build these representations, either by learning the hyperparameters or by learning the higher-level organization. Our ongoing work explores several facets of these challenges. First, I will show that a probabilistic interpretation of optimal control can facilitate the links between learning and optimization: a cost function composed of a sum of quadratic error terms can be treated either as a linear quadratic regulator (LQR) problem from an optimal control perspective, or as a product of Gaussians (PoG) from an information fusion perspective. I will then show that this dual view can be extended to non-quadratic costs and non-linear systems, which can be used to extend the concept of movement primitives to control primitives, bringing a modular approach to (re)combine controllers in parallel and in series. I will finally show that the underlying dictionary of controllers can contain: 1) ergodic control behaviors that provide explorative controllers in which the areas to explore are learned from demonstration; and 2) impedance behaviors exploiting geometry (Riemannian manifolds and geometric algebra) for object affordance modeling. The two can be exploited to reduce the number of human demonstrations required and to provide better generalization capability. I will showcase the proposed information fusion principle in a wide range of applications requiring shared control, including teleoperation, haptic guidance and physical assistance.
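The LQR/PoG duality mentioned in the abstract rests on a standard identity: minimizing a sum of quadratic error terms yields the same solution as taking the mode of a product of Gaussians. A minimal numpy sketch of this equivalence (illustrative only; the means and covariances are arbitrary assumptions, not values from the talk):

```python
import numpy as np

# Two Gaussian "experts" over a 2-D state (e.g. two task references).
mu1, Sigma1 = np.array([1.0, 0.0]), np.diag([0.5, 2.0])
mu2, Sigma2 = np.array([0.0, 1.0]), np.diag([2.0, 0.5])

# Information-fusion view: product of Gaussians.
P1, P2 = np.linalg.inv(Sigma1), np.linalg.inv(Sigma2)
P = P1 + P2                                   # fused precision
mu = np.linalg.solve(P, P1 @ mu1 + P2 @ mu2)  # fused mean (mode of the product)

# Optimal-control view: minimize the quadratic cost
# c(x) = (x-mu1)^T P1 (x-mu1) + (x-mu2)^T P2 (x-mu2).
# Setting the gradient to zero gives the same normal equations.
x_star = np.linalg.solve(P1 + P2, P1 @ mu1 + P2 @ mu2)

assert np.allclose(mu, x_star)
print(mu)
```

The fused estimate is pulled toward each reference along the directions where that reference has high precision, which is exactly how a tracking cost with per-target weights behaves in an LQR formulation.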

Dr Sylvain Calinon is a Senior Research Scientist at the Idiap Research Institute and a Lecturer at the Ecole Polytechnique Fédérale de Lausanne (EPFL). He heads the Robot Learning & Interaction group at Idiap, with expertise in human-robot collaboration, robot learning from demonstration and model-based optimization. The approaches developed in his group can be applied to a wide range of applications requiring manipulation skills, with robots that are either close to us (assistive and industrial robots), parts of us (prosthetics and exoskeletons), or far away from us (shared control and teleoperation).

- Luis Figueredo ( Technical University of Munich (TUM), Germany)

“ Planning for humans: Leveraging comfort-based manipulability for improved pHRC ”

Recent advances in robotics technologies are closing the gap between humans and robots. Nonetheless, robots are still rarely thought of as physically engaging with humans, and human-like physical human-robot collaboration (pHRC) remains one of the key open challenges in robotics research. Collaboration and teamwork are better achieved when team members understand each other’s capabilities and preferences. When it comes to pHRC, that means robots need better reasoning about humans’ physical capabilities, ergonomics, and sense of embodiment. Robots need intrinsic knowledge of feasible human postures; quantitative metrics of ergonomics and dexterity, which give way to human reactivity and predictability when meeting manipulation challenges and uncertainties; and quantitative metrics of the muscular requirements of given tasks. In this talk, I will present recent advances in human-based manipulability metrics, efficient human embodied structures to analyze them, methods to transfer manipulability features to robotic structures, and tools to ground robot decision-making capabilities in such human comfort-based manipulability metrics.
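The comfort-based metrics discussed in this abstract build on classical manipulability analysis. As background, a minimal numpy sketch of Yoshikawa's manipulability measure for a planar 2-link arm (illustrative; the link lengths and joint angles are arbitrary assumptions, not from the talk):

```python
import numpy as np

def jacobian_2link(q, l1=1.0, l2=1.0):
    """Position Jacobian of a planar 2-link arm at joint angles q = (q1, q2)."""
    q1, q2 = q
    return np.array([
        [-l1*np.sin(q1) - l2*np.sin(q1+q2), -l2*np.sin(q1+q2)],
        [ l1*np.cos(q1) + l2*np.cos(q1+q2),  l2*np.cos(q1+q2)],
    ])

def manipulability(q):
    """Yoshikawa's measure w = sqrt(det(J J^T)); it drops to 0 at singular poses."""
    J = jacobian_2link(q)
    # Guard against tiny negative determinants caused by floating-point rounding.
    return np.sqrt(max(np.linalg.det(J @ J.T), 0.0))

print(manipulability([0.3, 1.2]))  # well-conditioned (elbow bent) pose: w > 0
print(manipulability([0.3, 0.0]))  # arm fully outstretched (singular): w near 0
```

For this arm the measure reduces to l1*l2*|sin(q2)|, so it depends only on the elbow angle; comfort-based variants weight such metrics by human-specific factors such as posture feasibility and muscular effort.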

- João Silvério (Deutsches Zentrum für Luft- und Raumfahrt (DLR), Germany)

“ Exploiting prior task knowledge for assistance during learning and shared control ”

To make robot skill acquisition quick and effective, a promising route is to combine prior knowledge with data-driven methods. Such prior knowledge can take various forms depending on the problem at hand, including constraints (e.g. ‘hold a cup upright if it is not empty’) and object-centered behaviors (e.g. ‘provide more assistance near the task goal’). Interestingly, their specification and handling have been addressed independently in different fields, such as control and machine learning, raising the question of how these fields can benefit each other when coming together in robotic systems. In this talk I will show how shared control formulations, which implement task constraints as virtual fixtures, can be leveraged to make reinforcement learning in the real world safe and efficient. In the other direction, I will demonstrate how concepts from machine learning, namely probabilistic models, can be adapted to suit the needs of shared control problems, allowing, for instance, adaptive handling of virtual fixtures.
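A common way to realize virtual fixtures of the kind mentioned in this abstract is to project the user's commanded velocity onto an allowed guidance subspace, with a gain that blends free motion and hard constraint. A minimal numpy sketch of this general idea (an illustrative assumption, not the DLR implementation from the talk):

```python
import numpy as np

def apply_virtual_fixture(v_user, direction, stiffness=1.0):
    """Blend the user's commanded velocity with its projection onto a guidance line.

    stiffness in [0, 1]: 0 = no guidance (free motion),
    1 = hard fixture (motion constrained to the guidance direction).
    """
    d = direction / np.linalg.norm(direction)
    v_along = (v_user @ d) * d                      # component along the fixture
    return stiffness * v_along + (1.0 - stiffness) * v_user

v = np.array([1.0, 1.0, 0.0])  # user pushes diagonally
d = np.array([1.0, 0.0, 0.0])  # fixture guides along the x axis
print(apply_virtual_fixture(v, d, stiffness=1.0))  # hard fixture: motion along x only
print(apply_virtual_fixture(v, d, stiffness=0.5))  # soft fixture: off-axis motion damped
```

An "adaptive handling of virtual fixtures" as described in the talk would then amount to modulating a gain like `stiffness` online, for instance from the uncertainty of a learned probabilistic task model.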


The proposed workshop has been acknowledged and fully supported by the following IEEE-RAS Technical Committees:


This workshop is supported by the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 871237 (SOPHIA) and by the European Research Council under grant agreement No. 850932 (Ergo-Lean).