Example of a trial in our VR experiment. A KUKA LBR iiwa robot is offering a hammer to the human participant. Top row, left to right: The participant awaits the start of the trial with their hands in the designated area, then approaches and grasps the object held by the robot. For explanation only, the contact points are depicted in cyan a posteriori, i.e., the cyan points were not visible during the actual trial. Bottom row, left to right: After making contact with the object, the participant answers two questions. The bottom rightmost picture shows the aggregated contact points of multiple participants for this scene, color-coded per participant.
Humans display exemplary skill in manipulating objects and can adapt to highly diverse situations. For example, a human handing over an object modulates their grasp and movements to accommodate their partner's capabilities, which greatly increases the likelihood of a successful transfer.
State-of-the-art robot behavior lacks this level of understanding, resulting in interactions that force the human partner to shoulder the burden of adaptation, sometimes forcing them into awkward and unfavorable postures.
This project investigates how visual occlusion of the object being passed affects the quantitative performance and subjective perception of a human receiver.
We performed an experiment in virtual reality (VR) in which each of three test objects (a hammer, a screwdriver, and scissors) was presented to the participants in a wide variety of poses [ ]. We developed an open-source grasp generator [ ] to create forty physically realistic scenes with diverse occlusion levels for each object.
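How the occlusion level of a scene could be quantified is not detailed above; the following is a minimal sketch, not the project's actual method, assuming occlusion is measured as the fraction of sampled object-surface points whose line of sight to the receiver's viewpoint is blocked by the robot's gripper, approximated here by bounding spheres. All names (`occlusion_level`, `occluder_spheres`) are illustrative.

```python
import numpy as np

def occlusion_level(object_points, viewpoint, occluder_spheres):
    """Fraction of sampled surface points hidden from the viewpoint.

    object_points    -- (N, 3) array of points sampled on the object surface
    viewpoint        -- (3,) position of the receiver's eyes
    occluder_spheres -- iterable of (center (3,), radius) pairs approximating
                        the occluding geometry (e.g., the robot's gripper)
    """
    d = object_points - viewpoint              # (N, 3) view-to-point segments
    seg_len = np.linalg.norm(d, axis=1)
    u = d / seg_len[:, None]                   # unit directions of the segments
    blocked = np.zeros(len(object_points), dtype=bool)
    for center, radius in occluder_spheres:
        # Parameter of the point on each segment closest to the sphere center,
        # clamped so it stays between the viewpoint and the surface point.
        t = np.clip((center - viewpoint) @ u.T, 0.0, seg_len)
        closest = viewpoint + u * t[:, None]
        blocked |= np.linalg.norm(closest - center, axis=1) < radius
    return blocked.mean()
```

A scalar like this could then be binned to select scenes spanning low to high occlusion, though the actual scene-selection criterion is a design choice of the grasp generator.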
The participants were tasked with taking the test object from the hand of the virtual robot as if they were about to use it. After each trial, they rated the holdability and direct usability of the object given the grasp they had just performed. We analyzed the participants' hand and head motion, the time to grasp the object, the chosen grasp location, and the ratings. The results show that visual occlusion significantly impacts the grasping strategy of the human receiver and decreases the perceived holdability and direct usability of the object [ ].
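The statistical procedure behind "significantly impacts" is not spelled out here; as a sketch of one plausible analysis, the snippet below median-splits trials into low- and high-occlusion groups and compares a per-trial metric (e.g., time to grasp) with a Mann-Whitney U test via SciPy. The data layout and values are synthetic placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def compare_by_occlusion(occlusion, metric):
    """Median-split trials into low/high occlusion and test the metric.

    occlusion, metric -- 1-D arrays with one entry per trial.
    Returns the Mann-Whitney U statistic and two-sided p-value.
    """
    high = occlusion > np.median(occlusion)
    return mannwhitneyu(metric[~high], metric[high], alternative="two-sided")

# Synthetic placeholder values, NOT data from the study, just to show the call.
rng = np.random.default_rng(0)
occ = rng.uniform(0.0, 1.0, size=40)                 # occlusion level per trial
time_to_grasp = 1.5 + occ + rng.normal(0, 0.2, 40)   # seconds
u, p = compare_by_occlusion(occ, time_to_grasp)
print(f"U = {u:.1f}, p = {p:.4f}")
```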
Our findings lay the groundwork for enriching robot grasping with the knowledge needed to choose the most appropriate grasp for a given task, accounting for visual occlusion and its effects on the human receiver. This new facet of robot intelligence could benefit many HRI scenarios that involve collaborative robotics, such as Industry 4.0 and healthcare.