Haptic Intelligence


2024


Demonstration: OCRA - A Kinematic Retargeting Algorithm for Expressive Whole-Arm Teleoperation

Mohan, M., Kuchenbecker, K. J.

Hands-on demonstration presented at the Conference on Robot Learning (CoRL), Munich, Germany, November 2024 (misc) Accepted

Abstract
Traditional teleoperation systems focus on controlling the pose of the end-effector (task space), often neglecting the additional degrees of freedom present in human and many robotic arms. This demonstration presents the Optimization-based Customizable Retargeting Algorithm (OCRA), which was designed to map motions from one serial kinematic chain to another in real time. OCRA is versatile, accommodating robots with any joint count and segment lengths, and it can retarget motions from human arms to kinematically different serial robot arms with revolute joints both expressively and efficiently. One of OCRA's key features is its customizability, allowing the user to adjust the emphasis between the hand orientation error and the configuration error of the arm's central line, which we call the arm skeleton. To evaluate the perceptual quality of the motions generated by OCRA, we conducted a video-watching study with 70 participants; the results indicated that the algorithm produces robot motions that closely resemble human movements, with a median rating of 78/100, particularly when the arm-skeleton and hand-orientation error weights are balanced. In this demonstration, the presenter will wear an Xsens MVN Link motion-capture suit and teleoperate the arms of a NAO child-size humanoid robot to highlight OCRA's ability to create intuitive and human-like whole-arm motions.
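The weighted trade-off at the heart of OCRA can be illustrated with a toy retargeting sketch; everything below (a planar three-link arm, the simplified skeleton and hand-orientation error terms, and BFGS optimization) is an assumption for illustration, not the actual OCRA implementation:

```python
import numpy as np
from scipy.optimize import minimize

# Toy planar three-link arm; the same chain stands in for both the human
# and the robot here, whereas OCRA maps between different serial chains.
LINKS = np.array([0.30, 0.25, 0.15])  # segment lengths in meters (invented)

def joint_positions(q):
    """Forward kinematics: 2D positions of the base, each joint, and the hand."""
    angles = np.cumsum(q)
    steps = np.stack([np.cos(angles), np.sin(angles)], axis=1) * LINKS[:, None]
    return np.vstack([np.zeros((1, 2)), np.cumsum(steps, axis=0)])

def retarget(q_human, w_skeleton=0.5, w_hand=0.5):
    """Find robot joint angles that minimize a weighted sum of arm-skeleton
    error (joint-position mismatch) and hand-orientation error."""
    target_pts = joint_positions(q_human)   # the human "arm skeleton"
    target_hand = np.sum(q_human)           # planar hand orientation

    def cost(q):
        skeleton_err = np.sum((joint_positions(q) - target_pts) ** 2)
        hand_err = (np.sum(q) - target_hand) ** 2
        return w_skeleton * skeleton_err + w_hand * hand_err

    return minimize(cost, x0=np.zeros_like(q_human), method="BFGS").x

q_robot = retarget(np.array([0.4, 0.3, -0.2]))
```

With identical toy kinematics on both sides, retargeting recovers the original joint angles; shifting the weights trades skeleton matching against hand orientation, which is the customization the abstract describes.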

Project Page [BibTex]



Demonstration: Minsight - A Soft Vision-Based Tactile Sensor for Robotic Fingertips

Andrussow, I., Sun, H., Martius, G., Kuchenbecker, K. J.

Hands-on demonstration presented at the Conference on Robot Learning (CoRL), Munich, Germany, November 2024 (misc) Accepted

Abstract
Beyond vision and hearing, tactile sensing enhances a robot's ability to dexterously manipulate unfamiliar objects and safely interact with humans. Giving touch sensitivity to robots requires compact, robust, affordable, and efficient hardware designs, especially for high-resolution tactile sensing. We present a soft vision-based tactile sensor engineered to meet these requirements. Comparable in size to a human fingertip, Minsight uses machine learning to output high-resolution directional contact force distributions at 60 Hz. Minsight's tactile force maps enable precise sensing of fingertip contacts, which we use in this hands-on demonstration to allow a 3-DoF robot arm to physically track contact with a user's finger. While observing the colorful image captured by Minsight's internal camera, attendees can experience how its ability to detect delicate touches in all directions facilitates real-time robot interaction.

Project Page [BibTex]



Active Haptic Feedback for a Virtual Wrist-Anchored User Interface

Bartels, J. U., Sanchez-Tamayo, N., Sedlmair, M., Kuchenbecker, K. J.

Hands-on demonstration presented at the ACM Symposium on User Interface Software and Technology (UIST), Pittsburgh, USA, October 2024 (misc) Accepted

DOI [BibTex]



Modeling Shank Tissue Properties and Quantifying Body Composition with a Wearable Actuator-Accelerometer Set

Rokhmanova, N., Martus, J., Faulkner, R., Fiene, J., Kuchenbecker, K. J.

Extended abstract (1 page) presented at the American Society of Biomechanics Annual Meeting (ASB), Madison, USA, August 2024 (misc)

Project Page [BibTex]



Adapting a High-Fidelity Simulation of Human Skin for Comparative Touch Sensing

Schulz, A., Serhat, G., Kuchenbecker, K. J.

Extended abstract (1 page) presented at the American Society of Biomechanics Annual Meeting (ASB), Madison, USA, August 2024 (misc)

[BibTex]



Engineering and Evaluating Naturalistic Vibrotactile Feedback for Telerobotic Assembly

Gong, Y.

University of Stuttgart, Stuttgart, Germany, August 2024, Faculty of Design, Production Engineering and Automotive Engineering (phdthesis)

Abstract
Teleoperation allows workers on a construction site to assemble pre-fabricated building components by controlling powerful machines from a safe distance. However, teleoperation's primary reliance on visual feedback limits the operator's efficiency in situations with stiff contact or poor visibility, compromising their situational awareness and thus increasing the difficulty of the task; it also makes construction machines more difficult to learn to operate. To bridge this gap, we propose that reliable, economical, and easy-to-implement naturalistic vibrotactile feedback could improve telerobotic control interfaces in construction and other application areas such as surgery. This type of feedback enables the operator to feel the natural vibrations experienced by the robot, which contain crucial information about its motions and its physical interactions with the environment. This dissertation explores how to deliver naturalistic vibrotactile feedback from a robot's end-effector to the hand of an operator performing telerobotic assembly tasks; furthermore, it seeks to understand the effects of such haptic cues. The presented research can be divided into four parts. We first describe the engineering of AiroTouch, a naturalistic vibrotactile feedback system tailored for use on construction sites but suitable for many other applications of telerobotics. Then we evaluate AiroTouch and explore the effects of the naturalistic vibrotactile feedback it delivers in three user studies conducted either in laboratory settings or on a construction site.

We begin this dissertation by developing guidelines for creating a haptic feedback system that provides high-quality naturalistic vibrotactile feedback. These guidelines include three sections: component selection, component placement, and system evaluation. We detail each aspect with the parameters that need to be considered. Based on these guidelines, we adapt widely available commercial audio equipment to create our system called AiroTouch, which measures the vibration experienced by each robot tool with a high-bandwidth three-axis accelerometer and enables the user to feel this vibration in real time through a voice-coil actuator. Accurate haptic transmission is achieved by optimizing the positions of the system's off-the-shelf sensors and actuators and is then verified through measurements.

The second part of this thesis presents our initial validation of AiroTouch. We explored how adding this naturalistic type of vibrotactile feedback affects the operator during small-scale telerobotic assembly. Due to the limited accessibility of teleoperated robots and to maintain safety, we conducted a user study in the lab with a commercial bimanual dexterous teleoperation system developed for surgery (Intuitive da Vinci Si). Thirty participants used this robot equipped with AiroTouch to assemble a small stiff structure under three randomly ordered haptic feedback conditions: no vibrations, one-axis vibrations, and summed three-axis vibrations. The results show that participants learn to take advantage of both tested versions of the haptic feedback in the given tasks, as significantly lower vibrations and forces are observed in the second trial. Subjective responses indicate that naturalistic vibrotactile feedback increases the realism of the interaction and reduces the perceived task duration, task difficulty, and fatigue.

To test our approach on a real construction site, we enhanced AiroTouch using wireless signal-transmission technologies and waterproofing, and then we adapted it to a mini-crane construction robot. A study was conducted to evaluate how naturalistic vibrotactile feedback affects an observer's understanding of telerobotic assembly performed by this robot on a construction site. Seven adults without construction experience observed a mix of manual and autonomous assembly processes both with and without naturalistic vibrotactile feedback. Qualitative analysis of their survey responses and interviews indicates that all participants had positive responses to this technology and believed it would be beneficial for construction activities.

Finally, we evaluated the effects of naturalistic vibrotactile feedback provided by wireless AiroTouch during live teleoperation of the mini-crane. Twenty-eight participants remotely controlled the mini-crane to complete three large-scale assembly-related tasks in the lab, both with and without this type of haptic feedback. Our results show that naturalistic vibrotactile feedback enhances the participants' awareness of both robot motion and contact between the robot and other objects, particularly in scenarios with limited visibility. These effects increase participants' confidence when controlling the robot. Moreover, there is a noticeable trend of reduced vibration magnitude in the conditions where this type of haptic feedback is provided.

The primary contribution of this dissertation is the clear explanation of details that are essential for the effective implementation of naturalistic vibrotactile feedback. We demonstrate that our accessible, audio-based approach can enhance user performance and experience during telerobotic assembly in construction and other application domains. These findings lay the foundation for further exploration of the potential benefits of incorporating haptic cues to enhance user experience during teleoperation.

Project Page [BibTex]



Reflectance Outperforms Force and Position in Model-Free Needle Puncture Detection

L’Orsa, R., Bisht, A., Yu, L., Murari, K., Westwick, D. T., Sutherland, G. R., Kuchenbecker, K. J.

In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, USA, July 2024 (inproceedings) Accepted

Abstract
The surgical procedure of needle thoracostomy temporarily corrects accidental over-pressurization of the space between the chest wall and the lungs. However, failure rates of up to 94.1% have been reported, likely because this procedure is done blind: operators estimate by feel when the needle has reached its target. We believe instrumented needles could help operators discern entry into the target space, but limited success has been achieved using force and/or position to try to discriminate needle puncture events during simulated surgical procedures. We thus augmented our needle insertion system with a novel in-bore double-fiber optical setup. Tissue reflectance measurements as well as 3D force, torque, position, and orientation were recorded while two experimenters repeatedly inserted a bevel-tipped percutaneous needle into ex vivo porcine ribs. We applied model-free puncture detection to various filtered time derivatives of each sensor data stream offline. In the held-out test set of insertions, puncture-detection precision improved substantially using reflectance measurements compared to needle insertion force alone (3.3-fold increase) or position alone (11.6-fold increase).
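The model-free detection idea (thresholding filtered time derivatives of a sensor stream) can be sketched generically as follows; the smoothing window, threshold, and synthetic signal are invented for illustration and are not the authors' exact pipeline:

```python
import numpy as np

def detect_punctures(signal, dt, threshold, smooth_n=5):
    """Flag samples where the smoothed signal's time derivative exceeds a
    threshold: a generic model-free event detector (smooth, differentiate,
    threshold). Boundary samples are excluded to avoid filter edge artifacts."""
    kernel = np.ones(smooth_n) / smooth_n
    smoothed = np.convolve(signal, kernel, mode="same")
    deriv = np.gradient(smoothed, dt)
    idx = np.flatnonzero(np.abs(deriv) > threshold)
    return idx[(idx >= smooth_n) & (idx < signal.size - smooth_n)]

# Synthetic example: a slow ramp with one abrupt drop (a "puncture" analog).
t = np.arange(0.0, 1.0, 0.001)
force = t.copy()
force[500:] -= 0.5            # sudden drop at t = 0.5 s
events = detect_punctures(force, dt=0.001, threshold=50.0)
```

The same detector can be run on any filtered derivative stream (force, position, or reflectance), which is the sense in which the comparison in the abstract is model-free.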

Project Page [BibTex]



Errors in Long-Term Robotic Surgical Training

Lev, H. K., Sharon, Y., Geftler, A., Nisky, I.

Work-in-progress paper (3 pages) presented at the EuroHaptics Conference, Lille, France, June 2024 (misc)

Abstract
Robotic surgeries offer many advantages but require surgeons to master complex motor tasks over years. Most motor-control studies focus on simple tasks and span days at most. To help bridge this gap, we followed surgical residents learning complex tasks on a surgical robot over six months. Here, we focus on the task of moving a ring along a curved wire as quickly and accurately as possible. We wrote an image-processing algorithm to locate the errors in the task and computed error metrics and task completion time. We found that participants decreased their completion time and number of errors over the six months; however, the percentage of error time in the task remained constant. This long-term study sheds light on the learning process of the surgeons and opens the possibility of further studying their errors with the aim of minimizing them.

DOI [BibTex]



GaitGuide: A Wearable Device for Vibrotactile Motion Guidance

Rokhmanova, N., Martus, J., Faulkner, R., Fiene, J., Kuchenbecker, K. J.

Workshop paper (3 pages) presented at the ICRA Workshop on Advancing Wearable Devices and Applications Through Novel Design, Sensing, Actuation, and AI, Yokohama, Japan, May 2024 (misc)

Abstract
Wearable vibrotactile devices can provide salient sensations that attract the user's attention or guide them to change their movement. The future integration of such feedback into medical or consumer devices would benefit from understanding how vibrotactile cues vary in amplitude and perceived strength across the heterogeneity of human skin. Here, we developed an adhesive vibrotactile device (the GaitGuide) that uses two individually mounted linear resonant actuators to deliver directional motion guidance. By measuring the mechanical vibrations of the actuators via small on-board accelerometers, we compared vibration amplitudes and perceived signal strength across 20 subjects at five signal voltages and four sites around the shank. Vibrations were consistently smallest in amplitude, but perceived to be strongest, at the site located over the tibia. We created a fourth-order linear dynamic model to capture differences in tissue properties across subjects and sites via optimized stiffness and damping parameters. The anterior site had significantly higher skin stiffness and damping; these values also correlate with subject-specific body-fat percentages. Surprisingly, our study shows that the perception of vibrotactile stimuli does not solely depend on the vibration magnitude delivered to the skin. These findings also help to explain the clinical practice of evaluating vibrotactile sensitivity over a bony prominence.
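A fourth-order linear dynamic model of this kind can be sketched as two coupled mass-spring-damper stages (an actuator mass working against a grounded tissue mass); all parameter values below are illustrative placeholders, not the fitted values from the study:

```python
import numpy as np
from scipy import signal

def two_mass_model(m1, m2, k1, k2, b1, b2):
    """Fourth-order state-space model: actuator mass m1 couples to tissue mass
    m2 through spring k2 and damper b2; m2 is grounded through k1 and b1.
    State x = [p1, v1, p2, v2]; input u = force on m1; output = accel. of m1."""
    A = np.array([
        [0.0,     1.0,     0.0,          0.0],
        [-k2/m1, -b2/m1,   k2/m1,        b2/m1],
        [0.0,     0.0,     0.0,          1.0],
        [k2/m2,   b2/m2,  -(k1+k2)/m2,  -(b1+b2)/m2],
    ])
    B = np.array([[0.0], [1.0/m1], [0.0], [0.0]])
    C = A[1:2, :]              # acceleration of m1 is the second state derivative
    D = np.array([[1.0/m1]])
    return signal.StateSpace(A, B, C, D)

# Placeholder parameters: stiffer, more damped tissue changes the measured
# vibration amplitude, mirroring the site differences reported above.
sys = two_mass_model(m1=0.005, m2=0.02, k1=2000.0, k2=500.0, b1=5.0, b2=1.0)
w, H = signal.freqresp(sys, w=2 * np.pi * np.array([50.0, 175.0, 400.0]))
```

Fitting the stiffness and damping parameters so that the model's frequency response matches measured accelerometer data is the kind of identification the abstract describes.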

link (url) Project Page [BibTex]



Three-Dimensional Surface Reconstruction of a Soft System via Distributed Magnetic Sensing

Sundaram, V. H., Smith, L., Turin, Z., Rentschler, M. E., Welker, C. G.

Workshop paper (3 pages) presented at the ICRA Workshop on Advancing Wearable Devices and Applications Through Novel Design, Sensing, Actuation, and AI, Yokohama, Japan, May 2024 (misc)

Abstract
This study presents a new method for reconstructing continuous 3D surface deformations for a soft pneumatic actuation system using embedded magnetic sensors. A finite element analysis (FEA) model was developed to quantify the surface deformation given the magnetometer readings, with a relative error between the experimental and the simulated sensor data of 7.8%. Using the FEA simulation solutions and a basic model-based mapping, our method achieves sub-millimeter accuracy in measuring deformation from sensor data with an absolute error between the experimental and simulated sensor data of 13.5%. These results show promise for real-time adjustments to deformation, crucial in environments like prosthetic and orthotic interfaces with human limbs.

[BibTex]



CAPT Motor: A Strong Direct-Drive Rotary Haptic Interface

Javot, B., Nguyen, V. H., Ballardini, G., Kuchenbecker, K. J.

Hands-on demonstration presented at the IEEE Haptics Symposium, Long Beach, USA, April 2024 (misc)

Abstract
We have designed and built a new motor named the CAPT Motor that delivers continuous and precise torque. It is a brushless ironless motor that uses a Halbach-magnet ring and a planar axial Lorentz-coil array. This motor is unique in its two-phase design, which allows a higher fill factor and greater geometric accuracy because the coils can all be made separately. It outperforms existing Halbach ring and cylinder motors with a torque constant per magnet volume of 9.94 (Nm/A)/dm³, a record in the field. The angular position of the rotor is measured by a high-resolution incremental optical encoder and tracked by a multimodal data-acquisition device. The system's control firmware uses this angle measurement to calculate the two-phase motor currents needed to produce the torque commanded by the virtual environment at the rotor's position. The strength and precision of the CAPT Motor's torque and the lack of any mechanical transmission enable unusually high haptic rendering quality, indicating the promise of this new motor design.

link (url) Project Page [BibTex]



Quantifying Haptic Quality: External Measurements Match Expert Assessments of Stiffness Rendering Across Devices

Fazlollahi, F., Seifi, H., Ballardini, G., Taghizadeh, Z., Schulz, A., MacLean, K. E., Kuchenbecker, K. J.

Work-in-progress paper (2 pages) presented at the IEEE Haptics Symposium, Long Beach, USA, April 2024 (misc)

Project Page [BibTex]


Cutaneous Electrohydraulic (CUTE) Wearable Devices for Multimodal Haptic Feedback

Sanchez-Tamayo, N., Yoder, Z., Ballardini, G., Rothemund, P., Keplinger, C., Kuchenbecker, K. J.

Extended abstract (1 page) presented at the IEEE RoboSoft Workshop on Multimodal Soft Robots for Multifunctional Manipulation, Locomotion, and Human-Machine Interaction, San Diego, USA, April 2024 (misc)

[BibTex]



Expert Perception of Teleoperated Social Exercise Robots

Mohan, M., Mat Husin, H., Kuchenbecker, K. J.

In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI), pages: 769-773, Boulder, USA, March 2024, Late-Breaking Report (LBR, 5 pages) (inproceedings)

Abstract
Social robots could help address the growing issue of physical inactivity by inspiring users to engage in interactive exercise. Nevertheless, the practical implementation of social exercise robots poses substantial challenges, particularly in terms of personalizing their activities to individuals. We propose that motion-capture-based teleoperation could serve as a viable solution to address these needs by enabling experts to record custom motions that could later be played back without their real-time involvement. To gather feedback about this idea, we conducted semi-structured interviews with eight exercise-therapy professionals. Our findings indicate that experts' attitudes toward social exercise robots become more positive when considering the prospect of teleoperation to record and customize robot behaviors.

DOI Project Page [BibTex]



Creating a Haptic Empathetic Robot Animal That Feels Touch and Emotion

Burns, R.

University of Tübingen, Tübingen, Germany, February 2024, Department of Computer Science (phdthesis)

Abstract
Social touch, such as a hug or a poke on the shoulder, is an essential aspect of everyday interaction. Humans use social touch to gain attention, communicate needs, express emotions, and build social bonds. Despite its importance, touch sensing is very limited in most commercially available robots. By endowing robots with social-touch perception, one can unlock a myriad of new interaction possibilities. In this thesis, I present my work on creating a Haptic Empathetic Robot Animal (HERA), a koala-like robot for children with autism. I demonstrate the importance of establishing design guidelines based on one's target audience, which we investigated through interviews with autism specialists. I share our work on creating full-body tactile sensing for the NAO robot using low-cost, do-it-yourself (DIY) methods, and I introduce an approach to model long-term robot emotions using second-order dynamics.

Project Page [BibTex]



Adapting a High-Fidelity Simulation of Human Skin for Comparative Touch Sensing in the Elephant Trunk

Schulz, A., Serhat, G., Kuchenbecker, K. J.

Abstract presented at the Society for Integrative and Comparative Biology Annual Meeting (SICB), Seattle, USA, January 2024 (misc)

Abstract
Skin is a complex biological composite consisting of layers with distinct mechanical properties, morphologies, and mechanosensory capabilities. This work seeks to expand the comparative biomechanics field to comparative haptics, analyzing elephant trunk touch by redesigning a previously published human finger-pad model with morphological parameters measured from an elephant trunk. The dorsal surface of the elephant trunk has a thick, wrinkled epidermis covered with whiskers at the distal tip and deep folds at the proximal base. We hypothesize that this thick dorsal skin protects the trunk from mechanical damage but significantly dulls its tactile sensing ability. To facilitate safe and dexterous motion, the distributed dorsal whiskers might serve as pre-touch antennae, transmitting an amplified version of impending contact to the mechanoreceptors beneath the elephant's armor. We tested these hypotheses by simulating soft tissue deformation through high-fidelity finite element analyses involving representative skin layers and whiskers, modeled based on frozen African elephant trunk (Loxodonta africana) morphology. For a typical contact force, quintupling the stratum corneum thickness to match dorsal trunk skin reduces the von Mises stress communicated to the dermis by 18%. However, adding a whisker offsets this dulled sensing, as hypothesized, amplifying the stress at the same location by a factor of more than 15. We hope this work will motivate further investigations of mammalian touch using approaches and models from the ample literature on human touch.

[BibTex]



MPI-10: Haptic-Auditory Measurements from Tool-Surface Interactions

Khojasteh, B., Shao, Y., Kuchenbecker, K. J.

Dataset published as a companion to the journal article "Robust Surface Recognition with the Maximum Mean Discrepancy: Degrading Haptic-Auditory Signals through Bandwidth and Noise" in IEEE Transactions on Haptics, January 2024 (misc)

DOI Project Page [BibTex]



Whiskers That Don’t Whisk: Unique Structure From the Absence of Actuation in Elephant Whiskers

Schulz, A., Kaufmann, L., Brecht, M., Richter, G., Kuchenbecker, K. J.

Abstract presented at the Society for Integrative and Comparative Biology Annual Meeting (SICB), Seattle, USA, January 2024 (misc)

Abstract
Whiskers are so named because these hairs often actuate circularly (whisking) via collagen wrapping at the root of the hair follicle, increasing their sensing volume. Elephant trunks are a unique case study for whiskers, as the dorsal and lateral sections of the elephant proboscis have scattered sensory hairs that lack individual actuation. We hypothesize that the actuation limitations of these non-whisking whiskers led to anisotropic morphology and non-homogeneous composition to meet the animal's sensory needs. To test these hypotheses, we examined trunk whiskers from a 35-year-old female African savannah elephant (Loxodonta africana). Whisker morphology was evaluated through micro-CT and polarized light microscopy. The whiskers from the distal tip of the trunk were found to be axially asymmetric, with an oval cross-section at the root shifting to a near-square cross-section at the point. Nanoindentation and additional microscopy revealed that elephant whiskers have a composition unlike any other mammalian hair ever studied: we recorded an elastic modulus of 3 GPa at the root and 0.05 GPa at the point of a single 4-cm-long whisker. This work challenges the assumption that hairs have circular cross-sections and isotropic mechanical properties. With such striking differences compared to other mammals, including the mouse (Mus musculus), rat (Rattus norvegicus), and cat (Felis catus), we conclude that whisker morphology and composition play distinct and complementary roles in elephant trunk mechanosensing.

[BibTex]



Discrete Fourier Transform Three-to-One (DFT321): Code

Landin, N., Romano, J. M., McMahan, W., Kuchenbecker, K. J.

MATLAB code for the Discrete Fourier Transform Three-to-One (DFT321) algorithm, 2024 (misc)
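For orientation, the core DFT321 reduction (combining three vibration axes into one signal that preserves the total spectral energy, with phase taken from the summed spectrum) can be transliterated into a short Python sketch; the official release is MATLAB, so treat this as an unofficial illustration:

```python
import numpy as np

def dft321(ax, ay, az):
    """Combine three acceleration axes into one real signal whose magnitude
    spectrum is the root of the summed per-axis power spectra; the phase at
    each frequency comes from the sum of the three complex spectra."""
    X, Y, Z = np.fft.rfft(ax), np.fft.rfft(ay), np.fft.rfft(az)
    magnitude = np.sqrt(np.abs(X) ** 2 + np.abs(Y) ** 2 + np.abs(Z) ** 2)
    phase = np.angle(X + Y + Z)
    return np.fft.irfft(magnitude * np.exp(1j * phase), n=len(ax))
```

The output's spectral energy at each frequency matches the combined energy of the three input axes, which is what lets a single actuator represent three-axis vibration.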

Code Project Page [BibTex]


2023


Gesture-Based Nonverbal Interaction for Exercise Robots

Mohan, M.

University of Tübingen, Tübingen, Germany, October 2023, Department of Computer Science (phdthesis)

Abstract
When teaching or coaching, humans augment their words with carefully timed hand gestures, head and body movements, and facial expressions to provide feedback to their students. Robots, however, rarely utilize these nuanced cues. A minimally supervised social robot equipped with these abilities could support people in exercising, physical therapy, and learning new activities. This thesis examines how the intuitive power of human gestures can be harnessed to enhance human-robot interaction. To address this question, this research explores gesture-based interactions to expand the capabilities of a socially assistive robotic exercise coach, investigating the perspectives of both novice users and exercise-therapy experts. This thesis begins by concentrating on the user's engagement with the robot, analyzing the feasibility of minimally supervised gesture-based interactions. This exploration seeks to establish a framework in which robots can interact with users in a more intuitive and responsive manner. The investigation then shifts its focus toward the professionals who are integral to the success of these innovative technologies: the exercise-therapy experts. Roboticists face the challenge of translating the knowledge of these experts into robotic interactions. We address this challenge by developing a teleoperation algorithm that can enable exercise therapists to create customized gesture-based interactions for a robot. Thus, this thesis lays the groundwork for dynamic gesture-based interactions in minimally supervised environments, with implications for not only exercise-coach robots but also broader applications in human-robot interaction.

Project Page [BibTex]



Seeking Causal, Invariant, Structures with Kernel Mean Embeddings in Haptic-Auditory Data from Tool-Surface Interaction

Khojasteh, B., Shao, Y., Kuchenbecker, K. J.

Workshop paper (4 pages) presented at the IROS Workshop on Causality for Robotics: Answering the Question of Why, Detroit, USA, October 2023 (misc)

Abstract
Causal inference could give future learning robots strong generalization and scalability capabilities, which are crucial for safety, fault diagnosis, and error prevention. One application area of interest is the haptic recognition of surfaces. We seek to understand cause and effect during physical surface interaction by examining surface and tool identity, their interplay, and other contact-irrelevant factors. To work toward elucidating the mechanism of surface encoding, we attempt to recognize surfaces from haptic-auditory data captured by previously unseen hemispherical steel tools that differ from the recording tool in diameter and mass. In this context, we leverage ideas from kernel methods to quantify surface similarity through descriptive differences in signal distributions. We find that the effect of the tool is significantly present in higher-order statistical moments of contact data: aligning the means of the distributions being compared somewhat improves recognition but does not fully separate tool identity from surface identity. Our findings shed light on salient aspects of haptic-auditory data from tool-surface interaction and highlight the challenges involved in generalizing artificial surface discrimination capabilities.
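The kernel-method idea described here (quantifying distributional differences between contact signals) can be sketched with a standard biased maximum mean discrepancy estimator under a Gaussian kernel; the kernel choice, bandwidth, and synthetic data are assumptions for illustration, not the paper's pipeline:

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    """Gaussian kernel matrix between sample sets a (n, d) and b (m, d)."""
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of the squared maximum mean discrepancy."""
    return (rbf_kernel(x, x, sigma).mean()
            - 2.0 * rbf_kernel(x, y, sigma).mean()
            + rbf_kernel(y, y, sigma).mean())

rng = np.random.default_rng(1)
same = mmd2(rng.normal(0, 1, (200, 2)), rng.normal(0, 1, (200, 2)))
x, y = rng.normal(0, 1, (200, 2)), rng.normal(2, 1, (200, 2))
shifted = mmd2(x, y)
aligned = mmd2(x - x.mean(axis=0), y - y.mean(axis=0))  # first moments removed
```

Mean-centering both sample sets, analogous to the alignment step discussed in the abstract, removes first-moment differences but leaves any higher-order discrepancies intact.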

Manuscript Project Page [BibTex]



Enhancing Surgical Team Collaboration and Situation Awareness through Multimodal Sensing

Allemang–Trivalle, A.

In Proceedings of the ACM International Conference on Multimodal Interaction (ICMI) Doctoral Consortium, pages: 716-720, Extended abstract (5 pages), Paris, France, October 2023 (inproceedings)

Abstract
Surgery, typically seen as the surgeon's sole responsibility, requires a broader perspective acknowledging the vital roles of other operating room (OR) personnel. The interactions among team members are crucial for delivering quality care and depend on shared situation awareness. I propose a two-phase approach to design and evaluate a multimodal platform that monitors OR members, offering insights into surgical procedures. The first phase focuses on designing a data-collection platform, tailored to surgical constraints, to generate novel collaboration and situation-awareness metrics using synchronous recordings of the participants' voices, positions, orientations, electrocardiograms, and respiration signals. The second phase concerns the creation of intuitive dashboards and visualizations, aiding surgeons in reviewing recorded surgery, identifying adverse events and contributing to proactive measures. This work aims to demonstrate an innovative approach to data collection and analysis, augmenting the surgical team's capabilities. The multimodal platform has the potential to enhance collaboration, foster situation awareness, and ultimately mitigate surgical adverse events. This research sets the stage for a transformative shift in the OR, enabling a more holistic and inclusive perspective that recognizes that surgery is a team effort.

DOI [BibTex]



NearContact: Accurate Human Detection using Tomographic Proximity and Contact Sensing with Cross-Modal Attention

Garrofé, G., Schoeffmann, C., Zangl, H., Kuchenbecker, K. J., Lee, H.

Extended abstract (4 pages) presented at the International Workshop on Human-Friendly Robotics (HFR), Munich, Germany, September 2023 (misc)

Project Page [BibTex]



The Role of Kinematics Estimation Accuracy in Learning with Wearable Haptics

Rokhmanova, N., Pearl, O., Kuchenbecker, K. J., Halilaj, E.

Abstract (1 page) presented at the American Society of Biomechanics Annual Meeting (ASB), Knoxville, USA, August 2023 (misc)

Project Page [BibTex]



Wear Your Heart on Your Sleeve: Users Prefer Robots with Emotional Reactions to Touch and Ambient Moods

Burns, R. B., Ojo, F., Kuchenbecker, K. J.

In Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pages: 1914-1921, Busan, South Korea, August 2023 (inproceedings)

Abstract
Robots are increasingly being developed as assistants for household, education, therapy, and care settings. Such robots can use adaptive emotional behavior to communicate warmly and effectively with their users and to encourage interest in extended interactions. However, autonomous physical robots often lack a dynamic internal emotional state, instead displaying brief, fixed emotion routines to promote specific user interactions. Furthermore, despite the importance of social touch in human communication, most commercially available robots have limited touch sensing, if any at all. We propose that users' perceptions of a social robotic system will improve when the robot provides emotional responses on both shorter and longer time scales (reactions and moods), based on touch inputs from the user. We evaluated this proposal through an online study in which 51 diverse participants watched nine randomly ordered videos (a three-by-three full-factorial design) of the koala-like robot HERA being touched by a human. Users provided the highest ratings in terms of agency, ambient activity, enjoyability, and touch perceptivity for scenarios in which HERA showed emotional reactions and either neutral or emotional moods in response to social touch gestures. Furthermore, we summarize key qualitative findings about users' preferences for reaction timing, the ability of robot mood to show persisting memory, and perception of neutral behaviors as a curious or self-aware robot.

link (url) DOI Project Page [BibTex]



Augmenting Human Policies using Riemannian Metrics for Human-Robot Shared Control

Oh, Y., Passy, J., Mainprice, J.

In Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pages: 1612-1618, Busan, South Korea, August 2023 (inproceedings)

Abstract
We present a shared control framework for teleoperation that combines the human and autonomous robot agents operating in different dimension spaces. The shared control problem is an optimization problem to maximize the human's internal action-value function while guaranteeing that the shared control policy is close to the autonomous robot policy. This results in a state update rule that augments the human controls using the Riemannian metric that emerges from computing the curvature of the robot's value function to account for any cost terms or constraints that the human operator may neglect when operating a redundant manipulator. In our experiments, we apply Linear Quadratic Regulators to locally approximate the robot policy using a single optimized robot trajectory, thereby obviating the need for an optimization step at each time step to determine the optimal policy. We show preliminary results of reach-and-grasp teleoperation tasks with a simulated human policy and a pilot user study using a VR headset and controllers. However, the mixed user preference ratings and quantitative results show that more investigation is required to prove the efficacy of the proposed paradigm.
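The update rule sketched in this abstract can be illustrated with a toy quadratic cost: the human's command is pulled back through a metric built from the cost's curvature, so motion along high-cost directions is attenuated. The metric construction, step size, and variable names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Toy quadratic robot cost: heavily penalize motion along dimension 1.
Q = np.diag([1.0, 10.0])
hessian = Q                 # curvature of a quadratic cost is constant

def shared_update(x, u_human, alpha=0.1, eps=1e-6):
    """Reshape the human command through the metric M = I + Hessian."""
    M = np.eye(len(x)) + hessian + eps * np.eye(len(x))
    return x + alpha * np.linalg.solve(M, u_human)

x = np.zeros(2)
u = np.array([1.0, 1.0])    # human pushes equally in both dimensions
x_new = shared_update(x, u)
# Motion along the high-cost dimension is attenuated more strongly.
assert x_new[0] > x_new[1]
```

With an LQR-style quadratic approximation of the robot policy, as in the paper, this curvature is available in closed form along the optimized trajectory.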

DOI [BibTex]



Strap Tightness and Tissue Composition Both Affect the Vibration Created by a Wearable Device

Rokhmanova, N., Faulkner, R., Martus, J., Fiene, J., Kuchenbecker, K. J.

Work-in-progress paper (1 page) presented at the IEEE World Haptics Conference (WHC), Delft, The Netherlands, July 2023 (misc)

Abstract
Wearable haptic devices can provide salient real-time feedback (typically vibration) for rehabilitation, sports training, and skill acquisition. Although the body provides many sites for such cues, the influence of the mounting location on vibrotactile mechanics is commonly ignored. This study builds on previous research by quantifying how changes in strap tightness and local tissue composition affect the physical acceleration generated by a typical vibrotactile device.

Project Page [BibTex]



Toward a Device for Reliable Evaluation of Vibrotactile Perception

Ballardini, G., Kuchenbecker, K. J.

Work-in-progress paper (1 page) presented at the IEEE World Haptics Conference (WHC), Delft, The Netherlands, July 2023 (misc)

[BibTex]



Multimodal Multi-User Surface Recognition with the Kernel Two-Sample Test: Code

Khojasteh, B., Solowjow, F., Trimpe, S., Kuchenbecker, K. J.

Code published as a companion to the journal article "Multimodal Multi-User Surface Recognition with the Kernel Two-Sample Test" in IEEE Transactions on Automation Science and Engineering, July 2023 (misc)

DOI Project Page [BibTex]



Improving Haptic Rendering Quality by Measuring and Compensating for Undesired Forces

Fazlollahi, F., Taghizadeh, Z., Kuchenbecker, K. J.

Work-in-progress paper (1 page) presented at the IEEE World Haptics Conference (WHC), Delft, The Netherlands, July 2023 (misc)

Project Page [BibTex]



Capturing Rich Auditory-Haptic Contact Data for Surface Recognition

Khojasteh, B., Shao, Y., Kuchenbecker, K. J.

Work-in-progress paper (1 page) presented at the IEEE World Haptics Conference (WHC), Delft, The Netherlands, July 2023 (misc)

Abstract
The sophistication of biological sensing and transduction processes during finger-surface and tool-surface interaction is remarkable, enabling humans to perform ubiquitous tasks such as discriminating and manipulating surfaces. Capturing and processing these rich contact-elicited signals during surface exploration with similar success is an important challenge for artificial systems. Prior research introduced sophisticated mobile surface-sensing systems, but it remains less clear what quality, resolution and acuity of sensor data are necessary to perform human tasks with the same efficiency and accuracy. In order to address this gap in our understanding about artificial surface perception, we have designed a novel auditory-haptic test bed. This study aims to inspire new designs for artificial sensing tools in human-machine and robotic applications.

Project Page [BibTex]



Naturalistic Vibrotactile Feedback Could Facilitate Telerobotic Assembly on Construction Sites

Gong, Y., Javot, B., Lauer, A. P. R., Sawodny, O., Kuchenbecker, K. J.

In Proceedings of the IEEE World Haptics Conference (WHC), pages: 169-175, Delft, The Netherlands, July 2023 (inproceedings)

Abstract
Telerobotics is regularly used on construction sites to build large structures efficiently. A human operator remotely controls the construction robot under direct visual feedback, but visibility is often poor. Future construction robots that move autonomously will also require operator monitoring. Thus, we designed a wireless haptic feedback system to provide the operator with task-relevant mechanical information from a construction robot in real time. Our AiroTouch system uses an accelerometer to measure the robot end-effector's vibrations and uses off-the-shelf audio equipment and a voice-coil actuator to display them to the user with high fidelity. A study was conducted to evaluate how this type of naturalistic vibration feedback affects the observer's understanding of telerobotic assembly on a real construction site. Seven adults without construction experience observed a mix of manual and autonomous assembly processes both with and without naturalistic vibrotactile feedback. Qualitative analysis of their survey responses and interviews indicated that all participants had positive responses to this technology and believed it would be beneficial for construction activities.

DOI Project Page [BibTex]



AiroTouch: Naturalistic Vibrotactile Feedback for Telerobotic Construction

Gong, Y., Javot, B., Lauer, A. P. R., Sawodny, O., Kuchenbecker, K. J.

Hands-on demonstration presented at the IEEE World Haptics Conference (WHC), Delft, The Netherlands, July 2023 (misc)

Project Page [BibTex]



CAPT Motor: A Strong Direct-Drive Haptic Interface

Javot, B., Nguyen, V. H., Ballardini, G., Kuchenbecker, K. J.

Hands-on demonstration presented at the IEEE World Haptics Conference (WHC), Delft, The Netherlands, July 2023 (misc)

Project Page [BibTex]



Can Recording Expert Demonstrations with Tool Vibrations Facilitate Teaching of Manual Skills?

Gourishetti, R., Javot, B., Kuchenbecker, K. J.

Work-in-progress paper (1 page) presented at the IEEE World Haptics Conference (WHC), Delft, The Netherlands, July 2023 (misc)

Project Page [BibTex]



Creating a Haptic Empathetic Robot Animal for Children with Autism

Burns, R. B.

Workshop paper (4 pages) presented at the RSS Pioneers Workshop, Daegu, South Korea, July 2023 (misc)

link (url) Project Page [BibTex]



The Influence of Amplitude and Sharpness on the Perceived Intensity of Isoenergetic Ultrasonic Signals

Gueorguiev, D., Rohou–Claquin, B., Kuchenbecker, K. J.

Work-in-progress paper (1 page) presented at the IEEE World Haptics Conference (WHC), Delft, The Netherlands, July 2023 (misc)

Project Page [BibTex]



Vibrotactile Playback for Teaching Manual Skills from Expert Recordings

Gourishetti, R., Hughes, A. G., Javot, B., Kuchenbecker, K. J.

Hands-on demonstration presented at the IEEE World Haptics Conference (WHC), Delft, The Netherlands, July 2023 (misc)

Project Page [BibTex]



Naturalistic Vibrotactile Feedback Could Facilitate Telerobotic Assembly on Construction Sites

Gong, Y., Javot, B., Lauer, A. P. R., Sawodny, O., Kuchenbecker, K. J.

Poster presented at the ICRA Workshop on Future of Construction: Robot Perception, Mapping, Navigation, Control in Unstructured and Cluttered Environments, London, UK, June 2023 (misc)

Project Page [BibTex]



Reconstructing Signing Avatars from Video Using Linguistic Priors

Forte, M., Kulits, P., Huang, C. P., Choutas, V., Tzionas, D., Kuchenbecker, K. J., Black, M. J.

In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages: 12791-12801, June 2023 (inproceedings)

Abstract
Sign language (SL) is the primary method of communication for the 70 million Deaf people around the world. Video dictionaries of isolated signs are a core SL learning tool. Replacing these with 3D avatars can aid learning and enable AR/VR applications, improving access to technology and online media. However, little work has attempted to estimate expressive 3D avatars from SL video; occlusion, noise, and motion blur make this task difficult. We address this by introducing novel linguistic priors that are universally applicable to SL and provide constraints on 3D hand pose that help resolve ambiguities within isolated signs. Our method, SGNify, captures fine-grained hand pose, facial expression, and body movement fully automatically from in-the-wild monocular SL videos. We evaluate SGNify quantitatively by using a commercial motion-capture system to compute 3D avatars synchronized with monocular video. SGNify outperforms state-of-the-art 3D body-pose- and shape-estimation methods on SL videos. A perceptual study shows that SGNify's 3D reconstructions are significantly more comprehensible and natural than those of previous methods and are on par with the source videos. Code and data are available at sgnify.is.tue.mpg.de.

pdf arXiv project code DOI [BibTex]



AiroTouch: Naturalistic Vibrotactile Feedback for Telerobotic Construction-Related Tasks

Gong, Y., Tashiro, N., Javot, B., Lauer, A. P. R., Sawodny, O., Kuchenbecker, K. J.

Extended abstract (1 page) presented at the ICRA Workshop on Communicating Robot Learning across Human-Robot Interaction, London, UK, May 2023 (misc)

Project Page [BibTex]



3D Reconstruction for Minimally Invasive Surgery: Lidar Versus Learning-Based Stereo Matching

Caccianiga, G., Nubert, J., Hutter, M., Kuchenbecker, K. J.

Workshop paper (2 pages) presented at the ICRA Workshop on Robot-Assisted Medical Imaging, London, UK, May 2023 (misc)

Abstract
This work investigates real-time 3D surface reconstruction for minimally invasive surgery. Specifically, we analyze depth sensing through laser-based time-of-flight sensing (lidar) and stereo endoscopy on ex-vivo porcine tissue samples. When compared to modern learning-based stereo matching from endoscopic images, lidar achieves lower processing delay, higher frame rate, and superior robustness against sensor distance and poor illumination. Furthermore, we report on the negative effect of near-infrared light penetration on the accuracy of time-of-flight measurements across different tissue types.

Project Page [BibTex]



Surface Perception through Haptic-Auditory Contact Data

Khojasteh, B., Shao, Y., Kuchenbecker, K. J.

Workshop paper (4 pages) presented at the ICRA Workshop on Embracing Contacts, London, UK, May 2023 (misc)

Abstract
Sliding a finger or tool along a surface generates rich haptic and auditory contact signals that encode properties crucial for manipulation, such as friction and hardness. To engage in contact-rich manipulation, future robots would benefit from having surface-characterization capabilities similar to humans, but the optimal sensing configuration is not yet known. Thus, we developed a test bed for capturing high-quality measurements as a human touches surfaces with different tools: it includes optical motion capture, a force/torque sensor under the surface sample, high-bandwidth accelerometers on the tool and the fingertip, and a high-fidelity microphone. After recording data from three tool diameters and nine surfaces, we describe a surface-classification pipeline that uses the maximum mean discrepancy (MMD) to compare newly gathered data to each surface in our known library. The results achieved under several pipeline variations are compared, and future investigations are outlined.
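As a rough illustration of the MMD-based comparison this abstract describes, the hedged sketch below classifies a new batch of contact features against a small surface library by picking the surface with the smallest discrepancy. The feature dimensions, kernel choice, bandwidth, and toy data are invented for the example.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Pairwise squared distances, then Gaussian (RBF) kernel values.
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=1.0):
    """Biased estimate of the squared maximum mean discrepancy."""
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2 * rbf_kernel(x, y, gamma).mean())

rng = np.random.default_rng(0)
library = {                      # per-surface feature samples (toy data)
    "smooth": rng.normal(0.0, 0.3, (200, 2)),
    "rough":  rng.normal(2.0, 0.3, (200, 2)),
}
query = rng.normal(2.0, 0.3, (50, 2))   # new recording, actually "rough"
best = min(library, key=lambda name: mmd2(query, library[name]))
assert best == "rough"
```

In the test bed described above, the features would come from the force/torque, acceleration, and audio channels rather than synthetic Gaussians.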

link (url) Project Page [BibTex]



OCRA: An Optimization-Based Customizable Retargeting Algorithm for Teleoperation

Mohan, M., Kuchenbecker, K. J.

Workshop paper (3 pages) presented at the ICRA Workshop Toward Robot Avatars, London, UK, May 2023 (misc)

Abstract
This paper presents a real-time optimization-based algorithm for mapping motion between two kinematically dissimilar serial linkages, such as a human arm and a robot arm. OCRA can be customized based on the target task to weight end-effector orientation versus the configuration of the central line of the arm, which we call the skeleton. A video-watching study (N=70) demonstrated that when this algorithm considers both the hand orientation and the arm skeleton, it creates robot arm motions that users perceive to be highly similar to those of the human operator, indicating OCRA would be suitable for telerobotics and telepresence through avatars.
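The weighted trade-off described here can be sketched on a toy planar two-link arm, where the hand-orientation term and the skeleton (joint-configuration) term are blended by a single weight. The cost terms, weight, and optimizer below are illustrative stand-ins, not OCRA's actual formulation.

```python
import numpy as np

human_q = np.array([0.8, 0.4])   # human joint angles (radians)
hand = human_q.sum()             # planar hand orientation = sum of angles
w = 0.5                          # orientation vs. skeleton emphasis

# Minimize w * orientation_error + (1 - w) * skeleton_error
# by plain gradient descent on the robot joint angles q.
q = np.zeros(2)
for _ in range(2000):
    grad = (w * 2 * (q.sum() - hand) * np.ones(2)      # orientation term
            + (1 - w) * 2 * (q - human_q))             # skeleton term
    q -= 0.1 * grad

# With identical kinematics, both terms vanish at the human configuration.
assert np.allclose(q, human_q, atol=1e-4)
```

With kinematically dissimilar chains the two terms generally conflict, and adjusting w shifts the solution between matching the hand orientation and matching the arm's central line.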

link (url) Project Page [BibTex]



Wearable Biofeedback for Knee Joint Health

Rokhmanova, N.

Extended abstract (5 pages) presented at the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI) Doctoral Consortium, Hamburg, Germany, April 2023 (misc)

Abstract
The human body has the tremendous capacity to learn a new way of walking that reduces its risk of musculoskeletal disease progression. Wearable haptic biofeedback has been used to guide gait retraining in patients with knee osteoarthritis, enabling reductions in pain and improvement in function. However, this promising therapy is not yet a part of standard clinical practice. Here, I propose a two-pronged approach to improving the design and deployment of biofeedback for gait retraining. The first section concerns prescription, with the aim of providing clinicians with an interpretable model of gait retraining outcome in order to best guide their treatment decisions. The second section concerns learning, by examining how internal physiological state and external environmental factors influence the process of learning a therapeutic gait. This work aims to address the challenges keeping a highly promising intervention from being widely used to maintain pain-free mobility throughout the lifespan.

DOI Project Page [BibTex]



A Lasting Impact: Using Second-Order Dynamics to Customize the Continuous Emotional Expression of a Social Robot

Burns, R. B., Kuchenbecker, K. J.

Workshop paper (5 pages) presented at the HRI Workshop on Lifelong Learning and Personalization in Long-Term Human-Robot Interaction (LEAP-HRI), Stockholm, Sweden, March 2023 (misc)

Abstract
Robots are increasingly being developed as assistants for household, education, therapy, and care settings. Such robots need social skills to interact warmly and effectively with their users, as well as adaptive behavior to maintain user interest. While complex emotion models exist for chat bots and virtual agents, autonomous physical robots often lack a dynamic internal affective state, instead displaying brief, fixed emotion routines to promote or discourage specific user actions. We address this need by creating a mathematical emotion model that can easily be implemented in a social robot to enable it to react intelligently to external stimuli. The robot's affective state is modeled as a second-order dynamic system analogous to a mass connected to ground by a parallel spring and damper. The present position of this imaginary mass shows the robot's valence, which we visualize as the height of its displayed smile (positive) or frown (negative). Associating positive and negative stimuli with appropriately oriented and sized force pulses applied to the mass enables the robot to respond to social touch and other inputs with a valence that evolves over a longer timescale, capturing essential features of approach-avoidance theory. By adjusting the parameters of this emotion model, one can modify three main aspects of the robot's personality, which we term disposition, stoicism, and calmness.
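The spring-damper valence dynamics described in this abstract can be simulated in a few lines: valence is the position of a unit mass tied to neutral by a spring and damper, and a social-touch stimulus applies a brief force pulse. The parameter values below are illustrative guesses, not those of the paper.

```python
m, k, c = 1.0, 4.0, 1.0        # mass, spring stiffness, damping
dt = 0.01                      # integration step (seconds)
pos, vel = 0.0, 0.0            # valence and its rate, starting neutral

trace = []
for step in range(2000):       # 20 s of simulation
    # A brief positive stimulus (e.g., a friendly touch) from t = 1.0 s.
    force = 5.0 if 100 <= step < 120 else 0.0
    acc = (force - k * pos - c * vel) / m
    vel += acc * dt            # semi-implicit Euler integration
    pos += vel * dt
    trace.append(pos)

assert max(trace) > 0.1        # stimulus pushed valence positive
assert abs(trace[-1]) < 0.05   # valence decays back toward neutral
```

Raising the stiffness or damping in this sketch mimics the stoicism and calmness personality parameters the abstract names: a stiffer spring pulls valence back to neutral faster, and stronger damping suppresses oscillation.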

link (url) Project Page [BibTex]


2022


A Sequential Group VAE for Robot Learning of Haptic Representations

Richardson, B. A., Kuchenbecker, K. J., Martius, G.

Workshop paper (8 pages) presented at the CoRL Workshop on Aligning Robot Representations with Humans, Auckland, New Zealand, December 2022 (misc)

Abstract
Haptic representation learning is a difficult task in robotics because information can be gathered only by actively exploring the environment over time, and because different actions elicit different object properties. We propose a Sequential Group VAE that leverages object persistence to learn and update latent general representations of multimodal haptic data. As a robot performs sequences of exploratory procedures on an object, the model accumulates data and learns to distinguish between general object properties, such as size and mass, and trial-to-trial variations, such as initial object position. We demonstrate that after very few observations, the general latent representations are sufficiently refined to accurately encode many haptic object properties.

link (url) Project Page [BibTex]



no image
Multi-Timescale Representation Learning of Human and Robot Haptic Interactions

Richardson, B.

University of Stuttgart, Faculty of Computer Science, Electrical Engineering and Information Technology, Stuttgart, Germany, December 2022 (phdthesis)

Abstract
The sense of touch is one of the most crucial components of the human sensory system. It allows us to safely and intelligently interact with the physical objects and environment around us. By simply touching or dexterously manipulating an object, we can quickly infer a multitude of its properties. For more than fifty years, researchers have studied how humans physically explore and form perceptual representations of objects. Some of these works proposed the paradigm through which human haptic exploration is presently understood: humans use a particular set of exploratory procedures to elicit specific semantic attributes from objects. Others have sought to understand how physically measured object properties correspond to human perception of semantic attributes. Few, however, have investigated how specific explorations are perceived. As robots become increasingly advanced and more ubiquitous in daily life, they are beginning to be equipped with haptic sensing capabilities and algorithms for processing and structuring haptic information. Traditional haptics research has so far strongly influenced the introduction of haptic sensation and perception into robots but has not proven sufficient to give robots the necessary tools to become intelligent autonomous agents. The work presented in this thesis seeks to understand how single and sequential haptic interactions are perceived by both humans and robots. In our first study, we depart from the more traditional methods of studying human haptic perception and investigate how the physical sensations felt during single explorations are perceived by individual people. We treat interactions as probability distributions over a haptic feature space and train a model to predict how similarly a pair of surfaces is rated, predicting perceived similarity with a reasonable degree of accuracy. Our novel method also allows us to evaluate how individual people weigh different surface properties when they make perceptual judgments. The method is highly versatile and presents many opportunities for further studies into how humans form perceptual representations of specific explorations. Our next body of work explores how to improve robotic haptic perception of single interactions. We use unsupervised feature-learning methods to derive powerful features from raw robot sensor data and classify robot explorations into numerous haptic semantic property labels that were assigned from human ratings. Additionally, we provide robots with more nuanced perception by learning to predict graded ratings of a subset of properties. Our methods outperform previous attempts that all used hand-crafted features, demonstrating the limitations of such traditional approaches. To push robot haptic perception beyond evaluation of single explorations, our final work introduces and evaluates a method to give robots the ability to accumulate information over many sequential actions; our approach essentially takes advantage of object permanence by conditionally and recursively updating the representation of an object as it is sequentially explored. We implement our method on a robotic gripper platform that performs multiple exploratory procedures on each of many objects. As the robot explores objects with new procedures, it gains confidence in its internal representations and classification of object properties, thus moving closer to the marvelous haptic capabilities of humans and providing a solid foundation for future research in this domain.

link (url) Project Page [BibTex]
