Haptic Intelligence


2024


Fiber-Optic Shape Sensing Using Neural Networks Operating on Multispecklegrams

Cao, C. G. L., Javot, B., Bhattarai, S., Bierig, K., Oreshnikov, I., Volchkov, V. V.

IEEE Sensors Journal, 24(17):27532-27540, September 2024 (article)

Abstract
Application of machine learning techniques on fiber speckle images to infer fiber deformation allows the use of an unmodified multimode fiber to act as a shape sensor. This approach eliminates the need for complex fiber design or construction (e.g., Bragg gratings and time-of-flight). Prior work in shape determination using neural networks trained on a finite number of possible fiber shapes (formulated as a classification task), or trained on a few continuous degrees of freedom, has been limited to reconstruction of fiber shapes only one bend at a time. Furthermore, generalization to shapes that were not used in training is challenging. Our innovative approach improves generalization capabilities, using computer vision-assisted parameterization of the actual fiber shape to provide a ground truth, and multiple specklegrams per fiber shape obtained by controlling the input field. Results from experimenting with several neural network architectures, shape parameterization, number of inputs, and specklegram resolution show that fiber shapes with multiple bends can be accurately predicted. Our approach is able to generalize to new shapes that were not in the training set. This approach of end-to-end training on parameterized ground truth opens new avenues for fiber-optic sensor applications. We publish the datasets used for training and validation, as well as an out-of-distribution (OOD) test set, and encourage interested readers to access these datasets for their own model development.
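The core idea of this paper, regressing continuous shape parameters directly from speckle images with a neural network trained end to end, can be illustrated with a toy sketch. Everything below is a hypothetical stand-in: the synthetic "specklegram" generator, the tiny two-layer network, and all parameter values are illustrative assumptions, not the authors' architecture or data.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_specklegram(params, size=16):
    """Toy stand-in for a specklegram: an interference-like pattern whose
    structure depends deterministically on the fiber-shape parameters."""
    x = np.linspace(0.0, 1.0, size)
    xx, yy = np.meshgrid(x, x)
    img = np.zeros((size, size))
    for i, p in enumerate(params):
        img += np.cos(2 * np.pi * (i + 1) * (xx * p + yy * (1 - p)))
    return img.ravel()

class TinyMLP:
    """Minimal two-layer regressor: flattened specklegram -> shape parameters."""
    def __init__(self, n_in, n_hidden, n_out, lr=0.01):
        self.w1 = rng.normal(0, 1 / np.sqrt(n_in), (n_in, n_hidden))
        self.w2 = rng.normal(0, 1 / np.sqrt(n_hidden), (n_hidden, n_out))
        self.lr = lr

    def forward(self, x):
        self.h = np.tanh(x @ self.w1)   # hidden activations, cached for backprop
        return self.h @ self.w2

    def train_step(self, x, y):
        """One full-batch gradient-descent step on the mean squared error."""
        pred = self.forward(x)
        err = pred - y
        grad_w2 = self.h.T @ err / len(x)
        grad_h = err @ self.w2.T * (1 - self.h**2)   # tanh derivative
        grad_w1 = x.T @ grad_h / len(x)
        self.w2 -= self.lr * grad_w2
        self.w1 -= self.lr * grad_w1
        return float(np.mean(err**2))
```

Training such a regressor on (specklegram, parameterized ground-truth shape) pairs, rather than on a fixed set of shape classes, is what lets the approach generalize to bend configurations outside the training set.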

DOI [BibTex]


Cutaneous Electrohydraulic (CUTE) Wearable Devices for Pleasant Broad-Bandwidth Haptic Cues

Sanchez-Tamayo, N., Yoder, Z., Rothemund, P., Ballardini, G., Keplinger, C., Kuchenbecker, K. J.

Advanced Science, (2402461):1-14, September 2024 (article)

Abstract
By focusing on vibrations, current wearable haptic devices underutilize the skin's perceptual capabilities. Devices that provide richer haptic stimuli, including contact feedback and/or variable pressure, are typically heavy and bulky due to the underlying actuator technology and the low sensitivity of hairy skin, which covers most of the body. This paper presents a system architecture for compact wearable devices that deliver salient and pleasant broad-bandwidth haptic cues: Cutaneous Electrohydraulic (CUTE) devices combine a custom materials design for soft haptic electrohydraulic actuators that feature high stroke, high force, and electrical safety with a comfortable mounting strategy that places the actuator in a non-contact resting position. A prototypical wrist-wearable CUTE device produces rich tactile sensations by making and breaking contact with the skin (2.44 mm actuation stroke), applying high controllable forces (exceeding 2.3 N), and delivering vibrations at a wide range of amplitudes and frequencies (0-200 Hz). A perceptual study with fourteen participants achieved 97.9% recognition accuracy across six diverse cues and verified their pleasant and expressive feel. This system architecture for wearable devices gives unprecedented control over the haptic cues delivered to the skin, providing an elegant and discreet way to activate the user's sense of touch.

DOI [BibTex]


Building Instructions You Can Feel: Edge-Changing Haptic Devices for Digitally Guided Construction

Tashiro, N., Faulkner, R., Melnyk, S., Rodriguez, T. R., Javot, B., Tahouni, Y., Cheng, T., Wood, D., Menges, A., Kuchenbecker, K. J.

ACM Transactions on Computer-Human Interaction, September 2024 (article) Accepted

Abstract
Recent efforts to connect builders to digital designs during construction have primarily focused on visual augmented reality, which requires accurate registration and specific lighting, and which could prevent a user from noticing safety hazards. Haptic interfaces, on the other hand, can convey physical design parameters through tangible local cues that don't distract from the surroundings. We propose two edge-changing haptic devices that use small inertial measurement units (IMUs) and linear actuators to guide users to perform construction tasks in real time: Drangle gives feedback for angling a drill relative to gravity, and Brangle assists with orienting bricks in the plane. We conducted a study with 18 participants to evaluate user performance and gather qualitative feedback. All users understood the edge-changing cues from both devices with minimal training. Drilling holes with Drangle was somewhat less accurate but much faster and easier than with a mechanical guide; 89% of participants preferred Drangle over the mechanical guide. Users generally understood Brangle's feedback but found its hand-size-specific grip, palmar contact, and attractive tactile cues less intuitive than Drangle's generalized form factor, fingertip contact, and repulsive cues. After summarizing design considerations, we propose application scenarios and speculate how such devices could improve construction workflows.

[BibTex]



Augmenting Robot-Assisted Pattern Cutting With Periodic Perturbations – Can We Make Dry Lab Training More Realistic?

Sharon, Y., Nevo, T., Naftalovich, D., Bahar, L., Refaely, Y., Nisky, I.

IEEE Transactions on Biomedical Engineering, August 2024 (article)

Abstract
Objective: Teleoperated robot-assisted minimally-invasive surgery (RAMIS) offers many advantages over open surgery, but RAMIS training still requires optimization. Existing motor learning theories could improve RAMIS training. However, there is a gap between current knowledge based on simple movements and training approaches required for the more complicated work of RAMIS surgeons. Here, we studied how surgeons cope with time-dependent perturbations. Methods: We used the da Vinci Research Kit and investigated the effect of time-dependent force and motion perturbations on learning a circular pattern-cutting surgical task. Fifty-four participants were assigned to two experiments, with two groups for each: a control group trained without perturbations and an experimental group trained with 1 Hz perturbations. In the first experiment, force perturbations alternatingly pushed participants' hands inwards and outwards in the radial direction. In the second experiment, the perturbation constituted a periodic up-and-down motion of the task platform. Results: Participants trained with perturbations learned to overcome them and improved their performance during training without impairing it after the perturbations were removed. Moreover, training with motion perturbations provided participants with an advantage when encountering the same or other perturbations after training, compared to training without perturbations. Conclusion: Periodic perturbations can enhance RAMIS training without impeding the learning of the perturbed task. Significance: Our results demonstrate that using challenging training tasks that include perturbations can better prepare surgical trainees for the dynamic environment they will face with patients in the operating room.

DOI [BibTex]



Engineering and Evaluating Naturalistic Vibrotactile Feedback for Telerobotic Assembly

Gong, Y.

University of Stuttgart, Stuttgart, Germany, August 2024, Faculty of Design, Production Engineering and Automotive Engineering (phdthesis)

Abstract
Teleoperation allows workers on a construction site to assemble pre-fabricated building components by controlling powerful machines from a safe distance. However, teleoperation's primary reliance on visual feedback limits the operator's efficiency in situations with stiff contact or poor visibility, compromising their situational awareness and thus increasing the difficulty of the task; it also makes construction machines more difficult to learn to operate. To bridge this gap, we propose that reliable, economical, and easy-to-implement naturalistic vibrotactile feedback could improve telerobotic control interfaces in construction and other application areas such as surgery. This type of feedback enables the operator to feel the natural vibrations experienced by the robot, which contain crucial information about its motions and its physical interactions with the environment. This dissertation explores how to deliver naturalistic vibrotactile feedback from a robot's end-effector to the hand of an operator performing telerobotic assembly tasks; furthermore, it seeks to understand the effects of such haptic cues. The presented research can be divided into four parts. We first describe the engineering of AiroTouch, a naturalistic vibrotactile feedback system tailored for use on construction sites but suitable for many other applications of telerobotics. Then we evaluate AiroTouch and explore the effects of the naturalistic vibrotactile feedback it delivers in three user studies conducted either in laboratory settings or on a construction site. We begin this dissertation by developing guidelines for creating a haptic feedback system that provides high-quality naturalistic vibrotactile feedback. These guidelines include three sections: component selection, component placement, and system evaluation. We detail each aspect with the parameters that need to be considered. 
Based on these guidelines, we adapt widely available commercial audio equipment to create our system called AiroTouch, which measures the vibration experienced by each robot tool with a high-bandwidth three-axis accelerometer and enables the user to feel this vibration in real time through a voice-coil actuator. Accurate haptic transmission is achieved by optimizing the positions of the system's off-the-shelf sensors and actuators and is then verified through measurements. The second part of this thesis presents our initial validation of AiroTouch. We explored how adding this naturalistic type of vibrotactile feedback affects the operator during small-scale telerobotic assembly. Due to the limited accessibility of teleoperated robots and to maintain safety, we conducted a user study in lab with a commercial bimanual dexterous teleoperation system developed for surgery (Intuitive da Vinci Si). Thirty participants used this robot equipped with AiroTouch to assemble a small stiff structure under three randomly ordered haptic feedback conditions: no vibrations, one-axis vibrations, and summed three-axis vibrations. The results show that participants learn to take advantage of both tested versions of the haptic feedback in the given tasks, as significantly lower vibrations and forces are observed in the second trial. Subjective responses indicate that naturalistic vibrotactile feedback increases the realism of the interaction and reduces the perceived task duration, task difficulty, and fatigue. To test our approach on a real construction site, we enhanced AiroTouch using wireless signal-transmission technologies and waterproofing, and then we adapted it to a mini-crane construction robot. A study was conducted to evaluate how naturalistic vibrotactile feedback affects an observer's understanding of telerobotic assembly performed by this robot on a construction site. 
Seven adults without construction experience observed a mix of manual and autonomous assembly processes both with and without naturalistic vibrotactile feedback. Qualitative analysis of their survey responses and interviews indicates that all participants had positive responses to this technology and believed it would be beneficial for construction activities. Finally, we evaluated the effects of naturalistic vibrotactile feedback provided by wireless AiroTouch during live teleoperation of the mini-crane. Twenty-eight participants remotely controlled the mini-crane to complete three large-scale assembly-related tasks in lab, both with and without this type of haptic feedback. Our results show that naturalistic vibrotactile feedback enhances the participants' awareness of both robot motion and contact between the robot and other objects, particularly in scenarios with limited visibility. These effects increase participants' confidence when controlling the robot. Moreover, there is a noticeable trend of reduced vibration magnitude in the conditions where this type of haptic feedback is provided. The primary contribution of this dissertation is the clear explanation of details that are essential for the effective implementation of naturalistic vibrotactile feedback. We demonstrate that our accessible, audio-based approach can enhance user performance and experience during telerobotic assembly in construction and other application domains. These findings lay the foundation for further exploration of the potential benefits of incorporating haptic cues to enhance user experience during teleoperation.

Project Page [BibTex]



Multimodal Multi-User Surface Recognition with the Kernel Two-Sample Test

Khojasteh, B., Solowjow, F., Trimpe, S., Kuchenbecker, K. J.

IEEE Transactions on Automation Science and Engineering, 21(3):4432-4447, July 2024 (article)

Abstract
Machine learning and deep learning have been used extensively to classify physical surfaces through images and time-series contact data. However, these methods rely on human expertise and entail the time-consuming processes of data and parameter tuning. To overcome these challenges, we propose an easily implemented framework that can directly handle heterogeneous data sources for classification tasks. Our data-versus-data approach automatically quantifies distinctive differences in distributions in a high-dimensional space via kernel two-sample testing between two sets extracted from multimodal data (e.g., images, sounds, haptic signals). We demonstrate the effectiveness of our technique by benchmarking against expertly engineered classifiers for visual-audio-haptic surface recognition due to the industrial relevance, difficulty, and competitive baselines of this application; ablation studies confirm the utility of key components of our pipeline. As shown in our open-source code, we achieve 97.2% accuracy on a standard multi-user dataset with 108 surface classes, outperforming the state-of-the-art machine-learning algorithm by 6% on a more difficult version of the task. The fact that our classifier obtains this performance with minimal data processing in the standard algorithm setting reinforces the powerful nature of kernel methods for learning to recognize complex patterns. Note to Practitioners—We demonstrate how to apply the kernel two-sample test to a surface-recognition task, discuss opportunities for improvement, and explain how to use this framework for other classification problems with similar properties. Automating surface recognition could benefit both surface inspection and robot manipulation. Our algorithm quantifies class similarity and therefore outputs an ordered list of similar surfaces. This technique is well suited for quality assurance and documentation of newly received materials or newly manufactured parts. 
More generally, our automated classification pipeline can handle heterogeneous data sources including images and high-frequency time-series measurements of vibrations, forces and other physical signals. As our approach circumvents the time-consuming process of feature engineering, both experts and non-experts can use it to achieve high-accuracy classification. It is particularly appealing for new problems without existing models and heuristics. In addition to strong theoretical properties, the algorithm is straightforward to use in practice since it requires only kernel evaluations. Its transparent architecture can provide fast insights into the given use case under different sensing combinations without costly optimization. Practitioners can also use our procedure to obtain the minimum data-acquisition time for independent time-series data from new sensor recordings.
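The data-versus-data idea described above can be sketched in a few lines: compute the (biased) squared maximum mean discrepancy between a query sample set and each class's stored samples, then pick the class with the smallest discrepancy. This minimal version (RBF kernel, fixed bandwidth, nearest-distribution decision rule) is an illustration of the kernel two-sample test, not the authors' released pipeline; the `gamma` value and class labels below are arbitrary assumptions.

```python
import numpy as np

def rbf_kernel(a, b, gamma):
    """Gaussian (RBF) kernel matrix between the rows of a and b."""
    d2 = (np.sum(a**2, axis=1)[:, None]
          + np.sum(b**2, axis=1)[None, :]
          - 2.0 * a @ b.T)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=1.0):
    """Biased estimate of the squared maximum mean discrepancy between
    sample sets x and y; zero when the two distributions match."""
    return (rbf_kernel(x, x, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean()
            + rbf_kernel(y, y, gamma).mean())

def classify(query, library, gamma=1.0):
    """Assign the query sample set to the class whose stored samples
    it matches best (smallest MMD), yielding an ordered similarity ranking."""
    return min(library, key=lambda label: mmd2(query, library[label], gamma))
```

Because `mmd2` only needs kernel evaluations on raw feature vectors, heterogeneous measurements (image patches, audio frames, force samples) can be concatenated or tested per-modality without feature engineering, which is the property the abstract highlights.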

DOI Project Page [BibTex]



Reflectance Outperforms Force and Position in Model-Free Needle Puncture Detection

L’Orsa, R., Bisht, A., Yu, L., Murari, K., Westwick, D. T., Sutherland, G. R., Kuchenbecker, K. J.

In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, USA, July 2024 (inproceedings) Accepted

Abstract
The surgical procedure of needle thoracostomy temporarily corrects accidental over-pressurization of the space between the chest wall and the lungs. However, failure rates of up to 94.1% have been reported, likely because this procedure is done blind: operators estimate by feel when the needle has reached its target. We believe instrumented needles could help operators discern entry into the target space, but limited success has been achieved using force and/or position to try to discriminate needle puncture events during simulated surgical procedures. We thus augmented our needle insertion system with a novel in-bore double-fiber optical setup. Tissue reflectance measurements as well as 3D force, torque, position, and orientation were recorded while two experimenters repeatedly inserted a bevel-tipped percutaneous needle into ex vivo porcine ribs. We applied model-free puncture detection to various filtered time derivatives of each sensor data stream offline. In the held-out test set of insertions, puncture-detection precision improved substantially using reflectance measurements compared to needle insertion force alone (3.3-fold increase) or position alone (11.6-fold increase).

Project Page [BibTex]



Optimized Magnetically Docked Ingestible Capsules for Non-Invasive Refilling of Implantable Devices

Al-Haddad, H., Guarnera, D., Tamadon, I., Arrico, L., Ballardini, G., Mariottini, F., Cucini, A., Ricciardi, S., Vistoli, F., Rotondo, M. I., Campani, D., Ren, X., Ciuti, G., Terry, B., Iacovacci, V., Ricotti, L.

Advanced Intelligent Systems, (2400125):1-21, July 2024 (article)

Abstract
Automated drug delivery systems (ADDS) improve chronic disease management by enhancing adherence and reducing patient burden, particularly in conditions like type 1 diabetes, through intraperitoneal insulin delivery. However, periodic invasive refilling of the reservoir is needed in such a class of implantable devices. In previous work, an implantable ADDS with a capsule docking system is introduced for non-invasive reservoir refilling. Yet, it encounters reliability issues in manufacturing, sealing, and docking design and lacks evidence on intestinal tissue compression effects and chronic in vivo data. This work proposes an optimization of the different components featuring this ADDS. The ingestible capsule is designed, developed, and tested following ISO 13485, exhibiting high insulin stability and optimal sealing for six days in harsh gastrointestinal-like conditions. A magnetic docking system is optimized, ensuring reliable and stable capsule docking at a clinically relevant distance of 5.92 mm. Histological tests on human intestinal tissues confirm safe capsule compression during docking. Bench tests demonstrate that the integrated mechatronic system effectively docks capsules at various peristalsis-mimicking velocities. A six-week in vivo test on porcine models demonstrates chronic safety and provides hints on fibrotic reactions. These results pave the way for the further evolution of implantable ADDS.

DOI [BibTex]



Fingertip Dynamic Response Simulated Across Excitation Points and Frequencies

Serhat, G., Kuchenbecker, K. J.

Biomechanics and Modeling in Mechanobiology, 23, pages: 1369-1376, May 2024 (article)

Abstract
Predicting how the fingertip will mechanically respond to different stimuli can help explain human haptic perception and enable improvements to actuation approaches such as ultrasonic mid-air haptics. This study addresses this goal using high-fidelity 3D finite element analyses. We compute the deformation profiles and amplitudes caused by harmonic forces applied in the normal direction at four locations: the center of the finger pad, the side of the finger, the tip of the finger, and the oblique midpoint of these three sites. The excitation frequency is swept from 2.5 to 260 Hz. The simulated frequency response functions (FRFs) obtained for displacement demonstrate that the relative magnitudes of the deformations elicited by stimulating at each of these four locations greatly depends on whether only the excitation point or the entire finger is considered. The point force that induces the smallest local deformation can even cause the largest overall deformation at certain frequency intervals. Above 225 Hz, oblique excitation produces larger mean displacement amplitudes than the other three forces due to excitation of multiple modes involving diagonal deformation. These simulation results give novel insights into the combined influence of excitation location and frequency on the fingertip dynamic response, potentially facilitating the design of future vibration feedback devices.

DOI Project Page [BibTex]



Closing the Loop in Minimally Supervised Human-Robot Interaction: Formative and Summative Feedback

Mohan, M., Nunez, C. M., Kuchenbecker, K. J.

Scientific Reports, 14(10564):1-18, May 2024 (article)

Abstract
Human instructors fluidly communicate with hand gestures, head and body movements, and facial expressions, but robots rarely leverage these complementary cues. A minimally supervised social robot with such skills could help people exercise and learn new activities. Thus, we investigated how nonverbal feedback from a humanoid robot affects human behavior. Inspired by the education literature, we evaluated formative feedback (real-time corrections) and summative feedback (post-task scores) for three distinct tasks: positioning in the room, mimicking the robot's arm pose, and contacting the robot's hands. Twenty-eight adults completed seventy-five 30-second-long trials with no explicit instructions or experimenter help. Motion-capture data analysis shows that both formative and summative feedback from the robot significantly aided user performance. Additionally, formative feedback improved task understanding. These results show the power of nonverbal cues based on human movement and the utility of viewing feedback through formative and summative lenses.

DOI Project Page [BibTex]


AiroTouch: Enhancing Telerobotic Assembly through Naturalistic Haptic Feedback of Tool Vibrations

Gong, Y., Mat Husin, H., Erol, E., Ortenzi, V., Kuchenbecker, K. J.

Frontiers in Robotics and AI, 11(1355205):1-15, May 2024 (article)

Abstract
Teleoperation allows workers to safely control powerful construction machines; however, its primary reliance on visual feedback limits the operator's efficiency in situations with stiff contact or poor visibility, hindering its use for assembly of pre-fabricated building components. Reliable, economical, and easy-to-implement haptic feedback could fill this perception gap and facilitate the broader use of robots in construction and other application areas. Thus, we adapted widely available commercial audio equipment to create AiroTouch, a naturalistic haptic feedback system that measures the vibration experienced by each robot tool and enables the operator to feel a scaled version of this vibration in real time. Accurate haptic transmission was achieved by optimizing the positions of the system's off-the-shelf accelerometers and voice-coil actuators. A study was conducted to evaluate how adding this naturalistic type of vibrotactile feedback affects the operator during telerobotic assembly. Thirty participants used a bimanual dexterous teleoperation system (Intuitive da Vinci Si) to build a small rigid structure under three randomly ordered haptic feedback conditions: no vibrations, one-axis vibrations, and summed three-axis vibrations. The results show that users took advantage of both tested versions of the naturalistic haptic feedback after gaining some experience with the task, causing significantly lower vibrations and forces in the second trial. Subjective responses indicate that haptic feedback increased the realism of the interaction and reduced the perceived task duration, task difficulty, and fatigue. As hypothesized, higher haptic feedback gains were chosen by users with larger hands and for the smaller sensed vibrations in the one-axis condition. 
These results elucidate important details for effective implementation of naturalistic vibrotactile feedback and demonstrate that our accessible audio-based approach could enhance user performance and experience during telerobotic assembly in construction and other application domains.

DOI Project Page [BibTex]


Expert Perception of Teleoperated Social Exercise Robots

Mohan, M., Mat Husin, H., Kuchenbecker, K. J.

In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI), pages: 769-773, Boulder, USA, March 2024, Late-Breaking Report (LBR), 5 pages (inproceedings)

Abstract
Social robots could help address the growing issue of physical inactivity by inspiring users to engage in interactive exercise. Nevertheless, the practical implementation of social exercise robots poses substantial challenges, particularly in terms of personalizing their activities to individuals. We propose that motion-capture-based teleoperation could serve as a viable solution to address these needs by enabling experts to record custom motions that could later be played back without their real-time involvement. To gather feedback about this idea, we conducted semi-structured interviews with eight exercise-therapy professionals. Our findings indicate that experts' attitudes toward social exercise robots become more positive when considering the prospect of teleoperation to record and customize robot behaviors.

DOI Project Page [BibTex]



Modeling Fatigue in Manual and Robot-Assisted Work for Operator 5.0

Allemang–Trivalle, A., Donjat, J., Bechu, G., Coppin, G., Chollet, M., Klaproth, O. W., Mitschke, A., Schirrmann, A., Cao, C. G. L.

IISE Transactions on Occupational Ergonomics and Human Factors, 12(1-2):135-147, March 2024 (article)

DOI [BibTex]



Being Neurodivergent in Academia: Autistic and abroad

Schulz, A.

eLife, 13, March 2024 (article)

Abstract
An AuDHD researcher recounts the highs and lows of relocating from the United States to Germany for his postdoc.

DOI [BibTex]


IMU-Based Kinematics Estimation Accuracy Affects Gait Retraining Using Vibrotactile Cues

Rokhmanova, N., Pearl, O., Kuchenbecker, K. J., Halilaj, E.

IEEE Transactions on Neural Systems and Rehabilitation Engineering, 32, pages: 1005-1012, February 2024 (article)

Abstract
Wearable sensing using inertial measurement units (IMUs) is enabling portable and customized gait retraining for knee osteoarthritis. However, the vibrotactile feedback that users receive directly depends on the accuracy of IMU-based kinematics. This study investigated how kinematic errors impact an individual's ability to learn a therapeutic gait using vibrotactile cues. Sensor accuracy was computed by comparing the IMU-based foot progression angle to marker-based motion capture, which was used as ground truth. Thirty subjects were randomized into three groups to learn a toe-in gait: one group received vibrotactile feedback during gait retraining in the laboratory, another received feedback outdoors, and the control group received only verbal instruction and proceeded directly to the evaluation condition. All subjects were evaluated on their ability to maintain the learned gait in a new outdoor environment. We found that subjects with high tracking errors exhibited more incorrect responses to vibrotactile cues and slower learning rates than subjects with low tracking errors. Subjects with low tracking errors outperformed the control group in the evaluation condition, whereas those with higher error did not. Errors were correlated with foot size and angle magnitude, which may indicate a non-random algorithmic bias. The accuracy of IMU-based kinematics has a cascading effect on feedback; ignoring this effect could lead researchers or clinicians to erroneously classify a patient as a non-responder if they did not improve after retraining. To use patient and clinician time effectively, future implementation of portable gait retraining will require assessment across a diverse range of patients.

DOI Project Page [BibTex]



Creating a Haptic Empathetic Robot Animal That Feels Touch and Emotion

Burns, R.

University of Tübingen, Tübingen, Germany, February 2024, Department of Computer Science (phdthesis)

Abstract
Social touch, such as a hug or a poke on the shoulder, is an essential aspect of everyday interaction. Humans use social touch to gain attention, communicate needs, express emotions, and build social bonds. Despite its importance, touch sensing is very limited in most commercially available robots. By endowing robots with social-touch perception, one can unlock a myriad of new interaction possibilities. In this thesis, I present my work on creating a Haptic Empathetic Robot Animal (HERA), a koala-like robot for children with autism. I demonstrate the importance of establishing design guidelines based on one's target audience, which we investigated through interviews with autism specialists. I share our work on creating full-body tactile sensing for the NAO robot using low-cost, do-it-yourself (DIY) methods, and I introduce an approach to model long-term robot emotions using second-order dynamics.
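The "second-order dynamics" mentioned at the end of the abstract can be pictured as a damped spring-mass system driving an internal emotion variable toward a stimulus-dependent target, so that emotions rise, overshoot slightly, and decay smoothly rather than switching instantly. The sketch below is a generic illustration of that idea; the class name, stiffness, damping, and time-step values are assumptions, not the parameters used in the thesis.

```python
class EmotionState:
    """Emotion value that responds to touch-driven targets like a damped
    second-order (spring-mass-damper) system."""

    def __init__(self, stiffness=4.0, damping=3.0, dt=0.05):
        self.k = stiffness    # pull toward the target (spring constant)
        self.c = damping      # resistance to rapid change (damper)
        self.dt = dt          # simulation time step in seconds
        self.value = 0.0      # current emotion level (e.g., arousal)
        self.velocity = 0.0   # current rate of change

    def step(self, target):
        """Advance one time step toward `target` (e.g., raised by a gentle
        stroke, lowered in the absence of touch) and return the new value."""
        accel = self.k * (target - self.value) - self.c * self.velocity
        self.velocity += accel * self.dt          # semi-implicit Euler update
        self.value += self.velocity * self.dt
        return self.value
```

With an underdamped parameter choice like the one above, a sustained touch stimulus produces a quick emotional response that settles gradually, which is the long-term, inertia-like behavior a simple first-order filter cannot express.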

Project Page [BibTex]



How Should Robots Exercise with People? Robot-Mediated Exergames Win with Music, Social Analogues, and Gameplay Clarity

Fitter, N. T., Mohan, M., Preston, R. C., Johnson, M. J., Kuchenbecker, K. J.

Frontiers in Robotics and AI, 10(1155837):1-18, January 2024 (article)

Abstract
The modern worldwide trend toward sedentary behavior comes with significant health risks. An accompanying wave of health technologies has tried to encourage physical activity, but these approaches often yield limited use and retention. Due to their unique ability to serve as both a health-promoting technology and a social peer, we propose robots as a game-changing solution for encouraging physical activity. This article analyzes the eight exergames we previously created for the Rethink Baxter Research Robot in terms of four key components that are grounded in the video-game literature: repetition, pattern matching, music, and social design. We use these four game facets to assess gameplay data from 40 adult users who each experienced the games in balanced random order. In agreement with prior research, our results show that relevant musical cultural references, recognizable social analogues, and gameplay clarity are good strategies for taking an otherwise highly repetitive physical activity and making it engaging and popular among users. Others who study socially assistive robots and rehabilitation robotics can benefit from this work by considering the presented design attributes to generate future hypotheses and by using our eight open-source games to pursue follow-up work on social-physical exercise with robots.

DOI Project Page [BibTex]



Robust Surface Recognition with the Maximum Mean Discrepancy: Degrading Haptic-Auditory Signals through Bandwidth and Noise

(Best ToH Short Paper Award at the IEEE Haptics Symposium Conference 2024)

Khojasteh, B., Shao, Y., Kuchenbecker, K. J.

IEEE Transactions on Haptics, 17(1):58-65, January 2024, Presented at the IEEE Haptics Symposium (article)

Abstract
Sliding a tool across a surface generates rich sensations that can be analyzed to recognize what is being touched. However, the optimal configuration for capturing these signals remains unclear. To bridge this gap, we consider haptic-auditory data as a human explores surfaces with different steel tools, including accelerations of the tool and finger, force and torque applied to the surface, and contact sounds. Our classification pipeline uses the maximum mean discrepancy (MMD) to quantify differences between data distributions in a high-dimensional space for inference. With recordings from three hemispherical tool diameters and ten diverse surfaces, we conducted two degradation studies by decreasing sensing bandwidth and increasing added noise. We evaluate the haptic-auditory recognition performance achieved with the MMD to compare newly gathered data to each surface in our known library. The results indicate that acceleration signals alone have great potential for high-accuracy surface recognition and are robust against noise contamination. The optimal accelerometer bandwidth exceeds 1000 Hz, suggesting that useful vibrotactile information extends beyond the range of human perception. Finally, smaller tool tips generate contact vibrations with better noise robustness. The provided sensing guidelines may enable superhuman performance in portable surface recognition, which could benefit quality control, material documentation, and robotics.
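For readers unfamiliar with the metric, the MMD compares two sets of samples through a kernel: a small value suggests the sets were drawn from the same distribution. A minimal NumPy sketch with an RBF kernel and synthetic stand-in features (not the paper's recordings or pipeline):

```python
import numpy as np

def mmd_sq(X, Y, gamma=1.0):
    """Biased estimate of squared MMD between sample sets X and Y (RBF kernel)."""
    def k(A, B):
        # pairwise squared Euclidean distances via broadcasting, then Gaussian kernel
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, (200, 3))   # stand-in features of a known surface
b = rng.normal(0.0, 1.0, (200, 3))   # new data from the same "surface"
c = rng.normal(2.0, 1.0, (200, 3))   # new data from a different "surface"
print(mmd_sq(a, b) < mmd_sq(a, c))   # → True: the matching surface gives the smaller MMD
```

Recognition against a library then amounts to computing the MMD between the query recording and each stored surface and picking the smallest.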

DOI Project Page [BibTex]

2023


Towards Semi-Automated Pleural Cavity Access for Pneumothorax in Austere Environments

L’Orsa, R., Lama, S., Westwick, D., Sutherland, G., Kuchenbecker, K. J.

Acta Astronautica, 212, pages: 48-53, November 2023 (article)

Abstract
Astronauts are at risk for pneumothorax, a condition where injury or disease introduces air between the chest wall and the lungs (i.e., the pleural cavity). In a worst-case scenario, it can rapidly lead to a fatality if left unmanaged and will require prompt treatment in situ if developed during spaceflight. Chest tube insertion is the definitive treatment for pneumothorax, but it requires a high level of skill and frequent practice for safe use. Physician astronauts may struggle to maintain this skill on medium- and long-duration exploration-class missions, and it is inappropriate for pure just-in-time learning or skill refreshment paradigms. This paper proposes semi-automating tool insertion to reduce the risk of complications in austere environments and describes preliminary experiments providing initial validation of an intelligent prototype system. Specifically, we showcase and analyse motion and force recordings from a sensorized percutaneous access needle inserted repeatedly into an ex vivo tissue phantom, along with relevant physiological data simultaneously recorded from the operator. When coupled with minimal just-in-time training and/or augmented reality guidance, the proposed system may enable non-expert operators to safely perform emergency chest tube insertion without the use of ground resources.

DOI Project Page [BibTex]



Gesture-Based Nonverbal Interaction for Exercise Robots

Mohan, M.

University of Tübingen, Tübingen, Germany, October 2023, Department of Computer Science (phdthesis)

Abstract
When teaching or coaching, humans augment their words with carefully timed hand gestures, head and body movements, and facial expressions to provide feedback to their students. Robots, however, rarely utilize these nuanced cues. A minimally supervised social robot equipped with these abilities could support people in exercising, physical therapy, and learning new activities. This thesis examines how the intuitive power of human gestures can be harnessed to enhance human-robot interaction. To address this question, this research explores gesture-based interactions to expand the capabilities of a socially assistive robotic exercise coach, investigating the perspectives of both novice users and exercise-therapy experts. This thesis begins by concentrating on the user's engagement with the robot, analyzing the feasibility of minimally supervised gesture-based interactions. This exploration seeks to establish a framework in which robots can interact with users in a more intuitive and responsive manner. The investigation then shifts its focus toward the professionals who are integral to the success of these innovative technologies: the exercise-therapy experts. Roboticists face the challenge of translating the knowledge of these experts into robotic interactions. We address this challenge by developing a teleoperation algorithm that can enable exercise therapists to create customized gesture-based interactions for a robot. Thus, this thesis lays the groundwork for dynamic gesture-based interactions in minimally supervised environments, with implications for not only exercise-coach robots but also broader applications in human-robot interaction.

Project Page [BibTex]



Upper Limb Position Matching After Stroke: Evidence for Bilateral Asymmetry in Precision but Not in Accuracy

Ballardini, G., Cherpin, A., Chua, K. S. G., Hussain, A., Kager, S., Xiang, L., Campolo, D., Casadio, M.

IEEE Access, 11, pages: 112851-112860, October 2023 (article)

Abstract
Assessment and rehabilitation of the upper limb after stroke have focused primarily on the contralesional arm. However, increasing evidence highlights functional sensorimotor alterations also in the ipsilesional arm. This study aims to evaluate the position sense of both arms after stroke using a passive position-matching task. We hypothesized that the ipsilesional arm would have higher accuracy and precision than the contralesional arm but lower than the dominant arm in unimpaired participants. Additionally, we hypothesized a correlation in performance between the two arms in stroke survivors. The study included 40 stroke survivors who performed the proprioceptive test with both arms and 24 unimpaired participants who performed it with their dominant arm. During each trial, a planar robot moved the participant's hand to a target and back. Participants had to indicate when their hand reached the target position during the second phase. We evaluated performance by computing the matching accuracy and precision. We found that the ipsilesional arm had similar matching accuracy but higher precision than the contralesional arm. Furthermore, only the matching accuracy of the two arms was correlated in the left and central regions of the workspace. When comparing stroke survivors to unimpaired participants, the ipsilesional arm exhibited significantly lower accuracy yet comparable precision. These findings support the notion that the ipsilesional arm is not 'unaffected' by stroke but rather 'less-affected', suggesting that stroke does not impact ipsilesional position sense precision. Additionally, the results suggest a dissociation between accuracy and precision in passive multi-joint position-matching tasks.
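The accuracy/precision distinction used here can be made concrete: given signed matching errors across trials, accuracy is the mean error (systematic bias) and precision is the spread of those errors. A short sketch with invented numbers (not data from the study):

```python
import statistics

def matching_metrics(indicated, target):
    # signed error on each trial: indicated stop position minus true target
    errors = [i - t for i, t in zip(indicated, target)]
    bias = statistics.mean(errors)        # "accuracy": systematic offset (0 is ideal)
    spread = statistics.stdev(errors)     # "precision": trial-to-trial variability
    return bias, spread

# made-up hand positions in cm, purely for illustration
bias, spread = matching_metrics([10.2, 9.8, 10.5, 9.9], [10.0, 10.0, 10.0, 10.0])
print(round(bias, 2), round(spread, 2))  # → 0.1 0.32
```

An arm can thus be biased yet consistent (poor accuracy, good precision) or unbiased yet scattered, which is exactly the dissociation the study reports.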

DOI [BibTex]



Enhancing Surgical Team Collaboration and Situation Awareness through Multimodal Sensing

Allemang–Trivalle, A.

In Proceedings of the ACM International Conference on Multimodal Interaction, pages: 716-720, Extended abstract (5 pages) presented at the ACM International Conference on Multimodal Interaction (ICMI) Doctoral Consortium, Paris, France, October 2023 (inproceedings)

Abstract
Surgery, typically seen as the surgeon's sole responsibility, requires a broader perspective acknowledging the vital roles of other operating room (OR) personnel. The interactions among team members are crucial for delivering quality care and depend on shared situation awareness. I propose a two-phase approach to design and evaluate a multimodal platform that monitors OR members, offering insights into surgical procedures. The first phase focuses on designing a data-collection platform, tailored to surgical constraints, to generate novel collaboration and situation-awareness metrics using synchronous recordings of the participants' voices, positions, orientations, electrocardiograms, and respiration signals. The second phase concerns the creation of intuitive dashboards and visualizations, aiding surgeons in reviewing recorded surgery, identifying adverse events and contributing to proactive measures. This work aims to demonstrate an innovative approach to data collection and analysis, augmenting the surgical team's capabilities. The multimodal platform has the potential to enhance collaboration, foster situation awareness, and ultimately mitigate surgical adverse events. This research sets the stage for a transformative shift in the OR, enabling a more holistic and inclusive perspective that recognizes that surgery is a team effort.

DOI [BibTex]



The Certification Matters: A Comparative Performance Analysis of Combat Application Tourniquets versus Non-Certified CAT Look-Alike Tourniquets

Lagazzi, E., Ballardini, G., Drogo, A., Viola, L., Marrone, E., Valente, V., Bonetti, M., Lee, J., King, D. R., Ricci, S.

Prehospital and Disaster Medicine, 38(4):450-455, August 2023 (article)

DOI [BibTex]



Wear Your Heart on Your Sleeve: Users Prefer Robots with Emotional Reactions to Touch and Ambient Moods

Burns, R. B., Ojo, F., Kuchenbecker, K. J.

In Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pages: 1914-1921, Busan, South Korea, August 2023 (inproceedings)

Abstract
Robots are increasingly being developed as assistants for household, education, therapy, and care settings. Such robots can use adaptive emotional behavior to communicate warmly and effectively with their users and to encourage interest in extended interactions. However, autonomous physical robots often lack a dynamic internal emotional state, instead displaying brief, fixed emotion routines to promote specific user interactions. Furthermore, despite the importance of social touch in human communication, most commercially available robots have limited touch sensing, if any at all. We propose that users' perceptions of a social robotic system will improve when the robot provides emotional responses on both shorter and longer time scales (reactions and moods), based on touch inputs from the user. We evaluated this proposal through an online study in which 51 diverse participants watched nine randomly ordered videos (a three-by-three full-factorial design) of the koala-like robot HERA being touched by a human. Users provided the highest ratings in terms of agency, ambient activity, enjoyability, and touch perceptivity for scenarios in which HERA showed emotional reactions and either neutral or emotional moods in response to social touch gestures. Furthermore, we summarize key qualitative findings about users' preferences for reaction timing, the ability of robot mood to show persisting memory, and perception of neutral behaviors as a curious or self-aware robot.

link (url) DOI Project Page [BibTex]



Minsight: A Fingertip-Sized Vision-Based Tactile Sensor for Robotic Manipulation

Andrussow, I., Sun, H., Kuchenbecker, K. J., Martius, G.

Advanced Intelligent Systems, 5(8), August 2023, Inside back cover (article)

Abstract
Intelligent interaction with the physical world requires perceptual abilities beyond vision and hearing; vibrant tactile sensing is essential for autonomous robots to dexterously manipulate unfamiliar objects or safely contact humans. Therefore, robotic manipulators need high-resolution touch sensors that are compact, robust, inexpensive, and efficient. The soft vision-based haptic sensor presented herein is a miniaturized and optimized version of the previously published sensor Insight. Minsight has the size and shape of a human fingertip and uses machine learning methods to output high-resolution maps of 3D contact force vectors at 60 Hz. Experiments confirm its excellent sensing performance, with a mean absolute force error of 0.07 N and contact location error of 0.6 mm across its surface area. Minsight's utility is shown in two robotic tasks on a 3-DoF manipulator. First, closed-loop force control enables the robot to track the movements of a human finger based only on tactile data. Second, the informative value of the sensor output is shown by detecting whether a hard lump is embedded within a soft elastomer with an accuracy of 98%. These findings indicate that Minsight can give robots the detailed fingertip touch sensing needed for dexterous manipulation and physical human–robot interaction.

DOI Project Page [BibTex]


Augmenting Human Policies using Riemannian Metrics for Human-Robot Shared Control

Oh, Y., Passy, J., Mainprice, J.

In Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pages: 1612-1618, Busan, Korea, August 2023 (inproceedings)

Abstract
We present a shared control framework for teleoperation that combines human and autonomous robot agents operating in spaces of different dimensions. The shared control problem is an optimization problem that maximizes the human's internal action-value function while guaranteeing that the shared control policy stays close to the autonomous robot policy. This results in a state update rule that augments the human controls using the Riemannian metric that emerges from computing the curvature of the robot's value function, accounting for any cost terms or constraints that the human operator may neglect when operating a redundant manipulator. In our experiments, we apply Linear Quadratic Regulators to locally approximate the robot policy around a single optimized robot trajectory, thereby avoiding an optimization step at each time step to determine the optimal policy. We show preliminary results on reach-and-grasp teleoperation tasks with a simulated human policy and a pilot user study using a VR headset and controllers. However, mixed user-preference ratings and quantitative results show that more investigation is required to prove the efficacy of the proposed paradigm.

DOI [BibTex]



Learning to Estimate Palpation Forces in Robotic Surgery From Visual-Inertial Data

Lee, Y., Husin, H. M., Forte, M., Lee, S., Kuchenbecker, K. J.

IEEE Transactions on Medical Robotics and Bionics, 5(3):496-506, August 2023 (article)

Abstract
Surgeons cannot directly touch the patient's tissue in robot-assisted minimally invasive procedures. Instead, they must palpate using instruments inserted into the body through trocars. This way of operating largely prevents surgeons from using haptic cues to localize visually undetectable structures such as tumors and blood vessels, motivating research on direct and indirect force sensing. We propose an indirect force-sensing method that combines monocular images of the operating field with measurements from IMUs attached externally to the instrument shafts. Our method is thus suitable for various robotic surgery systems as well as laparoscopic surgery. We collected a new dataset using a da Vinci Si robot, a force sensor, and four different phantom tissue samples. The dataset includes 230 one-minute-long recordings of repeated bimanual palpation tasks performed by four lay operators. We evaluated several network architectures and investigated the role of the network inputs. Using the DenseNet vision model and including inertial data best predicted palpation forces (lowest average root-mean-square error and highest average coefficient of determination). Ablation studies revealed that video frames carry significantly more information than inertial signals. Finally, we demonstrated the model's ability to generalize to unseen tissue and predict shear contact forces.

DOI [BibTex]



Naturalistic Vibrotactile Feedback Could Facilitate Telerobotic Assembly on Construction Sites

Gong, Y., Javot, B., Lauer, A. P. R., Sawodny, O., Kuchenbecker, K. J.

In Proceedings of the IEEE World Haptics Conference (WHC), pages: 169-175, Delft, The Netherlands, July 2023 (inproceedings)

Abstract
Telerobotics is regularly used on construction sites to build large structures efficiently. A human operator remotely controls the construction robot under direct visual feedback, but visibility is often poor. Future construction robots that move autonomously will also require operator monitoring. Thus, we designed a wireless haptic feedback system to provide the operator with task-relevant mechanical information from a construction robot in real time. Our AiroTouch system uses an accelerometer to measure the robot end-effector's vibrations and uses off-the-shelf audio equipment and a voice-coil actuator to display them to the user with high fidelity. A study was conducted to evaluate how this type of naturalistic vibration feedback affects the observer's understanding of telerobotic assembly on a real construction site. Seven adults without construction experience observed a mix of manual and autonomous assembly processes both with and without naturalistic vibrotactile feedback. Qualitative analysis of their survey responses and interviews indicated that all participants had positive responses to this technology and believed it would be beneficial for construction activities.

DOI Project Page [BibTex]



A Toolkit for Expanding Sustainability Engineering Utilizing Foundations of the Engineering for One Planet Initiative

Schulz, A., Anderson, C. D., Cooper, C., Roberts, D., Loyo, J., Lewis, K., Kumar, S., Rolf, J., Marulanda, N. A. G.

In Proceedings of the American Society of Engineering Education (ASEE), Baltimore, USA, June 2023, Andrew Schulz, Cindy Cooper, Cindy Anderson contributed equally. (inproceedings)

Abstract
Recently, there has been a significant push to prepare all engineers with skills in sustainability, motivated by industry needs, accreditation requirements, and international efforts such as the National Science Foundation's 10 Big Ideas and Grand Challenges and the United Nations' Sustainable Development Goals (SDGs). This paper discusses a new toolkit to enable broad dissemination of vetted tools that help engineering faculty members teach sustainability using resources from the Engineering for One Planet (EOP) initiative. The toolkit serves as a mechanism for engaging diverse stakeholders to use their voices, experiences, and connections to widely share the need for national curricular change in engineering education. It can foster the integration of sustainability-focused learning outcomes into engineering courses and programs. This is particularly important for graduating engineers at this crucial time, when we collectively face a convergence of national- and global-scale planetary crises that professional engineers will directly and indirectly impact. Catalyzed by The Lemelson Foundation and VentureWell, the EOP initiative provides teaching tools, grants, and support for the EOP Network, a volunteer action network comprising diverse stakeholders who collectively seek to transform engineering education to equip all engineers with the understanding, knowledge, skills, and mindsets to ensure their work contributes to a healthy world. The EOP Framework, a fundamental resource of the initiative, provides a curated and vetted list of ABET-aligned sustainability-focused student learning outcomes, both core and advanced. It covers social and environmental sustainability topics and essential professional skills such as communication, teamwork, and critical thinking. The Framework was designed as a practical implementation tool, rather than a research framework, to help educators embed sustainability concepts and tools into engineering courses and programs at all levels. The Lemelson Foundation has provided a range of grants to support curricular transformation efforts using the EOP Framework; with this support, ASEE launched an EOP Mini-Grant Program in 2022 to engender curricular changes using the EOP Framework. The EOP Network is working to extend the reach of the Framework across the ASEE community beyond initial pilot programs by implementing an EOP Toolkit for EOP Network members and other stakeholders to use at their home institutions, conferences, and informative workshops. This article describes the rationale for creating the EOP Toolkit, the development process, content examples, and use scenarios.

[BibTex]



Utilizing Online and Open-Source Machine Learning Toolkits to Leverage the Future of Sustainable Engineering

Schulz, A., Stathatos, S., Shriver, C., Moore, R.

In Proceedings of the American Society of Engineering Education (ASEE), Baltimore, USA, June 2023, Andrew Schulz and Suzanne Stathatos are co-first authors. (inproceedings)

Abstract
Recently, there has been a national push to use machine learning (ML) and artificial intelligence (AI) to advance engineering techniques in all disciplines ranging from advanced fracture mechanics in materials science to soil and water quality testing in the civil and environmental engineering fields. Using AI, specifically machine learning, engineers can automate and decrease the processing or human labeling time while maintaining statistical repeatability via trained models and sensors. Edge Impulse has designed an open-source TinyML-enabled Arduino education tool kit for engineering disciplines. This paper discusses the various applications and approaches engineering educators have taken to utilize ML toolkits in the classroom. We provide in-depth implementation guides and associated learning outcomes focused on the Environmental Engineering Classroom. We discuss five specific examples of four standard Environmental Engineering courses for freshman and junior-level engineering. There are currently few programs in the nation that utilize machine learning toolkits to prepare the next generation of ML and AI-educated engineers for industry and academic careers. This paper will guide educators to design and implement ML/AI into engineering curricula (without a specific AI or ML focus within the course) using simple, cheap, and open-source tools and technological aid from an online platform in collaboration with Edge Impulse.

DOI [BibTex]



In the Arms of a Robot: Designing Autonomous Hugging Robots with Intra-Hug Gestures

Block, A. E., Seifi, H., Hilliges, O., Gassert, R., Kuchenbecker, K. J.

ACM Transactions on Human-Robot Interaction, 12(2):1-49, June 2023, Special Issue on Designing the Robot Body: Critical Perspectives on Affective Embodied Interaction (article)

Abstract
Hugs are complex affective interactions that often include gestures like squeezes. We present six new guidelines for designing interactive hugging robots, which we validate through two studies with our custom robot. To achieve autonomy, we investigated robot responses to four human intra-hug gestures: holding, rubbing, patting, and squeezing. Thirty-two users each exchanged and rated sixteen hugs with an experimenter-controlled HuggieBot 2.0. The microphone and pressure sensor in the robot's inflated torso collected data from the subjects' demonstrations that were used to develop a perceptual algorithm that classifies user actions with 88% accuracy. Users enjoyed robot squeezes regardless of the action they performed, valued variety in the robot's responses, and appreciated robot-initiated intra-hug gestures. From average user ratings, we created a probabilistic behavior algorithm that chooses robot responses in real time. We implemented improvements to the robot platform to create HuggieBot 3.0 and then validated its gesture perception system and behavior algorithm with sixteen users. The robot's responses and proactive gestures were greatly enjoyed. Users found the robot more natural, enjoyable, and intelligent in the last phase of the experiment than in the first. After the study, they felt more understood by the robot and thought robots were nicer to hug.

DOI Project Page [BibTex]



Reconstructing Signing Avatars from Video Using Linguistic Priors

Forte, M., Kulits, P., Huang, C. P., Choutas, V., Tzionas, D., Kuchenbecker, K. J., Black, M. J.

In IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), pages: 12791-12801, CVPR 2023, June 2023 (inproceedings)

Abstract
Sign language (SL) is the primary method of communication for the 70 million Deaf people around the world. Video dictionaries of isolated signs are a core SL learning tool. Replacing these with 3D avatars can aid learning and enable AR/VR applications, improving access to technology and online media. However, little work has attempted to estimate expressive 3D avatars from SL video; occlusion, noise, and motion blur make this task difficult. We address this by introducing novel linguistic priors that are universally applicable to SL and provide constraints on 3D hand pose that help resolve ambiguities within isolated signs. Our method, SGNify, captures fine-grained hand pose, facial expression, and body movement fully automatically from in-the-wild monocular SL videos. We evaluate SGNify quantitatively by using a commercial motion-capture system to compute 3D avatars synchronized with monocular video. SGNify outperforms state-of-the-art 3D body-pose- and shape-estimation methods on SL videos. A perceptual study shows that SGNify's 3D reconstructions are significantly more comprehensible and natural than those of previous methods and are on par with the source videos. Code and data are available at sgnify.is.tue.mpg.de.

pdf arXiv project code DOI [BibTex]



Generating Clear Vibrotactile Cues with a Magnet Embedded in a Soft Finger Sheath

Gertler, I., Serhat, G., Kuchenbecker, K. J.

Soft Robotics, 10(3):624-635, June 2023 (article)

Abstract
Haptic displays act on the user's body to stimulate the sense of touch and enrich applications from gaming and computer-aided design to rehabilitation and remote surgery. However, when crafted from typical rigid robotic components, they tend to be heavy, bulky, and expensive, while sleeker designs often struggle to create clear haptic cues. This article introduces a lightweight wearable silicone finger sheath that can deliver salient and rich vibrotactile cues using electromagnetic actuation. We fabricate the sheath on a ferromagnetic mandrel with a process based on dip molding, a robust fabrication method that is rarely used in soft robotics but is suitable for commercial production. A miniature rare-earth magnet embedded within the silicone layers at the center of the finger pad is driven to vibrate by the application of alternating current to a nearby air-coil. Experiments are conducted to determine the amplitude of the magnetic force and the frequency response function for the displacement amplitude of the magnet perpendicular to the skin. In addition, high-fidelity finite element analyses of the finger wearing the device are performed to investigate the trends observed in the measurements. The experimental and simulated results show consistent dynamic behavior from 10 to 1000 Hz, with the displacement decreasing after about 300 Hz. These results match the detection threshold profile obtained in a psychophysical study performed by 17 users, where more current was needed only at the highest frequency. A cue identification experiment and a demonstration in virtual reality validate the feasibility of this approach to fingertip haptics.

DOI Project Page [BibTex]


Haptify: A Measurement-Based Benchmarking System for Grounded Force-Feedback Devices

Fazlollahi, F., Kuchenbecker, K. J.

IEEE Transactions on Robotics, 39(2):1622-1636, April 2023 (article)

Abstract
Grounded force-feedback (GFF) devices are an established and diverse class of haptic technology based on robotic arms. However, the number of designs and how they are specified make comparing devices difficult. We thus present Haptify, a benchmarking system that can thoroughly, fairly, and noninvasively evaluate GFF haptic devices. The user holds the instrumented device end-effector and moves it through a series of passive and active experiments. Haptify records the interaction between the hand, device, and ground with a seven-camera optical motion-capture system, a 60-cm-square custom force plate, and a customized sensing end-effector. We demonstrate six key ways to assess GFF device performance: workspace shape, global free-space forces, global free-space vibrations, local dynamic forces and torques, frictionless surface rendering, and stiffness rendering. We then use Haptify to benchmark two commercial haptic devices. With a smaller workspace than the 3D Systems Touch, the more expensive Touch X outputs smaller free-space forces and vibrations, smaller and more predictable dynamic forces and torques, and higher-quality renderings of a frictionless surface and high stiffness.

DOI Project Page [BibTex]


Effects of Automated Skill Assessment on Robotic Surgery Training

Brown, J. D., Kuchenbecker, K. J.

The International Journal of Medical Robotics and Computer Assisted Surgery, 19(2):e2492, April 2023 (article)

Abstract
Background: Several automated skill-assessment approaches have been proposed for robotic surgery, but their utility is not well understood. This article investigates the effects of one machine-learning-based skill-assessment approach on psychomotor skill development in robotic surgery training. Methods: N=29 trainees (medical students and residents) with no robotic surgery experience performed five trials of inanimate peg transfer with an Intuitive Surgical da Vinci Standard robot. Half of the participants received no post-trial feedback. The other half received automatically calculated scores from five Global Evaluative Assessment of Robotic Skill (GEARS) domains post-trial. Results: There were no significant differences between the groups regarding overall improvement or skill improvement rate. However, participants who received post-trial feedback rated their overall performance improvement significantly lower than participants who did not receive feedback. Conclusions: These findings indicate that automated skill evaluation systems might improve trainee self-awareness but not accelerate early-stage psychomotor skill development in robotic surgery training.

DOI Project Page [BibTex]



ForageFeeder: A Low-Cost Open Source Feeder for Randomly Distributed Food

Jadali, N., Zhang, M. J., Schulz, A. K., Meyerchick, J., Hu, D. L.

HardwareX, 14(e00405):1-17, March 2023 (article)

Abstract
Automated feeders have long fed mice, livestock, and poultry, but they are incapable of feeding zoo animals such as gorillas. In captivity, gorillas eat cut vegetables and fruits in pieces too large to be dispensed by automated feeders. Consequently, captive gorillas are fed manually at set times and locations, keeping them from the exercise and enrichment that accompany natural foraging. We designed and built ForageFeeder, an automated gorilla feeder that spreads food at random intervals throughout the day. ForageFeeder is an open-source device that is easy to manufacture and modify, making the feeder more accessible for zoos. The design presented here reduces manual labor for zoo staff and may be a useful tool for studies of animal ethology.

DOI [BibTex]



no image
The S-BAN: Insights into the Perception of Shape-Changing Haptic Interfaces via Virtual Pedestrian Navigation

Spiers, A. J., Young, E., Kuchenbecker, K. J.

ACM Transactions on Computer-Human Interaction, 30(1):1-31, March 2023 (article)

Abstract
Screen-based pedestrian navigation assistance can be distracting or inaccessible to users. Shape-changing haptic interfaces can overcome these concerns. The S-BAN is a new handheld haptic interface that utilizes a parallel kinematic structure to deliver 2-DOF spatial information over a continuous workspace, with a form factor suited to integration with other travel aids. The ability to pivot, extend and retract its body opens possibilities and questions around spatial data representation. We present a static study to understand user perception of absolute pose and relative motion for two spatial mappings, showing highest sensitivity to relative motions in the cardinal directions. We then present an embodied navigation experiment in virtual reality. User motion efficiency when guided by the S-BAN was statistically equivalent to using a vision-based tool (a smartphone proxy). Although haptic trials were slower than visual trials, participants' heads were more elevated with the S-BAN, allowing greater visual focus on the environment.

DOI Project Page [BibTex]


Elephant trunks use an adaptable prehensile grip

Schulz, A., Reidenberg, J., Wu, J. N., Tang, C. Y., Seleb, B., Mancebo, J., Elgart, N., Hu, D.

Bioinspiration and Biomimetics, 18(2), February 2023 (article)

Abstract
Elephants have long been observed to grip objects with their trunk, but little is known about how they adjust their strategy for different weights. In this study, we challenge a female African elephant at Zoo Atlanta to lift 20–60 kg barbell weights with only its trunk. We measure the trunk’s shape and wrinkle geometry from a frozen elephant trunk at the Smithsonian. We observe several strategies employed to accommodate heavier weights, including accelerating less, orienting the trunk vertically, and wrapping the barbell with a greater trunk length. Mathematical models show that increasing barbell weights are associated with constant trunk tensile force and an increasing barbell-wrapping surface area due to the trunk’s wrinkles. Our findings may inspire the design of more adaptable soft robotic grippers that can improve grip using surface morphology such as wrinkles.

DOI [BibTex]



Drying dynamics of pellet feces

Magondu, B., Lee, A., Schulz, A., Buchelli, G., Meng, M., Kaminski, C., Yang, P., Carver, S., Hu, D.

Soft Matter, 19, pages: 723-732, January 2023 (article)

Abstract
Pellet feces are generated by a number of animals important to science or agriculture, including mice, rats, goats, and wombats. Understanding the factors that lead to fecal shape may provide a better understanding of animal health and diet. In this combined experimental and theoretical study, we test the hypothesis that pellet feces are formed by drying processes in the intestine. Inspirational to our work is the formation of hexagonal columnar jointings in cooling lava beds, in which the width L of the hexagon scales as L ∼ J^−1, where J is the heat flux from the bed. Across 22 species of mammals, we report a transition from cylindrical to pellet feces if fecal water content drops below 0.65. Using a mathematical model that accounts for water intake rate and intestinal dimensions, we show pellet feces length L scales as L ∼ J^−2.08, where J is the flux of water absorbed by the intestines. We build a mimic of the mammalian intestine using a corn starch cake drying in an open trough, finding that corn starch pellet length scales with water flux^−0.46. The range of exponents does not permit us to conclude that formation of columnar jointings is similar to the formation of pellet feces. Nevertheless, the methods and physical picture shown here may be of use to physicians and veterinarians interested in using feces length as a marker of intestinal health.
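Scaling exponents like the L ∼ J^−2.08 relation above are typically estimated by least-squares regression in log-log space. A minimal sketch with synthetic data (the true exponent, gain, and noise level below are made up for illustration, not the paper's measurements):

```python
import numpy as np

def fit_power_law(x, y):
    """Fit y = c * x**k by least squares in log-log space; return (k, c)."""
    logx, logy = np.log(x), np.log(y)
    k, logc = np.polyfit(logx, logy, 1)  # slope = exponent, intercept = log gain
    return k, np.exp(logc)

# Synthetic data with a known exponent of -2.0 and 5% multiplicative noise
rng = np.random.default_rng(0)
J = np.linspace(0.5, 5.0, 50)
L = 3.0 * J**-2.0 * rng.lognormal(0.0, 0.05, J.size)
k, c = fit_power_law(J, L)
print(f"fitted exponent: {k:.2f}")  # close to -2.0
```

The fitted slope in log-log space recovers the exponent directly, which is why such laws are conventionally reported from log-log plots.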

DOI [BibTex]



The Utility of Synthetic Reflexes and Haptic Feedback for Upper-Limb Prostheses in a Dexterous Task Without Direct Vision

Thomas, N., Fazlollahi, F., Kuchenbecker, K. J., Brown, J. D.

IEEE Transactions on Neural Systems and Rehabilitation Engineering, 31, pages: 169-179, January 2023 (article)

Abstract
Individuals who use myoelectric upper-limb prostheses often rely heavily on vision to complete their daily activities. They thus struggle in situations where vision is overloaded, such as multitasking, or unavailable, such as poor lighting conditions. Able-bodied individuals can easily accomplish such tasks due to tactile reflexes and haptic sensation guiding their upper-limb motor coordination. Based on these principles, we developed and tested two novel prosthesis systems that incorporate autonomous controllers and provide the user with touch-location feedback through either vibration or distributed pressure. These capabilities were made possible by installing a custom contact-location sensor on the fingers of a commercial prosthetic hand, along with a custom pressure sensor on the thumb. We compared the performance of the two systems against a standard myoelectric prosthesis and a myoelectric prosthesis with only autonomous controllers in a difficult reach-to-pick-and-place task conducted without direct vision. Results from 40 able-bodied participants in this between-subjects study indicated that vibrotactile feedback combined with synthetic reflexes proved significantly more advantageous than the standard prosthesis in several of the task milestones. In addition, vibrotactile feedback and synthetic reflexes improved grasp placement compared to only synthetic reflexes or pressure feedback combined with synthetic reflexes. These results indicate that autonomous controllers and haptic feedback together facilitate success in dexterous tasks without vision, and that the type of haptic display matters.

DOI Project Page [BibTex]



Predicting the Force Map of an ERT-Based Tactile Sensor Using Simulation and Deep Networks

Lee, H., Sun, H., Park, H., Serhat, G., Javot, B., Martius, G., Kuchenbecker, K. J.

IEEE Transactions on Automation Science and Engineering, 20(1):425-439, January 2023 (article)

Abstract
Electrical resistance tomography (ERT) can be used to create large-scale soft tactile sensors that are flexible and robust. Good performance requires a fast and accurate mapping from the sensor's sequential voltage measurements to the distribution of force across its surface. However, particularly with multiple contacts, this task is challenging for both previously developed approaches: physics-based modeling and end-to-end data-driven learning. Some promising results were recently achieved using sim-to-real transfer learning, but estimating multiple contact locations and accurate contact forces remains difficult because simulations tend to be less accurate with a high number of contact locations and/or high force. This paper introduces a modular hybrid method that combines simulation data synthesized from an electromechanical finite element model with real measurements collected from a new ERT-based tactile sensor. We use about 290,000 simulated and 90,000 real measurements to train two deep neural networks: the first (Transfer-Net) captures the inevitable gap between simulation and reality, and the second (Recon-Net) reconstructs contact forces from voltage measurements. The number of contacts, contact locations, force magnitudes, and contact diameters are evaluated for a manually collected multi-contact dataset of 150 measurements. Our modular pipeline's results outperform predictions by both a physics-based model and end-to-end learning.
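As a toy illustration of the modular idea (not the paper's trained networks), the pipeline composes two learned maps: a transfer stage that projects real voltage measurements into simulation space, and a reconstruction stage that maps those onto a force distribution. The dimensions and the random linear maps below are stand-ins for the actual Transfer-Net and Recon-Net:

```python
import numpy as np

rng = np.random.default_rng(1)
n_voltages, n_taxels = 208, 64  # assumed sizes, for illustration only

# Random linear stand-ins for the two learned networks
transfer = rng.standard_normal((n_voltages, n_voltages)) * 0.1
recon = rng.standard_normal((n_taxels, n_voltages)) * 0.1

def predict_force_map(real_voltages):
    sim_like = transfer @ real_voltages  # Transfer-Net: bridge the sim-to-real gap
    return recon @ sim_like              # Recon-Net: reconstruct distributed force

force = predict_force_map(rng.standard_normal(n_voltages))
print(force.shape)  # (64,)
```

The design choice the abstract describes is the modularity: the sim-to-real correction and the force reconstruction are trained separately, so either stage can be retrained without touching the other.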

DOI Project Page [BibTex]


Bioinspired Robots Can Foster Nature Conservation

Chellapurath, M., Khandelwal, P., Schulz, A. K.

Frontiers in Robotics and AI, 10, 2023 (article) Accepted

Abstract
We live in a time of unprecedented scientific and human progress while being increasingly aware of its negative impacts on our planet's health. Aerial, terrestrial, and aquatic ecosystems have significantly declined, putting us on course for a sixth mass extinction event. Nonetheless, the advances made in science, engineering, and technology give us the opportunity to reverse some of the damage to these ecosystems and to preserve them through conservation efforts around the world. However, current conservation efforts are primarily human led, with assistance from conventional robotic systems that limit their scope and effectiveness and can negatively impact the surroundings. In this perspective, we present the field of bioinspired robotics as a source of versatile agents for future conservation efforts that can operate in the natural environment while minimizing disturbance to its inhabitants and to the environment's natural state. We provide an operational and environmental framework that should be considered while developing bioinspired robots for conservation. These considerations go beyond addressing the challenges of human-led conservation efforts and leverage advancements in materials, intelligence, and energy harvesting to make bioinspired robots move and sense like animals. In doing so, bioinspired robots become an attractive, non-invasive, sustainable, and effective conservation tool for exploration, data collection, intervention, and maintenance. Finally, we discuss the development of bioinspired robots in the context of collaboration, practicality, and applicability to ensure their further development and widespread use to protect and preserve our natural world.

[BibTex]



Conservation Tools: The Next Generation of Engineering–Biology Collaborations

Schulz, A., Shriver, C., Stathatos, S., Seleb, B., Weigel, E., Chang, Y., Bhamla, M. S., Hu, D., III, J. R. M.

Royal Society Interface, 2023, Andrew Schulz, Cassie Shriver, Suzanne Stathatos, and Benjamin Seleb are co-first authors. (article)

Abstract
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology, which involves designing and constructing tools that use technology to aid wildlife conservation. In this review, we present five case studies and infer a framework for designing conservation tools based on human-wildlife interaction. Successful conservation tools range in complexity from cat collars to machine-learning and game-theory methodologies, and contributing to their creation does not require technological expertise. We aim to introduce researchers to conservation technology and provide references to guide the next generation of conservation technologists. Conservation technology has the potential to benefit biodiversity and to have broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.

DOI [BibTex]



A Year at the Forefront of Hydrostat Motion

Schulz, A., Schneider, N., Zhang, M., Singal, K.

Biology Open, 2023, N. Schneider, M. Zhang, and K. Singal contributed equally to this manuscript. (article)

Abstract
In interdisciplinary biology, there has recently been a significant push by the soft robotics community to understand the motion and maneuverability of hydrostats. This review seeks to expand the muscular hydrostat hypothesis toward new structures, including plants, and to introduce the hydrostat community to innovative methods for modeling, simulating, mimicking, and observing hydrostat motion. These methods range from kirigami, origami, and knitting for mimic creation to reinforcement learning for control of bio-inspired soft robotic systems. Modeling now shows that mechanisms such as skin, nostrils, or sheathed layered muscle walls can inhibit traditional hydrostat motion. This review highlights these mechanisms, including asymmetries, and discusses the critical next steps toward understanding their motion and how species with hydrostat structures control such complex motions, covering work from January 2022 to December 2022.

DOI [BibTex]


2022


no image
Multi-Timescale Representation Learning of Human and Robot Haptic Interactions

Richardson, B.

University of Stuttgart, Stuttgart, Germany, December 2022, Faculty of Computer Science, Electrical Engineering and Information Technology (phdthesis)

Abstract
The sense of touch is one of the most crucial components of the human sensory system. It allows us to safely and intelligently interact with the physical objects and environment around us. By simply touching or dexterously manipulating an object, we can quickly infer a multitude of its properties. For more than fifty years, researchers have studied how humans physically explore and form perceptual representations of objects. Some of these works proposed the paradigm through which human haptic exploration is presently understood: humans use a particular set of exploratory procedures to elicit specific semantic attributes from objects. Others have sought to understand how physically measured object properties correspond to human perception of semantic attributes. Few, however, have investigated how specific explorations are perceived. As robots become increasingly advanced and more ubiquitous in daily life, they are beginning to be equipped with haptic sensing capabilities and algorithms for processing and structuring haptic information. Traditional haptics research has so far strongly influenced the introduction of haptic sensation and perception into robots but has not proven sufficient to give robots the necessary tools to become intelligent autonomous agents. The work presented in this thesis seeks to understand how single and sequential haptic interactions are perceived by both humans and robots. In our first study, we depart from the more traditional methods of studying human haptic perception and investigate how the physical sensations felt during single explorations are perceived by individual people. We treat interactions as probability distributions over a haptic feature space and train a model to predict how similarly a pair of surfaces is rated, predicting perceived similarity with a reasonable degree of accuracy. Our novel method also allows us to evaluate how individual people weigh different surface properties when they make perceptual judgments. 
The method is highly versatile and presents many opportunities for further studies into how humans form perceptual representations of specific explorations. Our next body of work explores how to improve robotic haptic perception of single interactions. We use unsupervised feature-learning methods to derive powerful features from raw robot sensor data and classify robot explorations into numerous haptic semantic property labels that were assigned from human ratings. Additionally, we provide robots with more nuanced perception by learning to predict graded ratings of a subset of properties. Our methods outperform previous attempts that all used hand-crafted features, demonstrating the limitations of such traditional approaches. To push robot haptic perception beyond evaluation of single explorations, our final work introduces and evaluates a method to give robots the ability to accumulate information over many sequential actions; our approach essentially takes advantage of object permanence by conditionally and recursively updating the representation of an object as it is sequentially explored. We implement our method on a robotic gripper platform that performs multiple exploratory procedures on each of many objects. As the robot explores objects with new procedures, it gains confidence in its internal representations and classification of object properties, thus moving closer to the marvelous haptic capabilities of humans and providing a solid foundation for future research in this domain.

link (url) Project Page [BibTex]



A Guide for Successful Research Collaborations between Zoos and Universities

Schulz, A., Shriver, C., Aubuchon, C., Weigel, E., Kolar, M., III, J. M., Hu, D.

Integrative and Comparative Biology, 62, pages: 1174-1185, November 2022 (article)

Abstract
Zoos offer university researchers unique opportunities to study animals that would be difficult or impractical to work with in the wild. However, the different cultures, goals, and priorities of zoos and universities can be a source of conflict. How can researchers build mutually beneficial collaborations with their local zoo? In this article, we present the results of a survey of 117 personnel from 59 zoos around the United States, where we highlight best practices spanning all phases of collaboration, from planning to working alongside the zoo and maintaining contact afterward. Collaborations were hindered if university personnel did not appreciate the zoo staff’s time constraints as well as the differences between zoo animals and laboratory animals. We include a vision for how to improve zoo collaborations, along with a history of our own decade-long collaborations with Zoo Atlanta. A central theme is the long-term establishment of trust between institutions.

DOI [BibTex]



no image
Understanding the Influence of Moisture on Fingerpad-Surface Interactions

Nam, S.

University of Tübingen, Tübingen, Germany, October 2022, Department of Computer Science (phdthesis)

Abstract
People frequently touch objects with their fingers. The physical deformation of a finger pressing an object surface stimulates mechanoreceptors, resulting in a perceptual experience. Through interactions between perceptual sensations and motor control, humans naturally acquire the ability to manage friction under various contact conditions. Many researchers have advanced our understanding of human fingers to this point, but their complex structure and the variations in friction they experience due to continuously changing contact conditions necessitate additional study. Moisture is a primary factor that influences many aspects of the finger. In particular, sweat excreted from the numerous sweat pores on the fingerprints modifies the finger's material properties and the contact conditions between the finger and a surface. Measuring changes of the finger's moisture over time and in response to external stimuli presents a challenge for researchers, as commercial moisture sensors do not provide continuous measurements. This dissertation investigates the influence of moisture on fingerpad-surface interactions from diverse perspectives. First, we examine the extent to which moisture on the finger contributes to the sensation of stickiness during contact with glass. Second, we investigate the representative material properties of a finger at three distinct moisture levels, since the softness of human skin varies significantly with moisture. The third perspective is friction; we examine how the contact conditions, including the moisture of a finger, determine the available friction force opposing lateral sliding on glass. Fourth, we have invented and prototyped a transparent in vivo moisture sensor for the continuous measurement of finger hydration. In the first part of this dissertation, we explore how the perceptual intensity of light stickiness relates to the physical interaction between the skin and the surface. 
We conducted a psychophysical experiment in which nine participants actively pressed their index finger on a flat glass plate with a normal force close to 1.5 N and then detached it after a few seconds. A custom-designed apparatus recorded the contact force vector and the finger contact area during each interaction as well as pre- and post-trial finger moisture. After detaching their finger, participants judged the stickiness of the glass using a nine-point scale. We explored how sixteen physical variables derived from the recorded data correlate with each other and with the stickiness judgments of each participant. These analyses indicate that stickiness perception mainly depends on the pre-detachment pressing duration, the time taken for the finger to detach, and the impulse in the normal direction after the normal force changes sign; finger-surface adhesion seems to build with pressing time, causing a larger normal impulse during detachment and thus a more intense stickiness sensation. We additionally found a strong between-subjects correlation between maximum real contact area and peak pull-off force, as well as between finger moisture and impulse. When a fingerpad presses into a hard surface, the development of the contact area depends on the pressing force and speed. Importantly, it also varies with the finger's moisture, presumably because hydration changes the tissue's material properties. Therefore, for the second part of this dissertation, we collected data from one finger repeatedly pressing a glass plate under three moisture conditions, and we constructed a finite element model that we optimized to simulate the same three scenarios. We controlled the moisture of the subject's finger to be dry, natural, or moist and recorded 15 pressing trials in each condition. The measurements include normal force over time plus finger-contact images that are processed to yield gross contact area. 
We defined the axially symmetric 3D model's lumped parameters to include an SLS-Kelvin model (spring in series with parallel spring and damper) for the bulk tissue, plus an elastic epidermal layer. Particle swarm optimization was used to find the parameter values that cause the simulation to best match the trials recorded in each moisture condition. The results show that the softness of the bulk tissue reduces as the finger becomes more hydrated. The epidermis of the moist finger model is softest, while the natural finger model has the highest viscosity. In the third part of this dissertation, we focused on friction between the fingerpad and the surface. The magnitude of finger-surface friction available at the onset of full slip is crucial for understanding how the human hand can grip and manipulate objects. Related studies revealed the significance of moisture and contact time in enhancing friction. Recent research additionally indicated that surface temperature may also affect friction. However, previously reported friction coefficients have been measured only in dynamic contact conditions, where the finger is already sliding across the surface. In this study, we repeatedly measured the initial friction before full slip under eight contact conditions with low and high finger moisture, pressing time, and surface temperature. Moisture and pressing time both independently increased finger-surface friction across our population of twelve participants, and the effect of surface temperature depended on the contact conditions. Furthermore, detailed analysis of the recorded measurements indicates that micro stick-slip during the partial-slip phase contributes to enhanced friction. For the fourth and final part of this dissertation, we designed a transparent moisture sensor for continuous measurement of fingerpad hydration. 
Because various stimuli cause the sweat pores on fingerprints to excrete sweat, many researchers want to quantify the flow and assess its impact on the formation of the contact area. Unfortunately, the most popular sensor for skin hydration is opaque and does not offer continuous measurements. Our capacitive moisture sensor consists of a pair of inter-digital electrodes covered by an insulating layer, enabling impedance measurements across a wide frequency range. This proposed sensor is made entirely of transparent materials, which allows us to simultaneously measure the finger's contact area. Electrochemical impedance spectroscopy identifies the equivalent electrical circuit and the electrical component parameters that are affected by the amount of moisture present on the surface of the sensor. Most notably, the impedance at 1 kHz seems to best reflect the relative amount of sweat.

DOI Project Page [BibTex]



no image
Learning to Feel Textures: Predicting Perceptual Similarities from Unconstrained Finger-Surface Interactions

Richardson, B. A., Vardar, Y., Wallraven, C., Kuchenbecker, K. J.

IEEE Transactions on Haptics, 15(4):705-717, October 2022, Benjamin A. Richardson and Yasemin Vardar contributed equally to this publication. (article)

Abstract
Whenever we touch a surface with our fingers, we perceive distinct tactile properties that are based on the underlying dynamics of the interaction. However, little is known about how the brain aggregates the sensory information from these dynamics to form abstract representations of textures. Earlier studies in surface perception all used general surface descriptors measured in controlled conditions instead of considering the unique dynamics of specific interactions, reducing the comprehensiveness and interpretability of the results. Here, we present an interpretable modeling method that predicts the perceptual similarity of surfaces by comparing probability distributions of features calculated from short time windows of specific physical signals (finger motion, contact force, fingernail acceleration) elicited during unconstrained finger-surface interactions. The results show that our method can predict the similarity judgments of individual participants with a maximum Spearman's correlation of 0.7. Furthermore, we found evidence that different participants weight interaction features differently when judging surface similarity. Our findings provide new perspectives on human texture perception during active touch, and our approach could benefit haptic surface assessment, robotic tactile perception, and haptic rendering.
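The reported metric is Spearman's rank correlation between model-predicted and human-judged similarities. A minimal sketch of that metric, computed as the Pearson correlation of ranks (no tie handling; the rating vectors are invented for illustration):

```python
import numpy as np

def spearman(a, b):
    """Spearman's rho via Pearson correlation of ranks (assumes no ties)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

predicted = np.array([0.1, 0.4, 0.35, 0.8, 0.6])  # hypothetical model scores
judged = np.array([1.0, 2.0, 3.0, 5.0, 4.0])      # hypothetical participant ratings
rho = spearman(predicted, judged)
print(rho)  # 0.9
```

Because Spearman's rho depends only on rank order, it suits similarity judgments collected on ordinal scales, where absolute rating magnitudes differ across participants.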

DOI Project Page [BibTex]



no image
A New Power Law Linking the Speed to the Geometry of Tool-Tip Orientation in Teleoperation of a Robot-Assisted Surgical System

Zruya, O., Sharon, Y., Kossowsky, H., Forni, F., Geftler, A., Nisky, I.

IEEE Robotics and Automation Letters, 7(4):10762-10769, October 2022 (article)

Abstract
Fine manipulation is important in dexterous tasks executed via teleoperation, including in robot-assisted surgery. Discovering fundamental laws of human movement can benefit the design and control of teleoperated systems, and the training of their users. These laws are formulated as motor invariants, such as the well-studied speed-curvature power law. However, while the majority of these laws characterize translational movements, fine manipulation requires controlling the orientation of objects as well. This subject has received little attention in human motor control studies. Here, we report a new power law linking the speed to the geometry in orientation control – humans rotate their hands with an angular speed that is exponentially related to the local change in the direction of rotation. We demonstrate this law in teleoperated tasks performed by surgeons using surgical robotics research platforms. Additionally, we show that the law's parameters change slowly with the surgeons' training, and are robust within participants across task segments and repetitions. The fact that this power law is a robust motor invariant suggests that it may be an outcome of sensorimotor control. It also opens questions about the nature of this control and how it can be harnessed for better control of human-teleoperated robotic systems.
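The paper's new law concerns rotation, but the flavor of such motor invariants can be seen in the well-studied speed-curvature power law it mentions, v = c·κ^(−1/3). A sketch evaluating that classic law (not the paper's orientation law) along an ellipse, whose curvature has a closed form; the parameters are arbitrary:

```python
import numpy as np

# Speed-curvature power law v = c * kappa**(-1/3) evaluated on the ellipse
# x = a*cos(t), y = b*sin(t), whose curvature is known analytically.
a, b, gain = 2.0, 1.0, 1.0
t = np.linspace(0.0, 2.0 * np.pi, 201)
kappa = (a * b) / (a**2 * np.sin(t)**2 + b**2 * np.cos(t)**2) ** 1.5
v = gain * kappa ** (-1.0 / 3.0)
# The law predicts slowing where curvature is high: speed is minimal at the
# ends of the major axis (t = 0, pi) and maximal on the minor axis.
print(round(v[0], 3), round(v.max(), 3))  # 0.794 1.587
```

Fitting the exponent of such a law to recorded hand trajectories, and tracking how it changes with training, is the kind of analysis the abstract describes for its new rotational invariant.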

DOI [BibTex]



no image
Towards Semi-Automated Pleural Cavity Access for Pneumothorax in Austere Environments

L’Orsa, R., Lama, S., Westwick, D., Sutherland, G., Kuchenbecker, K. J.

In Proceedings of the International Astronautical Congress (IAC), pages: 1-7, Paris, France, September 2022 (inproceedings)

Abstract
Pneumothorax, a condition where injury or disease introduces air between the chest wall and lungs, can impede lung function and lead to respiratory failure and/or obstructive shock. Chest trauma from dynamic loads, hypobaric exposure from extravehicular activity, and pulmonary inflammation from celestial dust exposures could potentially cause pneumothoraces during spaceflight with or without exacerbation from deconditioning. On Earth, emergent cases are treated with chest tube insertion (tube thoracostomy, TT) when available, or needle decompression (ND) when not (i.e., pre-hospital). However, ND has high failure rates (up to 94%), and TT has high complication rates (up to 37.9%), especially when performed by inexperienced or intermittent operators. Thus neither procedure is ideal for a pure just-in-time training or skill refreshment approach, and both may require adjuncts for safe inclusion in Level of Care IV (e.g., short duration lunar orbit) or V (e.g., Mars transit) missions. Insertional complications are of particular concern since they cause inadvertent tissue damage that, while surgically repairable in an operating room, could result in (preventable) fatality in a spacecraft or other isolated, confined, or extreme (ICE) environments. Tools must be positioned and oriented correctly to avoid accidental insertion into critical structures, and they must be inserted no further than the thin membrane lining the inside of the rib cage (i.e., the parietal pleura). Operators identify pleural puncture via loss-of-resistance sensations on the tool during advancement, but experienced surgeons anecdotally describe a wide range of membrane characteristics: robust tissues require significant force to perforate, while fragile tissues deliver little-to-no haptic sensation when pierced. Both extremes can lead to tool overshoot and may be representative of astronaut tissues at the beginning (healthy) and end (deconditioned) of long duration exploration class missions. 
Given uncertainty surrounding physician astronaut selection criteria, skill retention, and tissue condition, an adjunct for improved insertion accuracy would be of value. We describe experiments conducted with an intelligent prototype sensorized system aimed at semi-automating tool insertion into the pleural cavity. The assembly would integrate with an in-mission medical system and could be tailored to fully complement an autonomous medical response agent. When coupled with minimal just-in-time training, it has the potential to bestow expert pleural access skills on non-expert operators without the use of ground resources, in both emergent and elective treatment scenarios.

Project Page [BibTex]
