Katherine J. Kuchenbecker directs the Haptic Intelligence Department at the Max Planck Institute for Intelligent Systems in Stuttgart, Germany. She was previously an Associate Professor of Mechanical Engineering and Applied Mechanics at the University of Pennsylvania, where she held the Class of 1940 Bicentennial Endowed Term Chair and a secondary appointment in Computer and Information Science. Kuchenbecker earned her Ph.D. in Mechanical Engineering at Stanford University in 2006 and did a postdoctoral fellowship at the Johns Hopkins University. Her research centers on haptic interfaces, which enable a user to touch virtual and distant objects as though they were real and within reach, as well as haptic sensing systems, which allow robots to physically interact with objects and people.
Kuchenbecker delivered a TEDYouth talk on haptics in 2012, and she has received several honors including a 2009 NSF CAREER Award, the 2012 IEEE Robotics and Automation Society Academic Early Career Award, and a 2014 Penn Lindback Award for Distinguished Teaching. Her team has won various best paper and best demonstration awards, and she frequently gives keynote talks at conferences. She was co-chair of the IEEE Technical Committee on Haptics from 2014 to 2017, and she co-chaired the IEEE Haptics Symposium in 2016 and 2018.
Although a human can move his or her fingertip with six degrees of freedom (three for position and three for orientation), displaying 6-DOF haptic cues remains beyond the capabilities of body-grounded fingertip haptic displays. All six degrees of freedom have been displayed in smaller subsets, so the l...
Development of surface haptic technologies has lately drawn significant attention, in parallel with the growing use of modern electronic devices involving touchscreens. Different haptic scenarios provide active tactile feedback by controlling natural physical phenomena such as contact, friction, and vibrations. However, these system...
The lack of haptic feedback is a potential limitation of existing robotic surgical systems. Members of Dr. Kuchenbecker's group at Penn previously invented a haptic feedback system named VerroTouch that is able to deliver the vibrations of s...
Robotic minimally invasive surgery systems such as the Intuitive Surgical da Vinci system physically separate the surgeon from the surgical tools. As touch cues are known to play a critical role in manipulation tasks, the resulting loss of the sense of touch may affect the speed and ski...
Humans draw on their vast life experience of touching things to make inferences about the objects in their environment; this capability enables one to make haptic judgments before actually touching things. For example, it is effortless to select the correct grip force for picking up a delicate object, or to choose a g...
Creating haptic experiences often entails inventing, modifying, or selecting specialized hardware. However, experience designers are rarely engineers, and 30 years of haptic inventions are buried in the fragmented literature that describes devices mechanically rather...
Many modern humanoid robots are designed to operate in human environments, like homes and hospitals. If designed well, such robots could help humans accomplish tasks and lower their physical and/or mental workload. As opposed to having an operator devise control policies and reprogram the robot for every new situation...
Human soft-tissue properties vary widely: factors such as the patient's age and health can make enormous differences. As a result, junior medical professionals often have difficulty learning how much force is needed to cut through or puncture a particular type of tissue. For example, during chest tube insertion, the do...
Being able to cover the entire body of a robot with soft tactile sensors has become an attractive concept in intelligent robotics. Soft, stretchable materials can conform around surfaces and also absorb impacts, which is benefi...
Humans can form an impression of how a new object feels simply by touching its surfaces with the densely innervated skin of the fingertips. Recent research has focused on endowing robots with similar levels of haptic intelligence, but these efforts are usually limited to specifi...
Many scenarios arise wherein the high-frequency accelerations of a tool need to be captured and either portrayed for a human to feel or analyzed by a computer system. For example, this approach provides a simple and realistic way for a surgeon to feel tactile information from a remotely controlled surgical robot. Sim...
The frictional forces we experience when our body interacts with objects provide essential sensory cues that help us adapt our behavior. We rely on these sensory cues daily, for example when we feel the smoothness of...
Many situations arise where it is beneficial for a human to control the movements of a robot at a distance, such as handling hazardous materials, doing surgery deep inside the human body, or taking part in remote meetings with other people. In these scenarios, the...
DJing is a musical activity that involves the blending of songs (mixing) and the rhythmic manipulation of sounds (scratching). Traditionally, DJs used vinyl records as their music sources. The vibration of a stylus (needle) in the groove of these records produced not just sound, but also subtle haptic sensations that could be fe...
The interaction between a human finger-pad and a physical surface generates not only the tangential friction needed for gripping objects but also a wide variety of perceptual experiences. Finger-surface contact behavior is known to depend on the ...
Researchers worldwide want to discover how to generate compelling tactile sensations on touchscreens to increase the usability of mobile devices and other interactive computer systems. One approach for generating such sensations is to control the friction force between the screen and the finger-pad of the ...
Both vision and touch play important roles in human perception of real surfaces. Judging material properties based on only one modality may not give reliable results. For example, many of us have had the experience th...
When performing minimally invasive robotic surgery, surgeons must currently rely only on their visual sense, as commercially available robotic surgery systems provide no touch feedback. Dr. Kuchenbecker and other members of the Penn Haptics Group previously inve...
Improvements in healthcare have led to an increase in human life expectancy. Members of this aging population want to stay healthy and active, but many forms of exercise and physical therapy are expensive, boring, or inefficien...
Walking speed and symmetry are high priorities for people with hemiparesis from stroke. We developed the Gait Propulsion Trainer (GPT) to help such individuals improve their walking abilities by increasing the propulsive force generated by the paretic leg. The GPT centers on a cable spool attached to a stand at wais...
Robots working in unstructured environments and alongside people need to be able to sense contact information from both intentional and unintentional interactions. Soft and skin-like tactile sensors can provide a robot wit...
Up to 90% of individuals who undergo amputation experience a persistent sensation of the missing limb, which is called a phantom limb. A substantial subset of these people feel intense pain in the missing extremity; this phantom limb pain (PLP) often responds poorly to medications or other interventions an...
Cross-Cutting Challenge Interactive Discussion presented at the IEEE Haptics Symposium, San Francisco, USA, March 2018 (misc)
Fingertips and hands captivate the attention of most haptic interface designers, but humans can feel touch stimuli across the entire body surface. Trying to create devices that both can be worn and can deliver good haptic sensations raises challenges that rarely arise in other contexts. Most notably, tactile cues such as vibration, tapping, and squeezing are far simpler to implement in wearable systems than kinesthetic haptic feedback. This interactive discussion will present a variety of relevant projects to which I have contributed, attempting to pull out common themes and ideas for the future.
Work-in-progress paper (3 pages) presented at the IEEE Haptics Symposium, San Francisco, USA, March 2018 (misc)
Much of three decades of haptic device invention is effectively lost to today’s designers: dispersion across time, region, and discipline imposes an incalculable drag on innovation in this field. Our goal is to make historical haptic invention accessible through interactive navigation of a comprehensive library – a Haptipedia – of devices that have been annotated with designer-relevant metadata. To build this open resource, we will systematically mine the literature and engage the haptics community for expert annotation. In a multi-year broad-based initiative, we will empirically derive salient attributes of haptic devices, design an interactive visualization tool where device creators and repurposers can efficiently explore and search Haptipedia, and establish methods and tools to manually and algorithmically collect data from the haptics literature and our community of experts. This paper outlines progress in compiling an initial corpus of grounded force-feedback devices and their attributes, and it presents a concept sketch of the interface we envision.
Workshop paper (6 pages) presented at the HRI Workshop on Personal Robots for Exercising and Coaching, Chicago, USA, March 2018 (misc)
The worldwide population of older adults is steadily increasing and will soon exceed the capacity of assisted living facilities. Accordingly, we aim to understand whether appropriately designed robots could help older adults stay active and engaged while living at home. We developed eight human-robot exercise games for the Baxter Research Robot with the guidance of experts in game design, therapy, and rehabilitation. After extensive iteration, these games were employed in a user study that tested their viability with 20 younger and 20 older adult users. All participants were willing to enter Baxter’s workspace and physically interact with the robot. User trust and confidence in Baxter increased significantly between pre- and post-experiment assessments, and one individual from the target user population supplied us with abundant positive feedback about her experience. The preliminary results presented in this paper indicate potential for the use of two-armed human-scale robots for social-physical exercise interaction.
Workshop paper (2 pages) presented at the HRI Pioneers Workshop, Chicago, USA, March 2018 (misc)
Hugs are one of the first forms of contact and affection humans experience. Due to their prevalence and health benefits, we want to enable robots to safely hug humans. This research strives to create and study a high fidelity robotic system that provides emotional support to people through hugs. This paper outlines our previous work evaluating human responses to a prototype’s physical and behavioral characteristics, and then it lays out our ongoing and future work.
Work-in-progress paper (3 pages) presented at the IEEE Haptics Symposium, San Francisco, USA, March 2018 (misc)
Little is known about the shape and properties of the human finger during haptic interaction even though this knowledge is essential to control wearable finger devices and deliver realistic tactile feedback. This study explores a framework for four-dimensional scanning and modeling of finger-surface interactions, aiming to capture the motion and deformations of the entire finger with high resolution. The results show that when the fingertip is actively pressing a rigid surface, it undergoes lateral expansion of about 0.2 cm and proximal/distal bending of about 30°, deformations that cannot be captured by imaging of the contact area alone. This project constitutes a first step towards an accurate statistical model of the finger’s behavior during haptic interaction.
Work-in-progress paper (3 pages) presented at the IEEE Haptics Symposium, San Francisco, USA, March 2018 (misc)
Human children typically experience their surroundings both visually and haptically, providing ample opportunities to learn rich cross-sensory associations. To thrive in human environments and interact with the real world, robots also need to build models of these cross-sensory associations; current advances in machine learning should make it possible to infer models from large amounts of data. We previously built a visuo-haptic sensing device, the Proton Pack, and are using it to collect a large database of matched multimodal data from tool-surface interactions. As a benchmark to compare with machine learning performance, we conducted a human subject study (n = 84) on estimating haptic surface properties (here: hardness, roughness, friction, and warmness) from images. Using a 100-surface subset of our database, we showed images to study participants and collected 5635 ratings of the four haptic properties, which we compared with ratings made by the Proton Pack operator and with physical data recorded using motion, force, and vibration sensors. Preliminary results indicate weak correlation between participant and operator ratings, but potential for matching up certain human ratings (particularly hardness and roughness) with features from the literature.
In Proceedings of the International Symposium on Robotics Research (ISRR), Puerto Varas, Chile, December 2017 (inproceedings) In press
Hand-clapping games and other forms of rhythmic social-physical interaction might help foster human-robot teamwork, but the design of such interactions has scarcely been explored. We leveraged our prior work to enable the Rethink Robotics Baxter Research Robot to competently play one-handed tempo-matching hand-clapping games with a human user. To understand how such a robot’s capabilities and behaviors affect user perception, we created four versions of this interaction: the hand clapping could be initiated by either the robot or the human, and the non-initiating partner could be either cooperative, yielding synchronous motion, or mischievously uncooperative. Twenty adults tested two clapping tempos in each of these four interaction modes in a random order, rating every trial on standardized scales. The study results showed that having the robot initiate the interaction gave it a more dominant perceived personality. Despite previous results on the intrigue of misbehaving robots, we found that moving synchronously with the robot almost always made the interaction more enjoyable, less mentally taxing, less physically demanding, and lower effort for users than asynchronous interactions caused by robot or human mischief. Taken together, our results indicate that cooperative rhythmic social-physical interaction has the potential to strengthen human-robot partnerships.
Workshop Paper (2 pages) presented at the RO-MAN Workshop on Social Interaction and Multimodal Expression for Socially Intelligent Robots, Lisbon, Portugal, August 2017 (misc)
A hug is one of the most basic ways humans can express affection. As hugs are so common, a natural progression of robot development is to have robots one day hug humans as seamlessly as these intimate human-human interactions occur. This project’s purpose is to evaluate human responses to different robot physical characteristics and hugging behaviors. Specifically, we aim to test the hypothesis that a warm, soft, touch-sensitive PR2 humanoid robot can provide humans with satisfying hugs by matching both their hugging pressure and their hugging duration. Thirty participants experienced and evaluated twelve hugs with the robot, divided into three randomly ordered trials that focused on physical robot characteristics and nine randomly ordered trials with varied hug pressure and duration. We found that people prefer soft, warm hugs over hard, cold hugs. Furthermore, users prefer hugs that physically squeeze them and release immediately when they are ready for the hug to end.
In Proceedings of the IEEE World Haptics Conference (WHC), pages: 599-604, Munich, Germany, June 2017, Finalist for best poster paper (inproceedings)
Despite rapid advancements in the field of fingertip haptics, rendering tactile cues with six degrees of freedom (6 DOF) remains an elusive challenge. In this paper, we investigate the potential of displaying fingertip haptic sensations with a 6-DOF parallel continuum manipulator (PCM) that mounts to the user's index finger and moves a contact platform around the fingertip. Compared to traditional mechanisms composed of rigid links and discrete joints, PCMs have the potential to be strong, dexterous, and compact, but they are also more complicated to design. We define the design space of 6-DOF parallel continuum manipulators and outline a process for refining such a device for fingertip haptic applications. Following extensive simulation, we obtain 12 designs that meet our specifications, construct a manually actuated prototype of one such design, and evaluate the simulation's ability to accurately predict the prototype's motion. Finally, we demonstrate the range of deliverable fingertip tactile cues, including a normal force into the finger and shear forces tangent to the finger at three extreme points on the boundary of the fingertip.
In Proceedings of the IEEE World Haptics Conference (WHC), pages: 394-399, Munich, Germany, June 2017 (inproceedings)
Clever electromechanical design is required to make the force feedback delivered by a kinesthetic haptic interface both strong and safe. This paper explores a one-dimensional haptic force display that combines a DC motor and a magnetic particle brake on the same shaft. Rather than a rigid linkage, a spooled cable connects the user to the actuators to enable a large workspace, reduce the moving mass, and eliminate the sticky residual force from the brake. This design combines the high torque/power ratio of the brake and the active output capabilities of the motor to provide a wider range of forces than can be achieved with either actuator alone. A prototype of this device was built, its performance was characterized, and it was used to simulate constant force sources and virtual springs and dampers. Compared to the conventional design of using only a motor, the hybrid device can output higher unidirectional forces at the expense of free space feeling less free.
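As a rough illustration of the actuation concept, the sketch below splits a commanded cable force between the particle brake and the DC motor, letting the brake carry resistive loads and the motor supply active forces. This is one plausible allocation under stated assumptions, not necessarily the controller used in the paper; the function name, force limits, and sign conventions are hypothetical.

```python
# Hypothetical force-allocation sketch for a hybrid motor + particle-brake
# cable display; not necessarily the controller described in the paper.
def allocate(force_cmd, payout_velocity, motor_max=5.0, brake_max=20.0):
    """Split a commanded cable tension [N] between motor and brake.

    force_cmd       : desired pull on the user's hand (>= 0; a cable cannot push)
    payout_velocity : > 0 when the user is pulling cable off the spool
    """
    if payout_velocity > 0:
        # The brake can resist cable pay-out, so let it carry most of the load...
        brake_force = min(force_cmd, brake_max)
        # ...and the motor tops up whatever the brake cannot provide.
        motor_force = min(force_cmd - brake_force, motor_max)
    else:
        # While the cable is being reeled in, braking would fight the rendering,
        # so the motor alone supplies the (smaller) active force.
        brake_force = 0.0
        motor_force = min(force_cmd, motor_max)
    return motor_force, brake_force
```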
In Proceedings of the IEEE World Haptics Conference (WHC), pages: 107-112, Munich, Germany, June 2017 (inproceedings)
Over time, surgical trainees learn to compensate for the lack of haptic feedback in commercial robotic minimally invasive surgical systems. Incorporating touch cues into robotic surgery training could potentially shorten this learning process if the benefits of haptic feedback were sustained after its removal. In this paper, we develop a wrist-squeezing haptic feedback system and evaluate whether it holds the potential to train novice da Vinci users to reduce the force they exert on a bimanual inanimate training task. Subjects were randomly divided into two groups according to a multiple baseline experimental design. Each of the ten participants moved a ring along a curved wire nine times while the haptic feedback was conditionally withheld, provided, and withheld again. The real-time tactile feedback of applied force magnitude significantly reduced the integral of the force produced by the da Vinci tools on the task materials, and this result remained even when the haptic feedback was removed. Overall, our findings suggest that wrist-squeezing force feedback can play an essential role in helping novice trainees learn to minimize the force they exert with a surgical robot.
In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pages: 439-445, Singapore, May 2017 (inproceedings)
The Portable Robotic Optical/Tactile ObservatioN PACKage (Proton Pack, or Proton for short) is a new handheld visuo-haptic sensing system that records surface interactions. We previously demonstrated system calibration and a classification task using external motion tracking. This paper details improvements in surface classification performance and removal of the dependence on external motion tracking, necessary before embarking on our goal of gathering a vast surface interaction dataset. Two experiments were performed to refine data collection parameters. After adjusting the placement and filtering of the Proton's high-bandwidth accelerometers, we recorded interactions between two differently-sized steel tooling ball end-effectors (diameter 6.35 and 9.525 mm) and five surfaces. Using features based on normal force, tangential force, end-effector speed, and contact vibration, we trained multi-class SVMs to classify the surfaces using 50 ms chunks of data from each end-effector. Classification accuracies of 84.5% and 91.5%, respectively, were achieved on unseen test data, an improvement over prior results. In parallel, we pursued on-board motion tracking, using the Proton's camera and fiducial markers. Motion tracks from the external and onboard trackers agree within 2 mm and 0.01 rad RMS, and the accuracy decreases only slightly to 87.7% when using onboard tracking for the 9.525 mm end-effector. These experiments indicate that the Proton 2 is ready for portable data collection.
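As a minimal sketch of the classification step described above, the snippet below trains a multi-class SVM on per-chunk features; the feature set, file names, and SVM settings are illustrative assumptions rather than the exact Proton pipeline.

```python
# Minimal sketch of multi-class SVM surface classification from 50 ms chunks.
# File names and the feature set are assumptions, not the actual Proton data.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Each row holds features from one 50 ms chunk, e.g. mean normal force,
# mean tangential force, end-effector speed, and contact-vibration power.
X = np.load("proton_chunk_features.npy")   # shape: (n_chunks, n_features)
y = np.load("proton_surface_labels.npy")   # shape: (n_chunks,), surface IDs

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Multi-class SVM (one-vs-one under the hood) with standardized features.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```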
Workshop paper (3 pages) presented at the ICRA Workshop on C4 Surgical Robots, Singapore, May 2017 (misc)
Teleoperated surgical robots such as the Intuitive da Vinci Surgical System facilitate minimally invasive surgeries, which decrease risk to patients. However, these systems can be difficult to learn, and existing training curricula on surgical simulators do not offer students the realistic experience of a full operation. This paper presents an augmented-reality video training platform for the da Vinci that will allow trainees to rehearse any surgery recorded by an expert. While the trainee operates a da Vinci in free space, they see their own instruments overlaid on the expert video. Tools are identified in the source videos via color segmentation and kernelized correlation filter tracking, and their depth is calculated from the da Vinci’s stereoscopic video feed. The user tries to follow the expert’s movements, and if any of their tools venture too far away, the system provides instantaneous visual feedback and pauses to allow the user to correct their motion. The trainee can also rewind the expert video by bringing either da Vinci tool very close to the camera. This combined and augmented video provides the user with an immersive and interactive training experience.
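The tool-localization idea (color segmentation to find a tool marker, then kernelized correlation filter tracking to follow it) can be sketched with OpenCV as below; the HSV thresholds, video file name, and the tracker constructor location (cv2 vs. cv2.legacy, depending on the OpenCV build) are assumptions for illustration only.

```python
# Illustrative sketch: segment a color-marked tool, then track it with KCF.
# Thresholds and the video file name are hypothetical example values.
import cv2

cap = cv2.VideoCapture("expert_left_camera.mp4")   # hypothetical recording
ok, frame = cap.read()

# 1) Color segmentation: threshold in HSV to find the tool marker.
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (35, 80, 80), (85, 255, 255))   # e.g. a green marker
x, y, w, h = cv2.boundingRect(cv2.findNonZero(mask))

# 2) Kernelized correlation filter tracking, initialized on that bounding box
#    (use cv2.legacy.TrackerKCF_create() on some OpenCV builds).
tracker = cv2.TrackerKCF_create()
tracker.init(frame, (x, y, w, h))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, box = tracker.update(frame)
    if found:
        x, y, w, h = map(int, box)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracked tool", frame)
    if cv2.waitKey(1) == 27:   # Esc to quit
        break
```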
Hands-on demonstration presented at ACM/IEEE International Conference on Human-Robot Interaction (HRI), Vienna, Austria, March 2017 (misc)
Robots that work alongside humans might be more effective if they could forge a strong social bond with their human partners. Hand-clapping games and other forms of rhythmic social-physical interaction may foster human-robot teamwork, but the design of such interactions has scarcely been explored. At the HRI 2017 conference, we will showcase several such interactions taken from our recent work with the Rethink Robotics Baxter Research Robot, including tempo-matching, Simon says, and Pat-a-cake-like games. We believe conference attendees will be both entertained and intrigued by this novel demonstration of social-physical HRI.
Surgical Endoscopy, 31(Supplement 1):S28, Extended abstract presented as a podium presentation at the Annual Meeting of the Society of American Gastrointestinal and Endoscopic Surgeons (SAGES), Springer, Houston, USA, March 2017 (misc)
Introduction: Minimally invasive surgery has revolutionized surgical practice, but challenges remain. Trainees must acquire complex technical skills while minimizing patient risk, and surgeons must maintain their skills for rare procedures. These challenges are magnified in pediatric surgery due to the smaller spaces, finer tissue, and relative dearth of both inanimate and virtual simulators. To build technical expertise, trainees need opportunities for deliberate practice with specific performance feedback, which is typically provided via tedious human grading. This study aimed to validate a novel motion-tracking system and machine learning algorithm for automatically evaluating trainee performance on a pediatric laparoscopic suturing task using a 1–5 OSATS Overall Skill rating.
Methods: Subjects (n=14) ranging from medical students to fellows performed one or two trials of an intracorporeal suturing task in a custom pediatric laparoscopy training box (Fig. 1) after watching a video of ideal performance by an expert. The position and orientation of the tools and endoscope were recorded over time using Ascension trakSTAR magnetic motion-tracking sensors, and both instrument grasp angles were recorded over time using flex sensors on the handles. The 27 trials were video-recorded and scored on the OSATS scale by a senior fellow; ratings ranged from 1 to 4. The raw motion data from each trial was processed to calculate over 200 preliminary motion parameters. Regularized least-squares regression (LASSO) was used to identify the most predictive parameters for inclusion in a regression tree. Model performance was evaluated by leave-one-subject-out cross-validation, wherein the automatic scores given to each subject’s trials (by a model trained on all other data) are compared to the corresponding human rater scores.
Results: The best-performing LASSO algorithm identified 14 predictive parameters for inclusion in the regression tree, including completion time, linear path length, angular path length, angular acceleration, grasp velocity, and grasp acceleration. The final model’s raw output showed a strong positive correlation of 0.87 with the reviewer-generated scores, and rounding the output to the nearest integer yielded a leave-one-subject-out cross-validation accuracy of 77.8%. Results are summarized in the confusion matrix (Table 1).
Conclusions: Our novel motion-tracking system and regression model automatically gave previously unseen trials overall skill scores that closely match scores from an expert human rater. With additional data and further development, this system may enable creation of a motion-based training platform for pediatric laparoscopic surgery and could yield insights into the fundamental components of surgical skill.
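The scoring pipeline summarized above (LASSO to select predictive motion parameters, a regression tree to map them to an OSATS-style score, and leave-one-subject-out cross-validation) could be sketched roughly as follows; the array names, shapes, and hyperparameters are illustrative assumptions, not the study's actual implementation.

```python
# Hedged sketch of LASSO feature selection + regression-tree scoring with
# leave-one-subject-out cross-validation; data files are hypothetical.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import LeaveOneGroupOut

X = np.load("motion_parameters.npy")      # (n_trials, ~200 candidate parameters)
y = np.load("osats_overall_scores.npy")   # (n_trials,), human rater scores 1-5
groups = np.load("subject_ids.npy")       # (n_trials,), subject of each trial

correct = 0
for train, test in LeaveOneGroupOut().split(X, y, groups):
    # Select predictive parameters using only the training subjects.
    lasso = LassoCV(cv=5).fit(X[train], y[train])
    keep = np.flatnonzero(lasso.coef_ != 0)
    if keep.size == 0:                     # fall back to all parameters
        keep = np.arange(X.shape[1])

    # Fit a small regression tree on the selected parameters.
    tree = DecisionTreeRegressor(max_depth=4).fit(X[train][:, keep], y[train])

    # Round the tree output to the nearest integer score and compare.
    pred = np.rint(tree.predict(X[test][:, keep]))
    correct += np.sum(pred == y[test])

print("leave-one-subject-out accuracy:", correct / len(y))
```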
Workshop paper (5 pages) presented at the AAAI Spring Symposium on Interactive Multi-Sensory Object Perception for Embodied Agents, Stanford, USA, March 2017 (misc)
The Proton Pack is a portable visuo-haptic surface interaction recording device that will be used to collect a vast multimodal dataset, intended for robots to use as part of an approach to understanding the world around them. In order to collect a useful dataset, we want to pick a suitable interaction duration for each surface, noting the tradeoff between data collection resources and completeness of data. One interesting approach frames the data collection process as an online learning problem, building an incremental surface model and using that model to decide when there is enough data. Here we examine how to do such online surface modeling and when to stop collecting data, using kinetic friction as a first domain in which to apply online modeling.
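To illustrate the kind of online stopping rule described above, the sketch below keeps a running estimate of the kinetic friction coefficient from streamed force samples and stops collecting once that estimate has stabilized; the simulated data source, window length, and convergence threshold are invented for illustration and do not reproduce the paper's model.

```python
# Illustrative online-modeling loop with a simple convergence stopping rule.
# The simulated data stream and thresholds are placeholders, not real data.
import numpy as np

def friction_sample_stream():
    """Placeholder for live (normal force, tangential force) samples."""
    rng = np.random.default_rng(0)
    while True:
        fn = rng.uniform(0.5, 3.0)                  # normal force [N]
        ft = 0.4 * fn + rng.normal(0.0, 0.02)       # tangential force [N]
        yield fn, ft

samples, running_means = [], []
for i, (fn, ft) in enumerate(friction_sample_stream(), start=1):
    samples.append(ft / fn)                    # per-sample estimate of mu
    running_means.append(np.mean(samples))     # incremental model after i samples
    # Stop once the model has changed by less than 1% over the last 50 samples.
    if i > 50 and abs(running_means[-1] - running_means[-51]) < 0.01 * running_means[-1]:
        print(f"enough data after {i} samples; mu is approximately {running_means[-1]:.3f}")
        break
```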
In Social Robotics: 8th International Conference, ICSR 2016, Kansas City, MO, USA, November 1-3, 2016 Proceedings, 9979, pages: 317-327, Lecture Notes in Artificial Intelligence, Springer International Publishing, November 2016, Oral presentation given by Fitter (inproceedings)
In Social Robotics: 8th International Conference, ICSR 2016, Kansas City, MO, USA, November 1-3, 2016 Proceedings, 9979, pages: 340-350, Lecture Notes in Artificial Intelligence, Springer International Publishing, November 2016, Oral presentation given by Fitter (inproceedings)
In Social Robotics: 8th International Conference, ICSR 2016, Kansas City, MO, USA, November 1-3, 2016 Proceedings, 9979, pages: 296-305, Lecture Notes in Artificial Intelligence, Springer International Publishing, November 2016, Oral presentation given by Fitter (inproceedings)