Haptic Intelligence


2024


Reflectance Outperforms Force and Position in Model-Free Needle Puncture Detection

L’Orsa, R., Bisht, A., Yu, L., Murari, K., Westwick, D. T., Sutherland, G. R., Kuchenbecker, K. J.

In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, USA, July 2024 (inproceedings) Accepted

Abstract
The surgical procedure of needle thoracostomy temporarily corrects accidental over-pressurization of the space between the chest wall and the lungs. However, failure rates of up to 94.1% have been reported, likely because this procedure is done blind: operators estimate by feel when the needle has reached its target. We believe instrumented needles could help operators discern entry into the target space, but limited success has been achieved using force and/or position to try to discriminate needle puncture events during simulated surgical procedures. We thus augmented our needle insertion system with a novel in-bore double-fiber optical setup. Tissue reflectance measurements as well as 3D force, torque, position, and orientation were recorded while two experimenters repeatedly inserted a bevel-tipped percutaneous needle into ex vivo porcine ribs. We applied model-free puncture detection to various filtered time derivatives of each sensor data stream offline. In the held-out test set of insertions, puncture-detection precision improved substantially using reflectance measurements compared to needle insertion force alone (3.3-fold increase) or position alone (11.6-fold increase).

Project Page [BibTex]



Expert Perception of Teleoperated Social Exercise Robots

Mohan, M., Mat Husin, H., Kuchenbecker, K. J.

In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI), pages: 769-773, Boulder, USA, March 2024, Late-Breaking Report (LBR), 5 pages (inproceedings)

Abstract
Social robots could help address the growing issue of physical inactivity by inspiring users to engage in interactive exercise. Nevertheless, the practical implementation of social exercise robots poses substantial challenges, particularly in terms of personalizing their activities to individuals. We propose that motion-capture-based teleoperation could serve as a viable solution to address these needs by enabling experts to record custom motions that could later be played back without their real-time involvement. To gather feedback about this idea, we conducted semi-structured interviews with eight exercise-therapy professionals. Our findings indicate that experts' attitudes toward social exercise robots become more positive when considering the prospect of teleoperation to record and customize robot behaviors.

DOI Project Page [BibTex]



Creating a Haptic Empathetic Robot Animal That Feels Touch and Emotion

Burns, R.

University of Tübingen, Tübingen, Germany, February 2024, Department of Computer Science (phdthesis)

Abstract
Social touch, such as a hug or a poke on the shoulder, is an essential aspect of everyday interaction. Humans use social touch to gain attention, communicate needs, express emotions, and build social bonds. Despite its importance, touch sensing is very limited in most commercially available robots. By endowing robots with social-touch perception, one can unlock a myriad of new interaction possibilities. In this thesis, I present my work on creating a Haptic Empathetic Robot Animal (HERA), a koala-like robot for children with autism. I demonstrate the importance of establishing design guidelines based on one's target audience, which we investigated through interviews with autism specialists. I share our work on creating full-body tactile sensing for the NAO robot using low-cost, do-it-yourself (DIY) methods, and I introduce an approach to model long-term robot emotions using second-order dynamics.
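The abstract mentions modeling long-term robot emotions with second-order dynamics. A minimal sketch of what such a model can look like, assuming a damped spring-like system whose emotion value is driven toward a touch-dependent target; the class, gains, and update scheme are illustrative, not HERA's actual implementation.

```python
class EmotionState:
    """Second-order (spring-damper-like) emotion dynamics: the emotion
    value e accelerates toward a target set by touch input, so moods
    change smoothly and persist after the stimulus ends. Gains are
    illustrative, not the thesis's fitted parameters."""
    def __init__(self, stiffness=4.0, damping=3.0):
        self.e = 0.0      # current emotion value (e.g., valence)
        self.v = 0.0      # its rate of change
        self.k = stiffness
        self.c = damping

    def step(self, target, dt=0.05):
        # e'' = k*(target - e) - c*e'  (classic second-order response)
        a = self.k * (target - self.e) - self.c * self.v
        self.v += a * dt
        self.e += self.v * dt
        return self.e

mood = EmotionState()
for _ in range(200):          # a sustained pleasant touch (target = 1)
    mood.step(target=1.0)
```

The appeal of a second-order formulation is exactly what the abstract implies: a brief touch nudges the state, but the mood settles gradually rather than snapping between fixed routines.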

Project Page [BibTex]


2023


Gesture-Based Nonverbal Interaction for Exercise Robots

Mohan, M.

University of Tübingen, Tübingen, Germany, October 2023, Department of Computer Science (phdthesis)

Abstract
When teaching or coaching, humans augment their words with carefully timed hand gestures, head and body movements, and facial expressions to provide feedback to their students. Robots, however, rarely utilize these nuanced cues. A minimally supervised social robot equipped with these abilities could support people in exercising, physical therapy, and learning new activities. This thesis examines how the intuitive power of human gestures can be harnessed to enhance human-robot interaction. To address this question, this research explores gesture-based interactions to expand the capabilities of a socially assistive robotic exercise coach, investigating the perspectives of both novice users and exercise-therapy experts. This thesis begins by concentrating on the user's engagement with the robot, analyzing the feasibility of minimally supervised gesture-based interactions. This exploration seeks to establish a framework in which robots can interact with users in a more intuitive and responsive manner. The investigation then shifts its focus toward the professionals who are integral to the success of these innovative technologies: the exercise-therapy experts. Roboticists face the challenge of translating the knowledge of these experts into robotic interactions. We address this challenge by developing a teleoperation algorithm that can enable exercise therapists to create customized gesture-based interactions for a robot. Thus, this thesis lays the groundwork for dynamic gesture-based interactions in minimally supervised environments, with implications for not only exercise-coach robots but also broader applications in human-robot interaction.

Project Page [BibTex]



Wear Your Heart on Your Sleeve: Users Prefer Robots with Emotional Reactions to Touch and Ambient Moods

Burns, R. B., Ojo, F., Kuchenbecker, K. J.

In Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pages: 1914-1921, Busan, South Korea, August 2023 (inproceedings)

Abstract
Robots are increasingly being developed as assistants for household, education, therapy, and care settings. Such robots can use adaptive emotional behavior to communicate warmly and effectively with their users and to encourage interest in extended interactions. However, autonomous physical robots often lack a dynamic internal emotional state, instead displaying brief, fixed emotion routines to promote specific user interactions. Furthermore, despite the importance of social touch in human communication, most commercially available robots have limited touch sensing, if any at all. We propose that users' perceptions of a social robotic system will improve when the robot provides emotional responses on both shorter and longer time scales (reactions and moods), based on touch inputs from the user. We evaluated this proposal through an online study in which 51 diverse participants watched nine randomly ordered videos (a three-by-three full-factorial design) of the koala-like robot HERA being touched by a human. Users provided the highest ratings in terms of agency, ambient activity, enjoyability, and touch perceptivity for scenarios in which HERA showed emotional reactions and either neutral or emotional moods in response to social touch gestures. Furthermore, we summarize key qualitative findings about users' preferences for reaction timing, the ability of robot mood to show persisting memory, and perception of neutral behaviors as a curious or self-aware robot.

link (url) DOI Project Page [BibTex]



Naturalistic Vibrotactile Feedback Could Facilitate Telerobotic Assembly on Construction Sites

Gong, Y., Javot, B., Lauer, A. P. R., Sawodny, O., Kuchenbecker, K. J.

In Proceedings of the IEEE World Haptics Conference (WHC), pages: 169-175, Delft, The Netherlands, July 2023 (inproceedings)

Abstract
Telerobotics is regularly used on construction sites to build large structures efficiently. A human operator remotely controls the construction robot under direct visual feedback, but visibility is often poor. Future construction robots that move autonomously will also require operator monitoring. Thus, we designed a wireless haptic feedback system to provide the operator with task-relevant mechanical information from a construction robot in real time. Our AiroTouch system uses an accelerometer to measure the robot end-effector's vibrations and uses off-the-shelf audio equipment and a voice-coil actuator to display them to the user with high fidelity. A study was conducted to evaluate how this type of naturalistic vibration feedback affects the observer's understanding of telerobotic assembly on a real construction site. Seven adults without construction experience observed a mix of manual and autonomous assembly processes both with and without naturalistic vibrotactile feedback. Qualitative analysis of their survey responses and interviews indicated that all participants had positive responses to this technology and believed it would be beneficial for construction activities.

DOI Project Page [BibTex]



Reconstructing Signing Avatars from Video Using Linguistic Priors

Forte, M., Kulits, P., Huang, C. P., Choutas, V., Tzionas, D., Kuchenbecker, K. J., Black, M. J.

In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages: 12791-12801, June 2023 (inproceedings)

Abstract
Sign language (SL) is the primary method of communication for the 70 million Deaf people around the world. Video dictionaries of isolated signs are a core SL learning tool. Replacing these with 3D avatars can aid learning and enable AR/VR applications, improving access to technology and online media. However, little work has attempted to estimate expressive 3D avatars from SL video; occlusion, noise, and motion blur make this task difficult. We address this by introducing novel linguistic priors that are universally applicable to SL and provide constraints on 3D hand pose that help resolve ambiguities within isolated signs. Our method, SGNify, captures fine-grained hand pose, facial expression, and body movement fully automatically from in-the-wild monocular SL videos. We evaluate SGNify quantitatively by using a commercial motion-capture system to compute 3D avatars synchronized with monocular video. SGNify outperforms state-of-the-art 3D body-pose- and shape-estimation methods on SL videos. A perceptual study shows that SGNify's 3D reconstructions are significantly more comprehensible and natural than those of previous methods and are on par with the source videos. Code and data are available at sgnify.is.tue.mpg.de.

pdf arXiv project code DOI [BibTex]


2022


Multi-Timescale Representation Learning of Human and Robot Haptic Interactions

Richardson, B.

University of Stuttgart, Stuttgart, Germany, December 2022, Faculty of Computer Science, Electrical Engineering and Information Technology (phdthesis)

Abstract
The sense of touch is one of the most crucial components of the human sensory system. It allows us to safely and intelligently interact with the physical objects and environment around us. By simply touching or dexterously manipulating an object, we can quickly infer a multitude of its properties. For more than fifty years, researchers have studied how humans physically explore and form perceptual representations of objects. Some of these works proposed the paradigm through which human haptic exploration is presently understood: humans use a particular set of exploratory procedures to elicit specific semantic attributes from objects. Others have sought to understand how physically measured object properties correspond to human perception of semantic attributes. Few, however, have investigated how specific explorations are perceived. As robots become increasingly advanced and more ubiquitous in daily life, they are beginning to be equipped with haptic sensing capabilities and algorithms for processing and structuring haptic information. Traditional haptics research has so far strongly influenced the introduction of haptic sensation and perception into robots but has not proven sufficient to give robots the necessary tools to become intelligent autonomous agents. The work presented in this thesis seeks to understand how single and sequential haptic interactions are perceived by both humans and robots. In our first study, we depart from the more traditional methods of studying human haptic perception and investigate how the physical sensations felt during single explorations are perceived by individual people. We treat interactions as probability distributions over a haptic feature space and train a model to predict how similarly a pair of surfaces is rated, predicting perceived similarity with a reasonable degree of accuracy. Our novel method also allows us to evaluate how individual people weigh different surface properties when they make perceptual judgments. 
The method is highly versatile and presents many opportunities for further studies into how humans form perceptual representations of specific explorations. Our next body of work explores how to improve robotic haptic perception of single interactions. We use unsupervised feature-learning methods to derive powerful features from raw robot sensor data and classify robot explorations into numerous haptic semantic property labels that were assigned from human ratings. Additionally, we provide robots with more nuanced perception by learning to predict graded ratings of a subset of properties. Our methods outperform previous attempts that all used hand-crafted features, demonstrating the limitations of such traditional approaches. To push robot haptic perception beyond evaluation of single explorations, our final work introduces and evaluates a method to give robots the ability to accumulate information over many sequential actions; our approach essentially takes advantage of object permanence by conditionally and recursively updating the representation of an object as it is sequentially explored. We implement our method on a robotic gripper platform that performs multiple exploratory procedures on each of many objects. As the robot explores objects with new procedures, it gains confidence in its internal representations and classification of object properties, thus moving closer to the marvelous haptic capabilities of humans and providing a solid foundation for future research in this domain.
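The final work described above accumulates information by conditionally and recursively updating an object's representation as it is explored. One common way to sketch such accumulation (a simplification, not the thesis's model) is a recursive Bayesian update of a belief over property labels, where each new exploration's likelihood refines the belief carried over from past ones:

```python
import numpy as np

def update_belief(prior, likelihood):
    """One recursive update: combine the belief accumulated over past
    explorations with the likelihood from a new exploration (Bayes'
    rule under a shared object identity). A simplified stand-in for
    the thesis's conditional recursive representation."""
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Belief over three candidate property labels (e.g., soft/medium/hard).
belief = np.array([1/3, 1/3, 1/3])               # uninformative prior
for obs in ([0.2, 0.3, 0.5], [0.1, 0.3, 0.6]):   # two sequential explorations
    belief = update_belief(belief, np.array(obs))
```

The qualitative behavior matches the abstract's claim: as consistent evidence accumulates across explorations, the robot's confidence in the object's properties grows.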

link (url) Project Page [BibTex]



Understanding the Influence of Moisture on Fingerpad-Surface Interactions

Nam, S.

University of Tübingen, Tübingen, Germany, October 2022, Department of Computer Science (phdthesis)

Abstract
People frequently touch objects with their fingers. The physical deformation of a finger pressing an object surface stimulates mechanoreceptors, resulting in a perceptual experience. Through interactions between perceptual sensations and motor control, humans naturally acquire the ability to manage friction under various contact conditions. Many researchers have advanced our understanding of human fingers to this point, but their complex structure and the variations in friction they experience due to continuously changing contact conditions necessitate additional study. Moisture is a primary factor that influences many aspects of the finger. In particular, sweat excreted from the numerous sweat pores on the fingerprints modifies the finger's material properties and the contact conditions between the finger and a surface. Measuring changes of the finger's moisture over time and in response to external stimuli presents a challenge for researchers, as commercial moisture sensors do not provide continuous measurements. This dissertation investigates the influence of moisture on fingerpad-surface interactions from diverse perspectives. First, we examine the extent to which moisture on the finger contributes to the sensation of stickiness during contact with glass. Second, we investigate the representative material properties of a finger at three distinct moisture levels, since the softness of human skin varies significantly with moisture. The third perspective is friction; we examine how the contact conditions, including the moisture of a finger, determine the available friction force opposing lateral sliding on glass. Fourth, we have invented and prototyped a transparent in vivo moisture sensor for the continuous measurement of finger hydration. In the first part of this dissertation, we explore how the perceptual intensity of light stickiness relates to the physical interaction between the skin and the surface. 
We conducted a psychophysical experiment in which nine participants actively pressed their index finger on a flat glass plate with a normal force close to 1.5 N and then detached it after a few seconds. A custom-designed apparatus recorded the contact force vector and the finger contact area during each interaction as well as pre- and post-trial finger moisture. After detaching their finger, participants judged the stickiness of the glass using a nine-point scale. We explored how sixteen physical variables derived from the recorded data correlate with each other and with the stickiness judgments of each participant. These analyses indicate that stickiness perception mainly depends on the pre-detachment pressing duration, the time taken for the finger to detach, and the impulse in the normal direction after the normal force changes sign; finger-surface adhesion seems to build with pressing time, causing a larger normal impulse during detachment and thus a more intense stickiness sensation. We additionally found a strong between-subjects correlation between maximum real contact area and peak pull-off force, as well as between finger moisture and impulse. When a fingerpad presses into a hard surface, the development of the contact area depends on the pressing force and speed. Importantly, it also varies with the finger's moisture, presumably because hydration changes the tissue's material properties. Therefore, for the second part of this dissertation, we collected data from one finger repeatedly pressing a glass plate under three moisture conditions, and we constructed a finite element model that we optimized to simulate the same three scenarios. We controlled the moisture of the subject's finger to be dry, natural, or moist and recorded 15 pressing trials in each condition. The measurements include normal force over time plus finger-contact images that are processed to yield gross contact area. 
We defined the axially symmetric 3D model's lumped parameters to include an SLS-Kelvin model (spring in series with parallel spring and damper) for the bulk tissue, plus an elastic epidermal layer. Particle swarm optimization was used to find the parameter values that cause the simulation to best match the trials recorded in each moisture condition. The results show that the softness of the bulk tissue reduces as the finger becomes more hydrated. The epidermis of the moist finger model is softest, while the natural finger model has the highest viscosity. In the third part of this dissertation, we focused on friction between the fingerpad and the surface. The magnitude of finger-surface friction available at the onset of full slip is crucial for understanding how the human hand can grip and manipulate objects. Related studies revealed the significance of moisture and contact time in enhancing friction. Recent research additionally indicated that surface temperature may also affect friction. However, previously reported friction coefficients have been measured only in dynamic contact conditions, where the finger is already sliding across the surface. In this study, we repeatedly measured the initial friction before full slip under eight contact conditions with low and high finger moisture, pressing time, and surface temperature. Moisture and pressing time both independently increased finger-surface friction across our population of twelve participants, and the effect of surface temperature depended on the contact conditions. Furthermore, detailed analysis of the recorded measurements indicates that micro stick-slip during the partial-slip phase contributes to enhanced friction. For the fourth and final part of this dissertation, we designed a transparent moisture sensor for continuous measurement of fingerpad hydration. 
Because various stimuli cause the sweat pores on fingerprints to excrete sweat, many researchers want to quantify the flow and assess its impact on the formation of the contact area. Unfortunately, the most popular sensor for skin hydration is opaque and does not offer continuous measurements. Our capacitive moisture sensor consists of a pair of inter-digital electrodes covered by an insulating layer, enabling impedance measurements across a wide frequency range. This proposed sensor is made entirely of transparent materials, which allows us to simultaneously measure the finger's contact area. Electrochemical impedance spectroscopy identifies the equivalent electrical circuit and the electrical component parameters that are affected by the amount of moisture present on the surface of the sensor. Most notably, the impedance at 1 kHz seems to best reflect the relative amount of sweat.
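The second part of the dissertation fits an SLS-Kelvin lumped model: a spring in series with a parallel spring and damper. A minimal forward (Euler) simulation of that element's stress relaxation under a held displacement, with illustrative parameter values rather than the fitted finger values:

```python
def sls_kelvin_force(x_hist, dt, k1=2.0, k2=1.0, c=0.5):
    """Force response of an SLS-Kelvin element: series spring k1
    feeding a Kelvin-Voigt branch (spring k2 parallel to damper c).
    Internal state x2 is the Kelvin branch's displacement.
    Parameter values are illustrative, not the fitted finger values."""
    x2, forces = 0.0, []
    for x in x_hist:
        f = k1 * (x - x2)              # force through the series spring
        x2 += dt * (f - k2 * x2) / c   # Kelvin branch: c*x2' = f - k2*x2
        forces.append(f)
    return forces

# Step displacement held constant: the force peaks at k1*x, then
# relaxes toward the equilibrium value k1*k2/(k1+k2)*x.
forces = sls_kelvin_force([1.0] * 2000, dt=0.01)
```

This relaxation-under-pressing behavior is the kind of response the particle swarm optimization would match against the recorded trials in each moisture condition.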

DOI Project Page [BibTex]



Towards Semi-Automated Pleural Cavity Access for Pneumothorax in Austere Environments

L’Orsa, R., Lama, S., Westwick, D., Sutherland, G., Kuchenbecker, K. J.

In Proceedings of the International Astronautical Congress (IAC), pages: 1-7, Paris, France, September 2022 (inproceedings)

Abstract
Pneumothorax, a condition where injury or disease introduces air between the chest wall and lungs, can impede lung function and lead to respiratory failure and/or obstructive shock. Chest trauma from dynamic loads, hypobaric exposure from extravehicular activity, and pulmonary inflammation from celestial dust exposures could potentially cause pneumothoraces during spaceflight with or without exacerbation from deconditioning. On Earth, emergent cases are treated with chest tube insertion (tube thoracostomy, TT) when available, or needle decompression (ND) when not (i.e., pre-hospital). However, ND has high failure rates (up to 94%), and TT has high complication rates (up to 37.9%), especially when performed by inexperienced or intermittent operators. Thus neither procedure is ideal for a pure just-in-time training or skill refreshment approach, and both may require adjuncts for safe inclusion in Level of Care IV (e.g., short duration lunar orbit) or V (e.g., Mars transit) missions. Insertional complications are of particular concern since they cause inadvertent tissue damage that, while surgically repairable in an operating room, could result in (preventable) fatality in a spacecraft or other isolated, confined, or extreme (ICE) environments. Tools must be positioned and oriented correctly to avoid accidental insertion into critical structures, and they must be inserted no further than the thin membrane lining the inside of the rib cage (i.e., the parietal pleura). Operators identify pleural puncture via loss-of-resistance sensations on the tool during advancement, but experienced surgeons anecdotally describe a wide range of membrane characteristics: robust tissues require significant force to perforate, while fragile tissues deliver little-to-no haptic sensation when pierced. Both extremes can lead to tool overshoot and may be representative of astronaut tissues at the beginning (healthy) and end (deconditioned) of long duration exploration class missions. 
Given uncertainty surrounding physician astronaut selection criteria, skill retention, and tissue condition, an adjunct for improved insertion accuracy would be of value. We describe experiments conducted with an intelligent prototype sensorized system aimed at semi-automating tool insertion into the pleural cavity. The assembly would integrate with an in-mission medical system and could be tailored to fully complement an autonomous medical response agent. When coupled with minimal just-in-time training, it has the potential to bestow expert pleural access skills on non-expert operators without the use of ground resources, in both emergent and elective treatment scenarios.

Project Page [BibTex]



Wrist-Squeezing Force Feedback Improves Accuracy and Speed in Robotic Surgery Training

Machaca, S., Cao, E., Chi, A., Adrales, G., Kuchenbecker, K. J., Brown, J. D.

In Proceedings of the IEEE RAS/EMBS International Conference for Biomedical Robotics and Biomechatronics (BioRob), pages: 1-8, Seoul, Korea, August 2022 (inproceedings)

Abstract
Current robotic minimally invasive surgery (RMIS) platforms provide surgeons with no haptic feedback of the robot's physical interactions. This limitation forces surgeons to rely heavily on visual feedback and can make it challenging for surgical trainees to manipulate tissue gently. Prior research has demonstrated that haptic feedback can increase task accuracy in RMIS training. However, it remains unclear whether these improvements represent a fundamental improvement in skill, or if they simply stem from re-prioritizing accuracy over task completion time. In this study, we provide haptic feedback of the force applied by the surgical instruments using custom wrist-squeezing devices. We hypothesize that individuals receiving haptic feedback will increase accuracy (produce less force) while increasing their task completion time, compared to a control group receiving no haptic feedback. To test this hypothesis, N=21 novice participants were asked to repeatedly complete a ring rollercoaster surgical training task as quickly as possible. Results show that participants receiving haptic feedback apply significantly less force (0.67 N) than the control group, and they complete the task no faster or slower than the control group after twelve repetitions. Furthermore, participants in the feedback group decreased their task completion times significantly faster (7.68%) than participants in the control group (5.26%). This form of haptic feedback, therefore, has the potential to help trainees improve their technical accuracy without compromising speed.

DOI Project Page [BibTex]



Larger Skin-Surface Contact Through a Fingertip Wearable Improves Roughness Perception

Gueorguiev, D., Javot, B., Spiers, A., Kuchenbecker, K. J.

In Haptics: Science, Technology, Applications, pages: 171-179, Lecture Notes in Computer Science, 13235, (Editors: Seifi, Hasti and Kappers, Astrid M. L. and Schneider, Oliver and Drewing, Knut and Pacchierotti, Claudio and Abbasimoshaei, Alireza and Huisman, Gijs and Kern, Thorsten A.), Springer, Cham, 13th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications (EuroHaptics 2022), May 2022 (inproceedings)

Abstract
With the aim of creating wearable haptic interfaces that allow the performance of everyday tasks, we explore how differently designed fingertip wearables change the sensory threshold for tactile roughness perception. Study participants performed the same two-alternative forced-choice roughness task with a bare finger and wearing three flexible fingertip covers: two with a square opening (64 and 36 mm2, respectively) and the third with no opening. The results showed that adding the large opening improved the 75% JND by a factor of two compared to the fully covered finger: the larger the skin-surface contact area, the better the roughness perception. Overall, the results show that even partial skin-surface contact through a fingertip wearable improves roughness perception, which opens design opportunities for haptic wearables that preserve natural touch.

DOI [BibTex]



Robot, Pass Me the Tool: Handle Visibility Facilitates Task-Oriented Handovers

Ortenzi, V., Filipovica, M., Abdlkarim, D., Pardi, T., Takahashi, C., Wing, A. M., Luca, M. D., Kuchenbecker, K. J.

In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI), pages: 256-264, March 2022, Valerio Ortenzi and Maija Filipovica contributed equally to this publication. (inproceedings)

Abstract
A human handing over an object modulates their grasp and movements to accommodate their partner's capabilities, which greatly increases the likelihood of a successful transfer. State-of-the-art robot behavior lacks this level of user understanding, resulting in interactions that force the human partner to shoulder the burden of adaptation. This paper investigates how visual occlusion of the object being passed affects the subjective perception and quantitative performance of the human receiver. We performed an experiment in virtual reality where seventeen participants were tasked with repeatedly reaching to take a tool from the hand of a robot; each of the three tested objects (hammer, screwdriver, scissors) was presented in a wide variety of poses. We carefully analysed the user's hand and head motions, the time to grasp the object, and the chosen grasp location, as well as participants' ratings of the grasp they just performed. Results show that initial visibility of the handle significantly increases the reported holdability and immediate usability of a tool. Furthermore, a robot that offers objects so that their handles are more occluded forces the receiver to spend more time in planning and executing the grasp and also lowers the probability that the tool will be grasped by the handle. Together these findings indicate that robots can more effectively support their human work partners by increasing the visibility of the intended grasp location of objects being passed.

DOI Project Page [BibTex]


2021


Sensorimotor-Inspired Tactile Feedback and Control Improve Consistency of Prosthesis Manipulation in the Absence of Direct Vision

Thomas, N., Fazlollahi, F., Brown, J. D., Kuchenbecker, K. J.

In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages: 6174-6181, Prague, Czech Republic, September 2021 (inproceedings)

Abstract
The lack of haptically aware upper-limb prostheses forces amputees to rely largely on visual cues to complete activities of daily living. In contrast, non-amputees inherently rely on conscious haptic perception and automatic tactile reflexes to govern volitional actions in situations that do not allow for constant visual attention. We therefore propose a myoelectric prosthesis system that reflects these concepts to aid manipulation performance without direct vision. To implement this design, we constructed two fabric-based tactile sensors that measure contact location along the palmar and dorsal sides of the prosthetic fingers and grasp pressure at the tip of the prosthetic thumb. Inspired by the natural sensorimotor system, we use the measurements from these sensors to provide vibrotactile feedback of contact location and implement a tactile grasp controller with reflexes that prevent over-grasping and object slip. We compare this tactile system to a standard myoelectric prosthesis in a challenging reach-to-pick-and-place task conducted without direct vision; 17 non-amputee adults took part in this single-session between-subjects study. Participants in the tactile group achieved more consistent high performance compared to participants in the standard group. These results show that adding contact-location feedback and reflex control increases the consistency with which objects can be grasped and moved without direct vision in upper-limb prosthetics.
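The abstract describes a tactile grasp controller with reflexes that prevent over-grasping and object slip. A toy sketch of such reflex logic (hypothetical, not the paper's controller): one control tick that tightens the grip when slip is sensed and clamps the grip command so the hand never over-grasps.

```python
def reflex_grip(grip, slip, step=0.5, max_grip=5.0):
    """One tick of a simplified tactile reflex in the spirit of the
    paper's controller (not its actual implementation). All values
    are illustrative.
    grip: current grip command; slip: True if the tactile sensors
    report the object slipping; returns the updated grip command."""
    if slip:
        grip += step                 # slip reflex: squeeze harder
    return min(grip, max_grip)       # over-grasp reflex: cap the command

g = 1.0
g = reflex_grip(g, slip=True)    # object slipping -> grip tightens
g = reflex_grip(g, slip=False)   # stable contact -> grip held
```

The design point such reflexes illustrate is the one the paper makes: low-level tactile loops can react to contact events faster and more reliably than a user relying on vision alone.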

DOI Project Page [BibTex]


HuggieBot: An Interactive Hugging Robot With Visual and Haptic Perception

Block, A. E.

ETH Zürich, Zürich, August 2021, Department of Computer Science (phdthesis)

Abstract
Hugs are one of the first forms of contact and affection humans experience. Receiving a hug is one of the best ways to feel socially supported, and the lack of social touch can have severe adverse effects on an individual's well-being. Due to the prevalence and health benefits of hugging, roboticists are interested in creating robots that can hug humans as seamlessly as humans hug other humans. However, hugs are complex affective interactions that need to adapt to the height, body shape, and preferences of the hugging partner, and they often include intra-hug gestures like squeezes. This dissertation aims to create a series of hugging robots that use visual and haptic perception to provide enjoyable interactive hugs. Each of the four presented HuggieBot versions is evaluated by measuring how users emotionally and behaviorally respond to hugging it; HuggieBot 4.0 is explicitly compared to a human hugging partner using physiological measures. Building on research both within and outside of human-robot interaction (HRI), this thesis proposes eleven tenets of natural and enjoyable robotic hugging. These tenets were iteratively crafted through a design process combining user feedback and experimenter observation, and they were evaluated through user studies. A good hugging robot should (1) be soft, (2) be warm, (3) be human-sized, (4) autonomously invite the user for a hug when it detects someone in its personal space, and then it should wait for the user to begin walking toward it before closing its arms to ensure a consensual and synchronous hugging experience. It should also (5) adjust its embrace to the user's size and position, (6) reliably release when the user wants to end the hug, and (7) perceive the user's height and adapt its arm positions accordingly to comfortably fit around the user at appropriate body locations. 
Finally, a hugging robot should (8) accurately detect and classify gestures applied to its torso in real time, regardless of the user's hand placement, (9) respond quickly to their intra-hug gestures, (10) adopt a gesture paradigm that blends user preferences with slight variety and spontaneity, and (11) occasionally provide unprompted, proactive affective social touch to the user through intra-hug gestures. We believe these eleven tenets are essential to delivering high-quality robot hugs. Their presence results in a hug that pleases the user, and their absence results in a hug that is likely to be inadequate. We present these tenets as guidelines for future hugging robot creators to follow when designing new hugging robots to ensure user acceptance. We tested the four versions of HuggieBot through six user studies. First, we analyzed data collected in a previous study with a modified Willow Garage Personal Robot 2 (PR2) to evaluate human responses to different robot physical characteristics and hugging behaviors. Participants experienced and evaluated twelve hugs with the robot, divided into three randomly ordered trials that focused on physical robot characteristics (single factor, three levels) and nine randomly ordered trials with low, medium, and high hug pressure and duration (two factors, three levels each). Second, we created an entirely new robotic platform, HuggieBot 2.0, according to our first six tenets. The new platform features a soft, warm, inflated body (HuggieChest) and uses visual and haptic sensing to deliver closed-loop hugging. We first verified the outward appeal of this platform compared to the previous PR2-based HuggieBot 1.0 via an online video-watching study involving 117 users. We then conducted an in-person experiment in which 32 users each exchanged eight hugs with HuggieBot 2.0, experiencing all combinations of visual hug initiation, haptic sizing, and haptic releasing. 
We then refine the original fourth tenet (visually perceive its user) and present the remaining five tenets for designing interactive hugging robots; we validate the full list of eleven tenets through more in-person studies with our custom robot. To enable perceptive and pleasing autonomous robot behavior, we investigated robot responses to four human intra-hug gestures: holding, rubbing, patting, and squeezing. The robot's inflated torso's microphone and pressure sensor collected data of 32 people repeatedly demonstrating these gestures, which were used to develop a perceptual algorithm that classifies user actions with 88% accuracy. From user preferences, we created a probabilistic behavior algorithm that chooses robot responses in real time. We implemented improvements to the robot platform to create a third version of our robot, HuggieBot 3.0. We then validated its gesture perception system and behavior algorithm in a fifth user study with 16 users. Finally, we refined the quality and comfort of the embrace by adjusting the joint torques and joint angles of the closed pose position, we further improved the robot's visual perception to detect changes in user approach, we upgraded the robot's response to users who do not press on its back, and we had the robot respond to all intra-hug gestures with squeezes to create our final version of the robotic platform, HuggieBot 4.0. In our sixth user study, we investigated the emotional and physiological effects of hugging a robot compared to the effects of hugging a friendly but unfamiliar person. We continuously monitored participant heart rate and collected saliva samples at seven time points across the 3.5-hour study to measure the temporal evolution of cortisol and oxytocin. We used an adapted Trier Social Stress Test (TSST) protocol to reliably and ethically induce stress in the participants. They then experienced one of five different hug intervention methods before all interacting with HuggieBot 4.0. 
The results of these six user studies validated our eleven hugging tenets and informed the iterative design of HuggieBot. We see that users enjoy robot softness, robot warmth, and being physically squeezed by the robot. Users dislike being released too soon from a hug and equally dislike being held by the robot for too long. Adding haptic reactivity definitively improves user perception of a hugging robot; the robot's responses and proactive intra-hug gestures were greatly enjoyed. In our last study, we learned that HuggieBot can positively affect users on a physiological level and is somewhat comparable to hugging a person. Participants have more favorable opinions about hugging robots after prolonged interaction with HuggieBot in all of our research studies.
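The probabilistic behavior algorithm mentioned in this abstract (choosing robot responses from user preferences in real time) can be sketched, purely illustratively, as weighted random sampling. The response names and weights below are hypothetical, not HuggieBot's actual parameters.

```python
import random

def choose_response(preferences, rng=random):
    # Sample one response; options that users preferred more strongly are
    # proportionally more likely, which blends preference with variety.
    total = sum(preferences.values())
    r = rng.random() * total
    acc = 0.0
    for response, weight in preferences.items():
        acc += weight
        if r <= acc:
            return response
    return response  # guard against floating-point round-off

# Hypothetical preference weights for reacting to a user squeeze.
prefs = {"squeeze_back": 0.6, "rub": 0.25, "pat": 0.15}
print(choose_response(prefs))
```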

DOI Project Page [BibTex]


Optimal Grasp Selection, and Control for Stabilising a Grasped Object, with Respect to Slippage and External Forces

Pardi, T., Ghalamzan E., A., Ortenzi, V., Stolkin, R.

In Proceedings of the IEEE-RAS International Conference on Humanoid Robots (Humanoids 2020), pages: 429-436, Munich, Germany, July 2021 (inproceedings)

Abstract
This paper explores the problem of how to grasp an object, and then control a robot arm so as to stabilise that object, under conditions where: i) there is significant slippage between the object and the robot's fingers; and ii) the object is perturbed by external forces. For an n degrees of freedom (dof) robot, we treat the robot plus grasped object as an (n+1) dof system, where the grasped object can rotate between the robot's fingers via slippage. Firstly, we propose an optimisation-based algorithm that selects the best grasping location from a set of given candidates. The best grasp is one that will yield the minimum effort for the arm to keep the object in equilibrium against external perturbations. Secondly, we propose a controller which brings the (n+1) dof system to a task configuration, and then maintains that configuration robustly against matched and unmatched disturbances. To minimise slippage between gripper and grasped object, a sufficient criterion for selecting the control coefficients is proposed by adopting a set of inequalities, which are obtained solving a non-linear minimisation problem, dependant on the static friction estimation. We demonstrate our approach on a simulated (2+1) planar robot, comprising two joints of the robot arm, plus the additional passive joint which is formed by the slippage between the object and the robot's fingers. We also present an experiment with a real robot arm, grasping a flat object between the fingers of a parallel jaw gripper.
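The grasp-selection criterion described above (minimum effort to keep the object in equilibrium against perturbations) can be illustrated with a toy 2-link planar arm: for each candidate configuration, compute the static joint torques tau = J^T f needed to resist a set of external forces, and keep the candidate with the smallest worst-case torque. The link lengths, candidates, and forces below are invented, and this sketch greatly simplifies the paper's optimisation.

```python
import math

def jacobian_2link(q1, q2, l1=0.4, l2=0.3):
    # 2x2 Jacobian of a planar 2-link arm (tip velocity vs. joint rates)
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [l1 * c1 + l2 * c12, l2 * c12]]

def static_torque(J, f):
    # tau = J^T f: joint torques that balance an external force f at the tip
    return [J[0][0] * f[0] + J[1][0] * f[1],
            J[0][1] * f[0] + J[1][1] * f[1]]

def best_grasp(candidates, perturbations):
    # Keep the candidate whose worst-case resisting torque is smallest.
    def worst_effort(q):
        J = jacobian_2link(*q)
        return max(max(abs(t) for t in static_torque(J, f))
                   for f in perturbations)
    return min(candidates, key=worst_effort)

candidates = [(0.2, 1.0), (0.8, 0.5), (1.2, -0.4)]      # joint angles (rad)
perturbations = [(5.0, 0.0), (0.0, 5.0), (-3.0, -3.0)]  # external forces (N)
print(best_grasp(candidates, perturbations))
```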

DOI [BibTex]


PrendoSim: Proxy-Hand-Based Robot Grasp Generator

Abdlkarim, D., Ortenzi, V., Pardi, T., Filipovica, M., Wing, A. M., Kuchenbecker, K. J., Di Luca, M.

In ICINCO 2021: Proceedings of the International Conference on Informatics in Control, Automation and Robotics, pages: 60-68, (Editors: Gusikhin, Oleg and Nijmeijer, Henk and Madani, Kurosh), SciTePress, Sétubal, 18th International Conference on Informatics in Control, Automation and Robotics (ICINCO 2021), July 2021 (inproceedings)

Abstract
The synthesis of realistic robot grasps in a simulated environment is pivotal in generating datasets that support sim-to-real transfer learning. In a step toward achieving this goal, we propose PrendoSim, an open-source grasp generator based on a proxy-hand simulation that employs NVIDIA's physics engine (PhysX) and the recently released articulated-body objects developed by Unity (https://prendosim.github.io). We present the implementation details, the method used to generate grasps, the approach to operationally evaluate stability of the generated grasps, and examples of grasps obtained with two different grippers (a parallel jaw gripper and a three-finger hand) grasping three objects selected from the YCB dataset (a pair of scissors, a hammer, and a screwdriver). Compared to simulators proposed in the literature, PrendoSim balances grasp realism and ease of use, displaying an intuitive interface and enabling the user to produce a large and varied dataset of stable grasps.

DOI Project Page [BibTex]


Ungrounded Vari-Dimensional Tactile Fingertip Feedback for Virtual Object Interaction

Young, E. M., Kuchenbecker, K. J.

In CHI ’21: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pages: 217, ACM, New York, NY, Conference on Human Factors in Computing Systems (CHI 2021), May 2021 (inproceedings)

Abstract
Compared to grounded force feedback, providing tactile feedback via a wearable device can free the user and broaden the potential applications of simulated physical interactions. However, neither the limitations nor the full potential of tactile-only feedback have been precisely examined. Here we investigate how the dimensionality of cutaneous fingertip feedback affects user movements and virtual object recognition. We combine a recently invented 6-DOF fingertip device with motion tracking, a head-mounted display, and novel contact-rendering algorithms to enable a user to tactilely explore immersive virtual environments. We evaluate rudimentary 1-DOF, moderate 3-DOF, and complex 6-DOF tactile feedback during shape discrimination and mass discrimination, also comparing to interactions with real objects. Results from 20 naive study participants show that higher-dimensional tactile feedback may indeed allow completion of a wider range of virtual tasks, but that feedback dimensionality surprisingly does not greatly affect the exploratory techniques employed by the user.

link (url) DOI Project Page [BibTex]


Robot Interaction Studio: A Platform for Unsupervised HRI

Mohan, M., Nunez, C. M., Kuchenbecker, K. J.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Xian, China, May 2021 (inproceedings)

Abstract
Robots hold great potential for supporting exercise and physical therapy, but such systems are often cumbersome to set up and require expert supervision. We aim to solve these concerns by combining Captury Live, a real-time markerless motion-capture system, with a Rethink Robotics Baxter Research Robot to create the Robot Interaction Studio. We evaluated this platform for unsupervised human-robot interaction (HRI) through a 75-minute-long user study with seven adults who were given minimal instructions and no feedback about their actions. The robot used sounds, facial expressions, facial colors, head motions, and arm motions to sequentially present three categories of cues in randomized order while constantly rotating its face screen to look at the user. Analysis of the captured user motions shows that the cue type significantly affected the distance subjects traveled and the amount of time they spent within the robot’s reachable workspace, in alignment with the design of the cues. Heat map visualizations of the recorded user hand positions confirm that users tended to mimic the robot’s arm poses. Despite some initial frustration, taking part in this study did not significantly change user opinions of the robot. We reflect on the advantages of the proposed approach to unsupervised HRI as well as the limitations and possible future extensions of our system.

DOI Project Page [BibTex]


The Six Hug Commandments: Design and Evaluation of a Human-Sized Hugging Robot with Visual and Haptic Perception

Block, A. E., Christen, S., Gassert, R., Hilliges, O., Kuchenbecker, K. J.

In HRI ’21: Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, pages: 380-388, ACM, New York, NY, USA, ACM/IEEE International Conference on Human-Robot Interaction (HRI 2021), March 2021 (inproceedings)

Abstract
Receiving a hug is one of the best ways to feel socially supported, and the lack of social touch can have severe negative effects on an individual's well-being. Based on previous research both within and outside of HRI, we propose six tenets (''commandments'') of natural and enjoyable robotic hugging: a hugging robot should be soft, be warm, be human sized, visually perceive its user, adjust its embrace to the user's size and position, and reliably release when the user wants to end the hug. Prior work validated the first two tenets, and the final four are new. We followed all six tenets to create a new robotic platform, HuggieBot 2.0, that has a soft, warm, inflated body (HuggieChest) and uses visual and haptic sensing to deliver closed-loop hugging. We first verified the outward appeal of this platform in comparison to the previous PR2-based HuggieBot 1.0 via an online video-watching study involving 117 users. We then conducted an in-person experiment in which 32 users each exchanged eight hugs with HuggieBot 2.0, experiencing all combinations of visual hug initiation, haptic sizing, and haptic releasing. The results show that adding haptic reactivity definitively improves user perception of a hugging robot, largely verifying our four new tenets and illuminating several interesting opportunities for further improvement.

Block21-HRI-Commandments.pdf DOI Project Page [BibTex]

2020


Delivering Expressive and Personalized Fingertip Tactile Cues

Young, E. M.

University of Pennsylvania, Philadelphia, PA, December 2020, Department of Mechanical Engineering and Applied Mechanics (phdthesis)

Abstract
Wearable haptic devices have seen growing interest in recent years, but providing realistic tactile feedback is not a challenge that will soon be solved. Daily interactions with physical objects elicit complex sensations at the fingertips. Furthermore, human fingertips exhibit a broad range of physical dimensions and perceptive abilities, adding increased complexity to the task of simulating haptic interactions in a compelling manner. However, as the applications of wearable haptic feedback grow, concerns of wearability and generalizability often persuade tactile device designers to simplify the complexities associated with rendering realistic haptic sensations. As such, wearable devices tend to be optimized for particular uses and average users, rendering only the most salient dimensions of tactile feedback for a given task and assuming all users interpret the feedback in a similar fashion. We propose that providing more realistic haptic feedback will require in-depth examinations of higher-dimensional tactile cues and personalization of these cues for individual users. In this thesis, we aim to provide hardware- and software-based solutions for rendering more expressive and personalized tactile cues to the fingertip. We first explore the idea of rendering six-degree-of-freedom (6-DOF) tactile fingertip feedback via a wearable device, such that any possible fingertip interaction with a flat surface can be simulated. We highlight the potential of parallel continuum manipulators (PCMs) to meet the requirements of such a device, and we refine the design of a PCM for providing fingertip tactile cues. We construct a manually actuated prototype to validate the concept, and then continue to develop a motorized version, named the Fingertip Puppeteer, or Fuppeteer for short.
Various error reduction techniques are presented, and the resulting device is evaluated by analyzing system responses to step inputs, measuring forces rendered to a biomimetic finger sensor, and comparing intended sensations to perceived sensations of twenty-four participants in a human-subject study. Once the functionality of the Fuppeteer is validated, we begin to explore how the device can be used to broaden our understanding of higher-dimensional tactile feedback. One such application is using the 6-DOF device to simulate different lower-dimensional devices. We evaluate 1-, 3-, and 6-DOF tactile feedback during shape discrimination and mass discrimination in a virtual environment, also comparing to interactions with real objects. Results from 20 naive study participants show that higher-dimensional tactile feedback may indeed allow completion of a wider range of virtual tasks, but that feedback dimensionality surprisingly does not greatly affect the exploratory techniques employed by the user. To address alternative approaches to improving tactile rendering in scenarios where low-dimensional tactile feedback is appropriate, we then explore the idea of personalizing feedback for a particular user. We present two software-based approaches to personalize an existing data-driven haptic rendering algorithm for fingertips of different sizes. We evaluate our algorithms in the rendering of pre-recorded tactile sensations onto rubber casts of six different fingertips as well as onto the real fingertips of 13 human participants, all via a 3-DOF wearable device. Results show that both personalization approaches significantly reduced force error magnitudes and improved realism ratings.

Project Page [BibTex]


Synchronicity Trumps Mischief in Rhythmic Human-Robot Social-Physical Interaction

Fitter, N. T., Kuchenbecker, K. J.

In Robotics Research, 10, pages: 269-284, Springer Proceedings in Advanced Robotics, (Editors: Amato, Nancy M. and Hager, Greg and Thomas, Shawna and Torres-Torriti, Miguel), Springer, Cham, 18th International Symposium on Robotics Research (ISRR), 2020 (inproceedings)

Abstract
Hand-clapping games and other forms of rhythmic social-physical interaction might help foster human-robot teamwork, but the design of such interactions has scarcely been explored. We leveraged our prior work to enable the Rethink Robotics Baxter Research Robot to competently play one-handed tempo-matching hand-clapping games with a human user. To understand how such a robot’s capabilities and behaviors affect user perception, we created four versions of this interaction: the hand clapping could be initiated by either the robot or the human, and the non-initiating partner could be either cooperative, yielding synchronous motion, or mischievously uncooperative. Twenty adults tested two clapping tempos in each of these four interaction modes in a random order, rating every trial on standardized scales. The study results showed that having the robot initiate the interaction gave it a more dominant perceived personality. Despite previous results on the intrigue of misbehaving robots, we found that moving synchronously with the robot almost always made the interaction more enjoyable, less mentally taxing, less physically demanding, and lower effort for users than asynchronous interactions caused by robot or human mischief. Taken together, our results indicate that cooperative rhythmic social-physical interaction has the potential to strengthen human-robot partnerships.

DOI [BibTex]


Modulating Physical Interactions in Human-Assistive Technologies

Hu, S.

University of Pennsylvania, Philadelphia, PA, August 2020, Department of Mechanical Engineering and Applied Mechanics (phdthesis)

Abstract
Many mechanical devices and robots operate in home environments, and they offer rich experiences and valuable functionalities for human users. When these devices interact physically with humans, additional care has to be taken in both hardware and software design to ensure that the robots provide safe and meaningful interactions. It is advantageous for the robots to be customizable so that users can tinker with them for their specific needs. There are many robot platforms that strive toward these goals, but the most successful robots in our world are either separated from humans (such as in factories and warehouses) or occupy the same space as humans but do not offer physical interactions (such as cleaning robots). In this thesis, we envision a suite of assistive robotic devices that assist people in their daily, physical tasks. Specifically, we begin with a hybrid force display that combines a cable, a brake, and a motor, which offers safe and powerful force output with a large workspace. Virtual haptic elements, including free space, constant force, springs, and dampers, can be simulated by this device. We then adapt the hybrid mechanism and develop the Gait Propulsion Trainer (GPT) for stroke rehabilitation, where we aim to reduce propulsion asymmetry by applying resistance at the user’s pelvis during the unilateral-stance phase of gait. Sensors underneath the user’s shoes and a wireless communication module are added to precisely control the timing of the resistance force. To address the effort of parameter tuning in determining the optimal training scheme, we then develop a learning-from-demonstration (LfD) framework where robot behavior can be obtained from data, thus bypassing some of the tuning effort while enabling customization and generalization for different task situations. This LfD framework is evaluated in simulation and in a user study, and results show improved objective performance and human perception of the robot.
Finally, we apply the LfD framework in an upper-limb therapy setting, where the robot directly learns the force output from a therapist when supporting stroke survivors in various physical exercises. Six stroke survivors and an occupational therapist provided demonstrations and tested the autonomous robot behaviors in a user study, and we obtain preliminary insights toward making the robot more intuitive and more effective for both therapists and clients of different impairment levels. This thesis thus considers both hardware and software design for robotic platforms, and we explore both direct and indirect force modulation for human-assistive technologies.
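Learning a force output from therapist demonstrations, as described above, can be caricatured as kernel-weighted regression over demonstrated (state, force) pairs. This sketch is a generic stand-in, not the thesis's actual LfD framework, and all numbers are invented.

```python
import math

def lfd_force(state, demos, bandwidth=0.1):
    # Gaussian-kernel weighted average of demonstrated forces near the
    # query state; nearby demonstrations dominate the prediction.
    weights = [math.exp(-(state - s) ** 2 / (2 * bandwidth ** 2))
               for s, _ in demos]
    total = sum(weights)
    return sum(w * f for w, (_, f) in zip(weights, demos)) / total

# Demonstrated (state, force) pairs, e.g. arm position vs. support force.
demos = [(0.0, 1.0), (0.5, 2.0), (1.0, 3.0)]
print(round(lfd_force(0.5, demos), 3))
```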

Hu20-PHDD-Modulating Project Page [BibTex]


Calibrating a Soft ERT-Based Tactile Sensor with a Multiphysics Model and Sim-to-real Transfer Learning

Lee, H., Park, H., Serhat, G., Sun, H., Kuchenbecker, K. J.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pages: 1632-1638, IEEE International Conference on Robotics and Automation (ICRA 2020), May 2020 (inproceedings)

Abstract
Tactile sensors based on electrical resistance tomography (ERT) have shown many advantages for implementing a soft and scalable whole-body robotic skin; however, calibration is challenging because pressure reconstruction is an ill-posed inverse problem. This paper introduces a method for calibrating soft ERT-based tactile sensors using sim-to-real transfer learning with a finite element multiphysics model. The model is composed of three simple models that together map contact pressure distributions to voltage measurements. We optimized the model parameters to reduce the gap between simulation and reality. As a preliminary study, we discretized the sensing points into a 6 by 6 grid and synthesized single- and two-point contact datasets from the multiphysics model. We obtained another single-point dataset using the real sensor with the same contact location and force used in the simulation. Our new deep neural network architecture uses a de-noising network to capture the simulation-to-real gap and a reconstruction network to estimate contact force from voltage measurements. The proposed approach achieved an 82% localization hit rate and 0.51 N force-estimation error in single-contact tests, and a 78.5% localization hit rate and 5.0 N force-estimation error in two-point contact tests. We believe this new calibration method has the potential to improve the sensing performance of ERT-based tactile sensors.
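Because pressure reconstruction from ERT voltage measurements is an ill-posed inverse problem, a standard baseline (not the paper's DNN pipeline) is Tikhonov-regularized least squares over a linear sensitivity model: p_hat = (A^T A + lam I)^(-1) A^T v. The sensitivity matrix and values below are invented purely for illustration.

```python
import random

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    Bt = transpose(B)
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

def solve(A, b):
    # Gaussian elimination with partial pivoting on the augmented matrix.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def tikhonov(A, v, lam):
    # p_hat = (A^T A + lam I)^(-1) A^T v : regularization tames ill-posedness.
    At = transpose(A)
    AtA = matmul(At, A)
    for i in range(len(AtA)):
        AtA[i][i] += lam
    return solve(AtA, matvec(At, v))

# Toy sensitivity matrix: 3 pressure taxels -> 4 electrode voltages.
A = [[1.0, 0.2, 0.0],
     [0.3, 1.0, 0.3],
     [0.0, 0.2, 1.0],
     [0.5, 0.5, 0.5]]
p_true = [0.0, 2.0, 0.5]
random.seed(0)
v = [vi + random.gauss(0.0, 0.01) for vi in matvec(A, p_true)]
p_hat = tikhonov(A, v, lam=1e-2)
print([round(x, 2) for x in p_hat])
```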

DOI Project Page [BibTex]


An ERT-Based Robotic Skin with Sparsely Distributed Electrodes: Structure, Fabrication, and DNN-Based Signal Processing

Park, K., Park, H., Lee, H., Park, S., Kim, J.

In 2020 IEEE International Conference on Robotics and Automation (ICRA 2020), pages: 1617-1624, IEEE, Piscataway, NJ, IEEE International Conference on Robotics and Automation (ICRA 2020), May 2020 (inproceedings)

Abstract
Electrical resistance tomography (ERT) has previously been utilized to develop a large-scale tactile sensor because this approach enables the estimation of the conductivity distribution among the electrodes based on a known physical model. Such a sensor made with a stretchable material can conform to a curved surface. However, this sensor cannot fully cover a cylindrical surface because in such a configuration, the edges of the sensor must meet each other. The electrode configuration becomes irregular in this edge region, which may degrade the sensor performance. In this paper, we introduce an ERT-based robotic skin with evenly and sparsely distributed electrodes. For implementation, we sprayed a carbon nanotube (CNT)-dispersed solution to form a conductive sensing domain on a cylindrical surface. The electrodes were firmly embedded in the surface so that the wires were not exposed to the outside. The sensor output images were estimated using a deep neural network (DNN), which was trained with noisy simulation data. An indentation experiment revealed that the localization error of the sensor was 5.2 ± 3.3 mm, which is remarkable performance with only 30 electrodes. A frame rate of up to 120 Hz could be achieved with a sensing domain area of 90 cm2. The proposed approach simplifies the fabrication of 3D-shaped sensors, allowing them to be easily applied to existing robot arms in a seamless and robust manner.

DOI [BibTex]


Capturing Experts’ Mental Models to Organize a Collection of Haptic Devices: Affordances Outweigh Attributes

Seifi, H., Oppermann, M., Bullard, J., MacLean, K. E., Kuchenbecker, K. J.

In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), pages: 268, Conference on Human Factors in Computing Systems (CHI 2020), April 2020 (inproceedings)

Abstract
Humans rely on categories to mentally organize and understand sets of complex objects. One such set, haptic devices, has myriad technical attributes that affect user experience in complex ways. Seeking an effective navigation structure for a large online collection, we elicited expert mental categories for grounded force-feedback haptic devices: 18 experts (9 device creators, 9 interaction designers) reviewed, grouped, and described 75 devices according to their similarity in a custom card-sorting study. From the resulting quantitative and qualitative data, we identify prominent patterns of tagging versus binning, and we report 6 uber-attributes that the experts used to group the devices, favoring affordances over device specifications. Finally, we derive 7 device categories and 9 subcategories that reflect the imperfect yet semantic nature of the expert mental models. We visualize these device categories and similarities in the online haptic collection, and we offer insights for studying expert understanding of other human-centered technology.
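Card-sort data like that collected in this study is often summarized with an item-by-item co-occurrence (similarity) matrix. The sketch below shows that standard computation on a tiny invented example; it is not the study's exact analysis.

```python
def cooccurrence(sorts, items):
    # sim[i][j] = fraction of sorters who placed items i and j in the same group
    n = len(items)
    sim = [[0.0] * n for _ in range(n)]
    for groups in sorts:
        group_of = {item: g for g, group in enumerate(groups) for item in group}
        for i in range(n):
            for j in range(n):
                gi, gj = group_of.get(items[i]), group_of.get(items[j])
                if gi is not None and gi == gj:
                    sim[i][j] += 1.0
    return [[s / len(sorts) for s in row] for row in sim]

# Three hypothetical sorters grouping four hypothetical devices.
items = ["A", "B", "C", "D"]
sorts = [[["A", "B"], ["C", "D"]],
         [["A", "B", "C"], ["D"]],
         [["A"], ["B", "C", "D"]]]
sim = cooccurrence(sorts, items)
print(sim[0][1])  # devices A and B were grouped together by 2 of 3 sorters
```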

DOI Project Page [BibTex]


Changes in Normal Force During Passive Dynamic Touch: Contact Mechanics and Perception

Gueorguiev, D., Lambert, J., Thonnard, J., Kuchenbecker, K. J.

In Proceedings of the IEEE Haptics Symposium (HAPTICS), pages: 746-752, IEEE Haptics Symposium (HAPTICS 2020), March 2020 (inproceedings)

Abstract
Using a force-controlled robotic platform, we investigated the contact mechanics and psychophysical responses induced by negative and positive modulations in normal force during passive dynamic touch. In the natural state of the finger, the applied normal force modulation induces a correlated change in the tangential force. In a second condition, we applied talcum powder to the fingerpad, which induced a significant modification in the slope of the correlated tangential change. In both conditions, the same ten participants had to detect the interval that contained a decrease or an increase in the pre-stimulation normal force of 1 N. In the natural state, the 75% just noticeable difference for this task was found to be a ratio of 0.19 and 0.18 for decreases and increases, respectively. With talcum powder on the fingerpad, the normal force thresholds remained stable, following the Weber law of constant just noticeable differences, while the tangential force thresholds changed in the same way as the correlation slopes. This result suggests that participants predominantly relied on the normal force changes to perform the detection task. In addition, participants were asked to report whether the force decreased or increased. Their performance was generally poor at this second task even for above-threshold changes. However, their accuracy slightly improved with the talcum powder, which might be due to the reduced finger-surface friction.
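The 75% just-noticeable difference (JND) reported above is the stimulus ratio at which detection reaches 75% correct; given measured detection rates, it can be read off a psychometric curve by linear interpolation. The rates below are invented for illustration (chosen so the result lands near the abstract's 0.19 ratio).

```python
def jnd_75(stimulus_ratios, detection_rates):
    # Linearly interpolate the stimulus ratio at which detection hits 75%.
    pairs = list(zip(stimulus_ratios, detection_rates))
    for (r0, d0), (r1, d1) in zip(pairs, pairs[1:]):
        if d0 <= 0.75 <= d1:
            t = (0.75 - d0) / (d1 - d0)
            return r0 + t * (r1 - r0)
    return None  # threshold not bracketed by the data

# Hypothetical detection rates for force changes of various relative sizes.
ratios = [0.05, 0.10, 0.15, 0.20, 0.30]
rates = [0.48, 0.55, 0.66, 0.78, 0.95]
print(round(jnd_75(ratios, rates), 3))
```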

DOI [BibTex]


Haptic Object Parameter Estimation during Within-Hand-Manipulation with a Simple Robot Gripper

Mohtasham, D., Narayanan, G., Calli, B., Spiers, A. J.

In Proceedings of the IEEE Haptics Symposium (HAPTICS), pages: 140-147, March 2020 (inproceedings)

Abstract
Though it is common for robots to rely on vision for object feature estimation, there are environments where optical sensing performs poorly due to occlusion, poor lighting, or limited space for camera placement. Haptic sensing in robotics has a long history, but few approaches have combined it with within-hand manipulation (WIHM) in order to expose more features of an object to the tactile sensing elements of the hand. As in the human hand, these sensing structures are generally non-homogeneous in their coverage of a gripper's manipulation surfaces, as the sensitivity of some hand or finger regions often differs from that of other regions. In this work we use a modified version of the recently developed 2-finger Model VF (variable friction) robot gripper to acquire tactile information while rolling objects within the robot's grasp. This new gripper has one high-friction passive finger surface and one high-friction tactile sensing surface, equipped with 12 low-cost barometric force sensors encased in urethane. We have developed algorithms that use the data generated during these rolling actions to determine parametric aspects of the object under manipulation. Namely, two parameters are currently determined: 1) the location of the object within the grasp, and 2) the object's shape (from three alternatives). The algorithms were first developed on a static test rig with passive object rolling and later evaluated with the robot gripper platform using active WIHM, which introduced artifacts into the data. With an object set consisting of 3 shapes and 5 sizes, overall shape-estimation accuracies of 88% and 78% were achieved for the test rig and hand, respectively. Location estimation of each object's centroid during motion achieved a mean error of less than 2 mm along the 95 mm length of the tactile sensing finger.
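With a line of pressure-sensing taxels like the 12 barometric sensors described above, a common baseline for contact localization (not necessarily the authors' algorithm) is the pressure-weighted centroid. The taxel readings below are invented; only the 12-taxel, 95 mm geometry is taken from the abstract.

```python
def contact_centroid(pressures, positions):
    # Pressure-weighted average of taxel positions (mm along the finger).
    total = sum(pressures)
    if total == 0:
        return None  # no contact detected
    return sum(p * x for p, x in zip(pressures, positions)) / total

# 12 taxels spaced evenly along a 95 mm tactile sensing finger.
positions = [i * 95 / 11 for i in range(12)]
pressures = [0, 0, 0.1, 0.4, 1.0, 0.7, 0.2, 0, 0, 0, 0, 0]
print(round(contact_centroid(pressures, positions), 1))
```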

DOI [BibTex]

2019


Deep Neural Network Approach in Electrical Impedance Tomography-Based Real-Time Soft Tactile Sensor

Park, H., Lee, H., Park, K., Mo, S., Kim, J.

In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages: 7447-7452, IEEE, November 2019 (conference)

Abstract
Recently, whole-body tactile sensing has emerged in robotics for safe human-robot interaction. A key issue in whole-body tactile sensing is ensuring large-area manufacturability and high durability. To fulfill these requirements, a reconstruction method called electrical impedance tomography (EIT) was adopted in large-area tactile sensing. This method maps voltage measurements to a conductivity distribution using only a small number of measurement electrodes. A common approach for the mapping is to use a linearized model derived from Maxwell's equations. This linearized model offers fast computation time and moderate robustness against measurement noise, but its reconstruction accuracy is limited. In this paper, we propose a novel nonlinear EIT algorithm based on a deep neural network (DNN) to improve the reconstruction accuracy of EIT-based tactile sensors. The network architecture with rectified linear unit (ReLU) activations ensures extremely low computation time (0.002 seconds), and its nonlinear structure provides superior reconstruction accuracy. The DNN model was trained with a dataset synthesized in a simulation environment. To achieve robustness against measurement noise, training proceeded with additive Gaussian noise estimated from actual measurement noise. For real sensor application, the trained DNN model was transferred to a conductive fabric-based soft tactile sensor. For validation, the reconstruction error and noise robustness of the conventional linearized model and the proposed approach were compared in a simulation environment. As a demonstration, the tactile sensor equipped with the trained DNN model is presented for contact force estimation.
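The reconstruction pipeline described here can be sketched as a small ReLU network that maps boundary voltages to a conductivity image, plus the noise-augmentation step used during training. All dimensions and weights below are illustrative placeholders (the paper does not specify them in this abstract), and the network is untrained:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions): 16 boundary-voltage measurements in,
# 64 conductivity pixels out, one hidden ReLU layer.
n_in, n_hidden, n_out = 16, 128, 64
W1 = rng.standard_normal((n_hidden, n_in)) * 0.1
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_out, n_hidden)) * 0.1
b2 = np.zeros(n_out)

def reconstruct(voltages):
    """Nonlinear EIT reconstruction: map boundary voltages to a
    conductivity-change image with a ReLU network (untrained demo)."""
    h = np.maximum(0.0, W1 @ voltages + b1)  # ReLU hidden layer
    return W2 @ h + b2

def augment(voltages, noise_std=0.01):
    """Training-time augmentation as described in the abstract: add
    Gaussian noise matched to the estimated measurement-noise level."""
    return voltages + rng.normal(0.0, noise_std, size=voltages.shape)

image = reconstruct(augment(np.ones(n_in)))
print(image.shape)  # (64,)
```

In practice the weights would be learned from simulated voltage/conductivity pairs; the forward pass above is what makes real-time inference (milliseconds) possible.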

DOI [BibTex]



Effect of Remote Masking on Detection of Electrovibration

Jamalzadeh, M., Güçlü, B., Vardar, Y., Basdogan, C.

In Proceedings of the IEEE World Haptics Conference (WHC), pages: 229-234, Tokyo, Japan, July 2019 (inproceedings)

Abstract
Masking has been used to study human perception of tactile stimuli, including those created on haptic touch screens. Earlier studies have investigated the effect of in-site masking on tactile perception of electrovibration. In this study, we investigated whether it is possible to change the detection threshold of electrovibration at the fingertip of the index finger via remote masking, i.e., by applying a (mechanical) vibrotactile stimulus on the proximal phalanx of the same finger. The masking stimuli were generated by a voice coil (Haptuator). For eight participants, we first measured the detection thresholds for electrovibration at the fingertip and for vibrotactile stimuli at the proximal phalanx. Then, the vibrations on the skin were measured at four different locations on the index finger of subjects to investigate how the mechanical masking stimulus propagated as the masking level was varied. Finally, electrovibration thresholds were measured in the presence of vibrotactile masking stimuli. Our results show that vibrotactile masking stimuli generated sub-threshold vibrations around the fingertip and hence did not mechanically interfere with the electrovibration stimulus. However, there was a clear psychophysical masking effect due to central neural processes. The electrovibration absolute threshold increased by approximately 0.19 dB for each dB increase in the masking level.

DOI [BibTex]



Objective and Subjective Assessment of Algorithms for Reducing Three-Axis Vibrations to One-Axis Vibrations

Park, G., Kuchenbecker, K. J.

In Proceedings of the IEEE World Haptics Conference, pages: 467-472, July 2019 (inproceedings)

Abstract
A typical approach to creating realistic vibrotactile feedback is reducing 3D vibrations recorded by an accelerometer to 1D signals that can be played back on a haptic actuator, but some of the information is often lost in this dimensional reduction process. This paper describes seven representative algorithms and proposes four metrics based on spectral match, temporal match, and the average value and variability of these measures across 3D rotations. These four performance metrics were applied to four texture recordings, and the method utilizing the discrete Fourier transform (DFT) was found to be the best regardless of the sensing axis. We also recruited 16 participants to assess the perceptual similarity achieved by each algorithm in real time. We found the four metrics correlated well with the subjectively rated similarities for six of the seven dimensional-reduction algorithms; the exception was taking the 3D vector magnitude, which was perceived to be good despite its low spectral and temporal match metrics.
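One way to implement the DFT-based reduction this abstract refers to is to preserve the total spectral energy of the three axes in each frequency bin. The phase choice below (the phase of the axis sum) is an assumption for illustration, not necessarily the published algorithm:

```python
import numpy as np

def dft_reduce(a3):
    """Reduce a 3-axis vibration signal (N x 3) to one axis.
    Per frequency bin, keep the combined energy of all three axes
    (magnitude = sqrt(|X|^2 + |Y|^2 + |Z|^2)); the phase is taken
    from the sum of the per-axis spectra (an illustrative choice)."""
    spectra = np.fft.rfft(a3, axis=0)                 # per-axis spectra
    mag = np.sqrt((np.abs(spectra) ** 2).sum(axis=1))
    phase = np.angle(spectra.sum(axis=1))
    return np.fft.irfft(mag * np.exp(1j * phase), n=a3.shape[0])

# The 1D output preserves the total spectral energy of the 3D input:
t = np.linspace(0, 1, 1000, endpoint=False)
sig = np.stack([np.sin(2 * np.pi * 50 * t),
                np.sin(2 * np.pi * 120 * t),
                np.zeros_like(t)], axis=1)
out = dft_reduce(sig)
print(out.shape)  # (1000,)
```

Because the per-bin magnitude matches the combined three-axis magnitude, spectral-match metrics of the kind the paper proposes are satisfied by construction; only temporal (phase) information can be degraded.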

DOI Project Page [BibTex]


Fingertip Interaction Metrics Correlate with Visual and Haptic Perception of Real Surfaces

Vardar, Y., Wallraven, C., Kuchenbecker, K. J.

In Proceedings of the IEEE World Haptics Conference (WHC), pages: 395-400, Tokyo, Japan, July 2019 (inproceedings)

Abstract
Both vision and touch contribute to the perception of real surfaces. Although there have been many studies on the individual contributions of each sense, it is still unclear how each modality’s information is processed and integrated. To fill this gap, we investigated the similarity of visual and haptic perceptual spaces, as well as how well they each correlate with fingertip interaction metrics. Twenty participants interacted with ten different surfaces from the Penn Haptic Texture Toolkit by either looking at or touching them and judged their similarity in pairs. By analyzing the resulting similarity ratings using multi-dimensional scaling (MDS), we found that surfaces are similarly organized within the three-dimensional perceptual spaces of both modalities. Also, between-participant correlations were significantly higher in the haptic condition. In a separate experiment, we obtained the contact forces and accelerations acting on one finger interacting with each surface in a controlled way. We analyzed the collected fingertip interaction data in both the time and frequency domains. Our results suggest that the three perceptual dimensions for each modality can be represented by roughness/smoothness, hardness/softness, and friction, and that these dimensions can be estimated by surface vibration power, tap spectral centroid, and kinetic friction coefficient, respectively.
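The multi-dimensional scaling step can be illustrated with classical (Torgerson) MDS in plain NumPy; the study's analysis may well have used a non-metric variant, so treat this as an illustrative sketch of how pairwise dissimilarities become a low-dimensional perceptual space:

```python
import numpy as np

def classical_mds(D, k=3):
    """Embed items in k dimensions from a pairwise dissimilarity
    matrix D: double-center the squared dissimilarities, then scale
    the top-k eigenvectors of the resulting Gram matrix."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # Gram matrix
    w, V = np.linalg.eigh(B)                 # ascending eigenvalues
    idx = np.argsort(w)[::-1][:k]            # take the k largest
    w_k = np.clip(w[idx], 0.0, None)         # guard tiny negatives
    return V[:, idx] * np.sqrt(w_k)

# Four hypothetical stimuli on a line; distances are exact, so a
# 1D embedding reproduces them up to sign and translation.
x = np.array([0.0, 1.0, 3.0, 6.0])
D = np.abs(x[:, None] - x[None, :])
Y = classical_mds(D, k=1)
print(np.round(np.abs(Y[:, 0] - Y[0, 0]), 6))  # [0. 1. 3. 6.]
```

With real similarity ratings, the eigenvalue spectrum of `B` also indicates how many perceptual dimensions (here, three) the data support.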

DOI Project Page [BibTex]

A Clustering Approach to Categorizing 7 Degree-of-Freedom Arm Motions during Activities of Daily Living

Gloumakov, Y., Spiers, A. J., Dollar, A. M.

In Proceedings of the International Conference on Robotics and Automation (ICRA), pages: 7214-7220, Montreal, Canada, May 2019 (inproceedings)

Abstract
In this paper we present a novel method of categorizing naturalistic human arm motions during activities of daily living using clustering techniques. While many current approaches attempt to define all arm motions using heuristic interpretation, or a combination of several abstract motion primitives, our unsupervised approach generates a hierarchical description of natural human motion with well recognized groups. Reliable recommendation of a subset of motions for task achievement is beneficial to various fields, such as robotic and semi-autonomous prosthetic device applications. The proposed method makes use of well-known techniques such as dynamic time warping (DTW) to obtain a divergence measure between motion segments, DTW barycenter averaging (DBA) to get a motion average, and Ward's distance criterion to build the hierarchical tree. The clusters that emerge summarize the variety of recorded motions into the following general tasks: reach-to-front, transfer-box, drinking from vessel, on-table motion, turning a key or door knob, and reach-to-back pocket. The clustering methodology is justified by comparing against an alternative measure of divergence using Bezier coefficients and K-medoids clustering.
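The DTW divergence at the heart of this pipeline is straightforward to sketch; below is a plain O(nm) dynamic-programming implementation for 1D motion segments, not the authors' exact code (which operated on multi-dimensional arm trajectories and fed the resulting distances into Ward-linkage clustering):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping divergence between two 1D motion
    segments: minimum cumulative |a_i - b_j| cost over all
    monotonic alignments of the two sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A signal aligns almost perfectly with a time-stretched copy of
# itself, but not with a different shape at the same tempo:
t = np.linspace(0, 2 * np.pi, 60)
wave = np.sin(t)
stretched = np.sin(np.linspace(0, 2 * np.pi, 90))
print(dtw_distance(wave, stretched) < dtw_distance(wave, np.cos(t)))
```

This tempo-invariance is what makes DTW a sensible divergence for naturalistic arm motions performed at different speeds.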

DOI [BibTex]



Haptipedia: Accelerating Haptic Device Discovery to Support Interaction & Engineering Design

Seifi, H., Fazlollahi, F., Oppermann, M., Sastrillo, J. A., Ip, J., Agrawal, A., Park, G., Kuchenbecker, K. J., MacLean, K. E.

In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), pages: 1-12, Glasgow, Scotland, May 2019 (inproceedings)

Abstract
Creating haptic experiences often entails inventing, modifying, or selecting specialized hardware. However, experience designers are rarely engineers, and 30 years of haptic inventions are buried in a fragmented literature that describes devices mechanically rather than by potential purpose. We conceived of Haptipedia to unlock this trove of examples: Haptipedia presents a device corpus for exploration through metadata that matter to both device and experience designers. It is a taxonomy of device attributes that go beyond physical description to capture potential utility, applied to a growing database of 105 grounded force-feedback devices, and accessed through a public visualization that links utility to morphology. Haptipedia's design was driven by both systematic review of the haptic device literature and rich input from diverse haptic designers. We describe Haptipedia's reception (including hopes it will redefine device reporting standards) and our plans for its sustainability through community participation.

DOI Project Page [BibTex]

Internal Array Electrodes Improve the Spatial Resolution of Soft Tactile Sensors Based on Electrical Resistance Tomography

Lee, H., Park, K., Kim, J., Kuchenbecker, K. J.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pages: 5411-5417, Montreal, Canada, May 2019, Hyosang Lee and Kyungseo Park contributed equally to this publication (inproceedings)

DOI [BibTex]



Improving Haptic Adjective Recognition with Unsupervised Feature Learning

Richardson, B. A., Kuchenbecker, K. J.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pages: 3804-3810, Montreal, Canada, May 2019 (inproceedings)

Abstract
Humans can form an impression of how a new object feels simply by touching its surfaces with the densely innervated skin of the fingertips. Many haptics researchers have recently been working to endow robots with similar levels of haptic intelligence, but these efforts almost always employ hand-crafted features, which are brittle, and concrete tasks, such as object recognition. We applied unsupervised feature learning methods, specifically K-SVD and Spatio-Temporal Hierarchical Matching Pursuit (ST-HMP), to rich multi-modal haptic data from a diverse dataset. We then tested the learned features on 19 more abstract binary classification tasks that center on haptic adjectives such as smooth and squishy. The learned features proved superior to traditional hand-crafted features by a large margin, almost doubling the average F1 score across all adjectives. Additionally, particular exploratory procedures (EPs) and sensor channels were found to support perception of certain haptic adjectives, underlining the need for diverse interactions and multi-modal haptic data.

DOI Project Page [BibTex]



A Novel Texture Rendering Approach for Electrostatic Displays

Fiedler, T., Vardar, Y.

In Proceedings of International Workshop on Haptic and Audio Interaction Design (HAID), Lille, France, March 2019 (inproceedings)

Abstract
Generating realistic texture feelings on tactile displays using data-driven methods has attracted much interest in the last decade. However, the need for large data storage and transmission rates complicates the use of these methods in future commercial displays. In this paper, we propose a new texture rendering approach that can significantly compress the texture data for electrostatic displays. Using three sample surfaces, we first explain how to record, analyze, and compress the texture data, and how to render them on a touchscreen. Then, through psychophysical experiments conducted with nineteen participants, we show that the textures can be reproduced with significantly fewer frequency components than are present in the original signal without inducing perceptual degradation. Moreover, our results indicate that the possible degree of compression is affected by the surface properties.
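A simple stand-in for the compression idea is to keep only the largest-magnitude frequency components of a recorded texture signal and resynthesize from those; the paper's actual method, component selection, and parameters may differ:

```python
import numpy as np

def compress_texture(signal, n_components):
    """Keep only the n_components largest-magnitude frequency
    components of a recorded texture signal and resynthesize it."""
    spectrum = np.fft.rfft(signal)
    keep = np.argsort(np.abs(spectrum))[::-1][:n_components]
    compressed = np.zeros_like(spectrum)
    compressed[keep] = spectrum[keep]
    return np.fft.irfft(compressed, n=len(signal))

# A texture-like signal dominated by a few tones survives heavy
# compression with little reconstruction error:
t = np.linspace(0, 1, 2000, endpoint=False)
sig = (np.sin(2 * np.pi * 30 * t) + 0.5 * np.sin(2 * np.pi * 90 * t)
       + 0.01 * np.random.default_rng(1).standard_normal(2000))
approx = compress_texture(sig, 4)
err = np.sqrt(np.mean((sig - approx) ** 2))
print(err < 0.05)  # True: 4 of 1001 bins suffice here
```

Storing a handful of (frequency, amplitude, phase) triples instead of the full waveform is what enables the large compression ratios the abstract reports.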

Fiedler19-HAID-Electrostatic [BibTex]


2018


A Feasibility Study of Force Feedback using Computer-Mouse

Kumar, A., Gourishetti, R., Manivannan, M.

pages: 1-6, 26th Conference of the National Academy of Psychology, December 2018 (conference)

Abstract
This paper is aimed at measuring the capacity to feed back haptic information by means of a standard computer mouse as a passive input device and visual feedback on a visual display. The main objective of this paper is to conduct a psychophysical experiment considering the theoretical hypothesis and to compare the results with those of similar studies in the literature. A psychophysical experiment was conducted on eight subjects using the two-alternative forced choice (2AFC) constant stimuli method for stiffness discrimination. The JND and Weber fraction were calculated for each subject. To analyze the results, we determined the response matrix, psychometric function, JND, and Weber fraction. The average JND for the 8 subjects was found to be 0.14, and the average Weber fraction was 9.54%. The Weber fraction value for our experiment is comparable to those of similar experiments in the literature. Our proposed technique can be used to enhance the user experience in computer gaming, mobile operating systems, and virtual training simulators for various clinical operations.

[BibTex]



Instrumentation, Data, and Algorithms for Visually Understanding Haptic Surface Properties

Burka, A. L.

University of Pennsylvania, Philadelphia, USA, August 2018, Department of Electrical and Systems Engineering (phdthesis)

Abstract
Autonomous robots need to efficiently walk over varied surfaces and grasp diverse objects. We hypothesize that the association between how such surfaces look and how they physically feel during contact can be learned from a database of matched haptic and visual data recorded from various end-effectors' interactions with hundreds of real-world surfaces. Testing this hypothesis required the creation of a new multimodal sensing apparatus, the collection of a large multimodal dataset, and development of a machine-learning pipeline. This thesis begins by describing the design and construction of the Portable Robotic Optical/Tactile ObservatioN PACKage (PROTONPACK, or Proton for short), an untethered handheld sensing device that emulates the capabilities of the human senses of vision and touch. Its sensory modalities include RGBD vision, egomotion, contact force, and contact vibration. Three interchangeable end-effectors (a steel tooling ball, an OptoForce three-axis force sensor, and a SynTouch BioTac artificial fingertip) allow for different material properties at the contact point and provide additional tactile data. We then detail the calibration process for the motion and force sensing systems, as well as several proof-of-concept surface discrimination experiments that demonstrate the reliability of the device and the utility of the data it collects. This thesis then presents a large-scale dataset of multimodal surface interaction recordings, including 357 unique surfaces such as furniture, fabrics, outdoor fixtures, and items from several private and public material sample collections. Each surface was touched with one, two, or three end-effectors, comprising approximately one minute per end-effector of tapping and dragging at various forces and speeds. We hope that the larger community of robotics researchers will find broad applications for the published dataset. 
Lastly, we demonstrate an algorithm that learns to estimate haptic surface properties given visual input. Surfaces were rated on hardness, roughness, stickiness, and temperature by the human experimenter and by a pool of purely visual observers. Then we trained an algorithm to perform the same task as well as infer quantitative properties calculated from the haptic data. Overall, the task of predicting haptic properties from vision alone proved difficult for both humans and computers, but a hybrid algorithm using a deep neural network and a support vector machine achieved a correlation between expected and actual regression output between approximately ρ = 0.3 and ρ = 0.5 on previously unseen surfaces.

Project Page [BibTex]

Passive Probing Perception: Effect of Latency in Visual-Haptic Feedback

Gourishetti, R., Isaac, J. H. R., Manivannan, M.

In Proceedings of EuroHaptics, pages: 186-198, Springer, Cham, June 2018 (inproceedings)

DOI [BibTex]


2017


Mechanics of pseudo-haptics with computer mouse

Kumar, A., Gourishetti, R., Manivannan, M.

In Proceedings of the IEEE International Symposium on Haptic, Audio and Visual Environments and Games (HAVE), pages: 1-6, IEEE, December 2017 (inproceedings)

Abstract
The haptic-illusion-based force feedback known as pseudo-haptics is used to simulate haptic explorations, such as stiffness, without using a force feedback device. Many computer-mouse-based pseudo-haptics works have been reported in the literature. However, none has explored the mechanics of pseudo-haptics. The objective of this paper is to derive an analytical relation between the displacement of the mouse and that of a virtual spring, assuming equal work done in both cases (mouse and virtual spring displacement), and to validate this relation experimentally. A psychophysical experiment was conducted on eight subjects to discriminate the stiffness of two virtual springs using a two-alternative forced choice (2AFC) discrimination task with the method of constant stimuli, measuring the just noticeable difference (JND) for pseudo-stiffness. The mean pseudo-stiffness JND and average Weber fraction were calculated to be 14% and 9.54%, respectively. The resulting JND and Weber fraction from the experiment were comparable to the psychophysical parameters in the literature. Currently, this study simulates the haptic illusion for 1 DOF; however, it can be extended to 6 DOF.
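The JND/Weber-fraction analysis used in this and the preceding mouse study can be sketched generically: estimate the JND as the stimulus difference at which the 2AFC psychometric curve crosses 75% correct, then divide by the reference stimulus. The data below are hypothetical, and the linear interpolation stands in for a proper psychometric-function fit:

```python
import numpy as np

def weber_fraction(reference, comparisons, p_correct, threshold=0.75):
    """Estimate the JND as the stimulus difference where a 2AFC
    psychometric curve crosses `threshold` (linear interpolation
    between measured points), then report JND / reference."""
    deltas = np.abs(np.asarray(comparisons, dtype=float) - reference)
    order = np.argsort(deltas)  # np.interp needs increasing x values
    jnd = np.interp(threshold, np.asarray(p_correct)[order], deltas[order])
    return jnd, jnd / reference

# Hypothetical stiffness-discrimination data (reference 100 N/m):
comps = [102, 105, 110, 120, 140]
p = [0.52, 0.60, 0.74, 0.88, 0.97]
jnd, wf = weber_fraction(100.0, comps, p)
print(round(wf, 3))  # 0.107
```

A constant Weber fraction across references is the signature of Weber's law, which is why both papers report it alongside the raw JND.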

DOI [BibTex]



Stiffness Perception during Pinching and Dissection with Teleoperated Haptic Forceps

Ng, C., Zareinia, K., Sun, Q., Kuchenbecker, K. J.

In Proceedings of the International Symposium on Robot and Human Interactive Communication (RO-MAN), pages: 456-463, Lisbon, Portugal, August 2017 (inproceedings)

DOI [BibTex]

Design and Evaluation of Interactive Hand-Clapping Robots

Fitter, N. T.

University of Pennsylvania, August 2017, Department of Mechanical Engineering and Applied Mechanics (phdthesis)

Abstract
Human friends commonly connect through handshakes and high fives, and children around the world rejoice at hand-clapping games. As robots enter everyday human spaces, they will have the opportunity to join in such physical interactions, but few current robots are intended to touch humans. How should robots move and react in playful hand-to-hand interactions with people? We conducted research in four main areas to address this design challenge. First, we implemented and tested an initial hand-clapping robotic system. This effort began by recording sensor data from people performing a variety of hand-clapping activities; the resulting accelerometer and position data taught us how to design appropriate hand-clapping robot motion and logic. Implementation on a Rethink Robotics Baxter Research Robot demonstrated that a robot could move like our human participants and reliably detect hand impacts through its wrist-mounted accelerometers. N = 20 study participants clapped hands with differently configured versions of this robot in random order: the robot’s facial animation, physical reactivity, arm stiffness, and clapping tempo all significantly affected how users perceived the robot. We next sought to create and evaluate more sophisticated robot hand-clapping behaviors. Data from people performing interactive clapping tasks at increasing and decreasing tempos helped us propose prospective timing models and implement adaptive-tempo Baxter play. In a subsequent experiment that involved N = 20 users, a mischievous Baxter was equipped with the top-performing tempo adaptation model and chose to play cooperatively or asynchronously with its human partner. Although a few participants reacted positively to Baxter’s mischief, users overwhelmingly preferred a synchronous, cooperative robot. Third, we set up and conducted a human-robot interaction experiment more similar to everyday human-human hand-clapping interactions. 
A machine learning pipeline trained on inertial data from human motions demonstrated that linear support vector machines (SVMs) can classify a new person’s hand-clapping actions with an accuracy of about 95%. This technique succeeded for both hand- and wrist-mounted inertial sensors, enabling people to teach the Baxter robot new hand-clapping games. Evaluation of various two-handed clapping play activities by N = 24 users showed that learning games from Baxter was significantly easier than teaching Baxter games, but that the teaching role caused people to consider more teamwork aspects of the gameplay. Finally, to broaden the scope of these interactions, we began exploring applications of Baxter in socially assistive robotics. Using many of the same sensing and actuation strategies, we developed a set of six playful hand-to-hand contact-based exercise interactions to be jointly executed between a person and Baxter, along with two similar non-contact games. A proof-of-concept experiment using these exercise games enrolled N = 20 young adults and N = 14 healthy adults over age 53. The results demonstrated that people are willing and motivated to interact with the robot in this way and that different games promote unique physical and cognitive exercise effects. Overall, this research aims to help shape design processes for socially relevant physical human-robot interaction and reveal new opportunities for socially assistive robotics.

[BibTex]

Design of a Parallel Continuum Manipulator for 6-DOF Fingertip Haptic Display

Young, E. M., Kuchenbecker, K. J.

In Proceedings of the IEEE World Haptics Conference (WHC), pages: 599-604, Munich, Germany, June 2017, Finalist for best poster paper (inproceedings)

Abstract
Despite rapid advancements in the field of fingertip haptics, rendering tactile cues with six degrees of freedom (6 DOF) remains an elusive challenge. In this paper, we investigate the potential of displaying fingertip haptic sensations with a 6-DOF parallel continuum manipulator (PCM) that mounts to the user's index finger and moves a contact platform around the fingertip. Compared to traditional mechanisms composed of rigid links and discrete joints, PCMs have the potential to be strong, dexterous, and compact, but they are also more complicated to design. We define the design space of 6-DOF parallel continuum manipulators and outline a process for refining such a device for fingertip haptic applications. Following extensive simulation, we obtain 12 designs that meet our specifications, construct a manually actuated prototype of one such design, and evaluate the simulation's ability to accurately predict the prototype's motion. Finally, we demonstrate the range of deliverable fingertip tactile cues, including a normal force into the finger and shear forces tangent to the finger at three extreme points on the boundary of the fingertip.

DOI [BibTex]

High Magnitude Unidirectional Haptic Force Display Using a Motor/Brake Pair and a Cable

Hu, S., Kuchenbecker, K. J.

In Proceedings of the IEEE World Haptics Conference (WHC), pages: 394-399, Munich, Germany, June 2017 (inproceedings)

Abstract
Clever electromechanical design is required to make the force feedback delivered by a kinesthetic haptic interface both strong and safe. This paper explores a one-dimensional haptic force display that combines a DC motor and a magnetic particle brake on the same shaft. Rather than a rigid linkage, a spooled cable connects the user to the actuators to enable a large workspace, reduce the moving mass, and eliminate the sticky residual force from the brake. This design combines the high torque/power ratio of the brake and the active output capabilities of the motor to provide a wider range of forces than can be achieved with either actuator alone. A prototype of this device was built, its performance was characterized, and it was used to simulate constant force sources and virtual springs and dampers. Compared to the conventional design of using only a motor, the hybrid device can output higher unidirectional forces at the expense of free space feeling less free.

DOI Project Page [BibTex]

A Wrist-Squeezing Force-Feedback System for Robotic Surgery Training

Brown, J. D., Fernandez, J. N., Cohen, S. P., Kuchenbecker, K. J.

In Proceedings of the IEEE World Haptics Conference (WHC), pages: 107-112, Munich, Germany, June 2017 (inproceedings)

Abstract
Over time, surgical trainees learn to compensate for the lack of haptic feedback in commercial robotic minimally invasive surgical systems. Incorporating touch cues into robotic surgery training could potentially shorten this learning process if the benefits of haptic feedback were sustained after it is removed. In this paper, we develop a wrist-squeezing haptic feedback system and evaluate whether it holds the potential to train novice da Vinci users to reduce the force they exert on a bimanual inanimate training task. Subjects were randomly divided into two groups according to a multiple baseline experimental design. Each of the ten participants moved a ring along a curved wire nine times while the haptic feedback was conditionally withheld, provided, and withheld again. The real-time tactile feedback of applied force magnitude significantly reduced the integral of the force produced by the da Vinci tools on the task materials, and this result remained even when the haptic feedback was removed. Overall, our findings suggest that wrist-squeezing force feedback can play an essential role in helping novice trainees learn to minimize the force they exert with a surgical robot.

DOI Project Page [BibTex]

Handling Scan-Time Parameters in Haptic Surface Classification

Burka, A., Kuchenbecker, K. J.

In Proceedings of the IEEE World Haptics Conference (WHC), pages: 424-429, Munich, Germany, June 2017 (inproceedings)

DOI Project Page [BibTex]

Proton 2: Increasing the Sensitivity and Portability of a Visuo-haptic Surface Interaction Recorder

Burka, A., Rajvanshi, A., Allen, S., Kuchenbecker, K. J.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pages: 439-445, Singapore, May 2017 (inproceedings)

Abstract
The Portable Robotic Optical/Tactile ObservatioN PACKage (PROTONPACK, or Proton for short) is a new handheld visuo-haptic sensing system that records surface interactions. We previously demonstrated system calibration and a classification task using external motion tracking. This paper details improvements in surface classification performance and removal of the dependence on external motion tracking, necessary before embarking on our goal of gathering a vast surface interaction dataset. Two experiments were performed to refine data collection parameters. After adjusting the placement and filtering of the Proton's high-bandwidth accelerometers, we recorded interactions between two differently-sized steel tooling ball end-effectors (diameter 6.35 and 9.525 mm) and five surfaces. Using features based on normal force, tangential force, end-effector speed, and contact vibration, we trained multi-class SVMs to classify the surfaces using 50 ms chunks of data from each end-effector. Classification accuracies of 84.5% and 91.5% respectively were achieved on unseen test data, an improvement over prior results. In parallel, we pursued on-board motion tracking, using the Proton's camera and fiducial markers. Motion tracks from the external and onboard trackers agree within 2 mm and 0.01 rad RMS, and the accuracy decreases only slightly to 87.7% when using onboard tracking for the 9.525 mm end-effector. These experiments indicate that the Proton 2 is ready for portable data collection.

DOI Project Page [BibTex]
