Haptic Intelligence


2024


Engineering and Evaluating Naturalistic Vibrotactile Feedback for Telerobotic Assembly

Gong, Y.

University of Stuttgart, Stuttgart, Germany, August 2024, Faculty of Design, Production Engineering and Automotive Engineering (phdthesis)

Abstract
Teleoperation allows workers on a construction site to assemble pre-fabricated building components by controlling powerful machines from a safe distance. However, teleoperation's primary reliance on visual feedback limits the operator's efficiency in situations with stiff contact or poor visibility, compromising their situational awareness and thus increasing the difficulty of the task; it also makes construction machines more difficult to learn to operate. To bridge this gap, we propose that reliable, economical, and easy-to-implement naturalistic vibrotactile feedback could improve telerobotic control interfaces in construction and other application areas such as surgery. This type of feedback enables the operator to feel the natural vibrations experienced by the robot, which contain crucial information about its motions and its physical interactions with the environment. This dissertation explores how to deliver naturalistic vibrotactile feedback from a robot's end-effector to the hand of an operator performing telerobotic assembly tasks; furthermore, it seeks to understand the effects of such haptic cues. The presented research can be divided into four parts. We first describe the engineering of AiroTouch, a naturalistic vibrotactile feedback system tailored for use on construction sites but suitable for many other applications of telerobotics. Then we evaluate AiroTouch and explore the effects of the naturalistic vibrotactile feedback it delivers in three user studies conducted either in laboratory settings or on a construction site. We begin this dissertation by developing guidelines for creating a haptic feedback system that provides high-quality naturalistic vibrotactile feedback. These guidelines include three sections: component selection, component placement, and system evaluation. We detail each aspect with the parameters that need to be considered. Based on these guidelines, we adapt widely available commercial audio equipment to create our system called AiroTouch, which measures the vibration experienced by each robot tool with a high-bandwidth three-axis accelerometer and enables the user to feel this vibration in real time through a voice-coil actuator. Accurate haptic transmission is achieved by optimizing the positions of the system's off-the-shelf sensors and actuators and is then verified through measurements. The second part of this thesis presents our initial validation of AiroTouch. We explored how adding this naturalistic type of vibrotactile feedback affects the operator during small-scale telerobotic assembly. Due to the limited accessibility of teleoperated robots and to maintain safety, we conducted a user study in the lab with a commercial bimanual dexterous teleoperation system developed for surgery (Intuitive da Vinci Si). Thirty participants used this robot equipped with AiroTouch to assemble a small stiff structure under three randomly ordered haptic feedback conditions: no vibrations, one-axis vibrations, and summed three-axis vibrations. The results show that participants learn to take advantage of both tested versions of the haptic feedback in the given tasks, as significantly lower vibrations and forces are observed in the second trial. Subjective responses indicate that naturalistic vibrotactile feedback increases the realism of the interaction and reduces the perceived task duration, task difficulty, and fatigue.
To test our approach on a real construction site, we enhanced AiroTouch using wireless signal-transmission technologies and waterproofing, and then we adapted it to a mini-crane construction robot. A study was conducted to evaluate how naturalistic vibrotactile feedback affects an observer's understanding of telerobotic assembly performed by this robot on a construction site. Seven adults without construction experience observed a mix of manual and autonomous assembly processes both with and without naturalistic vibrotactile feedback. Qualitative analysis of their survey responses and interviews indicates that all participants had positive responses to this technology and believed it would be beneficial for construction activities. Finally, we evaluated the effects of naturalistic vibrotactile feedback provided by wireless AiroTouch during live teleoperation of the mini-crane. Twenty-eight participants remotely controlled the mini-crane to complete three large-scale assembly-related tasks in the lab, both with and without this type of haptic feedback. Our results show that naturalistic vibrotactile feedback enhances the participants' awareness of both robot motion and contact between the robot and other objects, particularly in scenarios with limited visibility. These effects increase participants' confidence when controlling the robot. Moreover, there is a noticeable trend of reduced vibration magnitude in the conditions where this type of haptic feedback is provided. The primary contribution of this dissertation is the clear explanation of details that are essential for the effective implementation of naturalistic vibrotactile feedback. We demonstrate that our accessible, audio-based approach can enhance user performance and experience during telerobotic assembly in construction and other application domains. These findings lay the foundation for further exploration of the potential benefits of incorporating haptic cues to enhance user experience during teleoperation.
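As an illustration of the summed-three-axis approach the abstract describes, the following minimal sketch (Python with NumPy/SciPy assumed) reduces a three-axis acceleration stream to a single drive signal for a voice-coil actuator. The function name, sampling rate, and filter settings are invented placeholders, not details taken from AiroTouch.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def vibration_drive_signal(accel_xyz, fs=3000.0, cutoff_hz=20.0):
    """Convert a three-axis acceleration stream into one vibrotactile
    drive signal: high-pass filter each axis to drop gravity and slow
    arm motion, sum the residual vibrations, and normalize for an
    audio amplifier driving a voice-coil actuator.

    accel_xyz: (N, 3) array of accelerometer samples.
    """
    sos = butter(2, cutoff_hz, btype="highpass", fs=fs, output="sos")
    vib = sosfilt(sos, accel_xyz, axis=0)     # per-axis high-pass
    drive = vib.sum(axis=1)                   # summed three-axis signal
    peak = np.max(np.abs(drive))
    return drive / peak if peak > 0 else drive
```

Because a voice-coil actuator is driven like a small speaker, normalizing to an audio-range signal lets commodity amplifiers handle the rest of the chain.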

Project Page [BibTex]



Reflectance Outperforms Force and Position in Model-Free Needle Puncture Detection

L’Orsa, R., Bisht, A., Yu, L., Murari, K., Westwick, D. T., Sutherland, G. R., Kuchenbecker, K. J.

In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, USA, July 2024, accepted (inproceedings)

Abstract
The surgical procedure of needle thoracostomy temporarily corrects accidental over-pressurization of the space between the chest wall and the lungs. However, failure rates of up to 94.1% have been reported, likely because this procedure is done blind: operators estimate by feel when the needle has reached its target. We believe instrumented needles could help operators discern entry into the target space, but limited success has been achieved using force and/or position to try to discriminate needle puncture events during simulated surgical procedures. We thus augmented our needle insertion system with a novel in-bore double-fiber optical setup. Tissue reflectance measurements as well as 3D force, torque, position, and orientation were recorded while two experimenters repeatedly inserted a bevel-tipped percutaneous needle into ex vivo porcine ribs. We applied model-free puncture detection to various filtered time derivatives of each sensor data stream offline. In the held-out test set of insertions, puncture-detection precision improved substantially using reflectance measurements compared to needle insertion force alone (3.3-fold increase) or position alone (11.6-fold increase).
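As a rough illustration of model-free event detection on filtered time derivatives, the sketch below (Python assumed) flags samples whose smoothed derivative deviates sharply from a robust baseline. It is a plausible reading of the approach, not the paper's actual algorithm; every threshold and filter setting here is invented.

```python
import numpy as np
from scipy.signal import savgol_filter

def detect_puncture_events(signal, dt, window=51, poly=3, k=5.0):
    """Model-free event detection: compute a smoothed time derivative of
    one sensor stream (reflectance, force, or position), then flag
    samples whose derivative deviates from the running median by more
    than k robust standard deviations.
    """
    deriv = savgol_filter(signal, window, poly, deriv=1, delta=dt)
    med = np.median(deriv)
    mad = np.median(np.abs(deriv - med)) + 1e-12   # robust spread
    z = (deriv - med) / (1.4826 * mad)             # MAD-based z-score
    return np.flatnonzero(np.abs(z) > k)           # candidate punctures
```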

Project Page [BibTex]



Expert Perception of Teleoperated Social Exercise Robots

Mohan, M., Mat Husin, H., Kuchenbecker, K. J.

In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI), pages: 769-773, Boulder, USA, March 2024, Late-Breaking Report (LBR) (5 pages) (inproceedings)

Abstract
Social robots could help address the growing issue of physical inactivity by inspiring users to engage in interactive exercise. Nevertheless, the practical implementation of social exercise robots poses substantial challenges, particularly in terms of personalizing their activities to individuals. We propose that motion-capture-based teleoperation could serve as a viable solution to address these needs by enabling experts to record custom motions that could later be played back without their real-time involvement. To gather feedback about this idea, we conducted semi-structured interviews with eight exercise-therapy professionals. Our findings indicate that experts' attitudes toward social exercise robots become more positive when considering the prospect of teleoperation to record and customize robot behaviors.

DOI Project Page [BibTex]



Creating a Haptic Empathetic Robot Animal That Feels Touch and Emotion

Burns, R.

University of Tübingen, Tübingen, Germany, February 2024, Department of Computer Science (phdthesis)

Abstract
Social touch, such as a hug or a poke on the shoulder, is an essential aspect of everyday interaction. Humans use social touch to gain attention, communicate needs, express emotions, and build social bonds. Despite its importance, touch sensing is very limited in most commercially available robots. By endowing robots with social-touch perception, one can unlock a myriad of new interaction possibilities. In this thesis, I present my work on creating a Haptic Empathetic Robot Animal (HERA), a koala-like robot for children with autism. I demonstrate the importance of establishing design guidelines based on one's target audience, which we investigated through interviews with autism specialists. I share our work on creating full-body tactile sensing for the NAO robot using low-cost, do-it-yourself (DIY) methods, and I introduce an approach to model long-term robot emotions using second-order dynamics.
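The thesis names second-order dynamics as the basis of the long-term emotion model; the sketch below shows one common way to realize that idea, as a damped spring pulling an emotion state back toward a baseline mood. All names and parameter values are illustrative assumptions, not taken from the thesis.

```python
class SecondOrderEmotion:
    """Emotion state driven by second-order dynamics: a damped spring
    pulls the state back toward a baseline mood, so touch inputs cause
    transient reactions that decay smoothly over time.
    """

    def __init__(self, baseline=0.0, omega=1.0, zeta=0.7):
        self.baseline = baseline   # long-term mood set point
        self.omega = omega         # natural frequency: reaction speed
        self.zeta = zeta           # damping ratio: <1 allows overshoot
        self.x = baseline          # current emotion value
        self.v = 0.0               # current rate of change

    def step(self, touch_input, dt=0.05):
        # x'' = omega^2 (baseline - x) - 2 zeta omega x' + touch_input
        accel = (self.omega ** 2) * (self.baseline - self.x) \
                - 2.0 * self.zeta * self.omega * self.v + touch_input
        self.v += accel * dt
        self.x += self.v * dt
        return self.x
```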

Project Page [BibTex]


2023


Gesture-Based Nonverbal Interaction for Exercise Robots

Mohan, M.

University of Tübingen, Tübingen, Germany, October 2023, Department of Computer Science (phdthesis)

Abstract
When teaching or coaching, humans augment their words with carefully timed hand gestures, head and body movements, and facial expressions to provide feedback to their students. Robots, however, rarely utilize these nuanced cues. A minimally supervised social robot equipped with these abilities could support people in exercising, physical therapy, and learning new activities. This thesis examines how the intuitive power of human gestures can be harnessed to enhance human-robot interaction. To address this question, this research explores gesture-based interactions to expand the capabilities of a socially assistive robotic exercise coach, investigating the perspectives of both novice users and exercise-therapy experts. This thesis begins by concentrating on the user's engagement with the robot, analyzing the feasibility of minimally supervised gesture-based interactions. This exploration seeks to establish a framework in which robots can interact with users in a more intuitive and responsive manner. The investigation then shifts its focus toward the professionals who are integral to the success of these innovative technologies: the exercise-therapy experts. Roboticists face the challenge of translating the knowledge of these experts into robotic interactions. We address this challenge by developing a teleoperation algorithm that can enable exercise therapists to create customized gesture-based interactions for a robot. Thus, this thesis lays the groundwork for dynamic gesture-based interactions in minimally supervised environments, with implications for not only exercise-coach robots but also broader applications in human-robot interaction.

Project Page [BibTex]



Enhancing Surgical Team Collaboration and Situation Awareness through Multimodal Sensing

Allemang–Trivalle, A.

In Proceedings of the ACM International Conference on Multimodal Interaction (ICMI), pages: 716-720, Extended abstract (5 pages) presented at the ICMI Doctoral Consortium, Paris, France, October 2023 (inproceedings)

Abstract
Surgery, typically seen as the surgeon's sole responsibility, requires a broader perspective acknowledging the vital roles of other operating room (OR) personnel. The interactions among team members are crucial for delivering quality care and depend on shared situation awareness. I propose a two-phase approach to design and evaluate a multimodal platform that monitors OR members, offering insights into surgical procedures. The first phase focuses on designing a data-collection platform, tailored to surgical constraints, to generate novel collaboration and situation-awareness metrics using synchronous recordings of the participants' voices, positions, orientations, electrocardiograms, and respiration signals. The second phase concerns the creation of intuitive dashboards and visualizations, aiding surgeons in reviewing recorded surgery, identifying adverse events and contributing to proactive measures. This work aims to demonstrate an innovative approach to data collection and analysis, augmenting the surgical team's capabilities. The multimodal platform has the potential to enhance collaboration, foster situation awareness, and ultimately mitigate surgical adverse events. This research sets the stage for a transformative shift in the OR, enabling a more holistic and inclusive perspective that recognizes that surgery is a team effort.

DOI [BibTex]



Wear Your Heart on Your Sleeve: Users Prefer Robots with Emotional Reactions to Touch and Ambient Moods

Burns, R. B., Ojo, F., Kuchenbecker, K. J.

In Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pages: 1914-1921, Busan, South Korea, August 2023 (inproceedings)

Abstract
Robots are increasingly being developed as assistants for household, education, therapy, and care settings. Such robots can use adaptive emotional behavior to communicate warmly and effectively with their users and to encourage interest in extended interactions. However, autonomous physical robots often lack a dynamic internal emotional state, instead displaying brief, fixed emotion routines to promote specific user interactions. Furthermore, despite the importance of social touch in human communication, most commercially available robots have limited touch sensing, if any at all. We propose that users' perceptions of a social robotic system will improve when the robot provides emotional responses on both shorter and longer time scales (reactions and moods), based on touch inputs from the user. We evaluated this proposal through an online study in which 51 diverse participants watched nine randomly ordered videos (a three-by-three full-factorial design) of the koala-like robot HERA being touched by a human. Users provided the highest ratings in terms of agency, ambient activity, enjoyability, and touch perceptivity for scenarios in which HERA showed emotional reactions and either neutral or emotional moods in response to social touch gestures. Furthermore, we summarize key qualitative findings about users' preferences for reaction timing, the ability of robot mood to show persisting memory, and perception of neutral behaviors as a curious or self-aware robot.

link (url) DOI Project Page [BibTex]



Augmenting Human Policies using Riemannian Metrics for Human-Robot Shared Control

Oh, Y., Passy, J., Mainprice, J.

In Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pages: 1612-1618, Busan, South Korea, August 2023 (inproceedings)

Abstract
We present a shared control framework for teleoperation that combines the human and autonomous robot agents operating in different dimension spaces. The shared control problem is an optimization problem to maximize the human's internal action-value function while guaranteeing that the shared control policy is close to the autonomous robot policy. This results in a state update rule that augments the human controls using the Riemannian metric that emerges from computing the curvature of the robot's value function, to account for any cost terms or constraints that the human operator may neglect when operating a redundant manipulator. In our experiments, we apply Linear Quadratic Regulators to locally approximate the robot policy using a single optimized robot trajectory, thereby avoiding the need for an optimization step at each time step to determine the optimal policy. We show preliminary results of reach-and-grasp teleoperation tasks with a simulated human policy and a pilot user study using a VR headset and controllers. However, the mixed user preference ratings and quantitative results show that more investigation is required to prove the efficacy of the proposed paradigm.
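To make the central idea concrete: under the paper's LQR assumption the quadratic value function V(x) = x^T P x has Hessian 2P, and scaling the human command by the inverse Hessian attenuates motion along directions of high cost curvature. The minimal Python sketch below illustrates only this mechanism; the paper's exact update law may differ, and the function name and gain are invented.

```python
import numpy as np

def augmented_state_update(x, u_human, P, alpha=0.1):
    """Shared-control sketch: with an LQR value function V(x) = x^T P x,
    the Hessian M = 2P acts as a Riemannian metric. Rescaling the human
    command by M^{-1} attenuates motion along directions where the
    robot's cost curves steeply, while preserving low-cost directions.
    """
    M = 2.0 * P                           # value-function curvature
    step = np.linalg.solve(M, u_human)    # metric-weighted step
    return x + alpha * step               # alpha: integration gain
```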

DOI [BibTex]



Naturalistic Vibrotactile Feedback Could Facilitate Telerobotic Assembly on Construction Sites

Gong, Y., Javot, B., Lauer, A. P. R., Sawodny, O., Kuchenbecker, K. J.

In Proceedings of the IEEE World Haptics Conference (WHC), pages: 169-175, Delft, The Netherlands, July 2023 (inproceedings)

Abstract
Telerobotics is regularly used on construction sites to build large structures efficiently. A human operator remotely controls the construction robot under direct visual feedback, but visibility is often poor. Future construction robots that move autonomously will also require operator monitoring. Thus, we designed a wireless haptic feedback system to provide the operator with task-relevant mechanical information from a construction robot in real time. Our AiroTouch system uses an accelerometer to measure the robot end-effector's vibrations and uses off-the-shelf audio equipment and a voice-coil actuator to display them to the user with high fidelity. A study was conducted to evaluate how this type of naturalistic vibration feedback affects the observer's understanding of telerobotic assembly on a real construction site. Seven adults without construction experience observed a mix of manual and autonomous assembly processes both with and without naturalistic vibrotactile feedback. Qualitative analysis of their survey responses and interviews indicated that all participants had positive responses to this technology and believed it would be beneficial for construction activities.

DOI Project Page [BibTex]



A Toolkit for Expanding Sustainability Engineering Utilizing Foundations of the Engineering for One Planet Initiative

Schulz, A., Anderson, C. D., Cooper, C., Roberts, D., Loyo, J., Lewis, K., Kumar, S., Rolf, J., Marulanda, N. A. G.

In Proceedings of the American Society for Engineering Education (ASEE), Baltimore, USA, June 2023, Andrew Schulz, Cindy Cooper, and Cindy Anderson contributed equally. (inproceedings)

Abstract
Recently, there has been a significant push to prepare all engineers with skills in sustainability, motivated by industry needs, accreditation requirements, and international efforts such as the National Science Foundation's 10 Big Ideas and Grand Challenges and the United Nations' Sustainable Development Goals (SDGs). This paper discusses a new toolkit to enable broad dissemination of vetted tools to help engineering faculty members teach sustainability using resources from the Engineering for One Planet (EOP) initiative. This toolkit is to be used as a mechanism to engage a diversity of stakeholders to use their voices, experiences, and connections to share the need for national curricular change in engineering education widely. This toolkit can foster the integration of sustainability-focused learning outcomes into engineering courses and programs. This is particularly important for graduating engineers at this crucial time when we collectively face a convergence of national- and global-scale planetary crises that professional engineers will directly and indirectly impact. Catalyzed by The Lemelson Foundation and VentureWell, the EOP initiative provides teaching tools, grants, and support for the EOP Network (a volunteer action network) comprising diverse stakeholders collectively seeking to transform engineering education to equip all engineers with the understanding, knowledge, skills, and mindsets to ensure their work contributes to a healthy world. The EOP Framework, a fundamental resource of the initiative, provides a curated and vetted list of ABET-aligned sustainability-focused student learning outcomes, both core and advanced. It covers social and environmental sustainability topics and essential professional skills such as communication, teamwork, and critical thinking. It was designed as a practical implementation tool, rather than a research framework, to help educators embed sustainability concepts and tools into engineering courses and programs at all levels. The Lemelson Foundation has provided a range of grants to support curricular transformation efforts using the EOP Framework. With support from The Lemelson Foundation, ASEE launched an EOP Mini-Grant Program in 2022 to engender curricular changes using the EOP Framework. The EOP Network is working to extend the reach of the Framework across the ASEE community beyond initial pilot programs by implementing an EOP Toolkit for EOP Network members and other stakeholders to use at their home institutions, conferences, and informative workshops. This article describes the rationale for creating the EOP Toolkit, the development process, content examples, and use scenarios.

[BibTex]



Utilizing Online and Open-Source Machine Learning Toolkits to Leverage the Future of Sustainable Engineering

Schulz, A., Stathatos, S., Shriver, C., Moore, R.

In Proceedings of the American Society for Engineering Education (ASEE), Baltimore, USA, June 2023, Andrew Schulz and Suzanne Stathatos are co-first authors. (inproceedings)

Abstract
Recently, there has been a national push to use machine learning (ML) and artificial intelligence (AI) to advance engineering techniques in all disciplines, ranging from advanced fracture mechanics in materials science to soil and water quality testing in the civil and environmental engineering fields. Using AI, specifically machine learning, engineers can automate and decrease the processing or human labeling time while maintaining statistical repeatability via trained models and sensors. Edge Impulse has designed an open-source TinyML-enabled Arduino education toolkit for engineering disciplines. This paper discusses the various applications and approaches engineering educators have taken to utilize ML toolkits in the classroom. We provide in-depth implementation guides and associated learning outcomes focused on the environmental engineering classroom. We discuss five specific examples from four standard environmental engineering courses for freshman- and junior-level engineering students. There are currently few programs in the nation that utilize machine learning toolkits to prepare the next generation of ML- and AI-educated engineers for industry and academic careers. This paper will guide educators in designing and implementing ML/AI in engineering curricula (without a specific AI or ML focus within the course) using simple, cheap, and open-source tools and technological aid from an online platform in collaboration with Edge Impulse.

DOI [BibTex]



Reconstructing Signing Avatars from Video Using Linguistic Priors

Forte, M., Kulits, P., Huang, C. P., Choutas, V., Tzionas, D., Kuchenbecker, K. J., Black, M. J.

In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages: 12791-12801, June 2023 (inproceedings)

Abstract
Sign language (SL) is the primary method of communication for the 70 million Deaf people around the world. Video dictionaries of isolated signs are a core SL learning tool. Replacing these with 3D avatars can aid learning and enable AR/VR applications, improving access to technology and online media. However, little work has attempted to estimate expressive 3D avatars from SL video; occlusion, noise, and motion blur make this task difficult. We address this by introducing novel linguistic priors that are universally applicable to SL and provide constraints on 3D hand pose that help resolve ambiguities within isolated signs. Our method, SGNify, captures fine-grained hand pose, facial expression, and body movement fully automatically from in-the-wild monocular SL videos. We evaluate SGNify quantitatively by using a commercial motion-capture system to compute 3D avatars synchronized with monocular video. SGNify outperforms state-of-the-art 3D body-pose- and shape-estimation methods on SL videos. A perceptual study shows that SGNify's 3D reconstructions are significantly more comprehensible and natural than those of previous methods and are on par with the source videos. Code and data are available at sgnify.is.tue.mpg.de.

pdf arXiv project code DOI [BibTex]


2022


Multi-Timescale Representation Learning of Human and Robot Haptic Interactions

Richardson, B.

University of Stuttgart, Stuttgart, Germany, December 2022, Faculty of Computer Science, Electrical Engineering and Information Technology (phdthesis)

Abstract
The sense of touch is one of the most crucial components of the human sensory system. It allows us to safely and intelligently interact with the physical objects and environment around us. By simply touching or dexterously manipulating an object, we can quickly infer a multitude of its properties. For more than fifty years, researchers have studied how humans physically explore and form perceptual representations of objects. Some of these works proposed the paradigm through which human haptic exploration is presently understood: humans use a particular set of exploratory procedures to elicit specific semantic attributes from objects. Others have sought to understand how physically measured object properties correspond to human perception of semantic attributes. Few, however, have investigated how specific explorations are perceived. As robots become increasingly advanced and more ubiquitous in daily life, they are beginning to be equipped with haptic sensing capabilities and algorithms for processing and structuring haptic information. Traditional haptics research has so far strongly influenced the introduction of haptic sensation and perception into robots but has not proven sufficient to give robots the necessary tools to become intelligent autonomous agents. The work presented in this thesis seeks to understand how single and sequential haptic interactions are perceived by both humans and robots. In our first study, we depart from the more traditional methods of studying human haptic perception and investigate how the physical sensations felt during single explorations are perceived by individual people. We treat interactions as probability distributions over a haptic feature space and train a model to predict how similarly a pair of surfaces is rated, predicting perceived similarity with a reasonable degree of accuracy. Our novel method also allows us to evaluate how individual people weigh different surface properties when they make perceptual judgments. The method is highly versatile and presents many opportunities for further studies into how humans form perceptual representations of specific explorations. Our next body of work explores how to improve robotic haptic perception of single interactions. We use unsupervised feature-learning methods to derive powerful features from raw robot sensor data and classify robot explorations into numerous haptic semantic property labels that were assigned from human ratings. Additionally, we provide robots with more nuanced perception by learning to predict graded ratings of a subset of properties. Our methods outperform previous attempts that all used hand-crafted features, demonstrating the limitations of such traditional approaches. To push robot haptic perception beyond evaluation of single explorations, our final work introduces and evaluates a method to give robots the ability to accumulate information over many sequential actions; our approach essentially takes advantage of object permanence by conditionally and recursively updating the representation of an object as it is sequentially explored. We implement our method on a robotic gripper platform that performs multiple exploratory procedures on each of many objects. As the robot explores objects with new procedures, it gains confidence in its internal representations and classification of object properties, thus moving closer to the marvelous haptic capabilities of humans and providing a solid foundation for future research in this domain.
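The final contribution, accumulating evidence over sequential explorations, can be pictured as a recursive Bayesian update. The sketch below (Python assumed) shows only the bare mechanism, with placeholder likelihood vectors standing in for the thesis's learned per-action classifiers.

```python
import numpy as np

def recursive_property_update(prior, likelihoods):
    """Recursively fuse per-exploration evidence about one object:
    multiply the running belief over property labels by each new
    action's likelihood vector and renormalize, exploiting object
    permanence across sequential exploratory procedures.
    """
    belief = np.asarray(prior, dtype=float)
    for like in likelihoods:                  # one vector per exploration
        belief *= np.asarray(like, dtype=float)
        belief /= belief.sum()
    return belief

# Example: two candidate labels (e.g., 'hard' vs. 'soft'), three actions.
print(recursive_property_update([0.5, 0.5],
                                [[0.6, 0.4], [0.7, 0.3], [0.55, 0.45]]))
```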

link (url) Project Page [BibTex]



Understanding the Influence of Moisture on Fingerpad-Surface Interactions

Nam, S.

University of Tübingen, Tübingen, Germany, October 2022, Department of Computer Science (phdthesis)

Abstract
People frequently touch objects with their fingers. The physical deformation of a finger pressing an object surface stimulates mechanoreceptors, resulting in a perceptual experience. Through interactions between perceptual sensations and motor control, humans naturally acquire the ability to manage friction under various contact conditions. Many researchers have advanced our understanding of human fingers to this point, but their complex structure and the variations in friction they experience due to continuously changing contact conditions necessitate additional study. Moisture is a primary factor that influences many aspects of the finger. In particular, sweat excreted from the numerous sweat pores on the fingerprints modifies the finger's material properties and the contact conditions between the finger and a surface. Measuring changes of the finger's moisture over time and in response to external stimuli presents a challenge for researchers, as commercial moisture sensors do not provide continuous measurements. This dissertation investigates the influence of moisture on fingerpad-surface interactions from diverse perspectives. First, we examine the extent to which moisture on the finger contributes to the sensation of stickiness during contact with glass. Second, we investigate the representative material properties of a finger at three distinct moisture levels, since the softness of human skin varies significantly with moisture. The third perspective is friction; we examine how the contact conditions, including the moisture of a finger, determine the available friction force opposing lateral sliding on glass. Fourth, we have invented and prototyped a transparent in vivo moisture sensor for the continuous measurement of finger hydration. In the first part of this dissertation, we explore how the perceptual intensity of light stickiness relates to the physical interaction between the skin and the surface. We conducted a psychophysical experiment in which nine participants actively pressed their index finger on a flat glass plate with a normal force close to 1.5 N and then detached it after a few seconds. A custom-designed apparatus recorded the contact force vector and the finger contact area during each interaction as well as pre- and post-trial finger moisture. After detaching their finger, participants judged the stickiness of the glass using a nine-point scale. We explored how sixteen physical variables derived from the recorded data correlate with each other and with the stickiness judgments of each participant. These analyses indicate that stickiness perception mainly depends on the pre-detachment pressing duration, the time taken for the finger to detach, and the impulse in the normal direction after the normal force changes sign; finger-surface adhesion seems to build with pressing time, causing a larger normal impulse during detachment and thus a more intense stickiness sensation. We additionally found a strong between-subjects correlation between maximum real contact area and peak pull-off force, as well as between finger moisture and impulse. When a fingerpad presses into a hard surface, the development of the contact area depends on the pressing force and speed. Importantly, it also varies with the finger's moisture, presumably because hydration changes the tissue's material properties. 
Therefore, for the second part of this dissertation, we collected data from one finger repeatedly pressing a glass plate under three moisture conditions, and we constructed a finite element model that we optimized to simulate the same three scenarios. We controlled the moisture of the subject's finger to be dry, natural, or moist and recorded 15 pressing trials in each condition. The measurements include normal force over time plus finger-contact images that are processed to yield gross contact area. We defined the axially symmetric 3D model's lumped parameters to include an SLS-Kelvin model (spring in series with parallel spring and damper) for the bulk tissue, plus an elastic epidermal layer. Particle swarm optimization was used to find the parameter values that cause the simulation to best match the trials recorded in each moisture condition. The results show that the softness of the bulk tissue reduces as the finger becomes more hydrated. The epidermis of the moist finger model is softest, while the natural finger model has the highest viscosity. In the third part of this dissertation, we focused on friction between the fingerpad and the surface. The magnitude of finger-surface friction available at the onset of full slip is crucial for understanding how the human hand can grip and manipulate objects. Related studies revealed the significance of moisture and contact time in enhancing friction. Recent research additionally indicated that surface temperature may also affect friction. However, previously reported friction coefficients have been measured only in dynamic contact conditions, where the finger is already sliding across the surface. In this study, we repeatedly measured the initial friction before full slip under eight contact conditions with low and high finger moisture, pressing time, and surface temperature. Moisture and pressing time both independently increased finger-surface friction across our population of twelve participants, and the effect of surface temperature depended on the contact conditions. Furthermore, detailed analysis of the recorded measurements indicates that micro stick-slip during the partial-slip phase contributes to enhanced friction. For the fourth and final part of this dissertation, we designed a transparent moisture sensor for continuous measurement of fingerpad hydration. Because various stimuli cause the sweat pores on fingerprints to excrete sweat, many researchers want to quantify the flow and assess its impact on the formation of the contact area. Unfortunately, the most popular sensor for skin hydration is opaque and does not offer continuous measurements. Our capacitive moisture sensor consists of a pair of inter-digital electrodes covered by an insulating layer, enabling impedance measurements across a wide frequency range. This proposed sensor is made entirely of transparent materials, which allows us to simultaneously measure the finger's contact area. Electrochemical impedance spectroscopy identifies the equivalent electrical circuit and the electrical component parameters that are affected by the amount of moisture present on the surface of the sensor. Most notably, the impedance at 1 kHz seems to best reflect the relative amount of sweat.
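For readers unfamiliar with the SLS-Kelvin (standard linear solid) model mentioned above, the sketch below integrates its governing force-displacement relation for a given indentation trajectory. The stiffness and damping values are placeholders, not the optimized parameters reported in the dissertation.

```python
import numpy as np

def sls_kelvin_force(x, dt, k1=1.0e3, k2=5.0e2, c=50.0):
    """Integrate the SLS-Kelvin model (spring k1 in series with a
    parallel spring k2 and damper c) for an indentation history x(t):
        (k1 + k2) * F + c * dF/dt = k1*k2*x + k1*c*dx/dt
    using forward Euler. Stiffness in N/m, damping in N*s/m.
    """
    x = np.asarray(x, dtype=float)
    xdot = np.gradient(x, dt)                 # numerical indentation rate
    F = np.zeros_like(x)
    for i in range(1, len(x)):
        Fdot = (k1 * k2 * x[i - 1] + k1 * c * xdot[i - 1]
                - (k1 + k2) * F[i - 1]) / c
        F[i] = F[i - 1] + Fdot * dt
    return F
```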

DOI Project Page [BibTex]



Towards Semi-Automated Pleural Cavity Access for Pneumothorax in Austere Environments

L’Orsa, R., Lama, S., Westwick, D., Sutherland, G., Kuchenbecker, K. J.

In Proceedings of the International Astronautical Congress (IAC), pages: 1-7, Paris, France, September 2022 (inproceedings)

Abstract
Pneumothorax, a condition where injury or disease introduces air between the chest wall and lungs, can impede lung function and lead to respiratory failure and/or obstructive shock. Chest trauma from dynamic loads, hypobaric exposure from extravehicular activity, and pulmonary inflammation from celestial dust exposures could potentially cause pneumothoraces during spaceflight with or without exacerbation from deconditioning. On Earth, emergent cases are treated with chest tube insertion (tube thoracostomy, TT) when available, or needle decompression (ND) when not (i.e., pre-hospital). However, ND has high failure rates (up to 94%), and TT has high complication rates (up to 37.9%), especially when performed by inexperienced or intermittent operators. Thus neither procedure is ideal for a pure just-in-time training or skill refreshment approach, and both may require adjuncts for safe inclusion in Level of Care IV (e.g., short duration lunar orbit) or V (e.g., Mars transit) missions. Insertional complications are of particular concern since they cause inadvertent tissue damage that, while surgically repairable in an operating room, could result in (preventable) fatality in a spacecraft or other isolated, confined, or extreme (ICE) environments. Tools must be positioned and oriented correctly to avoid accidental insertion into critical structures, and they must be inserted no further than the thin membrane lining the inside of the rib cage (i.e., the parietal pleura). Operators identify pleural puncture via loss-of-resistance sensations on the tool during advancement, but experienced surgeons anecdotally describe a wide range of membrane characteristics: robust tissues require significant force to perforate, while fragile tissues deliver little-to-no haptic sensation when pierced. Both extremes can lead to tool overshoot and may be representative of astronaut tissues at the beginning (healthy) and end (deconditioned) of long duration exploration class missions. Given uncertainty surrounding physician astronaut selection criteria, skill retention, and tissue condition, an adjunct for improved insertion accuracy would be of value. We describe experiments conducted with an intelligent prototype sensorized system aimed at semi-automating tool insertion into the pleural cavity. The assembly would integrate with an in-mission medical system and could be tailored to fully complement an autonomous medical response agent. When coupled with minimal just-in-time training, it has the potential to bestow expert pleural access skills on non-expert operators without the use of ground resources, in both emergent and elective treatment scenarios.

Project Page [BibTex]



How Long Does It Take to Learn Trimanual Coordination?

Allemang–Trivalle, A., Eden, J., Ivanova, E., Huang, Y., Burdet, E.

In Proceedings of the IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), pages: 211-216, Napoli, Italy, August 2022 (inproceedings)

Abstract
Supernumerary robotic limbs can act as intelligent prostheses or augment the motion of healthy people to achieve actions that are not possible with only two natural hands. However, as trimanual control is not typical in everyday activities, it is still unknown how different training could influence its acquisition. We conducted an experimental study to evaluate the impact of different forms of trimanual action on training. Two groups of twelve subjects were each trained in virtual reality for five weeks using either a task with three independent goals or a task with one dependent goal. The success of their training was then evaluated by comparing their task performance and motion characteristics between sessions. The results show that subjects dramatically improved their trimanual task performance as a result of training. However, while they showed improved motion efficiency and reduced workload for tasks with multiple independent goals with practice, no such improvement was observed when they trained with the task with one coordinated goal.

DOI [BibTex]



Comparison of Human Trimanual Performance Between Independent and Dependent Multiple-Limb Training Modes

Allemang–Trivalle, A., Eden, J., Huang, Y., Ivanova, E., Burdet, E.

In Proceedings of the IEEE RAS/EMBS International Conference for Biomedical Robotics and Biomechatronics (BioRob), Seoul, Korea, August 2022 (inproceedings)

Abstract
Human movement augmentation with a third robotic hand can extend human capability, allowing a single user to perform three-hand tasks that would typically require cooperation with other people. However, as trimanual control is not typical in everyday activities, it is still unknown how to train people to acquire this capability efficiently. We conducted an experimental study to evaluate two different trimanual training modes with 24 subjects. This study investigated how the different modes impact the transfer of the acquired trimanual capability to another task. Two groups of twelve subjects were each trained in virtual reality for five weeks using either independent or dependent trimanual task repetitions. The training was evaluated by comparing performance before and after training in a gamified trimanual task. The results show that both groups of subjects improved their trimanual capabilities after training. However, this improvement appeared to be independent of the training scheme.

DOI [BibTex]



Wrist-Squeezing Force Feedback Improves Accuracy and Speed in Robotic Surgery Training

Machaca, S., Cao, E., Chi, A., Adrales, G., Kuchenbecker, K. J., Brown, J. D.

In Proceedings of the IEEE RAS/EMBS International Conference for Biomedical Robotics and Biomechatronics (BioRob), pages: 700-707, Seoul, Korea, August 2022 (inproceedings)

Abstract
Current robotic minimally invasive surgery (RMIS) platforms provide surgeons with no haptic feedback of the robot's physical interactions. This limitation forces surgeons to rely heavily on visual feedback and can make it challenging for surgical trainees to manipulate tissue gently. Prior research has demonstrated that haptic feedback can increase task accuracy in RMIS training. However, it remains unclear whether these improvements represent a fundamental improvement in skill, or if they simply stem from re-prioritizing accuracy over task completion time. In this study, we provide haptic feedback of the force applied by the surgical instruments using custom wrist-squeezing devices. We hypothesize that individuals receiving haptic feedback will increase accuracy (produce less force) while increasing their task completion time, compared to a control group receiving no haptic feedback. To test this hypothesis, N=21 novice participants were asked to repeatedly complete a ring rollercoaster surgical training task as quickly as possible. Results show that participants receiving haptic feedback apply significantly less force (0.67 N) than the control group, and they complete the task no faster or slower than the control group after twelve repetitions. Furthermore, participants in the feedback group decreased their task completion times significantly faster (7.68 %) than participants in the control group (5.26 %). This form of haptic feedback, therefore, has the potential to help trainees improve their technical accuracy without compromising speed.

link (url) DOI Project Page [BibTex]



A Foundational Design Experience in Conservation Technology: A Multi-Disciplinary Approach to meeting Sustainable Development Goals

Schulz, A., Shriver, C., Seleb, B., Greiner, C., Hu, D., Moore, R., Zhang, M., Jadali, N., Patka, A.

Proceedings of the American Society for Engineering Education (ASEE), pages: 1-12, Minneapolis, USA, June 2022, Best Paper Award (conference)

Abstract
Project-based courses allow students to apply techniques they have learned in their undergraduate engineering curriculum to real-world problems. While many students have demonstrated interest in working on humanitarian projects that address the United Nations' Sustainable Development Goals (SDGs), these projects typically require longer timelines than a single-semester capstone course will allow. To encourage student participation in achieving the SDGs, we have created an interdisciplinary course that allows sophomore- through senior-level undergraduate students to engage in utilizing human-wildlife-centered design to work on projects that prevent extinction and promote healthy human-wildlife co-habitation. This field, known as Conservation Technology (CT), helps students 1) understand the complexities of solutions to the SDGs and the need for diverse perspectives, 2) find and apply international conservation guidelines, 3) develop teamwork and leadership skills by working on interdisciplinary teams, and 4) evaluate and assess conservation technology projects for multiple stakeholders and in the context of the SDGs. Students may take this course for several sequential semesters, partnering with more senior and junior students, allowing for long-term engagement in sustainability solutions. In the first half of the semester, we leverage more traditional pedagogical approaches, including formative assessments and in-class lectures on conservation, technology, and sustainability solutions. In the second half of the semester, we utilize peer, instructor, and expert reviews of the projects students work on to help them excel at successful and equitable conservation technology interventions. Through nine formal interviews conducted with students, we discovered themes that students identified as most critical for engaging in conservation technology initiatives. These themes include 1) perspective given to students through in-person, active learning using the Dilemma, Issue, Question approach, 2) independent learning of conservation technology background and theory during the beginning of the course, and 3) hands-on learning and project-focused experiences in CT. To leverage engineers' engagement in the SDGs, students needed half a semester of background information to allow for an adequate understanding of the complexities of humanitarian aid projects. This paper discusses the course structure that will help integrate the Sustainable Development Goals into engineering curricula and uses the Conservation Technology projects in the course as case studies for interdisciplinary learning.

link (url) [BibTex]



Larger Skin-Surface Contact Through a Fingertip Wearable Improves Roughness Perception

Gueorguiev, D., Javot, B., Spiers, A., Kuchenbecker, K. J.

In Haptics: Science, Technology, Applications, pages: 171-179, Lecture Notes in Computer Science, 13235, (Editors: Seifi, Hasti and Kappers, Astrid M. L. and Schneider, Oliver and Drewing, Knut and Pacchierotti, Claudio and Abbasimoshaei, Alireza and Huisman, Gijs and Kern, Thorsten A.), Springer, Cham, 13th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications (EuroHaptics 2022), May 2022 (inproceedings)

Abstract
With the aim of creating wearable haptic interfaces that allow the performance of everyday tasks, we explore how differently designed fingertip wearables change the sensory threshold for tactile roughness perception. Study participants performed the same two-alternative forced-choice roughness task with a bare finger and wearing three flexible fingertip covers: two with a square opening (64 and 36 mm2, respectively) and the third with no opening. The results showed that adding the large opening improved the 75% JND by a factor of two compared to the fully covered finger: the larger the skin-surface contact area, the better the roughness perception. Overall, the results show that even partial skin-surface contact through a fingertip wearable improves roughness perception, which opens design opportunities for haptic wearables that preserve natural touch.
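As context for the 75% JND measure, the sketch below shows a standard way to estimate it from two-alternative forced-choice responses by fitting a cumulative-Gaussian psychometric function. The function name and data layout are assumptions, not the study's analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def fit_jnd_75(stimulus_deltas, p_correct):
    """Fit a cumulative-Gaussian psychometric function to 2AFC data and
    return the stimulus difference at 75% correct (the classic JND).
    With chance at 50%, p(delta) = 0.5 + 0.5*Phi((delta - mu)/sigma)
    equals 0.75 exactly at delta = mu.
    """
    def psychometric(delta, mu, sigma):
        return 0.5 + 0.5 * norm.cdf(delta, loc=mu, scale=sigma)

    (mu, sigma), _ = curve_fit(
        psychometric, stimulus_deltas, p_correct,
        p0=[np.median(stimulus_deltas), np.std(stimulus_deltas) + 1e-6])
    return mu   # 75%-correct point of the fitted curve
```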

DOI [BibTex]



Toward the UN's Sustainable Development Goals (SDGs): Conservation Technology for Design Teaching & Learning

Schulz, A., Seleb, B., Shriver, C., Hu, D., Moore, R.

pages: 1-9, Charlotte, USA, March 2022 (conference)

Abstract
Interdisciplinary capstone team projects have provided a more diverse array of student experiences and have been shown to improve a team's innovation, analysis, and communication. The UN's 17 Sustainable Development Goals (SDGs) provide aspirational, human-centered design opportunities for applying engineering practices to real-world technology interventions that aid in programs from public health to wildlife conservation. In this sophomore-level design course, we focused on climate change, the impact of life on the sea, and the impact of life on the land through the lens of conservation technology. Conservation Technology is a relatively new field focusing on the creation of technologies to promote and safeguard sustainable human-wildlife interactions. In this manuscript, we describe the framework for teaching a Conservation Technology project-based capstone engineering course and present observations of monodisciplinary and interdisciplinary team practices. When working in a non-interdisciplinary team, engineers tended to focus only on the design deliverables and missed challenges imposed by policy, biology, and computational requirements. These three challenges are nearly always present in conservation technology interventions. In contrast, the interdisciplinary team was able to better identify the diverse challenges associated with conservation technology interventions. This work-in-progress paper focuses on the development of an organized curriculum to teach conservation technology to first- and second-year engineers, allowing them to work towards more sustainable engineering practices. Universities are working to inject the SDGs into the engineering curriculum, and we believe Conservation Technology may be an ideal fit for combining the engineering design process with the scientific method to discover new types of possible failures in design and create innovative solutions for a sustainable future.

link (url) [BibTex]



Robot, Pass Me the Tool: Handle Visibility Facilitates Task-Oriented Handovers

Ortenzi, V., Filipovica, M., Abdlkarim, D., Pardi, T., Takahashi, C., Wing, A. M., Luca, M. D., Kuchenbecker, K. J.

In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI), pages: 256-264, March 2022, Valerio Ortenzi and Maija Filipovica contributed equally to this publication. (inproceedings)

Abstract
A human handing over an object modulates their grasp and movements to accommodate their partner's capabilities, which greatly increases the likelihood of a successful transfer. State-of-the-art robot behavior lacks this level of user understanding, resulting in interactions that force the human partner to shoulder the burden of adaptation. This paper investigates how visual occlusion of the object being passed affects the subjective perception and quantitative performance of the human receiver. We performed an experiment in virtual reality where seventeen participants were tasked with repeatedly reaching to take a tool from the hand of a robot; each of the three tested objects (hammer, screwdriver, scissors) was presented in a wide variety of poses. We carefully analysed the user's hand and head motions, the time to grasp the object, and the chosen grasp location, as well as participants' ratings of the grasp they just performed. Results show that initial visibility of the handle significantly increases the reported holdability and immediate usability of a tool. Furthermore, a robot that offers objects so that their handles are more occluded forces the receiver to spend more time in planning and executing the grasp and also lowers the probability that the tool will be grasped by the handle. Together these findings indicate that robots can more effectively support their human work partners by increasing the visibility of the intended grasp location of objects being passed.

DOI Project Page [BibTex]


2021


Sensorimotor-Inspired Tactile Feedback and Control Improve Consistency of Prosthesis Manipulation in the Absence of Direct Vision

Thomas, N., Fazlollahi, F., Brown, J. D., Kuchenbecker, K. J.

In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages: 6174-6181, Prague, Czech Republic, September 2021 (inproceedings)

Abstract
The lack of haptically aware upper-limb prostheses forces amputees to rely largely on visual cues to complete activities of daily living. In contrast, non-amputees inherently rely on conscious haptic perception and automatic tactile reflexes to govern volitional actions in situations that do not allow for constant visual attention. We therefore propose a myoelectric prosthesis system that reflects these concepts to aid manipulation performance without direct vision. To implement this design, we constructed two fabric-based tactile sensors that measure contact location along the palmar and dorsal sides of the prosthetic fingers and grasp pressure at the tip of the prosthetic thumb. Inspired by the natural sensorimotor system, we use the measurements from these sensors to provide vibrotactile feedback of contact location and implement a tactile grasp controller with reflexes that prevent over-grasping and object slip. We compare this tactile system to a standard myoelectric prosthesis in a challenging reach-to-pick-and-place task conducted without direct vision; 17 non-amputee adults took part in this single-session between-subjects study. Participants in the tactile group achieved more consistent high performance compared to participants in the standard group. These results show that adding contact-location feedback and reflex control increases the consistency with which objects can be grasped and moved without direct vision in upper-limb prosthetics.
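A toy version of the reflex logic described above might look like the following; thresholds, units, and the function interface are invented for illustration and do not reproduce the paper's controller.

```python
def reflex_grasp_command(pressure, pressure_rate, user_cmd,
                         grip_threshold=0.3, slip_rate=-0.05):
    """Reflexive grasp logic: regrip at full speed when fingertip
    pressure drops rapidly (slip reflex), freeze closing once pressure
    shows a secure grasp (anti-over-grasp), and otherwise pass the
    user's myoelectric closing command straight through.
    """
    if pressure_rate < slip_rate:    # sudden pressure loss: slip reflex
        return 1.0                   # close at full speed to regrip
    if pressure > grip_threshold:    # already holding firmly enough
        return 0.0                   # ignore further closing commands
    return user_cmd                  # defer to the user's command
```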

DOI Project Page [BibTex]



HuggieBot: An Interactive Hugging Robot With Visual and Haptic Perception

Block, A. E.

ETH Zürich, Zürich, Switzerland, August 2021, Department of Computer Science (phdthesis)

Abstract
Hugs are one of the first forms of contact and affection humans experience. Receiving a hug is one of the best ways to feel socially supported, and the lack of social touch can have severe adverse effects on an individual's well-being. Due to the prevalence and health benefits of hugging, roboticists are interested in creating robots that can hug humans as seamlessly as humans hug other humans. However, hugs are complex affective interactions that need to adapt to the height, body shape, and preferences of the hugging partner, and they often include intra-hug gestures like squeezes. This dissertation aims to create a series of hugging robots that use visual and haptic perception to provide enjoyable interactive hugs. Each of the four presented HuggieBot versions is evaluated by measuring how users emotionally and behaviorally respond to hugging it; HuggieBot 4.0 is explicitly compared to a human hugging partner using physiological measures. Building on research both within and outside of human-robot interaction (HRI), this thesis proposes eleven tenets of natural and enjoyable robotic hugging. These tenets were iteratively crafted through a design process combining user feedback and experimenter observation, and they were evaluated through user studies. A good hugging robot should (1) be soft, (2) be warm, (3) be human-sized, (4) autonomously invite the user for a hug when it detects someone in its personal space, and then it should wait for the user to begin walking toward it before closing its arms to ensure a consensual and synchronous hugging experience. It should also (5) adjust its embrace to the user's size and position, (6) reliably release when the user wants to end the hug, and (7) perceive the user's height and adapt its arm positions accordingly to comfortably fit around the user at appropriate body locations. Finally, a hugging robot should (8) accurately detect and classify gestures applied to its torso in real time, regardless of the user's hand placement, (9) respond quickly to their intra-hug gestures, (10) adopt a gesture paradigm that blends user preferences with slight variety and spontaneity, and (11) occasionally provide unprompted, proactive affective social touch to the user through intra-hug gestures. We believe these eleven tenets are essential to delivering high-quality robot hugs. Their presence results in a hug that pleases the user, and their absence results in a hug that is likely to be inadequate. We present these tenets as guidelines for future hugging robot creators to follow when designing new hugging robots to ensure user acceptance. We tested the four versions of HuggieBot through six user studies. First, we analyzed data collected in a previous study with a modified Willow Garage Personal Robot 2 (PR2) to evaluate human responses to different robot physical characteristics and hugging behaviors. Participants experienced and evaluated twelve hugs with the robot, divided into three randomly ordered trials that focused on physical robot characteristics (single factor, three levels) and nine randomly ordered trials with low, medium, and high hug pressure and duration (two factors, three levels each). Second, we created an entirely new robotic platform, HuggieBot 2.0, according to our first six tenets. The new platform features a soft, warm, inflated body (HuggieChest) and uses visual and haptic sensing to deliver closed-loop hugging. 
We first verified the outward appeal of this platform compared to the previous PR2-based HuggieBot 1.0 via an online video-watching study involving 117 users. We then conducted an in-person experiment in which 32 users each exchanged eight hugs with HuggieBot 2.0, experiencing all combinations of visual hug initiation, haptic sizing, and haptic releasing. We then refine the original fourth tenet (visually perceive its user) and present the remaining five tenets for designing interactive hugging robots; we validate the full list of eleven tenets through more in-person studies with our custom robot. To enable perceptive and pleasing autonomous robot behavior, we investigated robot responses to four human intra-hug gestures: holding, rubbing, patting, and squeezing. The microphone and pressure sensor in the robot's inflated torso collected data from 32 people repeatedly demonstrating these gestures; these data were used to develop a perceptual algorithm that classifies user actions with 88% accuracy. From user preferences, we created a probabilistic behavior algorithm that chooses robot responses in real time. We implemented improvements to the robot platform to create a third version of our robot, HuggieBot 3.0. We then validated its gesture perception system and behavior algorithm in a fifth user study with 16 users. Finally, we refined the quality and comfort of the embrace by adjusting the joint torques and joint angles of the closed pose position, we further improved the robot's visual perception to detect changes in user approach, we upgraded the robot's response to users who do not press on its back, and we had the robot respond to all intra-hug gestures with squeezes to create our final version of the robotic platform, HuggieBot 4.0. In our sixth user study, we investigated the emotional and physiological effects of hugging a robot compared to the effects of hugging a friendly but unfamiliar person. We continuously monitored participant heart rate and collected saliva samples at seven time points across the 3.5-hour study to measure the temporal evolution of cortisol and oxytocin. We used an adapted Trier Social Stress Test (TSST) protocol to reliably and ethically induce stress in the participants. They then experienced one of five different hug intervention methods before all of them interacted with HuggieBot 4.0. The results of these six user studies validated our eleven hugging tenets and informed the iterative design of HuggieBot. We see that users enjoy robot softness, robot warmth, and being physically squeezed by the robot. Users dislike being released too soon from a hug and equally dislike being held by the robot for too long. Adding haptic reactivity definitively improves user perception of a hugging robot; the robot's responses and proactive intra-hug gestures were greatly enjoyed. In our last study, we learned that HuggieBot can positively affect users on a physiological level and is somewhat comparable to hugging a person. Participants have more favorable opinions about hugging robots after prolonged interaction with HuggieBot in all of our research studies.
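As a concrete illustration of the gesture paradigm in tenet (10), the sketch below shows one way a probabilistic response policy could blend stored user preferences with a uniform spontaneity term. The response names, preference weights, and blending rule are illustrative assumptions, not HuggieBot's actual implementation.

import random

# Hypothetical robot responses to a detected intra-hug gesture.
RESPONSES = ["hold", "rub", "pat", "squeeze"]

def choose_response(user_preferences, spontaneity=0.2):
    """Blend the user's preference weights with a uniform term so the
    robot stays mostly predictable but occasionally varies its response."""
    uniform = 1.0 / len(RESPONSES)
    weights = [(1 - spontaneity) * user_preferences.get(r, 0.0) + spontaneity * uniform
               for r in RESPONSES]
    return random.choices(RESPONSES, weights=weights, k=1)[0]

# Example: a user who mostly likes squeezes still gets an occasional rub or pat.
prefs = {"hold": 0.1, "rub": 0.1, "pat": 0.1, "squeeze": 0.7}
print(choose_response(prefs))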

DOI Project Page [BibTex]


Optimal Grasp Selection, and Control for Stabilising a Grasped Object, with Respect to Slippage and External Forces

Pardi, T., Ghalamzan E., A., Ortenzi, V., Stolkin, R.

In Proceedings of the IEEE-RAS International Conference on Humanoid Robots (Humanoids 2020), pages: 429-436, Munich, Germany, July 2021 (inproceedings)

Abstract
This paper explores the problem of how to grasp an object, and then control a robot arm so as to stabilise that object, under conditions where: i) there is significant slippage between the object and the robot's fingers; and ii) the object is perturbed by external forces. For an n degrees of freedom (dof) robot, we treat the robot plus grasped object as an (n+1) dof system, where the grasped object can rotate between the robot's fingers via slippage. Firstly, we propose an optimisation-based algorithm that selects the best grasping location from a set of given candidates. The best grasp is one that will yield the minimum effort for the arm to keep the object in equilibrium against external perturbations. Secondly, we propose a controller which brings the (n+1) dof system to a task configuration, and then maintains that configuration robustly against matched and unmatched disturbances. To minimise slippage between the gripper and the grasped object, a sufficient criterion for selecting the control coefficients is proposed by adopting a set of inequalities, which are obtained by solving a non-linear minimisation problem that depends on the estimated static friction. We demonstrate our approach on a simulated (2+1) planar robot, comprising two joints of the robot arm plus the additional passive joint formed by the slippage between the object and the robot's fingers. We also present an experiment with a real robot arm, grasping a flat object between the fingers of a parallel jaw gripper.
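To make the grasp-selection idea concrete, here is a minimal sketch (not the paper's actual optimisation) that scores candidate configurations of a planar two-link arm by the joint torque needed to balance a disturbance force at the end-effector, via tau = J(q)^T f, and picks the minimum-effort candidate. The link lengths, candidate configurations, and disturbance force are invented for illustration.

import numpy as np

L1, L2 = 0.4, 0.3  # assumed link lengths [m]

def jacobian(q):
    """Position Jacobian of a planar 2-DOF arm at joint angles q = (q1, q2)."""
    q1, q2 = q
    return np.array([
        [-L1*np.sin(q1) - L2*np.sin(q1+q2), -L2*np.sin(q1+q2)],
        [ L1*np.cos(q1) + L2*np.cos(q1+q2),  L2*np.cos(q1+q2)],
    ])

def effort(q, f_ext):
    """Norm of the joint torques needed to balance an external force."""
    return np.linalg.norm(jacobian(q).T @ f_ext)

# Candidate grasps, each reached with a different arm configuration.
candidates = [np.array([0.3, 0.8]), np.array([0.9, -0.4]), np.array([1.2, 0.5])]
f_ext = np.array([0.0, -5.0])  # assumed perturbation: 5 N downward

best = min(candidates, key=lambda q: effort(q, f_ext))
print("lowest-effort grasp configuration:", best)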

DOI [BibTex]


PrendoSim: Proxy-Hand-Based Robot Grasp Generator

Abdlkarim, D., Ortenzi, V., Pardi, T., Filipovica, M., Wing, A. M., Kuchenbecker, K. J., Di Luca, M.

In ICINCO 2021: Proceedings of the International Conference on Informatics in Control, Automation and Robotics, pages: 60-68, (Editors: Gusikhin, Oleg and Nijmeijer, Henk and Madani, Kurosh), SciTePress, Setúbal, 18th International Conference on Informatics in Control, Automation and Robotics (ICINCO 2021), July 2021 (inproceedings)

Abstract
The synthesis of realistic robot grasps in a simulated environment is pivotal in generating datasets that support sim-to-real transfer learning. In a step toward achieving this goal, we propose PrendoSim, an open-source grasp generator based on a proxy-hand simulation that employs NVIDIA's physics engine (PhysX) and the recently released articulated-body objects developed by Unity (https://prendosim.github.io). We present the implementation details, the method used to generate grasps, the approach used to operationally evaluate the stability of the generated grasps, and examples of grasps obtained with two different grippers (a parallel jaw gripper and a three-finger hand) grasping three objects selected from the YCB dataset (a pair of scissors, a hammer, and a screwdriver). Compared to simulators proposed in the literature, PrendoSim balances grasp realism and ease of use, offering an intuitive interface and enabling the user to produce a large and varied dataset of stable grasps.

DOI Project Page [BibTex]


Ungrounded Vari-Dimensional Tactile Fingertip Feedback for Virtual Object Interaction

Young, E. M., Kuchenbecker, K. J.

In CHI ’21: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pages: 217, ACM, New York, NY, Conference on Human Factors in Computing Systems (CHI 2021), May 2021 (inproceedings)

Abstract
Compared to grounded force feedback, providing tactile feedback via a wearable device can free the user and broaden the potential applications of simulated physical interactions. However, neither the limitations nor the full potential of tactile-only feedback has been precisely examined. Here we investigate how the dimensionality of cutaneous fingertip feedback affects user movements and virtual object recognition. We combine a recently invented 6-DOF fingertip device with motion tracking, a head-mounted display, and novel contact-rendering algorithms to enable a user to tactilely explore immersive virtual environments. We evaluate rudimentary 1-DOF, moderate 3-DOF, and complex 6-DOF tactile feedback during shape discrimination and mass discrimination, also comparing to interactions with real objects. Results from 20 naive study participants show that higher-dimensional tactile feedback may indeed allow completion of a wider range of virtual tasks, but that feedback dimensionality surprisingly does not greatly affect the exploratory techniques employed by the user.
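One plausible way to read the 1-/3-/6-DOF comparison is as progressively truncating a full contact wrench; the sketch below encodes that assumption (it is not the paper's rendering pipeline), with z taken as the fingertip normal.

import numpy as np

def reduce_wrench(wrench, dof):
    """wrench = [fx, fy, fz, tx, ty, tz], with z the fingertip normal."""
    f, t = wrench[:3], wrench[3:]
    if dof == 6:          # full force and torque
        return np.concatenate([f, t])
    if dof == 3:          # force only, no torque
        return np.concatenate([f, np.zeros(3)])
    if dof == 1:          # normal force only
        return np.array([0.0, 0.0, f[2], 0.0, 0.0, 0.0])
    raise ValueError("dof must be 1, 3, or 6")

w = np.array([0.2, -0.1, 1.5, 0.01, 0.0, -0.02])
print(reduce_wrench(w, 1))  # keeps only the 1.5 N normal force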

link (url) DOI Project Page [BibTex]


Robot Interaction Studio: A Platform for Unsupervised HRI

Mohan, M., Nunez, C. M., Kuchenbecker, K. J.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Xi'an, China, May 2021 (inproceedings)

Abstract
Robots hold great potential for supporting exercise and physical therapy, but such systems are often cumbersome to set up and require expert supervision. We aim to solve these concerns by combining Captury Live, a real-time markerless motion-capture system, with a Rethink Robotics Baxter Research Robot to create the Robot Interaction Studio. We evaluated this platform for unsupervised human-robot interaction (HRI) through a 75-minute-long user study with seven adults who were given minimal instructions and no feedback about their actions. The robot used sounds, facial expressions, facial colors, head motions, and arm motions to sequentially present three categories of cues in randomized order while constantly rotating its face screen to look at the user. Analysis of the captured user motions shows that the cue type significantly affected the distance subjects traveled and the amount of time they spent within the robot’s reachable workspace, in alignment with the design of the cues. Heat map visualizations of the recorded user hand positions confirm that users tended to mimic the robot’s arm poses. Despite some initial frustration, taking part in this study did not significantly change user opinions of the robot. We reflect on the advantages of the proposed approach to unsupervised HRI as well as the limitations and possible future extensions of our system.

DOI Project Page [BibTex]


Robotic Surgery Training in AR: Multimodal Record and Replay

Krauthausen, F.

pages: 1-147, University of Stuttgart, Stuttgart, May 2021, Study Program in Software Engineering (mastersthesis)

[BibTex]


An electric machine with two-phase planar Lorentz coils and a ring-shaped Halbach array for high torque density and high-precision applications

Nguyen, V., Javot, B., Kuchenbecker, K. J.

(EP21170679.1), April 2021 (patent)

Abstract
An electric machine, in particular a motor or a generator, comprising a rotor and a stator, wherein the rotor comprises a planar, ring-shaped rotor base element and the stator comprises a planar ring-shaped stator base element, wherein the rotor base element and the stator base element are aligned along an axial axis (Z) of the electric machine, wherein a plurality of magnet elements are arranged around the circumference of the ring-shaped rotor base element forming a Halbach magnet-ring assembly, wherein the Halbach magnet-ring assembly generates a magnetic field (BR) with axial and azimuthal components, wherein a plurality of coils are arranged around the circumference (C) of the ring-shaped stator base element.

Project Page [BibTex]


The Six Hug Commandments: Design and Evaluation of a Human-Sized Hugging Robot with Visual and Haptic Perception

Block, A. E., Christen, S., Gassert, R., Hilliges, O., Kuchenbecker, K. J.

In HRI ’21: Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, pages: 380-388, ACM, New York, NY, USA, ACM/IEEE International Conference on Human-Robot Interaction (HRI 2021), March 2021 (inproceedings)

Abstract
Receiving a hug is one of the best ways to feel socially supported, and the lack of social touch can have severe negative effects on an individual's well-being. Based on previous research both within and outside of HRI, we propose six tenets ("commandments") of natural and enjoyable robotic hugging: a hugging robot should be soft, be warm, be human-sized, visually perceive its user, adjust its embrace to the user's size and position, and reliably release when the user wants to end the hug. Prior work validated the first two tenets, and the final four are new. We followed all six tenets to create a new robotic platform, HuggieBot 2.0, that has a soft, warm, inflated body (HuggieChest) and uses visual and haptic sensing to deliver closed-loop hugging. We first verified the outward appeal of this platform in comparison to the previous PR2-based HuggieBot 1.0 via an online video-watching study involving 117 users. We then conducted an in-person experiment in which 32 users each exchanged eight hugs with HuggieBot 2.0, experiencing all combinations of visual hug initiation, haptic sizing, and haptic releasing. The results show that adding haptic reactivity definitively improves user perception of a hugging robot, largely verifying our four new tenets and illuminating several interesting opportunities for further improvement.

Block21-HRI-Commandments.pdf DOI Project Page [BibTex]


Sensor Arrangement for Sensing Forces and Methods for Fabricating a Sensor Arrangement and Parts Thereof

Sun, H., Martius, G., Kuchenbecker, K. J.

(PCT/EP2021/050230), Max Planck Institute for Intelligent Systems, Max Planck Ring 4, January 2021 (patent)

Abstract
The invention relates to a vision-based haptic sensor arrangement for sensing forces, to a method for fabricating a top portion of a sensor arrangement, and to a method for fabricating a sensor arrangement.

Project Page [BibTex]


Method for Force Inference, Method for Training a Feed-Forward Neural Network, Force Inference Module, and Sensor Arrangement

Sun, H., Martius, G., Kuchenbecker, K. J.

(PCT/EP2021/050231), Max Planck Institute for Intelligent Systems, Max Planck Ring 4, January 2021 (patent)

Abstract
The invention relates to a method for force inference of a sensor arrangement for sensing forces, to a method for training a feed-forward neural network, to a force inference module, and to a sensor arrangement.

Project Page [BibTex]

2020


Delivering Expressive and Personalized Fingertip Tactile Cues

Young, E. M.

University of Pennsylvania, Philadelphia, PA, December 2020, Department of Mechanical Engineering and Applied Mechanics (phdthesis)

Abstract
Wearable haptic devices have seen growing interest in recent years, but providing realistic tactile feedback is not a challenge that is soon to be solved. Daily interactions with physical objects elicit complex sensations at the fingertips. Furthermore, human fingertips exhibit a broad range of physical dimensions and perceptive abilities, adding increased complexity to the task of simulating haptic interactions in a compelling manner. However, as the applications of wearable haptic feedback grow, concerns of wearability and generalizability often persuade tactile device designers to simplify the complexities associated with rendering realistic haptic sensations. As such, wearable devices tend to be optimized for particular uses and average users, rendering only the most salient dimensions of tactile feedback for a given task and assuming all users interpret the feedback in a similar fashion. We propose that providing more realistic haptic feedback will require in-depth examinations of higher-dimensional tactile cues and personalization of these cues for individual users. In this thesis, we aim to provide hardware- and software-based solutions for rendering more expressive and personalized tactile cues to the fingertip. We first explore the idea of rendering six-degree-of-freedom (6-DOF) tactile fingertip feedback via a wearable device, such that any possible fingertip interaction with a flat surface can be simulated. We highlight the potential of parallel continuum manipulators (PCMs) to meet the requirements of such a device, and we refine the design of a PCM for providing fingertip tactile cues. We construct a manually actuated prototype to validate the concept, and then continue to develop a motorized version, named the Fingertip Puppeteer, or Fuppeteer for short. Various error reduction techniques are presented, and the resulting device is evaluated by analyzing system responses to step inputs, measuring forces rendered to a biomimetic finger sensor, and comparing intended sensations to perceived sensations of twenty-four participants in a human-subject study. Once the functionality of the Fuppeteer is validated, we begin to explore how the device can be used to broaden our understanding of higher-dimensional tactile feedback. One such application is using the 6-DOF device to simulate different lower-dimensional devices. We evaluate 1-, 3-, and 6-DOF tactile feedback during shape discrimination and mass discrimination in a virtual environment, also comparing to interactions with real objects. Results from 20 naive study participants show that higher-dimensional tactile feedback may indeed allow completion of a wider range of virtual tasks, but that feedback dimensionality surprisingly does not greatly affect the exploratory techniques employed by the user. To address alternative approaches to improving tactile rendering in scenarios where low-dimensional tactile feedback is appropriate, we then explore the idea of personalizing feedback for a particular user. We present two software-based approaches to personalize an existing data-driven haptic rendering algorithm for fingertips of different sizes. We evaluate our algorithms in the rendering of pre-recorded tactile sensations onto rubber casts of six different fingertips as well as onto the real fingertips of 13 human participants, all via a 3-DOF wearable device. Results show that both personalization approaches significantly reduced force error magnitudes and improved realism ratings.

Project Page [BibTex]


System and Method for Simultaneously Sensing Contact Force and Lateral Strain

Lee, H., Kuchenbecker, K. J.

(EP20000480.2), December 2020 (patent)

Abstract
A tactile sensing system having a sensor component which comprises a plurality of layers stacked along a normal axis Z and a detection unit electrically connected to the sensor component, wherein the sensor component comprises a first layer, designed as a piezoresistive layer, a third layer, designed as a conductive layer which is electrically connected to the detection unit, and a second layer, designed as a spacing layer between the first layer and the third layer, wherein the first layer comprises a plurality of electrodes In electrically connected to the detection unit, wherein at least one contact force along the normal axis Z on the sensor component is detectable by the detection unit due to a change of the current distribution between the first layer and the third layer, wherein at least one lateral strain on the sensor component is detectable by the detection unit due to a change of the resistance distribution in the piezoresistive first layer.

Project Page [BibTex]


Synchronicity Trumps Mischief in Rhythmic Human-Robot Social-Physical Interaction

Fitter, N. T., Kuchenbecker, K. J.

In Robotics Research, 10, pages: 269-284, Springer Proceedings in Advanced Robotics, (Editors: Amato, Nancy M. and Hager, Greg and Thomas, Shawna and Torres-Torriti, Miguel), Springer, Cham, 18th International Symposium on Robotics Research (ISRR), 2020 (inproceedings)

Abstract
Hand-clapping games and other forms of rhythmic social-physical interaction might help foster human-robot teamwork, but the design of such interactions has scarcely been explored. We leveraged our prior work to enable the Rethink Robotics Baxter Research Robot to competently play one-handed tempo-matching hand-clapping games with a human user. To understand how such a robot’s capabilities and behaviors affect user perception, we created four versions of this interaction: the hand clapping could be initiated by either the robot or the human, and the non-initiating partner could be either cooperative, yielding synchronous motion, or mischievously uncooperative. Twenty adults tested two clapping tempos in each of these four interaction modes in a random order, rating every trial on standardized scales. The study results showed that having the robot initiate the interaction gave it a more dominant perceived personality. Despite previous results on the intrigue of misbehaving robots, we found that moving synchronously with the robot almost always made the interaction more enjoyable, less mentally taxing, less physically demanding, and lower effort for users than asynchronous interactions caused by robot or human mischief. Taken together, our results indicate that cooperative rhythmic social-physical interaction has the potential to strengthen human-robot partnerships.

DOI [BibTex]


Method for Force Inference of a Sensor Arrangement, Methods for Training Networks, Force Inference Module and Sensor Arrangement

Sun, H., Martius, G., Lee, H., Spiers, A., Fiene, J.

(PCT/EP2020/083261), Max Planck Institute for Intelligent Systems, Max Planck Ring 4, November 2020 (patent)

Abstract
The present invention relates to a method for force inference of a sensor arrangement, to related methods for training of networks, to a force inference module for performing such methods, and to a sensor arrangement for sensing forces. When developing applications such as robots, sensing of forces applied on a robot hand or another part of a robot such as a leg or a manipulation device is crucial in giving robots increased capabilities to move around and/or manipulate objects. Known implementations for sensor arrangements that can be used in robotic applications in order to have feedback with regard to applied forces are quite expensive and do not have sufficient resolution. Sensor arrangements may be used to measure forces. However, known sensor arrangements need a high density of sensors to provide a high spatial resolution. It is thus an object of the present invention to provide for a method for force inference of a sensor arrangement and related methods that are different or optimized with regard to the prior art. It is a further object to provide for a force inference module to perform such methods. It is a further object to provide for a sensor arrangement for sensing forces with such a force inference module.

Project Page [BibTex]


Elephant Trunk Skin: Nature's Flexible Kevlar

Schulz, A., Fourney, E., Sordilla, S., Sukhwani, A., Hu, D.

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, October 2020 (conference)

Abstract
Elephants can extend their trunks by 20% in order to reach faraway objects. Muscular hydrostats such as earthworms, tongues, and octopus arms are all known to have similar levels of extensibility. However, the large and heavy trunk has the added constraint of needing to be durable as well. In this study, we perform material testing on skin sections of the elephant trunk. This skin varies along the trunk, with the dorsal portion having folds and the ventral portion having wrinkles. In tensile tests, the folds exhibit ten times the strain of flat portions of skin, and the wrinkles exhibit three times the strain of flat portions. To better interpret the strains observed in tensile testing, we perform numerical simulations of elastic material with wrinkles and folds. We show that wrinkles and folds are a good solution for providing both strength and extensibility.

[BibTex]


Modulating Physical Interactions in Human-Assistive Technologies

Hu, S.

University of Pennsylvania, Philadelphia, PA, August 2020, Department of Mechanical Engineering and Applied Mechanics (phdthesis)

Abstract
Many mechanical devices and robots operate in home environments, and they offer rich experiences and valuable functionalities for human users. When these devices interact physically with humans, additional care has to be taken in both hardware and software design to ensure that the robots provide safe and meaningful interactions. It is advantageous for the robots to be customizable so users can tinker with them for their specific needs. There are many robot platforms that strive toward these goals, but the most successful robots in our world are either separated from humans (such as in factories and warehouses) or occupy the same space as humans but do not offer physical interactions (such as cleaning robots). In this thesis, we envision a suite of assistive robotic devices that assist people in their daily, physical tasks. Specifically, we begin with a hybrid force display that combines a cable, a brake, and a motor, which offers safe and powerful force output with a large workspace. Virtual haptic elements, including free space, constant force, springs, and dampers, can be simulated by this device. We then adapt the hybrid mechanism and develop the Gait Propulsion Trainer (GPT) for stroke rehabilitation, where we aim to reduce propulsion asymmetry by applying resistance at the user’s pelvis during the unilateral stance phase of gait. Sensors underneath the user’s shoes and a wireless communication module are added to precisely control the timing of the resistance force. To address the effort of parameter tuning in determining the optimal training scheme, we then develop a learning-from-demonstration (LfD) framework where robot behavior can be obtained from data, thus bypassing some of the tuning effort while enabling customization and generalization for different task situations. This LfD framework is evaluated in simulation and in a user study, and results show improved objective performance and human perception of the robot. Finally, we apply the LfD framework in an upper-limb therapy setting, where the robot directly learns the force output from a therapist when supporting stroke survivors in various physical exercises. Six stroke survivors and an occupational therapist provided demonstrations and tested the autonomous robot behaviors in a user study, and we obtain preliminary insights toward making the robot more intuitive and more effective for both therapists and clients of different impairment levels. This thesis thus considers both hardware and software design for robotic platforms, and we explore both direct and indirect force modulation for human-assistive technologies.

Hu20-PHDD-Modulating Project Page [BibTex]


Calibrating a Soft ERT-Based Tactile Sensor with a Multiphysics Model and Sim-to-real Transfer Learning

Lee, H., Park, H., Serhat, G., Sun, H., Kuchenbecker, K. J.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pages: 1632-1638, IEEE International Conference on Robotics and Automation (ICRA 2020), May 2020 (inproceedings)

Abstract
Tactile sensors based on electrical resistance tomography (ERT) have shown many advantages for implementing a soft and scalable whole-body robotic skin; however, calibration is challenging because pressure reconstruction is an ill-posed inverse problem. This paper introduces a method for calibrating soft ERT-based tactile sensors using sim-to-real transfer learning with a finite element multiphysics model. The model is composed of three simple models that together map contact pressure distributions to voltage measurements. We optimized the model parameters to reduce the gap between the simulation and reality. As a preliminary study, we discretized the sensing points into a 6 by 6 grid and synthesized single- and two-point contact datasets from the multiphysics model. We obtained another single-point dataset using the real sensor with the same contact location and force used in the simulation. Our new deep neural network architecture uses a de-noising network to capture the simulation-to-real gap and a reconstruction network to estimate contact force from voltage measurements. The proposed approach achieved an 82% hit rate for localization and a force estimation error of 0.51 N in single-contact tests, and a 78.5% hit rate for localization and a force estimation error of 5.0 N in two-point contact tests. We believe this new calibration method has the potential to improve the sensing performance of ERT-based tactile sensors.
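A minimal PyTorch sketch of the two-stage architecture the abstract describes (a de-noising network followed by a reconstruction network) appears below; the layer sizes and measurement count are assumptions, not the paper's actual values.

import torch
import torch.nn as nn

N_VOLT = 32   # assumed number of ERT voltage measurements
GRID = 6      # 6 by 6 sensing-point grid from the abstract

denoiser = nn.Sequential(
    nn.Linear(N_VOLT, 128), nn.ReLU(),
    nn.Linear(128, N_VOLT),            # outputs sim-like voltages
)
reconstructor = nn.Sequential(
    nn.Linear(N_VOLT, 256), nn.ReLU(),
    nn.Linear(256, GRID * GRID),       # per-cell contact force
)

def estimate_forces(real_voltages):
    clean = denoiser(real_voltages)            # bridge the sim-to-real gap
    return reconstructor(clean).view(-1, GRID, GRID)

print(estimate_forces(torch.randn(1, N_VOLT)).shape)  # torch.Size([1, 6, 6])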

DOI Project Page [BibTex]


An ERT-Based Robotic Skin with Sparsely Distributed Electrodes: Structure, Fabrication, and DNN-Based Signal Processing

Park, K., Park, H., Lee, H., Park, S., Kim, J.

In 2020 IEEE International Conference on Robotics and Automation (ICRA 2020), pages: 1617-1624, IEEE, Piscataway, NJ, IEEE International Conference on Robotics and Automation (ICRA 2020), May 2020 (inproceedings)

Abstract
Electrical resistance tomography (ERT) has previously been utilized to develop a large-scale tactile sensor because this approach enables the estimation of the conductivity distribution among the electrodes based on a known physical model. Such a sensor made with a stretchable material can conform to a curved surface. However, this sensor cannot fully cover a cylindrical surface because in such a configuration, the edges of the sensor must meet each other. The electrode configuration becomes irregular in this edge region, which may degrade the sensor performance. In this paper, we introduce an ERT-based robotic skin with evenly and sparsely distributed electrodes. For implementation, we sprayed a carbon nanotube (CNT)-dispersed solution to form a conductive sensing domain on a cylindrical surface. The electrodes were firmly embedded in the surface so that the wires were not exposed to the outside. The sensor output images were estimated using a deep neural network (DNN), which was trained with noisy simulation data. An indentation experiment revealed that the localization error of the sensor was 5.2 ± 3.3 mm, which is remarkable performance with only 30 electrodes. A frame rate of up to 120 Hz could be achieved with a sensing domain area of 90 cm². The proposed approach simplifies the fabrication of 3D-shaped sensors, allowing them to be easily applied to existing robot arms in a seamless and robust manner.

DOI [BibTex]


Capturing Experts’ Mental Models to Organize a Collection of Haptic Devices: Affordances Outweigh Attributes

Seifi, H., Oppermann, M., Bullard, J., MacLean, K. E., Kuchenbecker, K. J.

In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), pages: 268, Conference on Human Factors in Computing Systems (CHI 2020), April 2020 (inproceedings)

Abstract
Humans rely on categories to mentally organize and understand sets of complex objects. One such set, haptic devices, has myriad technical attributes that affect user experience in complex ways. Seeking an effective navigation structure for a large online collection, we elicited expert mental categories for grounded force-feedback haptic devices: 18 experts (9 device creators, 9 interaction designers) reviewed, grouped, and described 75 devices according to their similarity in a custom card-sorting study. From the resulting quantitative and qualitative data, we identify prominent patterns of tagging versus binning, and we report 6 uber-attributes that the experts used to group the devices, favoring affordances over device specifications. Finally, we derive 7 device categories and 9 subcategories that reflect the imperfect yet semantic nature of the expert mental models. We visualize these device categories and similarities in the online haptic collection, and we offer insights for studying expert understanding of other human-centered technology.

DOI Project Page [BibTex]


Changes in Normal Force During Passive Dynamic Touch: Contact Mechanics and Perception

Gueorguiev, D., Lambert, J., Thonnard, J., Kuchenbecker, K. J.

In Proceedings of the IEEE Haptics Symposium (HAPTICS), pages: 746-752, IEEE Haptics Symposium (HAPTICS 2020), March 2020 (inproceedings)

Abstract
Using a force-controlled robotic platform, we investigated the contact mechanics and psychophysical responses induced by negative and positive modulations in normal force during passive dynamic touch. In the natural state of the finger, the applied normal force modulation induces a correlated change in the tangential force. In a second condition, we applied talcum powder to the fingerpad, which induced a significant modification in the slope of the correlated tangential change. In both conditions, the same ten participants had to detect the interval that contained a decrease or an increase in the pre-stimulation normal force of 1 N. In the natural state, the 75% just noticeable difference for this task was found to be a ratio of 0.19 and 0.18 for decreases and increases, respectively. With talcum powder on the fingerpad, the normal force thresholds remained stable, following the Weber law of constant just noticeable differences, while the tangential force thresholds changed in the same way as the correlation slopes. This result suggests that participants predominantly relied on the normal force changes to perform the detection task. In addition, participants were asked to report whether the force decreased or increased. Their performance was generally poor at this second task even for above-threshold changes. However, their accuracy slightly improved with the talcum powder, which might be due to the reduced finger-surface friction.
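A short worked example of the reported Weber fractions: with a 1 N pre-stimulation force and a 75% JND ratio of 0.19 for decreases, the just-detectable change is 0.19 N, and Weber's law predicts proportional scaling at other baselines.

def jnd(base_force_n, weber_ratio):
    """Just noticeable difference under Weber's law: a constant ratio of the baseline."""
    return weber_ratio * base_force_n

print(jnd(1.0, 0.19))  # 0.19 N decrease is just noticeable at the tested 1 N baseline
print(jnd(2.0, 0.19))  # Weber's law predicts 0.38 N at an assumed 2 N baseline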

DOI [BibTex]


Haptic Object Parameter Estimation during Within-Hand-Manipulation with a Simple Robot Gripper

Mohtasham, D., Narayanan, G., Calli, B., Spiers, A. J.

In Proceedings of the IEEE Haptics Symposium (HAPTICS), pages: 140-147, March 2020 (inproceedings)

Abstract
Though it is common for robots to rely on vision for object feature estimation, there are environments where optical sensing performs poorly, due to occlusion, poor lighting, or limited space for camera placement. Haptic sensing in robotics has a long history, but few approaches have combined it with within-hand manipulation (WIHM) in order to expose more features of an object to the tactile sensing elements of the hand. As in the human hand, these sensing structures are generally non-homogeneous in their coverage of a gripper's manipulation surfaces, as the sensitivity of some hand or finger regions is often different from that of other regions. In this work we use a modified version of the recently developed two-finger Model VF (variable friction) robot gripper to acquire tactile information while rolling objects within the robot's grasp. This new gripper has one high-friction passive finger surface and one high-friction tactile sensing surface, equipped with 12 low-cost barometric force sensors encased in urethane. We have developed algorithms that use the data generated during these rolling actions to determine parametric aspects of the object under manipulation. Two parameters are currently determined: 1) the location of the object within the grasp, and 2) the object's shape (from three alternatives). The algorithms were first developed on a static test rig with passive object rolling and later evaluated with the robot gripper platform using active WIHM, which introduced artifacts into the data. With an object set consisting of 3 shapes and 5 sizes, overall shape estimation accuracies of 88% and 78% were achieved for the test rig and hand, respectively. Location estimation of each object's centroid during motion achieved a mean error of less than 2 mm along the 95 mm length of the tactile sensing finger.
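As an illustration of the location-estimation task, the sketch below computes a pressure-weighted centroid over the 12 taxels of the sensing finger; the even taxel spacing and the centroid estimator itself are assumptions, since the paper's actual algorithm is not reproduced here.

import numpy as np

FINGER_LEN_MM = 95.0
taxel_pos = np.linspace(0.0, FINGER_LEN_MM, 12)   # assumed even taxel spacing

def object_location(pressures):
    """Estimate contact location [mm] as the pressure-weighted taxel centroid."""
    p = np.clip(np.asarray(pressures, dtype=float), 0.0, None)
    if p.sum() == 0:
        return None  # no contact detected
    return float(p @ taxel_pos / p.sum())

# Example: contact concentrated around the 4th and 5th taxels.
readings = [0, 0, 0.2, 1.0, 0.8, 0.1, 0, 0, 0, 0, 0, 0]
print(f"estimated contact location: {object_location(readings):.1f} mm")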

DOI [BibTex]

2019


Deep Neural Network Approach in Electrical Impedance Tomography-Based Real-Time Soft Tactile Sensor

Park, H., Lee, H., Park, K., Mo, S., Kim, J.

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages: 7447-7452, IEEE, November 2019 (conference)

Abstract
Recently, whole-body tactile sensing has emerged in robotics for safe human-robot interaction. A key issue in whole-body tactile sensing is ensuring large-area manufacturability and high durability. To fulfill these requirements, a reconstruction method called electrical impedance tomography (EIT) was adopted in large-area tactile sensing. This method maps voltage measurements to a conductivity distribution using only a small number of measurement electrodes. A common approach for the mapping is to use a linearized model derived from Maxwell's equations. This linearized model offers fast computation time and moderate robustness against measurement noise, but its reconstruction accuracy is limited. In this paper, we propose a novel nonlinear EIT algorithm based on a deep neural network (DNN) approach to improve the reconstruction accuracy of EIT-based tactile sensors. The neural network architecture with rectified linear unit (ReLU) activations ensures extremely low computation time (0.002 seconds), and its nonlinear structure provides superior measurement accuracy. The DNN model was trained with a dataset synthesized in a simulation environment. To achieve robustness against measurement noise, the training proceeded with additive Gaussian noise estimated from actual measurement noise. For real sensor application, the trained DNN model was transferred to a conductive fabric-based soft tactile sensor. For validation, the reconstruction error and noise robustness of the conventional linearized model and the proposed approach were compared in the simulation environment. As a demonstration, the tactile sensor equipped with the trained DNN model is presented for contact force estimation.
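For reference, the conventional linearized reconstruction the paper compares against can be sketched as a one-step Tikhonov-regularised inverse; the Jacobian here is random for illustration (in practice it comes from a physical model), and the problem sizes are assumed.

import numpy as np

rng = np.random.default_rng(0)
n_meas, n_pix = 40, 64          # assumed measurement and pixel counts
J = rng.normal(size=(n_meas, n_pix))   # sensitivity of voltages to conductivity

def linearized_eit(delta_v, lam=1e-2):
    """Solve min ||J ds - dv||^2 + lam ||ds||^2 for the conductivity change ds."""
    A = J.T @ J + lam * np.eye(n_pix)
    return np.linalg.solve(A, J.T @ delta_v)

delta_sigma_true = rng.normal(size=n_pix)
delta_v = J @ delta_sigma_true + 0.01 * rng.normal(size=n_meas)  # noisy voltages
print(linearized_eit(delta_v)[:5])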

DOI [BibTex]


Effect of Remote Masking on Detection of Electrovibration

Jamalzadeh, M., Güçlü, B., Vardar, Y., Basdogan, C.

In Proceedings of the IEEE World Haptics Conference (WHC), pages: 229-234, Tokyo, Japan, July 2019 (inproceedings)

Abstract
Masking has been used to study human perception of tactile stimuli, including those created on haptic touch screens. Earlier studies have investigated the effect of in-site masking on the tactile perception of electrovibration. In this study, we investigated whether it is possible to change the detection threshold of electrovibration at the fingertip of the index finger via remote masking, i.e., by applying a (mechanical) vibrotactile stimulus on the proximal phalanx of the same finger. The masking stimuli were generated by a voice coil (Haptuator). For eight participants, we first measured the detection thresholds for electrovibration at the fingertip and for vibrotactile stimuli at the proximal phalanx. Then, the vibrations on the skin were measured at four different locations on the index finger of subjects to investigate how the mechanical masking stimulus propagated as the masking level was varied. Finally, electrovibration thresholds were measured in the presence of vibrotactile masking stimuli. Our results show that vibrotactile masking stimuli generated sub-threshold vibrations around the fingertip, and hence did not mechanically interfere with the electrovibration stimulus. However, there was a clear psychophysical masking effect due to central neural processes. The absolute detection threshold for electrovibration increased approximately 0.19 dB for each dB increase in the masking level.
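A short worked example of the reported masking slope, with an assumed unmasked baseline threshold:

def masked_threshold(base_db, masking_db, slope=0.19):
    """Predicted electrovibration threshold under remote masking (linear growth)."""
    return base_db + slope * masking_db

print(masked_threshold(base_db=10.0, masking_db=20.0))  # 13.8 dB; the 10 dB baseline is assumed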

DOI [BibTex]


Objective and Subjective Assessment of Algorithms for Reducing Three-Axis Vibrations to One-Axis Vibrations

Park, G., Kuchenbecker, K. J.

In Proceedings of the IEEE World Haptics Conference (WHC), pages: 467-472, Tokyo, Japan, July 2019 (inproceedings)

Abstract
A typical approach to creating realistic vibrotactile feedback is reducing 3D vibrations recorded by an accelerometer to 1D signals that can be played back on a haptic actuator, but some of the information is often lost in this dimensional reduction process. This paper describes seven representative reduction algorithms and proposes four performance metrics based on the spectral match, the temporal match, and the average value and variability of these matches across 3D rotations. These four metrics were applied to four texture recordings, and the method utilizing the discrete Fourier transform (DFT) was found to be the best regardless of the sensing axis. We also recruited 16 participants to assess the perceptual similarity achieved by each algorithm in real time. We found that the four metrics correlated well with the subjectively rated similarities for six of the dimensional reduction algorithms; the exception was taking the 3D vector magnitude, which was perceived to be good despite its low spectral and temporal match metrics.
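The sketch below contrasts two of the reduction strategies discussed: the simple 3D vector magnitude and a DFT-based combination of per-frequency energy across axes. The phase handling (reusing the x-axis phase) is an assumption; the paper's DFT algorithm is not reproduced exactly.

import numpy as np

def vector_magnitude(a):
    """a: (N, 3) acceleration samples; returns the per-sample Euclidean norm."""
    return np.linalg.norm(a, axis=1)

def dft_reduction(a):
    """Combine the three axes' spectral energy per frequency bin, then invert."""
    spectra = np.fft.rfft(a, axis=0)                      # per-axis spectra
    mag = np.sqrt((np.abs(spectra) ** 2).sum(axis=1))     # root-sum-of-squares energy
    phase = np.angle(spectra[:, 0])                       # assumption: reuse x-axis phase
    return np.fft.irfft(mag * np.exp(1j * phase), n=a.shape[0])

t = np.linspace(0, 1, 1000, endpoint=False)
accel = np.stack([np.sin(2*np.pi*50*t), 0.5*np.sin(2*np.pi*120*t), 0.2*t], axis=1)
print(dft_reduction(accel).shape, vector_magnitude(accel).shape)

Because the vector magnitude is non-negative by construction, it cannot reproduce the signal's oscillatory structure, which is consistent with its low spectral and temporal match metrics despite its favorable perceptual ratings.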

DOI Project Page [BibTex]


Fingertip Interaction Metrics Correlate with Visual and Haptic Perception of Real Surfaces

Vardar, Y., Wallraven, C., Kuchenbecker, K. J.

In Proceedings of the IEEE World Haptics Conference (WHC), pages: 395-400, Tokyo, Japan, July 2019 (inproceedings)

Abstract
Both vision and touch contribute to the perception of real surfaces. Although there have been many studies on the individual contributions of each sense, it is still unclear how each modality’s information is processed and integrated. To fill this gap, we investigated the similarity of visual and haptic perceptual spaces, as well as how well they each correlate with fingertip interaction metrics. Twenty participants interacted with ten different surfaces from the Penn Haptic Texture Toolkit by either looking at or touching them and judged their similarity in pairs. By analyzing the resulting similarity ratings using multi-dimensional scaling (MDS), we found that surfaces are similarly organized within the three-dimensional perceptual spaces of both modalities. Also, between-participant correlations were significantly higher in the haptic condition. In a separate experiment, we obtained the contact forces and accelerations acting on one finger interacting with each surface in a controlled way. We analyzed the collected fingertip interaction data in both the time and frequency domains. Our results suggest that the three perceptual dimensions for each modality can be represented by roughness/smoothness, hardness/softness, and friction, and that these dimensions can be estimated by surface vibration power, tap spectral centroid, and kinetic friction coefficient, respectively.
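A minimal sketch of the MDS analysis step described above, using a toy dissimilarity matrix invented purely for illustration:

import numpy as np
from sklearn.manifold import MDS

# Symmetric dissimilarities among 4 hypothetical surfaces (0 = identical).
D = np.array([[0.0, 0.2, 0.8, 0.9],
              [0.2, 0.0, 0.7, 0.8],
              [0.8, 0.7, 0.0, 0.3],
              [0.9, 0.8, 0.3, 0.0]])

# Embed the surfaces in a 3D perceptual space, as in the study's analysis.
mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)   # one 3D point per surface
print(coords)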

DOI Project Page [BibTex]


A Clustering Approach to Categorizing 7 Degree-of-Freedom Arm Motions during Activities of Daily Living

Gloumakov, Y., Spiers, A. J., Dollar, A. M.

In Proceedings of the International Conference on Robotics and Automation (ICRA), pages: 7214-7220, Montreal, Canada, May 2019 (inproceedings)

Abstract
In this paper we present a novel method of categorizing naturalistic human arm motions during activities of daily living using clustering techniques. While many current approaches attempt to define all arm motions using heuristic interpretation or a combination of several abstract motion primitives, our unsupervised approach generates a hierarchical description of natural human motion with well-recognized groups. Reliable recommendation of a subset of motions for task achievement is beneficial to various fields, such as robotic and semi-autonomous prosthetic device applications. The proposed method makes use of well-known techniques such as dynamic time warping (DTW) to obtain a divergence measure between motion segments, DTW barycenter averaging (DBA) to compute a motion average, and Ward's distance criterion to build the hierarchical tree. The clusters that emerge summarize the variety of recorded motions into the following general tasks: reach-to-front, transfer-box, drinking from a vessel, on-table motion, turning a key or door knob, and reach-to-back-pocket. The clustering methodology is justified by comparing against an alternative measure of divergence using Bézier coefficients and K-medoids clustering.
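A minimal sketch of the clustering pipeline named in the abstract (DTW divergence followed by Ward-linkage hierarchical clustering) on toy 1D trajectories; DBA averaging is omitted, and applying Ward linkage to DTW distances is a pragmatic approximation, since Ward formally assumes Euclidean distances.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance for 1D signals."""
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[len(a), len(b)]

t = np.linspace(0, 1, 50)
motions = [np.sin(2*np.pi*t), np.sin(2*np.pi*t + 0.2), t, t**2]  # toy segments

# Pairwise DTW divergence matrix.
n = len(motions)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = dtw(motions[i], motions[j])

# Ward-linkage tree over the condensed distance matrix, then cut into 2 groups.
Z = linkage(squareform(dist), method="ward")
print(fcluster(Z, t=2, criterion="maxclust"))  # the two sinusoids cluster together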

DOI [BibTex]