Haptic Intelligence


2024


Reflectance Outperforms Force and Position in Model-Free Needle Puncture Detection

L’Orsa, R., Bisht, A., Yu, L., Murari, K., Westwick, D. T., Sutherland, G. R., Kuchenbecker, K. J.

In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, USA, July 2024 (inproceedings) Accepted

Abstract
The surgical procedure of needle thoracostomy temporarily corrects accidental over-pressurization of the space between the chest wall and the lungs. However, failure rates of up to 94.1% have been reported, likely because this procedure is done blind: operators estimate by feel when the needle has reached its target. We believe instrumented needles could help operators discern entry into the target space, but limited success has been achieved using force and/or position to try to discriminate needle puncture events during simulated surgical procedures. We thus augmented our needle insertion system with a novel in-bore double-fiber optical setup. Tissue reflectance measurements as well as 3D force, torque, position, and orientation were recorded while two experimenters repeatedly inserted a bevel-tipped percutaneous needle into ex vivo porcine ribs. We applied model-free puncture detection to various filtered time derivatives of each sensor data stream offline. In the held-out test set of insertions, puncture-detection precision improved substantially using reflectance measurements compared to needle insertion force alone (3.3-fold increase) or position alone (11.6-fold increase).

Project Page [BibTex]



Expert Perception of Teleoperated Social Exercise Robots

Mohan, M., Mat Husin, H., Kuchenbecker, K. J.

In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI), pages: 769-773, Boulder, USA, March 2024, Late-Breaking Report (LBR), 5 pages (inproceedings)

Abstract
Social robots could help address the growing issue of physical inactivity by inspiring users to engage in interactive exercise. Nevertheless, the practical implementation of social exercise robots poses substantial challenges, particularly in terms of personalizing their activities to individuals. We propose that motion-capture-based teleoperation could serve as a viable solution to address these needs by enabling experts to record custom motions that could later be played back without their real-time involvement. To gather feedback about this idea, we conducted semi-structured interviews with eight exercise-therapy professionals. Our findings indicate that experts' attitudes toward social exercise robots become more positive when considering the prospect of teleoperation to record and customize robot behaviors.

DOI Project Page [BibTex]


2023


no image
Enhancing Surgical Team Collaboration and Situation Awareness through Multimodal Sensing

Allemang–Trivalle, A.

In Proceedings of the ACM International Conference on Multimodal Interaction (ICMI), pages: 716-720, Extended Abstract (5 pages) presented at the ICMI Doctoral Consortium, Paris, France, October 2023 (inproceedings)

Abstract
Surgery, typically seen as the surgeon's sole responsibility, requires a broader perspective acknowledging the vital roles of other operating room (OR) personnel. The interactions among team members are crucial for delivering quality care and depend on shared situation awareness. I propose a two-phase approach to design and evaluate a multimodal platform that monitors OR members, offering insights into surgical procedures. The first phase focuses on designing a data-collection platform, tailored to surgical constraints, to generate novel collaboration and situation-awareness metrics using synchronous recordings of the participants' voices, positions, orientations, electrocardiograms, and respiration signals. The second phase concerns the creation of intuitive dashboards and visualizations, aiding surgeons in reviewing recorded surgery, identifying adverse events and contributing to proactive measures. This work aims to demonstrate an innovative approach to data collection and analysis, augmenting the surgical team's capabilities. The multimodal platform has the potential to enhance collaboration, foster situation awareness, and ultimately mitigate surgical adverse events. This research sets the stage for a transformative shift in the OR, enabling a more holistic and inclusive perspective that recognizes that surgery is a team effort.

DOI [BibTex]



Wear Your Heart on Your Sleeve: Users Prefer Robots with Emotional Reactions to Touch and Ambient Moods

Burns, R. B., Ojo, F., Kuchenbecker, K. J.

In Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pages: 1914-1921, Busan, South Korea, August 2023 (inproceedings)

Abstract
Robots are increasingly being developed as assistants for household, education, therapy, and care settings. Such robots can use adaptive emotional behavior to communicate warmly and effectively with their users and to encourage interest in extended interactions. However, autonomous physical robots often lack a dynamic internal emotional state, instead displaying brief, fixed emotion routines to promote specific user interactions. Furthermore, despite the importance of social touch in human communication, most commercially available robots have limited touch sensing, if any at all. We propose that users' perceptions of a social robotic system will improve when the robot provides emotional responses on both shorter and longer time scales (reactions and moods), based on touch inputs from the user. We evaluated this proposal through an online study in which 51 diverse participants watched nine randomly ordered videos (a three-by-three full-factorial design) of the koala-like robot HERA being touched by a human. Users provided the highest ratings in terms of agency, ambient activity, enjoyability, and touch perceptivity for scenarios in which HERA showed emotional reactions and either neutral or emotional moods in response to social touch gestures. Furthermore, we summarize key qualitative findings about users' preferences for reaction timing, the ability of robot mood to show persisting memory, and perception of neutral behaviors as a curious or self-aware robot.

link (url) DOI Project Page [BibTex]

link (url) DOI Project Page [BibTex]


Augmenting Human Policies using Riemannian Metrics for Human-Robot Shared Control

Oh, Y., Passy, J., Mainprice, J.

In Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pages: 1612-1618, Busan, Korea, August 2023 (inproceedings)

Abstract
We present a shared control framework for teleoperation that combines human and autonomous robot agents operating in spaces of different dimensions. The shared control problem is an optimization problem that maximizes the human's internal action-value function while guaranteeing that the shared control policy stays close to the autonomous robot policy. This results in a state update rule that augments the human controls using the Riemannian metric that emerges from computing the curvature of the robot's value function, accounting for any cost terms or constraints that the human operator may neglect when operating a redundant manipulator. In our experiments, we apply Linear Quadratic Regulators to locally approximate the robot policy using a single optimized robot trajectory, thereby obviating the need for an optimization step at each time step to determine the optimal policy. We show preliminary results of reach-and-grasp teleoperation tasks with a simulated human policy and a pilot user study using a VR headset and controllers. However, the mixed user preference ratings and quantitative results show that more investigation is required to prove the efficacy of the proposed paradigm.

DOI [BibTex]



Naturalistic Vibrotactile Feedback Could Facilitate Telerobotic Assembly on Construction Sites

Gong, Y., Javot, B., Lauer, A. P. R., Sawodny, O., Kuchenbecker, K. J.

In Proceedings of the IEEE World Haptics Conference (WHC), pages: 169-175, Delft, The Netherlands, July 2023 (inproceedings)

Abstract
Telerobotics is regularly used on construction sites to build large structures efficiently. A human operator remotely controls the construction robot under direct visual feedback, but visibility is often poor. Future construction robots that move autonomously will also require operator monitoring. Thus, we designed a wireless haptic feedback system to provide the operator with task-relevant mechanical information from a construction robot in real time. Our AiroTouch system uses an accelerometer to measure the robot end-effector's vibrations and uses off-the-shelf audio equipment and a voice-coil actuator to display them to the user with high fidelity. A study was conducted to evaluate how this type of naturalistic vibration feedback affects the observer's understanding of telerobotic assembly on a real construction site. Seven adults without construction experience observed a mix of manual and autonomous assembly processes both with and without naturalistic vibrotactile feedback. Qualitative analysis of their survey responses and interviews indicated that all participants had positive responses to this technology and believed it would be beneficial for construction activities.

DOI Project Page [BibTex]



A Toolkit for Expanding Sustainability Engineering Utilizing Foundations of the Engineering for One Planet Initiative

Schulz, A., Anderson, C. D., Cooper, C., Roberts, D., Loyo, J., Lewis, K., Kumar, S., Rolf, J., Marulanda, N. A. G.

In Proceedings of the American Society for Engineering Education (ASEE), Baltimore, USA, June 2023, Andrew Schulz, Cindy Cooper, and Cindy Anderson contributed equally. (inproceedings)

Abstract
Recently, there has been a significant push to prepare all engineers with skills in sustainability, motivated by industry needs, accreditation requirements, and international efforts such as the National Science Foundation's 10 Big Ideas and Grand Challenges and the United Nations' Sustainable Development Goals (SDGs). This paper discusses a new toolkit to enable broad dissemination of vetted tools that help engineering faculty members teach sustainability using resources from the Engineering for One Planet (EOP) initiative. This toolkit is meant to engage a diversity of stakeholders to use their voices, experiences, and connections to widely share the need for national curricular change in engineering education. It can foster the integration of sustainability-focused learning outcomes into engineering courses and programs, which is particularly important for graduating engineers at this crucial time, when we collectively face a convergence of national- and global-scale planetary crises that professional engineers will directly and indirectly impact. Catalyzed by The Lemelson Foundation and VentureWell, the EOP initiative provides teaching tools, grants, and support for the EOP Network, a volunteer action network comprising diverse stakeholders collectively seeking to transform engineering education to equip all engineers with the understanding, knowledge, skills, and mindsets to ensure their work contributes to a healthy world. The EOP Framework, a fundamental resource of the initiative, provides a curated and vetted list of ABET-aligned sustainability-focused student learning outcomes, both core and advanced. It covers social and environmental sustainability topics and essential professional skills such as communication, teamwork, and critical thinking. It was designed as a practical implementation tool, rather than a research framework, to help educators embed sustainability concepts and tools into engineering courses and programs at all levels. The Lemelson Foundation has provided a range of grants to support curricular transformation efforts using the EOP Framework, and with this support, ASEE launched an EOP Mini-Grant Program in 2022 to engender curricular changes using the Framework. The EOP Network is working to extend the reach of the Framework across the ASEE community beyond initial pilot programs by implementing an EOP Toolkit for EOP Network members and other stakeholders to use at their home institutions, conferences, and informative workshops. This article describes the rationale for creating the EOP Toolkit, the development process, content examples, and use scenarios.

[BibTex]



Utilizing Online and Open-Source Machine Learning Toolkits to Leverage the Future of Sustainable Engineering

Schulz, A., Stathatos, S., Shriver, C., Moore, R.

In Proceedings of the American Society for Engineering Education (ASEE), Baltimore, USA, June 2023, Andrew Schulz and Suzanne Stathatos are co-first authors. (inproceedings)

Abstract
Recently, there has been a national push to use machine learning (ML) and artificial intelligence (AI) to advance engineering techniques in all disciplines ranging from advanced fracture mechanics in materials science to soil and water quality testing in the civil and environmental engineering fields. Using AI, specifically machine learning, engineers can automate and decrease the processing or human labeling time while maintaining statistical repeatability via trained models and sensors. Edge Impulse has designed an open-source TinyML-enabled Arduino education tool kit for engineering disciplines. This paper discusses the various applications and approaches engineering educators have taken to utilize ML toolkits in the classroom. We provide in-depth implementation guides and associated learning outcomes focused on the Environmental Engineering Classroom. We discuss five specific examples of four standard Environmental Engineering courses for freshman and junior-level engineering. There are currently few programs in the nation that utilize machine learning toolkits to prepare the next generation of ML and AI-educated engineers for industry and academic careers. This paper will guide educators to design and implement ML/AI into engineering curricula (without a specific AI or ML focus within the course) using simple, cheap, and open-source tools and technological aid from an online platform in collaboration with Edge Impulse.

DOI [BibTex]



Reconstructing Signing Avatars from Video Using Linguistic Priors

Forte, M., Kulits, P., Huang, C. P., Choutas, V., Tzionas, D., Kuchenbecker, K. J., Black, M. J.

In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages: 12791-12801, June 2023 (inproceedings)

Abstract
Sign language (SL) is the primary method of communication for the 70 million Deaf people around the world. Video dictionaries of isolated signs are a core SL learning tool. Replacing these with 3D avatars can aid learning and enable AR/VR applications, improving access to technology and online media. However, little work has attempted to estimate expressive 3D avatars from SL video; occlusion, noise, and motion blur make this task difficult. We address this by introducing novel linguistic priors that are universally applicable to SL and provide constraints on 3D hand pose that help resolve ambiguities within isolated signs. Our method, SGNify, captures fine-grained hand pose, facial expression, and body movement fully automatically from in-the-wild monocular SL videos. We evaluate SGNify quantitatively by using a commercial motion-capture system to compute 3D avatars synchronized with monocular video. SGNify outperforms state-of-the-art 3D body-pose- and shape-estimation methods on SL videos. A perceptual study shows that SGNify's 3D reconstructions are significantly more comprehensible and natural than those of previous methods and are on par with the source videos. Code and data are available at sgnify.is.tue.mpg.de.

pdf arXiv project code DOI [BibTex]


2022


Towards Semi-Automated Pleural Cavity Access for Pneumothorax in Austere Environments

L’Orsa, R., Lama, S., Westwick, D., Sutherland, G., Kuchenbecker, K. J.

In Proceedings of the International Astronautical Congress (IAC), pages: 1-7, Paris, France, September 2022 (inproceedings)

Abstract
Pneumothorax, a condition where injury or disease introduces air between the chest wall and lungs, can impede lung function and lead to respiratory failure and/or obstructive shock. Chest trauma from dynamic loads, hypobaric exposure from extravehicular activity, and pulmonary inflammation from celestial dust exposures could potentially cause pneumothoraces during spaceflight with or without exacerbation from deconditioning. On Earth, emergent cases are treated with chest tube insertion (tube thoracostomy, TT) when available, or needle decompression (ND) when not (i.e., pre-hospital). However, ND has high failure rates (up to 94%), and TT has high complication rates (up to 37.9%), especially when performed by inexperienced or intermittent operators. Thus neither procedure is ideal for a pure just-in-time training or skill refreshment approach, and both may require adjuncts for safe inclusion in Level of Care IV (e.g., short duration lunar orbit) or V (e.g., Mars transit) missions. Insertional complications are of particular concern since they cause inadvertent tissue damage that, while surgically repairable in an operating room, could result in (preventable) fatality in a spacecraft or other isolated, confined, or extreme (ICE) environments. Tools must be positioned and oriented correctly to avoid accidental insertion into critical structures, and they must be inserted no further than the thin membrane lining the inside of the rib cage (i.e., the parietal pleura). Operators identify pleural puncture via loss-of-resistance sensations on the tool during advancement, but experienced surgeons anecdotally describe a wide range of membrane characteristics: robust tissues require significant force to perforate, while fragile tissues deliver little-to-no haptic sensation when pierced. Both extremes can lead to tool overshoot and may be representative of astronaut tissues at the beginning (healthy) and end (deconditioned) of long duration exploration class missions. 
Given uncertainty surrounding physician astronaut selection criteria, skill retention, and tissue condition, an adjunct for improved insertion accuracy would be of value. We describe experiments conducted with an intelligent prototype sensorized system aimed at semi-automating tool insertion into the pleural cavity. The assembly would integrate with an in-mission medical system and could be tailored to fully complement an autonomous medical response agent. When coupled with minimal just-in-time training, it has the potential to bestow expert pleural access skills on non-expert operators without the use of ground resources, in both emergent and elective treatment scenarios.

Project Page [BibTex]



How Long Does It Take to Learn Trimanual Coordination?

Allemang–Trivalle, A., Eden, J., Ivanova, E., Huang, Y., Burdet, E.

In Proceedings of the IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), pages: 211-216, Napoli, Italy, August 2022 (inproceedings)

Abstract
Supernumerary robotic limbs can act as intelligent prostheses or augment the motion of healthy people to achieve actions that are not possible with only two natural hands. However, as trimanual control is not typical in everyday activities, it is still unknown how different training could influence its acquisition. We conducted an experimental study to evaluate the impact of different forms of trimanual action on training. Two groups of twelve subjects were each trained in virtual reality for five weeks using either a task with three independent goals or a task with one dependent goal. The success of their training was then evaluated by comparing their task performance and motion characteristics between sessions. The results show that subjects dramatically improved their trimanual task performance as a result of training. However, while practice improved their motion efficiency and reduced their workload in tasks with multiple independent goals, no such improvement was observed when they trained with the task with one dependent goal.

DOI [BibTex]



Comparison of Human Trimanual Performance Between Independent and Dependent Multiple-Limb Training Modes

Allemang–Trivalle, A., Eden, J., Huang, Y., Ivanova, E., Burdet, E.

In Proceedings of the IEEE RAS/EMBS International Conference for Biomedical Robotics and Biomechatronics (BioRob), Seoul, Korea, August 2022 (inproceedings)

Abstract
Human movement augmentation with a third robotic hand can extend human capability, allowing a single user to perform three-hand tasks that would typically require cooperation with other people. However, as trimanual control is not typical in everyday activities, it is still unknown how to train people to acquire this capability efficiently. We conducted an experimental study to evaluate two different trimanual training modes with 24 subjects. This investigated how the different modes impact the transfer of learning of the acquired trimanual capability to another task. Two groups of twelve subjects were each trained in virtual reality for five weeks using either independent or dependent trimanual task repetitions. The training was evaluated by comparing performance before and after training in a gamified trimanual task. The results show that both groups of subjects improved their trimanual capabilities after training. However, this improvement appeared to be independent of the training scheme.

DOI [BibTex]



Wrist-Squeezing Force Feedback Improves Accuracy and Speed in Robotic Surgery Training

Machaca, S., Cao, E., Chi, A., Adrales, G., Kuchenbecker, K. J., Brown, J. D.

In Proceedings of the IEEE RAS/EMBS International Conference for Biomedical Robotics and Biomechatronics (BioRob), pages: 700-707, Seoul, Korea, August 2022 (inproceedings)

Abstract
Current robotic minimally invasive surgery (RMIS) platforms provide surgeons with no haptic feedback of the robot's physical interactions. This limitation forces surgeons to rely heavily on visual feedback and can make it challenging for surgical trainees to manipulate tissue gently. Prior research has demonstrated that haptic feedback can increase task accuracy in RMIS training. However, it remains unclear whether these improvements represent a fundamental improvement in skill, or if they simply stem from re-prioritizing accuracy over task completion time. In this study, we provide haptic feedback of the force applied by the surgical instruments using custom wrist-squeezing devices. We hypothesize that individuals receiving haptic feedback will increase accuracy (produce less force) while increasing their task completion time, compared to a control group receiving no haptic feedback. To test this hypothesis, N=21 novice participants were asked to repeatedly complete a ring rollercoaster surgical training task as quickly as possible. Results show that participants receiving haptic feedback apply significantly less force (0.67 N) than the control group, and they complete the task no faster or slower than the control group after twelve repetitions. Furthermore, participants in the feedback group decreased their task completion times significantly faster (7.68%) than participants in the control group (5.26%). This form of haptic feedback, therefore, has the potential to help trainees improve their technical accuracy without compromising speed.

link (url) DOI Project Page [BibTex]



A Foundational Design Experience in Conservation Technology: A Multi-Disciplinary Approach to Meeting Sustainable Development Goals

Schulz, A., Shriver, C., Seleb, B., Greiner, C., Hu, D., Moore, R., Zhang, M., Jadali, N., Patka, A.

Proceedings of the American Society for Engineering Education, pages: 1-12, Minneapolis, USA, June 2022, Best Paper Award (conference)

Abstract
Project-based courses allow students to apply techniques they have learned in their undergraduate engineering curriculum to real-world problems. While many students have demonstrated interest in working on humanitarian projects that address the United Nations’ Sustainable Development Goals (SDGs), these projects typically require longer timelines than a single semester capstone course will allow. To encourage student participation in achieving the SDGs, we have created an interdisciplinary course that allows sophomore through senior-level undergraduate students to engage in utilizing human-wildlife centered design to work on projects that prevent extinction and promote healthy human-wildlife co-habitation. This field, known as Conservation Technology (CT), helps students 1) understand the complexities of solutions to the SDGs and the need for diverse perspectives, 2) find and apply international conservation guidelines, 3) develop teamwork and leadership skills by working on interdisciplinary teams, and 4) evaluate and assess conservation technology projects for multiple stakeholders and in the context of the SDGs. Students may take this course for several sequential semesters, partnering with more senior and junior students, allowing for long-term engagement in sustainability solutions. In the first half of the semester, we leverage more traditional pedagogical approaches, including formative assessments and in-class lectures on conservation, technology, and sustainability solutions. In the second half of the semester, we utilize peer, instructor, and expert reviews of the projects students work on to help them excel at successful and equitable conservation technology interventions. Through 9 formal interviews conducted with students, we discovered themes that students identified as most critical for engaging in conservation technology initiatives. 
These themes include 1) perspective given to students through in-person, active learning using the Dilemma, Issue, Question approach, 2) independent learning of conservation technology background and theory at the beginning of the course, and 3) hands-on learning and project-focused experiences in CT. To engage engineers in the SDGs, students needed half a semester of background information to gain an adequate understanding of the complexities of humanitarian aid projects. This paper discusses the course structure that will help bring the Sustainable Development Goals into engineering curricula and uses the Conservation Technology projects in the course as case studies for interdisciplinary learning.

link (url) [BibTex]



Larger Skin-Surface Contact Through a Fingertip Wearable Improves Roughness Perception

Gueorguiev, D., Javot, B., Spiers, A., Kuchenbecker, K. J.

In Haptics: Science, Technology, Applications, pages: 171-179, Lecture Notes in Computer Science, 13235, (Editors: Seifi, Hasti and Kappers, Astrid M. L. and Schneider, Oliver and Drewing, Knut and Pacchierotti, Claudio and Abbasimoshaei, Alireza and Huisman, Gijs and Kern, Thorsten A.), Springer, Cham, 13th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications (EuroHaptics 2022), May 2022 (inproceedings)

Abstract
With the aim of creating wearable haptic interfaces that allow the performance of everyday tasks, we explore how differently designed fingertip wearables change the sensory threshold for tactile roughness perception. Study participants performed the same two-alternative forced-choice roughness task with a bare finger and wearing three flexible fingertip covers: two with a square opening (64 and 36 mm2, respectively) and the third with no opening. The results showed that adding the large opening improved the 75% JND by a factor of two compared to the fully covered finger: the larger the skin-surface contact area, the better the roughness perception. Overall, the results show that even partial skin-surface contact through a fingertip wearable improves roughness perception, which opens design opportunities for haptic wearables that preserve natural touch.

DOI [BibTex]



Toward the UN’s Sustainable Development Goals (SDGs): Conservation Technology for Design Teaching & Learning

Schulz, A., Seleb, B., Shriver, C., Hu, D., Moore, R.

pages: 1-9, Charlotte, USA, March 2022 (conference)

Abstract
Interdisciplinary capstone team projects have provided a more diverse array of student experiences and have been shown to improve a team’s innovation, analysis, and communication. The UN’s 17 Sustainable Development Goals (SDGs) provide aspirational, human-centered design opportunities for applying engineering practices to real-world technology interventions that aid in programs from public health to wildlife conservation. In this sophomore-level design course, we focused on climate change, life below water, and life on land through the lens of conservation technology. Conservation Technology is a relatively new field focusing on the creation of technologies to promote and safeguard sustainable human-wildlife interactions. In this manuscript, we describe the framework for teaching a Conservation Technology project-based capstone engineering course and present observations of monodisciplinary and interdisciplinary team practices. When working in a non-interdisciplinary team, engineers tended to focus only on the design deliverables and missed challenges imposed by policy, biology, and computational requirements. These three challenges are nearly always present in conservation technology interventions. In contrast, the interdisciplinary team was better able to identify the diverse challenges associated with a conservation technology intervention. This work-in-progress paper focuses on the development of an organized curriculum to teach conservation technology to first- and second-year engineers to allow them to work towards more sustainable engineering practices. Universities are working to inject the SDGs into the engineering curriculum, and we believe Conservation Technology may be an ideal fit for combining the engineering design process with the scientific method to discover new types of possible failures in design and create innovative solutions for a sustainable future.

link (url) [BibTex]



Robot, Pass Me the Tool: Handle Visibility Facilitates Task-Oriented Handovers

Ortenzi, V., Filipovica, M., Abdlkarim, D., Pardi, T., Takahashi, C., Wing, A. M., Luca, M. D., Kuchenbecker, K. J.

In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI), pages: 256-264, March 2022, Valerio Ortenzi and Maija Filipovica contributed equally to this publication. (inproceedings)

Abstract
A human handing over an object modulates their grasp and movements to accommodate their partner's capabilities, which greatly increases the likelihood of a successful transfer. State-of-the-art robot behavior lacks this level of user understanding, resulting in interactions that force the human partner to shoulder the burden of adaptation. This paper investigates how visual occlusion of the object being passed affects the subjective perception and quantitative performance of the human receiver. We performed an experiment in virtual reality where seventeen participants were tasked with repeatedly reaching to take a tool from the hand of a robot; each of the three tested objects (hammer, screwdriver, scissors) was presented in a wide variety of poses. We carefully analysed the user's hand and head motions, the time to grasp the object, and the chosen grasp location, as well as participants' ratings of the grasp they just performed. Results show that initial visibility of the handle significantly increases the reported holdability and immediate usability of a tool. Furthermore, a robot that offers objects so that their handles are more occluded forces the receiver to spend more time in planning and executing the grasp and also lowers the probability that the tool will be grasped by the handle. Together these findings indicate that robots can more effectively support their human work partners by increasing the visibility of the intended grasp location of objects being passed.

DOI Project Page [BibTex]


2021


Sensorimotor-Inspired Tactile Feedback and Control Improve Consistency of Prosthesis Manipulation in the Absence of Direct Vision

Thomas, N., Fazlollahi, F., Brown, J. D., Kuchenbecker, K. J.

In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages: 6174-6181, Prague, Czech Republic, September 2021 (inproceedings)

Abstract
The lack of haptically aware upper-limb prostheses forces amputees to rely largely on visual cues to complete activities of daily living. In contrast, non-amputees inherently rely on conscious haptic perception and automatic tactile reflexes to govern volitional actions in situations that do not allow for constant visual attention. We therefore propose a myoelectric prosthesis system that reflects these concepts to aid manipulation performance without direct vision. To implement this design, we constructed two fabric-based tactile sensors that measure contact location along the palmar and dorsal sides of the prosthetic fingers and grasp pressure at the tip of the prosthetic thumb. Inspired by the natural sensorimotor system, we use the measurements from these sensors to provide vibrotactile feedback of contact location and implement a tactile grasp controller with reflexes that prevent over-grasping and object slip. We compare this tactile system to a standard myoelectric prosthesis in a challenging reach-to-pick-and-place task conducted without direct vision; 17 non-amputee adults took part in this single-session between-subjects study. Participants in the tactile group achieved more consistent high performance compared to participants in the standard group. These results show that adding contact-location feedback and reflex control increases the consistency with which objects can be grasped and moved without direct vision in upper-limb prosthetics.

DOI Project Page [BibTex]



Optimal Grasp Selection, and Control for Stabilising a Grasped Object, with Respect to Slippage and External Forces

Pardi, T., E., A. G., Ortenzi, V., Stolkin, R.

In Proceedings of the IEEE-RAS International Conference on Humanoid Robots (Humanoids 2020), pages: 429-436, Munich, Germany, July 2021 (inproceedings)

Abstract
This paper explores the problem of how to grasp an object and then control a robot arm so as to stabilise that object under conditions where i) there is significant slippage between the object and the robot's fingers and ii) the object is perturbed by external forces. For an n-degree-of-freedom (dof) robot, we treat the robot plus grasped object as an (n+1)-dof system, in which the grasped object can rotate between the robot's fingers via slippage. Firstly, we propose an optimisation-based algorithm that selects the best grasping location from a set of given candidates. The best grasp is the one that yields the minimum effort for the arm to keep the object in equilibrium against external perturbations. Secondly, we propose a controller that brings the (n+1)-dof system to a task configuration and then maintains that configuration robustly against matched and unmatched disturbances. To minimise slippage between the gripper and the grasped object, a sufficient criterion for selecting the control coefficients is proposed by adopting a set of inequalities, which are obtained by solving a non-linear minimisation problem dependent on the estimated static friction. We demonstrate our approach on a simulated (2+1)-dof planar robot, comprising the two joints of the robot arm plus the additional passive joint formed by the slippage between the object and the robot's fingers. We also present an experiment with a real robot arm grasping a flat object between the fingers of a parallel-jaw gripper.
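As an illustration of the grasp-selection criterion described in the abstract (the best grasp minimizes the arm's effort to hold the object in equilibrium), the sketch below scores candidate configurations of a planar 2-link arm by the norm of the static joint torques τ = Jᵀf. The unit link lengths, function names, and candidate set are hypothetical, not taken from the paper.

```python
import numpy as np

def static_torque_norm(q, force):
    """Norm of the joint torques tau = J(q)^T f that balance a planar
    force applied at the end effector of a 2-link arm (unit links)."""
    l1 = l2 = 1.0
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    J = np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                  [ l1 * c1 + l2 * c12,  l2 * c12]])  # planar Jacobian
    return float(np.linalg.norm(J.T @ force))

def best_grasp(candidate_configs, force):
    """Index of the candidate configuration needing minimum joint effort."""
    efforts = [static_torque_norm(q, force) for q in candidate_configs]
    return int(np.argmin(efforts))
```

For a downward load, a folded arm (elbow at 90°) needs less holding torque than a fully extended one, so `best_grasp` prefers it.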

DOI [BibTex]



PrendoSim: Proxy-Hand-Based Robot Grasp Generator

Abdlkarim, D., Ortenzi, V., Pardi, T., Filipovica, M., Wing, A. M., Kuchenbecker, K. J., Di Luca, M.

In ICINCO 2021: Proceedings of the International Conference on Informatics in Control, Automation and Robotics, pages: 60-68, (Editors: Gusikhin, Oleg and Nijmeijer, Henk and Madani, Kurosh), SciTePress, Setúbal, 18th International Conference on Informatics in Control, Automation and Robotics (ICINCO 2021), July 2021 (inproceedings)

Abstract
The synthesis of realistic robot grasps in a simulated environment is pivotal in generating datasets that support sim-to-real transfer learning. In a step toward achieving this goal, we propose PrendoSim, an open-source grasp generator based on a proxy-hand simulation that employs NVIDIA's physics engine (PhysX) and the recently released articulated-body objects developed by Unity (https://prendosim.github.io). We present the implementation details, the method used to generate grasps, the approach to operationally evaluate stability of the generated grasps, and examples of grasps obtained with two different grippers (a parallel jaw gripper and a three-finger hand) grasping three objects selected from the YCB dataset (a pair of scissors, a hammer, and a screwdriver). Compared to simulators proposed in the literature, PrendoSim balances grasp realism and ease of use, displaying an intuitive interface and enabling the user to produce a large and varied dataset of stable grasps.

DOI Project Page [BibTex]



Ungrounded Vari-Dimensional Tactile Fingertip Feedback for Virtual Object Interaction

Young, E. M., Kuchenbecker, K. J.

In CHI ’21: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pages: 217, ACM, New York, NY, Conference on Human Factors in Computing Systems (CHI 2021), May 2021 (inproceedings)

Abstract
Compared to grounded force feedback, providing tactile feedback via a wearable device can free the user and broaden the potential applications of simulated physical interactions. However, neither the limitations nor the full potential of tactile-only feedback have been precisely examined. Here we investigate how the dimensionality of cutaneous fingertip feedback affects user movements and virtual object recognition. We combine a recently invented 6-DOF fingertip device with motion tracking, a head-mounted display, and novel contact-rendering algorithms to enable a user to tactilely explore immersive virtual environments. We evaluate rudimentary 1-DOF, moderate 3-DOF, and complex 6-DOF tactile feedback during shape discrimination and mass discrimination, also comparing to interactions with real objects. Results from 20 naive study participants show that higher-dimensional tactile feedback may indeed allow completion of a wider range of virtual tasks, but that feedback dimensionality surprisingly does not greatly affect the exploratory techniques employed by the user.

link (url) DOI Project Page [BibTex]



Robot Interaction Studio: A Platform for Unsupervised HRI

Mohan, M., Nunez, C. M., Kuchenbecker, K. J.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Xian, China, May 2021 (inproceedings)

Abstract
Robots hold great potential for supporting exercise and physical therapy, but such systems are often cumbersome to set up and require expert supervision. We aim to solve these concerns by combining Captury Live, a real-time markerless motion-capture system, with a Rethink Robotics Baxter Research Robot to create the Robot Interaction Studio. We evaluated this platform for unsupervised human-robot interaction (HRI) through a 75-minute-long user study with seven adults who were given minimal instructions and no feedback about their actions. The robot used sounds, facial expressions, facial colors, head motions, and arm motions to sequentially present three categories of cues in randomized order while constantly rotating its face screen to look at the user. Analysis of the captured user motions shows that the cue type significantly affected the distance subjects traveled and the amount of time they spent within the robot’s reachable workspace, in alignment with the design of the cues. Heat map visualizations of the recorded user hand positions confirm that users tended to mimic the robot’s arm poses. Despite some initial frustration, taking part in this study did not significantly change user opinions of the robot. We reflect on the advantages of the proposed approach to unsupervised HRI as well as the limitations and possible future extensions of our system.

DOI Project Page [BibTex]



The Six Hug Commandments: Design and Evaluation of a Human-Sized Hugging Robot with Visual and Haptic Perception

Block, A. E., Christen, S., Gassert, R., Hilliges, O., Kuchenbecker, K. J.

In HRI ’21: Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, pages: 380-388, ACM, New York, NY, USA, ACM/IEEE International Conference on Human-Robot Interaction (HRI 2021), March 2021 (inproceedings)

Abstract
Receiving a hug is one of the best ways to feel socially supported, and the lack of social touch can have severe negative effects on an individual's well-being. Based on previous research both within and outside of HRI, we propose six tenets (''commandments'') of natural and enjoyable robotic hugging: a hugging robot should be soft, be warm, be human sized, visually perceive its user, adjust its embrace to the user's size and position, and reliably release when the user wants to end the hug. Prior work validated the first two tenets, and the final four are new. We followed all six tenets to create a new robotic platform, HuggieBot 2.0, that has a soft, warm, inflated body (HuggieChest) and uses visual and haptic sensing to deliver closed-loop hugging. We first verified the outward appeal of this platform in comparison to the previous PR2-based HuggieBot 1.0 via an online video-watching study involving 117 users. We then conducted an in-person experiment in which 32 users each exchanged eight hugs with HuggieBot 2.0, experiencing all combinations of visual hug initiation, haptic sizing, and haptic releasing. The results show that adding haptic reactivity definitively improves user perception of a hugging robot, largely verifying our four new tenets and illuminating several interesting opportunities for further improvement.

Block21-HRI-Commandments.pdf DOI Project Page [BibTex]


2020


Synchronicity Trumps Mischief in Rhythmic Human-Robot Social-Physical Interaction

Fitter, N. T., Kuchenbecker, K. J.

In Robotics Research, 10, pages: 269-284, Springer Proceedings in Advanced Robotics, (Editors: Amato, Nancy M. and Hager, Greg and Thomas, Shawna and Torres-Torriti, Miguel), Springer, Cham, 18th International Symposium on Robotics Research (ISRR), 2020 (inproceedings)

Abstract
Hand-clapping games and other forms of rhythmic social-physical interaction might help foster human-robot teamwork, but the design of such interactions has scarcely been explored. We leveraged our prior work to enable the Rethink Robotics Baxter Research Robot to competently play one-handed tempo-matching hand-clapping games with a human user. To understand how such a robot’s capabilities and behaviors affect user perception, we created four versions of this interaction: the hand clapping could be initiated by either the robot or the human, and the non-initiating partner could be either cooperative, yielding synchronous motion, or mischievously uncooperative. Twenty adults tested two clapping tempos in each of these four interaction modes in a random order, rating every trial on standardized scales. The study results showed that having the robot initiate the interaction gave it a more dominant perceived personality. Despite previous results on the intrigue of misbehaving robots, we found that moving synchronously with the robot almost always made the interaction more enjoyable, less mentally taxing, less physically demanding, and lower effort for users than asynchronous interactions caused by robot or human mischief. Taken together, our results indicate that cooperative rhythmic social-physical interaction has the potential to strengthen human-robot partnerships.

DOI [BibTex]



Elephant Trunk Skin: Nature’s Flexible Kevlar

Schulz, A., Fourney, E., Sordilla, S., Sukhwani, A., Hu, D.

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, October 2020 (conference)

Abstract
Elephants can extend their trunks by 20% in order to reach faraway objects. Muscular hydrostats such as earthworms, tongues, and octopus arms are all known to have similar levels of extensibility. However, the large and heavy trunk has the added constraint of also being durable. In this study, we perform material testing on skin sections of the elephant trunk. This skin varies along the trunk, with the dorsal portion having folds and the ventral portion having wrinkles. In tensile tests, the folds exhibit ten times the strain of flat portions of skin, and the wrinkles three times. To better interpret the strains observed in tensile testing, we perform numerical simulations of elastic material with wrinkles and folds. We show that wrinkles and folds are a good solution for providing both strength and extensibility.

[BibTex]



Calibrating a Soft ERT-Based Tactile Sensor with a Multiphysics Model and Sim-to-real Transfer Learning

Lee, H., Park, H., Serhat, G., Sun, H., Kuchenbecker, K. J.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pages: 1632-1638, IEEE International Conference on Robotics and Automation (ICRA 2020), May 2020 (inproceedings)

Abstract
Tactile sensors based on electrical resistance tomography (ERT) have shown many advantages for implementing a soft and scalable whole-body robotic skin; however, calibration is challenging because pressure reconstruction is an ill-posed inverse problem. This paper introduces a method for calibrating soft ERT-based tactile sensors using sim-to-real transfer learning with a finite element multiphysics model. The model is composed of three simple models that together map contact pressure distributions to voltage measurements. We optimized the model parameters to reduce the gap between simulation and reality. As a preliminary study, we discretized the sensing points into a 6-by-6 grid and synthesized single- and two-point contact datasets from the multiphysics model. We obtained another single-point dataset using the real sensor with the same contact locations and forces used in the simulation. Our new deep neural network architecture uses a de-noising network to capture the simulation-to-real gap and a reconstruction network to estimate contact force from voltage measurements. The proposed approach achieved an 82% hit rate for localization and a force estimation error of 0.51 N in single-contact tests, and a 78.5% hit rate for localization and a force estimation error of 5.0 N in two-point contact tests. We believe this new calibration method has the potential to improve the sensing performance of ERT-based tactile sensors.

DOI Project Page [BibTex]



An ERT-Based Robotic Skin with Sparsely Distributed Electrodes: Structure, Fabrication, and DNN-Based Signal Processing

Park, K., Park, H., Lee, H., Park, S., Kim, J.

In 2020 IEEE International Conference on Robotics and Automation (ICRA 2020), pages: 1617-1624, IEEE, Piscataway, NJ, IEEE International Conference on Robotics and Automation (ICRA 2020), May 2020 (inproceedings)

Abstract
Electrical resistance tomography (ERT) has previously been utilized to develop a large-scale tactile sensor because this approach enables the estimation of the conductivity distribution among the electrodes based on a known physical model. Such a sensor made with a stretchable material can conform to a curved surface. However, this sensor cannot fully cover a cylindrical surface because in such a configuration, the edges of the sensor must meet each other. The electrode configuration becomes irregular in this edge region, which may degrade the sensor performance. In this paper, we introduce an ERT-based robotic skin with evenly and sparsely distributed electrodes. For implementation, we sprayed a carbon nanotube (CNT)-dispersed solution to form a conductive sensing domain on a cylindrical surface. The electrodes were firmly embedded in the surface so that the wires were not exposed to the outside. The sensor output images were estimated using a deep neural network (DNN), which was trained with noisy simulation data. An indentation experiment revealed that the localization error of the sensor was 5.2 ± 3.3 mm, which is remarkable performance with only 30 electrodes. A frame rate of up to 120 Hz could be achieved with a sensing domain area of 90 cm². The proposed approach simplifies the fabrication of 3D-shaped sensors, allowing them to be easily applied to existing robot arms in a seamless and robust manner.

DOI [BibTex]



Capturing Experts’ Mental Models to Organize a Collection of Haptic Devices: Affordances Outweigh Attributes

Seifi, H., Oppermann, M., Bullard, J., MacLean, K. E., Kuchenbecker, K. J.

In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), pages: 268, Conference on Human Factors in Computing Systems (CHI 2020), April 2020 (inproceedings)

Abstract
Humans rely on categories to mentally organize and understand sets of complex objects. One such set, haptic devices, has myriad technical attributes that affect user experience in complex ways. Seeking an effective navigation structure for a large online collection, we elicited expert mental categories for grounded force-feedback haptic devices: 18 experts (9 device creators, 9 interaction designers) reviewed, grouped, and described 75 devices according to their similarity in a custom card-sorting study. From the resulting quantitative and qualitative data, we identify prominent patterns of tagging versus binning, and we report 6 uber-attributes that the experts used to group the devices, favoring affordances over device specifications. Finally, we derive 7 device categories and 9 subcategories that reflect the imperfect yet semantic nature of the expert mental models. We visualize these device categories and similarities in the online haptic collection, and we offer insights for studying expert understanding of other human-centered technology.

DOI Project Page [BibTex]



Changes in Normal Force During Passive Dynamic Touch: Contact Mechanics and Perception

Gueorguiev, D., Lambert, J., Thonnard, J., Kuchenbecker, K. J.

In Proceedings of the IEEE Haptics Symposium (HAPTICS), pages: 746-752, IEEE Haptics Symposium (HAPTICS 2020), March 2020 (inproceedings)

Abstract
Using a force-controlled robotic platform, we investigated the contact mechanics and psychophysical responses induced by negative and positive modulations in normal force during passive dynamic touch. In the natural state of the finger, the applied normal force modulation induces a correlated change in the tangential force. In a second condition, we applied talcum powder to the fingerpad, which induced a significant modification in the slope of the correlated tangential change. In both conditions, the same ten participants had to detect the interval that contained a decrease or an increase in the pre-stimulation normal force of 1 N. In the natural state, the 75% just noticeable difference for this task was found to be a ratio of 0.19 and 0.18 for decreases and increases, respectively. With talcum powder on the fingerpad, the normal force thresholds remained stable, following the Weber law of constant just noticeable differences, while the tangential force thresholds changed in the same way as the correlation slopes. This result suggests that participants predominantly relied on the normal force changes to perform the detection task. In addition, participants were asked to report whether the force decreased or increased. Their performance was generally poor at this second task even for above-threshold changes. However, their accuracy slightly improved with the talcum powder, which might be due to the reduced finger-surface friction.
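As a sketch of how a 75% just-noticeable difference like the one reported above can be read off a measured psychometric function, the snippet below interpolates the force-change ratio detected on 75% of trials. The detection-rate values are illustrative, not the study's data.

```python
import numpy as np

def jnd_75(force_ratios, detection_rates):
    """Interpolate the force-change ratio (delta F / F) detected on 75%
    of trials from monotonically increasing detection rates."""
    return float(np.interp(0.75, detection_rates, force_ratios))

# Hypothetical psychometric data: detection rate vs. force-change ratio.
ratios = [0.05, 0.10, 0.15, 0.20, 0.25]
rates = [0.30, 0.50, 0.65, 0.80, 0.95]
threshold = jnd_75(ratios, rates)  # lands between 0.15 and 0.20
```

With these made-up rates the interpolated threshold is about 0.18, i.e. the same order as the Weber fractions of 0.18-0.19 reported in the abstract.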

DOI [BibTex]



Haptic Object Parameter Estimation during Within-Hand-Manipulation with a Simple Robot Gripper

Mohtasham, D., Narayanan, G., Calli, B., Spiers, A. J.

In Proceedings of the IEEE Haptics Symposium (HAPTICS), pages: 140-147, March 2020 (inproceedings)

Abstract
Though it is common for robots to rely on vision for object feature estimation, there are environments where optical sensing performs poorly due to occlusion, poor lighting, or limited space for camera placement. Haptic sensing in robotics has a long history, but few approaches have combined it with within-hand manipulation (WIHM) to expose more features of an object to the tactile sensing elements of the hand. As in the human hand, these sensing structures are generally non-homogeneous in their coverage of a gripper's manipulation surfaces, as some hand or finger regions are more sensitive than others. In this work we use a modified version of the recently developed two-finger Model VF (variable friction) robot gripper to acquire tactile information while rolling objects within the robot's grasp. This new gripper has one high-friction passive finger surface and one high-friction tactile sensing surface equipped with 12 low-cost barometric force sensors encased in urethane. We developed algorithms that use the data generated during these rolling actions to determine parametric aspects of the object under manipulation; two parameters are currently determined: 1) the location of the object within the grasp and 2) the object's shape (from three alternatives). The algorithms were first developed on a static test rig with passive object rolling and later evaluated on the robot gripper platform using active WIHM, which introduced artifacts into the data. With an object set consisting of 3 shapes and 5 sizes, overall shape estimation accuracies of 88% and 78% were achieved for the test rig and hand, respectively. Location estimation of each object's centroid during motion achieved a mean error of less than 2 mm along the 95 mm length of the tactile sensing finger.

DOI [BibTex]


2019


Deep Neural Network Approach in Electrical Impedance Tomography-Based Real-Time Soft Tactile Sensor

Park, H., Lee, H., Park, K., Mo, S., Kim, J.

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), (999):7447-7452, IEEE, 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), November 2019 (conference)

Abstract
Recently, whole-body tactile sensing has emerged in robotics for safe human-robot interaction. A key issue in whole-body tactile sensing is ensuring large-area manufacturability and high durability. To fulfill these requirements, a reconstruction method called electrical impedance tomography (EIT) was adopted in large-area tactile sensing. This method maps voltage measurements to a conductivity distribution using only a small number of measurement electrodes. A common approach for the mapping is a linearized model derived from Maxwell's equations. This linearized model offers fast computation and moderate robustness against measurement noise, but its reconstruction accuracy is limited. In this paper, we propose a novel nonlinear EIT algorithm based on a deep neural network (DNN) to improve the reconstruction accuracy of EIT-based tactile sensors. The network architecture with rectified linear unit (ReLU) activations ensures extremely low computational time (0.002 seconds) and a nonlinear structure that provides superior measurement accuracy. The DNN model was trained with a dataset synthesized in a simulation environment. To achieve robustness against measurement noise, training proceeded with additive Gaussian noise estimated from actual measurement noise. For real sensor application, the trained DNN model was transferred to a conductive fabric-based soft tactile sensor. For validation, the reconstruction error and noise robustness of the conventional linearized model and the proposed approach were compared in the simulation environment. As a demonstration, the tactile sensor equipped with the trained DNN model is used for contact force estimation.
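For context, the conventional linearized baseline that the abstract contrasts with the DNN can be sketched as a one-step Tikhonov-regularized least-squares solve; here `J` stands for the sensitivity (Jacobian) matrix of the linearized forward model, and all names are illustrative rather than the paper's.

```python
import numpy as np

def linearized_eit_reconstruction(J, v, lam=1e-2):
    """One-step Tikhonov solution of the linearized EIT problem:
    delta_sigma = (J^T J + lam * I)^{-1} J^T delta_v,
    mapping a voltage-change vector v to a conductivity change."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ v)
```

With a well-conditioned sensitivity matrix and small regularization, this recovers the underlying conductivity change almost exactly; the regularizer `lam` trades reconstruction accuracy for robustness to measurement noise, which is the limitation the paper's nonlinear DNN approach targets.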

DOI [BibTex]



Effect of Remote Masking on Detection of Electrovibration

Jamalzadeh, M., Güçlü, B., Vardar, Y., Basdogan, C.

In Proceedings of the IEEE World Haptics Conference (WHC), pages: 229-234, Tokyo, Japan, July 2019 (inproceedings)

Abstract
Masking has been used to study human perception of tactile stimuli, including those created on haptic touch screens. Earlier studies have investigated the effect of in-site masking on tactile perception of electrovibration. In this study, we investigated whether it is possible to change the detection threshold of electrovibration at the fingertip of the index finger via remote masking, i.e., by applying a (mechanical) vibrotactile stimulus on the proximal phalanx of the same finger. The masking stimuli were generated by a voice coil (Haptuator). For eight participants, we first measured the detection thresholds for electrovibration at the fingertip and for vibrotactile stimuli at the proximal phalanx. Then, the vibrations on the skin were measured at four different locations on the index finger to investigate how the mechanical masking stimulus propagated as the masking level was varied. Finally, electrovibration thresholds were measured in the presence of vibrotactile masking stimuli. Our results show that the vibrotactile masking stimuli generated sub-threshold vibrations around the fingertip and hence did not mechanically interfere with the electrovibration stimulus. However, there was a clear psychophysical masking effect due to central neural processes: the electrovibration absolute threshold increased by approximately 0.19 dB for each dB increase in the masking level.

DOI [BibTex]



Objective and Subjective Assessment of Algorithms for Reducing Three-Axis Vibrations to One-Axis Vibrations

Park, G., Kuchenbecker, K. J.

In Proceedings of the IEEE World Haptics Conference, pages: 467-472, July 2019 (inproceedings)

Abstract
A typical approach to creating realistic vibrotactile feedback is reducing 3D vibrations recorded by an accelerometer to 1D signals that can be played back on a haptic actuator, but some information is often lost in this dimensional reduction. This paper describes seven representative algorithms and proposes four metrics based on the spectral match, the temporal match, and their average value and variability across 3D rotations. These four performance metrics were applied to four texture recordings, and the method utilizing the discrete Fourier transform (DFT) was found to be the best regardless of the sensing axis. We also recruited 16 participants to assess the perceptual similarity achieved by each algorithm in real time. The four metrics correlated well with the subjectively rated similarities for six of the dimensional reduction algorithms; the exception was taking the 3D vector magnitude, which was perceived to be good despite its low spectral- and temporal-match metrics.
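The abstract does not spell out the algorithms, but the DFT-based family of reduction methods can be sketched roughly as follows: combine the per-axis magnitude spectra into a single magnitude spectrum, attach a phase, and resynthesize a one-axis signal. This is a sketch of the idea, not necessarily the paper's exact formulation; the phase choice here (phase of the summed spectrum) is an assumption.

```python
import numpy as np

def dft_reduce(acc):
    """Collapse a 3-axis acceleration recording (N x 3 array) to one
    axis by combining per-axis magnitude spectra and borrowing the
    phase of the summed spectrum (sketch of DFT-based reduction)."""
    X = np.fft.rfft(acc, axis=0)                 # per-axis half spectra
    mag = np.sqrt((np.abs(X) ** 2).sum(axis=1))  # combined magnitude
    phase = np.angle(X.sum(axis=1))              # borrowed phase
    return np.fft.irfft(mag * np.exp(1j * phase), n=acc.shape[0])
```

By construction, the magnitude spectrum of the 1D output matches the energy-combined spectrum of the three input axes, which is the spectral-match property the paper's metrics evaluate.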

DOI Project Page [BibTex]


Fingertip Interaction Metrics Correlate with Visual and Haptic Perception of Real Surfaces

Vardar, Y., Wallraven, C., Kuchenbecker, K. J.

In Proceedings of the IEEE World Haptics Conference (WHC), pages: 395-400, Tokyo, Japan, July 2019 (inproceedings)

Abstract
Both vision and touch contribute to the perception of real surfaces. Although there have been many studies on the individual contributions of each sense, it is still unclear how each modality’s information is processed and integrated. To fill this gap, we investigated the similarity of visual and haptic perceptual spaces, as well as how well they each correlate with fingertip interaction metrics. Twenty participants interacted with ten different surfaces from the Penn Haptic Texture Toolkit by either looking at or touching them and judged their similarity in pairs. By analyzing the resulting similarity ratings using multi-dimensional scaling (MDS), we found that surfaces are similarly organized within the three-dimensional perceptual spaces of both modalities. Also, between-participant correlations were significantly higher in the haptic condition. In a separate experiment, we obtained the contact forces and accelerations acting on one finger interacting with each surface in a controlled way. We analyzed the collected fingertip interaction data in both the time and frequency domains. Our results suggest that the three perceptual dimensions for each modality can be represented by roughness/smoothness, hardness/softness, and friction, and that these dimensions can be estimated by surface vibration power, tap spectral centroid, and kinetic friction coefficient, respectively.
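Classical (Torgerson) MDS is one standard way to turn pairwise dissimilarity judgments like these into a low-dimensional perceptual space. The sketch below is a minimal textbook implementation, not the authors' exact analysis pipeline.

```python
import numpy as np

def classical_mds(D, k=3):
    """Embed n items in k dimensions from an n x n dissimilarity
    matrix D via double centering and eigendecomposition."""
    n = D.shape[0]
    Jc = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * Jc @ (D ** 2) @ Jc          # Gram matrix of the embedding
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]          # largest eigenvalues first
    return V[:, idx] * np.sqrt(np.clip(w[idx], 0.0, None))
```

When the dissimilarities are exactly Euclidean, the embedding reproduces them; for perceptual similarity ratings the low-dimensional coordinates are instead an approximation whose axes can then be interpreted (e.g., as roughness, hardness, and friction).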

DOI Project Page [BibTex]



A Clustering Approach to Categorizing 7 Degree-of-Freedom Arm Motions during Activities of Daily Living

Gloumakov, Y., Spiers, A. J., Dollar, A. M.

In Proceedings of the International Conference on Robotics and Automation (ICRA), pages: 7214-7220, Montreal, Canada, May 2019 (inproceedings)

Abstract
In this paper we present a novel method of categorizing naturalistic human arm motions during activities of daily living using clustering techniques. While many current approaches attempt to define all arm motions using heuristic interpretation, or a combination of several abstract motion primitives, our unsupervised approach generates a hierarchical description of natural human motion with well recognized groups. Reliable recommendation of a subset of motions for task achievement is beneficial to various fields, such as robotic and semi-autonomous prosthetic device applications. The proposed method makes use of well-known techniques such as dynamic time warping (DTW) to obtain a divergence measure between motion segments, DTW barycenter averaging (DBA) to get a motion average, and Ward's distance criterion to build the hierarchical tree. The clusters that emerge summarize the variety of recorded motions into the following general tasks: reach-to-front, transfer-box, drinking from vessel, on-table motion, turning a key or door knob, and reach-to-back pocket. The clustering methodology is justified by comparing against an alternative measure of divergence using Bézier coefficients and K-medoids clustering.
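The divergence measure at the core of this pipeline can be illustrated with a minimal DTW implementation for 1-D traces; the paper applies DTW (plus DBA averaging and Ward linkage) to full 7-DOF motion segments, so this sketch is only the scalar special case.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-time-warping divergence between two 1-D motion traces,
    computed by filling the standard cumulative-cost table."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of: repeat a sample, skip a sample, or advance both.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])
```

A condensed matrix of these divergences can then be fed to hierarchical clustering with Ward's criterion (e.g., `scipy.cluster.hierarchy.linkage`) to build the motion hierarchy the abstract describes.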

DOI [BibTex]



Haptipedia: Accelerating Haptic Device Discovery to Support Interaction & Engineering Design

Seifi, H., Fazlollahi, F., Oppermann, M., Sastrillo, J. A., Ip, J., Agrawal, A., Park, G., Kuchenbecker, K. J., MacLean, K. E.

In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), pages: 1-12, Glasgow, Scotland, May 2019 (inproceedings)

Abstract
Creating haptic experiences often entails inventing, modifying, or selecting specialized hardware. However, experience designers are rarely engineers, and 30 years of haptic inventions are buried in a fragmented literature that describes devices mechanically rather than by potential purpose. We conceived of Haptipedia to unlock this trove of examples: Haptipedia presents a device corpus for exploration through metadata that matter to both device and experience designers. It is a taxonomy of device attributes that go beyond physical description to capture potential utility, applied to a growing database of 105 grounded force-feedback devices, and accessed through a public visualization that links utility to morphology. Haptipedia's design was driven by both systematic review of the haptic device literature and rich input from diverse haptic designers. We describe Haptipedia's reception (including hopes it will redefine device reporting standards) and our plans for its sustainability through community participation.

DOI Project Page [BibTex]



Internal Array Electrodes Improve the Spatial Resolution of Soft Tactile Sensors Based on Electrical Resistance Tomography

Lee, H., Park, K., Kim, J., Kuchenbecker, K. J.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pages: 5411-5417, Montreal, Canada, May 2019, Hyosang Lee and Kyungseo Park contributed equally to this publication (inproceedings)

DOI [BibTex]



Improving Haptic Adjective Recognition with Unsupervised Feature Learning

Richardson, B. A., Kuchenbecker, K. J.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pages: 3804-3810, Montreal, Canada, May 2019 (inproceedings)

Abstract
Humans can form an impression of how a new object feels simply by touching its surfaces with the densely innervated skin of the fingertips. Many haptics researchers have recently been working to endow robots with similar levels of haptic intelligence, but these efforts almost always employ hand-crafted features, which are brittle, and concrete tasks, such as object recognition. We applied unsupervised feature learning methods, specifically K-SVD and Spatio-Temporal Hierarchical Matching Pursuit (ST-HMP), to rich multi-modal haptic data from a diverse dataset. We then tested the learned features on 19 more abstract binary classification tasks that center on haptic adjectives such as smooth and squishy. The learned features proved superior to traditional hand-crafted features by a large margin, almost doubling the average F1 score across all adjectives. Additionally, particular exploratory procedures (EPs) and sensor channels were found to support perception of certain haptic adjectives, underlining the need for diverse interactions and multi-modal haptic data.

DOI Project Page [BibTex]



A Novel Texture Rendering Approach for Electrostatic Displays

Fiedler, T., Vardar, Y.

In Proceedings of International Workshop on Haptic and Audio Interaction Design (HAID), Lille, France, March 2019 (inproceedings)

Abstract
Generating realistic texture feelings on tactile displays using data-driven methods has attracted much interest in the last decade. However, the need for large data storage and high transmission rates complicates the use of these methods in future commercial displays. In this paper, we propose a new texture rendering approach that can significantly compress the texture data for electrostatic displays. Using three sample surfaces, we first explain how to record, analyze, and compress the texture data, and render them on a touchscreen. Then, through psychophysical experiments conducted with nineteen participants, we show that the textures can be reproduced with significantly fewer frequency components than the original signal contains without inducing perceptual degradation. Moreover, our results indicate that the achievable degree of compression is affected by the surface properties.
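The compression idea, keeping only the strongest frequency components of a recorded texture signal, can be sketched as follows. The signal here is synthetic, and the recording and electrostatic-actuation details of the display are not modeled; this is only an illustration of the frequency-domain truncation.

```python
import numpy as np

def compress_texture(signal, n_components):
    """Keep only the n strongest frequency components of a texture
    signal and resynthesize it; a sketch of the compression idea,
    not the authors' full rendering pipeline."""
    spectrum = np.fft.rfft(signal)
    keep = np.argsort(np.abs(spectrum))[::-1][:n_components]
    reduced = np.zeros_like(spectrum)
    reduced[keep] = spectrum[keep]       # zero out all weaker components
    return np.fft.irfft(reduced, n=len(signal))

# synthetic "texture": two dominant tones plus weak broadband noise
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
x = np.sin(2 * np.pi * 30 * t) + 0.5 * np.sin(2 * np.pi * 90 * t)
x += 0.01 * np.random.default_rng(0).standard_normal(t.size)
x_hat = compress_texture(x, n_components=2)
rms_err = np.sqrt(np.mean((x - x_hat) ** 2))   # residual is mostly the noise
```

Storing only the retained component indices, magnitudes, and phases rather than the full signal is what yields the large reduction in data size.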

Fiedler19-HAID-Electrostatic [BibTex]


2018


A Feasibility Study of Force Feedback using Computer-Mouse

Kumar, A., Gourishetti, R., Manivannan, M.

26th Conference of the National Academy of Psychology, pages: 1-6, December 2018 (conference)

Abstract
This paper is aimed at measuring the capacity to feed back haptic information by means of a standard computer mouse as a passive input device together with visual feedback on a display. The main objective of this paper is to conduct a psychophysical experiment based on the theoretical hypothesis and compare the results with those of similar studies in the literature. A psychophysical experiment was conducted on eight subjects using the Two-Alternative Forced Choice (2AFC) constant-stimuli stiffness discrimination method. The JND and Weber fraction were calculated for each subject. To analyze the results, we determined the response matrix, psychometric function, JND, and Weber fraction. The average JND for the 8 subjects was found to be 0.14, and the average Weber fraction was 9.54%. The Weber fraction value for our experiment is comparable to those of similar experiments in the literature. Our proposed technique can be used to enhance the user experience in computer gaming, mobile operating software, and virtual training simulators for various clinical operations.
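A common way to extract the JND and Weber fraction from constant-stimuli 2AFC data is to interpolate the 25% and 75% points of the psychometric function. The sketch below uses invented response proportions, not the study's data, and the paper may well have fit its psychometric function differently.

```python
import numpy as np

def jnd_from_2afc(comparison_levels, p_judged_stiffer, reference):
    """Estimate the JND from constant-stimuli 2AFC data as half the
    distance between the 25% and 75% points of the psychometric
    function (linear interpolation). Illustrative values only."""
    levels = np.asarray(comparison_levels, dtype=float)
    p = np.asarray(p_judged_stiffer, dtype=float)   # must be increasing
    x25 = np.interp(0.25, p, levels)
    x75 = np.interp(0.75, p, levels)
    jnd = (x75 - x25) / 2.0
    return jnd, jnd / reference                      # JND, Weber fraction

# hypothetical comparison stiffnesses (arbitrary units) and the
# proportion of trials on which each was judged stiffer than the reference
levels = [0.8, 0.9, 1.0, 1.1, 1.2]
p = [0.05, 0.20, 0.50, 0.80, 0.95]
jnd, weber = jnd_from_2afc(levels, p, reference=1.0)
```

With a reference of 1.0, the Weber fraction equals the JND; in a real experiment the reference stiffness enters the denominator.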

[BibTex]



Assessment Of Atypical Motor Development In Infants Through Toy-Stimulated Play And Center Of Pressure Analysis

Zhao, S., Mohan, M., Torres, W. O., Bogen, D. K., Shofer, F. S., Prosser, L., Loeb, H., Johnson, M. J.

In Proceedings of the Annual Rehabilitation Engineering and Assistive Technology Society of North America (RESNA) Conference, Arlington, USA, July 2018 (inproceedings)

Abstract
There is a need to identify measures and create systems to assess motor development at an early stage. Center of Pressure (CoP) is a quantifiable metric that has been used to investigate postural control in healthy young children [6], children with CP [7], and infants just beginning to sit [8]. It was found that infants born prematurely exhibit different patterns of CoP movement than infants born full-term when assessing developmental impairments relating to postural control [9]. Preterm infants exhibited greater CoP excursions and greater variability in their movements than full-term infants. Our solution, the Play And Neuro-Development Assessment (PANDA) Gym, is a sensorized environment that aims to provide early diagnosis of neuromotor disorders in infants and improve current screening processes by providing quantitative measures rather than subjective ones and by promoting natural play with the stimulus of toys. Previous studies have documented stages in motor development in infants [10, 11], and developmental delays could become more apparent through toy interactions. This study examines the sensitivity of the pressure-sensitive mat subsystem to detect differences in CoP movement patterns for preterm and full-term infants less than 6 months of age with varying risk levels. This study aims to distinguish between typical and atypical motor development through assessment of the CoP data of infants in a natural play environment, in conditions where movement may be further stimulated by the presence of a toy.
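The center of pressure on a pressure-sensing mat is simply the pressure-weighted mean of the sensor coordinates. The grid size and values below are invented for illustration; the PANDA Gym mat's actual layout and calibration are not described here.

```python
import numpy as np

def center_of_pressure(pressure, x_coords, y_coords):
    """Center of pressure of one pressure-mat frame: the pressure-weighted
    mean of the sensor coordinates. pressure is a (rows, cols) array;
    x_coords indexes columns and y_coords indexes rows."""
    total = pressure.sum()
    x_cop = (pressure.sum(axis=0) * x_coords).sum() / total
    y_cop = (pressure.sum(axis=1) * y_coords).sum() / total
    return x_cop, y_cop

# hypothetical 3 x 3 mat with 1 cm sensor spacing; all load on the center
p = np.zeros((3, 3))
p[1, 1] = 5.0
cop = center_of_pressure(p,
                         x_coords=np.array([0.0, 1.0, 2.0]),
                         y_coords=np.array([0.0, 1.0, 2.0]))
```

Tracking this point frame by frame yields the CoP excursion and variability measures discussed in the abstract.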

link (url) [BibTex]



Passive Probing Perception: Effect of Latency in Visual-Haptic Feedback

Gourishetti, R., Isaac, J. H. R., Manivannan, M.

In Proceedings of EuroHaptics, pages: 186-198, Springer, Cham, June 2018 (inproceedings)

DOI [BibTex]



Travelling Ultrasonic Wave Enhances Keyclick Sensation

Gueorguiev, D., Kaci, A., Amberg, M., Giraud, F., Lemaire-Semail, B.

In Haptics: Science, Technology, and Applications, pages: 302-312, Springer International Publishing, Cham, 2018 (inproceedings)

Abstract
A realistic keyclick sensation is a serious challenge for haptic feedback, since vibrotactile rendering faces the limitation of the absence of the contact force experienced on physical buttons. It has been shown that creating a keyclick sensation is possible with stepwise ultrasonic friction modulation. However, the intensity of the sensation is limited by the impedance of the fingertip and by the absence of a lateral force component external to the finger. In our study, we compare this technique to rendering with an ultrasonic travelling wave, which exerts a lateral force on the fingertip. For both techniques, participants were asked to report the detection (or not) of a keyclick during a one-interval forced-choice procedure. In experiment 1, participants could press the surface as many times as they wanted in a given trial. In experiment 2, they were constrained to press only once. The results show a lower perceptual threshold for travelling waves. Moreover, participants pressed fewer times per trial and exerted smaller normal forces on the surface. The subjective quality of the sensation was found to be similar for both techniques. In general, haptic feedback based on travelling ultrasonic waves is promising for applications without lateral motion of the finger.

[BibTex]



Exploring Fingers’ Limitation of Texture Density Perception on Ultrasonic Haptic Displays

Kalantari, F., Gueorguiev, D., Lank, E., Bremard, N., Grisoni, L.

In Haptics: Science, Technology, and Applications, pages: 354-365, Springer International Publishing, Cham, 2018 (inproceedings)

Abstract
Recent research in haptic feedback is motivated by the crucial role that tactile perception plays in everyday touch interactions. In this paper, we describe psychophysical experiments to investigate the perceptual threshold of individual fingers on both the right and left hands of right-handed participants using active dynamic touch for spatial period discrimination of both sinusoidal and square-wave gratings on ultrasonic haptic touchscreens. Both one-finger and multi-finger touch were studied and compared. Our results indicate that a user's finger identity (index finger, middle finger, etc.) significantly affects the perception of both gratings in the case of one-finger exploration. We show that the index finger and thumb are the most sensitive in all conditions, whereas the little finger, followed by the ring finger, is the least sensitive for haptic perception. For multi-finger exploration, the right hand was found to be more sensitive than the left hand for both gratings. Our findings also demonstrate similar perceptual sensitivity between multi-finger exploration and the index finger of a user's right hand (i.e., the dominant hand in our study), while a significant difference was found between single- and multi-finger perceptual sensitivity for the left hand.

[BibTex]


2017


Mechanics of pseudo-haptics with computer mouse

Kumar, A., Gourishetti, R., Manivannan, M.

In Proceedings of the IEEE International Symposium on Haptic, Audio and Visual Environments and Games (HAVE), pages: 1-6, IEEE, December 2017 (inproceedings)

Abstract
Haptic-illusion-based force feedback, known as pseudo-haptics, is used to simulate haptic explorations, such as stiffness, without using a force feedback device. Many computer-mouse-based pseudo-haptics studies have been reported in the literature. However, none has explored the mechanics of pseudo-haptics. The objective of this paper is to derive an analytical relation between the displacement of the mouse and that of a virtual spring, assuming equal work done in both cases (mouse and virtual spring displacement), and to experimentally validate this relation. A psychophysical experiment was conducted on eight subjects to discriminate the stiffness of two virtual springs using a Two-Alternative Forced Choice (2AFC) discrimination task with the Constant Stimuli method to measure the Just Noticeable Difference (JND) for pseudo-stiffness. The mean pseudo-stiffness JND and average Weber fraction were calculated to be 14% and 9.54%, respectively. The resulting JND and Weber fraction were comparable to the psychophysical parameters reported in the literature. Currently, this study simulates the haptic illusion for 1 DOF; however, it can be extended to 6 DOF.
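The abstract does not state the work expression used on the mouse side, so the following is only a plausible reconstruction of the equal-work assumption: treating the mouse motion as performed against a nominal constant force \(F\) and the virtual spring as ideal,

```latex
W_{\text{mouse}} = F\,x_m, \qquad
W_{\text{spring}} = \tfrac{1}{2}\,k\,x_s^{2},
\qquad
F\,x_m = \tfrac{1}{2}\,k\,x_s^{2}
\;\Rightarrow\;
x_s = \sqrt{\frac{2\,F\,x_m}{k}}
```

where \(x_m\) is the mouse displacement, \(x_s\) the rendered virtual-spring displacement, and \(k\) the virtual stiffness. The paper's actual derivation may differ; this sketch only shows how an equal-work constraint couples the two displacements nonlinearly.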

DOI [BibTex]



A Robotic Framework to Overcome Sensory Overload in Children on the Autism Spectrum: A Pilot Study

Javed, H., Burns, R., Jeon, M., Howard, A., Park, C. H.

In Proceedings of the International Conference on Intelligent Robots and Systems (IROS), September 2017 (inproceedings)

Abstract
This paper discusses a novel framework designed to provide sensory stimulation to children with Autism Spectrum Disorder (ASD). The setup consists of multi-sensory stations to stimulate visual/auditory/olfactory/gustatory/tactile/vestibular senses, together with a robotic agent that navigates through each station responding to the different stimuli. We hypothesize that the robot’s responses will help children learn acceptable ways to respond to stimuli that might otherwise trigger sensory overload. Preliminary results from a pilot study conducted to examine the effectiveness of such a setup were encouraging and are described briefly in this text.

[BibTex]



An Interactive Robotic System for Promoting Social Engagement

Burns, R., Javed, H., Jeon, M., Howard, A., Park, C. H.

In Proceedings of the International Conference on Intelligent Robots and Systems (IROS), September 2017 (inproceedings)

Abstract
This abstract (and poster) is a condensed version of Burns' Master's thesis and related journal article. It discusses the use of imitation via robotic motion learning to improve human-robot interaction. It focuses on the preliminary results from a pilot study of 12 subjects. We hypothesized that the robot's use of imitation will increase the user's openness towards engaging with the robot. Post-imitation, experimental subjects displayed a more positive emotional state, had higher instances of mood contagion towards the robot, and interpreted the robot to have a higher level of autonomy than their control group counterparts. These results point to an increased user interest in engagement fueled by personalized imitation during interaction.

[BibTex]



Stiffness Perception during Pinching and Dissection with Teleoperated Haptic Forceps

Ng, C., Zareinia, K., Sun, Q., Kuchenbecker, K. J.

In Proceedings of the International Symposium on Robot and Human Interactive Communication (RO-MAN), pages: 456-463, Lisbon, Portugal, August 2017 (inproceedings)

DOI [BibTex]



Towards quantifying dynamic human-human physical interactions for robot assisted stroke therapy

Mohan, M., Mendonca, R., Johnson, M. J.

In Proceedings of the IEEE International Conference on Rehabilitation Robotics (ICORR), London, UK, July 2017 (inproceedings)

Abstract
Human-Robot Interaction is a prominent field of robotics today. Knowledge of human-human physical interaction can prove vital in creating dynamic physical interactions between humans and robots. Most of the current work in studying this interaction has been from a haptic perspective. Through this paper, we present metrics that can be used to identify whether a physical interaction occurred between two people using kinematics. We present a simple Activity of Daily Living (ADL) task that involves such an interaction. We show that we can use these metrics to successfully identify interactions.

DOI [BibTex]



Design of a Parallel Continuum Manipulator for 6-DOF Fingertip Haptic Display

Young, E. M., Kuchenbecker, K. J.

In Proceedings of the IEEE World Haptics Conference (WHC), pages: 599-604, Munich, Germany, June 2017, Finalist for best poster paper (inproceedings)

Abstract
Despite rapid advancements in the field of fingertip haptics, rendering tactile cues with six degrees of freedom (6 DOF) remains an elusive challenge. In this paper, we investigate the potential of displaying fingertip haptic sensations with a 6-DOF parallel continuum manipulator (PCM) that mounts to the user's index finger and moves a contact platform around the fingertip. Compared to traditional mechanisms composed of rigid links and discrete joints, PCMs have the potential to be strong, dexterous, and compact, but they are also more complicated to design. We define the design space of 6-DOF parallel continuum manipulators and outline a process for refining such a device for fingertip haptic applications. Following extensive simulation, we obtain 12 designs that meet our specifications, construct a manually actuated prototype of one such design, and evaluate the simulation's ability to accurately predict the prototype's motion. Finally, we demonstrate the range of deliverable fingertip tactile cues, including a normal force into the finger and shear forces tangent to the finger at three extreme points on the boundary of the fingertip.

DOI [BibTex]
