Haptic Intelligence


2024


Creating a Haptic Empathetic Robot Animal That Feels Touch and Emotion

Burns, R.

University of Tübingen, Tübingen, Germany, February 2024, Department of Computer Science (phdthesis)

Abstract
Social touch, such as a hug or a poke on the shoulder, is an essential aspect of everyday interaction. Humans use social touch to gain attention, communicate needs, express emotions, and build social bonds. Despite its importance, touch sensing is very limited in most commercially available robots. By endowing robots with social-touch perception, one can unlock a myriad of new interaction possibilities. In this thesis, I present my work on creating a Haptic Empathetic Robot Animal (HERA), a koala-like robot for children with autism. I demonstrate the importance of establishing design guidelines based on one's target audience, which we investigated through interviews with autism specialists. I share our work on creating full-body tactile sensing for the NAO robot using low-cost, do-it-yourself (DIY) methods, and I introduce an approach to model long-term robot emotions using second-order dynamics.
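As an illustration of the emotion-modeling idea mentioned above (not code from the thesis), the minimal sketch below treats one emotion dimension as a damped second-order system driven by affective stimuli; the class name, parameter values, and stimulus sequence are assumptions chosen for clarity.

```python
import numpy as np

# Minimal sketch: one emotion dimension (e.g., valence) modeled as a
# damped second-order system driven by affective "touch" stimuli.
# x'' = (k * (u - x) - c * x') / m, integrated with explicit Euler.
# All parameter names and values are illustrative assumptions.

class SecondOrderEmotion:
    def __init__(self, stiffness=2.0, damping=1.5, mass=1.0, dt=0.1):
        self.k, self.c, self.m, self.dt = stiffness, damping, mass, dt
        self.x = 0.0   # current emotion level (e.g., -1 sad ... +1 happy)
        self.v = 0.0   # rate of change of the emotion level

    def step(self, stimulus):
        """Advance the emotion state by one time step toward the stimulus."""
        accel = (self.k * (stimulus - self.x) - self.c * self.v) / self.m
        self.v += accel * self.dt
        self.x += self.v * self.dt
        return self.x

emotion = SecondOrderEmotion()
# A burst of positive touch followed by no interaction: the state rises
# while touch is present, then decays back toward neutral afterward.
stimuli = [1.0] * 20 + [0.0] * 80
trace = [emotion.step(u) for u in stimuli]
print(f"peak valence: {max(trace):.2f}, final valence: {trace[-1]:.2f}")
```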

Project Page [BibTex]


2023


Gesture-Based Nonverbal Interaction for Exercise Robots

Mohan, M.

University of Tübingen, Tübingen, Germany, October 2023, Department of Computer Science (phdthesis)

Abstract
When teaching or coaching, humans augment their words with carefully timed hand gestures, head and body movements, and facial expressions to provide feedback to their students. Robots, however, rarely utilize these nuanced cues. A minimally supervised social robot equipped with these abilities could support people in exercising, physical therapy, and learning new activities. This thesis examines how the intuitive power of human gestures can be harnessed to enhance human-robot interaction. To address this question, this research explores gesture-based interactions to expand the capabilities of a socially assistive robotic exercise coach, investigating the perspectives of both novice users and exercise-therapy experts. This thesis begins by concentrating on the user's engagement with the robot, analyzing the feasibility of minimally supervised gesture-based interactions. This exploration seeks to establish a framework in which robots can interact with users in a more intuitive and responsive manner. The investigation then shifts its focus toward the professionals who are integral to the success of these innovative technologies: the exercise-therapy experts. Roboticists face the challenge of translating the knowledge of these experts into robotic interactions. We address this challenge by developing a teleoperation algorithm that can enable exercise therapists to create customized gesture-based interactions for a robot. Thus, this thesis lays the groundwork for dynamic gesture-based interactions in minimally supervised environments, with implications for not only exercise-coach robots but also broader applications in human-robot interaction.

Project Page [BibTex]


2022


Multi-Timescale Representation Learning of Human and Robot Haptic Interactions

Richardson, B.

University of Stuttgart, Stuttgart, Germany, December 2022, Faculty of Computer Science, Electrical Engineering and Information Technology (phdthesis)

Abstract
The sense of touch is one of the most crucial components of the human sensory system. It allows us to safely and intelligently interact with the physical objects and environment around us. By simply touching or dexterously manipulating an object, we can quickly infer a multitude of its properties. For more than fifty years, researchers have studied how humans physically explore and form perceptual representations of objects. Some of these works proposed the paradigm through which human haptic exploration is presently understood: humans use a particular set of exploratory procedures to elicit specific semantic attributes from objects. Others have sought to understand how physically measured object properties correspond to human perception of semantic attributes. Few, however, have investigated how specific explorations are perceived. As robots become increasingly advanced and more ubiquitous in daily life, they are beginning to be equipped with haptic sensing capabilities and algorithms for processing and structuring haptic information. Traditional haptics research has so far strongly influenced the introduction of haptic sensation and perception into robots but has not proven sufficient to give robots the necessary tools to become intelligent autonomous agents. The work presented in this thesis seeks to understand how single and sequential haptic interactions are perceived by both humans and robots. In our first study, we depart from the more traditional methods of studying human haptic perception and investigate how the physical sensations felt during single explorations are perceived by individual people. We treat interactions as probability distributions over a haptic feature space and train a model to predict how similarly a pair of surfaces is rated, predicting perceived similarity with a reasonable degree of accuracy. Our novel method also allows us to evaluate how individual people weigh different surface properties when they make perceptual judgments. The method is highly versatile and presents many opportunities for further studies into how humans form perceptual representations of specific explorations. Our next body of work explores how to improve robotic haptic perception of single interactions. We use unsupervised feature-learning methods to derive powerful features from raw robot sensor data and classify robot explorations into numerous haptic semantic property labels that were assigned from human ratings. Additionally, we provide robots with more nuanced perception by learning to predict graded ratings of a subset of properties. Our methods outperform previous attempts that all used hand-crafted features, demonstrating the limitations of such traditional approaches. To push robot haptic perception beyond evaluation of single explorations, our final work introduces and evaluates a method to give robots the ability to accumulate information over many sequential actions; our approach essentially takes advantage of object permanence by conditionally and recursively updating the representation of an object as it is sequentially explored. We implement our method on a robotic gripper platform that performs multiple exploratory procedures on each of many objects. As the robot explores objects with new procedures, it gains confidence in its internal representations and classification of object properties, thus moving closer to the marvelous haptic capabilities of humans and providing a solid foundation for future research in this domain.
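The sequential-exploration idea can be illustrated with a small, hypothetical sketch: a belief over haptic property classes is recursively updated with the output of each new exploratory procedure. The class labels, likelihood values, and Bayesian-style update below are illustrative stand-ins, not the model from the thesis.

```python
import numpy as np

# Minimal sketch: accumulate evidence about an object's property class
# (e.g., "soft", "medium", "hard") over sequential exploratory procedures
# by recursively updating a belief with each procedure's classifier output.
# The likelihood vectors below are stand-ins for per-exploration model outputs.

def update_belief(belief, likelihood):
    """One recursive Bayesian-style update: combine the prior belief with
    the likelihood vector produced by the latest exploration."""
    posterior = belief * likelihood
    return posterior / posterior.sum()

classes = ["soft", "medium", "hard"]
belief = np.ones(len(classes)) / len(classes)   # uniform prior

# Hypothetical classifier outputs from three successive procedures
# (e.g., squeeze, tap, slow slide) on the same object.
exploration_outputs = [
    np.array([0.2, 0.5, 0.3]),
    np.array([0.1, 0.7, 0.2]),
    np.array([0.15, 0.6, 0.25]),
]

for likelihood in exploration_outputs:
    belief = update_belief(belief, likelihood)
    print(dict(zip(classes, np.round(belief, 3))))
# Confidence in the most likely class grows as explorations accumulate.
```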

link (url) Project Page [BibTex]



Understanding the Influence of Moisture on Fingerpad-Surface Interactions

Nam, S.

University of Tübingen, Tübingen, Germany, October 2022, Department of Computer Science (phdthesis)

Abstract
People frequently touch objects with their fingers. The physical deformation of a finger pressing an object surface stimulates mechanoreceptors, resulting in a perceptual experience. Through interactions between perceptual sensations and motor control, humans naturally acquire the ability to manage friction under various contact conditions. Many researchers have advanced our understanding of human fingers to this point, but their complex structure and the variations in friction they experience due to continuously changing contact conditions necessitate additional study. Moisture is a primary factor that influences many aspects of the finger. In particular, sweat excreted from the numerous sweat pores on the fingerprints modifies the finger's material properties and the contact conditions between the finger and a surface. Measuring changes of the finger's moisture over time and in response to external stimuli presents a challenge for researchers, as commercial moisture sensors do not provide continuous measurements. This dissertation investigates the influence of moisture on fingerpad-surface interactions from diverse perspectives. First, we examine the extent to which moisture on the finger contributes to the sensation of stickiness during contact with glass. Second, we investigate the representative material properties of a finger at three distinct moisture levels, since the softness of human skin varies significantly with moisture. The third perspective is friction; we examine how the contact conditions, including the moisture of a finger, determine the available friction force opposing lateral sliding on glass. Fourth, we have invented and prototyped a transparent in vivo moisture sensor for the continuous measurement of finger hydration. In the first part of this dissertation, we explore how the perceptual intensity of light stickiness relates to the physical interaction between the skin and the surface. We conducted a psychophysical experiment in which nine participants actively pressed their index finger on a flat glass plate with a normal force close to 1.5 N and then detached it after a few seconds. A custom-designed apparatus recorded the contact force vector and the finger contact area during each interaction as well as pre- and post-trial finger moisture. After detaching their finger, participants judged the stickiness of the glass using a nine-point scale. We explored how sixteen physical variables derived from the recorded data correlate with each other and with the stickiness judgments of each participant. These analyses indicate that stickiness perception mainly depends on the pre-detachment pressing duration, the time taken for the finger to detach, and the impulse in the normal direction after the normal force changes sign; finger-surface adhesion seems to build with pressing time, causing a larger normal impulse during detachment and thus a more intense stickiness sensation. We additionally found a strong between-subjects correlation between maximum real contact area and peak pull-off force, as well as between finger moisture and impulse. When a fingerpad presses into a hard surface, the development of the contact area depends on the pressing force and speed. Importantly, it also varies with the finger's moisture, presumably because hydration changes the tissue's material properties. 
Therefore, for the second part of this dissertation, we collected data from one finger repeatedly pressing a glass plate under three moisture conditions, and we constructed a finite element model that we optimized to simulate the same three scenarios. We controlled the moisture of the subject's finger to be dry, natural, or moist and recorded 15 pressing trials in each condition. The measurements include normal force over time plus finger-contact images that are processed to yield gross contact area. We defined the axially symmetric 3D model's lumped parameters to include an SLS-Kelvin model (spring in series with parallel spring and damper) for the bulk tissue, plus an elastic epidermal layer. Particle swarm optimization was used to find the parameter values that cause the simulation to best match the trials recorded in each moisture condition. The results show that the softness of the bulk tissue reduces as the finger becomes more hydrated. The epidermis of the moist finger model is softest, while the natural finger model has the highest viscosity. In the third part of this dissertation, we focused on friction between the fingerpad and the surface. The magnitude of finger-surface friction available at the onset of full slip is crucial for understanding how the human hand can grip and manipulate objects. Related studies revealed the significance of moisture and contact time in enhancing friction. Recent research additionally indicated that surface temperature may also affect friction. However, previously reported friction coefficients have been measured only in dynamic contact conditions, where the finger is already sliding across the surface. In this study, we repeatedly measured the initial friction before full slip under eight contact conditions with low and high finger moisture, pressing time, and surface temperature. Moisture and pressing time both independently increased finger-surface friction across our population of twelve participants, and the effect of surface temperature depended on the contact conditions. Furthermore, detailed analysis of the recorded measurements indicates that micro stick-slip during the partial-slip phase contributes to enhanced friction. For the fourth and final part of this dissertation, we designed a transparent moisture sensor for continuous measurement of fingerpad hydration. Because various stimuli cause the sweat pores on fingerprints to excrete sweat, many researchers want to quantify the flow and assess its impact on the formation of the contact area. Unfortunately, the most popular sensor for skin hydration is opaque and does not offer continuous measurements. Our capacitive moisture sensor consists of a pair of inter-digital electrodes covered by an insulating layer, enabling impedance measurements across a wide frequency range. This proposed sensor is made entirely of transparent materials, which allows us to simultaneously measure the finger's contact area. Electrochemical impedance spectroscopy identifies the equivalent electrical circuit and the electrical component parameters that are affected by the amount of moisture present on the surface of the sensor. Most notably, the impedance at 1 kHz seems to best reflect the relative amount of sweat.
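As a hedged illustration of the lumped-parameter modeling described above, the sketch below simulates an SLS-Kelvin element (a spring in series with a parallel spring and damper) under a ramp-and-hold indentation. The parameter values are invented, and the particle swarm optimization used in the thesis to identify such parameters is not reproduced here.

```python
import numpy as np

# Minimal sketch of an SLS-Kelvin lumped element (spring k1 in series with
# a parallel spring k2 and damper c) responding to a ramp-and-hold
# indentation, as one ingredient of a fingertip contact model.
# Parameter values are illustrative assumptions.

k1, k2, c = 1.2, 0.6, 2.0        # N/mm, N/mm, N*s/mm (assumed values)
dt, t_end = 0.001, 3.0
t = np.arange(0.0, t_end, dt)

# Indentation: ramp to 1 mm over 0.5 s, then hold.
x = np.minimum(t / 0.5, 1.0)

x2 = 0.0                          # deflection of the Kelvin (spring + damper) part
force = np.zeros_like(t)
for i, xi in enumerate(x):
    # Force balance: k1*(x - x2) = k2*x2 + c*dx2/dt
    dx2 = (k1 * (xi - x2) - k2 * x2) / c
    x2 += dx2 * dt
    force[i] = k1 * (xi - x2)

print(f"peak force: {force.max():.3f} N, relaxed force: {force[-1]:.3f} N")
# The force peaks near the end of the ramp and then relaxes toward the
# equilibrium value k1*k2/(k1+k2) * 1 mm as the damper lets the Kelvin
# element creep.
```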

DOI Project Page [BibTex]


2021


HuggieBot: An Interactive Hugging Robot With Visual and Haptic Perception

Block, A. E.

ETH Zürich, Zürich, August 2021, Department of Computer Science (phdthesis)

Abstract
Hugs are one of the first forms of contact and affection humans experience. Receiving a hug is one of the best ways to feel socially supported, and the lack of social touch can have severe adverse effects on an individual's well-being. Due to the prevalence and health benefits of hugging, roboticists are interested in creating robots that can hug humans as seamlessly as humans hug other humans. However, hugs are complex affective interactions that need to adapt to the height, body shape, and preferences of the hugging partner, and they often include intra-hug gestures like squeezes. This dissertation aims to create a series of hugging robots that use visual and haptic perception to provide enjoyable interactive hugs. Each of the four presented HuggieBot versions is evaluated by measuring how users emotionally and behaviorally respond to hugging it; HuggieBot 4.0 is explicitly compared to a human hugging partner using physiological measures. Building on research both within and outside of human-robot interaction (HRI), this thesis proposes eleven tenets of natural and enjoyable robotic hugging. These tenets were iteratively crafted through a design process combining user feedback and experimenter observation, and they were evaluated through user studies. A good hugging robot should (1) be soft, (2) be warm, (3) be human-sized, (4) autonomously invite the user for a hug when it detects someone in its personal space, and then it should wait for the user to begin walking toward it before closing its arms to ensure a consensual and synchronous hugging experience. It should also (5) adjust its embrace to the user's size and position, (6) reliably release when the user wants to end the hug, and (7) perceive the user's height and adapt its arm positions accordingly to comfortably fit around the user at appropriate body locations. Finally, a hugging robot should (8) accurately detect and classify gestures applied to its torso in real time, regardless of the user's hand placement, (9) respond quickly to their intra-hug gestures, (10) adopt a gesture paradigm that blends user preferences with slight variety and spontaneity, and (11) occasionally provide unprompted, proactive affective social touch to the user through intra-hug gestures. We believe these eleven tenets are essential to delivering high-quality robot hugs. Their presence results in a hug that pleases the user, and their absence results in a hug that is likely to be inadequate. We present these tenets as guidelines for future hugging robot creators to follow when designing new hugging robots to ensure user acceptance. We tested the four versions of HuggieBot through six user studies. First, we analyzed data collected in a previous study with a modified Willow Garage Personal Robot 2 (PR2) to evaluate human responses to different robot physical characteristics and hugging behaviors. Participants experienced and evaluated twelve hugs with the robot, divided into three randomly ordered trials that focused on physical robot characteristics (single factor, three levels) and nine randomly ordered trials with low, medium, and high hug pressure and duration (two factors, three levels each). Second, we created an entirely new robotic platform, HuggieBot 2.0, according to our first six tenets. The new platform features a soft, warm, inflated body (HuggieChest) and uses visual and haptic sensing to deliver closed-loop hugging. 
We first verified the outward appeal of this platform compared to the previous PR2-based HuggieBot 1.0 via an online video-watching study involving 117 users. We then conducted an in-person experiment in which 32 users each exchanged eight hugs with HuggieBot 2.0, experiencing all combinations of visual hug initiation, haptic sizing, and haptic releasing. We then refine the original fourth tenet (visually perceive its user) and present the remaining five tenets for designing interactive hugging robots; we validate the full list of eleven tenets through more in-person studies with our custom robot. To enable perceptive and pleasing autonomous robot behavior, we investigated robot responses to four human intra-hug gestures: holding, rubbing, patting, and squeezing. The robot's inflated torso's microphone and pressure sensor collected data of 32 people repeatedly demonstrating these gestures, which were used to develop a perceptual algorithm that classifies user actions with 88% accuracy. From user preferences, we created a probabilistic behavior algorithm that chooses robot responses in real time. We implemented improvements to the robot platform to create a third version of our robot, HuggieBot 3.0. We then validated its gesture perception system and behavior algorithm in a fifth user study with 16 users. Finally, we refined the quality and comfort of the embrace by adjusting the joint torques and joint angles of the closed pose position, we further improved the robot's visual perception to detect changes in user approach, we upgraded the robot's response to users who do not press on its back, and we had the robot respond to all intra-hug gestures with squeezes to create our final version of the robotic platform, HuggieBot 4.0. In our sixth user study, we investigated the emotional and physiological effects of hugging a robot compared to the effects of hugging a friendly but unfamiliar person. We continuously monitored participant heart rate and collected saliva samples at seven time points across the 3.5-hour study to measure the temporal evolution of cortisol and oxytocin. We used an adapted Trier Social Stress Test (TSST) protocol to reliably and ethically induce stress in the participants. They then experienced one of five different hug intervention methods before all interacting with HuggieBot 4.0. The results of these six user studies validated our eleven hugging tenets and informed the iterative design of HuggieBot. We see that users enjoy robot softness, robot warmth, and being physically squeezed by the robot. Users dislike being released too soon from a hug and equally dislike being held by the robot for too long. Adding haptic reactivity definitively improves user perception of a hugging robot; the robot's responses and proactive intra-hug gestures were greatly enjoyed. In our last study, we learned that HuggieBot can positively affect users on a physiological level and is somewhat comparable to hugging a person. Participants have more favorable opinions about hugging robots after prolonged interaction with HuggieBot in all of our research studies.
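The response-selection idea can be sketched as a preference-weighted probabilistic policy over classified intra-hug gestures. The gesture labels below match the four studied gestures, but the response options and weights are illustrative assumptions, not the behavior algorithm from the thesis.

```python
import random

# Minimal sketch: choose a robot response to a classified intra-hug gesture
# by sampling from a preference-weighted distribution, so the robot mostly
# does what users like but keeps some variety. Response options and weights
# are illustrative.

RESPONSE_POLICY = {
    "hold":    {"hold": 0.6, "squeeze": 0.3, "rub": 0.1},
    "rub":     {"rub": 0.5, "squeeze": 0.3, "pat": 0.2},
    "pat":     {"pat": 0.5, "squeeze": 0.3, "rub": 0.2},
    "squeeze": {"squeeze": 0.7, "hold": 0.2, "rub": 0.1},
}

def choose_response(detected_gesture: str) -> str:
    """Sample a robot response for the detected user gesture."""
    policy = RESPONSE_POLICY[detected_gesture]
    responses, weights = zip(*policy.items())
    return random.choices(responses, weights=weights, k=1)[0]

for gesture in ["squeeze", "pat", "hold"]:
    print(gesture, "->", choose_response(gesture))
```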

DOI Project Page [BibTex]


2020


Delivering Expressive and Personalized Fingertip Tactile Cues

Young, E. M.

University of Pennsylvania, Philadelphia, PA, December 2020, Department of Mechanical Engineering and Applied Mechanics (phdthesis)

Abstract
Wearable haptic devices have seen growing interest in recent years, but providing realistic tactile feedback is not a challenge that is soon to be solved. Daily interactions with physical objects elicit complex sensations at the fingertips. Furthermore, human fingertips exhibit a broad range of physical dimensions and perceptive abilities, adding increased complexity to the task of simulating haptic interactions in a compelling manner. However, as the applications of wearable haptic feedback grow, concerns of wearability and generalizability often persuade tactile device designers to simplify the complexities associated with rendering realistic haptic sensations. As such, wearable devices tend to be optimized for particular uses and average users, rendering only the most salient dimensions of tactile feedback for a given task and assuming all users interpret the feedback in a similar fashion. We propose that providing more realistic haptic feedback will require in-depth examinations of higher-dimensional tactile cues and personalization of these cues for individual users. In this thesis, we aim to provide hardware- and software-based solutions for rendering more expressive and personalized tactile cues to the fingertip. We first explore the idea of rendering six-degree-of-freedom (6-DOF) tactile fingertip feedback via a wearable device, such that any possible fingertip interaction with a flat surface can be simulated. We highlight the potential of parallel continuum manipulators (PCMs) to meet the requirements of such a device, and we refine the design of a PCM for providing fingertip tactile cues. We construct a manually actuated prototype to validate the concept, and then continue to develop a motorized version, named the Fingertip Puppeteer, or Fuppeteer for short. Various error reduction techniques are presented, and the resulting device is evaluated by analyzing system responses to step inputs, measuring forces rendered to a biomimetic finger sensor, and comparing intended sensations to the perceived sensations of twenty-four participants in a human-subject study. Once the functionality of the Fuppeteer is validated, we begin to explore how the device can be used to broaden our understanding of higher-dimensional tactile feedback. One such application is using the 6-DOF device to simulate different lower-dimensional devices. We evaluate 1-, 3-, and 6-DOF tactile feedback during shape discrimination and mass discrimination in a virtual environment, also comparing to interactions with real objects. Results from 20 naive study participants show that higher-dimensional tactile feedback may indeed allow completion of a wider range of virtual tasks, but that feedback dimensionality surprisingly does not greatly affect the exploratory techniques employed by the user. To explore alternative approaches to improving tactile rendering in scenarios where low-dimensional tactile feedback is appropriate, we then investigate the idea of personalizing feedback for a particular user. We present two software-based approaches to personalize an existing data-driven haptic rendering algorithm for fingertips of different sizes. We evaluate our algorithms in the rendering of pre-recorded tactile sensations onto rubber casts of six different fingertips as well as onto the real fingertips of 13 human participants, all via a 3-DOF wearable device. Results show that both personalization approaches significantly reduced force error magnitudes and improved realism ratings.
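Purely as a hypothetical illustration of what "personalizing" a data-driven rendering model could look like (it is not one of the two approaches from the thesis), the sketch below rescales a reference force-displacement mapping using a single per-user calibration measurement.

```python
import numpy as np

# Hypothetical personalization sketch: a reference force-displacement
# mapping, recorded on one fingertip, is rescaled so that it passes
# through one measured (displacement, force) calibration point for the
# current user. All values and the power-law mapping are invented.

ref_displacement = np.linspace(0.0, 2.0, 50)            # mm
ref_force = 0.8 * ref_displacement ** 1.5               # N, reference fingertip

def personalize(calib_displacement, calib_force):
    """Scale the reference mapping using one per-user calibration point."""
    ref_at_calib = 0.8 * calib_displacement ** 1.5
    gain = calib_force / ref_at_calib
    return ref_force * gain

# A user whose fingertip is measured to be stiffer at 1.5 mm indentation:
user_force = personalize(calib_displacement=1.5, calib_force=2.0)
print(f"rendered force at 2 mm: reference {ref_force[-1]:.2f} N, "
      f"personalized {user_force[-1]:.2f} N")
```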

Project Page [BibTex]



Modulating Physical Interactions in Human-Assistive Technologies

Hu, S.

University of Pennsylvania, Philadelphia, PA, August 2020, Department of Mechanical Engineering and Applied Mechanics (phdthesis)

Abstract
Many mechanical devices and robots operate in home environments, and they offer rich experiences and valuable functionalities for human users. When these devices interact physically with humans, additional care has to be taken in both hardware and software design to ensure that the robots provide safe and meaningful interactions. It is advantageous for the robots to be customizable so that users can tailor them to their specific needs. There are many robot platforms that strive toward these goals, but the most successful robots in our world are either separated from humans (such as in factories and warehouses) or occupy the same space as humans but do not offer physical interactions (such as cleaning robots). In this thesis, we envision a suite of assistive robotic devices that assist people in their daily physical tasks. Specifically, we begin with a hybrid force display that combines a cable, a brake, and a motor, which offers safe and powerful force output with a large workspace. Virtual haptic elements, including free space, constant force, springs, and dampers, can be simulated by this device. We then adapt the hybrid mechanism and develop the Gait Propulsion Trainer (GPT) for stroke rehabilitation, where we aim to reduce propulsion asymmetry by applying resistance at the user’s pelvis during the unilateral-stance phase of gait. Sensors underneath the user’s shoes and a wireless communication module are added to precisely control the timing of the resistance force. To address the effort of parameter tuning in determining the optimal training scheme, we then develop a learning-from-demonstration (LfD) framework where robot behavior can be obtained from data, thus bypassing some of the tuning effort while enabling customization and generalization for different task situations. This LfD framework is evaluated in simulation and in a user study, and the results show improved objective performance and human perception of the robot. Finally, we apply the LfD framework in an upper-limb therapy setting, where the robot directly learns the force output from a therapist when supporting stroke survivors in various physical exercises. Six stroke survivors and an occupational therapist provided demonstrations and tested the autonomous robot behaviors in a user study, and we obtain preliminary insights toward making the robot more intuitive and more effective for both therapists and clients of different impairment levels. This thesis thus considers both hardware and software design for robotic platforms, and we explore both direct and indirect force modulation for human-assistive technologies.
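To give a flavor of learning assistive behavior from demonstration data (this is not the thesis's LfD framework), the following sketch fits a simple kernel regressor that maps a normalized exercise phase to an assistive force using synthetic "therapist" demonstrations.

```python
import numpy as np

# Minimal learning-from-demonstration sketch: learn an assistive-force
# profile as a function of a 1-D task variable (e.g., normalized exercise
# phase) from demonstrations, using kernel (Nadaraya-Watson) regression.
# The data, variables, and regression choice are illustrative assumptions.

rng = np.random.default_rng(0)
phase_demo = rng.uniform(0.0, 1.0, size=200)                  # demonstrated phases
force_demo = 5.0 * np.sin(np.pi * phase_demo) + rng.normal(0, 0.3, 200)

def predict_force(phase_query, bandwidth=0.05):
    """Predict the assistive force at the queried phase from demonstrations."""
    weights = np.exp(-0.5 * ((phase_query - phase_demo) / bandwidth) ** 2)
    return float(np.sum(weights * force_demo) / np.sum(weights))

for p in (0.1, 0.5, 0.9):
    print(f"phase {p:.1f}: predicted assist {predict_force(p):.2f} N")
```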

Hu20-PHDD-Modulating Project Page [BibTex]


2018


Instrumentation, Data, and Algorithms for Visually Understanding Haptic Surface Properties

Burka, A. L.

University of Pennsylvania, Philadelphia, USA, August 2018, Department of Electrical and Systems Engineering (phdthesis)

Abstract
Autonomous robots need to efficiently walk over varied surfaces and grasp diverse objects. We hypothesize that the association between how such surfaces look and how they physically feel during contact can be learned from a database of matched haptic and visual data recorded from various end-effectors' interactions with hundreds of real-world surfaces. Testing this hypothesis required the creation of a new multimodal sensing apparatus, the collection of a large multimodal dataset, and the development of a machine-learning pipeline. This thesis begins by describing the design and construction of the Portable Robotic Optical/Tactile ObservatioN PACKage (PROTONPACK, or Proton for short), an untethered handheld sensing device that emulates the capabilities of the human senses of vision and touch. Its sensory modalities include RGBD vision, egomotion, contact force, and contact vibration. Three interchangeable end-effectors (a steel tooling ball, an OptoForce three-axis force sensor, and a SynTouch BioTac artificial fingertip) allow for different material properties at the contact point and provide additional tactile data. We then detail the calibration process for the motion and force sensing systems, as well as several proof-of-concept surface discrimination experiments that demonstrate the reliability of the device and the utility of the data it collects. This thesis then presents a large-scale dataset of multimodal surface interaction recordings, including 357 unique surfaces such as furniture, fabrics, outdoor fixtures, and items from several private and public material sample collections. Each surface was touched with one, two, or three end-effectors, comprising approximately one minute per end-effector of tapping and dragging at various forces and speeds. We hope that the larger community of robotics researchers will find broad applications for the published dataset. Lastly, we demonstrate an algorithm that learns to estimate haptic surface properties given visual input. Surfaces were rated on hardness, roughness, stickiness, and temperature by the human experimenter and by a pool of purely visual observers. We then trained an algorithm to perform the same task as well as to infer quantitative properties calculated from the haptic data. Overall, the task of predicting haptic properties from vision alone proved difficult for both humans and computers, but a hybrid algorithm using a deep neural network and a support vector machine achieved correlations between expected and actual regression outputs of approximately ρ = 0.3 to 0.5 on previously unseen surfaces.
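A hedged sketch of the vision-to-haptics regression idea appears below: visual feature vectors (random stand-ins here for deep-network image features) are mapped to a haptic property rating with a support vector regressor, and performance is summarized as the rank correlation on held-out surfaces. The data, features, and hyperparameters are synthetic and illustrative, not the thesis's pipeline.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

# Minimal sketch: regress a haptic property rating (e.g., roughness) from
# per-surface visual feature vectors and report the rank correlation
# between predicted and measured ratings on held-out surfaces.
# The 357 rows only mirror the dataset's surface count; all values are synthetic.

rng = np.random.default_rng(1)
features = rng.normal(size=(357, 64))                   # one row per surface
roughness = features[:, :5].sum(axis=1) + rng.normal(0, 1.0, 357)

X_train, X_test, y_train, y_test = train_test_split(
    features, roughness, test_size=0.25, random_state=1)

model = SVR(kernel="rbf", C=1.0).fit(X_train, y_train)
rho, _ = spearmanr(model.predict(X_test), y_test)
print(f"held-out rank correlation: rho = {rho:.2f}")
```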

Project Page [BibTex]


2017


Design and Evaluation of Interactive Hand-Clapping Robots

Fitter, N. T.

University of Pennsylvania, August 2017, Department of Mechanical Engineering and Applied Mechanics (phdthesis)

Abstract
Human friends commonly connect through handshakes and high fives, and children around the world rejoice at hand-clapping games. As robots enter everyday human spaces, they will have the opportunity to join in such physical interactions, but few current robots are intended to touch humans. How should robots move and react in playful hand-to-hand interactions with people? We conducted research in four main areas to address this design challenge. First, we implemented and tested an initial hand-clapping robotic system. This effort began by recording sensor data from people performing a variety of hand-clapping activities; the resulting accelerometer and position data taught us how to design appropriate hand-clapping robot motion and logic. Implementation on a Rethink Robotics Baxter Research Robot demonstrated that a robot could move like our human participants and reliably detect hand impacts through its wrist-mounted accelerometers. N = 20 study participants clapped hands with differently configured versions of this robot in random order: the robot's facial animation, physical reactivity, arm stiffness, and clapping tempo all significantly affected how users perceived the robot. We next sought to create and evaluate more sophisticated robot hand-clapping behaviors. Data from people performing interactive clapping tasks at increasing and decreasing tempos helped us propose prospective timing models and implement adaptive-tempo Baxter play. In a subsequent experiment that involved N = 20 users, a mischievous Baxter was equipped with the top-performing tempo adaptation model and chose to play cooperatively or asynchronously with its human partner. Although a few participants reacted positively to Baxter's mischief, users overwhelmingly preferred a synchronous, cooperative robot. Third, we set up and conducted a human-robot interaction experiment more similar to everyday human-human hand-clapping interactions. A machine learning pipeline trained on inertial data from human motions demonstrated that linear support vector machines (SVMs) can classify a new person's hand-clapping actions with an accuracy of about 95%. This technique succeeded for both hand- and wrist-mounted inertial sensors, enabling people to teach the Baxter robot new hand-clapping games. Evaluation of various two-handed clapping play activities by N = 24 users showed that learning games from Baxter was significantly easier than teaching Baxter games, but that the teaching role caused people to consider more teamwork aspects of the gameplay. Finally, to broaden the scope of these interactions, we began exploring applications of Baxter in socially assistive robotics. Using many of the same sensing and actuation strategies, we developed a set of six playful hand-to-hand contact-based exercise interactions to be jointly executed between a person and Baxter, along with two similar non-contact games. A proof-of-concept experiment using these exercise games enrolled N = 20 young adults and N = 14 healthy adults over age 53. The results demonstrated that people are willing and motivated to interact with the robot in this way and that different games promote unique physical and cognitive exercise effects. Overall, this research aims to help shape design processes for socially relevant physical human-robot interaction and reveal new opportunities for socially assistive robotics.
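As an illustrative sketch of the gesture-classification step (with synthetic data, not the thesis's actual pipeline), the code below windows a 3-axis accelerometer stream, computes simple per-window statistics, and trains a linear SVM to label each window with a hand-clapping action.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Minimal sketch: classify hand-clapping actions from windowed inertial
# data with a linear SVM. The class names, window length, and features
# are illustrative assumptions; the data are synthetic.

rng = np.random.default_rng(2)

def make_windows(n, offset):
    """Generate n fake 50-sample, 3-axis windows around a class-specific offset."""
    return rng.normal(loc=offset, scale=1.0, size=(n, 50, 3))

classes = {"clap": 0.0, "double_clap": 1.0, "high_five": 2.0}
windows = np.concatenate([make_windows(100, off) for off in classes.values()])
labels = np.repeat(list(classes.keys()), 100)

# Features: mean, standard deviation, and peak magnitude per axis.
features = np.hstack([windows.mean(axis=1),
                      windows.std(axis=1),
                      np.abs(windows).max(axis=1)])

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=2, stratify=labels)
clf = LinearSVC(dual=False).fit(X_train, y_train)
print(f"accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```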

[BibTex]
