I am a research scientist in the Haptic Intelligence Department, working on a variety of projects related to robotic manipulation, haptic user interfaces, and human behavior analysis. As a senior member of the group, I also provide technical and scientific support to other members of our department.
I completed my undergraduate degree (in Cybernetics and Control Systems) and Master's degree (in Engineering and Information Systems) at the University of Reading (UK, close to London), under the guidance of Prof. William Harwin, Prof. Peter Kyberd, and Prof. Kevin Warwick. My PhD, on synthesizing human-like motion for humanoid robots using non-linear control approaches, was completed at the University of Bristol (UK) and the Bristol Robotics Laboratory with Dr. Guido Herrmann. The majority of my thesis is now available as a book published by Springer.
Following my PhD, I worked as a postdoc at the Bristol Robotics Laboratory and the University of the West of England, investigating haptics for use in medical robotics. In 2014 I began a postdoc at Yale University in the USA, in the lab of Prof. Aaron Dollar. There I worked on a variety of projects related to upper-limb prosthetics, robot manipulator design, and shape-changing haptic devices for navigation. I later became an Associate Research Scientist in that lab.
Workshop paper (6 pages) presented at the CHI 2019 Workshop on Hacking Blind Navigation, May 2019 (misc). Accepted.
Since the 1960s, technologists have worked to develop systems that facilitate independent
navigation by vision-impaired (VI) pedestrians. These devices vary in terms of conveyed information
and feedback modality. Unfortunately, many such prototypes never progress beyond laboratory
testing. Conversely, smartphone-based navigation systems for sighted pedestrians have grown in
robustness and capabilities, to the point of now being ubiquitous.
How can we leverage the success of sighted navigation technology, which is driven by a larger global
market, as a way to progress VI navigation systems? We believe one possibility is to make common
devices that benefit both VI and sighted individuals, by providing information in a way that does not
distract either user from their task or environment. To this end, we have developed physical
interfaces that eschew visual, auditory, and vibratory feedback, relying instead on the natural
human ability to perceive the shape of a handheld object.
In Proceedings of the International Conference on Robotics and Automation (ICRA), pages: 7214-7220, Montreal, Canada, May 2019 (inproceedings)
In this paper we present a novel method of categorizing naturalistic human arm motions during activities of daily living using clustering techniques. While many current approaches attempt to define all arm motions using heuristic interpretation, or a combination of several abstract motion primitives, our unsupervised approach generates a hierarchical description of natural human motion with well-recognized groups. Reliable recommendation of a subset of motions for task achievement is beneficial to various fields, such as robotic and semi-autonomous prosthetic device applications. The proposed method makes use of well-known techniques such as dynamic time warping (DTW) to obtain a divergence measure between motion segments, DTW barycenter averaging (DBA) to compute a motion average, and Ward's distance criterion to build the hierarchical tree. The clusters that emerge summarize the variety of recorded motions into the following general tasks: reach-to-front, transfer-box, drinking from vessel, on-table motion, turning a key or door knob, and reach-to-back pocket. The clustering methodology is justified by comparison against an alternative measure of divergence using Bézier coefficients and K-medoids clustering.
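The core pipeline from the abstract (a DTW divergence measure between motion segments, followed by hierarchical clustering under Ward's criterion) can be sketched in a few lines of Python. This is a minimal illustration with hypothetical toy trajectories, not the paper's implementation: the segment data, lengths, and cluster count are all invented for the example, and the DBA averaging step is omitted.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance
    between two 1-D trajectories of possibly different lengths."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy stand-ins for recorded arm-motion segments (hypothetical data):
# two bell-shaped "reach" profiles and two monotonic "transfer" profiles.
segments = [
    np.sin(np.linspace(0, np.pi, 50)),
    np.sin(np.linspace(0, np.pi, 60)) * 1.1,
    np.linspace(0, 1, 55),
    np.linspace(0, 1, 45) * 0.9,
]

# Pairwise DTW divergence matrix, converted to condensed form for scipy.
n = len(segments)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = dtw_distance(segments[i], segments[j])

# Hierarchical tree with Ward's criterion, cut into two clusters.
Z = linkage(squareform(dist), method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
# The two bell-shaped segments and the two monotonic segments
# end up in separate clusters.
```

Note that scipy's Ward linkage formally assumes Euclidean distances; applying it to a DTW divergence matrix, as here, is a common pragmatic choice rather than a strict requirement of the method.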