I am a research scientist in the Haptic Intelligence Department, working on a variety of projects related to haptic user interfaces, robotic manipulators, and human behavior analysis. As a senior member of the group, I also provide technical and scientific support to other members of our department.
I completed my undergraduate degree (in Cybernetics and Control Systems) and my Master's degree (in Engineering and Information Systems) at the University of Reading (UK, close to London), under the guidance of Prof. William Harwin, Prof. Peter Kyberd, and Prof. Kevin Warwick. My PhD, on synthesizing human-like motion for humanoid robots using non-linear control approaches, was completed at the University of Bristol (UK) and the Bristol Robotics Laboratory with Dr. Guido Herrmann. The majority of my thesis is now available as a book published by Springer.
Following my PhD, I worked as a postdoc at the Bristol Robotics Laboratory and the University of the West of England, investigating haptics for use in medical robotics. In 2014 I began a postdoc at Yale University in the USA, in the lab of Prof. Aaron Dollar, where I worked on a variety of projects related to upper-limb prosthetics, robot manipulator design, and shape-changing haptic devices for navigation. I later became an Associate Research Scientist in that lab.
In CHI 2019 Workshop on Hacking Blind Navigation, 2019 (inproceedings)
Since the 1960s, technologists have worked to develop systems that facilitate independent navigation by vision-impaired (VI) pedestrians. These devices vary in the information they convey and the feedback modality they use. Unfortunately, many such prototypes never progress beyond laboratory testing. Conversely, smartphone-based navigation systems for sighted pedestrians have grown in robustness and capability, to the point of now being ubiquitous.

How can we leverage the success of sighted navigation technology, which is driven by a larger global market, to advance VI navigation systems? We believe one possibility is to build common devices that benefit both VI and sighted individuals by providing information in a way that does not distract either user from their task or environment. To this end, we have developed physical interfaces that eschew visual, auditory, and vibratory feedback, instead relying on the natural human ability to perceive the shape of a handheld object.