The increasing availability of on-line resources and the widespread practice of storing data over the internet raise the problem of their accessibility for visually impaired people.
A translation from the visual domain to the available modalities is therefore necessary to study whether such access is possible. However, the translation of information from vision to touch is necessarily impaired due to the superiority of vision during the acquisition process. Yet, compromises exist, as visual information can be simplified or sketched: a picture can become a map, an object can become a geometrical shape. Under some circumstances, and with a reasonable loss of generality, touch can substitute for vision. In particular, when touch substitutes for vision, data can be differentiated by adding a further dimension to the tactile feedback, i.e. extending tactile feedback to three dimensions instead of two. This mode has been chosen because it mimics our natural way of following object profiles with our fingers. Specifically, regardless of whether a hand lying on an object is moving or not, our tactile and proprioceptive systems are both stimulated and tell us something about which object we are manipulating and what its shape and size might be.
The goal of this talk is to describe how to exploit tactile stimulation to render digital information non-visually, so that cognitive maps associated with this information can be efficiently elicited in visually impaired persons. In particular, the focus is on delivering geometrical information in a learning scenario.
Moreover, completely blind interaction with a virtual environment in a learning scenario has been little investigated, because visually impaired subjects are often passive agents of exercises with fixed environmental constraints. For this reason, during the talk I will provide my personal answer to the question: can visually impaired people manipulate dynamic virtual content through touch? This process is much more challenging than merely exploring and learning virtual content, but at the same time it leads to a more conscious and dynamic construction of the spatial understanding of an environment during tactile exploration.
Biography: Mariacarla Memeo is a Biomedical Engineer with a specialization in Electronics. She received her PhD in Bioengineering and Robotics at the University of Genoa in 2017 with a thesis titled “Interactive and effective representation of digital content through touch using local tactile feedback”. Its main aim was to produce and analyze the amount and quality of information to be delivered through touch, in a non-visual scenario, for guidance and education purposes.
Currently, she works in the Cognition, Motion and Neuroscience Unit (C’MoN) at the Italian Institute of Technology, Genoa. Her work focuses on the extraction of significant biomechanical features involved in the recognition of human intentions and on the realization of experimental naturalistic scenarios, i.e. environments with a limited number of restrictions that can be considered ecological and realistic. To achieve these aims, her current approach is to integrate real-time analysis of kinematic and behavioral variables with event-related synchronization of different measurement systems (motion capture, EMG, and TMS).