Laboratoire d'Informatique de Grenoble
Équipe Ingénierie de l'Interaction Humain-Machine

Increasing input interaction expressiveness using eyes-free multi-finger interaction

2024.

Gauthier Faisandaz

Abstract

In our visually oriented societies, people with visual impairments (PVI) face significant challenges in their daily lives. Assistive technologies (AT) play a crucial role in helping PVI interact with their environment and access content. However, abandonment and non-usage rates for traditional AT are high due to factors such as availability, cost, and social acceptance. For this reason, mainstream touchscreen devices (MTDs) such as smartphones and tablets are increasingly adopted by PVI thanks to their affordability and widespread availability. Moreover, MTDs offer diverse functionalities and can replace several specialized AT.

Nevertheless, MTDs raise accessibility issues because of their reliance on vision. Without vision, it becomes difficult to target elements within a graphical user interface. In such situations, interaction is limited to simple gestures that can be executed without visual feedback, such as taps and directional swipes, which reduces expressiveness. In addition, MTD accessibility for PVI relies on applications known as "screen readers", which audibly present the screen's 2D visual content as a 1D list of elements. Although this allows PVI to access and interact with digital content, each element must be processed one by one, increasing the time and number of interactions needed to make sense of the content. Performing complex actions such as copy-paste becomes even more time-consuming, as users must navigate step by step through menus to find the desired functions.

To address these issues, we explore the possibility of introducing an additional means of interaction: Thumb-To-Finger (TTF) microgestures (µG), aimed at enhancing MTD expressiveness in situations where visual feedback is absent.
For instance, combining four directional swipes with just two TTF µG (the thumb touching the index or middle finger) yields 12 commands: four swipes with the thumb touching no finger, four with the thumb touching the index, and four with the thumb touching the middle finger. Much like keyboard shortcuts on a computer, TTF µG have the potential to streamline interactions for common tasks.

Our research therefore focuses on the feasibility and utility of TTF µG used in conjunction with MTDs and without visual feedback. We first conducted a study assessing the feasibility of 33 TTF µG commonly mentioned in the literature. Our results identified 8 particularly effective TTF µG that can be used concurrently with an MTD in an eyes-free situation.

We then demonstrate, through three practical usage scenarios, how these eight TTF µG can address common challenges faced by PVI when using MTDs. The first scenario revolves around exploring an audio-tactile document: TTF µG trigger commands and provide localized audio feedback without breaking contact with the MTD's surface, thereby making exploration more fluid. The second scenario involves rearranging a grid of icons: here, TTF µG serve as commands giving quick access to copy-paste functionality, thus shortening interaction paths. The third scenario concerns text selection, a cumbersome task for PVI because of the hierarchical menus imposed by accessibility tools: TTF µG enhance MTD expressiveness, simplifying text selection and bypassing menu navigation.

Our results indicate that TTF µG are a promising way to improve the efficiency and accessibility of interaction on MTDs in the absence of visual feedback. By enhancing expressiveness, they reduce the time and number of actions required to perform tasks that PVI typically avoid because of their complexity with default accessibility tools.
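As an illustration of the command-space arithmetic above, the following minimal sketch (hypothetical labels, not the thesis's implementation) enumerates how thumb states multiply the swipe vocabulary:

```python
from itertools import product

# Hypothetical labels for illustration; not taken from the thesis prototype.
swipes = ["up", "down", "left", "right"]    # 4 directional swipes
thumb_states = ["none", "index", "middle"]  # no TTF µG, or thumb on index/middle finger

# Each (thumb state, swipe) pair maps to a distinct command,
# mirroring how modifier keys multiply keyboard shortcuts.
commands = [f"{thumb}+{swipe}" for thumb, swipe in product(thumb_states, swipes)]

print(len(commands))  # 3 thumb states x 4 swipes = 12 commands
```

The same scheme scales multiplicatively: each additional thumb state adds four more commands without requiring any new on-screen gesture.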