Laboratoire d'Informatique de Grenoble
Équipe Ingénierie de l'Interaction Humain-Machine

Polymodal Menus: A Model-based Approach for Designing Multimodal Adaptive Menus for Small Screens

In Proceedings of the 9th ACM SIGCHI Symposium on Engineering Interactive Computing Systems, pages 1-19, 2017.

Sarah Bouzit, Gaëlle Calvary, Denis Chene, Jean Vanderdonckt

June 26-29, 2017 - Lisbon, Portugal

Abstract

This paper presents a model-based approach for designing Polymodal Menus, a new type of multimodal adaptive menu for small-screen graphical user interfaces in which item selection and adaptivity are responsive to more than one interaction modality: a menu item can be selected graphically, tactilely, vocally, gesturally, or by any combination of these modalities. The prediction window, which presents the menu items predicted as most likely through assignment, equivalence, or redundancy of modalities, is made equally adaptive. For this purpose, an adaptive menu model maintains the most predictable menu items as computed by various prediction methods. This model is exploited throughout the steps defined in a new Adaptivity Design Space based on the Perception-Decision-Action cycle from cognitive psychology. A user experiment compares four conditions of Polymodal Menus (graphical, vocal, gestural, and mixed) in terms of menu selection time, error rate, subjective user satisfaction, and user preference, under low and high levels of item-prediction accuracy. Polymodal Menus offer alternative input/output modalities for selecting menu items in various contexts of use, especially when the graphical modality is constrained.
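To make the idea of a modality-independent prediction window concrete, the following is a minimal sketch, not the paper's implementation: the class name AdaptiveMenuModel, the frequency-plus-recency scoring blend, and the window size of three items are all illustrative assumptions, since the paper evaluates several prediction methods rather than prescribing one.

    # Minimal sketch (assumed, not the paper's code) of an adaptive menu
    # model that maintains a prediction window of the k most likely items.
    from collections import Counter, deque

    class AdaptiveMenuModel:
        def __init__(self, items, window_size=3, history_len=20):
            self.items = list(items)
            self.window_size = window_size           # size of the prediction window (assumed)
            self.frequency = Counter()               # how often each item was selected
            self.recent = deque(maxlen=history_len)  # recent selections, for recency

        def record_selection(self, item):
            # Update the model after a selection, regardless of the input
            # modality (graphical, tactile, vocal, or gestural) that produced it.
            self.frequency[item] += 1
            self.recent.append(item)

        def score(self, item):
            # One plausible prediction method: blend overall frequency with
            # a recency count over the last few selections.
            recency = sum(1 for r in self.recent if r == item)
            return self.frequency[item] + 0.5 * recency

        def prediction_window(self):
            # The k highest-scoring items; these would then be offered
            # through every available modality (redundancy).
            return sorted(self.items, key=self.score, reverse=True)[:self.window_size]

A short usage example, with hypothetical menu items:

    model = AdaptiveMenuModel(["Open", "Save", "Copy", "Paste", "Print"])
    for choice in ["Copy", "Paste", "Copy", "Save", "Copy"]:
        model.record_selection(choice)
    print(model.prediction_window())  # -> ['Copy', 'Save', 'Paste']

Because record_selection is agnostic to how the selection was made, the same model can drive the adaptive window whether the user tapped, spoke, or gestured, which is the point of making the prediction window itself polymodal.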