Gaze Controlled Application Framework
Description
Technology changes our society, but it also brings a new threat: people who cannot use it are excluded from society. Paradoxically, the solution to this problem can also be provided by technology, a good example being Cyber-Physical Systems (CPS). Disabled persons, for instance, often cannot use a mouse and keyboard, the typical computer input devices, for various reasons. However, these can be replaced by voice or, more interestingly for us, by gaze interaction! Gaze Controlled Application Framework (GCAF) is a platform for easily building customized applications controlled by gaze. The idea is to let the close community of a disabled person build applications for them, and therefore significantly lower their cost. This is possible thanks to a new interface description language, the Gaze Interaction Markup Language (GIML), based on human-readable XML files. It allows defining areas of interest (AOI) and describing their behaviour and their reactions to a glance or to longer staring. One can add drawings, videos, sounds, animations or events to these regions, customizing their look and their reactions.
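As an illustration, a minimal GIML file could look roughly like the sketch below. The element and attribute names are hypothetical, since the actual GIML schema is not reproduced here; the sketch only shows the general idea of declaring an AOI with its visual content and its reactions to a glance or a longer fixation.

    <?xml version="1.0" encoding="utf-8"?>
    <!-- Hypothetical sketch, not the real GIML schema: all names are illustrative only. -->
    <giml>
      <!-- One area of interest (AOI) placed on the screen. -->
      <area id="playButton" left="100" top="200" width="300" height="150">
        <!-- Default look of the region. -->
        <image src="play.png" />
        <!-- Reaction to a short glance at the region. -->
        <onGlance>
          <playSound src="hover.wav" />
        </onGlance>
        <!-- Reaction to staring at the region for 1.5 seconds. -->
        <onDwell time="1500">
          <playVideo src="reward.mp4" />
        </onDwell>
      </area>
    </giml>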
The second group of potential users are scientists performing experiments involving eye tracking devices. For them, GCAF allows collecting event history and statistics, which can be written to common formats such as CSV, XML or plain text. These files contain complete information, including an analysis of eye movements, so there is no need to use additional software. It is also possible to record a sequence of screenshots or a video of the test.
Currently, our software is used in experiments with infants:
The first experiment concerns the phenomenon of the decline of speech sound discrimination. Infants' ability to differentiate sounds that do not exist in their native language decreases during the first year of life, as children specialize in perceiving their native language. Using eye tracking (ET) makes this research possible through the Anticipatory Eye Movement paradigm, in which infants predict the position of a visual stimulus depending on the type of sound stimulus. The aim of our study was to verify this phenomenon in Polish samples and to carry out interactive training aimed at extending the sensitive period for identifying phonemes, as well as to construct training for infants based on visual interaction.
We used the software to design and test an interactive movie. The movie was developed as part of an ongoing research project applying ET to language learning in infants and young children. We expect that active training of phonetic contrast discrimination will be more effective than passive listening to speech sounds. The aim of the movie is to enable infants to control their environment and to induce rewarding responses from the movie characters. We are checking whether infants in indirect contact with the environment are able to discover agency, that is, the ability to recognize certain events as caused by their own actions.
However, the range of uses of GCAF is much broader. It can be used, for example, for studies of product perception and recognition, for pre-testing the effectiveness of various designs, for improving the precision of visual-motor coordination, and much more.
Project Team:
Jacek Matulewski - project leader, lead programmer
Bibianna Bałaj - scientific supervision (eye trackers), tests
Rafał Linowiecki (student) - programmer, research and development
Alicja Majka (student) - unit tests
Agnieszka Ignaczewska (student) - GCAF/GIML application tests