MannAccess is an assistive technology for visually impaired people. Its main goal is to represent a digital image in an auditory and tactile way, so that a visually impaired user can "feel" the image. The tool consists of a refreshable pin table driven by a 3-axis mechanism. The pins have attached touch sensors that detect the user's touch and trigger contextual audio about the image region that was touched. The technology also includes a digital image object recognition module, so that different research efforts can be integrated with MannAccess.
Globally, around 1.3 billion people have some visual impairment, of whom 36 million are blind. From a digital inclusion point of view, one of the main challenges for visually impaired people is accessing the content of digital images.
To improve this learning, it is recommended to represent an image in multiple ways, such as a tactile representation. Several tactile devices can perform this task; however, most of them are expensive to acquire (around a thousand dollars).
This cost is a fundamental barrier for visually impaired people, whose purchasing power is often low, preventing them from buying such devices. The high cost also prevents aid organizations for the visually impaired from acquiring them to better serve their public.
Considering the aforementioned issues, MannAccess has two main premises: a low acquisition cost and customizability according to the user's needs. To meet these premises, the tactile device was designed with parts that can be printed on a 3D printer, reducing its development cost. In addition, the parts allow the refreshable table to be built in customizable sizes, so a developer can create it according to their needs.
The following figure presents the main workflow of MannAccess.
In MannAccess, a user accesses the system through their own device (such as a notebook or smartphone). They can then select a digital image to generate its auditory and tactile representation. An object recognition system processes the selected image and produces an intermediate representation. This representation is processed by a centralized server, which generates a tactile representation to be rendered on the refreshable pin table.
From this moment, the user can interact with the refreshable pin table. As the user touches the pins, the touch sensors send information about the touched region to the centralized server. The server, in turn, processes these signals and generates contextual audio indicating the meaning of the touched image area. This audio is played on the user's own device.
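The two halves of this flow can be sketched in code: reducing an image to a raised/lowered pin matrix, and mapping a touched pin back to a region label for audio feedback. This is a minimal illustration only; the function names, the thresholding step, and the region map are assumptions, not the project's actual implementation.

```python
# Illustrative sketch of the MannAccess flow (not the real API):
# 1) downsample a grayscale image into a ROWS x COLS matrix of pins,
# 2) map a touched pin coordinate to a region label for contextual audio.

ROWS, COLS = 12, 12  # dimensions of the printed prototype's pin table


def image_to_pin_matrix(pixels, threshold=128):
    """Reduce a grayscale image (list of rows of 0-255 ints) to a
    ROWS x COLS matrix where True means a raised pin."""
    h, w = len(pixels), len(pixels[0])
    matrix = []
    for r in range(ROWS):
        row = []
        for c in range(COLS):
            # average the pixel block that maps onto this pin
            r0, r1 = r * h // ROWS, (r + 1) * h // ROWS
            c0, c1 = c * w // COLS, (c + 1) * w // COLS
            block = [pixels[y][x] for y in range(r0, r1) for x in range(c0, c1)]
            row.append(sum(block) / len(block) < threshold)  # dark => raised
        matrix.append(row)
    return matrix


def audio_label_for_touch(row, col, regions):
    """Given a touched pin and labelled regions mapping
    (r0, c0, r1, c1) -> label, return the label to be spoken."""
    for (r0, c0, r1, c1), label in regions.items():
        if r0 <= row < r1 and c0 <= col < c1:
            return label
    return "background"
```

In this sketch, the object recognizer's "intermediate representation" would play the role of the `regions` map, linking pin coordinates back to recognized objects so the server can select the right audio.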
The refreshable pin table was designed to be customizable. It is composed of 9 main parts that can be replicated to build different sizes. The following figure shows its design drawing.
Besides the parts related to the refreshable pin display surface, we also designed parts in the following categories:
As an example of these parts, the following pictures show the design and assembly of the pins.
The prototype was assembled with parts printed on a homemade 3D printer. The printed pin table has dimensions of 12 x 12 pins. The development cost was around 113 times lower than the acquisition cost of the HyperBraille (a state-of-the-art device). The prototype can be seen below.
This prototype was evaluated by visually impaired users. The evaluation indicated that the system, in general, has good usability, meets accessibility guidelines, provides a proper user experience, and effectively fulfills its goal: an alternative representation of digital images through touch and audio. The following picture shows the visually impaired participants performing the evaluation (faces were pixelated to ensure anonymity).
The parts and the device assembly are described in detail in the Master's thesis "MannaHap: um modelo de sistema háptico assistivo de representação de imagens digitais para deficientes visuais", which can be accessed at this link (in Portuguese only).
A digital image recognition module for automata was also developed, contextualizing a specific Computer Science topic in the teaching-learning process of visually impaired students. This module is described in the Master's thesis "MannAR: um método de interpretação de imagens de autômatos aplicado às tecnologias assistivas para deficientes visuais", which can be accessed at this link (in Portuguese only).
To create your own refreshable pin table, the design and source files of the printed pin tables are available in this file.
It contains the parts, developed in SolidWorks Professional 2018, organized into 4 directories according to the part groups:
Each directory has a subdirectory with the part files in .STL format, ready to print.
For the assembly, we recommend reading the Master's thesis that introduced the pin table, available at this link (in Portuguese only).
MannAccess is an initiative of the Manna Group - Invisible Computing Engineering Research, from the Informatics Department at the State University of Maringá, Brazil.
The research is led by Professor Linnyer Beatrys Ruiz Aylon, Ph.D. The refreshable pin table was designed and developed by Álisson Renan Svaigen, MSc., and Wuigor Ivens Siqueira Bine. The automata digital image recognition module was developed by Lailla Milainny Siqueira Bine, MSc. The research also counts on the collaboration of Juliano Cezar Chagas Tavares.
BINE, L. M. S.; SVAIGEN, A. R.; BINE, W. I. S.; RUIZ, L. B. MannAccess: a novel low cost assistive educational tool of digital image for visually impaired. (Accepted for publication at the IEEE Computer Society Signature Conference on Computers, Software and Applications - COMPSAC 2020)
BINE, Lailla; RUIZ, Linnyer. MannAR: um método de interpretação de imagens de autômatos aplicado às tecnologias assistivas para deficientes visuais. In: Anais dos Workshops do Congresso Brasileiro de Informática na Educação. 2019. p. 1073. (in Portuguese only)
BINE, L. M. S.; SVAIGEN, A. R.; BINE, W. I. S.; RUIZ, L. B. Visual content and its teaching-learning process for visually impaired: a big challenge. In: Brazilian Symposium on Computers in Education (Simpósio Brasileiro de Informática na Educação - SBIE). 2019. p. 1251.
BINE, Lailla M. Siqueira; COSTA, Yandre MG; AYLON, Linnyer B. Ruiz. Automata classification with convolutional neural networks for use in assistive technologies for the visually impaired. In: Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference. 2018. p. 157-164.
SVAIGEN, Alisson Renan; BINE, Lailla M. Siqueira; AYLON, Linnyer Beatrys Ruiz. An Assistive Haptic System Towards Visually Impaired Computer Science Learning. In: Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference. 2018. p. 153-156.