
Full metadata record

DC Field: Value (Language)

dc.contributor.author: 김기범
dc.date.accessioned: 2018-12-19T07:52:36Z
dc.date.available: 2018-12-19T07:52:36Z
dc.date.issued: 2018-01
dc.identifier.citation: INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES, v. 109, Page 79-88 (en_US)
dc.identifier.issn: 1095-9300
dc.identifier.issn: 1071-5819
dc.identifier.uri: https://www.sciencedirect.com/science/article/pii/S1071581917301283
dc.identifier.uri: https://repository.hanyang.ac.kr/handle/20.500.11754/80958
dc.description.abstract: Current smartphone accessibility for people with visual impairments relies largely on screen readers and voice commands. However, voice commands and screen readers are often not ideal because users with visual impairments rely mostly on hearing ambient sound from the environment for their safety in mobile situations. Recent research has shown that marking menus on mobile devices provide fast, eyes-free access for sighted users (Francone et al., 2010; Oakley and Park, 2007, 2009). However, the literature lacks design implications and adaptations that meet the needs of users with visual impairments. This paper investigates the capabilities of visually impaired people to invoke smartphone functions using marking menus via 3D motions. We explore and present the optimal number of menu items (breadth) and menu levels (depth) for marking menus that people with visual impairments can successfully adopt. We also compared a marking menu prototype to TalkBack(TM), an accessibility menu system on Android smartphones. The experimental results show that our participants could perform menu selections faster with marking menus than with TalkBack. Based on the study results, we provide implications and guidelines for designing marking menus and motion gesture interfaces for people with visual impairments. (en_US)
dc.description.sponsorship: This study has been partially supported by the Grant-in-Aid for Scientific Research by MEXT (Ministry of Education, Culture, Sports, Science and Technology of Japan), under Grant No. 25330241. In addition, this research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (Grant No. NRF-2017R1D1A3B03033353), and by the Keimyung University Research Grant of 2017. The authors would like to thank the participants, Zhenxin Wang, Guanghui Chen, Qi Fang, and the members of CHEC for their great effort and support. (en_US)
dc.language.iso: en_US (en_US)
dc.publisher: ACADEMIC PRESS LTD - ELSEVIER SCIENCE LTD (en_US)
dc.subject: Marking menus (en_US)
dc.subject: Motion gestures (en_US)
dc.subject: Accessibility (en_US)
dc.subject: People with visual impairments (en_US)
dc.title: Designing motion marking menus for people with visual impairments (en_US)
dc.type: Article (en_US)
dc.relation.volume: 109
dc.identifier.doi: 10.1016/j.ijhcs.2017.09.002
dc.relation.page: 79-88
dc.relation.journal: INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES
dc.contributor.googleauthor: Dim, Nem Khan
dc.contributor.googleauthor: Kim, Kibum
dc.contributor.googleauthor: Ren, Xiangshi
dc.relation.code: 2018002990
dc.sector.campus: E
dc.sector.daehak: COLLEGE OF COMPUTING[E]
dc.sector.department: DIVISION OF MEDIA, CULTURE, AND DESIGN TECHNOLOGY
dc.identifier.pid: kibum
Appears in Collections:
COLLEGE OF COMPUTING[E] (College of Software Convergence) > MEDIA, CULTURE, AND DESIGN TECHNOLOGY (School of ICT Convergence) > Articles
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.