Over the last decades there has been much research on mobile accessibility. However, most of it has been conducted in laboratory settings, producing only a snapshot of user performance. Understanding how performance changes over time and how people truly use their mobile devices remains an open question. A deeper knowledge of the challenges, frustrations, and overall real user experience is of utmost importance for improving current mobile technologies.
In this project, we are creating the tools and gathering the knowledge to characterize user performance in the wild (i.e., in real-world settings) in order to improve the devices and interfaces people use every day.
Title: Accessibility in the Wild
Date: Jan 1, 2017
Authors: André Rodrigues, Hugo Nicolau, Kyle Montague, Tiago Guerreiro, João Guerreiro
Keywords: accessibility, mobile, laboratory, in-the-wild, everyday, touchscreen, performance
Typing on mobile devices is a common and complex task. The act of typing encodes rich information, such as the typing method, the context in which it is performed, and individual traits of the person typing. Researchers are increasingly using a selection or combination of experience sampling and passive sensing methods in real-world settings to examine typing behaviours. However, there is limited understanding of the effects these methods have on measures of input speed, typing behaviours, compliance, and perceived trust and privacy. In this paper, we investigate the tradeoffs of everyday data collection methods. We contribute empirical results from a four-week field study (N=26) in which participants contributed by transcribing sentences, composing messages, having their typing passively analyzed, and reflecting on their contributions. We present a tradeoff analysis of these data collection methods, discuss their impact on text-entry applications, and contribute a flexible research platform for in-the-wild text-entry studies.
Touch data, and in particular text-entry data, has mostly been collected in the laboratory, under controlled conditions. While touch and text-entry data has consistently shown its potential for monitoring and detecting a variety of conditions and impairments, its deployment in the wild remains a challenge. In this paper, we present WildKey, an Android keyboard toolkit that enables the usable deployment of in-the-wild user studies. WildKey analyses text-entry behaviours through implicit and explicit text-entry data collection while ensuring user privacy. We detail each of WildKey's components and features, all of the metrics collected, and discuss the steps taken to ensure user privacy and promote compliance.
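As a rough illustration of the kind of aggregate, content-free metrics a keyboard toolkit of this sort might report, the sketch below computes entry speed and backspace ratio from keystroke timestamps. The data structure and field names are assumptions made for illustration only, not WildKey's actual API.

```python
# Illustrative sketch (not WildKey's implementation): derive aggregate,
# content-free text-entry metrics from a keystroke log. Field names and
# structure are assumptions for illustration only.
from dataclasses import dataclass
from typing import List

@dataclass
class Keystroke:
    timestamp_ms: int   # when the key event was committed
    is_backspace: bool  # whether the event deleted a character

def entry_speed_wpm(keystrokes: List[Keystroke], transcribed_length: int) -> float:
    """Words per minute, using the standard 5-characters-per-word convention."""
    if len(keystrokes) < 2:
        return 0.0
    elapsed_s = (keystrokes[-1].timestamp_ms - keystrokes[0].timestamp_ms) / 1000.0
    if elapsed_s <= 0:
        return 0.0
    return (transcribed_length - 1) / elapsed_s * 60.0 / 5.0

def session_summary(keystrokes: List[Keystroke], transcribed_length: int) -> dict:
    """Only aggregate measures leave the device; no typed content is stored."""
    backspaces = sum(1 for k in keystrokes if k.is_backspace)
    return {
        "wpm": entry_speed_wpm(keystrokes, transcribed_length),
        "keystrokes": len(keystrokes),
        "backspace_ratio": backspaces / len(keystrokes) if keystrokes else 0.0,
    }
```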
Blind people face significant challenges when using smartphones. Efforts to improve non-visual mobile accessibility have focused mostly on touchscreen access. Our research investigates the challenges faced by blind people in their everyday usage of mobile phones. In this paper, we present a set of studies performed with the target population, from novices to experts, using a variety of methods aimed at identifying and verifying challenges and coping mechanisms. Through a multiple-methods approach, we identify and validate challenges locally, with a diverse set of user expertise and devices, and at scale, through analyses of the largest Android and iOS dedicated forums for blind people. We contribute a comprehensive corpus of smartphone challenges for blind people, an assessment of their perceived relevance for users with different expertise levels, and a discussion of directions for future research that tackle open and often overlooked challenges.
Mobile device users are required to constantly learn to use new apps and features and to adapt to updates. For blind people, adapting to a new interface requires additional time and effort. In the worst case, and often in practice, devices and applications become unusable without support from someone else. Using tutorials is a common approach to foster independent learning of new concepts and workflows. However, most tutorials available online are limited in scope or detail, or quickly become outdated. They also presume a degree of tech savviness that is not within reach of the common mobile device user. Our research explores the democratization of assistance by enabling non-technical people to create tutorials on their mobile phones for others. We report on the interaction and information needs of blind people when following ‘amateur’ tutorials, providing insights into how to widen and improve the authoring and playthrough of these learning artifacts. We conducted a study in which 12 blind users followed tutorials previously created by blind or sighted people. Our findings suggest that instructions authored by sighted and blind people are limited in different aspects, and that those limitations prevent effective learning of the task at hand. We identified the types of content produced by authors and the information required by followers during playthrough, which often do not align. We provide insights on how to support both the authoring and playthrough of non-visual smartphone tutorials. There is an opportunity to design solutions that mediate authoring, combine contributions, adapt to the user's profile, react to context, and are living artifacts capable of perpetual improvement.
The constant barrage of updates and novel applications to explore creates a ceaseless cycle of new layouts and interaction methods that we must adapt to. One way to address these challenges is through in-context interactive tutorials. Most applications provide onboarding tutorials that use visual metaphors to guide the user through the core features available. However, these tutorials are limited in their scope and are often inaccessible to blind people. In this paper, we present AidMe, a system-wide tool for authoring and playing through non-visual interactive tutorials. Tutorials are created via user demonstration and narration. In a user study with 11 blind participants using AidMe, we identified issues with instruction delivery and user guidance, providing insights into the development of accessible non-visual interactive tutorials.
Over the last decade there have been numerous studies on touchscreen typing by blind people. However, there are no reports on blind users' everyday typing performance and how it relates to laboratory settings. We conducted a longitudinal study with five participants to investigate how blind users truly type on their smartphones. For twelve weeks, we collected field data, coupled with eight weekly laboratory sessions. This paper provides a thorough analysis of everyday typing data and its relationship with controlled laboratory assessments. We improve state-of-the-art techniques for obtaining intent from field data and provide insights on real-world performance. Our findings show that users improve over time, even though at a slow rate. Substitutions are the most common type of error and have a significant impact on entry rates in both field and laboratory settings. Results show that participants are 1.3 to 2 times faster when typing during everyday tasks, but also less accurate. We finish by deriving implications that should inform the design of future virtual keyboards for non-visual input. Moreover, our findings should be of interest to keyboard designers and researchers looking to conduct field studies to understand everyday input performance.
Blind people face many barriers using smartphones. Still, previous research has mostly been restricted to non-visual gestural interaction, paying little attention to the deeper daily challenges of blind users. To bridge this gap, we conducted a series of workshops with 42 blind participants, uncovering application challenges across all levels of expertise, most of which could only be surpassed through a support network. We propose Hint Me!, a human-powered service that allows blind users to get in-app assistance by posing questions or browsing previously answered questions in a shared knowledge base. We evaluated the perceived usefulness and acceptance of this approach with six blind people. Participants valued the ability to learn independently and anticipated a range of usages: labeling, layout and feature descriptions, bug workarounds, and learning to accomplish tasks. The choice between creating and browsing questions depends on aspects such as privacy, knowledge of respondents, and response time, revealing the benefits of a hybrid approach.
Research on non-visual text-entry for people with visual impairments has focused mostly on the comparison of input techniques, reporting on performance measures such as accuracy and speed. While researchers have established that non-visual input is slow and error-prone, there is little understanding of how to improve it. To develop a richer characterization of typing performance, we conducted a longitudinal study with five novice blind users. For eight weeks, we collected in-situ usage data and conducted weekly laboratory assessment sessions. This paper presents a thorough analysis of typing performance that goes beyond traditional aggregated text-entry measures and reports on character-level errors and touch measures. Our findings show that users improve over time, even though at a slow rate (0.3 WPM per week). Substitutions are the most common type of error and have a significant impact on entry rates. In addition to text input data, we analyzed touch behaviors, looking at touch contact points, exploration movements, and lift positions. We provide insights on why and how performance improvements and errors occur. Finally, we derive implications that should inform the design of future virtual keyboards for non-visual input.
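For context, character-level error analysis in text-entry research typically relies on a minimum string distance (Levenshtein) alignment between the presented and transcribed text to classify insertions, omissions, and substitutions. The sketch below illustrates that standard analysis; it is an assumption-laden illustration of the general technique, not the study's exact pipeline.

```python
# Sketch of character-level error classification via minimum string
# distance (Levenshtein) alignment, as commonly used in text-entry research.
# Illustrative only; not the study's exact analysis pipeline.
def classify_errors(presented: str, transcribed: str) -> dict:
    """Count substitutions, insertions, and omissions between the presented
    (intended) text and the transcribed (typed) text."""
    n, m = len(presented), len(transcribed)
    # dp[i][j] = edit distance between presented[:i] and transcribed[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if presented[i - 1] == transcribed[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # omission of presented[i-1]
                           dp[i][j - 1] + 1,         # insertion of transcribed[j-1]
                           dp[i - 1][j - 1] + cost)  # match or substitution

    # Walk back through the table to classify each edit operation.
    errors = {"substitutions": 0, "insertions": 0, "omissions": 0}
    i, j = n, m
    while i > 0 or j > 0:
        if (i > 0 and j > 0
                and dp[i][j] == dp[i - 1][j - 1] + (presented[i - 1] != transcribed[j - 1])):
            if presented[i - 1] != transcribed[j - 1]:
                errors["substitutions"] += 1
            i, j = i - 1, j - 1
        elif j > 0 and dp[i][j] == dp[i][j - 1] + 1:
            errors["insertions"] += 1
            j -= 1
        else:
            errors["omissions"] += 1
            i -= 1
    return errors

# Example: presented "hello world", typed "helo worlds"
# -> one omission ('l') and one insertion ('s').
print(classify_errors("hello world", "helo worlds"))
```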
The advent of system-wide accessibility services on mainstream touch-based smartphones has been a major point of inclusion for blind and visually impaired people. Ever since, researchers have aimed to improve the accessibility of specific tasks, such as text-entry and gestural interaction. However, little work has aimed to understand and improve the overall accessibility of these devices in real-world settings. In this paper, we present an eight-week study with five novice blind participants in which we seek to understand their major concerns, expectations, challenges, barriers, and experiences with smartphones. The study included pre-adoption and weekly interviews, weekly controlled task assessments, and in-the-wild system-wide usage. Our results show that mastering these devices is an arduous and long task, confirming the users' initial concerns. We report on accessibility barriers experienced throughout the study that could not have been encountered in task-based laboratory settings. Finally, we discuss how smartphones are being integrated into everyday activities and highlight the need for better adoption support tools.
Most work investigating mobile HCI is carried out in controlled laboratory settings; these spaces are not representative of the real-world environments in which the technology will predominantly be used. As a result, such studies can produce a skewed or inaccurate understanding of interaction behaviors and users' abilities. While mobile in-the-wild studies provide more realistic representations of technology usage, there are additional challenges to conducting data collection outside of the lab. In this paper we discuss these challenges and present TinyBlackBox, a standalone data collection framework to support mobile in-the-wild studies with today's smartphone and tablet devices.
Touchscreens are pervasive in mainstream technologies; they offer novel user interfaces and exciting gestural interactions. However, to interpret and distinguish between the vast range of gestural inputs, the devices require users to perform interactions consistently, in line with the predefined location, movement, and timing parameters of the gesture recognizers. For people with variable motor abilities, particularly hand tremors, performing these input gestures can be extremely challenging and impose limitations on the possible interactions the user can make with the device. In this paper, we examine the touchscreen performance and interaction behaviors of motor-impaired users on mobile devices. The primary goal of this work is to measure and understand the variance of touchscreen interaction performance by people with motor impairments. We conducted a four-week in-the-wild user study with nine participants using a mobile touchscreen device; a Sudoku stimulus application measured their interaction performance during this time. Our results show that not only does interaction performance vary significantly between users, but also that an individual's interaction abilities differ significantly between device sessions. Finally, we propose and evaluate the effect of novel tap gesture recognizers that accommodate individual variances in touchscreen interactions.
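The recognizers evaluated in the paper are not reproduced here; as a purely hypothetical illustration of the general idea, the sketch below adapts tap thresholds (maximum duration and movement) to an individual's recent touch history rather than relying on fixed device defaults. All class and parameter names are assumptions made for the example.

```python
# Hypothetical illustration (not the recognizers evaluated in the paper):
# classify a touch as a tap using per-user thresholds learned from that
# user's recent touch history, rather than fixed device-wide defaults.
import statistics
from dataclasses import dataclass
from typing import List

@dataclass
class Touch:
    duration_ms: float  # time between finger down and finger up
    movement_px: float  # distance travelled between down and up positions

class AdaptiveTapRecognizer:
    def __init__(self, history_size: int = 50):
        self.history: List[Touch] = []
        self.history_size = history_size
        # Conservative defaults used until enough history is available.
        self.max_duration_ms = 300.0
        self.max_movement_px = 20.0

    def observe(self, touch: Touch) -> None:
        """Record a touch and re-fit thresholds to this user's behaviour."""
        self.history.append(touch)
        self.history = self.history[-self.history_size:]
        if len(self.history) >= 10:
            durations = [t.duration_ms for t in self.history]
            movements = [t.movement_px for t in self.history]
            # Accept up to mean + 2 standard deviations of this user's touches.
            self.max_duration_ms = statistics.mean(durations) + 2 * statistics.stdev(durations)
            self.max_movement_px = statistics.mean(movements) + 2 * statistics.stdev(movements)

    def is_tap(self, touch: Touch) -> bool:
        return (touch.duration_ms <= self.max_duration_ms
                and touch.movement_px <= self.max_movement_px)
```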
NavTap is a navigational method that enables blind users to input text on a mobile device by reducing the associated cognitive load. In this paper, we present studies that go beyond a laboratory setting, exploring the method's effectiveness and learnability as well as its influence on the users' daily lives. Eight blind users participated in designing the prototype (3 weeks), while five took part in the studies over 16 more weeks. Results gathered in controlled weekly sessions and real-life usage logs enabled us to better understand NavTap's advantages and limitations. The method proved both easy to learn and easy to improve with. Indeed, users were able to control their mobile devices to send SMS messages and perform other tasks that require text input, such as managing a phonebook, from day one, in real-life settings. While individual user profiles play an important role in determining their evolution, even less capable users (with age-induced impairments or cognitive difficulties) were able to perform the assigned tasks (SMS, directory) both in the laboratory and in everyday use, showing continuous improvement of their skills. According to interviews, none had been able to input text before. NavTap dramatically changed their relationship with mobile devices and noticeably improved their social interaction capabilities.
NavTap is a navigational method that enables blind users to input text on a mobile device by reducing the associated cognitive load. We present studies that go beyond a laboratory setting, exploring the method's effectiveness and learnability as well as its influence on the users' daily lives. Eight blind users participated in the prototype's design (3 weeks), while five took part in the studies over 16 more weeks. All were unable to input text before. Results gathered in controlled weekly sessions and real-life interaction logs revealed the method to be easy to learn and to improve performance, as the users were able to fully control mobile devices from the first contact in real-life scenarios. Individual profiles play an important role in determining evolution, and even less capable users (with age-induced impairments or cognitive difficulties) were able to perform the required tasks, in and out of the laboratory, with continuous improvement. NavTap dramatically changed the users' relationship with the devices and improved their social interaction capabilities.