
Funding

AAL PAELife Project AAL/0014/2009

Older Adult Performance Using Body Gesture Interaction

Gesture interfaces are becoming an increasingly popular way to interact with technology, as they are considered easy to learn and use. However, most gesture interaction studies focus on the average adult or, when focusing on older adults, are set in gaming or physical activity contexts. In this study, we evaluate the suitability of gestural interfaces for older adults interacting with a general technological interface. To this end, we asked 14 older users to perform a set of navigation and selection tasks, the two tasks required by most technological interfaces. For each task, we evaluated two alternative gestures. All senior participants were able to complete almost all the proposed tasks and enjoyed using this type of interface. We conclude that gestural interfaces are adequate for senior users, and derive a set of design implications that future developers should take into account when designing gestural interactions for older people.

The problem

Over the last decade, gestural interfaces have attracted increasing interest, both in industry and in research. This type of interface first gained popularity in video games, where the user's body typically acts as the controller. However, thanks to the broad availability and low cost of gesture recognition hardware, applications are now being developed outside the gaming context. Since people express themselves and interact in everyday social life through gestures, gestural interfaces are considered natural and easy to use. Body gesture interfaces can therefore ease technological interaction for groups that have so far shown some resistance to adopting technology, such as older adults.

However, seniors have particular physical characteristics that can be a hindrance when using gestural interfaces. Research shows that ageing brings a significant decline in cognitive, perceptual and motor abilities. Motor issues of older adults include slower motion, less strength, reduced fine motor control, and decreased range of motion and grip force. Interactions should therefore be designed carefully to avoid fatigue, exhaustion and the need for fine motor control. On the other hand, since some degree of physical activity is required to interact with gestural interfaces, they are likely to have a positive impact on the health of senior users, even if the intensity of that activity is low.

Current literature focuses mainly on gesture interaction for average adults or, when it does focus on older adults, it is usually in the gaming context. Seniors' performance with, and acceptance of, body gesture interfaces are therefore not well understood, particularly considering their specific needs and abilities outside the gaming context. In this study, we aim to understand how older adults can benefit from gesture-based interaction, in terms of suitability and acceptance, when interacting with technological interfaces in general. To evaluate how seniors adapt to gestural interfaces, we focused on two types of tasks required by most technological interfaces: navigation and selection.

Experimental design

To understand whether gestural interfaces are suited for seniors interacting with a general technological interface, we focused on two types of tasks: navigation and selection. For each task, we evaluated two alternative gestures. We designed simple one-hand gestures, thus avoiding the problems that may arise with bimanual interactions. All the defined gestures only require seniors to move one hand above the hip and in front of the body for a short period of time; they are therefore relatively simple and physically easy to perform. We used Microsoft's Kinect for gesture recognition, since it has the benefit of not requiring any accessory to operate, making it more practical and comfortable to use. It also tracks the user's whole body, whereas alternatives based on a hand-held controller only sense the motion of the controller itself.

For navigation, we evaluated the Swipe and the Grab and Drag gestures. To perform a Swipe, users raise either hand in the air and make a horizontal motion in the desired direction. A Swipe is only registered when the hand moves horizontally by at least 30 cm. For the Grab and Drag gesture, we used the implementation in Microsoft's Kinect SDK. To perform it, users raise either hand so that a hand cursor appears on screen, with the hand open and the palm facing the Kinect sensor. They then close the hand to "grab" the content and drag it in the desired direction to scroll. To scroll further, they open the hand to "release", so they can grab and drag again. This alternative may require more movement and coordination than the Swipe, but we expected it to give users more control over the navigation; the Swipe strives for simplicity.
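The Swipe recognition described above can be sketched as a simple check over a short window of tracked hand positions. This is a minimal illustration, not the study's actual implementation: the 30 cm horizontal threshold comes from the text, while the time window and vertical-drift tolerance are assumed values for the sake of the example.

```python
# Sketch of swipe detection over a stream of tracked hand positions,
# assuming skeleton frames with hand coordinates in meters (as Kinect
# skeletal tracking provides).

SWIPE_MIN_DISTANCE = 0.30   # meters of horizontal travel (from the study)
SWIPE_MAX_DURATION = 0.5    # seconds; assumed upper bound for one swipe
SWIPE_MAX_DRIFT = 0.15      # meters of allowed vertical drift (assumed)

def detect_swipe(samples):
    """samples: list of (timestamp_s, x_m, y_m) for the tracked hand.
    Returns 'left', 'right', or None if no swipe is recognized."""
    if len(samples) < 2:
        return None
    t0, x0, y0 = samples[0]
    t1, x1, y1 = samples[-1]
    if t1 - t0 > SWIPE_MAX_DURATION:
        return None                  # too slow to count as one swipe
    if abs(y1 - y0) > SWIPE_MAX_DRIFT:
        return None                  # too much vertical movement
    dx = x1 - x0
    if abs(dx) < SWIPE_MIN_DISTANCE:
        return None                  # hand did not travel far enough
    return 'right' if dx > 0 else 'left'
```

In practice the recognizer would run this check continuously over a sliding window of recent frames, resetting after each recognized swipe.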

For the selection task, we developed the Point and Push and the Point and Hold gestures. For both, users raise either hand towards the screen so that a hand cursor appears. To select with Point and Push, users then move the hand towards the screen, as if touching the target; for this gesture we again used the implementation in Microsoft's Kinect SDK. With Point and Hold, users keep the hand cursor over a target for 1.5 seconds to select it. The interface gives feedback on the selection state by progressively filling the target's background with a lighter color, like an hourglass; when the target is completely filled, it is selected. We expected Point and Push to be more precise, since it does not limit the time users have to aim, while Point and Hold is simpler, as users only have to keep pointing for a while to perform a selection.
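The Point and Hold mechanism is essentially a dwell timer per target, with a progress value that drives the "hourglass" fill feedback. The sketch below illustrates the idea under stated assumptions: the 1.5 s dwell time comes from the study, but the class and method names are our own and not part of any Kinect API.

```python
# Sketch of a dwell timer for Point and Hold selection.

DWELL_TIME = 1.5  # seconds the cursor must stay on a target (from the study)

class DwellSelector:
    def __init__(self, dwell_time=DWELL_TIME):
        self.dwell_time = dwell_time
        self.target = None   # target currently under the hand cursor
        self.start = None    # time at which the cursor entered that target

    def update(self, target, now):
        """Call once per frame with the target under the cursor (or None)
        and the current time in seconds. Returns the selected target, or None."""
        if target != self.target:
            # Cursor moved to a different target: restart the dwell timer.
            self.target = target
            self.start = now if target is not None else None
            return None
        if target is None:
            return None
        if now - self.start >= self.dwell_time:
            self.start = now   # reset so the target is not re-fired every frame
            return target
        return None

    def progress(self, now):
        """Fraction of the dwell completed, for the hourglass fill feedback."""
        if self.target is None:
            return 0.0
        return min(1.0, (now - self.start) / self.dwell_time)
```

Resetting the timer whenever the cursor changes target is what causes the error pattern reported later: with large targets, an initial mis-aim leaves less than 1.5 seconds to correct before the wrong target fires.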

To test the navigation gestures, we displayed a horizontally scrollable list of numbers, as shown in the next figure (a). For the selection gestures, a varying number of targets was displayed on the screen, and users were asked to select a particular target from the set, as shown in (b).

Results of the user study

Fourteen older people, 3 men and 11 women, took part in our user study. Users were asked to perform specific navigation and selection tasks. To test the navigation gestures, participants were successively asked to scroll to a predetermined number displayed on the screen; a total of 8 navigations was required for each navigation gesture. The order of the target numbers was chosen to cover three conditions: large, medium, and small scroll ranges. For the selection task, the application asked users to select a random target in a grid of 2 targets, then in a grid of 4, then 8, and finally 16 targets. The varying number of selectable targets allows us to assess the performance and precision of the gestures relative to target number and size.

Speed and errors

Regarding the navigation tasks, users completed them faster with the Swipe gesture; a paired t-test revealed that the difference is statistically significant. This occurred mainly because Swipes allow users to scroll larger distances faster. Moreover, most senior users found the Grab and Drag gesture more complex and harder to perform than the Swipe. Some participants reported that they needed to be very focused to coordinate the motions required by Grab and Drag. On the other hand, some users preferred Grab and Drag because it allowed finer control, especially over small distances.

Regarding the selection tasks, the Point and Hold gesture yielded better performance than the Point and Push alternative; a paired t-test showed that the difference is statistically significant (p=0.009). Since both gestures require pointing at the screen, we can conclude that users took longer than 1.5 seconds, the dwell time required by Point and Hold, to complete the push motion. Almost all users performed the selection tasks without difficulty, finding the gestures simple and easy to perform.

As we can see in the previous figure, the total number of errors is similar for both navigation gestures; indeed, a paired t-test showed no statistically significant difference (p=0.18). However, the types of errors differ between the two gestures. With the Swipe, users made more precision errors, which happen when users scroll in one direction towards a particular number but, due to the gesture's lack of precision, overshoot it. With Grab and Drag, participants committed more direction errors, which happen when users are asked to navigate in one direction but end up scrolling in the opposite one.

Regarding errors in the selection tasks, both gestures achieved very similar results. However, with the Point and Hold gesture, most errors occurred when there were only 2 or 4 targets on screen (83% of the errors). This happened because users would start pointing at an undesired target and did not have enough time, since selection triggers after 1.5 seconds, to move the hand cursor to the desired target before the erroneous selection occurred. On the other hand, most errors with the Point and Push gesture occurred with 8 or 16 targets on screen (60% of the errors). Here the cause was the lack of precision of the push motion itself: users had no trouble preselecting the desired target by placing the hand cursor over it, but while performing the push they would slightly move the hand and accidentally select another target.

User Satisfaction and Feedback

Regarding the navigation gestures, both alternatives achieved similar results in the satisfaction questionnaire. However, this occurred because some participants preferred the Swipe while others preferred the Grab and Drag, which tied the satisfaction scores. It is thus relevant to analyze the most frequent comments made by participants. They reported that the Swipe is easier to learn and execute than the Grab and Drag, and therefore considered it a more natural gesture. Some participants found the Grab and Drag too complex and demanding in terms of coordination, considering it a gesture that is not usually performed in everyday life and thus harder to master. Some participants also linked the difficulty of performing the navigation gestures to lack of practice, but were optimistic that with more time to practice they would get used to the gestures and use them more proficiently.

Regarding the precision of the navigation gestures, users reported that the Swipe did not allow very precise scrolling, particularly over short distances. Indeed, participants who were able to perform the Grab and Drag gesture usually preferred it over the Swipe, since it allowed more control and precision. However, some users reported discomfort while performing the Grab and Drag, stating that keeping the palm facing the television screen is an uncomfortable position. Beyond this, seniors did not regard either navigation gesture as tiring, except for a couple of participants who suffered from arthritis and from mobility and balance issues.

The selection tasks were performed more easily than the navigation tasks. We are aware that this is probably due in part to the fact that they were performed after the navigation tasks, giving users time to get used to gesture-based interaction. For selection, most users preferred the Point and Hold gesture, reporting it to be very simple and easy to perform. Even when the targets got smaller, users reported that it was easy to aim at and select the desired target. Participants also enjoyed the Point and Push gesture, but reported that it was a bit more tiring for the arm. Some users started the gesture with the arm already stretched; in this case, there was no room for the arm to stretch further to perform the "push", and users ended up painfully stretching the whole upper body. Some users had no problem preselecting, i.e. placing the hand cursor over the target, but would lose precision during the push and press another target, or even outside the screen.

Design implications for gestural interfaces

In this study, we showed that gesture interaction is an appropriate way for older adults to control a general technological interface. Our results showed that older people enjoyed using gestural interfaces, finding most of the evaluated gestures easy to learn and use. From our results, we derive the following design implications for gestural interfaces:

  • Keep the defined gestures as basic as possible. Gestures that may look simple for the average adult, such as Grab and Drag, may prove to be complex coordination challenges for older adults. Our gestures composed of two distinct steps (Grab and Drag, Point and Push) demanded more concentration from seniors, which reduced performance. The simpler the gesture, the easier it is to learn, which also increases the motivation to keep using it.
  • Develop gestures that can be performed with either hand. In this study we tried to simulate a real-life scenario where gestural interfaces bring value, such as a living room. In such a scenario, users may not have their dominant hand free. Therefore, and related to the previous implication, a gesture must be simple enough to be performed with the non-dominant hand. All participants used only their dominant hand for the Grab and Drag gesture, but some seniors used both hands to perform the Swipe in both directions. The Swipe, being simple enough to be performed with either hand, allows greater freedom of interaction.
  • Give visual feedback on the state of the gesture recognition. In our first implementation of the Swipe gesture there was no visual feedback, simply because for this gesture there is no direct mapping from hand movements to elements on screen. However, senior participants felt lost without visual cues, wondering when they could perform the gesture. Participants felt more confident performing Swipes when visual feedback was given, even if it was as minimal as displaying an icon on screen when the user's hand was raised.
  • Allow personalization and adaptation. Each user moves in his or her own particular way, both in speed and in distance, which makes static thresholds suboptimal across the population. Gesture recognition is a great challenge per se, but the optimal solution involves adapting these thresholds to each user, preferably automatically; otherwise, manual personalization should be available.
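The last implication can be illustrated with a small sketch of automatic threshold adaptation. This is only one possible scheme, not the study's method: the 30 cm default is the study's static swipe threshold, while the target fraction, the smoothing weight, and all names are illustrative assumptions. The idea is to nudge the recognition threshold toward a fixed fraction of the amplitude each user actually produces.

```python
# Sketch of per-user threshold adaptation via an exponential moving
# average of observed swipe distances. All parameters are assumptions.

DEFAULT_THRESHOLD = 0.30   # meters, the study's static swipe threshold
TARGET_FRACTION = 0.6      # assumed: accept swipes at 60% of typical amplitude
SMOOTHING = 0.2            # EMA weight given to each new observation

class AdaptiveThreshold:
    def __init__(self, threshold=DEFAULT_THRESHOLD):
        self.threshold = threshold
        self.typical = None   # running estimate of the user's swipe distance

    def observe(self, distance):
        """Record the horizontal distance of a successfully recognized
        swipe and update the threshold for this user."""
        if self.typical is None:
            self.typical = distance
        else:
            self.typical = (1 - SMOOTHING) * self.typical + SMOOTHING * distance
        self.threshold = TARGET_FRACTION * self.typical
```

A user who consistently makes short swipes would thus see the threshold shrink toward motions he or she can perform comfortably, while a user with ample movements keeps a stricter threshold that filters out accidental hand motion.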