We investigate the use of social robots to create inclusive mixed-visual-ability classrooms.
We investigate the use of tangible systems to promote computational thinking skills in mixed-ability children.
AVATAR proposes creating a signing 3D avatar able to synthesize Portuguese Sign Language.
ARCADE proposes leveraging interactive and digital technologies to create context-aware workspaces to improve physical rehabilitation practices.
Although text-entry is an inherently visually demanding task, we are creating novel non-visual input methods for multiple form factors: from tablets to smartwatches.
Braille 21 is an umbrella term for a series of research projects that aim to bring Braille to the 21st century. Our goal is to facilitate access to Braille in the new digital era.
In this project, we are creating the tools to characterize user performance in the wild and improve current everyday devices and interfaces.
We investigate novel interfaces and interaction techniques for non-visual word completion. We are particularly interested in quantifying the benefits and costs of such new solutions.
As touchscreens have evolved to provide multitouch capabilities, we are exploring new multi-point feedback solutions.
In this research work, we are investigating novel interactive applications that leverage the use of concurrent speech to improve users' experiences.
This research leverages mobile and wearable technologies to improve classroom accessibility for Deaf and Hard of Hearing college students.
Our goal is to thoroughly study mobile touchscreen interfaces, their characteristics and parameterizations, thus providing the tools for informed interface design.
This project investigates how accurate tracking systems and engaging activities can be leveraged to provide effective evaluation procedures in physical rehabilitation.
We aim to understand the overlap between the problems faced by health-impaired and situationally impaired users when using their mobile devices, and to design solutions for both user groups.
Visually impaired (VI) children face challenges in collaborative learning in classrooms. Robots have the potential to support inclusive classroom experiences by leveraging their physicality, bespoke social behaviors, sensors, and multimodal feedback. However, the design of social robots for mixed-visual-ability classrooms remains mostly unexplored. This paper presents a four-month-long community-based design process in which we engaged with a school community. We provide insights into the barriers experienced by children and how social robots can address them. We also report on a participatory design activity with mixed-visual-ability children, highlighting the expected roles, attitudes, and physical characteristics of robots. Findings contextualize social robots within inclusive classroom settings as a holistic solution that can interact anywhere, whenever needed, and suggest a broader view of inclusion that goes beyond disability to encompass children’s personality traits, technology access, and mastery of school subjects. We conclude with reflections on the community-based design process.
Visually impaired (VI) children are increasingly educated in mainstream schools following an inclusive educational approach. However, even though VI and sighted peers sit side by side in the classroom, previous research has shown a lack of participation of VI children in classroom dynamics and group activities. This leads to reduced engagement between VI children and their sighted peers and a missed opportunity to value and explore class members’ differences. Robots, thanks to their physicality and their ability to perceive the world, behave socially, and act through a wide range of interactive modalities, can improve mixed-visual-ability children’s access to group activities while fostering mutual understanding and social engagement. With this work, we aim to use social robots as facilitators to boost inclusive activities in mixed-visual-ability classrooms.
Touch data, and in particular text-entry data, have mostly been collected in the laboratory, under controlled conditions. While touch and text-entry data have consistently shown their potential for monitoring and detecting a variety of conditions and impairments, their deployment in-the-wild remains a challenge. In this paper, we present WildKey, an Android keyboard toolkit that allows for the usable deployment of in-the-wild user studies. WildKey is able to analyse text-entry behaviours through implicit and explicit text-entry data collection while ensuring user privacy. We detail each of WildKey’s components and features, all of the metrics collected, and discuss the steps taken to ensure user privacy and promote study compliance.
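To make the implicit data collection concrete, here is a minimal sketch of privacy-preserving text-entry metrics in the spirit of WildKey. The event structure, function names, and the specific metrics shown (words per minute, inter-key interval, backspace rate) are our illustrative assumptions, not WildKey's actual API; the key point is that only timestamps and aggregate counts are retained, never the typed content.

```python
# Sketch: per-session text-entry metrics from anonymised keystroke events.
# Field and function names are hypothetical, not WildKey's API. The typed
# characters themselves are never stored, only timings and counts.
from dataclasses import dataclass

@dataclass
class KeyEvent:
    timestamp_ms: int   # when the key was committed
    is_backspace: bool  # deletions reveal error-correction behaviour

def session_metrics(events: list[KeyEvent], committed_chars: int) -> dict:
    """Aggregate metrics for one typing session."""
    if len(events) < 2:
        return {}
    duration_min = (events[-1].timestamp_ms - events[0].timestamp_ms) / 60_000
    intervals = [b.timestamp_ms - a.timestamp_ms
                 for a, b in zip(events, events[1:])]
    backspaces = sum(e.is_backspace for e in events)
    return {
        # words-per-minute, using the standard 5-characters-per-word convention
        "wpm": (committed_chars / 5) / duration_min,
        "mean_inter_key_ms": sum(intervals) / len(intervals),
        "backspace_rate": backspaces / len(events),
    }
```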
Portuguese Sign Language, like Portuguese itself, evolved naturally, acquiring grammatical characteristics distinct from Portuguese. Developing a translator between the two therefore does not consist merely of mapping each word to a sign (signed Portuguese), but of guaranteeing that the resulting signs satisfy the grammar of Portuguese Sign Language and that the translations are semantically correct. Previous work relies exclusively on manual translation rules and is very limited in the range of grammatical phenomena covered, producing little more than signed Portuguese. In this article, we present the first Portuguese to Portuguese Sign Language translation system, PE2LGP, which, in addition to manual rules, relies on translation rules built automatically from a reference corpus. Given a sentence in Portuguese, the system returns a sequence of glosses with markers that identify facial expressions, fingerspelled words, and more. An automatic evaluation and a manual evaluation are presented, with results indicating improvements in translation quality for short, simple sentences over the baseline system (signed Portuguese). This is also the first work that handles the grammatical facial expressions that mark interrogative and negative sentences.
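As a rough illustration of rule-based gloss generation with grammatical markers, consider the minimal sketch below. The rules, marker names, and function are hypothetical simplifications of ours, not PE2LGP's actual rule set or output format.

```python
# Sketch: rewrite a tokenised Portuguese sentence into LGP-style glosses,
# attaching markers for grammatical facial expressions. Rules and marker
# names ([INT], [NEG]) are illustrative, not PE2LGP's.
def translate(tokens: list[str]) -> list[str]:
    # Drop articles (no direct LGP counterpart) and uppercase the glosses.
    glosses = [t.upper() for t in tokens if t not in {"o", "a", "os", "as"}]
    markers = []
    if tokens and tokens[-1] == "?":
        glosses.remove("?")
        markers.append("[INT]")   # interrogative facial expression
    if "não" in (t.lower() for t in tokens):
        markers.append("[NEG]")   # negation facial expression / headshake
    return glosses + markers

# "Did you see the dog?" -> glosses plus an interrogative marker
print(translate(["tu", "viste", "o", "cão", "?"]))
# ['TU', 'VISTE', 'CÃO', '[INT]']
```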
Accessible introductory programming environments are scarce, and their study within ecological settings (e.g., at home) is almost non-existent. We present ACCembly, an accessible block-based environment that enables children with visual impairments to perform spatial programming activities. ACCembly allows children to assemble tangible blocks to program a multimodal robot. We evaluated this approach with seven families that used the system autonomously at home. Results showed that both the children and family members learned from what was an inclusive and engaging experience. Children leveraged fundamental computational thinking concepts to solve spatial programming challenges; parents took on different roles as mediators, some actively teaching and scaffolding, others learning together with their child. We contribute an environment that enables children with visual impairments to engage in spatial programming activities, an analysis of parent-child interactions, and reflections on inclusive programming environments within a shared family experience.
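To give a flavour of how a sequence of tangible blocks can map to robot commands, here is a minimal interpreter sketch. The block vocabulary, the tuple encoding of repeat blocks, and the interpreter itself are our assumptions for illustration, not ACCembly's actual representation.

```python
# Sketch: flatten a tangible block program (with nested repeat blocks)
# into a list of robot commands. Block names and the ("repeat", n, body)
# loop encoding are illustrative, not ACCembly's design.
def run_program(blocks: list) -> list[str]:
    commands = []
    for block in blocks:
        if isinstance(block, tuple):        # ("repeat", times, [body...])
            _, times, body = block
            commands.extend(run_program(body) * times)
        else:                               # a primitive action block
            commands.append(block)
    return commands

# Trace a square-ish path, then signal completion with a beep.
program = [("repeat", 4, ["forward", "turn-left"]), "beep"]
print(run_program(program))
# ['forward', 'turn-left', 'forward', 'turn-left', 'forward', 'turn-left',
#  'forward', 'turn-left', 'beep']
```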
The inclusion of vulnerable people in society is essential to guarantee human rights and equal opportunities for all. Our research goal is to mitigate disparities in education, ensure access for all children, including pupils with special educational needs and disabilities (SEND), and promote inclusion among students using social robots. Inclusion in schools has several dimensions to consider, namely: identifying reasons for exclusion and exclusionary behaviours, making school activities accessible, and promoting a diverse and inclusive culture among children. Our approach to this challenge was a six-month-long community engagement effort with a local school community to gain insights from different stakeholders: children with and without disabilities (visual impairments and autism), parents, teachers, and several specialists, including braille, speech, and occupational therapists, psychologists, and mobility and navigation instructors. We then conducted a participatory design activity during class, in which 50 children with mixed abilities built robots. We contribute novel insights on the design of robots for mixed-ability groups of children in remote and co-located settings, and on the challenges and opportunities for an inclusive school raised by the school community.
Sign languages are visual languages and the main means of communication used by Deaf people. However, the majority of the information available online is presented in written form and is therefore not easily accessible to the Deaf community. Avatars that can animate sign languages have gained increased interest in this area due to their flexibility in the generation and editing process. Synthetic animation of conversational agents can be achieved through the use of notation systems. HamNoSys is one such system, describing movements of the body through symbols. Its XML-compliant form, SiGML, is a machine-readable encoding of HamNoSys able to animate avatars. Nevertheless, there are no freely available open-source libraries that allow the conversion from HamNoSys to SiGML. Our goal is to develop an open-access tool that can perform this conversion independently of other platforms. This system represents a crucial intermediate step in the bigger pipeline of animating signing avatars. Two case studies are described to illustrate different applications of our tool.
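To make the conversion concrete, below is a minimal sketch of wrapping a HamNoSys symbol string into SiGML. The symbol code points and tag names in the mapping table are illustrative placeholders (the real tables cover hundreds of symbols), and while the sigml/hns_sign element structure follows commonly published SiGML examples, this is a sketch of the idea, not our tool's implementation.

```python
# Sketch: convert a HamNoSys symbol string into a SiGML document.
# HamNoSys symbols live in a Unicode Private Use Area block; SiGML wraps
# each symbol's name in an XML element. Code points and tag names below
# are hypothetical excerpts, not the official mapping tables.
import xml.etree.ElementTree as ET

SYMBOL_TO_TAG = {               # hypothetical excerpt of the mapping
    "\ue001": "hamfist",        # handshape: fist
    "\ue00c": "hamflathand",    # handshape: flat hand
    "\ue028": "hamextfingeru",  # extended finger direction: up
    "\ue038": "hampalml",       # palm orientation: left
}

def hamnosys_to_sigml(gloss: str, hamnosys: str) -> str:
    """Wrap a HamNoSys symbol string into a SiGML <hns_sign> element."""
    sigml = ET.Element("sigml")
    sign = ET.SubElement(sigml, "hns_sign", gloss=gloss)
    ET.SubElement(sign, "hamnosys_nonmanual")   # left empty in this sketch
    manual = ET.SubElement(sign, "hamnosys_manual")
    for symbol in hamnosys:
        tag = SYMBOL_TO_TAG.get(symbol)
        if tag is None:
            raise ValueError(f"unmapped HamNoSys symbol: {symbol!r}")
        ET.SubElement(manual, tag)              # one empty element per symbol
    return ET.tostring(sigml, encoding="unicode")

print(hamnosys_to_sigml("HELLO", "\ue00c\ue028\ue038"))
```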