Yu Zhong
Authored Publications
Social Biases in NLP Models as Barriers for Persons with Disabilities
Stephen Craig Denuyl
Proceedings of ACL 2020 (to appear)
Building equitable and inclusive technologies demands paying attention to how social attitudes towards persons with disabilities are represented within technology. Representations perpetuated by NLP models often inadvertently encode undesirable social biases from the data on which they are trained. In this paper, we first present evidence of such undesirable biases towards mentions of disability in two different NLP models: toxicity prediction and sentiment analysis. Next, we demonstrate that neural embeddings, which are critical first steps in most NLP pipelines, also contain undesirable biases towards mentions of disability. We then expose the topical biases in the social discourse about some disabilities which may explain such biases in the models; for instance, terms related to gun violence, homelessness, and drug addiction are over-represented in discussions about mental illness.
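As a rough illustration of the embedding-level probing described above (not the paper's exact methodology), one can compare cosine similarities between disability-related terms and valence-laden words in off-the-shelf word vectors; the GloVe vectors and gensim loader below are assumptions made for the sketch, not the embeddings analyzed in the paper.

```python
# Illustrative sketch only: probing word embeddings for valence associations
# around disability-related terms. GloVe via gensim is an assumed stand-in,
# not the authors' actual setup.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # downloads pretrained GloVe vectors

disability_terms = ["blind", "deaf", "wheelchair", "autism"]
neutral_terms = ["tall", "short"]
valence_words = ["good", "bad", "dangerous", "happy"]

def valence_profile(term):
    """Cosine similarity of a term to a few valence-laden words."""
    return {w: round(float(vectors.similarity(term, w)), 3) for w in valence_words}

for term in disability_terms + neutral_terms:
    if term in vectors:  # skip out-of-vocabulary terms
        print(term, valence_profile(term))
```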
Unintended Machine Learning Biases as Social Barriers for Persons with Disabilities
Stephen Craig Denuyl
Proceedings of Workshop on AI Fairness for People with Disabilities (2019)
Persons with disabilities face many barriers to participation in society, and the rapid advancement of technology creates ever more of them. Achieving fair opportunity and justice for people with disabilities demands paying attention not just to accessibility, but also to the attitudes towards, and representations of, disability that are implicit in the machine learning (ML) models that pervade how one engages with society. However, such models often inadvertently learn to perpetuate undesirable social biases from the data on which they are trained. This can result, for example, in models for classifying text producing very different predictions for "I stand by a person with mental illness" and "I stand by a tall person". We present evidence of such social biases in existing ML models, along with an analysis of biases in a dataset used for model development.
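The contrast between the two example sentences above can be reproduced in spirit with any off-the-shelf text classifier; the default Hugging Face sentiment pipeline below is an assumed stand-in, not one of the models evaluated in the paper.

```python
# Illustrative sketch only: perturbation-style probe comparing classifier scores
# for the two example sentences from the abstract. The default Hugging Face
# sentiment model is an assumption; the paper's models may differ.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

sentences = [
    "I stand by a person with mental illness",
    "I stand by a tall person",
]

for sentence in sentences:
    result = classifier(sentence)[0]
    print(f"{sentence!r}: {result['label']} ({result['score']:.3f})")
```

A large gap between the two scores, despite the sentences differing only in the mention of disability, is the kind of unintended bias the paper documents.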
Investigating Cursor-based Interactions to Support Non-Visual Exploration in the Real World
Anhong Guo
Xu Wang
Patrick Clary
Ken Goldman
Jeffrey Bigham
Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility (2018)
The human visual system processes complex scenes to focus attention on relevant items. However, blind people cannot visually skim for an area of interest. Instead, they use a combination of contextual information, knowledge of the spatial layout of their environment, and interactive scanning to find and attend to specific items. In this paper, we define and compare three cursor-based interactions to help blind people attend to items in a complex visual scene: window cursor (move their phone to scan), finger cursor (point their finger to read), and touch cursor (drag their finger on the touchscreen to explore). We conducted a user study with 12 participants to evaluate the three techniques on four tasks, and found that: window cursor worked well for locating objects on large surfaces, finger cursor worked well for accessing control panels, and touch cursor worked well for helping users understand spatial layouts. A combination of multiple techniques will likely be best for supporting a variety of everyday tasks for blind users.
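The three techniques share one underlying step: projecting a cursor point onto detected scene items and announcing the nearest one. The sketch below illustrates that shared logic under assumed, simplified inputs; it is not the study prototype.

```python
# Conceptual sketch only: map a 2D cursor point onto detected scene items and
# announce the nearest one. Item detection and speech output are stubbed out.
import math
from dataclasses import dataclass

@dataclass
class SceneItem:
    label: str   # e.g. OCR'd text or an object class
    x: float     # center of the item's bounding box in image coordinates
    y: float

def nearest_item(cursor_x, cursor_y, items, max_distance=80.0):
    """Return the detected item closest to the cursor, if one is near enough."""
    best = min(items, key=lambda it: math.hypot(it.x - cursor_x, it.y - cursor_y),
               default=None)
    if best and math.hypot(best.x - cursor_x, best.y - cursor_y) <= max_distance:
        return best
    return None

# The cursor point comes from a different source per technique:
#   window cursor -> center of the camera frame as the phone moves
#   finger cursor -> detected fingertip position in the frame
#   touch cursor  -> the finger's location on the touchscreen
items = [SceneItem("start button", 120, 340), SceneItem("door handle", 400, 200)]
hit = nearest_item(130, 350, items)
if hit:
    print(f"Speak: {hit.label}")  # in a real system, routed to text-to-speech
```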
Enhancing Android Accessibility for Users with Hand Tremor by Reducing Fine Pointing and Steady Tapping
Phil Weaver
Jeffrey P. Bigham
Web4All, Florence, Italy (2015), pp. 10
In this paper we introduce JustSpeak, a universal voice control solution for non-visual access to the Android operating system. JustSpeak offers two contributions compared to existing systems. First, it enables system-wide voice control on Android that can accommodate any application. JustSpeak constructs the set of available voice commands based on application context; these commands are synthesized directly from on-screen labels and accessibility metadata, and require no further intervention from the application developer. Second, it provides more efficient and natural interaction by supporting multiple voice commands in the same utterance. We present the system design of JustSpeak and describe its utility in various use cases. We then discuss the system-level support required by a service like JustSpeak on other platforms. By eliminating the target locating and pointing tasks, JustSpeak can significantly improve the experience of graphical interface interaction for blind and motion-impaired users.
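The command-synthesis idea can be sketched independently of Android: collect the labels of actionable on-screen elements, treat each label as a voice command, and execute every command found in an utterance. The Python sketch below is a conceptual illustration under those assumptions, not the Android accessibility-service implementation described in the paper.

```python
# Conceptual sketch only: the real system walks Android's accessibility node
# tree and uses platform speech recognition; here both are replaced by plain
# Python data to show the command-synthesis and multi-command idea.

def build_commands(screen_nodes):
    """Map each actionable on-screen label to its action, with no developer effort."""
    return {node["label"].lower(): node["action"]
            for node in screen_nodes if node.get("clickable")}

def handle_utterance(utterance, commands):
    """Execute every recognized command in the utterance, in spoken order."""
    text = utterance.lower()
    matches = sorted((text.find(label), label) for label in commands if label in text)
    for _, label in matches:
        commands[label]()
    return [label for _, label in matches]

# Hypothetical screen state: labels would come from accessibility metadata.
screen = [
    {"label": "Open Gmail", "clickable": True, "action": lambda: print("opening Gmail")},
    {"label": "Compose",    "clickable": True, "action": lambda: print("composing")},
]
handle_utterance("open gmail and compose", build_commands(screen))
```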