Quang Anh Duong
Authored Publications
Searching for Dermatology Information Online using Images vs Text: a Randomized Study
Jay Hartford
Amit Talreja
Natalie Salaets
Kimberley Raiford
Jay Nayar
Dounia Berrada
Harsh Kharbanda
Lou Wang
Peggy Bui
medRxiv (2024)
Abstract
Background: Skin conditions are extremely common worldwide and are an important cause of both anxiety and morbidity. Since the advent of the internet, individuals have used text-based search (eg, “red rash on arm”) to learn more about concerns on their skin, but this process is often hindered by the inability to accurately describe a lesion’s morphology. In this study, we surveyed respondents’ experiences with image-based search compared to the traditional text-based search experience.
Methods: An internet-based survey was conducted to evaluate the experience of text-based vs image-based search for skin conditions. We recruited respondents from an existing cohort of volunteers in a commercial survey panel; survey respondents who met inclusion/exclusion criteria, including willingness to take photos of a visible concern on their body, were enrolled. Respondents were asked to use the Google mobile app to conduct both regular text-based search (Google Search) and image-based search (Google Lens) for their concern, with the order of text vs. image search randomized. Satisfaction with each search experience along six different dimensions was recorded and compared, and respondents’ preferences between the two search types along these same six dimensions were recorded.
Results: 372 respondents were enrolled in the study, with 44% self-identifying as women, 86% as White, and 41% as over age 45. The proportions of respondents who were at least moderately familiar with searching for skin conditions using text-based search versus image-based search were 81.5% and 63.5%, respectively. After using both search modalities, respondents were highly satisfied with both image-based and text-based search, with >90% at least somewhat satisfied in each dimension and no significant differences between text-based and image-based search when responses were examined on an absolute scale per search modality. When asked to rate their preferences directly in a comparative way, survey respondents preferred image-based search over text-based search in 5 out of 6 dimensions, with an absolute 9.9% more preferring image-based search over text-based search overall (p=0.004). 82.5% (95% CI 78.2–86.3) reported a preference to use image-based search (alone or in combination with text-based search) in future searches. Of those who would prefer to use a combination of both, 64% indicated they would like to start with image-based search, suggesting that image-based search may be the preferred entry point for skin-related searches.
Conclusion: Despite being less familiar with image-based search at study inception, survey respondents generally preferred image-based search to text-based search and overwhelmingly wanted to include it in future searches. These results suggest that image-based search could play a key role in how people search for information about skin concerns.
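As a rough, illustrative check of the headline proportion in the Results above (not taken from the paper, which does not publish its analysis code), the sketch below shows how a 95% confidence interval for 82.5% of 372 respondents could be approximated. The count of 307 preferring respondents is back-calculated from the reported percentage and is therefore an assumption, as is the choice of the Wilson interval; other interval methods (e.g., Clopper-Pearson) give slightly different bounds.

```python
# Hedged sketch: approximate the reported 95% CI for the share of respondents
# preferring to include image-based search in future searches.
# Assumptions (not from the paper): 307/372 ~= 82.5% successes, Wilson interval.
from statsmodels.stats.proportion import proportion_confint

n_respondents = 372     # enrolled respondents (reported)
n_prefer_image = 307    # assumed count, back-calculated from the reported 82.5%

low, high = proportion_confint(n_prefer_image, n_respondents,
                               alpha=0.05, method="wilson")
print(f"point estimate: {n_prefer_image / n_respondents:.1%}")
print(f"95% CI: {low:.1%} - {high:.1%}")  # roughly matches the reported 78.2-86.3%
```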
Iterative quality control strategies for expert medical image labeling
Sonia Phene
Abigail Huang
Rebecca Ackermann
Olga Kanzheleva
Caitlin Taggart
Proceedings of the AAAI Conference on Human Computation and Crowdsourcing (2021)
Abstract
Data quality is a key concern for artificial intelligence (AI) efforts that rely on crowdsourced data collection. In the domain of medicine in particular, labeled data must meet higher quality standards, or the resulting AI may lead to patient harm and/or perpetuate biases. What are the challenges involved in expert medical labeling? What processes do such teams employ? In this study, we interviewed members of teams developing AI for medical imaging across 4 subdomains (ophthalmology, radiology, pathology, and dermatology). We identify a set of common practices for ensuring data quality. We describe one instance of low-quality labeling caught by post-launch monitoring; the more common pattern, however, is to involve experts in an iterative process of defining, testing, and refining tasks and instructions. Teams invest in these upstream efforts to mitigate downstream quality issues during large-scale labeling.
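To make the kind of upstream quality check described above concrete, here is a minimal, hypothetical sketch (not the authors' pipeline) of one common quality-control signal: per-grader agreement against a small adjudicated reference set, which a team might track while iterating on task definitions and instructions. The label names, data layout, and threshold are illustrative assumptions.

```python
# Hedged sketch: per-grader agreement against an adjudicated reference set.
# Illustrative example only; this is not the method described in the paper.
from collections import defaultdict

# Assumed format: (image_id, grader_id, label); labels are hypothetical.
grader_labels = [
    ("img_001", "grader_a", "DR_present"),
    ("img_001", "grader_b", "DR_absent"),
    ("img_002", "grader_a", "DR_present"),
    ("img_002", "grader_b", "DR_present"),
]
# Adjudicated reference labels for a small audit subset.
reference = {"img_001": "DR_present", "img_002": "DR_present"}

def per_grader_agreement(labels, reference):
    """Return each grader's fraction of labels matching the adjudicated reference."""
    hits, totals = defaultdict(int), defaultdict(int)
    for image_id, grader_id, label in labels:
        if image_id in reference:
            totals[grader_id] += 1
            hits[grader_id] += int(label == reference[image_id])
    return {grader: hits[grader] / totals[grader] for grader in totals}

agreement = per_grader_agreement(grader_labels, reference)
# A team might flag graders below an (assumed) threshold for retraining
# or for clarified instructions before scaling up labeling.
flagged = [grader for grader, score in agreement.items() if score < 0.8]
print(agreement, flagged)
```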
Abstract
Crowdsourcing has enabled the collection, aggregation, and refinement of human knowledge and judgment, i.e., ground truth, for problem domains with data of increasing complexity and scale. This scale of ground truth data generation, especially for the development of machine learning based medical applications that require large volumes of consistent diagnoses, poses significant and unique challenges to quality control. Poor quality control in crowdsourced labeling of medical data can have undesired effects on patients' health. In this paper, we study medicine-specific quality control problems, including the diversity of grader expertise and the ambiguity of diagnosis guidelines, in novel datasets covering three eye diseases. We present analytical findings on physicians' work patterns, evaluate existing quality control methods that rely on task completion time to circumvent the scarcity and cost of generating ground truth medical data, and share our experiences with a real-world system that collects medical labels at scale.
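As an illustration of the completion-time-based quality control methods the abstract refers to (not the authors' implementation; the data and cutoff rule below are assumptions), one simple variant flags labels whose task completion time falls far below the typical time graders spend:

```python
# Hedged sketch: flag suspiciously fast labels by task completion time.
# Illustrative only; the paper evaluates such methods, but this is not its code.
import numpy as np

# Assumed data: completion time in seconds for each labeling task.
completion_times = np.array([42.0, 55.0, 61.0, 4.0, 48.0, 3.5, 70.0, 52.0])

# Flag tasks completed much faster than is plausible for careful grading,
# here anything below an assumed fraction of the median time.
median_time = np.median(completion_times)
threshold = 0.25 * median_time          # assumed cutoff, not from the paper
flagged = np.flatnonzero(completion_times < threshold)

print(f"median completion time: {median_time:.1f}s, cutoff: {threshold:.1f}s")
print(f"tasks flagged for review: {flagged.tolist()}")
```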