David Huffaker
David Huffaker is the Director of UX Research for Google Maps.
His academic research focuses on understanding communication and social behavior to inform the design of human-computer interaction. He holds a Ph.D. in Media, Technology and Society from Northwestern University.
Authored Publications
"Not some trumped up beef": Assessing Credibility of Online Restaurant Reviews
Victoria Schwanda Sosik
Human-Computer Interaction – INTERACT 2015, Springer
Online reviews, or electronic word of mouth (eWOM), are an essential source of information for people making decisions about products and services; however, they are also susceptible to abuses such as spamming and defamation. Therefore, when making decisions, readers must determine if reviews are credible. Yet relatively little research has investigated how people make credibility judgments of online reviews. This paper presents quantitative and qualitative results from a survey of 1,979 respondents, showing that attributes of the reviewer and review content influence credibility ratings. Especially important for judging credibility are the level of detail in the review, whether or not it is balanced in sentiment, and whether the reviewer demonstrates expertise. Our findings contribute to the understanding of how people judge eWOM credibility, and we suggest how eWOM platforms can be designed to coach reviewers to write better reviews and present reviews in a manner that facilitates credibility judgments.
A Comparison of Questionnaire Biases Across Sample Providers
Aaron Sedley
Victoria Sosik
American Association for Public Opinion Research, 2015 Annual Conference (2015)
Survey research, like all methods, is fraught with potential sources of error that can significantly affect the validity and reliability of results. There are four major types of error common to surveys as a data collection method: (1) coverage error arising from certain segments of a target population being excluded, (2) nonresponse error where not all those selected for a sample respond, (3) sampling error which results from the fact that surveys only collect data from a subset of the population being measured, and (4) measurement error. Measurement error can arise from the wording and design of survey questions (i.e., instrument error), as well as the variability in respondent ability and motivation (i.e., respondent error) [17].
This paper focuses primarily on measurement error as a source of bias in surveys. It is well established that instrument error [34, 40] and respondent error (e.g., [21]) can yield meaningful differences in results. For example, variations in response order, response scales, descriptive text, or images used in a survey can lead to instrument error which can result in skewed response distributions. Certain types of questions can trigger other instrument error biases, such as the tendency to agree with statements presented in an agree/disagree format (acquiescence bias) or the hesitancy to admit undesirable behaviors or overreport desirable behaviors (social desirability bias). Respondent error is largely related to the amount of cognitive effort required to answer a survey and arises when respondents are either unable or unwilling to exert the required effort [21].
Such measurement error has been compared across survey modes, such as face-to-face, telephone, and Internet (e.g., [9, 18]), but little work has compared different Internet samples, such as crowdsourcing task platforms (e.g., Amazon’s Mechanical Turk), paywall surveys (e.g., Google Consumer Surveys), opt-in panels (e.g., Survey Sampling International), and probability-based panels (e.g., the GfK KnowledgePanel). Because these samples differ in recruiting, context, and incentives, respondents may be more or less motivated to effortfully respond to questions, leading to different degrees of bias in different samples. The specific instruments deployed to respondents in these different modes can also exacerbate the situation by requiring more or less cognitive effort to answer satisfactorily.
The present study has two goals:
(1) Investigate the impact of question wording on response distributions in order to measure the strength of common survey biases arising from instrument and respondent error.
(2) Compare the variance in the degree of these biases across Internet survey samples with differing characteristics in order to determine whether certain types of samples are more susceptible to certain biases than others.
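A minimal sketch of the kind of per-sample comparison goal (2) describes, assuming (hypothetically) that the same attitude item is fielded in each sample both in an agree/disagree format and in an item-specific format; the gap between the two agreement shares then gives a rough per-sample estimate of acquiescence bias. The sample names, response counts, and item-specific shares below are invented for illustration and are not from the paper.

from collections import Counter

def agreement_rate(responses):
    # Fraction of respondents who agree (collapsing "strongly" and "somewhat" agree).
    counts = Counter(responses)
    agree = counts["strongly agree"] + counts["somewhat agree"]
    return agree / sum(counts.values())

def acquiescence_bias(agree_disagree_responses, item_specific_share):
    # Excess agreement in the agree/disagree format relative to the share endorsing
    # the same position when it is offered as an explicit, item-specific choice.
    return agreement_rate(agree_disagree_responses) - item_specific_share

# Hypothetical per-sample data for illustration only.
samples = {
    "crowdsourcing platform": (["strongly agree"] * 62 + ["somewhat agree"] * 18 + ["disagree"] * 20, 0.55),
    "probability-based panel": (["strongly agree"] * 48 + ["somewhat agree"] * 12 + ["disagree"] * 40, 0.52),
}

for name, (responses, item_share) in samples.items():
    print(f"{name}: estimated acquiescence bias = {acquiescence_bias(responses, item_share):+.2f}")

A larger estimated bias in one sample than another would be the sort of signal the study's cross-sample comparison is after; the real analysis would of course use the actual fielded instruments rather than a single toy item.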
Online Microsurveys for User Experience Research
Victoria Schwanda Sosik
Gueorgi Kossinets
Kerwell Liao
Paul McDonald
Aaron Sedley
CHI '14 Extended Abstracts on Human Factors in Computing Systems (2014)
Instant Foodie: Predicting Expert Ratings From Grassroots
Chenhao Tan
Gueorgi Kossinets
Alex J. Smola
CIKM’13, Oct. 27–Nov. 1, 2013, San Francisco, CA, USA, ACM
Consumer review sites and recommender systems typically rely on a large volume of user-contributed ratings, which makes rating acquisition an essential component in the design of such systems. User ratings are then summarized to provide an aggregate score representing a popular evaluation of an item. An inherent problem in such summarization is potential bias due to raters’ self-selection and heterogeneity in terms of experiences, tastes and rating scale interpretations. There are two major approaches to collecting ratings, which have different advantages and disadvantages. One is to allow a large number of volunteers to choose and rate items directly (a method employed by, e.g., Yelp and Google Places). Alternatively, a panel of raters may be maintained and invited to rate a predefined set of items at regular intervals (such as in Zagat Survey). The latter approach arguably results in more consistent reviews and reduced selection bias, albeit at the expense of much smaller coverage (fewer rated items).
In this paper, we examine the two different approaches to collecting user ratings of restaurants and explore the question of whether it is possible to reconcile them. Specifically, we study the problem of inferring the more calibrated Zagat Survey ratings (which we dub “expert ratings”) from the user-contributed ratings (“grassroots”) in Google Places. To achieve this, we employ latent factor models and provide a probabilistic treatment of the ordinal ratings. We can predict Zagat Survey ratings accurately from ad hoc user-generated ratings by employing joint optimization. Furthermore, the resulting model shows that users become more discerning as they submit more ratings. We also describe an approach towards cross-city recommendations, answering questions such as “What is the equivalent of the Per Se restaurant in Chicago?”
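A minimal sketch, not the authors' actual model, of the ordinal ingredient described above: a cumulative-link ("ordered logit") likelihood that maps a latent quality score per restaurant to a distribution over expert-rating bins via learned thresholds. Here the latent score is stood in for by the mean of a restaurant's user ratings rather than a jointly optimized latent factor, and all restaurants, ratings, and bins are synthetic.

import numpy as np

def ordinal_probs(score, thresholds):
    # P(bin <= k) = sigmoid(threshold_k - score); successive differences give P(bin = k).
    cdf = 1.0 / (1.0 + np.exp(-(thresholds - score)))
    cdf = np.concatenate(([0.0], cdf, [1.0]))
    return np.diff(cdf)

# Synthetic data: user ratings on a 1-5 scale, expert ratings in 4 ordinal bins (0..3).
user_ratings = [np.array([2, 3, 2]), np.array([3, 4, 3, 3]),
                np.array([4, 4, 5, 4]), np.array([5, 5, 5, 4])]
expert_bins = np.array([0, 1, 2, 3])
scores = np.array([r.mean() for r in user_ratings])  # stand-in for a learned latent factor

# Fit the K-1 = 3 thresholds by (numerical) gradient ascent on the log-likelihood.
thresholds = np.array([2.0, 3.0, 4.0])
lr, eps = 0.05, 1e-5
for _ in range(500):
    grad = np.zeros_like(thresholds)
    for s, k in zip(scores, expert_bins):
        base = np.log(ordinal_probs(s, thresholds)[k] + 1e-12)
        for j in range(len(thresholds)):
            bumped = thresholds.copy()
            bumped[j] += eps
            grad[j] += (np.log(ordinal_probs(s, bumped)[k] + 1e-12) - base) / eps
    thresholds += lr * grad
    thresholds.sort()  # keep the cutpoints ordered

for s, k in zip(scores, expert_bins):
    print(f"mean user rating={s:.2f}  expert bin={k}  fitted P(bin | score)={ordinal_probs(s, thresholds)[k]:.2f}")

In the paper's setting the score would come from latent factors fit jointly with the thresholds over both rating sources; the sketch only illustrates why a cumulative-link likelihood is a natural way to treat ordinal expert ratings probabilistically.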
In this article Carolyn Wei and David Huffaker, Google User Experience researchers, explore how understanding gaming sociability could help marketers communicate with a growing audience in new ways. From heightening personalization with "virtual goods", to avoiding the pitfalls of "noisy" game notifications, today's marketers can create a gaming niche that is both relevant and meaningful to a highly engaged user base.
Around the Water Cooler: Shared Discussion Topics and Contact Closeness in Social Search
Saranga Komanduri
Lujun Fang
Jessica Staddon
Proceedings of the Sixth International AAAI Conference on Weblogs and Social Media (ICWSM-12), ACM (2012)
Search engines are now augmenting search results with social annotations, i.e., endorsements from users’ social network contacts. However, there is currently a dearth of published research on the effects of these annotations on user choice. This work investigates two research questions associated with annotations: 1) do some contacts affect user choice more than others, and 2) are annotations relevant across various information needs. We conduct a controlled experiment with 355 participants, using hypothetical searches and annotations, and elicit users’ choices. We find that domain contacts are preferred to close contacts, and this preference persists across a variety of information needs. Further, these contacts need not be experts and might be identified easily from conversation data.
Are privacy concerns a turn-off? Engagement and privacy in social networks
Jessica Staddon
Larkin Brown
Aaron Sedley
Symposium on Usable Privacy and Security (SOUPS), ACM (2012)
We describe the survey results from a representative sample of 1,075 U.S. social network users who use Facebook as their primary network. Our results show a strong association between low engagement and privacy concern. Specifically, users who report concerns around sharing control, comprehension of sharing practices or general Facebook privacy concern, also report consistently less time spent as well as less (self-reported) posting, commenting and “Like”ing of content. The limited evidence of other significant differences between engaged users and others suggests that privacy-related concerns may be an important gate to engagement. Indeed, privacy concern and network size are the only malleable attributes that we find to have significant association with engagement. We manually categorize the privacy concerns finding that many are nonspecific and not associated with negative personal experiences. Finally, we identify some education and utility issues associated with low social network activity, suggesting avenues for increasing engagement amongst current users.
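A minimal sketch, not the paper's analysis, of one way such an association could be tested: compare an engagement measure (here, hypothetical self-reported minutes per day) between users who do and do not report a privacy concern, using a nonparametric test. The data, group sizes, and effect direction below are synthetic.

import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Hypothetical self-reported minutes per day for users who do / do not report a privacy concern.
minutes_concerned = rng.gamma(shape=2.0, scale=10.0, size=200)
minutes_unconcerned = rng.gamma(shape=2.0, scale=15.0, size=200)

stat, p = mannwhitneyu(minutes_concerned, minutes_unconcerned, alternative="less")
print(f"U={stat:.0f}, one-sided p={p:.4f} (small p suggests concerned users report less time spent)")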
Understanding the Meta-Experience of Casual Games
Carolyn Wei
Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI ’12). Workshop on Games User Research, ACM (2012)
In this position paper, we argue that casual gamers can be segmented by “meta-experiences” into a typology that could inform game platform design. These meta-experiences include out-of-game immersion, social layering, and game discovery. We discuss the interviews and video diaries that have helped shape the typology.
Talking in Circles: Selective Sharing in Google+
Sanjay Kairam
Michael J. Brzozowski
Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI ’12), ACM, New York, NY (2012), pp. 1065-1074