Mario Callegaro

Mario Callegaro is a user experience survey researcher at Google UK, London, on the Cloud Platform User Experience (CPUX) team. He works on survey-related projects within his organization and consults with numerous other internal teams on survey design, sampling, questionnaire design, and online survey programming and implementation. Mario holds an M.S. and a Ph.D. in Survey Research and Methodology from the University of Nebraska-Lincoln. Prior to joining Google, Mario worked as a survey research scientist for Knowledge Networks, whose KnowledgePanel is now part of Ipsos. His current research areas are user experience research, web survey design, smartphone surveys, survey paradata, and questionnaire design, on which he has published numerous papers, book chapters, and conference presentations. In May 2014 he published an edited Wiley book titled Online Panel Research: A Data Quality Perspective together with Reginald P. Baker, Jelke Bethlehem, Anja S. Göritz, Jon A. Krosnick, and Paul J. Lavrakas. Mario also completed a book titled Web Survey Methodology with Katja Lozar-Manfreda and Vasja Vehovar from the University of Ljubljana, Slovenia, published by Sage in June 2015 and also available as an open-access PDF and EPUB at https://study.sagepub.com/web-survey-methodology
Authored Publications
    Did You Misclick? Reversing 5-Point Satisfaction Scales Causes Unintended Responses
    CHI '24: Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems
    Abstract: When fielding satisfaction questions, survey platforms offer the option to randomly reverse the response options. In this paper, we provide evidence that the use of this option leads to biased results. In Study 1, we show that reversing vertically oriented response options leads to significantly lower satisfaction ratings – from 90 to 82 percent in our case. Study 2 had survey respondents verify their response and found that on a reversed scale, the very-dissatisfied option was selected unintentionally in about half of the cases. The cause, shown by Study 3, is that survey respondents expect the positive option at the top and do not always pay sufficient attention to the question, combined with the similar spelling of satisfied and dissatisfied. To prevent unintentional responses from biasing the results, we recommend keeping the positive option at the top in vertically oriented scales with visually similar endpoint labels.
    Abstract: Welcome to the 16th edition of this column on recent books and journal articles in the field of public opinion, survey methods, survey statistics, Big Data, data science, and user experience research. Special issues of journals have a space in this article because, in our view, they are like edited books. We also added review papers from the journal series of Annual Reviews because these papers are seminal state-of-the-art write-ups, a mini book, if you wish, on a specific subject. This article is an update of the books and journals published in the 2022 article. Like the previous year, the books are organized by topic; this should help readers focus on their interests. You will note that we use very broad definitions of public opinion, survey methods, survey statistics, Big Data, data science, and user experience research. This is because there are many books published in different outlets that can be very useful to the readers of Survey Practice, even if they do not come from traditional sources of survey content. It is unlikely we have exhaustively listed all new books in each subcategory; we did our best scouting different resources and websites, but we take full responsibility for any omissions. The list is also focused only on books published in the English language and available for purchase (as an ebook or in print) at the time of this review (April 2024) and with the printed copyright year of 2023. Books are listed based on relevance to the topic, and no judgment is made in terms of the quality of the content. We let the readers do so. If you want to send information for the next issue, please send it to surveypractice.new.books@gmail.com.
    Abstract: Welcome to the 14th edition of this column on recent books and journal articles in the field of public opinion, survey methods, survey statistics, Big Data, data science, and user experience research. Special issues of journals have a space in this article because, in our view, they are like edited books. We also added review papers from the journal series of Annual Reviews because these papers are seminal state-of-the-art write-ups, a short book, if you wish, on a specific subject. This article is an update of the books and journals published in the 2020 article. Like the previous year, the books are organized by topic; this should help readers focus on their interests. You will note that we use very broad definitions of public opinion, survey methods, survey statistics, Big Data, data science, and user experience research. This is because there are many books published in different outlets that can be very useful to the readers of Survey Practice, even if they do not come from traditional sources of survey content. It is unlikely we have exhaustively listed all new books in each subcategory; we did our best scouting different resources and websites, but we take full responsibility for any omissions. The list is also focused only on books published in the English language and available for purchase (as an ebook or in print) at the time of this review (October 2023) and with the printed copyright year of 2021. Books are listed based on relevance to the topic, and no judgment is made in terms of the quality of the content. We let the readers do so. If you want to send information for the next issue, please send it to surveypractice.new.books@gmail.com.
    Abstract: After having reviewed hundreds of surveys in my career, I would like to share what I have learned. In this hands-on workshop I will discuss how you can improve the survey you are planning to field for your UX project. We will start with a decision tree to decide if a survey is actually the best method for your research process. Then we will move to some feasibility issues such as sample size and response rates. Before starting the second part of the workshop, I will present the top 10 issues I have found in my career reviewing surveys. In the second part of the workshop, I will answer survey questions from the audience using the chat feature in Hopin (or you can join on screen to pose live questions).
    Abstract: Customer satisfaction surveys are common in technology companies like Google. The standard satisfaction question asks respondents to rate how satisfied or dissatisfied they are with a product or service, generally going from very satisfied to very dissatisfied. When the scale is presented vertically, some survey literature suggests placing the positive end of the scale on top, as “up means good,” to avoid confusing respondents. We report on two studies. The first study shows that reversing the response options of a bipolar satisfaction question (very dissatisfied on top) leads to significantly lower reported satisfaction. In a between-group experiment, 3,000 Google Opinion Rewards (smartphone panel) respondents took a 1-question satisfaction survey. When the response options were reversed, participants were 10 times more likely to select the very dissatisfied option (5% versus 0.5% prevalence). They also took 11% more time to answer the reversed scale. The second study shows that this effect can be partially explained by respondents mistaking the word dissatisfied for satisfied. ~1,750 people responded to a reversed satisfaction question in an in-product survey on fonts.google.com. In a follow-up verification question (“You selected [answer option], was this your intention?”), 42.1% of the respondents indicated that they had selected very dissatisfied by mistake. Open-ended feedback suggests that respondents hadn’t read properly and expected the positive option on top. More experiments should be conducted on different samples to better understand the interaction of scale orientation with the type of scale (unipolar vs. bipolar).
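A minimal illustration of the kind of comparison reported above (roughly 5% versus 0.5% "very dissatisfied" prevalence between the reversed and standard scale orders). The per-arm sample sizes below assume an even split of the 3,000 respondents; this is not the study's actual data or analysis code.

```python
# Illustrative only: a chi-square test of the difference in "very dissatisfied"
# prevalence between the standard and reversed scale orders. Counts are derived
# from an assumed even split of the 3,000 respondents, not the authors' data.
from scipy.stats import chi2_contingency

n_standard, n_reversed = 1500, 1500                 # assumed even split of 3,000 respondents
very_dissat_standard = round(0.005 * n_standard)    # ~0.5% prevalence reported
very_dissat_reversed = round(0.05 * n_reversed)     # ~5% prevalence reported

table = [
    [very_dissat_standard, n_standard - very_dissat_standard],
    [very_dissat_reversed, n_reversed - very_dissat_reversed],
]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p_value:.2g}")
```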
    Abstract: This article discusses survey paradata, also called log data, and how they can be used to shed light on questionnaire design issues. Written for the AAPOR newsletter.
    Abstract: Welcome to the 15th edition of this column on recent books and journal articles in the field of public opinion, survey methods, survey statistics, Big Data, data science, and user experience research. Special issues of journals have a space in this article because, in our view, they are like edited books. We also added review papers from the journal series of Annual Reviews because these papers are seminal state-of-the-art write-ups, a mini book, if you wish, on a specific subject. This article is an update of the books and journals published in the 2021 article. Like the previous year, the books are organized by topic; this should help readers focus on their interests. You will note that we use very broad definitions of public opinion, survey methods, survey statistics, Big Data, data science, and user experience research. This is because there are many books published in different outlets that can be very useful to the readers of Survey Practice, even if they do not come from traditional sources of survey content. It is unlikely we have exhaustively listed all new books in each subcategory; we did our best scouting different resources and websites, but we take full responsibility for any omissions. The list is also focused only on books published in the English language and available for purchase (as an ebook or in print) at the time of this review (October 2023) and with the printed copyright year of 2022. Books are listed based on relevance to the topic, and no judgment is made in terms of the quality of the content. We let the readers do so. If you want to send information for the next issue, please send it to surveypractice.new.books@gmail.com.
    Abstract: In this study, we explore the use of a hybrid approach in online surveys, combining traditional form-based closed-ended questions with open-ended questions administered by a chatbot. We trained a chatbot using OpenAI's GPT-3 language model to produce context-dependent probes to responses given to open-ended questions. The goal was to mimic a typical professional survey interviewer scenario in which the interviewer is trained to probe the respondent when answering an open-ended question. For example, assume this initial exchange: “What did you find hard to use or frustrating when using Google Maps?” “It wasn't easy to find the address we were looking for.” The chatbot would follow up with “What made it hard to find the address?” or “What about it made it difficult to find?” or “What steps did you take to find it?”. The experiment consisted of a Qualtrics survey with 1,200 participants, who were randomly assigned to one of two groups. Both groups answered closed-ended questions, but the final open-ended question differed between the groups, with one group receiving a chatbot and the other group receiving a single open-ended question. The results showed that using a chatbot produced higher quality and more detailed responses compared to the single open-ended question approach, and respondents indicated a preference for answering open-ended questions with a chatbot. However, respondents also noted the importance of avoiding repetitive probes and expressed dislike for the uncertainty around the number of required exchanges. This hybrid approach has the potential to provide valuable insights for survey practitioners, although there is room for improvement in the conversation flow.
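A minimal sketch of how a context-dependent follow-up probe could be generated with a GPT-3-era completion model, in the spirit of the hybrid approach described above. The prompt wording, model name, parameters, and the generate_probe helper are assumptions for illustration (using the legacy, pre-1.0 openai Python client); this is not the study's implementation.

```python
# Sketch only: generate one neutral follow-up probe to an open-ended answer.
# Assumes the legacy (pre-1.0) openai Python client and an API key in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def generate_probe(question: str, answer: str) -> str:
    """Ask the model for one short, neutral follow-up probe to the respondent's answer."""
    prompt = (
        "You are a professional survey interviewer. Ask one short, neutral "
        "follow-up question that probes the respondent's answer for more detail.\n\n"
        f"Survey question: {question}\n"
        f"Respondent's answer: {answer}\n"
        "Follow-up probe:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",   # a GPT-3-era completion model, chosen for illustration
        prompt=prompt,
        max_tokens=60,
        temperature=0.7,
    )
    return response.choices[0].text.strip()

print(generate_probe(
    "What did you find hard to use or frustrating when using Google Maps?",
    "It wasn't easy to find the address we were looking for",
))
```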
    Abstract: Quant UX Con 2022 was the first ever general industry conference for Quantitative User Experience Researchers. When it was planned in the Fall of 2021, we expected to host an in-person, loosely structured “unconference” event for 150–200 people. By Spring 2022, registrations exceeded 2000 people and the organizing committee radically revised the format to be an online conference open to anyone, anywhere. When the event occurred in June 2022, there were over 2500 attendees with an average viewing time of 7.5 hours. It was a surprise and a delight to meet attendees from all over the world — more than 70 countries — who were interested in Quant UX. This volume compiles the papers and slides from presenters at Quant UX Con 2022. We are excited to share these with you! As you review the materials here, please keep a few points in mind:
    ● There were no recordings of the talks. When we planned the event, it was expected to be in person, and speakers expected it not to be recorded.
    ● Not every presentation is included. Some speakers were not able to include their materials due to publication restrictions. Other sessions were discussion panels that had no materials other than audience questions.
    ● The materials have varied formats. Some presenters shared raw slides; others shared annotated slides; and others shared full papers.
    ● If you have any questions or wish to follow up with authors, please contact them directly.
    In addition to this PDF, individual files are available at https://bit.ly/3SruyKD
    Kano Analysis: A Critical Survey Science Review
    Chris Chapman
    Proceedings of the 2022 Sawtooth Software Conference, May 2022, Orlando, FL, Sawtooth Software (2022)
    Abstract: The Kano method gives a “compelling” answer to questions about features, but it is impossible to know whether it is a correct answer. To put it differently, it will tell a story, quite possibly an incorrect one. This is because the standard Kano questions are low-quality survey items, often paired with questionable theory and scoring. The concepts are based on durable consumer goods and may be inapplicable to technology products. We follow our theoretical assessment of the Kano method with empirical studies examining the response scale, reliability, validity, and sample size requirements. We find that Kano validity is suspect on several counts, and a common scoring model is inappropriate because the items are multidimensional. Beyond the questions about validity, we find that category assignment may be unreliable with small samples (N < 200). Finally, we suggest alternatives that obtain similarly compelling answers using higher quality survey methods and analytic practices.
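For context on the common scoring model the abstract refers to, here is a sketch of the widely used Kano classification table that maps a functional/dysfunctional answer pair to a category. The labels and table follow one common convention and are illustrative only; this is not the authors' scoring code.

```python
# Sketch of the commonly used Kano classification table: each respondent answers a
# "functional" (feature present) and a "dysfunctional" (feature absent) item on a
# 5-point scale, and the pair is mapped to a category. One common convention only.

OPTIONS = ["like", "must-be", "neutral", "live-with", "dislike"]

# Rows: functional answer; columns: dysfunctional answer.
# A=Attractive, O=One-dimensional, M=Must-be, I=Indifferent, R=Reverse, Q=Questionable
KANO_TABLE = [
    # like  must-be neutral live-with dislike   (dysfunctional ->)
    ["Q",   "A",    "A",    "A",      "O"],     # functional = like
    ["R",   "I",    "I",    "I",      "M"],     # functional = must-be
    ["R",   "I",    "I",    "I",      "M"],     # functional = neutral
    ["R",   "I",    "I",    "I",      "M"],     # functional = live-with
    ["R",   "R",    "R",    "R",      "Q"],     # functional = dislike
]

def kano_category(functional: str, dysfunctional: str) -> str:
    """Classify one respondent's answer pair using the lookup table above."""
    return KANO_TABLE[OPTIONS.index(functional)][OPTIONS.index(dysfunctional)]

# Example: "I like it" when present, "I dislike it" when absent -> One-dimensional
print(kano_category("like", "dislike"))  # "O"
```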