Abhishek Roy

Abhishek is a Staff UX Researcher leading user research on civics and scams within Google's Trust & Safety team. In this role, he leads quantitative and qualitative studies that generate insights into user behavior, perceptions, and preferences, drives actionable changes that promote user protection, and advocates for those changes with product and policy teams. He has worked at Google for over 10 years and is passionate about advocating for at-risk users and solving complex problems in human-computer interaction. Previously, he led user research and policy enforcement for products including Google News, Google Discover, and Google Search.
Authored Publications
    Seeking in Cycles: How Users Leverage Personal Information Ecosystems to Find Mental Health Information
    Ashlee Milton
    Fernando Maestre
    Rebecca Umbach
    Stevie Chancellor
    Proceedings of the CHI Conference on Human Factors in Computing Systems (2024)
    Information is crucial to how people understand their mental health and well-being, and many turn to online sources found through search engines and social media. We present the findings from an interview study (n = 17) of participants who use online platforms to seek information about their mental illnesses. We found that participants leveraged multiple platforms in a cyclical process for finding information from their personal information ecosystems, driven by the adoption of new information and uncertainty surrounding the credibility of information. Concerns about privacy, fueled by perceptions of stigma and platform design, also influenced their information-seeking decisions. Our work proposes theoretical implications for social computing and information retrieval on information seeking in users' personal information ecosystems. We also offer design implications to support users in navigating their personal information ecosystems to find mental health information.
    Conspiracy influencers have become a major means of spreading health misinformation, particularly during the COVID-19 pandemic. Previous research has found that a small number of these influencers are responsible for a large portion of misinformation related to public health. Although there has been research on the spread of misinformation, there has been comparatively little on the strategies conspiracy influencers use to spread their message across platforms. To better understand these strategies, we conducted a cross-sectional study of 55 influencers, analyzing their platform usage, audience size, account creation date, and content originality. Our results indicate that these influencers use multiple platforms to circumvent algorithmic discrimination and deplatforming, that they tailor their content to monetization channels, and that, despite the rise in popularity of unmoderated platforms, they still rely on moderated platforms to build an audience. Our findings can inform strategies to combat the spread of health misinformation in the online ecosystem.
    Evidence-Based Misinformation Interventions: Challenges and Opportunities for Measurement and Collaboration
    Yasmin Green
    Andrew Gully
    Yoel Roth
    Joshua Tucker
    Alicia Wanless
    Carnegie Endowment for International Peace (2023)
    The lingering coronavirus pandemic has only underscored the need to find effective interventions to help internet users evaluate the credibility of the information before them. Yet a divide remains between researchers within digital platforms and those in academia and other research professions who are analyzing interventions. Beyond issues related to data access, a challenge deserving papers of its own, opportunities exist to clarify the core competencies of each research community and to build bridges between them in pursuit of the shared goal of improving user-facing interventions that address misinformation online. This paper attempts to contribute to such bridge-building by posing questions for discussion: How do different incentive structures determine the selection of outcome metrics and the design of research studies by academics and platform researchers, given the values and objectives of their respective institutions? What factors affect the evaluation of intervention feasibility for platforms that are not present for academics (for example, platform users' perceptions, measurability at scale, interaction, and longitudinal effects on metrics that are introduced in real-world deployments)? What are the mutually beneficial opportunities for collaboration (such as increased insight-sharing from platforms to researchers about user feedback regarding a diversity of intervention designs)? Finally, we introduce a measurement attributes framework to aid development of feasible, meaningful, and replicable metrics for researchers and platform practitioners to consider when developing, testing, and deploying misinformation interventions.