Trust & safety
Overview
Google is committed to supporting researchers who are working to create a positive societal impact with technology. Our Trust & Safety Research Award focuses on work to improve digital safety across the online ecosystem.
We’re seeking research proposals and will provide unrestricted grants to support research efforts across disciplines and areas of interest related to trust and safety in technology. We welcome proposals from disciplines including, but not limited to, computer science, legal studies, public policy, social sciences, psychology, and human-computer interaction.
Application status
Applications are currently closed.
Decisions for the June 2024 application cycle will be announced via email by October 2024. Please check back in summer 2025 for details on future application cycles.
Research topics
Proposals should specifically cover one or more of the following topics:
- Scams and financial fraud — How to characterize and reduce harm across the ecosystem of online scams. Of particular interest are:
- Holistic approaches to scams: human rights perspectives on scam perpetration, multiplatform scam dynamics, scams as a service, underserved and vulnerable populations' response to scams
- Reporting dynamics: improving scam reporting processes, limitations to scam reporting systems
- Interventions: effective scam interventions beyond digital literacy
- Safety by design — How to proactively prevent online harms through design choices. Of particular interest are:
- System safety: improving the validity and reliability of safety metrics across product types and harm categories, and better ways to calculate the risk of different kinds of online harm
- Design patterns for safety: frameworks for understanding how to mitigate risks of different types of online interactions; formal specifications, limits, and protocols for safety requirements; evaluations of current systems that may cause users to feel unsafe or too safe
- Cross-domain knowledge: what existing safety measures from other domains are relevant for online safety?
- Misinformation — How to limit the harms of misinformation and help communities with support, tools, and processes for increasing their information quality and resilience. Of particular interest are:
- Misinformation interventions: effective measures of misinformation resilience and susceptibility for different users and subpopulations, mitigation of misinformation harms through information literacy or community tools, misinformation measures across different misinformation types and formats (e.g., video, audio)
- Impact of generative AI on misinformation: ensuring AI models do not return low-quality information from large training datasets, impact of genAI on the prevalence of misinformation, use of genAI tools by misinformation influencers for profit, use of genAI tools to help communities access higher-quality information for community resilience
- Child safety and image-based sexual abuse (IBSA) — with a focus on teenagers’ attitudes, behaviors, and comprehension of their safety. Of particular interest are:
- Non-consensual explicit imagery (NCEI): developmentally-appropriate educational materials about NCEI including evaluations, estimating the economic burden of NCEI, attitudes surrounding intimate synthetic imagery
- Sociological perspectives: exploring more holistically how teenagers think about and share explicit imagery, including when interventions for IBSA may feel invasive
- Sextortion: interventions to prevent sextortion or grooming, preventing or detecting sextortion via AI or ML, financial dynamics of sextortion
- Generative AI — How users of Generative AI systems can be safer in their personal or professional use of these systems. Of particular interest are:
- Generative AI for safety: how users engage with personalizable safety classifiers, using Generative AI to identify malicious AI-generated content (e.g., disinformation), tailored AI support for online harms (e.g., reporting bots)
- Making generative AI safer for end users: education and explainability of AI systems (including explainability of safety interventions), age-appropriate design for Generative AI, harm mitigation beyond classifier-based blocking
We will also accept proposals on content moderation, gender-based violence, hate speech, harassment, violent extremism, manipulated or synthetic media, regulatory impacts (such as from the General Data Protection Regulation, the Digital Services Act, the European Union Artificial Intelligence Act, etc.), or other areas of trust and safety research and practice.
Award details
Award amounts vary by topic, up to $100K USD, and are intended to support the advancement of the professor's research during the academic year in which the award is provided.
Funds will be disbursed as unrestricted gifts to the university and are not intended for overhead or indirect costs. In the case of cross-institutional collaborations, we will distribute funds to a maximum of two institutions.
Requirements
Eligibility
- Open to professors (assistant, associate, etc.) at a university or degree-granting research institution.
- Applicants may serve as Principal Investigator (PI) or co-PI on only one proposal per round. There can be a maximum of two PIs per proposal.
- Proposals must be related to computing or technology.
In addition to the guidance provided in our FAQ section, proposals should consider the following:
We are interested in projects that address the following aspects of online safety, both within the priority topic areas outlined above and in any other topic areas relevant to trust and safety research and practice:
- Interventions: What specific interventions are effective in preventing online harms in different topic areas? What are the limitations of existing interventions? How might interventions be combined to improve efficacy?
- Metrics: What metrics accurately capture prevalence or severity of topic-specific harms? What metrics can measure changes in dynamic harms over time? How should we think about comparing the effectiveness of interventions across different platforms or products?
- Underserved, vulnerable, and/or hidden populations: How do harms in the priority topic areas above manifest differently in countries outside the US? How do interventions for harms need to be tailored for different subpopulations to be effective? How can we characterize the specific gaps between universal policies and localized interpretation and enforcement?
Review criteria
- Faculty merit: The faculty member is accomplished in research, community engagement, and open-source contributions, with potential to contribute to responsible innovation.
- Research merit: The proposed research aligns with Google Research interests, is innovative, and is likely to have a significant impact on the field.
- Proposal quality: The research proposal is clear, focused, and well-organized, and it demonstrates the team's ability to successfully execute the research and achieve a significant impact.
- AI ethics principles: The research proposal strongly aligns with Google's AI Principles.
FAQs
FAQs are listed on the GARA landing page.