Trust, Safety, Security, and Privacy Research
Overview
Google is committed to supporting researchers who are working to create a positive societal impact with technology.
Our Trust, Safety, Security, & Privacy Research Award focuses on work to improve digital trust, safety, privacy, and security across the online ecosystem.
We’re seeking research proposals and will provide unrestricted gifts to support research efforts across disciplines and areas of interest related to trust, safety, security, and privacy in technology. We welcome proposals from disciplines including, but not limited to, computer science, legal studies, public policy, social sciences, psychology, and human-computer interaction.
Quick links
Application status
Currently accepting applications. Apply Now.
- Applications open
- Applications close (11:59 pm PST)
- Notification of proposal decisions by
Research topics
This year’s call is open to any area of trust, safety, security, or privacy research where frontier AI is not central to the work. We have four areas of primary interest:
- Scams & Financial Fraud — How to characterize online scams, improve their detection, and reduce harm across the ecosystem. Of particular interest are:
- Detailed investigations into longer-running scams (more than a single moment, e.g., romance scams, pig butchering, etc.)
- Discovery or measurement of novel scam vectors (scams that target local communities, niche social media influencers)
- Reporting dynamics: improving scam reporting processes, limitations of scam reporting
- Protecting At-Risk Groups — At-risk groups are those which may have a greater risk of experiencing harm online or may have a more difficult time recovering. By understanding their experiences we can make the internet safer for everyone. Of particular interest:
- Work focused on the technology needs and use patterns of teenagers; for example, appropriate technology use from a child development perspective
- Tests of how solutions developed for one at-risk group may have limited efficacy for other groups
- Studies that focus on the intersection of two risk factors (e.g. low-income health care workers or female politicians and public figures)
- Frameworks and Taxonomies — Work that instills structure into spaces and can serve as the foundation for policy development or improved technical enforcement. Of particular interest:
- Systematic reviews to create a taxonomy of harm types in subareas such as mental health or financial harms
- Frameworks to describe specific dynamics in sociotechnical systems that can lead to harms
- Design patterns for Safety-by-Design approaches to emerging technology development
- Computational Thinking & Literacy
- Understanding and improving the digital literacy of people who make decisions or share information about AI (politicians, legislators, journalists), particularly through AI transparency artifacts (e.g., model cards)
- Repeatable methods to evaluate Digital/AI literacy programs and adapt them across contexts
We will also accept proposals on topics including: user and measurement studies, content moderation, hate speech, phishing and malware, software vulnerability and exploits, tailored advertising and profiling, harassment, violent extremism, applied cryptography, differential privacy, impacts of manipulated or synthetic media, hardware security and side-channel analysis, regulatory impacts (such as from General Data Protection Regulation, Digital Services Act, etc.), or other areas of trust and safety, privacy, or security research and practice.
We will pay particular attention to proposals with a collaborative focus, such as research teams whose PIs come from two different countries or two different disciplines.
Submissions to this call may have AI elements, but proposals focused on frontier AI systems or solutions should be submitted to the AI for Privacy, Safety, and Security call.
Award details
Award amounts vary by topic, up to $100,000 USD. Funding is intended to support the advancement of the proposed research and to cover about one year of work.
Funds will be disbursed as unrestricted gifts to the university or degree-granting research institution and are not intended for overhead or indirect costs. In the case of cross-institutional collaborations, we will distribute funds to a maximum of two institutions per proposal.
Requirements
Eligibility
- Open to professors (assistant, associate, etc.) at a university or degree-granting research institution.
- Applicants may only serve as Principal Investigator (PI) or co-PI on one proposal per round. There can be a maximum of 2 PIs per proposal.
- Proposals must be related to computing or technology.
Review criteria
- Faculty merit: The faculty member is accomplished in research, community engagement, and open-source contributions, with the potential to contribute to responsible innovation.
- Research merit: The proposed research is aligned with Google Research interests, innovative, and likely to have a significant impact on the field.
- Proposal quality: The research proposal is clear, focused, and well-organized, and it demonstrates the team's ability to successfully execute the research and achieve a significant impact.
- AI ethics principles: The research proposal strongly aligns with Google's AI Principles.
FAQs
See FAQs here.
Past award recipients
See past recipients.
More info
We will host info sessions with live Q&A. RSVP to attend here.