AI for Privacy, Safety, and Security
Overview
Google is committed to supporting researchers who are working to create a positive societal impact with AI.
Our AI for Privacy, Safety, and Security Research Award focuses on work that leverages frontier AI models to improve digital safety, privacy, and security.
We’re seeking research proposals from universities and degree-granting research institutions, and will provide unrestricted gifts to support selected research across disciplines at the intersection of AI and privacy, safety, and security. We welcome proposals from disciplines including, but not limited to, computer science, legal studies, public policy, social sciences, psychology, and human-computer interaction.
Quick links
Application status
Currently accepting applications. Apply Now.
- Applications open: -
- Applications close (11:59 pm PST): -
- Notification of proposal decisions by: -
Research topics
This year’s call has three primary areas of interest where frontier AI is central to the research:
- Novel Applications of AI for Privacy, Safety, and Security — Transforming existing protections or envisioning new protections with AI. Topics might include:
- Vulnerability analysis and fuzzing
- Network monitoring
- Preventing scams and fraud
- Detection and response
- System hardening
- Compliance
- Improving AI Tooling and Benchmarks for Privacy, Safety, and Security — Novel research into tooling, agentic capabilities, and benchmarks that demonstrate fundamental advancements in AI capabilities. Topics might include:
- Architecture designs that integrate cybersecurity tools and improve reasoning (such as MCP and A2A).
- Privacy, safety, and security benchmarks to measure progress on critical capabilities.
- Mitigating Adversarial Usage of AI for Harming Privacy, Safety, and Security — Ensuring that AI models benefit defenders and not attackers. Topics might include:
- Investigations of AI use in real-world attacks, such as scams, deepfakes, and hacking.
- Investigations of potential harmful applications of AI.
- Investigations of how to improve AI safeguards, such as watermarking and alignment.
Submissions that are relevant to privacy, safety, or security but do not focus on AI should instead be submitted to our Trust, Safety, Security, and Privacy Research open call.
Award details
Award amounts vary by topic, up to $100,000 USD. Funding is intended to support the advancement of the proposed research and is expected to cover about one year of work.
Funds will be disbursed as unrestricted gifts to the university or degree-granting research institution and are not intended for overhead or indirect costs. In the case of cross-institutional collaborations, we will distribute funds to a maximum of two institutions per proposal.
Requirements
Eligibility
- Open to professors (assistant, associate, etc.) at a university or degree-granting research institution.
- Applicants may serve as Principal Investigator (PI) or co-PI on only one proposal per round. There can be a maximum of two PIs per proposal.
- Proposals must be related to computing or technology.
Review criteria
- Faculty merit: The applicant is accomplished in research, community engagement, and open-source contributions, with the potential to contribute to responsible innovation.
- Research merit: The proposed research aligns with Google Research interests, is innovative, and is likely to have a significant impact on the field.
- Proposal quality: The research proposal is clear, focused, and well organized, and demonstrates the team's ability to execute the research successfully and achieve significant impact.
- AI ethics principles: The research proposal strongly aligns with Google's AI Principles.
FAQs
See the FAQs here.
Past award recipients
See past recipients.
More info
We will host info sessions with live Q&A. RSVP to attend here.