Security, Privacy and Abuse Prevention
- Algorithms & Theory
- Climate & Sustainability
- Conferences & Events
- Data Management
- Data Mining & Modeling
- Distributed Systems & Parallel Computing
- Economics & Electronic Commerce
- Education Innovation
- General Science
- Generative AI
- Global
- Hardware & Architecture
- Health & Bioscience
- Human-Computer Interaction and Visualization
- Machine Intelligence
- Machine Perception
- Machine Translation
- Mobile Systems
- Natural Language Processing
- Networking
- Open Source Models & Datasets
- Photography
- Product
- Programs
- Quantum
- RAI-HCT Highlights
- Responsible AI
- Robotics
- Security, Privacy and Abuse Prevention
- Software Systems & Engineering
- Sound & Acoustics
- Speech Processing
- Year in Review
- Bridging the gap in differentially private model training (November 25, 2024)
- Protecting users with differentially private synthetic training data (May 16, 2024)
- Improving Gboard language models via private federated analytics (April 19, 2024)
- Advances in private training for production on-device language models (February 21, 2024)
- DP-Auditorium: A flexible library for auditing differential privacy (February 13, 2024)
- Sparsity-preserving differentially private training (December 8, 2023)
- Summary report optimization in the Privacy Sandbox Attribution Reporting API (December 4, 2023)
- Differentially private median and more (September 8, 2023)
- Announcing the first Machine Unlearning Challenge (June 29, 2023)
- Differentially private clustering for large-scale datasets (May 25, 2023)
- Making ML models differentially private: Best practices and open challenges (May 19, 2023)
- Differentially private heatmaps (April 18, 2023)