Elie Bursztein
I lead Google's anti-abuse research team, which invents ways to protect users against cybercriminal activities and Internet threats. I've redesigned Google's CAPTCHA to make it easier, and I've made Chrome safer and faster by implementing better cryptography. I spend my spare time doing video game research, photography, and magic tricks. I was born in Paris, France, wear berets, and now live with my wife in Mountain View, California.
Authored Publications
Leveraging Virtual Reality to Enhance Diversity and Inclusion training at Google
Karla Brown
Patrick Gage Kelley
Leonie Sanderson
2024 CHI Conference on Human Factors in Computing Systems, ACM
Virtual reality (VR) has emerged as a promising educational training method, offering a more engaging and immersive experience than traditional approaches. In this case study, we explore its effectiveness for diversity, equity, and inclusion (DEI) training, with a focus on how VR can help participants better understand and appreciate different perspectives. We describe the design and development of a VR training application that aims to raise awareness about unconscious biases and promote more inclusive behaviors in the workplace.
Based on feedback from Google employees who took the training, we report initial findings suggesting that VR is an effective way to enhance DEI training. In particular, participants reported that the VR training helped them better recognize biases and respond to them effectively. However, our findings also highlight some challenges with VR-based DEI training, which we discuss in terms of future research directions.
Generalized Power Attacks against Crypto Hardware using Long-Range Deep Learning
Karel Král
Marina Zhang
Transactions on Cryptographic Hardware and Embedded Systems (TCHES), IACR (2024)
To make cryptographic processors more resilient against side-channel attacks, engineers have developed various countermeasures. However, the effectiveness of these countermeasures is often uncertain, as it depends on the complex interplay between software and hardware. Assessing a countermeasure's effectiveness using profiling techniques or machine learning so far requires significant expertise and effort to adapt to each new target, which makes such assessments expensive. We argue that including cost-effective automated attacks will help chip design teams to quickly evaluate their countermeasures during the development phase, paving the way to more secure chips.
In this paper, we lay the foundations toward such an automated system by proposing GPAM, the first deep-learning system for power side-channel analysis that generalizes across multiple cryptographic algorithms, implementations, and side-channel countermeasures without the need for manual tuning or trace preprocessing. We demonstrate GPAM's capability by successfully attacking four hardened hardware-accelerated elliptic-curve digital-signature implementations. We showcase GPAM's ability to generalize across multiple algorithms by attacking a protected AES implementation and achieving performance comparable to state-of-the-art attacks, but without manual trace curation and within a limited budget. We release our data and models as an open-source contribution so that the community can independently replicate our results and build on them.
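To ground the idea of learned power side-channel attacks, here is a minimal, hypothetical sketch: a small PyTorch classifier trained to recover a secret byte from synthetic traces that leak the byte's Hamming weight at a single sample point. This is a toy under stated assumptions, not GPAM itself; GPAM's long-range architecture, real trace capture, and countermeasure handling are all out of scope here.

```python
# Toy deep-learning side-channel profiling sketch (NOT GPAM).
# Assumptions: synthetic traces, a single-point Hamming-weight leak,
# and an arbitrary small 1-D CNN.
import numpy as np
import torch
import torch.nn as nn

TRACE_LEN, N_CLASSES = 1000, 256  # samples per trace; one class per byte value

def synthetic_traces(n):
    """Generate noisy traces where the secret byte's Hamming weight
    leaks at sample 500 (an illustrative leakage model)."""
    secrets = np.random.randint(0, 256, size=n)
    traces = np.random.randn(n, TRACE_LEN).astype(np.float32)
    hw = np.unpackbits(secrets.astype(np.uint8)[:, None], axis=1).sum(axis=1)
    traces[:, 500] += 0.8 * hw  # leakage spike proportional to Hamming weight
    return torch.from_numpy(traces), torch.from_numpy(secrets)

model = nn.Sequential(            # small 1-D CNN stand-in for a real model
    nn.Unflatten(1, (1, TRACE_LEN)),
    nn.Conv1d(1, 16, kernel_size=32, stride=4), nn.ReLU(),
    nn.AdaptiveAvgPool1d(8), nn.Flatten(),
    nn.Linear(16 * 8, N_CLASSES),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = synthetic_traces(4096)
for step in range(10):            # profiling phase: fit on labeled traces
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
```

In a real profiled attack, the trained model would then rank hypotheses for the secret value on fresh traces captured from the victim device.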
Identifying and Mitigating the Security Risks of Generative AI
Clark Barrett
Brad Boyd
Brad Chen
Jihye Choi
Amrita Roy Chowdhury
Anupam Datta
Soheil Feizi
Kathleen Fisher
Tatsunori B. Hashimoto
Dan Hendrycks
Somesh Jha
Daniel Kang
Florian Kerschbaum
Eric Mitchell
John Mitchell
Zulfikar Ramzan
Khawaja Shams
Dawn Song
Ankur Taly
Diyi Yang
Foundations and Trends in Privacy and Security, 6 (2023), pp. 1-52
Every major technical invention resurfaces the dual-use dilemma: the new technology has the potential to be used for good as well as for harm. Generative AI (GenAI) techniques, such as large language models (LLMs) and diffusion models, have shown remarkable capabilities (e.g., in-context learning, code completion, and text-to-image generation and editing). However, GenAI can be used just as well by attackers to generate new attacks and increase the velocity and efficacy of existing attacks. This paper reports the findings of a workshop held at Google (co-organized by Stanford University and the University of Wisconsin-Madison) on the dual-use dilemma posed by GenAI. Rather than being comprehensive, it reports some of the most interesting findings from the workshop. We discuss short-term and long-term goals for the community on this topic. We hope this paper provides a launching point for this important topic and poses interesting problems that the research community can work to address.
Hybrid Post-Quantum Signatures in Hardware Security Keys
Diana Ghinea
Jennifer Pullman
Julien Cretin
Rafael Misoczki
Stefan Kölbl
Applied Cryptography and Network Security Workshop (2023)
Recent advances in quantum computing are increasingly jeopardizing the security of cryptosystems currently in widespread use, such as RSA or elliptic-curve signatures. To address this threat, researchers and standardization institutes have accelerated the transition to quantum-resistant cryptosystems, collectively known as Post-Quantum Cryptography (PQC). These PQC schemes present new challenges due to their larger memory and computational footprints and their higher chance of latent vulnerabilities.
In this work, we address these challenges by introducing a scheme to upgrade the digital signatures used by security keys to PQC. We introduce a hybrid digital signature scheme based on two building blocks: a classically-secure scheme, ECDSA, and a post-quantum secure one, Dilithium.
Our hybrid scheme maintains the guarantees of each underlying building block even if the other one is broken, thus being resistant to classical and quantum attacks.
We experimentally show that our hybrid signature scheme can successfully execute on current security keys, even though secure PQC schemes are known to require substantial resources.
We publish an open-source implementation of our scheme at https://github.com/google/OpenSK/releases/tag/hybrid-pqc so that other researchers can reproduce our results on an nRF52840 development kit.
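As a rough illustration of the hybrid construction described above, the sketch below pairs an ECDSA signature with a Dilithium one over the same message and accepts only if both verify, so the scheme stays secure while either building block remains unbroken. The ECDSA half uses the real Python `cryptography` package; the Dilithium side is injected as a callable because no standard Python binding exists, and this is not the OpenSK implementation.

```python
# Minimal hybrid-signature sketch (NOT the OpenSK firmware implementation).
# pq_sign / pq_verify are stand-ins for any Dilithium binding of your choice.
from dataclasses import dataclass
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

@dataclass
class HybridSignature:
    ecdsa_sig: bytes  # classical component
    pq_sig: bytes     # post-quantum component (e.g., Dilithium)

def hybrid_sign(msg: bytes, ec_priv, pq_sign) -> HybridSignature:
    """Sign msg with both building blocks independently."""
    return HybridSignature(
        ecdsa_sig=ec_priv.sign(msg, ec.ECDSA(hashes.SHA256())),
        pq_sig=pq_sign(msg),
    )

def hybrid_verify(msg: bytes, sig: HybridSignature, ec_pub, pq_verify) -> bool:
    """Accept only if BOTH signatures verify: the hybrid remains secure
    as long as at least one underlying scheme is unbroken."""
    try:
        ec_pub.verify(sig.ecdsa_sig, msg, ec.ECDSA(hashes.SHA256()))
    except InvalidSignature:
        return False
    return pq_verify(msg, sig.pq_sig)
```

The "verify both" rule is what gives the AND-style security guarantee the abstract describes: a quantum attacker who breaks ECDSA still faces Dilithium, and a flaw in the young PQC scheme is backstopped by ECDSA.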
Designing Toxic Content Classification for a Diversity of Perspectives
Deepak Kumar
Patrick Gage Kelley
Joshua Mason
Zakir Durumeric
Michael Bailey
(2021)
In this work, we demonstrate how existing classifiers for identifying toxic comments online fail to generalize to the diverse concerns of Internet users. We survey 17,280 participants to understand how user expectations for what constitutes toxic content differ across demographics, beliefs, and personal experiences. We find that groups historically at risk of harassment, such as people who identify as LGBTQ+ or young adults, are more likely to flag a random comment drawn from Reddit, Twitter, or 4chan as toxic, as are people who have personally experienced harassment in the past. Based on our findings, we show how current one-size-fits-all toxicity classification algorithms, like the Perspective API from Jigsaw, can improve in accuracy by 86% on average through personalized model tuning. Ultimately, we highlight current pitfalls and new design directions that can improve the equity and efficacy of toxic content classifiers for all users.
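As a toy illustration of why one-size-fits-all cutoffs fall short, the sketch below picks a per-user decision threshold on top of a generic toxicity score so that flags match that user's own judgments. The data is synthetic, and threshold tuning is only one simple, hypothetical form of the personalized model tuning the paper discusses.

```python
# Hypothetical per-user threshold tuning on top of a generic toxicity score.
# Scores and labels below are synthetic; not the paper's actual method.
import numpy as np

def personal_threshold(scores: np.ndarray, user_labels: np.ndarray) -> float:
    """Pick the cutoff on generic model scores that maximizes agreement
    with one user's own toxic / not-toxic labels."""
    candidates = np.linspace(0.0, 1.0, 101)
    agreement = [((scores >= t) == user_labels).mean() for t in candidates]
    return float(candidates[int(np.argmax(agreement))])

scores = np.random.rand(200)       # generic model's toxicity scores in [0, 1]
sensitive_user = scores > 0.3      # this user flags far more content as toxic
print(personal_threshold(scores, sensitive_user))  # ~0.3, not a global 0.5
```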
SoK: Hate, Harassment, and the Changing Landscape of Online Abuse
Devdatta Akhawe
Michael Bailey
Dan Boneh
Nicola Dell
Zakir Durumeric
Patrick Gage Kelley
Deepak Kumar
Damon McCoy
Sarah Meiklejohn
Thomas Ristenpart
Gianluca Stringhini
(2021)
We argue that existing security, privacy, and anti-abuse protections fail to address the growing threat of online hate and harassment. In order for our community to understand and address this gap, we propose a taxonomy for reasoning about online hate and harassment. Our taxonomy draws on over 150 interdisciplinary research papers that cover disparate threats ranging from intimate partner violence to coordinated mobs. In the process, we identify seven classes of attacks, such as toxic content and surveillance, that each stem from different attacker capabilities and intents. We also provide longitudinal evidence from a three-year survey that hate and harassment are a pervasive and growing experience for online users, particularly for at-risk communities like young adults and people who identify as LGBTQ+. Responding to each class of hate and harassment requires a unique strategy, and we highlight five such potential research directions that ultimately empower individuals, communities, and platforms to do so.
“Why wouldn’t someone think of democracy as a target?”: Security practices & challenges of people involved with U.S. political campaigns
Patrick Gage Kelley
Tara Matthews
Lee Carosi Dunn
Proceedings of the USENIX Security Symposium (2021)
People who are involved with political campaigns face increased digital security threats from well-funded, sophisticated attackers, especially nation-states. Improving political campaign security is a vital part of protecting democracy. To identify campaign security issues, we conducted qualitative research with 28 participants across the U.S. political spectrum to understand the digital security practices, challenges, and perceptions of people involved in campaigns. A main, overarching finding is that a unique combination of threats, constraints, and work culture leads people involved with political campaigns to use technologies from across platforms and domains in ways that leave them, and democracy, vulnerable to security attacks. Sensitive data was kept in a plethora of personal and work accounts, with ad hoc adoption of strong passwords, two-factor authentication, encryption, and access controls. No individual company, committee, organization, campaign, or academic institution can solve the identified problems on its own. To this end, we provide an initial understanding of this complex problem space and recommendations for how a diverse group of experts can begin working together to improve security for political campaigns.
CoinPolice: Detecting Hidden Cryptojacking Attacks with Neural Networks
(2020)
Traffic monetization is a crucial component of running most for-profit online businesses. One of its latest incarnations is cryptocurrency mining, where a website instructs the visitor's browser to participate in building a cryptocurrency ledger (e.g., Bitcoin, Monero) in exchange for a small reward in the same currency.
In essence, this practice trades the user's electric bill (or battery level) for cryptocurrency. With user consent, this exchange can be a legitimate funding source: for example, UNICEF has collected over 27k charity donations on a website dedicated to this purpose, thehopepage.org. Regrettably, this practice also easily lends itself to abuse. In this form, called cryptojacking, attackers surreptitiously mine in the user's browser, and profits are collected either by website owners or by hackers who planted the mining script into a vulnerable page.
Understandably, users frown upon this practice and have sought to mitigate it by installing blacklist-based browser extensions (the top three for Chrome total over one million installs), whereas researchers have devised more robust methods to detect it [1]–[6]. In turn, cryptojackers have been improving their evasion techniques, incorporating into their toolkits domain fluxing, content obfuscation, the use of WebAssembly, and throttling. The latter, for example, grew from a niche feature adopted by only one in ten sites in 2018 [2] to commonplace in 2019, reaching an adoption ratio of 58%. Whereas most state-of-the-art defenses address several of these evasion techniques, none is resistant against all of them.
In this paper, we offer a novel detection method, CoinPolice, that is robust against all of the aforementioned evasion techniques. CoinPolice flips throttling against cryptojackers, artificially varying the browser's CPU power to observe the presence of throttling. Based on a deep neural network classifier, CoinPolice can detect 97.87% of hidden miners with a low false positive rate (0.74%). We compare CoinPolice's performance with the current state of the art and show that our approach outperforms it when detecting aggressively throttled miners.
Finally, we deploy CoinPolice to perform the largest-scale cryptomining investigation to date, identifying 6,700 sites that monetize traffic in this fashion.
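To make the throttling-detection intuition concrete, here is a minimal, hypothetical sketch: impose a varying CPU budget on a page and flag pages whose CPU usage tracks the imposed budget, as an adapting (throttled) miner's would. Real browser instrumentation and CoinPolice's deep neural network classifier are omitted; the measurements, the correlation test, and the threshold are illustrative assumptions.

```python
# Toy version of the core idea: a throttled miner's CPU usage follows the
# CPU budget we artificially impose, while a benign page's does not.
# All measurements below are synthetic; not the CoinPolice implementation.
import numpy as np

def cpu_follows_cap(cap_signal, page_cpu_usage, threshold=0.8):
    """Flag pages whose CPU usage correlates strongly with the
    artificially varied CPU budget."""
    r = np.corrcoef(cap_signal, page_cpu_usage)[0, 1]
    return r >= threshold

caps = np.tile([1.0, 0.5, 0.25, 0.5], 25)          # imposed CPU budget
miner = 0.9 * caps + 0.05 * np.random.randn(100)   # miner saturates budget
static_page = 0.1 + 0.05 * np.random.randn(100)    # ordinary page: no link
print(cpu_follows_cap(caps, miner))        # True  -> suspicious
print(cpu_follows_cap(caps, static_page))  # False -> benign
```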
Spotlight: Malware Lead Generation at Scale
Bernhard Grill
Jennifer Pullman
Cecilia M. Procopiuc
David Tao
Borbala Benko
Proceedings of Annual Computer Security Applications Conference (ACSAC) (2020)
Malware is one of the key threats to online security today, with applications ranging from phishing mailers to ransomware and trojans. Due to the sheer size and variety of the malware threat, it is impractical to combat it as a whole. Instead, governments and companies have instituted teams dedicated to identifying, prioritizing, and removing specific malware families that directly affect their population or business model. The identification and prioritization of the most disconcerting malware families (known as malware hunting) is a time-consuming activity, accounting for more than 20% of the work hours of a typical threat intelligence researcher, according to our survey. To save this precious resource and amplify the team's impact on users' online safety, we present Spotlight, a large-scale malware lead-generation framework. Spotlight first sifts through a large malware data set to remove known malware families, based on first- and third-party threat intelligence. It then clusters the remaining malware into potentially undiscovered families and prioritizes them for further investigation using a score based on their potential business impact.
We evaluate Spotlight on 67M malware samples and show that it can produce top-priority clusters with over 99% purity (i.e., homogeneity), which is higher than simpler approaches and prior work. To showcase Spotlight's effectiveness, we apply it to ad-fraud malware hunting on real-world data. Using Spotlight's output, threat intelligence researchers were able to quickly identify three large botnets that perform ad fraud.
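As a rough sketch of the three-stage pipeline just described (filter known families, cluster the remainder, rank by potential impact), the snippet below uses naive exact-match grouping on a behavioral fingerprint and a made-up impact score; both are hypothetical stand-ins for Spotlight's real clustering and scoring components.

```python
# Hypothetical lead-generation pipeline sketch (NOT Spotlight's internals).
# Sample dicts, the "fingerprint" feature, and the impact score are invented
# for illustration.
from collections import defaultdict

def generate_leads(samples, known_family_hashes, top_n=3):
    # Stage 1: drop malware already attributed via threat intelligence.
    unknown = [s for s in samples if s["hash"] not in known_family_hashes]
    # Stage 2: cluster the remainder into candidate families
    # (here: naive exact grouping on a behavioral fingerprint).
    clusters = defaultdict(list)
    for s in unknown:
        clusters[s["fingerprint"]].append(s)
    # Stage 3: rank clusters by a crude business-impact estimate.
    def impact(cluster):
        return len(cluster) * max(s["est_affected_users"] for s in cluster)
    return sorted(clusters.values(), key=impact, reverse=True)[:top_n]
```

The ranking step is what turns raw clustering output into "leads": analysts review only the few clusters whose size and estimated reach suggest the biggest payoff, rather than the whole unknown corpus.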