Pauline Anthonysamy
Pauline Anthonysamy is a Staff Privacy Engineer on Android. She holds a PhD in Computer Science from Security Lancaster, Lancaster University, UK. Her current work involves designing and developing new Android privacy features. Previously, she worked on building tools that helped developers strengthen their users' trust by surfacing signals in the Play Developer Console about how to improve their privacy posture.
Her wider interests include applications of natural language processing and machine learning to privacy, and modelling user behaviour by leveraging large-scale data to understand users' privacy preferences and concerns.
Authored Publications
Analyzing User Perspectives on Mobile App Privacy at Scale
International Conference on Software Engineering (ICSE) (2022)
Abstract
In this paper we present a methodology for analyzing users' concerns and perspectives about privacy at scale. We leverage NLP techniques to process millions of mobile app reviews and extract privacy concerns. Our methodology is composed of a binary classifier that distinguishes between privacy-related and non-privacy-related reviews. We use clustering to gather reviews that discuss similar privacy concerns, and employ summarization metrics to extract representative reviews that summarize each cluster. We apply our methods to 287M reviews for about 2M apps across the 29 categories in Google Play to identify the top privacy pain points in mobile apps. We identified approximately 440K privacy-related reviews. We find that privacy-related reviews occur in all 29 categories, with some issues arising across numerous app categories and others surfacing only in a small set of categories. We show empirical evidence that confirms dominant privacy themes – concerns about apps requesting unnecessary permissions, collection of personal information, frustration with privacy controls, tracking, and the selling of personal data. To our knowledge, this is the first large-scale analysis to confirm these findings on the basis of hundreds of thousands of user inputs. We also observe some unexpected findings, such as users warning each other not to install an app due to privacy issues, users uninstalling apps for privacy reasons, and positive reviews that reward developers for privacy-friendly apps. Finally, we discuss the implications of our method and findings for developers and app stores.
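The paper's models are not reproduced here; as a rough sketch of the classify-cluster-summarize pipeline the abstract describes, the Python below wires together an assumed TF-IDF/logistic-regression classifier, k-means clustering, and a nearest-to-centroid summarizer over invented toy reviews. None of these model choices or data are the authors'.

```python
# Minimal sketch (not the paper's implementation) of the three-stage
# pipeline described in the abstract: classify -> cluster -> summarize.
# Model choices and the toy reviews below are assumptions.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

# Toy labelled data: 1 = privacy-related review, 0 = not.
train_reviews = [
    "this app asks for way more permissions than it needs",
    "it keeps tracking my location in the background",
    "they sell my personal data to third parties",
    "great game, love the graphics",
    "crashes on startup, please fix",
    "the new update is much faster",
]
train_labels = [1, 1, 1, 0, 0, 0]

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train_reviews)

# Stage 1: binary classifier separating privacy from non-privacy reviews.
clf = LogisticRegression().fit(X_train, train_labels)

new_reviews = [
    "tracks my location even when the app is closed",
    "sells my personal data to advertisers",
    "fun puzzles but the levels repeat",
]
X_new = vectorizer.transform(new_reviews)
privacy_reviews = [r for r, y in zip(new_reviews, clf.predict(X_new)) if y == 1]

if privacy_reviews:
    # Stage 2: cluster privacy-related reviews that discuss similar concerns.
    X_priv = vectorizer.transform(privacy_reviews)
    n_clusters = min(2, len(privacy_reviews))
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X_priv)

    # Stage 3: summarize each cluster by the review closest to its centroid.
    for k in range(n_clusters):
        members = [i for i, c in enumerate(km.labels_) if c == k]
        sims = cosine_similarity(X_priv[members], km.cluster_centers_[k:k + 1])
        print(f"cluster {k}: {privacy_reviews[members[int(sims.argmax())]]}")
```

At the paper's scale (hundreds of millions of reviews) each stage would run on distributed infrastructure with far stronger models; the sketch only shows the shape of the pipeline.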
Reducing Permission Requests in Mobile Apps
Martin Pelikan
Ulfar Erlingsson
Giles Hogben
Proceedings of ACM Internet Measurement Conference (IMC) (2019)
Abstract
Users of mobile apps sometimes express discomfort or concerns with what they see as unnecessary or intrusive permission requests by certain apps. However, encouraging mobile app developers to request fewer permissions is challenging because there are many reasons why permissions are requested; furthermore, prior work has shown it is hard to disambiguate the purpose of a particular permission with high certainty. In this work we describe a novel, algorithmic mechanism intended to discourage mobile-app developers from asking for unnecessary permissions. Developers are incentivized by an automated alert, or "nudge", shown in the Google Play Console when their apps ask for permissions that are requested by very few functionally-similar apps – in other words, by their competition. Empirically, this incentive is effective, with significant developer response since its deployment: permissions have been redacted by 59% of apps that were warned, and this attenuation has occurred broadly across both app categories and app popularity levels. Importantly, billions of users' app installs from Google Play have benefited from these redactions.
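The deployed mechanism is not public in code form; the sketch below only illustrates the underlying peer-comparison idea under assumptions of my own: a permission is flagged when fewer than a hypothetical 5% of functionally-similar apps request it. The function name, threshold, and data are invented.

```python
# Illustrative sketch (not the deployed system) of a peer-comparison
# "nudge": flag permissions that very few functionally-similar apps request.
from collections import Counter

# Hypothetical threshold: warn if under 5% of peer apps request a permission.
RARITY_THRESHOLD = 0.05

def unusual_permissions(app_permissions, peer_permission_sets):
    """Return the app's permissions that are rare among its peers.

    app_permissions: set of permission strings for the app under review.
    peer_permission_sets: permission sets of functionally-similar apps.
    """
    counts = Counter()
    for peer in peer_permission_sets:
        counts.update(peer)
    n_peers = len(peer_permission_sets)
    return {p for p in app_permissions if counts[p] / n_peers < RARITY_THRESHOLD}

# Toy example: a flashlight-style app among three peers (real peer sets
# would be far larger and come from app-similarity analysis).
peers = [{"CAMERA"}, {"CAMERA", "VIBRATE"}, {"CAMERA"}]
app = {"CAMERA", "READ_CONTACTS"}
print(unusual_permissions(app, peers))  # {'READ_CONTACTS'} -> nudge developer
```

The design choice worth noting is that the comparison set is the app's competition, which sidesteps the hard problem, noted in the abstract, of inferring why any single permission is requested.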
The Good, the Bad and the Ugly: A Study of Security Decisions in a Cyber-Physical Systems Game
Awais Rashid
Maria Pinto-Albuquerque
Asad Syed
IEEE Transactions on Software Engineering, Issue: 99 (2017)
Abstract
Stakeholders’ security decisions play a fundamental role in determining security requirements, yet, little is currently understood about how different stakeholder groups within an organisation approach security and the drivers and tacit biases underpinning their decisions. We studied and contrasted the security decisions of three demographics – security experts, computer scientists and managers – when playing a tabletop game that we designed and developed. The game tasks players with managing the security of a cyber-physical environment while facing various threats. Analysis of 12 groups of players (4 groups in each of our demographics) reveals strategies that repeat in particular demographics, e.g., managers and security experts generally favoring technological solutions over personnel training, which computer scientists preferred. Surprisingly, security experts were not ipso facto better players – in some cases, they made very questionable decisions – yet they showed a higher level of confidence in themselves. We classified players’ decision-making processes, i.e., procedure-, experience-, scenario- or intuition-driven. We identified decision patterns, both good practices and typical errors and pitfalls. Our game provides a requirements sandbox in which players can experiment with security risks, learn about decision-making and its consequences, and reflect on their own perception of security.
Privacy Requirements: Present & Future
Awais Rashid
Ruzanna Chitchyan
39th International Conference on Software Engineering (2017)
Abstract
Software systems are increasingly open, handle large amounts of personal or other sensitive data, and are intricately linked with the daily lives of individuals and communities. This poses a range of privacy requirements. Such privacy requirements are typically treated as instances of requirements pertaining to compliance, traceability, access control, verification or usability. Though important, such approaches assume that the scope of the privacy requirements can be established a priori and that such scope does not vary drastically once the system is deployed. User data and information, however, exist in an open, hyper-connected and potentially "unbounded" environment. Furthermore, "privacy requirements – present" and "privacy requirements – future" may differ significantly, as the privacy implications are often emergent a posteriori. Effective treatment of privacy requirements therefore requires techniques and approaches that fit the inherent openness and fluidity of the environment through which user data and information flows are shared. This paper surveys the state of the art and presents some potential directions in the way privacy requirements should be treated. We reflect on the limitations of existing approaches with regard to unbounded privacy requirements and highlight a set of key challenges for requirements engineering research on managing privacy in such unbounded settings.
Inferring semantic mapping between policies and code: the clue is in the language
Matthew Edwards
Chris Weichel
Awais Rashid
International Symposium on Engineering Secure Software and Systems, Springer (2016)
Abstract
A common misstep in the development of security and privacy solutions is the failure to keep the demands resulting from high-level policies in line with the actual implementation that is supposed to operationalize those policies. This is especially problematic in the domain of social networks, where software typically predates policies and then evolves alongside its user base and any changes in policies that arise from their interactions with (and the demands that they place on) the system. Our contribution targets this specific problem, drawing together the assurances actually presented to users in the form of policies and the large codebases with which developers work. We demonstrate that a mapping between policies and code can be inferred from the semantics of the natural language. These semantics manifest not only in the policy statements but also in coding conventions. Our technique, implemented in a tool (CASTOR), can infer semantic mappings with an F1 accuracy of 70% and 78% for two social networks, Diaspora and Friendica respectively – as compared with a ground-truth mapping established through manual examination of the policies and code.
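CASTOR itself is not reproduced here; as a rough illustration of inferring a policy-to-code mapping from natural-language semantics, the sketch below splits camelCase/snake_case identifiers into words and matches each policy sentence to its most similar identifier by TF-IDF cosine similarity. The example policies, symbols, and the choice of TF-IDF are invented assumptions, not the paper's technique in detail.

```python
# Rough illustration (not the paper's CASTOR tool) of mapping policy
# statements to code by comparing natural-language semantics: identifiers
# are split into words and matched to policy sentences by TF-IDF cosine
# similarity. All names and data below are invented.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def identifier_words(name):
    """Split a camelCase or snake_case identifier into lowercase words."""
    spaced = re.sub(r"([a-z0-9])([A-Z])", r"\1 \2", name).replace("_", " ")
    return spaced.lower()

# Hypothetical policy statements and code symbols from a social network.
policies = [
    "Only friends can view your photos.",
    "You may delete your account at any time.",
]
symbols = ["check_friend_can_view_photo", "deleteUserAccount", "renderSidebar"]

docs = policies + [identifier_words(s) for s in symbols]
tfidf = TfidfVectorizer().fit_transform(docs)
sims = cosine_similarity(tfidf[:len(policies)], tfidf[len(policies):])

for i, policy in enumerate(policies):
    j = sims[i].argmax()
    print(f"{policy!r} -> {symbols[j]} (similarity {sims[i, j]:.2f})")
```

In practice one would also need the thresholding and manual ground truth the abstract mentions to decide when a best match is good enough to count as a mapping.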
Software engineering for privacy in-the-large
Awais Rashid
International Conference on Software Engineering, IEEE Press (2015)
Abstract
There will be an estimated 35 zettabytes (35 × 10^21 bytes) of digital records worldwide by the year 2020. This effectively amounts to privacy management on an ultra-large scale. In this briefing, we discuss the privacy challenges posed by such an ultra-large-scale ecosystem, which we term "Privacy in the Large". We will contrast existing approaches to privacy management, reflect on their strengths and limitations in this regard, and outline key software engineering research and practice challenges to be addressed in the future.