Daniel Theron
Daniel Theron leads the Data Science group in Alphabet's Internal Audit organization. His focus there includes developing quantitative approaches to audit and assurance work, collaborating with other teams at Google on AI standards, governance frameworks, and audit approaches, and promoting the use of AI for social good through Google.org.

Authored Publications
Large technology firms face the problem of moderating content on their platforms for compliance with laws and policies. To accomplish this at the scale of billions of pieces of content per day, a combination of human and machine review is necessary to label content. However, human error and subjective measurement methods are inherent in many audit procedures. This paper introduces statistical analysis methods and mathematical techniques to determine, quantify, and minimize these sources of risk, and shows that these methodologies can reduce reviewer bias.
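One common way to quantify reviewer bias of the kind the abstract describes is an inter-rater agreement statistic. The paper does not specify its methods, so the following is only an illustrative sketch using Cohen's kappa, which corrects raw agreement between two reviewers for agreement expected by chance; the reviewer labels below are invented examples, not data from the paper.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two reviewers, corrected for chance.

    kappa = (p_observed - p_expected) / (1 - p_expected), where p_expected
    is the chance-agreement rate implied by each reviewer's label frequencies.
    """
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical example: two reviewers labeling the same 8 items as
# compliant ("ok") or violating ("viol").
a = ["ok", "ok", "viol", "ok", "viol", "viol", "ok", "ok"]
b = ["ok", "viol", "viol", "ok", "viol", "ok", "ok", "ok"]
print(round(cohens_kappa(a, b), 3))  # 0.467
```

Raw agreement here is 6/8 = 0.75, but kappa is only about 0.47 once chance agreement is removed, which is why chance-corrected statistics are preferred when auditing subjective labeling.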
This paper demonstrates how the limitations of pre-trained models and open evaluation datasets factor into assessing the performance of binary semantic similarity classification tasks. Because (1) end-user-facing documentation on the curation of these datasets and the training regimes of pre-trained models is often not easily accessible, and (2) there is lower friction and higher demand to deploy such systems quickly in real-world contexts, our study reinforces prior work showing performance disparities across datasets, embedding techniques, and distance metrics, while highlighting the importance of understanding how data is collected, curated, and analyzed in semantic similarity classification.
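The abstract's point about distance metrics can be seen with a toy example (not from the paper): two embedding vectors that point in the same direction but differ in magnitude are judged identical by cosine similarity yet far apart by Euclidean distance, so a binary similar/dissimilar threshold can flip depending on the metric chosen. The vectors below are invented for illustration.

```python
import math

def cosine_sim(u, v):
    """Cosine similarity: dot product normalized by vector magnitudes."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv)

def euclidean_dist(u, v):
    """Euclidean (L2) distance between two vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(u, v)))

# Toy "embeddings": identical direction, different magnitude.
u = [1.0, 2.0, 3.0]
v = [2.0, 4.0, 6.0]
print(cosine_sim(u, v))      # 1.0  -> "similar" under a cosine threshold
print(euclidean_dist(u, v))  # ~3.74 -> "dissimilar" under a small L2 threshold
```

A classifier thresholding cosine similarity calls this pair maximally similar, while one thresholding Euclidean distance may call it dissimilar, which is one concrete way metric choice produces the performance disparities the paper reports.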
Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing
    Becky White
    Inioluwa Deborah Raji
    Margaret Mitchell
    Timnit Gebru
ACM Conference on Fairness, Accountability, and Transparency (ACM FAT* 2020), Barcelona
Rising concern for the societal implications of artificial intelligence systems has inspired a wave of academic and journalistic literature in which deployed systems are audited for harm by investigators from outside the organizations deploying the algorithms. However, it remains challenging for practitioners to identify the harmful repercussions of their own systems prior to deployment, and, once deployed, emergent issues can become difficult or impossible to trace back to their source. In this paper, we introduce a framework for algorithmic auditing that supports artificial intelligence system development end-to-end, to be applied throughout the internal organization development life-cycle. Each stage of the audit yields a set of documents that together form an overall audit report, drawing on an organization's values or principles to assess the fit of decisions made throughout the process. The proposed auditing framework is intended to contribute to closing the accountability gap in the development and deployment of large-scale artificial intelligence systems by embedding a robust process to ensure audit integrity.