Sarah D'Angelo

Authored Publications
    Creativity, Generative AI, and Software Development: A Research Agenda
    Victoria Jackson
    Bogdan Vasilescu
    Daniel Russo
    Paul Ralph
    Maliheh Izadi
    Rafael Prikladnicki
    Anielle Lisboa
    Andre van der Hoek
    Creativity has always been considered a major differentiator separating the good from the great, and we believe the importance of creativity to software development will only increase as GenAI becomes embedded in developer tool-chains and working practices. This paper uses the McLuhan tetrad alongside scenarios of how GenAI may disrupt software development more broadly to identify potential impacts GenAI may have on creativity within software development. The impacts are discussed along with a future research agenda comprising six connected themes that consider how individual capabilities, team capabilities, the product, unintended consequences, society, and human aspects can be affected.
    At Google, we've been running a quarterly large-scale survey with developers since 2018. In this article, we discuss how we run EngSat, some of our key learnings over the past 6 years, and how we've evolved our approach to meet new needs and challenges.
    Trust is central to how developers engage with AI. In this article, we discuss what we learned from developers about their level of trust in AI-enhanced developer tooling, how we translated those findings into product design recommendations to support customization, and the challenges we encountered along the way.
    It's no secret that generative artificial intelligence (GenAI) is rapidly changing the landscape of software development, with discussions about best practices for applying this transformative technology dominating the popular press. Perhaps nowhere on Earth have these discussions been more frequent and passionate than inside the organizations dedicated to making GenAI accessible and useful to developers, including at Google. During one such discussion between researchers on our DevOps Research and Assessment (DORA) and Engineering Productivity Research (EPR) teams, we were struck by a recurring finding common to development professionals both inside and outside of Google: Using GenAI makes developers feel more productive, and developers who trust GenAI use it more. On the surface, this finding may seem somewhat obvious. But, for us, it highlighted the deep need to better understand the factors that impact developers' trust in GenAI systems and ways to foster that trust, so that developers and development firms can yield the most benefit from their investment in GenAI development tools. Here, we share findings from seven studies conducted at Google regarding the productivity gains of GenAI use in development, the impacts of developers' trust on GenAI use, and the factors we've observed that positively impact developers' trust in GenAI. We conclude with five suggested strategies that organizations engaged in software development might employ to foster their developers' trust in GenAI, thereby increasing their GenAI use and maximizing GenAI-related productivity gains.
    The evolution of AI is a pivotal moment in history, but it's not the first time we have experienced technological advances that have changed how humans work. By looking at the advances in automobiles, we are reminded of the importance of focusing on our developers' needs and goals.
    AI-powered software development tooling is changing the way that developers interact with tools and write code. However, the ability for AI to truly transform software development depends on developers' level of trust in the tools. In this work, we take a mixed methods approach to measuring the factors that influence developers' trust in AI-powered code completion. We identified that familiarity with AI suggestions, quality of the suggestion, and level of expertise with the language all increased the acceptance rate of AI-powered suggestions, while suggestion length and presence in a test file decreased acceptance rates. Based on these findings, we propose recommendations for the design of AI-powered development tools to improve trust.
    In this installment of Developer Productivity for Humans, we present two lines of research emphasizing the human experience in measuring developer productivity: the experience of flow or focus and the experience of friction during development.
    Building and Sustaining Ethnically, Racially, and Gender Diverse Software Engineering Teams: A Study at Google
    Ella Dagan
    Anita Sarma
    Alison Chang
    Jill Dicker
    Emerson Murphy-Hill
    The ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE) (2023) (to appear)
    Teams that build software are largely demographically homogeneous. Without diversity, homogeneous perspectives dominate how, why, and for whom software is designed. To understand how teams can successfully build and sustain diversity, we interviewed 11 engineers and 9 managers from some of the most gender and racially diverse teams at Google, a large software company. Qualitatively analyzing the interviews, we found shared approaches to recruiting, hiring, and promoting an inclusive environment, all of which create a positive feedback loop. Our findings produce actionable practices that every member of the team can take to increase diversity by fostering a more inclusive software engineering environment.
    ML-enhanced software development tooling is changing the way software engineers develop code. While the development of these tools continues to rise, studies have primarily focused on the accuracy and performance of the underlying models rather than the user experience. Understanding how engineers interact with ML-enhanced tooling can help us define what successful interactions with ML-based assistance look like. We therefore build upon prior research by comparing software engineers' perceptions of two types of ML-enhanced tools: (1) code completion and (2) code example suggestions. We then use our findings to inform design guidance for ML-enhanced software development tooling. This research is intended to spark a growing conversation about the future of ML in software development and guide the design of developer tooling.
    Software developers write code nearly every day, ranging from simple, straightforward tasks to challenging and creative ones. As we have seen across domains, AI/ML-based assistants are on the rise in the field of computer science. We refer to them as code generation tools or AI/ML-enhanced software development tooling, and they are changing the way developers write code. As we think about how to design and measure the impact of intelligent writing assistants, the approaches used in software engineering and the considerations unique to writing code can provide a different and complementary perspective for the workshop. In this paper, we propose a focus on two themes: (1) measuring the impact of writing assistants and (2) how code writing assistants are changing the way engineers write code. In our discussion of these topics, we outline approaches used in software engineering, considerations unique to writing code, and how the disciplines of prose writing and code writing can learn from each other. We aim to contribute to the development of a taxonomy of writing assistants that includes possible methods of measurement and considers factors unique to the domain (e.g., prose or code).