Ciera Jaspan

Ciera is the tech lead manager of the Engineering Productivity Research team within Developer Infrastructure. The Engineering Productivity Research team brings a data-driven approach to business decisions around engineering productivity. They use a combination of qualitative and quantitative methods to triangulate on measuring productivity. Ciera previously worked on Tricorder, Google's static analysis platform. She received her B.S. in Software Engineering from Cal Poly and her Ph.D. from Carnegie Mellon, where she worked with Jonathan Aldrich on cost-effective static analysis and software framework design.
Authored Publications
    Abstract: This is the seventh installment of the Developer Productivity for Humans column. This installment focuses on software quality: what it means, how developers see it, how we break it down into 4 types of quality, and the impact these have on each other.
    Abstract: AI-powered software development tooling is changing the way that developers interact with tools and write code. However, the ability for AI to truly transform software development depends on developers' level of trust in the tools. In this work, we take a mixed-methods approach to measuring the factors that influence developers' trust in AI-powered code completion. We identified that familiarity with AI suggestions, quality of the suggestion, and level of expertise with the language all increased the acceptance rate of AI-powered suggestions, while suggestion length and presence in a test file decreased acceptance rates. Based on these findings, we propose recommendations for the design of AI-powered development tools to improve trust.
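As a rough illustration of how per-suggestion factors like these could be related to acceptance, the sketch below fits a logistic model on simulated data. The feature names, data, and effect sizes are hypothetical and are not drawn from the study.

```python
# Hypothetical sketch: how per-suggestion factors could relate to acceptance of
# AI code-completion suggestions. Data, feature names, and effect sizes are
# simulated for illustration; this is not the study's analysis.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

familiarity = rng.integers(0, 2, n)    # has the developer accepted AI suggestions before?
quality = rng.normal(size=n)           # proxy score for suggestion quality
expertise = rng.normal(size=n)         # developer's expertise with the language
length = rng.normal(size=n)            # standardized suggestion length
in_test_file = rng.integers(0, 2, n)   # suggestion shown in a test file?

# Simulate the directions reported in the abstract: familiarity, quality, and
# expertise raise acceptance; length and test-file context lower it.
logit = 0.5 * familiarity + 0.8 * quality + 0.4 * expertise - 0.6 * length - 0.3 * in_test_file
accepted = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([familiarity, quality, expertise, length, in_test_file])
model = LogisticRegression().fit(X, accepted)

for name, coef in zip(["familiarity", "quality", "expertise", "length", "in_test_file"],
                      model.coef_[0]):
    print(f"{name:>12}: {coef:+.2f}")
```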
    Abstract: At Google, we’ve been running a quarterly large-scale survey with developers since 2018. In this article, we discuss how we run EngSat, some of our key learnings over the past 6 years, and how we’ve evolved our approach to meet new needs and challenges.
    Abstract: Measuring the productivity of software developers is inherently difficult; it requires measuring humans doing a complex, creative task. They are affected by both technological and sociological aspects of their job, and these need to be evaluated in concert to deeply understand developer productivity.
    Systemic Gender Inequities in Who Reviews Code
    Emerson Murphy-Hill
    Jill Dicker
    Amber Horvath
    Laurie R. Weingart
    Nina Chen
    Computer Supported Cooperative Work (2023) (to appear)
    Abstract: Code review is an essential task for modern software engineers, where the author of a code change assigns other engineers the task of providing feedback on the author’s code. In this paper, we investigate the task of code review through the lens of equity, the proposition that engineers should share reviewing responsibilities fairly. Through this lens, we quantitatively examine gender inequities in code review load at Google. We found that, on average, women perform about 25% fewer reviews than men, an inequity with multiple systemic antecedents, including authors’ tendency to choose men as reviewers, a recommender system’s amplification of human biases, and gender differences in how reviewer credentials are assigned and earned. Although substantial work remains to close the review load gap, we show how one small change has begun to do so.
    Abstract: Beyond self-report data, we lack reliable and non-intrusive methods for identifying flow. However, taking a step back and acknowledging that flow occurs during periods of focus gives us the opportunity to make progress towards measuring flow by isolating focused work. Here, we take a mixed-methods approach to design a logs-based metric that leverages machine learning and a comprehensive collection of logs data to identify periods of related actions (indicating focus), and we validate this metric against self-reported time in focus or flow using diary data and quarterly survey data. Our results indicate that we can determine when software engineers at a large technology company experience focused work, which includes instances of flow. This metric speaks to engineering work but can be leveraged in other domains to non-disruptively measure when people experience focus. Future research can build upon this work to identify signals associated with other facets of flow.
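To illustrate the general idea of deriving focus periods from activity logs, here is a minimal sketch that groups timestamped actions separated by short gaps. The thresholds and data are hypothetical; the paper's actual metric uses machine learning over a much richer log corpus rather than a single gap rule.

```python
# Minimal, hypothetical sketch: turn timestamped tool-log events into candidate
# "focus" periods by grouping actions whose gaps are short. Illustration only.
from datetime import datetime, timedelta

def focus_periods(timestamps, max_gap=timedelta(minutes=5), min_length=timedelta(minutes=15)):
    """Group sorted event timestamps into periods whose internal gaps are <= max_gap,
    keeping only periods at least min_length long."""
    periods = []
    if not timestamps:
        return periods
    start = prev = timestamps[0]
    for t in timestamps[1:]:
        if t - prev > max_gap:              # gap too large: close the current period
            if prev - start >= min_length:
                periods.append((start, prev))
            start = t
        prev = t
    if prev - start >= min_length:          # close the final period
        periods.append((start, prev))
    return periods

# Toy example: a burst of activity from 9:00-9:12, then a short burst later.
events = [datetime(2024, 1, 8, 9, m) for m in (0, 2, 5, 9, 12, 40, 43, 47)]
print(focus_periods(events, min_length=timedelta(minutes=10)))
```

In this toy setup only the first burst is long enough to count as a focus period; the abstract's metric instead learns which actions belong together from logs data and is validated against diary and survey self-reports.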
    What Improves Developer Productivity at Google? Code Quality.
    Lan Cheng
    Emerson Rex Murphy-Hill
    Andrea Marie Knight Dolan
    Nan Zhang
    Elizabeth Kammer
    Foundations of Software Engineering: Industry Paper (2022)
    Abstract: Understanding what affects software developer productivity can help organizations choose wise investments in their technical and social environment. But the research literature either focuses on what correlates with developer productivity in realistic settings or focuses on what causes developer productivity in highly constrained settings. In this paper, we bridge the gap by studying software developers at Google through two analyses. In the first analysis, we use panel data to understand which of 39 productivity factors affect perceived developer productivity, finding that code quality, tech debt, infrastructure tools and support, team communication, goals and priorities, and organizational change and process are all causally linked to developer productivity. In the second analysis, we use a lagged panel analysis to strengthen our causal claims. We find that increases in perceived code quality tend to be followed by increased developer productivity, but not vice versa, providing the strongest evidence to date that code quality affects individual developer productivity.
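As a sketch of what a lagged panel comparison looks like in general (simulated data and made-up variable names, not the paper's dataset or model), the snippet below regresses later productivity on earlier code quality, and later quality on earlier productivity, each controlling for the outcome's own earlier value.

```python
# Illustrative only: a toy cross-lagged panel setup, not the paper's analysis.
# Each simulated developer is observed at two waves; we ask whether quality at
# wave 1 predicts productivity at wave 2 (and vice versa).
import numpy as np

rng = np.random.default_rng(1)
n = 2000

quality_t1 = rng.normal(size=n)
productivity_t1 = 0.3 * quality_t1 + rng.normal(size=n)
# Simulate the direction the abstract reports: quality leads productivity, not the reverse.
quality_t2 = 0.6 * quality_t1 + rng.normal(size=n)
productivity_t2 = 0.5 * productivity_t1 + 0.4 * quality_t1 + rng.normal(size=n)

def ols(y, *cols):
    """Ordinary least squares; returns coefficients for cols (intercept dropped)."""
    X = np.column_stack([np.ones_like(y), *cols])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

# Lagged effect of quality on later productivity, controlling for earlier productivity...
print("quality_t1 -> productivity_t2:", ols(productivity_t2, productivity_t1, quality_t1)[1])
# ...versus the lagged effect of productivity on later quality.
print("productivity_t1 -> quality_t2:", ols(quality_t2, quality_t1, productivity_t1)[1])
```

In this toy data the first lagged coefficient comes out clearly positive while the second stays near zero, mirroring the asymmetry the abstract describes (quality precedes productivity, but not vice versa).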
    Abstract: Code review is a common practice in software organizations, where software engineers give each other feedback about a code change. As in other human decision-making processes, code review is susceptible to human biases, where reviewers’ feedback to the author may depend on how reviewers perceive the author’s demographic identity, whether consciously or unconsciously. Through the lens of role congruity theory, we show that the amount of pushback that code authors receive varies based on their gender, race/ethnicity, and age. Furthermore, we estimate that such pushback costs Google more than 1000 extra engineer hours every day, or about 4% of the estimated time engineers spend responding to reviewer comments, a cost borne by non-White and non-male engineers.
    Detecting Interpersonal Conflict in Issues and Code Review: Cross Pollinating Open- and Closed-Source Approaches
    Huilian Sophie Qiu
    Bogdan Vasilescu
    Christian Kästner
    Emerson Rex Murphy-Hill
    International Conference on Software Engineering: Software Engineering in Society (2022)
    Abstract: In software engineering, interpersonal conflict in code review, such as toxic language or unnecessary pushback on a change request, is a well-known and extensively studied problem because it is associated with negative outcomes, such as stress and turnover. One effective approach to prevent and mitigate toxic language is to develop automatic detection. The two most recent attempts at automatic detection were developed in different settings: a toxicity detector using text analytics for open source issue discussions and a pushback detector using logs-based metrics for corporate code reviews. While these settings are arguably distinct, the behaviors they capture share similarities. Our work studies how the toxicity detector and the pushback detector can be generalized beyond the respective contexts in which they were developed, and how combining the two can improve interpersonal conflict detection. This research has implications for designing interventions and offers an opportunity to apply one technique to both open and closed source software, possibly benefiting from synergies, which in our experience is a rarity in software engineering research.
    Engineering Impacts of Anonymous Author Code Review: A Field Experiment
    Emerson Rex Murphy-Hill
    Jill Dicker
    Lan Cheng
    Liz Kammer
    Ben Holtz
    Andrea Marie Knight Dolan
    Transactions on Software Engineering (2021)
    Abstract: Code review is a powerful technique to ensure high-quality software and spread knowledge of best coding practices between engineers. Unfortunately, code reviewers may have biases about authors of the code they are reviewing, which can lead to inequitable experiences and outcomes. In this paper, we describe a field experiment with anonymous author code review, where we withheld author identity information during 5217 code reviews from 300 professional software engineers at one company. Our results suggest that during anonymous author code review, reviewers can frequently guess authors’ identities; that focus on reviewer-author power dynamics is reduced; and that the practice poses a barrier to offline, high-bandwidth conversations. Based on our findings, we recommend that those who choose to implement anonymous author code review should reveal the time zone of the author by default, have a break-the-glass option for revealing author identity, and reveal author identity directly after the review.