Jon Vaver
Jon Vaver is a Senior Quantitative Analyst at Google. His work focuses on experimental and observational methods for measuring advertising effectiveness. He received a B.A. in Mathematics and Physics from Cornell College and a Ph.D. in Applied Mathematics from the University of Virginia. Prior to joining Google, he worked at the Department of Defense and on an operations research team at USWest/Qwest Communications.
Authored Publications
Connected TV (CTV) devices blend characteristics of digital desktop and mobile devices (such as the option to log in and the ability to access a broad range of online content) with those of linear TV (such as a living room experience that can be shared by multiple members of a household). This blended viewing experience requires the development of measurement methods that are adapted to this novel environment. For other devices, ad measurement and planning have an established history of being guided by the ground truth of panels composed of people who share their device behavior. A CTV panel-only measurement solution for reach is not practical due to the panel size that would be needed to accurately measure smaller digital campaigns. Instead, we generalize the existing approach used to measure reach for other devices, which combines panel data with other data sources (e.g., ad server logs, publisher-provided self-reported demographic data, survey data) to account for co-viewing. This paper describes data from a CTV panel and shows how these data can be used to effectively measure the aggregate co-viewing rate and fit demographic models that account for co-viewing behavior. Special considerations include data filtering, weighting at the panelist and household levels to ensure representativeness, and measurement uncertainty.
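As a minimal illustration of the aggregate calculation described above (the column names, weights, and data are hypothetical, not from the paper), a weighted co-viewing rate might be computed like this:

```python
import pandas as pd

# Hypothetical panel extract: one row per panel-measured CTV ad impression.
# "household_weight" re-weights each household toward representativeness;
# "num_viewers" is the panel-reported number of people watching.
panel = pd.DataFrame({
    "household_weight": [1.2, 0.8, 1.0, 1.5],
    "num_viewers":      [2,   1,   3,   1],
})

# Aggregate co-viewing rate: weighted average number of viewers per impression.
coviewing_rate = (
    (panel["household_weight"] * panel["num_viewers"]).sum()
    / panel["household_weight"].sum()
)
print(f"Weighted viewers per impression: {coviewing_rate:.2f}")
```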
Many digital advertisers continue to rely on attribution models to estimate the effectiveness of their marketing spend, allocate budget, and guide bidding decisions for real-time auctions. The work described in this paper builds on previous efforts to better understand the capabilities and limitations of attribution models using simulated path data with experiment-based ground truth. While previous efforts were based on a generic specification of user path characteristics (e.g., ad channels considered, observed events included, and the transition rates between observed events), here we generalize the process to include a pre-analysis optimization step that matches the characteristics of the simulated path data to a set of reference path data from a particular advertiser. An attribution model analysis conducted with path-matched data is more relevant and applicable to an advertiser than one based on generic path data. We demonstrate this path-fitting process using data from Booking.com. The simulated matched paths are used to demonstrate a few key capabilities and limitations of several position-based attribution models.
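A highly simplified sketch of the general idea of matching simulated paths to reference path data; the paper's pre-analysis optimization step is more involved, and the event names here are invented:

```python
import random
from collections import defaultdict

# Hypothetical reference paths: event sequences ending in conversion or null.
reference_paths = [
    ["search", "display", "conversion"],
    ["display", "search", "null"],
    ["search", "search", "conversion"],
]

# Step 1: estimate transition rates between observed events from the reference data.
counts = defaultdict(lambda: defaultdict(int))
for path in reference_paths:
    for a, b in zip(path, path[1:]):
        counts[a][b] += 1
transitions = {
    a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
    for a, nxt in counts.items()
}

# Step 2: generate simulated paths whose transition structure matches the reference.
def simulate_path(start="search", max_len=10):
    path = [start]
    while path[-1] in transitions and len(path) < max_len:
        nxt = transitions[path[-1]]
        path.append(random.choices(list(nxt), weights=list(nxt.values()))[0])
    return path

print(simulate_path())
```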
Co-viewing refers to the situation in which multiple people share the experience of watching video content and ads in the same room and at the same time. In this paper, we use online surveys to measure the co-viewing rate for YouTube videos that are watched on a TV screen. These simple one-question surveys are designed to optimize response accuracy. Our analysis of survey results identifies variations in co-viewing rate with respect to important factors that include the demographic group (age/gender) of the primary viewer, time of day, and the genre of the video content. Additionally, we fit a model based on these covariates to predict the co-viewing rate for ad impressions that are not directly informed by a survey response. We present results from a case study and show how co-viewing changes across these covariates.
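A minimal sketch of the kind of covariate model described, assuming a logistic regression on hypothetical survey data (the abstract does not specify the model form):

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical survey responses: 1 if the viewer reported co-viewing, else 0.
surveys = pd.DataFrame({
    "age_gender": ["F18-24", "M35-44", "F18-24", "M35-44", "F45-54", "M18-24"],
    "hour":       [20, 9, 21, 22, 8, 20],
    "genre":      ["sports", "news", "movies", "sports", "news", "movies"],
    "coviewed":   [1, 0, 1, 1, 0, 1],
})

# One-hot encode the categorical covariates; keep hour as a numeric feature.
X = pd.get_dummies(surveys[["age_gender", "genre"]]).assign(hour=surveys["hour"])
y = surveys["coviewed"]

# Fit a logistic model of co-viewing probability on the covariates.
model = LogisticRegression(max_iter=1000).fit(X, y)

# Predict co-viewing probability for impressions without a survey response.
print(model.predict_proba(X)[:, 1])
```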
In this paper we tackle the marketing problem of assigning credit for a successful outcome to events that occur prior to the success, otherwise known as the attribution problem. In the world of digital advertising, attribution is widely used to formulate and evaluate marketing decisions, but often without a clear specification of the measurement objective or the decision-making needs. We formalize the problem of attribution under a causal framework, note its shortcomings, and suggest an attribution algorithm evaluated via simulation.
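One way to make the causal framing concrete, in potential-outcomes notation (this notation is ours, not taken from the paper):

```latex
% Y_i(a): user i's conversion outcome under ad-exposure vector a.
% a_{-j}: the same exposure vector with channel j's exposures removed.
% A causally grounded credit for channel j is its incremental effect:
\tau_j = \mathbb{E}\left[\, Y_i(\mathbf{a}) - Y_i(\mathbf{a}_{-j}) \,\right]
```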
Many advertisers rely on attribution to make a variety of tactical and strategic marketing decisions, and there is no shortage of attribution models for advertisers to consider. In the end, most advertisers choose an attribution model based on their preconceived notions about how attribution credit should be allocated. A misguided selection can lead an advertiser to use erroneous information in making marketing decisions. In this paper, we address this issue by identifying a well-defined objective for attribution modeling and proposing a systematic approach for evaluating and comparing attribution model performance using simulation. Following this process also leads to a better understanding of the conditions under which attribution models are able to provide useful and reliable information for advertisers.
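As a toy illustration of the evaluation idea (the numbers and error metric are invented, not the paper's), attribution model performance can be scored against simulated ground truth:

```python
# Ground-truth incremental conversions per channel, known from the simulation.
ground_truth = {"search": 120.0, "display": 60.0, "video": 20.0}

# Channel credit assigned by two hypothetical attribution models.
model_credit = {
    "last_touch": {"search": 160.0, "display": 30.0, "video": 10.0},
    "linear":     {"search": 110.0, "display": 60.0, "video": 30.0},
}

def rmse(credit, truth):
    """Root-mean-square error of credited vs. true incremental conversions."""
    return (sum((credit[c] - truth[c]) ** 2 for c in truth) / len(truth)) ** 0.5

for name, credit in model_credit.items():
    print(f"{name}: RMSE vs ground truth = {rmse(credit, ground_truth):.1f}")
```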
Two previously published papers (Vaver and Koehler, 2011, 2012) describe a model for analyzing geo experiments. This model was designed to measure advertising effectiveness with the rigor of a randomized experiment, with replication across geographic units providing confidence interval estimates. While effective, this geo-based regression (GBR) approach is less applicable, or not applicable at all, in situations where few geographic units are available for testing (e.g., smaller countries, or subregions of larger countries). These situations also include the so-called matched market tests, which may compare the behavior of users in a single control region with the behavior of users in a single test region. To fill this gap, we have developed an analogous time-based regression (TBR) approach for analyzing geo experiments. This methodology predicts the time series of the counterfactual market response, allowing for direct estimation of the cumulative causal effect at the end of the experiment. In this paper we describe this model and evaluate its performance using simulation.
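A minimal sketch of the TBR idea under simplifying assumptions (a single control series, ordinary least squares, and invented data; the paper's model is more sophisticated):

```python
import numpy as np

# Hypothetical daily response for one control and one test region.
rng = np.random.default_rng(0)
days = 60
control = 100 + 5 * np.sin(np.arange(days) / 7) + rng.normal(0, 1, days)
test = 0.8 * control + rng.normal(0, 1, days)
test[40:] += 3.0  # the ad intervention starts on day 40 and lifts response

# Fit the pretest-period relationship between the test and control regions.
slope, intercept = np.polyfit(control[:40], test[:40], 1)

# Predict the counterfactual test-region response during the experiment, then
# accumulate the difference to estimate the cumulative causal effect.
counterfactual = intercept + slope * control[40:]
cumulative_effect = np.sum(test[40:] - counterfactual)
print(f"Estimated cumulative lift: {cumulative_effect:.1f}")
```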
Advertisers often estimate the performance of their online advertising by either running randomized experiments, or applying models to observational data. While randomized experiments are the gold standard of measurement, their cost and complexity often lead advertisers to rely instead on observational methods, such as attribution models. A previous paper demonstrated the limitations of attribution models, as well as information issues that limit their performance. This paper introduces "near impressions", an additional source of observational data that can be used to estimate causal ad impact without experiments. We use both simulated and real experiments to demonstrate that near impressions greatly improve our ability to accurately measure the true value generated by ads.
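The abstract does not specify the estimator, but as a toy illustration of how near impressions could serve as a quasi-control group (all numbers hypothetical):

```python
# Users with "near impressions" (ads that narrowly failed to serve) can act
# as a quasi-control group for exposed users, since both groups were
# selected by the same targeting.
exposed = {"users": 10_000, "conversions": 520}
near = {"users": 8_000, "conversions": 360}

rate_exposed = exposed["conversions"] / exposed["users"]
rate_near = near["conversions"] / near["users"]

# Estimated causal lift per exposed user, and total incremental conversions.
lift = rate_exposed - rate_near
print(f"Lift per user: {lift:.4f}")
print(f"Incremental conversions: {lift * exposed['users']:.0f}")
```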
Advertising is becoming more and more complex, and there is a strong demand for measurement tools that are capable of keeping up. In tandem with new measurement problems and solutions, new capabilities for evaluating measurement methodologies are needed. Given the complex marketing environment and the multitude of analytical methods that are available, simulation has become an essential tool for evaluating and comparing analysis options.
This paper describes the Aggregate Marketing System Simulator (AMASS), a simulation tool capable of generating aggregate-level time series data related to marketing measurement (e.g., channel-level marketing spend, website visits, competitor spend, pricing, sales volume, etc.). It is flexible enough to model a wide variety of marketing situations that include different mixes of advertising spend, levels of ad effectiveness, types of ad targeting, sales seasonality, competitor activity, and much more. A key feature of AMASS is that it generates ground truth for marketing performance metrics, including return on ad spend and marginal return on ad spend. The capabilities provided by AMASS create a foundation for evaluating and improving measurement methods, including media mix models (MMMs), campaign optimization (Scott, 2015), and geo experiments (Vaver and Koehler, 2011), across complex modeling scenarios.
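A minimal sketch, with an invented data-generating process, of the core idea of simulating aggregate series with a known ground-truth return on ad spend:

```python
import numpy as np

# Hypothetical aggregate-level generator: weekly media spend drives sales on
# top of a seasonal baseline, with a known ground-truth ROAS baked in.
rng = np.random.default_rng(1)
weeks = 104
spend = rng.uniform(50, 150, weeks)              # weekly channel spend
seasonality = 1000 + 200 * np.sin(2 * np.pi * np.arange(weeks) / 52)
TRUE_ROAS = 4.0                                  # ground truth: $4 sales per $1 spend
sales = seasonality + TRUE_ROAS * spend + rng.normal(0, 50, weeks)

# A measurement method can now be evaluated against the known ROAS, e.g. a
# naive regression of sales on spend:
slope = np.polyfit(spend, sales, 1)[0]
print(f"True ROAS: {TRUE_ROAS}, estimated: {slope:.2f}")
```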
We describe a Digital Advertising System Simulation (DASS) for modeling advertising and its impact on user behavior. DASS is both flexible and general, and can be applied to research on a wide range of topics, such as digital attribution, ad fatigue, campaign optimization, and marketing mix modeling. This paper introduces the basic DASS simulation framework and illustrates its application to digital attribution. We show that common position-based attribution models fail to capture the true causal effects of advertising across several simple scenarios. These results lay the groundwork for the evaluation of more complex attribution models, and the development of improved models.
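To make "position-based attribution" concrete, here are generic implementations of a few standard rules (illustrative code, not code from DASS):

```python
# Position-based attribution rules applied to a converting path: each rule
# splits one conversion's credit across the channels on the path.
def last_touch(path):
    return {path[-1]: 1.0}

def first_touch(path):
    return {path[0]: 1.0}

def linear(path):
    return {ch: path.count(ch) / len(path) for ch in set(path)}

path = ["display", "search", "display", "email"]  # hypothetical converting path
for model in (last_touch, first_touch, linear):
    print(model.__name__, model(path))
```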
The accuracy of an attribution model is limited by the assumptions of the model and the quality and completeness of the data available to it. Common digital attribution models on the market make a critical, yet hidden, assumption that ads only affect users by directly changing their propensity to convert. These models assume that ad exposure does not change user behavior in other ways, such as driving additional website visits, generating branded searches, or creating awareness and interest in the advertiser. In a previous paper, we described a Digital Advertising System Simulation (DASS) for modeling advertising and its impact on user behavior. In this paper, we use this simulation to demonstrate that current models fail to accurately capture the true number of incremental conversions generated by ads that impact user behavior, and introduce an Upstream Data-Driven Attribution (UDDA) model to address this shortcoming. We also demonstrate that development beyond UDDA is still required to address a lack of data completeness and situations that involve highly targeted advertising.
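A toy simulation of the hidden assumption described above (the numbers and mechanism are invented): when ads drive additional organic visits, last-touch attribution credits none of the resulting conversions to the ad channel:

```python
import random

# Toy model of an "upstream" ad effect: ad exposure doubles the chance of a
# later organic site visit, and any conversion happens on that visit. Last
# touch credits the organic channel, so the ad's contribution is invisible.
random.seed(0)
N = 100_000
ad_credit, organic_credit = 0, 0
for _ in range(N):
    exposed = random.random() < 0.5
    visit_prob = 0.2 if exposed else 0.1   # upstream effect on visits
    if random.random() < visit_prob and random.random() < 0.05:
        organic_credit += 1                # last touch: the visit converts

# Ground truth: conversions caused by ads = extra visits * conversion rate.
true_incremental = N * 0.5 * (0.2 - 0.1) * 0.05
print(f"Last-touch credit to ads: {ad_credit}, to organic: {organic_credit}")
print(f"True ad-driven conversions (expected): {true_incremental:.0f}")
```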