
Avishai Zagoury
Software Engineer at Google Research
Authored Publications
Quantitative Approach for Coordination, at Scale, of Signalized 2-Intersection Pairs
Jack Haddad
Nitzan Tur
Danny Veikherman
Eliav Buchnik
Shai Ferster
Tom Kalvari
Dan Karliner
Omer Litov
2024
Abstract
The coordination of signalized intersections in cities improves both traffic operations and environmental outcomes. Traffic signal coordination has a long history, and the impact of offsets on delays and emissions at signalized intersections has been investigated through simulations and a limited number of experimental studies. Coordinating intersections is often justified only by specific engineering requirements and judgment; as a consequence, many intersections in cities remain uncoordinated.
In this paper, we examine the potential benefits of coordinating signalized intersections at scale. Unlike previous studies, our analysis is based on aggregated, anonymized probe data and does not need to explicitly model traffic phenomena such as queue spillback and platoon dispersion. We follow a decentralized approach by considering intersection pairs, i.e., systems of two signalized intersections that may be spatially coupled but have different cycle lengths. We introduce a new method for coordinating such intersections. The method first evaluates the effect of different offsets on vehicle travel times and emissions; it then coordinates the two intersections by setting a common cycle and finding the optimal offset that minimizes emissions and travel times. We present analyses of several case studies of real intersections in Jakarta, Rio de Janeiro, Kolkata, and Haifa. Finally, we evaluate our method in a real experimental study in Jakarta: we collaborated with the city to implement the optimal offset we had determined and compared the results before and after coordination.
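The core step of the method, as described above, is picking the offset that best trades off travel time against emissions once a common cycle is fixed. Below is a minimal sketch of that selection step, assuming per-offset travel-time and emissions estimates have already been derived (here they are hypothetical toy numbers, standing in for values computed from aggregated probe data); the weighted cost and all names are illustrative, not the paper's implementation.

# Illustrative sketch (not the paper's implementation): choose the offset for a
# pair of coordinated intersections that minimizes a weighted cost combining
# estimated travel time and emissions. The per-offset estimates are inputs;
# in practice they would be derived from aggregated, anonymized probe data.

def best_offset(cycle_s, travel_time_by_offset, emissions_by_offset,
                w_time=0.5, w_emissions=0.5):
    """Return the candidate offset (seconds) with the lowest weighted cost."""
    def cost(offset):
        return (w_time * travel_time_by_offset[offset]
                + w_emissions * emissions_by_offset[offset])

    candidates = [o for o in range(cycle_s) if o in travel_time_by_offset]
    return min(candidates, key=cost)


if __name__ == "__main__":
    # Toy numbers for a 90-second common cycle, evaluated every 10 seconds.
    offsets = range(0, 90, 10)
    tt = {o: 60 + abs(o - 40) for o in offsets}         # hypothetical travel times (s)
    em = {o: 100 + 0.5 * abs(o - 30) for o in offsets}  # hypothetical emissions
    print(best_offset(90, tt, em))                      # offset with lowest combined cost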
Systematic Data Driven Detection of Unintentional Changes in Traffic Light Plans
Dan Karliner
Eliav Buchnik
Shai Ferster
Tom Kalvari
Omer Litov
Nitzan Tur
Danny Veikherman
Jack Haddad
2024
Abstract
Traffic light plans determine the time allocated to each movement within an intersection. A plan strongly influences vehicle travel performance, such as the average delay or the probability of stopping at the intersection. A city's traffic engineers control its traffic lights and can change their plans to improve traffic performance. Because the impact of such changes is not always easy to predict, it can also be negative. We present an experimental study of real changes to traffic plans in 12 cities, covering over 12,000 intersections over a period of more than 40 days. We focus on changes to the cycle time of plans that strongly affected performance metrics such as delay. We compare the overall impact of these changes and examine several of them through careful analysis. To the best of our knowledge, our study is one of the largest in scope among experimental studies of traffic conditions in recent years.
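A minimal sketch of the kind of analysis described above, under simplifying assumptions that are not the paper's method: a cycle-time change is detected as a day-over-day difference in a per-intersection cycle-time series, and its impact is measured by comparing average delay in fixed windows before and after the change. The detection rule, window size, and data are hypothetical.

# Illustrative sketch (not the paper's detection method): flag days on which an
# intersection's plan cycle time changed, then compare average delay in
# windows before and after each change. Input series are hypothetical.

from statistics import mean

def detect_cycle_changes(cycle_times):
    """Return indices (days) where the cycle time differs from the previous day."""
    return [i for i in range(1, len(cycle_times))
            if cycle_times[i] != cycle_times[i - 1]]

def delay_impact(delays, change_day, window=7):
    """Average delay in a window before vs. after a detected change."""
    before = delays[max(0, change_day - window):change_day]
    after = delays[change_day:change_day + window]
    return mean(before), mean(after)


if __name__ == "__main__":
    cycles = [90] * 20 + [110] * 20   # cycle time jumps from 90s to 110s on day 20
    delays = [35.0] * 20 + [48.0] * 20  # hypothetical average delay per vehicle (s)
    for day in detect_cycle_changes(cycles):
        b, a = delay_impact(delays, day)
        print(f"change on day {day}: mean delay {b:.1f}s -> {a:.1f}s")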
Abstract
Given the ubiquity of negative campaigning in recent political elections, we find it important to study its properties from a computational perspective. To this end, we present a model in which elections can be manipulated by convincing voters to demote specific non-favored candidates, and we study its properties in the classic setting of scoring rules. When the goal is constructive (making a preferred candidate win), we prove that finding such a demotion strategy is easy for Plurality and Veto, while generally hard for t-approval and Borda. We also provide a t-factor approximation for t-approval for every fixed t, and a 3-factor approximation algorithm for Borda. Interestingly, following recent findings in political science showing that the effectiveness of negative campaigning depends on the type of candidate and the demographic, when we assign varying prices to different possible demotion operations we are able to provide inapproximability results. When the goal is destructive (making the leading opponent lose), we show that the problem is easy for a broad class of scoring rules.
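To make the model concrete, here is a toy sketch of the setting only, not of the paper's algorithms or hardness results: positional scoring rules over ranked ballots, plus a demotion operation that pushes a targeted non-favored candidate to the bottom of a convinced voter's ranking, used here constructively to help a preferred candidate. Candidates, ballots, and scores are invented for illustration.

# Toy model of the setting (not the paper's algorithms): positional scoring
# rules over ranked ballots, plus a "demotion" operation that moves a targeted
# candidate to the bottom of one voter's ranking.

def scores(ballots, score_vector):
    """Positional scoring: the candidate at position i of a ballot earns score_vector[i]."""
    totals = {}
    for ballot in ballots:
        for pos, cand in enumerate(ballot):
            totals[cand] = totals.get(cand, 0) + score_vector[pos]
    return totals

def demote(ballot, target):
    """Demotion operation: push `target` to the last position of the ballot."""
    return [c for c in ballot if c != target] + [target]


if __name__ == "__main__":
    ballots = [["a", "p", "b"], ["a", "b", "p"], ["p", "a", "b"]]
    borda = [2, 1, 0]                      # Borda score vector for 3 candidates
    print(scores(ballots, borda))          # "a" leads before any campaigning
    # Constructive goal: help "p" by demoting the leader "a" on one ballot.
    ballots[0] = demote(ballots[0], "a")
    print(scores(ballots, borda))          # "p" now wins under Borda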
Abstract
Although large neural language models (LMs) like BERT can be finetuned to yield state-of-the-art results on many NLP tasks, it is often unclear what these models actually learn. Here we study using such LMs to fill in entities in comparative questions, like “Which country is older, India or ___?”; that is, we study the ability of neural LMs to ask (not answer) reasonable questions. We show that accuracy in this fill-in-the-blank task is well correlated with human judgements of whether a question is reasonable, and that these models can be trained to achieve nearly human-level performance in completing comparative questions in three different sub-domains. However, our analysis shows that what they learn fails to capture any broad notion of which entities are semantically comparable or similar; instead, the trained models are very domain-specific, and performance is highly correlated with co-occurrences between specific entities observed in the training set. This is true both for models pre-trained on general text corpora and for models trained on a large corpus of comparison questions. Our study thus reinforces recent results on the difficulty of making claims about a deep model’s world knowledge or linguistic competence based on performance on specific benchmark problems. We make our evaluation datasets publicly available to foster future research.
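A sketch of the fill-in-the-blank probe described above, using an off-the-shelf masked LM via the Hugging Face transformers fill-mask pipeline. The model choice and example question are illustrative; this is not the paper's exact training or evaluation setup.

# Illustrative sketch of the fill-in-the-blank probe (not the paper's exact
# setup): complete a comparative question with a masked LM and inspect the
# top candidates. Requires the `transformers` package and a model download.

from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

question = "Which country is older, India or [MASK]?"
for pred in fill(question, top_k=5):
    print(f"{pred['token_str']:>12}  score={pred['score']:.3f}")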