Filip Radlinski
Filip Radlinski is a research scientist at Google in London, UK. He received his PhD from Cornell University and a BSc (Hons) from the Australian National University. His research interests include conversational search and recommendation, online evaluation and machine learning.
More of his publications are listed on Google Scholar.
Authored Publications
Towards Realistic Synthetic User-Generated Content: A Scaffolding Approach to Generating Online Discussions
Barbara Ikica
Hamidreza Alvari
Mehdi Hafezi Manshadi
(2024)
The emergence of synthetic data represents a pivotal shift in modern machine learning, offering a solution to satisfy the need for large volumes of data in domains where real data is scarce, highly private, or difficult to obtain. We investigate the feasibility of creating realistic, large-scale synthetic datasets of user-generated content, noting that such content is increasingly prevalent and a source of frequently sought information. Large language models (LLMs) offer a starting point for generating synthetic social media discussion threads, due to their ability to produce diverse responses that typify online interactions. However, as we demonstrate, straightforward application of LLMs yields limited success in capturing the complex structure of online discussions, and standard prompting mechanisms lack sufficient control. We therefore propose a multi-step generation process, predicated on the idea of creating compact representations of discussion threads, referred to as scaffolds. Our framework is generic yet adaptable to the unique characteristics of specific social media platforms. We demonstrate its feasibility using data from two distinct online discussion platforms. To address the fundamental challenge of ensuring the representativeness and realism of synthetic data, we propose a portfolio of evaluation measures to compare various instantiations of our framework.
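As a rough illustration of the two-step idea described above (first build a compact scaffold of the discussion, then expand it into full posts), the sketch below shows one possible structure. The ScaffoldNode fields and the call_llm stub are illustrative assumptions, not the paper's actual design.

# Minimal sketch of scaffold-based generation, under the assumptions above.
from dataclasses import dataclass, field


def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns canned text here."""
    return "[generated post text]"


@dataclass
class ScaffoldNode:
    """Compact representation of one post in a discussion thread."""
    author: str
    stance: str                          # e.g. "agree", "disagree", "question"
    summary: str                         # one-line gist of the intended post
    replies: list["ScaffoldNode"] = field(default_factory=list)


def expand(node: ScaffoldNode, parent_text: str | None = None) -> dict:
    """Second step: walk the scaffold and fill in realistic post text."""
    prompt = (
        f"Write a social media reply by {node.author} ({node.stance}) "
        f"making this point: {node.summary}. "
        f"It replies to: {parent_text or 'the original topic'}"
    )
    text = call_llm(prompt)
    return {
        "author": node.author,
        "text": text,
        "replies": [expand(child, text) for child in node.replies],
    }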
A vast amount of human discussion, storytelling, content creation, and reporting now occurs on social media platforms. As such, social media posts are often quoted on web pages as context. In this paper, we argue that these quotations and their surrounding page context provide a rich, platform-independent source of data for studying the intersection of natural language and social media. We introduce a taxonomy of quotation roles that categorizes how social media posts are used within content. We release a dataset of 38M social quotes derived from the Common Crawl, and role labels for a subset assessed by human raters. We show that the interplay of accounts, roles, and topics across the web graph reveals valuable social diffusion patterns, and that roles can be predicted with fine-tuned large language models from web context.
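The role-prediction step can be pictured as text classification over page context. The sketch below is one hedged possibility; the role labels and base model are assumptions, and the paper's fine-tuned large language models may be set up differently.

# Hedged sketch: quotation-role prediction as text classification.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

ROLES = ["evidence", "commentary", "illustration"]  # hypothetical labels

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(ROLES)
)


def predict_role(quote: str, page_context: str) -> str:
    """Classify how a quoted social media post is used on the page."""
    inputs = tokenizer(quote, page_context, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return ROLES[int(logits.argmax(dim=-1))]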
Conversational Information Seeking
Hamed Zamani
Johanne R. Trippas
Jeff Dalton
Foundations and Trends® in Information Retrieval (2023), pp. 244-456
Conversational information seeking (CIS) is concerned with a sequence of interactions between one or more users and an information system. Interactions in CIS are primarily based on natural language dialogue, while they may include other types of interactions, such as click, touch, and body gestures. This monograph provides a thorough overview of CIS definitions, applications, interactions, interfaces, design, implementation, and evaluation. This monograph views CIS applications as including conversational search, conversational question answering, and conversational recommendation. Our aim is to provide an overview of past research related to CIS, introduce the current state-of-the-art in CIS, highlight the challenges still being faced in the community, and suggest future directions.
Beyond Single Items: Exploring User Preferences in Item Sets with the Conversational Playlist Curation Dataset
Arun Chaganty
Megan Leszczynski
Shu Zhang
Ravi Ganti
Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (2023)
Users in consumption domains, like music, are often able to more efficiently provide preferences over a set of items (e.g. a playlist or radio) than over single items (e.g. songs). Unfortunately, this is an underexplored area of research, with most existing recommendation systems limited to understanding preferences over single items. Curating an item set exponentiates the search space that recommender systems must consider (all subsets of items!): this motivates conversational approaches---where users explicitly state or refine their preferences and systems elicit preferences in natural language---as an efficient way to understand user needs. We call this task conversational item set curation and present a novel data collection methodology that efficiently collects realistic preferences about item sets in a conversational setting by observing both item-level and set-level feedback. We apply this methodology to music recommendation to build the Conversational Playlist Curation Dataset (CPCD), where we show that it leads raters to express preferences that would not be otherwise expressed. Finally, we propose a wide range of conversational retrieval models as baselines for this task and evaluate them on the dataset.
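As a hedged illustration of the kind of data collected per turn (item-level plus set-level feedback), the sketch below shows one possible record layout; the field names are illustrative assumptions, not the CPCD schema.

# Hedged sketch of a single conversational curation turn.
from dataclasses import dataclass


@dataclass
class CurationTurn:
    user_utterance: str              # e.g. a refinement of the playlist request
    slate: list[str]                 # candidate songs shown for the playlist
    item_feedback: dict[str, bool]   # per-item accept/reject
    set_feedback: str                # reaction to the slate as a whole


turn = CurationTurn(
    user_utterance="I want calm, mostly instrumental songs for studying.",
    slate=["Song A", "Song B", "Song C"],
    item_feedback={"Song A": True, "Song B": False, "Song C": True},
    set_feedback="Good start, but a bit too upbeat overall.",
)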
Measuring the Impact of Explanation Bias: A Study of Natural Language Justifications for Recommender Systems
Andrey Petrov
Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems (CHI EA '23), ACM
Despite the potential impact of explanations on decision making, there is a lack of research on quantifying the effect of explanations on users' choices. This paper presents an experimental protocol for measuring the degree to which positively or negatively biased explanations can lead to users choosing suboptimal recommendations. Key elements of this protocol include a preference elicitation stage to allow for personalizing recommendations, manual identification and extraction of item aspects from reviews, and a controlled method for introducing bias through the combination of both positive and negative aspects. We also present explanations in two different textual formats: as a list of item aspects and as fluent natural language text. Through a user study with 129 participants, we demonstrate that explanations can significantly affect users' selections and that these findings generalize across explanation formats.
Resolving Indirect Referring Expressions for Entity Selection
Silvia Pareti
Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL 2023)
Recent advances in language modeling have enabled new conversational systems. In particular, it is often desirable for people to make choices among specified options when using such systems. We address the problem of reference resolution, when people use natural expressions to choose between real world entities. For example, given the choice `Should we make a Simnel cake or a Pandan cake?' a natural response from a non-expert may be indirect: `let's make the green one'. Such natural expressions have been little studied for reference resolution. We argue that robustly understanding such language has large potential for improving naturalness in dialog, recommendation, and search systems. We create AltEntities (Alternative Entities), a new public dataset of 42K entity pairs and expressions (referring to one entity in the pair), and develop models for the disambiguation problem. Consisting of indirect referring expressions across three domains, our corpus enables for the first time the study of how language models can be adapted to this task. We find they achieve 82%-87% accuracy in realistic settings, which while reasonable also invites further advances.
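One way to picture the disambiguation task is to score each candidate entity's description against the indirect expression. The sketch below uses an off-the-shelf zero-shot classifier as a stand-in; the model choice and prompt wording are assumptions, not the paper's approach.

# Hedged sketch: indirect reference resolution as candidate scoring.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")


def resolve(expression: str, candidates: dict[str, str]) -> str:
    """Return the candidate entity the indirect expression most likely refers to.

    candidates maps entity names to short descriptions, e.g.
    {"Simnel cake": "a fruit cake topped with marzipan",
     "Pandan cake": "a light green sponge cake"}.
    """
    context = " ".join(f"{name}: {desc}." for name, desc in candidates.items())
    result = classifier(f"{context} The user said: '{expression}'.",
                        list(candidates))
    return result["labels"][0]  # highest-scoring entity name


# Using the abstract's example, resolve("let's make the green one", ...)
# should pick "Pandan cake" given the descriptions above.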
Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences
Scott Sanner
Proceedings of the ACM Conference on Recommender Systems (RecSys ’23) (2023) (to appear)
Traditional recommender systems leverage users' item preference history to recommend novel content that users may like. However, dialog interfaces that allow users to express language-based preferences offer a fundamentally different modality for preference input. Inspired by recent successes of prompting paradigms for large language models (LLMs), we study their use for making recommendations from both item-based and language-based preferences in comparison to state-of-the-art item-based collaborative filtering (CF) methods. To support this investigation, we collect a new dataset consisting of both item-based and language-based preferences elicited from users along with their ratings on a variety of (biased) recommended items and (unbiased) random items. Among numerous experimental results, we find that LLMs provide competitive recommendation performance for pure language-based preferences (no item preferences) in the near cold-start case in comparison to item-based CF methods, despite having no supervised training for this specific task (zero-shot) or only a few labels (few-shot). This is particularly promising as language-based preference representations are more explainable and scrutable than item-based or vector-based representations.
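The zero-shot, language-based preference setting can be illustrated with a simple prompt construction. The prompt wording and the call_llm stub below are assumptions, not the paper's exact setup.

# Hedged sketch of recommendation from language-based preferences via prompting.
def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns canned text here."""
    return "[ranked list of items]"


def recommend_from_preferences(preference_text: str, candidates: list[str]) -> str:
    """Ask the LLM to rank candidate items given a natural language preference."""
    prompt = (
        "A user describes their taste as follows:\n"
        f"{preference_text}\n\n"
        "Rank these items from most to least likely to be enjoyed, best first:\n"
        + "\n".join(f"- {item}" for item in candidates)
    )
    return call_llm(prompt)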
On Natural Language User Profiles for Transparent and Scrutable Recommendation
Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '22) (2022)
Natural interaction with recommendation and personalized search systems has received tremendous attention in recent years. We focus on the challenge of supporting people's understanding and control of these systems and explore a fundamentally new way of thinking about representation of knowledge in recommendation and personalization systems. Specifically, we argue that it may be both desirable and possible to develop algorithms that use natural language representations of users' preferences.
We make the case that this could provide significantly greater transparency, as well as affordances for practical actionable interrogation of, and control over, recommendations. Moreover, we argue that such an approach, if successfully applied, may enable a major step towards systems that rely less on noisy implicit observations while increasing portability of knowledge of one's interests.
Conversational Music Retrieval with Synthetic Data
Megan Eileen Leszczynski
Ravi Ganti
Shu Zhang
Arun Tejasvi Chaganty
Second Workshop on Interactive Learning for Natural Language Processing at NeurIPS 2022
Users looking for recommendations often wish to improve suggestions through broad natural language feedback (e.g., “How about something more upbeat?”). However, building such conversational retrieval systems requires conversational data with rich user utterances paired with slates of items that cover a diverse range of preferences. This is challenging to collect scalably using conventional methods like crowd-sourcing. We address this problem with a new technique to synthesize high-quality dialog data by transforming the domain expertise encoded in curated item collections into corresponding item-seeking conversations. The method first generates a sequence of hypothetical slates returned by a system, and then uses a language model to introduce corresponding user utterances. We apply the approach on a dataset of curated music playlists to generate 10k diverse music-seeking conversations. A qualitative human evaluation shows that a majority of these conversations express believable sequences of slates and include user utterances that faithfully express preferences for them. When used to train a conversational retrieval model, the synthetic data yields up to a 23% relative gain on standard retrieval metrics compared to baselines trained on non-conversational and conversational datasets.
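The two-step synthesis described here (hypothetical slates first, then language-model-written user utterances) can be sketched roughly as below; the slate construction and the call_llm stub are illustrative assumptions, not the paper's implementation.

# Hedged sketch of synthesizing an item-seeking conversation from a playlist.
import random


def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns canned text here."""
    return "[user feedback utterance]"


def make_slates(playlist: list[str], num_turns: int = 3, slate_size: int = 5):
    """Step 1: a sequence of hypothetical slates the system might have returned."""
    return [random.sample(playlist, min(slate_size, len(playlist)))
            for _ in range(num_turns)]


def add_user_utterances(slates: list[list[str]]) -> list[dict]:
    """Step 2: infer the user feedback that would lead from each slate to the next."""
    dialog = []
    for prev, nxt in zip(slates, slates[1:]):
        prompt = (f"The system showed: {prev}. It then showed: {nxt}. "
                  "Write the short, natural user feedback that caused this change.")
        dialog.append({"slate": prev, "user_utterance": call_llm(prompt)})
    dialog.append({"slate": slates[-1], "user_utterance": None})
    return dialog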
Subjective Attributes in Conversational Recommendation Systems: Challenges and Opportunities
Ivan Vendrov
Proceedings of the 36th AAAI Conference on Artificial Intelligence (AAAI-22) (2022), pp. 12287-12293
The ubiquity of recommender systems has increased the need for higher-bandwidth, natural and efficient communication with users. This need is increasingly filled by recommenders that support natural language interaction, often conversationally. Given the inherent semantic subjectivity present in natural language, we argue that modeling subjective attributes in recommenders is a critical, yet understudied, avenue of AI research. We propose a novel framework for understanding different forms of subjectivity, examine various recommender tasks that will benefit from a systematic treatment of subjective attributes, and outline a number of research challenges.