Uri Lerner
I grew up in Israel and completed my bachelor's degree at Tel Aviv University. I moved to the US in 1996 and began my studies at Stanford, where I worked with Daphne Koller. I completed my PhD in 2002 and have been working at Google since then.
I am interested in problems in the domain of text processing, typically involving a machine learning component. Over the years at Google, these have included word clustering, automatic spell correction ("did you mean"), machine translation, semantic parsing, and more.
Authored Publications
Building a Clinically-Focused Problem List From Medical Notes
Birju Patel
Cathy Cheung
Liwen Xu
Peter Clardy
Rachana Fellinger
LOUHI 2022: The 13th International Workshop on Health Text Mining and Information Analysis (2022)
Abstract
Clinical notes often contain vital information not observed in other structured data, but their unstructured nature can lead to critical patient-related information being lost. To make sure this valuable information is utilized for patient care, algorithms that summarize notes into a problem list are often proposed. Focusing on identifying medically relevant entities in the free-form text, these solutions are often detached from a canonical ontology and do not allow downstream use of the detected text spans. As a solution, we present here a system for generating a canonical problem list from medical notes, consisting of two major stages. In the first stage, annotation, we use a transformer model to detect all clinical conditions mentioned in a single note. These clinical conditions are then grounded to a predefined ontology and linked to spans in the text. In the second stage, summarization, we aggregate over the set of clinical conditions detected across all of the patient's notes and produce a concise patient summary that organizes their important conditions.
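As a rough illustration of the two-stage flow described in the abstract (per-note annotation, then cross-note summarization), the minimal Python sketch below uses a toy dictionary lookup in place of the transformer annotator. All names here (ONTOLOGY, ConditionMention, annotate_note, summarize_patient) are hypothetical and are not the paper's implementation.

```python
# Minimal, hypothetical sketch of a two-stage problem-list pipeline:
# (1) annotate each note with ontology-grounded condition mentions,
# (2) aggregate mentions across all notes into a concise problem list.
from collections import Counter
from dataclasses import dataclass

# Toy ontology: surface form -> canonical concept ID (stand-in for a real ontology).
ONTOLOGY = {"type 2 diabetes": "C0011860", "hypertension": "C0020538"}


@dataclass
class ConditionMention:
    concept_id: str   # canonical ontology concept
    span: tuple       # (start, end) character offsets in the note
    text: str         # surface text of the mention


def annotate_note(note_text: str) -> list:
    """Stage 1: detect condition mentions and ground them to the ontology.

    A real system would use a transformer tagger; exact string matching is
    used here only to show the input/output structure.
    """
    mentions = []
    lowered = note_text.lower()
    for surface, concept_id in ONTOLOGY.items():
        start = lowered.find(surface)
        if start != -1:
            end = start + len(surface)
            mentions.append(ConditionMention(concept_id, (start, end), note_text[start:end]))
    return mentions


def summarize_patient(notes: list) -> list:
    """Stage 2: aggregate mentions over all of a patient's notes into a
    deduplicated problem list, ordered by how often each condition appears."""
    counts = Counter()
    for note in notes:
        for mention in annotate_note(note):
            counts[mention.concept_id] += 1
    return [concept for concept, _ in counts.most_common()]


if __name__ == "__main__":
    notes = ["Pt with Type 2 Diabetes, poorly controlled.",
             "Follow-up for hypertension and type 2 diabetes."]
    print(summarize_patient(notes))  # ['C0011860', 'C0020538']
```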
Source-Side Classifier Preordering for Machine Translation
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP '13) (2013)
Abstract
We present a simple and novel classifier-based preordering approach. Unlike existing preordering models, we train feature-rich discriminative classifiers that directly predict the target-side word order. Our approach combines the strengths of lexical reordering and syntactic preordering models by performing long-distance reorderings using the structure of the parse tree, while utilizing a discriminative model with a rich set of features, including lexical features. We present extensive experiments on 22 language pairs, including preordering into English from 7 other languages. We obtain improvements of up to 1.4 BLEU on language pairs in the WMT 2010 shared task. For languages from different families the improvements often exceed 2 BLEU. Many of these gains are also significant in human evaluations.
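To illustrate the preordering idea, the sketch below walks a toy source parse tree and lets a stand-in "classifier" choose the order of each node's children, then reads off the reordered yield. The feature map, the hard-coded SVO-to-SOV rule, and all function names are assumptions for illustration; the paper's actual model is a trained, feature-rich discriminative classifier.

```python
# Illustrative sketch (not the paper's implementation) of classifier-based
# source-side preordering: for each node of the source parse tree, a classifier
# predicts a permutation of the node's children so that the source words end up
# closer to the target-side word order.
from dataclasses import dataclass, field


@dataclass
class Node:
    label: str                              # syntactic category or word
    children: list = field(default_factory=list)


def extract_features(node: Node) -> dict:
    """Toy feature map over a node and its children (a real system would use
    rich lexical and syntactic features)."""
    return {"label": node.label,
            "child_labels": tuple(c.label for c in node.children)}


def predict_permutation(features: dict) -> list:
    """Stand-in for the trained classifier: returns the order in which the
    children should be emitted. The hard-coded rule (under a VP, move verb-like
    children after their siblings, i.e. SVO -> SOV) only shows the interface."""
    labels = list(features["child_labels"])
    if features["label"] == "VP":
        return sorted(range(len(labels)), key=lambda i: labels[i].startswith("V"))
    return list(range(len(labels)))


def preorder(node: Node) -> list:
    """Recursively reorder the tree and read off the reordered word sequence."""
    if not node.children:
        return [node.label]
    order = predict_permutation(extract_features(node))
    words = []
    for i in order:
        words.extend(preorder(node.children[i]))
    return words


if __name__ == "__main__":
    # (S (NP John) (VP (VBD ate) (NP an apple)))
    tree = Node("S", [Node("NP", [Node("John")]),
                      Node("VP", [Node("VBD", [Node("ate")]),
                                  Node("NP", [Node("an"), Node("apple")])])])
    print(preorder(tree))  # ['John', 'an', 'apple', 'ate']  (verb-final order)
```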