Google Research

Semi-supervised Word Sense Disambiguation with Neural Models

  • Dayu Yuan
  • Julian Richardson
  • Ryan Doherty
  • Colin Evans
  • Eric Altendorf


Determining the intended sense of words in text – word sense disambiguation (WSD) – is a long-standing problem in natural language processing. Recently, researchers have shown promising results using word vectors extracted from a neural network language model as features in WSD algorithms. However, a simple average or concatenation of the word vectors for each word in a text loses the sequential and syntactic information of the text. In this paper, we study WSD with a sequence-learning neural network, an LSTM, to better capture the sequential and syntactic patterns of the text. To alleviate the lack of training data in all-words WSD, we employ the same LSTM in a semi-supervised label propagation classifier. We demonstrate state-of-the-art results, especially on verbs.
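The semi-supervised step can be pictured as label propagation over a similarity graph of context embeddings: a few contexts carry known sense labels, and those labels spread to unlabeled contexts along high-similarity edges. The sketch below, a minimal illustration rather than the paper's implementation, uses toy 2-D vectors in place of LSTM context embeddings and the iterative clamped-propagation scheme (spread label mass, then re-fix the labeled seeds each round); all names and the toy data are assumptions for illustration.

```python
# Hedged sketch of semi-supervised label propagation for WSD.
# The `vectors` here are toy stand-ins for the LSTM context
# embeddings used in the paper; real contexts would be embedded
# by running the sentence through the trained LSTM.
import math

def cosine(u, v):
    # cosine similarity between two dense vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def label_propagation(vectors, labels, n_senses, iters=50):
    """labels[i] is a sense index, or None for an unlabeled context.
    Iteratively spreads label mass along cosine-similarity edges,
    clamping the labeled seeds after every step, then returns the
    argmax sense for every context."""
    n = len(vectors)
    # row-normalized similarity matrix (no self-edges)
    W = [[cosine(vectors[i], vectors[j]) if i != j else 0.0
          for j in range(n)] for i in range(n)]
    for row in W:
        s = sum(row)
        for j in range(n):
            row[j] /= s
    # initial per-context label distributions: one-hot for seeds
    F = [[0.0] * n_senses for _ in range(n)]
    for i, lab in enumerate(labels):
        if lab is not None:
            F[i][lab] = 1.0
    for _ in range(iters):
        # propagate: each context takes the weighted average of
        # its neighbors' current label distributions
        F = [[sum(W[i][j] * F[j][s] for j in range(n))
              for s in range(n_senses)] for i in range(n)]
        # clamp the labeled seeds back to their known senses
        for i, lab in enumerate(labels):
            if lab is not None:
                F[i] = [1.0 if s == lab else 0.0
                        for s in range(n_senses)]
    return [max(range(n_senses), key=lambda s: F[i][s])
            for i in range(n)]

# Two clusters of contexts, one labeled seed per sense:
vectors = [(1.0, 0.0), (0.9, 0.1), (0.8, 0.2),
           (0.0, 1.0), (0.1, 0.9), (0.2, 0.8)]
labels = [0, None, None, 1, None, None]
print(label_propagation(vectors, labels, n_senses=2))
```

Each unlabeled context ends up with the sense of the seed it is most similar to, which is how a handful of sense-annotated examples can label a much larger unannotated corpus.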
