Globally Normalized Transition-Based Neural Networks

- Daniel Andor
- Chris Alberti
- David Weiss
- Aliaksei Severyn
- Alessandro Presta
- Kuzman Ganchev
- Slav Petrov
- Michael Collins
Association for Computational Linguistics (2016)
We introduce a globally normalized transition-based neural network model that achieves state-of-the-art part-of-speech tagging, dependency parsing and sentence compression results. Our model is a simple feed-forward neural network that operates on a task-specific transition system, yet achieves comparable or better accuracies than recurrent models. We discuss the importance of global as opposed to local normalization: a key insight is that the label bias problem implies that globally normalized models can be strictly more expressive than locally normalized models.
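To make the contrast concrete (a sketch in our own notation, not quoted from the abstract): let \(\rho(d_{1:j-1}, d_j; \theta)\) score decision \(d_j\) given the history \(d_{1:j-1}\). A locally normalized model applies a softmax at every step,

\[
p_L(d_{1:n}) = \prod_{j=1}^{n} \frac{\exp \rho(d_{1:j-1}, d_j; \theta)}{Z_L(d_{1:j-1}; \theta)},
\qquad
Z_L(d_{1:j-1}; \theta) = \sum_{d'} \exp \rho(d_{1:j-1}, d'; \theta),
\]

whereas a globally normalized (CRF-style) model normalizes once over the set \(\mathcal{D}_n\) of all complete decision sequences:

\[
p_G(d_{1:n}) = \frac{\exp \sum_{j=1}^{n} \rho(d_{1:j-1}, d_j; \theta)}{Z_G(\theta)},
\qquad
Z_G(\theta) = \sum_{d'_{1:n} \in \mathcal{D}_n} \exp \sum_{j=1}^{n} \rho(d'_{1:j-1}, d'_j; \theta).
\]

Because \(Z_L\) forces the outgoing probabilities at each step to sum to one no matter what input follows, a locally normalized model cannot shift probability mass away from a history once it has been reached; this is the label bias problem, and it is why globally normalized models can be strictly more expressive.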