Enrique Alfonseca
Authored Publications
Abstract
Large language models (LLMs) have demonstrated human-level performance on a vast spectrum of natural language tasks. However, whether they can efficiently memorize or learn from an abstract, structured corpus, such as a knowledge graph, is largely unexplored. In this work, we propose a method to infuse structured knowledge into LLMs by directly training T5 models on the factual triples of knowledge graphs. Evaluating on closed-book QA tasks, we show that models pre-trained with our knowledge-infusing method outperform the T5 baselines and perform competitively with models pre-trained on natural language sentences that contain the same knowledge. The proposed method has the advantage that no alignment between the knowledge graph and a text corpus is required to curate the training data, which makes it adaptable to industrial-scale knowledge graphs.
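As a rough illustration of the idea, the sketch below serializes knowledge-graph triples into text-to-text training pairs for a T5-style model; the serialization template and the choice to hold out the object as the target are assumptions for illustration, not the paper's exact recipe.

```python
# Sketch: turning knowledge-graph triples into text-to-text training pairs
# for a T5-style model. The serialization template is an assumption made
# for illustration; the paper may use a different format.

from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

def triple_to_example(triple: Triple) -> Tuple[str, str]:
    """Map one (subject, relation, object) triple to an (input, target) pair.

    The object is held out as the prediction target, mirroring the
    closed-book QA evaluation: given subject and relation, recover the object.
    """
    subj, rel, obj = triple
    source = f"subject: {subj} relation: {rel} object:"
    return source, obj

def build_corpus(triples: List[Triple]) -> List[Tuple[str, str]]:
    """Build training data directly from the graph; no text alignment needed."""
    return [triple_to_example(t) for t in triples]

if __name__ == "__main__":
    kg = [
        ("Marie Curie", "field of work", "physics"),
        ("Marie Curie", "award received", "Nobel Prize in Physics"),
    ]
    for src, tgt in build_corpus(kg):
        print(f"{src!r} -> {tgt!r}")
```

Because the examples are generated straight from the triples, the pipeline needs no aligned text corpus, which is the scalability advantage the abstract points to.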
Using Audio Transformations to Improve Comprehension in Voice Question Answering
Johanne R. Trippas
Hanna Silen
Damiano Spina
Crestani F. et al. (eds) Experimental IR Meets Multilinguality, Multimodality, and Interaction. CLEF 2019, Springer, Cham, pp. 164-170
Abstract
Many popular form factors of digital assistants, such as the Amazon Echo, Apple HomePod, or Google Home, enable the user to hold a conversation with these systems based only on the speech modality. The lack of a screen presents unique challenges. To satisfy the information need of a user, the presentation of the answer needs to be optimized for such voice-only interactions. In this paper, we propose a task of evaluating the usefulness of audio transformations (i.e., prosodic modifications) for voice-only question answering. We introduce a crowdsourcing setup where we evaluate the quality of our proposed modifications along multiple dimensions corresponding to the informativeness, naturalness, and ability of the user to identify key parts of the answer. We offer a set of prosodic modifications that highlight potentially important parts of the answer using various acoustic cues. Our experiments show that some of these modifications lead to better comprehension at the expense of only slightly degraded naturalness of the audio.
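One practical way to realize prosodic modifications of this kind is SSML markup, which most text-to-speech engines accept. The sketch below wraps the key span of an answer in a pause, emphasis, and a slower speaking rate; these particular cues are illustrative assumptions, not the exact set of transformations evaluated in the paper.

```python
# Sketch: marking up an answer with SSML so a TTS engine highlights the key
# span with acoustic cues (a pause before it, emphasis and a slower rate on
# it). The specific cues here are illustrative assumptions.

def highlight_answer(prefix: str, key_span: str, suffix: str) -> str:
    """Wrap the key part of a spoken answer in prosodic markup."""
    return (
        "<speak>"
        f"{prefix} "
        '<break time="300ms"/>'
        '<emphasis level="strong">'
        f'<prosody rate="90%">{key_span}</prosody>'
        "</emphasis> "
        f"{suffix}"
        "</speak>"
    )

if __name__ == "__main__":
    print(highlight_answer(
        "The Eiffel Tower was completed in", "1889",
        "after two years of construction."))
```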
Abstract
In this paper we study various flavors of variational autoencoders, address methodological issues in current neural text generation research, and close some gaps by answering a few natural questions about previously published studies.
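The abstract only names the model family, so here is a minimal, hedged sketch of the common core of a text VAE: an encoder producing a latent posterior, the reparameterization trick, and an ELBO-style loss with a KL-annealing weight, a knob much of this line of work studies. All architecture sizes, names, and the annealing setup are assumptions.

```python
# Sketch: a minimal text VAE in PyTorch. Sizes and wiring are placeholder
# assumptions; a real system would also shift the decoder inputs, mask
# padding, and schedule kl_weight over training.

import torch
import torch.nn as nn

class TextVAE(nn.Module):
    def __init__(self, vocab_size=1000, emb=64, hidden=128, latent=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        self.decoder = nn.GRU(emb + latent, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, tokens):
        x = self.embed(tokens)
        _, h = self.encoder(x)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        z_rep = z.unsqueeze(1).expand(-1, x.size(1), -1)  # feed z at every step
        dec, _ = self.decoder(torch.cat([x, z_rep], dim=-1))
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
        return self.out(dec), kl

def vae_loss(logits, targets, kl, kl_weight):
    """ELBO-style loss; kl_weight is annealed 0 -> 1 to fight posterior collapse."""
    rec = nn.functional.cross_entropy(
        logits.transpose(1, 2), targets, reduction="none").sum(dim=1)
    return (rec + kl_weight * kl).mean()

if __name__ == "__main__":
    model = TextVAE()
    tokens = torch.randint(0, 1000, (4, 12))
    logits, kl = model(tokens)
    print(vae_loss(logits, tokens, kl, kl_weight=0.1))
```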
Abstract
Users try to articulate their complex information needs during search sessions by reformulating their queries. To make this process more effective, search engines provide related queries that help users specify their information need during the search process.
In this paper, we propose a customized sequence-to-sequence model for session-based query suggestion. In our model, we employ a query-aware attention mechanism to capture the structure of the session context. This enables us to control the scope of the session from which we infer the suggested next query, which helps not only handle noisy data but also automatically detect session boundaries. Furthermore, we observe that, reflecting user query reformulation behavior, a large portion of the terms of a query in a session are retained from previously submitted queries in the same session, and consist mostly of infrequent or unseen terms that are usually not included in the vocabulary. We therefore empower the decoder of our model to access the source words from the session context during decoding by incorporating a copy mechanism. Moreover, we propose evaluation metrics to assess the quality of generative models for query suggestion. We conduct an extensive set of experiments and analysis. The results suggest that our model outperforms the baselines both at generating queries and at scoring candidate queries for the task of query suggestion.
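To make the copy mechanism concrete, the sketch below shows the standard pointer-generator style mixing: the final distribution puts generation mass on the fixed vocabulary and copy mass, via attention, on the session's source tokens, so out-of-vocabulary session terms stay reachable. Tensor names and the mixing gate are illustrative assumptions, not this paper's exact parameterization.

```python
# Sketch: mixing a vocabulary distribution with attention over source words,
# as in pointer-generator copy mechanisms. Names are illustrative assumptions.

import torch

def copy_mixture(p_vocab, attention, src_ids, p_gen, extended_vocab_size):
    """
    p_vocab:   (batch, vocab)    softmax over the fixed vocabulary
    attention: (batch, src_len)  attention over source positions
    src_ids:   (batch, src_len)  source token ids in the extended vocabulary
    p_gen:     (batch, 1)        probability of generating vs. copying
    Returns a distribution over the extended vocabulary; session-only terms
    receive mass only through the copy (attention) component.
    """
    batch, vocab = p_vocab.shape
    out = torch.zeros(batch, extended_vocab_size)
    out[:, :vocab] = p_gen * p_vocab
    out.scatter_add_(1, src_ids, (1.0 - p_gen) * attention)  # add copy mass
    return out

if __name__ == "__main__":
    p_vocab = torch.softmax(torch.randn(2, 10), dim=-1)
    attn = torch.softmax(torch.randn(2, 5), dim=-1)
    src = torch.randint(0, 12, (2, 5))   # ids 10..11 are session-only terms
    p_gen = torch.sigmoid(torch.randn(2, 1))
    dist = copy_mixture(p_vocab, attn, src, p_gen, extended_vocab_size=12)
    print(dist.sum(dim=1))  # each row sums to ~1.0
```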
Idest: Learning a Distributed Representation for Event Patterns
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL'15), pp. 1140-1149
Abstract
This paper describes IDEST, a new method for learning paraphrases of event patterns. It is based on a new neural network architecture that relies only on the weak supervision signal that comes from news articles published on the same day that mention the same real-world entities. It can generalize across extractions from different dates to produce a robust paraphrase model for event patterns that also captures meaningful representations for rare patterns. We compare it with two state-of-the-art systems and show that it attains comparable quality when trained on a small dataset. Its generalization capabilities also allow it to leverage much more data, leading to substantial quality improvements.
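The weak supervision signal can be pictured as a grouping step: patterns extracted from same-day news that mention the same entities are treated as describing the same event and thus as paraphrase candidates. The record format in the sketch below is an illustrative assumption.

```python
# Sketch: building weakly supervised paraphrase pairs by grouping event
# pattern extractions on (publication date, mentioned entities).

from collections import defaultdict
from itertools import combinations

def paraphrase_pairs(extractions):
    """extractions: iterable of (date, frozenset_of_entities, pattern) tuples.

    Patterns sharing a (date, entity set) key are assumed to describe the
    same real-world event, so they yield positive training pairs.
    """
    groups = defaultdict(set)
    for date, entities, pattern in extractions:
        groups[(date, entities)].add(pattern)
    for patterns in groups.values():
        for a, b in combinations(sorted(patterns), 2):
            yield a, b

if __name__ == "__main__":
    data = [
        ("2015-05-30", frozenset({"X", "Y"}), "X married Y"),
        ("2015-05-30", frozenset({"X", "Y"}), "X and Y tied the knot"),
        ("2015-05-30", frozenset({"Z"}), "Z resigned"),
    ]
    print(list(paraphrase_pairs(data)))
```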
Sentence Compression by Deletion with LSTMs
Lukasz Kaiser
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP'15)
Abstract
We present an LSTM approach to deletion-based sentence compression, where the task is to translate a sentence into a sequence of zeros and ones corresponding to token deletion decisions. We demonstrate that even the most basic version of the system, which is given no syntactic information (no PoS or NE tags, or dependencies) or desired compression length, performs surprisingly well: around 30% of the compressions from a large test set could be regenerated. We compare the LSTM system with a competitive baseline which is trained on the same amount of data but is additionally provided with all kinds of linguistic features. In an experiment with human raters, the LSTM-based model outperforms the baseline, achieving 4.5 in readability and 3.8 in informativeness.
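The "translate a sentence into zeros and ones" framing is per-token binary sequence labeling, which the hedged sketch below makes concrete; the architecture sizes and decoding are placeholder assumptions rather than the paper's exact configuration.

```python
# Sketch: deletion-based compression as per-token binary labeling. An LSTM
# reads the sentence and emits a keep/delete decision per token.

import torch
import torch.nn as nn

class DeletionLSTM(nn.Module):
    def __init__(self, vocab_size=10000, emb=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.classify = nn.Linear(hidden, 2)  # 0 = delete, 1 = keep

    def forward(self, tokens):
        h, _ = self.lstm(self.embed(tokens))
        return self.classify(h)  # (batch, seq_len, 2)

def compress(model, tokens, words):
    """Keep exactly the words whose predicted label is 1."""
    with torch.no_grad():
        labels = model(tokens).argmax(dim=-1)[0]
    return " ".join(w for w, keep in zip(words, labels.tolist()) if keep)

if __name__ == "__main__":
    model = DeletionLSTM()
    words = "the quick brown fox jumps over the lazy dog".split()
    ids = torch.randint(0, 10000, (1, len(words)))  # stand-in for real token ids
    print(compress(model, ids, words))  # untrained model, so output is arbitrary
```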
Abstract
A popular approach to sentence compression is to formulate the task as a constrained optimization problem and solve it with integer linear programming (ILP) tools. Unfortunately, dependence on ILP may make the compressor prohibitively slow, and thus approximation techniques have been proposed which are often complex and offer a moderate gain in speed. As an alternative solution, we introduce a novel compression algorithm which generates k-best compressions relying on local deletion decisions. Our algorithm is two orders of magnitude faster than a recent ILP-based method while producing better compressions. Moreover, an extensive evaluation demonstrates that the quality of compressions does not degrade much as we move from single best to top-five results.
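One way to generate k-best compressions from local deletion decisions is a beam search that extends partial keep/delete sequences token by token. The sketch below shows that control flow under stated assumptions: the toy per-decision scoring function is a placeholder, where a trained model would supply real local scores, and this is not the paper's exact algorithm.

```python
# Sketch: k-best compression via beam search over local keep/delete
# decisions. The local scoring function is a placeholder assumption.

import heapq
import math

def local_score(word, keep):
    # Placeholder local model: mildly prefer keeping longer words.
    p_keep = 0.8 if len(word) > 3 else 0.3
    return math.log(p_keep if keep else 1.0 - p_keep)

def k_best_compressions(words, k=5, beam=20):
    beams = [(0.0, [])]  # (log-score, decisions so far)
    for w in words:
        candidates = []
        for score, dec in beams:
            for keep in (True, False):
                candidates.append((score + local_score(w, keep), dec + [keep]))
        beams = heapq.nlargest(beam, candidates, key=lambda c: c[0])
    top = heapq.nlargest(k, beams, key=lambda c: c[0])
    return [" ".join(w for w, keep in zip(words, dec) if keep) for _, dec in top]

if __name__ == "__main__":
    for c in k_best_compressions("the quick brown fox jumps over the lazy dog".split()):
        print(c)
```

Because each decision is scored locally, the search is linear in sentence length times beam width, which is where the speed advantage over ILP solving comes from.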
Modelling Events through Memory-based, Open-IE Patterns for Abstractive Summarization
Marco Cornolti
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL'14) (2014), pp. 892-901
Abstract
Abstractive text summarization of news requires a way of representing events, such as a collection of pattern clusters in which every cluster represents an event (e.g., marriage) and every pattern in the cluster is a way of expressing the event (e.g., X married Y, X and Y tied the knot). We compare three ways of extracting event patterns: heuristics-based, compression-based and memory-based. While the first has been used previously in multi-document abstraction, the latter two have never been used for this task. Compared with the first two techniques, the memory-based method allows for generating significantly more grammatical and informative sentences, at the cost of searching a vast space of hundreds of millions of parse trees of known grammatical utterances. To this end, we introduce a data structure and a search method that make it possible to efficiently extrapolate from every sentence the parse sub-trees that match against any of the stored utterances.
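The abstract does not spell out the data structure, but one natural realization is an inverted index from canonical subtree signatures to stored utterances, so a candidate sub-tree can be matched without a linear scan. The tree encoding and signature function below are illustrative assumptions, not the paper's actual structure.

```python
# Sketch: an inverted index from canonical subtree signatures to stored
# utterances, enabling exact subtree matching by hash lookup.

from collections import defaultdict

def signature(tree):
    """Canonical string for a (label, children) parse tree."""
    label, children = tree
    if not children:
        return label
    return f"({label} {' '.join(signature(c) for c in children)})"

class SubtreeIndex:
    def __init__(self):
        self.index = defaultdict(set)

    def add_utterance(self, utt_id, tree):
        """Index every subtree of a stored utterance's parse."""
        self.index[signature(tree)].add(utt_id)
        for child in tree[1]:
            self.add_utterance(utt_id, child)

    def match(self, subtree):
        """Return ids of stored utterances containing this exact subtree."""
        return self.index.get(signature(subtree), set())

if __name__ == "__main__":
    t = ("S", [("NP", [("X", [])]),
               ("VP", [("married", []), ("NP", [("Y", [])])])])
    idx = SubtreeIndex()
    idx.add_utterance("utt-1", t)
    print(idx.match(("VP", [("married", []), ("NP", [("Y", [])])])))  # {'utt-1'}
```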
Abstract
This paper presents HEADY: a novel, abstractive approach for headline generation from news collections. From a web-scale corpus of English news, we mine syntactic patterns that a Noisy-OR model generalizes into event descriptions. At inference time, we query the model with the patterns observed in an unseen news collection, identify the event that best captures the gist of the collection, and retrieve the most appropriate pattern to generate a headline. HEADY improves over a state-of-the-art open-domain title abstraction method, bridging half of the gap that separates it from extractive methods using human-generated titles in manual evaluations, and performs comparably to human-generated headlines as evaluated with ROUGE.
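The Noisy-OR step can be illustrated with the standard formula: an event fails to explain the observed patterns only if every pattern independently fails to fire, so P(event explains observations) = 1 - prod(1 - p_i). The sketch below scores candidate events this way; the toy probability table is an assumption for illustration, not HEADY's learned model.

```python
# Sketch: Noisy-OR scoring of candidate events given the syntactic patterns
# observed in a news collection. The probability table is illustrative.

def noisy_or_score(observed_patterns, activation):
    """P(at least one observed pattern fires | event) = 1 - prod(1 - p_i)."""
    miss = 1.0
    for pattern in observed_patterns:
        miss *= 1.0 - activation.get(pattern, 0.0)
    return 1.0 - miss

if __name__ == "__main__":
    events = {
        "marriage": {"X married Y": 0.9, "X and Y tied the knot": 0.7},
        "divorce": {"X divorced Y": 0.9},
    }
    observed = ["X married Y", "X and Y tied the knot"]
    best = max(events, key=lambda e: noisy_or_score(observed, events[e]))
    print(best, noisy_or_score(observed, events[best]))  # marriage 0.97
```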
WHAD: Wikipedia historical attributes data
Guillermo Garrido
Jean-Yves Delort
Anselmo Peñas
Language Resources and Evaluation (2013), pp. 28
Abstract
This paper describes the generation of temporally anchored infobox attribute data from the Wikipedia history of revisions. By mining (attribute, value) pairs from the revision history of the English Wikipedia, we are able to collect a comprehensive knowledge base that contains data on how attributes change over time. When dealing with the Wikipedia edit history, vandalism and erroneous edits are a concern for data quality. We present a study of vandalism identification in Wikipedia edits that uses only features from the infoboxes, and show that we can obtain, on this dataset, an accuracy comparable to a state-of-the-art vandalism identification method that is based on the whole article. Finally, we discuss different characteristics of the extracted dataset, which we make available for further study.
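The core mining step can be pictured as diffing infobox snapshots across consecutive revisions, where each change yields a temporally anchored record. The snapshot format in this sketch is an illustrative assumption; the real pipeline also has to parse wikitext and filter the vandalistic edits discussed above.

```python
# Sketch: extracting temporally anchored attribute data by diffing infobox
# snapshots across consecutive revisions of one article.

def attribute_changes(revisions):
    """revisions: list of (timestamp, {attribute: value}) in chronological order.

    Emits (attribute, old_value, new_value, timestamp) whenever a value first
    appears or changes; deletions are ignored in this simplified version.
    """
    changes = []
    previous = {}
    for timestamp, infobox in revisions:
        for attr, value in infobox.items():
            if previous.get(attr) != value:
                changes.append((attr, previous.get(attr), value, timestamp))
        previous = infobox
    return changes

if __name__ == "__main__":
    history = [
        ("2008-01-02", {"spouse": "A", "residence": "Paris"}),
        ("2010-06-15", {"spouse": "B", "residence": "Paris"}),
    ]
    for change in attribute_changes(history):
        print(change)
```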