SKILL: Structured Knowledge Infusion for Large Language Models

Fedor Moiseev
Zhe Dong
Martin Jaggi
NAACL 2022 (to appear)

Abstract

Large language models (LLMs) have demonstrated human-level performance on a vast spectrum of natural language tasks. However, whether they can efficiently memorize or learn from an abstract and structured corpus, such as a knowledge graph, remains largely unexplored. In this work, we propose a method to infuse structured knowledge into LLMs by directly training T5 models on factual triples of knowledge graphs. Evaluating on closed-book QA tasks, we show that models pre-trained with our knowledge-infusion method outperform the T5 baselines and perform competitively with models pre-trained on natural language sentences that contain the same knowledge. The proposed method has the advantage that no alignment between the knowledge graph and a text corpus is required to curate the training data. This makes our method adaptable to industrial-scale knowledge graphs.
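
To make the idea of training directly on factual triples concrete, the following is a minimal sketch, assuming a Hugging Face T5 checkpoint and a hypothetical "subject + relation → predict object" serialization; the exact triple formatting and training objective used in the paper may differ.

# Hypothetical sketch (not the paper's exact recipe): serialize knowledge-graph
# triples into text pairs and fine-tune T5 to predict the object entity.
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Example triples; this serialization format is an assumption for illustration.
triples = [
    ("Marie Curie", "field of work", "physics"),
    ("Paris", "capital of", "France"),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model.train()
for subj, rel, obj in triples:
    # Input: subject and relation; target: the object to be recalled at QA time.
    inputs = tokenizer(f"{subj} {rel}", return_tensors="pt")
    labels = tokenizer(obj, return_tensors="pt").input_ids
    loss = model(input_ids=inputs.input_ids,
                 attention_mask=inputs.attention_mask,
                 labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

Because the training examples come straight from triples, no sentence-level alignment between the knowledge graph and a text corpus is needed, which is the property the abstract highlights for scaling to large graphs.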