Automata-based constraints for language model decoding

Terry Koo
Frederick Liu
First Conference on Language Modeling (2024)

Abstract

LMs are often expected to generate strings in some formal language: for
example, structured data, API calls, or code snippets. Although LMs can
be tuned to improve their adherence to formal syntax, this does not guarantee conformance, especially with smaller LMs suitable for large-scale
deployment. In addition, tuning requires significant resources, making
it impractical for uncommon or task-specific formats. To prevent downstream parsing errors, we would ideally constrain the LM to only produce
valid output, but this is severely complicated by tokenization, which is
typically both ambiguous and misaligned with the formal grammar. We
solve these issues through the application of automata theory, deriving
an efficient closed-form solution for the regular languages, a broad class of
formal languages with many practical applications, including API calls or
schema-guided JSON and YAML. We also discuss pragmatic extensions
for coping with the issue of high branching factor. Finally, we extend our
techniques to deterministic context-free languages, which similarly admit an
efficient closed-form solution. In spite of its flexibility and representational
power, our approach only requires access to per-token decoding logits and
lowers into simple calculations that are independent of LM size, making it
both efficient and easy to apply to almost any LM architecture.
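To make the abstract's core idea concrete, the following is a minimal, self-contained sketch of constrained decoding against a regular language: a character-level automaton is walked over the detokenized text of each candidate token, and tokens that would leave the language have their logits masked out. The DFA, vocabulary, and function names here are illustrative assumptions, not the paper's implementation; in particular, this naive version re-scans the vocabulary at every step, whereas the paper derives a closed-form, precompiled token-level automaton.

import math

# Toy DFA for the regular language of unsigned integers: one or more digits.
# States: 0 = start (no digit yet), 1 = accepting (at least one digit seen).
DFA_START = 0
DFA_ACCEPTING = {1}

def dfa_step(state: int, char: str):
    """Return the next DFA state, or None if the character is not allowed."""
    if char.isdigit():
        return 1
    return None  # any non-digit falls off the automaton

def dfa_run(state: int, text: str):
    """Feed a whole token's detokenized text through the DFA, character by character."""
    for ch in text:
        state = dfa_step(state, ch)
        if state is None:
            return None
    return state

# Toy vocabulary: multi-character tokens, as produced by subword tokenizers.
VOCAB = ["12", "3", "abc", "4x", "007", "<eos>"]
EOS_ID = VOCAB.index("<eos>")

def mask_logits(logits, state):
    """Set the logit of every token that would violate the constraint to -inf.

    <eos> is only allowed when the DFA is in an accepting state, so any
    finished string is guaranteed to be a member of the language."""
    masked = []
    for token_id, logit in enumerate(logits):
        if token_id == EOS_ID:
            ok = state in DFA_ACCEPTING
        else:
            ok = dfa_run(state, VOCAB[token_id]) is not None
        masked.append(logit if ok else -math.inf)
    return masked

if __name__ == "__main__":
    # Pretend these logits came from an LM; only their relative order matters here.
    logits = [0.5, 1.2, 2.0, 0.1, -0.3, 0.0]
    state = DFA_START
    allowed = [VOCAB[i] for i, l in enumerate(mask_logits(logits, state))
               if l != -math.inf]
    print(allowed)  # ['12', '3', '007'] -- "abc", "4x", and <eos> are masked

Note that masking operates purely on the per-token logits, which is why the approach is independent of LM size and architecture: the LM itself is untouched, and only the final sampling step is filtered.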