In-context finetuning for time-series foundation models

Matthew Faw
Abhimanyu Das
2025

Abstract

We pioneer the study of in-context finetuning for time-series foundation models. We construct finetuning examples that contain not only the usual (context, horizon) pairs for forecasting, but also related time-series examples provided in-context. We then finetune a pretrained time-series foundation model on examples of this form. The resulting model is decoder-only and can adapt not only to any (context, horizon) pair (up to a maximum context length) but also to any number of supplementary time-series examples (again, up to a maximum number of examples). An appropriately trained model learns to borrow patterns from these related examples to improve on the original forecasting task. This opens up useful capabilities, such as prompting the foundation model with different related examples, which helps the finetuned model adapt to specific features of a dataset at inference time. We show that such adaptation yields better zero-shot performance on popular forecasting benchmarks than supervised deep learning methods, statistical models, and other time-series foundation models.
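To make the example format concrete, the following is a minimal sketch of how such an in-context finetuning example might be assembled. The separator value, function name, and concatenation layout are illustrative assumptions, not the paper's exact tokenization:

```python
import numpy as np

SEP = np.nan  # hypothetical separator value between in-context examples


def build_example(related_series, context, horizon, max_examples=4):
    """Assemble one in-context finetuning example (a sketch, not the
    paper's exact format): up to `max_examples` related series, each
    followed by a separator, then the target context. A decoder-only
    model would be trained to predict `horizon` from this input."""
    parts = []
    for s in related_series[:max_examples]:
        parts.append(np.asarray(s, dtype=float))
        parts.append(np.array([SEP]))  # mark the boundary between series
    parts.append(np.asarray(context, dtype=float))
    inputs = np.concatenate(parts)
    targets = np.asarray(horizon, dtype=float)
    return inputs, targets


# Usage: two related series of 32 points, a 16-point context,
# and an 8-point horizon to forecast.
related = [np.sin(np.linspace(0, 6, 32)), np.cos(np.linspace(0, 6, 32))]
ctx = np.sin(np.linspace(6, 9, 16))
hor = np.sin(np.linspace(9, 10, 8))
x, y = build_example(related, ctx, hor)
# x length: 2 * 32 related points + 2 separators + 16 context points = 82
```

Because the example length varies with the number of related series, a model trained this way can accept anywhere from zero up to `max_examples` in-context examples at inference time.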