Conference

Apollo-Forecast: Overcoming Aliasing and Inference Speed Challenges in Language Models for Time Series Forecasting

Abstract

Encoding time series into tokens and using language models for processing has been shown to substantially augment the models' ability to generalize to unseen tasks. However, existing language models for time series forecasting encounter several obstacles, including aliasing distortion and prolonged inference times, primarily due to the limitations of quantization processes and the computational demands of large models. This paper introduces Apollo-Forecast, a novel framework that tackles these challenges with two key innovations: the Anti-Aliasing Quantization Module (AAQM) and the Race Decoding (RD) technique. AAQM adeptly encodes sequences into tokens while mitigating high-frequency noise in the original signals, thus enhancing both signal fidelity and overall quantization efficiency. RD employs a draft model to enable parallel processing and results integration, which markedly accelerates the inference speed for long-term predictions, particularly in large-scale models. Extensive experiments on various real-world datasets show that Apollo-Forecast outperforms state-of-the-art methods by 35.41% and 18.99% in WQL and MASE metrics, respectively, in zero-shot scenarios. Furthermore, our method achieves an acceleration of 1.9X-2.7X in inference speed over the baseline methods.
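As the abstract describes, AAQM suppresses high-frequency noise in the raw signal before quantizing it into tokens. The paper's actual module is not reproduced here; the following is a minimal illustrative sketch of the general idea (low-pass filter, then bin into a discrete vocabulary), where the moving-average filter, min-max scaling, and vocabulary size are all assumptions for illustration, not details from the paper.

```python
import numpy as np

def anti_aliased_quantize(series, num_tokens=4096, kernel=5):
    """Sketch: low-pass filter a 1-D series, then bin values into token IDs.

    Assumptions (not from the paper): a simple moving-average filter as the
    anti-aliasing step, min-max scaling, and uniform binning into num_tokens.
    """
    # Smooth with a moving average to attenuate high-frequency noise,
    # padding at the edges so the output keeps the input length.
    pad = kernel // 2
    padded = np.pad(np.asarray(series, dtype=float), pad, mode="edge")
    smoothed = np.convolve(padded, np.ones(kernel) / kernel, mode="valid")

    # Min-max scale to [0, 1], then map to discrete token IDs.
    lo, hi = smoothed.min(), smoothed.max()
    scaled = (smoothed - lo) / (hi - lo + 1e-12)
    tokens = np.minimum((scaled * num_tokens).astype(int), num_tokens - 1)
    return tokens

# Example: tokenize a noisy sine wave.
t = np.linspace(0, 10, 200)
signal = np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
tokens = anti_aliased_quantize(signal)
```

Filtering before quantization is the standard remedy for aliasing: frequency content the discrete representation cannot capture is removed first, rather than folded back into the signal as distortion.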

Authors

Yin T; Wang J; Ma Y; Wang H; Wang C; Zhao Y; Liu M; Shen W

Volume

39

Pagination

pp. 22173-22181

Publisher

Association for the Advancement of Artificial Intelligence (AAAI)

Publication Date

April 11, 2025

DOI

10.1609/aaai.v39i21.34371

Conference proceedings

Proceedings of the AAAI Conference on Artificial Intelligence

Issue

21

ISSN

2159-5399