One Fits All: Power General Time Series Analysis by Pretrained LM

T Zhou, P Niu, L Sun, R Jin - Advances in Neural Information Processing Systems, 2023 - proceedings.neurips.cc
Abstract
Although we have witnessed great success of pre-trained models in natural language processing (NLP) and computer vision (CV), limited progress has been made for general time series analysis. Unlike NLP and CV, where a unified model can be used to perform different tasks, specially designed approaches still dominate each time series analysis task, such as classification, anomaly detection, forecasting, and few-shot learning. The main challenge blocking the development of pre-trained models for time series analysis is the lack of a large amount of training data. In this work, we address this challenge by leveraging language or CV models, pre-trained on billions of tokens, for time series analysis. Specifically, we refrain from altering the self-attention and feedforward layers of the residual blocks in the pre-trained language or image model. This model, known as the Frozen Pretrained Transformer (FPT), is evaluated through fine-tuning on all major types of time series tasks. Our results demonstrate that models pre-trained on natural language or images can lead to comparable or state-of-the-art performance in all main time series analysis tasks, as illustrated in Figure 1. We also find, both theoretically and empirically, that the self-attention module behaves similarly to principal component analysis (PCA), an observation that helps explain how the transformer bridges the domain gap and is a crucial step towards understanding the universality of a pre-trained transformer. The code is publicly available at https://anonymous.4open.science/r/Pretrained-LM-for-TSForcasting-C561.
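The core idea described in the abstract, keeping the self-attention and feedforward sub-layers of a pre-trained transformer frozen while fine-tuning the remaining parameters and new task-specific layers on time series, can be sketched as follows. This is a minimal illustration assuming a GPT-2 backbone from HuggingFace `transformers`; the patch embedding, forecasting head, and hyperparameter names (`patch_len`, `horizon`) are assumptions for exposition, not the authors' exact implementation (see the repository linked above for that).

```python
# Minimal sketch of the Frozen Pretrained Transformer (FPT) idea:
# freeze the self-attention and feedforward (MLP) sub-layers of a
# pre-trained GPT-2, keep layer norms and new task layers trainable.
# Names and shapes here are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import GPT2Model


class FrozenPretrainedTransformer(nn.Module):
    def __init__(self, patch_len: int = 16, d_model: int = 768, horizon: int = 96):
        super().__init__()
        self.backbone = GPT2Model.from_pretrained("gpt2")  # pre-trained on text

        # Freeze self-attention and MLP parameters in every residual block.
        for block in self.backbone.h:
            for p in block.attn.parameters():
                p.requires_grad = False
            for p in block.mlp.parameters():
                p.requires_grad = False

        # New, trainable layers: map time-series patches into the hidden size,
        # and map the last hidden state to the forecasting horizon.
        self.patch_len = patch_len
        self.patch_embed = nn.Linear(patch_len, d_model)
        self.head = nn.Linear(d_model, horizon)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len) with seq_len divisible by patch_len
        b, seq_len = x.shape
        patches = x.reshape(b, seq_len // self.patch_len, self.patch_len)
        h = self.patch_embed(patches)                        # (b, n_patches, d_model)
        out = self.backbone(inputs_embeds=h).last_hidden_state
        return self.head(out[:, -1])                         # (b, horizon)


# Usage example on random data: forecast 96 steps from 256 observations.
model = FrozenPretrainedTransformer()
series = torch.randn(8, 256)
forecast = model(series)
print(forecast.shape)  # torch.Size([8, 96])
```

Only the patch embedding, layer norms, positional embeddings, and output head receive gradients here, which keeps the number of tuned parameters small while reusing the representation learned on text.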