Part four of this series covers pre-training the model for time series forecasting on a dataset of approximately 244 million samples, including how the pre-training datasets are selected to support long-term forecasting. The model is then benchmarked against state-of-the-art (SOTA) forecasting models to validate its performance and establish its competitive standing.
We benchmark TTM against the latest public SOTA forecasting models, grouped into three categories: LLM-based TS pre-trained models, self-supervised pre-trained models, and TS transformer models.
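To make the comparison concrete, here is a minimal sketch of such a benchmarking loop, assuming every model exposes a common `predict(context, horizon)` interface; the two baselines below are simple placeholders, not the actual SOTA models evaluated in the paper.

```python
# A minimal benchmarking sketch: evaluate several models on one held-out
# horizon with MSE. Model names and the toy series are assumptions.
import numpy as np

def mse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    return float(np.mean((y_true - y_pred) ** 2))

def naive_last_value(context: np.ndarray, horizon: int) -> np.ndarray:
    """Placeholder baseline: repeat the last observed value."""
    return np.full(horizon, context[-1])

def naive_mean(context: np.ndarray, horizon: int) -> np.ndarray:
    """Placeholder baseline: repeat the context mean."""
    return np.full(horizon, context.mean())

models = {"last_value": naive_last_value, "mean": naive_mean}

# Synthetic series: 512-step context followed by a 96-step test horizon.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 608)) + 0.1 * rng.standard_normal(608)
context, target = series[:512], series[512:]

for name, predict in models.items():
    print(name, mse(target, predict(context, len(target))))
```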
Pre-training uses a subset of the Monash data hub totaling approximately 244M samples, excluding datasets that lack enough history for long-term forecasting.
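As a rough illustration of that exclusion criterion, the sketch below filters hypothetical dataset metadata by a minimum series length; the context length, forecast horizon, and dataset statistics are assumptions, not the values used for TTM.

```python
# A minimal sketch of the dataset-filtering idea. The metadata and
# thresholds here are illustrative; the exact Monash subset used for
# TTM pre-training is not specified in this sketch.

# Each entry: dataset name -> number of time steps in its shortest series.
monash_datasets = {
    "electricity_hourly": 26304,
    "traffic_hourly": 17544,
    "short_series_example": 96,  # hypothetical short dataset
}

CONTEXT_LENGTH = 512    # assumed input window
FORECAST_HORIZON = 96   # assumed long-term horizon

def has_enough_history(min_length: int) -> bool:
    """A series must cover at least one full context window plus horizon."""
    return min_length >= CONTEXT_LENGTH + FORECAST_HORIZON

# Keep only datasets with sufficient history for long-term forecasting.
pretrain_subset = {
    name: length
    for name, length in monash_datasets.items()
    if has_enough_history(length)
}

print(sorted(pretrain_subset))  # datasets retained for pre-training
```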