Commit 1afdfbe · Thomas Ortner committed · 1 Parent(s): 189d217

Initially added files
README.md CHANGED
---
license: apache-2.0
---
# FlowState
[Paper](https://www.arxiv.org/abs/2508.05287) | [HuggingFace Model Card](https://huggingface.co/ibm-granite/granite-timeseries-flowstate-r1) | [GitHub Model Code](https://github.com/ibm-granite/granite-tsfm/tree/main/tsfm_public/models/flowstate)

![Illustration](figs/FlowState.png)
FlowState is the first time-scale-adjustable Time Series Foundation Model (TSFM), open-sourced by IBM Research.
Combining a State Space Model (SSM) encoder with a Functional Basis Decoder allows FlowState to transition into a timescale-invariant coefficient space and produce a continuous forecast from that space.
This allows FlowState to adjust seamlessly to all possible sampling rates.
Training at one time-scale therefore helps inference at all scales, drastically improving the utilization of training data across time-scales.
This innovation leads to a significant improvement in performance, making FlowState the new state of the art in zero-shot time series forecasting.
## Key Features
- **FlowState**: We present an SSM-based time series foundation model that can be dynamically adjusted to the specific characteristics of the time series during evaluation.
- **Functional Basis Decoder (FBD)**: We propose a novel decoder, a critical component of FlowState, that utilizes a set of continuous basis functions to make continuous forecasts and allows seamless adjustment to specific input characteristics.
- **Flexible temporal adaptation**: FlowState can dynamically adjust the context and target length to the timescale of the provided time series.
- **Compact and high-performing**: With fewer than 10M parameters and the ability to forecast multiple consecutive patches in parallel, FlowState delivers state-of-the-art accuracy with exceptional efficiency.

## Benchmark Highlights
![Illustration](figs/flowstate_performance.png)
Despite being **more than 10x smaller** than the three next-best models,
FlowState is the **best zero-shot model** on the [GIFT-Eval Leaderboard](https://huggingface.co/spaces/Salesforce/GIFT-Eval).
The figure compares GIFT-Eval MASE performance vs. model size for FlowState and the 10 next-best zero-shot models, as of Sep. 9, 2025.
## Model Details
Model details can be found in our [Paper](https://www.arxiv.org/abs/2508.05287).
Currently, FlowState only supports zero-shot forecasting.
## Recommended Use
FlowState can be used to make predictions as follows:
```python
import torch

from tsfm_public import FlowStateForPrediction

device = "cuda"
predictor = FlowStateForPrediction.from_pretrained("ibm-granite/granite-timeseries-flowstate-r1").to(device)

# Dummy input with shape (context_length, batch, n_channels)
time_series = torch.randn((2048, 32, 1), device=device)
forecast = predictor(time_series, scale_factor=0.25, prediction_length=960, batch_first=False)
print(forecast.prediction_outputs.shape)  # torch.Size([32, 9, 48, 1]) (batch, quantiles, forecast_length, n_ch)
```
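The second output dimension holds the nine forecast quantiles (0.1 through 0.9; see `config.json`). Assuming they are returned in that order (an assumption, not confirmed by this card), a point forecast and a central prediction interval can be read off directly; a minimal sketch:

```python
# Quantile outputs: (batch, quantiles, forecast_length, n_channels).
# Assuming the quantiles are ordered as listed in config.json
# (0.1, ..., 0.9), the median (0.5) sits at index 4.
preds = forecast.prediction_outputs
median = preds[:, 4]                     # point forecast
lower, upper = preds[:, 0], preds[:, 8]  # 80% central interval (0.1 / 0.9)
print(median.shape)                      # torch.Size([32, 48, 1])
```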
We recommend that users determine a suitable scale factor for their specific time series data, as explained in the next section.
### Temporal Scaling
For common sampling rates, we recommend the following scale factors.

| Sampling Rate | Recommended Scale Factor |
|---------------|--------------------------|
| 15 min        | 0.25 |
| 30 min        | 0.5 |
| Hourly        | 1.0 |
| Daily         | 3.43 if the data has a weekly cycle, else 0.0656 |
| Weekly        | 0.46 |
| Monthly       | 2 |
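These values follow the formula given below (the base seasonality 24 divided by the dominant seasonality, e.g. 24 / 7 ≈ 3.43 for daily data with a weekly cycle). For programmatic use, the table can be captured in a simple lookup; the labels below are illustrative and not part of the tsfm_public API:

```python
# Recommended scale factors from the table above.
# Keys are informal labels, not part of the tsfm_public API.
RECOMMENDED_SCALE_FACTORS = {
    "15min": 0.25,
    "30min": 0.5,
    "hourly": 1.0,
    "daily_weekly_cycle": 3.43,      # daily data with a weekly cycle
    "daily_no_weekly_cycle": 0.0656,
    "weekly": 0.46,
    "monthly": 2.0,
}
```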

For optimal performance, we recommend first determining the seasonality of the data and then calculating the scale factor from it.

Assuming the data has repeating structures every N = 96 time steps (such as quarter-hourly sampled data with a daily cycle), i.e. a seasonality of 96, the scale factor is calculated as follows:

scale_factor = base seasonality / N = 24 / 96 = 0.25

where 24 is the base seasonality used during pretraining.
If the seasonality is unclear, it is best to experiment with different scale factors and select what works best.
We recommend forecasting no more than 30 seasons ahead (in our example 96 * 30 = 2880 time steps); beyond that, forecast quality declines.
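A minimal sketch of this procedure, estimating the dominant seasonality from the periodogram; the helper below is illustrative and not part of the tsfm_public API:

```python
import numpy as np

BASE_SEASONALITY = 24  # base seasonality used during FlowState pretraining

def estimate_scale_factor(series: np.ndarray) -> float:
    """Estimate the dominant seasonality N via the periodogram and
    return BASE_SEASONALITY / N (illustrative helper)."""
    x = series - series.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size)
    k = 1 + int(np.argmax(power[1:]))  # strongest non-DC frequency bin
    seasonality = 1.0 / freqs[k]       # dominant period in time steps
    return BASE_SEASONALITY / seasonality

# Example: quarter-hourly data with a daily cycle (seasonality N = 96).
t = np.arange(96 * 43)
series = np.sin(2 * np.pi * t / 96) + 0.1 * np.random.default_rng(0).standard_normal(t.size)
print(round(estimate_scale_factor(series), 4))  # ~0.25 (= 24 / 96)
```

If the estimate disagrees with domain knowledge (e.g. a known daily or weekly cycle), prefer the known seasonality.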
## Installation
To run FlowState, follow the installation instructions [here](https://github.com/ibm-granite/granite-tsfm/?tab=readme-ov-file#initial-setup).
For the GIFT-Eval evaluation notebook, we recommend using Python 3.11 and installing gift-eval according to its [repo](https://github.com/SalesforceAIResearch/gift-eval).
## Example Recipes and Notebooks
- Getting started notebook: [here](https://github.com/ibm-granite/granite-tsfm/tree/main/notebooks/hfdemo/flowstate_getting_started.ipynb)
- GIFT-Eval notebook: [here](https://github.com/ibm-granite/granite-tsfm/tree/main/notebooks/hfdemo/flowstate_gift_eval.ipynb)
## Pretraining Data
As pretraining data, we used a subset of [GIFT-Eval Pretrain](https://huggingface.co/datasets/Salesforce/GiftEvalPretrain) and a subset of the [Chronos Pretraining Data Corpus](https://huggingface.co/datasets/autogluon/chronos_datasets).
None of the datasets used (nor sub- or up-sampled versions thereof) are contained in GIFT-Eval (neither the train, validation, nor test split).
All our GIFT-Eval results are therefore zero-shot.
## Citation
Please cite the following paper if you intend to use our model or its associated architectures/approaches in your work.
### BibTeX:
```bibtex
@article{graf2025flowstate,
  title={FlowState: Sampling Rate Invariant Time Series Forecasting},
  author={Graf, Lars and Ortner, Thomas and Wo{\'z}niak, Stanis{\l}aw and Pantazi, Angeliki and others},
  journal={arXiv preprint arXiv:2508.05287},
  year={2025}
}
```
## Model Card Authors
Lars Graf, Thomas Ortner, Stanislaw Wozniak, Angeliki Pantazi
## IBM Public Repository Disclosure
All content in this repository, including code, has been provided by IBM under the associated open source software license, and IBM is under no obligation to provide enhancements, updates, or support. IBM developers produced this code as an open source project (not as an IBM product), and IBM makes no assertions as to the level of quality or security and will not be maintaining this code going forward.
config.json ADDED
```json
{
  "architectures": ["FlowStateModel"],
  "context_length": 2048,
  "decoder_dim": 256,
  "decoder_patch_len": 24,
  "decoder_type": "legs",
  "embedding_feature_dim": 512,
  "encoder_num_hippo_blocks": 8,
  "encoder_num_layers": 6,
  "encoder_state_dim": 512,
  "init_processing": true,
  "prediction_type": "quantile",
  "min_context": 2048,
  "model_type": "flowstate",
  "quantiles": [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
  "torch_dtype": "float32",
  "transformers_version": "4.52.1",
  "use_freq": true,
  "with_missing": true
}
```
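A quick way to sanity-check this configuration with plain Python (no extra dependencies); the sketch assumes only the `config.json` shown above:

```python
import json

# Inspect the FlowState configuration shown above.
with open("config.json") as f:
    cfg = json.load(f)

print(cfg["model_type"])      # flowstate
print(cfg["context_length"])  # 2048
print(cfg["quantiles"][4])    # 0.5 -> the median forecast sits at quantile index 4
```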
figs/FlowState.png ADDED

Git LFS Details

  • SHA256: 92eb6956f449a3e5611e98d4f8be76ff2206da55f1086dc7ca6ab7a84413335b
  • Pointer size: 131 Bytes
  • Size of remote file: 565 kB
figs/flowstate_performance.png ADDED

Git LFS Details

  • SHA256: 0a8c6d33fc36890ac951f6153dbb2861b1a47e10afb730b934ecdf8ca696b699
  • Pointer size: 132 Bytes
  • Size of remote file: 1.07 MB
model.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:07a7844db841047d3a99ce9c6ce0ce5139a24d1f42e919c37c2d1e285fa0ff98
size 36284680
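At 36,284,680 bytes, the float32 checkpoint corresponds to roughly 9M parameters, consistent with the "fewer than 10M parameters" claim above. A minimal sketch for verifying this locally, assuming the actual LFS file (not just the pointer) has been fetched, e.g. via `git lfs pull`:

```python
from safetensors.torch import load_file

# Load the raw safetensors checkpoint and count parameters.
state_dict = load_file("model.safetensors")
n_params = sum(t.numel() for t in state_dict.values())
print(f"{n_params / 1e6:.2f}M parameters")  # roughly 9M for float32 weights
```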