Update README.md
README.md CHANGED
@@ -25,7 +25,7 @@ Paris Noah's Ark Lab consists of 3 research teams that cover the following topic
 
 ### Preprints
 
-
+- [Time Series Representations for Classification Lie Hidden in Pretrained Vision Transformers](https://arxiv.org/abs/2506.08641): converting time series to images and feeding them into a pre-trained ViT.
 - [TAG: A Decentralized Framework for Multi-Agent Hierarchical Reinforcement Learning](https://huggingface.co/papers/2502.15425): distributed multi-agent hierarchical reinforcement learning framework.
 - [SKADA-Bench: Benchmarking Unsupervised Domain Adaptation Methods with Realistic Validation](https://arxiv.org/abs/2407.11676): benchmark of shallow and deep domain adaptation methods with realistic validation.
 - [Clustering Head: A Visual Case Study of the Training Dynamics in Transformers](https://arxiv.org/abs/2410.24050): visual and theoretical understanding of training dynamics in transformers.
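The preprint added in this hunk classifies time series by rendering them as images for a pretrained vision transformer. Below is a minimal sketch of that idea, assuming torchvision's `vit_b_16` and a Gramian Angular Field encoding; both are illustrative choices, and the paper's actual pipeline may differ.

```python
# Sketch: 1-D series -> image -> pretrained ViT features. The GAF encoding
# and model choice are assumptions for illustration, not the paper's method.
import numpy as np
import torch
from torchvision.models import vit_b_16, ViT_B_16_Weights

def gramian_angular_field(x: np.ndarray, size: int = 224) -> np.ndarray:
    """Map a 1-D series to a (size, size) Gramian Angular Summation Field."""
    # Resample to `size` points and rescale to [-1, 1].
    x = np.interp(np.linspace(0, len(x) - 1, size), np.arange(len(x)), x)
    x = 2 * (x - x.min()) / (x.max() - x.min() + 1e-8) - 1
    phi = np.arccos(np.clip(x, -1.0, 1.0))        # polar-angle encoding
    return np.cos(phi[:, None] + phi[None, :])    # values in [-1, 1]

series = np.sin(np.linspace(0, 8 * np.pi, 500))   # toy input series
img = (gramian_angular_field(series) + 1) / 2     # rescale to [0, 1]
rgb = torch.from_numpy(np.repeat(img[None], 3, axis=0)).float()

# Normalize with ImageNet statistics, as the pretrained weights expect.
mean = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)
batch = ((rgb - mean) / std).unsqueeze(0)         # (1, 3, 224, 224)

model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1).eval()
with torch.no_grad():
    logits = model(batch)                         # (1, 1000) ImageNet logits
# In practice one would drop the classification head and fit a linear probe
# on the ViT features for the time-series classes.
```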
@@ -33,6 +33,7 @@ Paris Noah's Ark Lab consists of 3 research teams that cover the following topic
 - [A Systematic Study Comparing Hyperparameter Optimization Engines on Tabular Data](https://balazskegl.medium.com/navigating-the-maze-of-hyperparameter-optimization-insights-from-a-systematic-study-6019675ea96c): insights to navigate the maze of hyperopt techniques.
 
 ### 2025
+- *(ICML'25 FMSD Workshop, **Best Paper**)* [CauKer: Classification Time Series Foundation Models Can Be Pretrained on Synthetic Data Only](https://arxiv.org/abs/2508.02879): algorithm to generate diverse synthetic time series.
 - *(ICML'25)* [AdaPTS: Adapting Univariate Foundation Models to Probabilistic Multivariate Time Series Forecasting](https://arxiv.org/abs/2502.10235): simple yet powerful tricks to extend foundation models.
 - *(ICLR'25)* [Zero-shot Model-based Reinforcement Learning using Large Language Models](https://huggingface.co/papers/2410.11711): disentangled in-context learning for multivariate time series forecasting and model-based RL.
 - *(ICASSP'25)* [Easing Optimization Paths: A Circuit Perspective](https://arxiv.org/abs/2501.02362): mechanistic study of training dynamics in transformers.
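The CauKer entry added in this hunk pretrains classification foundation models on synthetic series only. As a generic illustration of this family of generators, not CauKer's actual algorithm (which the paper specifies), one can draw series from Gaussian processes with randomly composed kernels:

```python
# Generic flavor of synthetic-series generation via random compositions of
# Gaussian-process kernels. Purely illustrative; see the paper for CauKer.
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(t: np.ndarray, ell: float) -> np.ndarray:
    """Smooth-trend kernel."""
    return np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / ell ** 2)

def periodic_kernel(t: np.ndarray, p: float) -> np.ndarray:
    """Seasonality kernel."""
    d = np.abs(t[:, None] - t[None, :])
    return np.exp(-2.0 * np.sin(np.pi * d / p) ** 2)

def sample_series(n: int = 256) -> np.ndarray:
    t = np.linspace(0.0, 1.0, n)
    k = rbf_kernel(t, ell=rng.uniform(0.05, 0.5))
    k_per = periodic_kernel(t, p=rng.uniform(0.05, 0.3))
    # Randomly compose base kernels by multiplication or addition.
    k = k * k_per if rng.random() < 0.5 else k + k_per
    k += 1e-6 * np.eye(n)  # jitter for numerical stability
    return rng.multivariate_normal(np.zeros(n), k)

dataset = np.stack([sample_series() for _ in range(8)])  # (8, 256) toy batch
```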