
Commit 339158f: Update README.md (1 parent: 86fd411)


README.md

Lines changed: 1 addition & 201 deletions
# FECAM (VLDB 2023 Submission)

[![arXiv](https://img.shields.io/badge/arXiv-2212.01209-%23B31B1B)](https://arxiv.org/abs/2212.01209)
![state-of-the-art](https://img.shields.io/badge/-STATE--OF--THE--ART-blue?logo=Accenture&labelColor=lightgrey)![pytorch](https://img.shields.io/badge/-PyTorch-%23EE4C2C?logo=PyTorch&labelColor=lightgrey)

This is the original PyTorch implementation of the following paper: [FECAM: Frequency Enhanced Channel Attention Mechanism for Time Series Forecasting](https://arxiv.org/abs/2212.01209).
### Dataset preparation
Download the data. You can obtain all six benchmarks from [Tsinghua Cloud](https://cloud.tsinghua.edu.cn/d/e1ccfff39ad541908bae/) or [Google Drive](https://drive.google.com/drive/folders/1ZOYpTUa82_jCcxIdTmyr0LXQfvaM9vIy?usp=sharing). **All the datasets are well pre-processed** and can be used easily. (We thank Haixu Wu, the author of Autoformer, for organizing the datasets and sharing them publicly.)

The data directory structure is shown as follows.

```
./
└── datasets/
    ├── electricity
    │   └── electricity.csv
    ├── ETT-small
    │   ├── ETTh1.csv
    │   ├── ETTh2.csv
    │   ├── ETTm1.csv
    │   └── ETTm2.csv
    ├── exchange_rate
    │   └── exchange_rate.csv
    ├── illness
    │   └── national_illness.csv
    ├── traffic
    │   └── traffic.csv
    └── weather
        └── weather.csv
```
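To sanity-check the layout after downloading, a minimal snippet along these lines can be used (the paths follow the tree above; `pandas` is assumed to be installed via `requirements.txt`):

```python
import os
import pandas as pd

# Sanity check: confirm every benchmark CSV sits where the tree above expects.
root = "./datasets"
files = ["electricity/electricity.csv", "ETT-small/ETTh1.csv", "ETT-small/ETTh2.csv",
         "ETT-small/ETTm1.csv", "ETT-small/ETTm2.csv", "exchange_rate/exchange_rate.csv",
         "illness/national_illness.csv", "traffic/traffic.csv", "weather/weather.csv"]
for f in files:
    path = os.path.join(root, f)
    print(path, "ok" if os.path.exists(path) else "MISSING")

# Peek at one benchmark; the pre-processed CSVs store one variable per column
# (ETTh1, for instance, has a leading date column followed by 7 variables).
df = pd.read_csv(os.path.join(root, "ETT-small/ETTh1.csv"))
print(df.shape, list(df.columns[:3]))
```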
## Contact

If you have any questions, feel free to contact us or open a GitHub issue. Pull requests are highly welcome!

```
Maowei Jiang: jiangmaowei@sia.cn
```

## Acknowledgements

Thank you all for your attention to our work!

This code uses [Autoformer](https://github.com/thuml/Autoformer), [Informer](https://github.com/zhouhaoyi/Informer2020), [Reformer](https://github.com/lucidrains/reformer-pytorch), [Transformer](https://github.com/jadore801120/attention-is-all-you-need-pytorch), [LSTM](https://github.com/jaungiers/LSTM-Neural-Network-for-Time-Series-Prediction), [N-HiTS](https://github.com/Nixtla/neuralforecast), [N-BEATS](https://github.com/ServiceNow/N-BEATS), [Pyraformer](https://github.com/alipay/Pyraformer), and [ARIMA](https://github.com/gmonaci/ARIMA) as baseline methods for comparison and further improvement.

We appreciate the following GitHub repositories a lot for their valuable code bases and datasets:

https://github.com/zhouhaoyi/Informer2020

https://github.com/thuml/Autoformer

https://github.com/cure-lab/LTSF-Linear

https://github.com/zhouhaoyi/ETDataset

https://github.com/laiguokun/multivariate-time-series-data

Also see the [OpenReview version]().

If you find this repository useful for your research, please consider citing it as follows:

```
@article{2022FECAM,
  title={FECAM: Frequency Enhanced Channel Attention Mechanism for Time Series Forecasting},
  author={Jiang, Maowei and Zeng, Pengyu and Wang, Kai and Chen, Wenbo and Liu, Huan and Liu, Haoran},
  journal={arXiv preprint arXiv:2212.01209},
  year={2022}
}
```

## Updates

- [2022-12-01] FECAM v1.0 is released.

## Features

- [x] Support **six** popular time-series forecasting datasets, namely Electricity Transformer Temperature (ETTh1, ETTh2, ETTm1, and ETTm2), Traffic, National Illness, Electricity, and Exchange Rate, ranging over the power, energy, finance, illness, and traffic domains.
- [x] **We generalize FECAM into a module that can be flexibly and easily applied to any deep learning model with just a few lines of code** (see the module sketch in the FECAM section below).
- [x] Provide all training logs.
## To-do items

- Integrate FECAM into other mainstream models (e.g., Pyraformer, Bi-LSTM) for better performance and higher efficiency on real-world time series.
- Validate FECAM on more spatial-temporal time series datasets.
- As a sequence modelling module, we believe it can also work well on NLP tasks such as machine translation and named entity recognition. Furthermore, as a frequency enhanced module, it should in principle work in any deep learning model, e.g. ResNet.

Stay tuned!
## Get started

1. Install the required packages first (mainly Python 3.8 and PyTorch 1.9.0):
```
cd FECAM
conda create -n fecam python=3.8
conda activate fecam
pip install -r requirements.txt
```
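To confirm the environment matches the versions named above, a quick check can be run (nothing here is specific to this repository):

```python
# Environment sanity check: the step above targets Python 3.8 and PyTorch 1.9.0.
import sys
import torch

print(sys.version.split()[0])     # expect 3.8.x
print(torch.__version__)          # expect 1.9.0
print(torch.cuda.is_available())  # True if a CUDA build and a GPU are present
```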
2. Download the data. You can obtain all six benchmarks from [Tsinghua Cloud](https://cloud.tsinghua.edu.cn/d/e1ccfff39ad541908bae/) or [Google Drive](https://drive.google.com/drive/folders/1ZOYpTUa82_jCcxIdTmyr0LXQfvaM9vIy?usp=sharing). **All the datasets are well pre-processed** and can be used easily.
3. Train the model. We provide the experiment scripts for all benchmarks under the folder `./scripts`. You can reproduce the experiment results by running:
```
bash ./scripts/ETT_script/FECAM_ETTm2.sh
bash ./scripts/ECL_script/FECAM.sh
bash ./scripts/Exchange_script/FECAM.sh
bash ./scripts/Traffic_script/FECAM.sh
bash ./scripts/Weather_script/FECAM.sh
bash ./scripts/ILI_script/FECAM.sh
```

## SENet (channel attention)
<p align="center">
<img src=".\pics\SENET.png" height = "250" alt="" align=center />
</p>
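For reference, a minimal PyTorch sketch of the standard SE block pictured above; the class name and reduction ratio are illustrative, not part of this repository's API:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Standard squeeze-and-excitation: global average pool -> 2-layer MLP -> sigmoid gates."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, length) for 1-D sequences
        s = x.mean(dim=-1)          # squeeze: global average over the time axis
        w = self.fc(s)              # excitation: per-channel gates in (0, 1)
        return x * w.unsqueeze(-1)  # rescale each channel
```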

## FECAM (Frequency Enhanced Channel Attention Mechanism)
<p align="center">
<img src=".\pics\FECAM.png" height = "350" alt="" align=center />
</p>
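To make the idea concrete, here is a sketch of DCT-based channel attention: the SE-style squeeze is replaced by a discrete cosine transform over the time axis, so each channel is described by its full spectrum rather than only the zero-frequency (average) component. This is an illustration of the mechanism, not the repository's exact implementation; `FrequencyChannelAttention` and its parameters are hypothetical names.

```python
import math
import torch
import torch.nn as nn

class FrequencyChannelAttention(nn.Module):
    """Sketch of FECAM-style attention: DCT spectrum per channel -> MLP -> channel gates.

    Hypothetical illustration; not the repository's exact module.
    """
    def __init__(self, seq_len: int, reduction: int = 4):
        super().__init__()
        # Precompute an orthonormal DCT-II basis over the time axis, rows = frequencies.
        n = torch.arange(seq_len, dtype=torch.float32)
        k = n.unsqueeze(1)
        basis = torch.cos(math.pi * (n + 0.5) * k / seq_len) * math.sqrt(2.0 / seq_len)
        basis[0] = basis[0] / math.sqrt(2.0)
        self.register_buffer("dct_basis", basis)
        self.fc = nn.Sequential(
            nn.Linear(seq_len, seq_len // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(seq_len // reduction, 1),
        )
        self.gate = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, seq_len)
        spec = torch.einsum("bct,ft->bcf", x, self.dct_basis)  # DCT of each channel
        w = self.gate(self.fc(spec).squeeze(-1))               # (batch, channels) gates
        return x * w.unsqueeze(-1)
```

Used this way, the only extra parameters are the small MLP over the `seq_len` DCT coefficients, which is in line with the efficiency discussion below.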

## As a module to enhance the frequency-domain modeling capability of Transformers and LSTMs
<p align="center">
<img src=".\pics\as_module.png" height = "450" alt="" align=center />
</p>
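As an illustration of the "few lines of code" claim, the toy forecaster below gates an LSTM encoder's hidden states with the `FrequencyChannelAttention` sketch from the previous section; all names and dimensions are hypothetical:

```python
import torch
import torch.nn as nn
# FrequencyChannelAttention is the sketch defined in the FECAM section above.

class LSTMForecaster(nn.Module):
    """Toy LSTM forecaster with the frequency-attention sketch inserted after the encoder."""
    def __init__(self, channels: int = 7, seq_len: int = 96, hidden: int = 64, horizon: int = 24):
        super().__init__()
        self.encoder = nn.LSTM(channels, hidden, batch_first=True)
        self.freq_att = FrequencyChannelAttention(seq_len=seq_len)  # the one-line insertion
        self.head = nn.Linear(hidden, channels)
        self.horizon = horizon

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, channels)
        h, _ = self.encoder(x)                 # (batch, seq_len, hidden)
        h = self.freq_att(h.transpose(1, 2))   # gate hidden units by their spectra
        h = h.transpose(1, 2)
        return self.head(h[:, -self.horizon:, :])  # last `horizon` steps as forecasts

x = torch.randn(2, 96, 7)
print(LSTMForecaster()(x).shape)  # torch.Size([2, 24, 7])
```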

## Comparison with Transformers and other mainstream forecasting models
### Multivariate forecasting
<p align="center">
<img src=".\pics\mul.png" height = "550" alt="" align=center />
</p>

FECAM outperforms all transformer-based methods by a large margin.
### Univariate forecasting
<p align="center">
<img src=".\pics\uni.png" height = "280" alt="" align=center />
</p>

### Efficiency
<p align="center">
<img src=".\pics\parameter_increment.png" height = "185" alt="" align=center />
</p>
Applying our method adds only a few parameters to the vanilla models (see Table 4), so their computational complexities are preserved.

### Performance promotion with the FECAM module
<p align="center">
<img src=".\pics\performance_promotion.png" height = "390" alt="" align=center />
</p>
## Visualization

### Forecasting visualization: ETTm2 and Exchange predictions given by different models
<p align="center">
<img src=".\pics\Qualitative_withours.png" height = "397" alt="" align=center />
</p>

### FECAM visualization: frequency enhanced channel attention and the output tensor of a transformer encoder layer
The x-axis represents channels and the y-axis represents frequency from low to high, shown on the Weather and Exchange datasets.
<p align="center">
<img src=".\pics\tensor_visualization.png" height = "345" alt="" align=center />
</p>
## Used Datasets

We conduct experiments on **six** popular time-series benchmarks, namely **Electricity Transformer Temperature (ETTh1, ETTh2, ETTm1, and ETTm2), Traffic, Weather, Illness, Electricity, and Exchange Rate**, ranging over the **power, energy, finance, health care, and traffic domains**.

### Overall information of the 9 real-world datasets

| Datasets      | Variants | Timesteps | Granularity | Start time | Task Type                |
| ------------- | -------- | --------- | ----------- | ---------- | ------------------------ |
| ETTh1         | 7        | 17,420    | 1 hour      | 7/1/2016   | Multi-step               |
| ETTh2         | 7        | 17,420    | 1 hour      | 7/1/2016   | Multi-step               |
| ETTm1         | 7        | 69,680    | 15 min      | 7/1/2016   | Multi-step               |
| ETTm2         | 7        | 69,680    | 15 min      | 7/1/2016   | Multi-step & Single-step |
| ILI           | 7        | 966       | 1 week      | 1/1/2002   | Multi-step               |
| Exchange-Rate | 8        | 7,588     | 1 day       | 1/1/1990   | Multi-step & Single-step |
| Electricity   | 321      | 26,304    | 1 hour      | 1/1/2012   | Multi-step               |
| Traffic       | 862      | 17,544    | 1 hour      | 1/1/2015   | Multi-step               |
| Weather       | 21       | 52,695    | 10 min      | 1/1/2020   | Multi-step               |
354154
