
scinet's People

Contributors

ailingzengzzz, alexminhao, mixiancmx, ntvthuyen, rachellyy, vewoxic, xuqiang1116


scinet's Issues

When running on ETTh, why doesn't the model use the date information in the data?

During training, I noticed that the model does not use the date information, even though the date fields are encoded when the data is loaded. See the train function (from exp_ETTH.py):

.....
 for i, (batch_x, batch_y, batch_x_mark, batch_y_mark) in enumerate(train_loader): # 
                iter_count += 1
                model_optim.zero_grad()
                pred, pred_scale, mid, mid_scale, true, true_scale = self._process_one_batch_SCINet(
                    train_data, batch_x, batch_y)
.....

batch_x_mark and batch_y_mark are the encoded date information, but _process_one_batch_SCINet clearly does not use these two inputs. Moreover, the model's dim_in is 7, i.e. the seven features other than the date.
What is the reasoning behind this? Thanks.

Forward path returns None when stacks > 2

elif self.stacks == 2:

When stacks > 2, it seems that no return value is provided, so the forward path returns None.

  1. It is unclear how the test in Table 6 was performed for stacks > 2, since it is not implemented in the repo.
  2. Suggestion: adding an "else: raise NotImplementedError" branch at the end of the stacks if/elif would be useful, as sketched below.
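A minimal sketch of the suggested guard, assuming the forward method branches on self.stacks as quoted above (the surrounding code is paraphrased, not the repo's exact implementation):

    def forward(self, x):
        # ... encoder tree(s) run here ...
        if self.stacks == 1:
            return x
        elif self.stacks == 2:
            return x, mid
        else:
            # fail loudly instead of silently falling through and returning None
            raise NotImplementedError(f'stacks={self.stacks} is not supported')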

Improved results with no SCINet module

Hello authors / fellow GitHub members,

In models/SCINet.py, in the SCINet class's forward() method, comment out lines 334, 335, and 336 so that the input x is not passed through self.blocks1 and the subsequent residual addition step, i.e. the first stack reduces to x = self.projection1(x). Also use stacks=1 (there is no need for two stacks). Now run the experiments on the datasets.

I believe this will outperform the SCINet model.

Do try it out and let me know - it seems the odd-even splitting and interactive learning only worsen the results compared with what I have suggested, which, at its core, is a simple linear model (see the sketch below).
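A minimal sketch of the ablation being suggested, assuming the stack-1 forward path looks roughly like the pseudocode below (the commented lines paraphrase the description above and are not copied from the repo):

    def forward(self, x):
        # original stack 1 (roughly):
        #   res1 = x
        #   x = self.blocks1(x)      # SCINet encoder tree
        #   x += res1                # residual addition
        #   x = self.projection1(x)
        # suggested ablation: skip the SCINet blocks entirely and keep only
        # the linear projection over the time axis
        x = self.projection1(x)
        return x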

MS features implementation

Hi!
I want to implement the MS features option, as I'm trying to apply SCINet to financial data forecasting. With financial datasets you have tons of input data at each moment, so not using all of that data would be a major oversight in model design.

As an author, how would you implement it in the current model?

Thanks!

Hello, does adding positional encoding to the input improve the forecasts?

Hello, I see that the SCINet code contains a positional encoding module. Is this the same positional encoding as in the Transformer? Also, for the four financial datasets, why is parser.add_argument('--positionalEcoding', type = bool , default=False)
set to False? Is it because adding positional encoding has a negative effect?

Thanks for your reply.

How do I train on my own dataset?

I am a beginner. I previously ran my own dataset with the Informer model and now want to do the same with SCINet, but I get the following error:
[error screenshot omitted]
Here is what I did: I added my dataset to data_parser in run_ETTh.py, set the corresponding dataloader to Dataset_Custom, and added the configuration in ETTH_data_loader.py. Loading seems to work, but the error above is raised. The ETTh data-loading code is very similar to Informer's, yet the Informer code does not raise this error. Why is that?
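For reference, an Informer-style data_parser entry usually looks roughly like the sketch below; the dataset name, file name, target column, and feature counts are placeholders rather than values taken from this issue or from the SCINet repo:

    data_parser = {
        # 'M'/'S'/'MS' give [enc_in, dec_in, c_out] for each feature mode
        'my_dataset': {
            'data': 'my_dataset.csv',   # placeholder CSV under --root_path
            'T': 'target_col',          # placeholder target column
            'M': [7, 7, 7],
            'S': [1, 1, 1],
            'MS': [7, 7, 1],
        },
    }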

Do I get a better result?

Hi all,

I changed the code in block 1 to the code in block 2, and the result got better. Is there anything wrong?
On the datasets (ETTh1, ETTh2, ETTm1):
/block1/ (SCINet.py lines 107-114)

        x_even = x_even.permute(0, 2, 1)
        x_odd = x_odd.permute(0, 2, 1)

        d = x_odd.mul(torch.exp(self.phi(x_even)))
        c = x_even.mul(torch.exp(self.psi(x_odd)))

        x_even_update = c + self.U(d)
        x_odd_update = d - self.P(c)

/block1/

/block2/

        x_even = x_even.permute(0, 2, 1)
        x_odd = x_odd.permute(0, 2, 1)

        d = x_even
        c = x_odd

        x_even_update = c
        x_odd_update = d

/block2/
Thanks.

Eason

Add setup.py

Great repo!

May I kindly ask you to add a setup.py file so that we can pip install your repo?
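For reference, a minimal setup.py along the lines sketched below would probably be enough; the package name, version, and dependency list here are assumptions, not something taken from the repo:

    from setuptools import setup, find_packages

    setup(
        name='scinet',               # assumed distribution name
        version='0.1.0',             # assumed version
        packages=find_packages(),    # picks up models/, experiments/, etc.
        install_requires=[
            'torch',                 # core dependencies; pin versions as needed
            'numpy',
            'pandas',
        ],
    )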

Many thanks

Unable to reproduce the results

I am surprised by the excellent performance of SCINet reported in the paper. However, I cannot reproduce the results. I downloaded the code and used the same command to try to reproduce the results on the exchange dataset with a lookahead of 3. The result is rse 0.018 and corr 0.9738, which differs greatly from the result reported in the paper (rse 0.0147 and corr 0.9868). Could you please tell me what is happening?

Questions about training time

Hi all,

Appreciate your work.

I tried to train using the following command and noticed that it takes about 10 minutes to train a single epoch. I am using a V100.

python run_financial.py --dataset_name electricity --window_size 168 --horizon 24 --hidden-size 8 --single_step 1 --stacks 2 --levels 3 --lr 9e-3 --dropout 0 --batch_size 32 --model_name ele_I168_o24_lr9e-3_bs32_dp0_h8_s2l3_w0.5 --groups 321

I wonder if this is normal, because as far as I can tell the network mainly consists of conv1d and sampling operations, which should not take that much time. If you can kindly share some training specs and training times, I would be very thankful.

Matthew

Multi Input Single Output Net

Hello!

I'm in a situation where I have to predict the future value of a variable that depends strongly on another one. As I've seen in the state of the art, there are models that take the future values of those exogenous variables into account when predicting the future value of the target variable.

For example: I want to predict the heart rate, taking into account the activity the user will be doing in the future.

The first idea that came to my mind was to use the 'MS' feature value (multivariate input, single output, as I understood it) from your code instead of 'M' or 'S', to check whether the model would learn this implicitly.

The problem is that I think the 'MS' option is not completely implemented; am I wrong?

In case I wanted to develop it, which strategy do you think I should use, and where should I start? Do you think it will work properly? Will that condition be hard to implement?

Thanks in advance!

Aniol

class Splitting: odd & even functions swapped

Description

I went through the Splitting module, which splits the original sequence S into the lower-resolution subsequences S_even and S_odd by separating the even and odd elements. It seems that the two functions are swapped.

Example

import torch
x = torch.linspace(1, 500, 500)
x = x.reshape([10, 50, 1])

Using the even function gives

x[:, ::2, :][0]

tensor([[ 1.],
        [ 3.],
        [ 5.],
        [ 7.],
        [ 9.],
        [11.],
        [13.],
        [15.],
        [17.],
        [19.],
        [21.],
        [23.],
        [25.],
        [27.],
        [29.],
        [31.],
        [33.],
        [35.],
        [37.],
        [39.],
        [41.],
        [43.],
        [45.],
        [47.],
        [49.]])

While the odd function gives

x[:, 1::2, :][0]

tensor([[ 2.],
        [ 4.],
        [ 6.],
        [ 8.],
        [10.],
        [12.],
        [14.],
        [16.],
        [18.],
        [20.],
        [22.],
        [24.],
        [26.],
        [28.],
        [30.],
        [32.],
        [34.],
        [36.],
        [38.],
        [40.],
        [42.],
        [44.],
        [46.],
        [48.],
        [50.]])

This implies that the function names are swapped. I am not sure how this affects the results.
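For illustration, a minimal sketch of what the Splitting methods presumably look like, assuming the module slices along the time dimension exactly as in the example above (method names and signatures are paraphrased, not copied from the repo):

    import torch
    import torch.nn as nn

    class Splitting(nn.Module):
        def even(self, x):
            # indices 0, 2, 4, ... -> values 1, 3, 5, ... in the example above
            return x[:, ::2, :]

        def odd(self, x):
            # indices 1, 3, 5, ... -> values 2, 4, 6, ... in the example above
            return x[:, 1::2, :]

Whether this counts as swapped depends on whether positions are counted from 0 or from 1.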

v1.1 problem

exp_ETTh.py, line 18: from models.SCINet_decompose import SCINet_decomp

SCINet_decompose.py, line 42: class SCINet_decompose(nn.Module)

The imported name SCINet_decomp does not match the class name SCINet_decompose, so the import fails.

Hello, I have a question about the content of the paper

I would like to discuss two questions about equations (1) and (2) of the SCI-Block:
[1] Why is exp used in equation (1)? Is it simply to map the data into a suitable numerical range, or is there another reason?
[2] Why can the simple addition and subtraction in equation (2) compensate for the information loss caused by downsampling?
I do not know much about convolutions, so my understanding of these two equations is still limited. I hope the authors can spare some time to answer. Thanks~
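For reference, equations (1) and (2) as they appear to be implemented in the Interactor code quoted earlier on this page (notation paraphrased; the superscript s marks the scaled intermediate sequences, and the paper's exact symbols may differ):

$x_{even}^{s} = x_{even} \odot \exp(\psi(x_{odd})), \qquad x_{odd}^{s} = x_{odd} \odot \exp(\phi(x_{even}))$  (1)

$x'_{even} = x_{even}^{s} + U(x_{odd}^{s}), \qquad x'_{odd} = x_{odd}^{s} - P(x_{even}^{s})$  (2)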

RevIN Addon

Hi,
You mentioned adding RevIN support to your library. I was wondering whether it has already been added or is still on the to-do list.

How was Figure 3 in the paper produced?

While reading the paper I had a question: how were the prediction comparison plots in "Figure 3: The prediction results (Horizon = 48) of SCINet, Informer (Zhou et al. 2021), and TCN on randomly-selected sequences from ETTh1 dataset" produced?

Train on custom dataset

Hi,
Can you please give some guidance on how to train a model on my own custom dataset?

Thanks for your help!

Hello, it seems the horizon parameter is not used in financial_dataloader.py, yet the run command passes a horizon argument. Am I misunderstanding something?

    def _batchify(self, idx_set, horizon):
        n = len(idx_set)
        X = torch.zeros((n, self.P, self.m))
        Y = torch.zeros((n, self.h, self.m))
        for i in range(n):
            end = idx_set[i] - self.h + 1
            start = end - self.P
            X[i, :, :] = torch.from_numpy(self.dat[start:end, :])
            # Y[i, :, :] = torch.from_numpy(self.dat[idx_set[i] - self.h:idx_set[i], :])
            Y[i, :, :] = torch.from_numpy(self.dat[end:(idx_set[i]+1), :])

The dataset keeps running out of memory (OOM)

Hello, when running the PEMS datasets with long-term forecasting, did you also find it very easy to run out of memory? How did you solve it?

Fail to reproduce results reported in paper on multiple datasets

Hi all,

I am delighted that you have created a novel model and reported outstanding performance on nearly all the datasets you used. However, when I tried to reproduce those results with the command and code in the repo, I had some difficulty. For example, when I use the following command,

python run_financial.py --dataset_name solar_AL --window_size 160 --horizon 3 --hidden-size 1  --lastWeight 0.5 --stacks 2 --levels 4 --lradj 2 --lr 1e-4 --dropout 0.25 --batch_size 256 --model_name so_I160_o3_lr1e-4_bs256_dp0.25_h1_s2l4_w0.5

which is exactly like the command you give in the readme file, I get the following result:
rse: 0.1775 corr: 0.9852
And this is very different from the result you reported in the paper:
rse: 0.1609 corr: 0.9934

Also, I found minor bugs in the code. For example, in run_financial.py, line 95, the variable data is not defined in that context.

If you can kindly update the codes so that I can reproduce the results reported in the paper, I will be very thankful.

Thanks.

Matthew

'Tensor' object has no attribute 'dtpye'

Hello, I got the following error:

Traceback (most recent call last):
  File "run_pems.py", line 87, in <module>
    _, normalize_statistic = exp.train()
  File "D:\gjh\workspace\SCINet\experiments\exp_pems.py", line 278, in train
    forecast = self.model(inputs)
  File "C:\Users\dell\anaconda3\envs\scinet\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "D:\gjh\workspace\SCINet\models\SCINet.py", line 355, in forward
    output = torch.zeros(x.shape,dtype=x.dtpye).cuda()
AttributeError: 'Tensor' object has no attribute 'dtpye'

when I was running

python run_pems.py --dataset PEMS03 --hidden-size 0.0625 --dropout 0.25 --model_name pems03_h0.0625_dp0.25 --num_decoder_layer 2

I used the commands below to build my environment:

conda create -n scinet python=3.8
conda activate scinet
pip install -r requirements.txt (without pytorch)
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=10.2 -c pytorch -y

How can I solve this problem? Thanks!
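The traceback points at a typo in models/SCINet.py, line 355: the tensor attribute is dtype, not dtpye. A likely one-line fix (a sketch based only on the traceback above, not verified against the current repo):

    # models/SCINet.py, line 355
    output = torch.zeros(x.shape, dtype=x.dtype).cuda()  # was: dtype=x.dtpye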

Code isn't executing for Test data

Hello,
I'm trying to run the Code for the ETTh1 dataset using the following run command in Google Colab:
!python run_ETTh_10.py --data ETTh1 --features S --seq_len 96 --label_len 48 --pred_len 48 --hidden-size 4 --stacks 1 --levels 3 --lr 3e-3 --batch_size 8 --dropout 0.5 --model_name etth1_M_I48_O24_lr3e-3_bs8_dp0.5_h4_s1l3
and it runs successfully for training, early stopping at epoch 17:
`Args in experiment:
Namespace(INN=1, RIN=False, batch_size=8, c_out=1, checkpoints='exp/ETT_checkpoints/', cols=None, concat_len=0, data='ETTh1', data_path='ETTh1.csv', dec_in=1, detail_freq='h', devices='0', dilation=1, dropout=0.5, embed='timeF', enc_in=1, evaluate=False, features='S', freq='h', gpu=0, groups=1, hidden_size=4.0, inverse=False, itr=0, kernel=5, label_len=48, lastWeight=1.0, levels=3, loss='mae', lr=0.003, lradj=1, model='SCINet', model_name='etth1_M_I48_O24_lr3e-3_bs8_dp0.5_h4_s1l3', num_decoder_layer=1, num_workers=0, patience=5, positionalEcoding=False, pred_len=48, resume=False, root_path='./datasets/', save=False, seq_len=96, single_step=0, single_step_output_One=0, stacks=1, target='OT', train_epochs=100, use_amp=False, use_gpu=True, use_multi_gpu=False, window_size=12)
SCINet(
(blocks1): EncoderTree(
(SCINet_Tree): SCINet_Tree(
(workingblock): LevelSCINet(
(interact): InteractorLevel(
(level): Interactor(
(split): Splitting()
(phi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(psi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(P): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(U): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
)
)
)
(SCINet_Tree_odd): SCINet_Tree(
(workingblock): LevelSCINet(
(interact): InteractorLevel(
(level): Interactor(
(split): Splitting()
(phi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(psi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(P): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(U): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
)
)
)
(SCINet_Tree_odd): SCINet_Tree(
(workingblock): LevelSCINet(
(interact): InteractorLevel(
(level): Interactor(
(split): Splitting()
(phi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(psi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(P): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(U): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
)
)
)
)
(SCINet_Tree_even): SCINet_Tree(
(workingblock): LevelSCINet(
(interact): InteractorLevel(
(level): Interactor(
(split): Splitting()
(phi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(psi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(P): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(U): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
)
)
)
)
)
(SCINet_Tree_even): SCINet_Tree(
(workingblock): LevelSCINet(
(interact): InteractorLevel(
(level): Interactor(
(split): Splitting()
(phi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(psi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(P): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(U): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
)
)
)
(SCINet_Tree_odd): SCINet_Tree(
(workingblock): LevelSCINet(
(interact): InteractorLevel(
(level): Interactor(
(split): Splitting()
(phi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(psi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(P): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(U): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
)
)
)
)
(SCINet_Tree_even): SCINet_Tree(
(workingblock): LevelSCINet(
(interact): InteractorLevel(
(level): Interactor(
(split): Splitting()
(phi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(psi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(P): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(U): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
)
)
)
)
)
)
)
(projection1): Conv1d(96, 48, kernel_size=(1,), stride=(1,), bias=False)
(div_projection): ModuleList()
)

start training : SCINet_ETTh1_ftS_sl96_ll48_pl48_lr0.003_bs8_hid4.0_s1_l3_dp0.5_invFalse_itr0>>>>>>>>>>>>>>>>>>>>>>>>>>
train 8497
val 2833
test 2833
exp/ETT_checkpoints/SCINet_ETTh1_ftS_sl96_ll48_pl48_lr0.003_bs8_hid4.0_s1_l3_dp0.5_invFalse_itr0
iters: 100, epoch: 1 | loss: 0.2635144
speed: 0.0918s/iter; left time: 9735.2018s
iters: 200, epoch: 1 | loss: 0.2746293
speed: 0.0640s/iter; left time: 6782.9779s
iters: 300, epoch: 1 | loss: 0.2532458
speed: 0.0641s/iter; left time: 6787.8207s
iters: 400, epoch: 1 | loss: 0.2308514
speed: 0.0644s/iter; left time: 6817.4635s
iters: 500, epoch: 1 | loss: 0.3040747
speed: 0.0651s/iter; left time: 6883.0310s
iters: 600, epoch: 1 | loss: 0.2578846
speed: 0.0644s/iter; left time: 6798.4408s
iters: 700, epoch: 1 | loss: 0.2459396
speed: 0.0634s/iter; left time: 6689.3999s
iters: 800, epoch: 1 | loss: 0.2914965
speed: 0.0653s/iter; left time: 6883.0213s
iters: 900, epoch: 1 | loss: 0.2554513
speed: 0.0641s/iter; left time: 6750.0606s
iters: 1000, epoch: 1 | loss: 0.2524573
speed: 0.0650s/iter; left time: 6838.8867s
Epoch: 1 cost time: 71.33614897727966
--------start to validate-----------
normed mse:0.0814, mae:0.2147, rmse:0.2852, mape:1.3623, mspe:25.9642, corr:0.8572
denormed mse:6.8514, mae:1.9706, rmse:2.6175, mape:0.1642, mspe:0.0813, corr:0.8572
--------start to test-----------
normed mse:0.0892, mae:0.2317, rmse:0.2987, mape:0.1735, mspe:0.0476, corr:0.8131
denormed mse:7.5120, mae:2.1258, rmse:2.7408, mape:inf, mspe:inf, corr:0.8131
Epoch: 1, Steps: 1062 | Train Loss: 0.2954810 valid Loss: 0.2147395 Test Loss: 0.2316557
Validation loss decreased (inf --> 0.214739). Saving model ...
Updating learning rate to 0.00285
iters: 100, epoch: 2 | loss: 0.2651560
speed: 0.2249s/iter; left time: 23618.5159s
iters: 200, epoch: 2 | loss: 0.3282328
speed: 0.0653s/iter; left time: 6857.5758s
iters: 300, epoch: 2 | loss: 0.2604796
speed: 0.0648s/iter; left time: 6791.7008s
iters: 400, epoch: 2 | loss: 0.2489426
speed: 0.0648s/iter; left time: 6790.2499s
iters: 500, epoch: 2 | loss: 0.3080887
speed: 0.0640s/iter; left time: 6695.5776s
iters: 600, epoch: 2 | loss: 0.2923984
speed: 0.0637s/iter; left time: 6657.3868s
iters: 700, epoch: 2 | loss: 0.3410122
speed: 0.0639s/iter; left time: 6673.1482s
iters: 800, epoch: 2 | loss: 0.2606724
speed: 0.0651s/iter; left time: 6792.7796s
iters: 900, epoch: 2 | loss: 0.2952042
speed: 0.0646s/iter; left time: 6730.5264s
iters: 1000, epoch: 2 | loss: 0.1823040
speed: 0.0639s/iter; left time: 6656.2354s
Epoch: 2 cost time: 68.4526846408844
--------start to validate-----------
normed mse:0.0799, mae:0.2147, rmse:0.2827, mape:1.4790, mspe:30.6527, corr:0.8621
denormed mse:6.7300, mae:1.9705, rmse:2.5942, mape:0.1594, mspe:0.0701, corr:0.8621
--------start to test-----------
normed mse:0.0586, mae:0.1883, rmse:0.2420, mape:0.1493, mspe:0.0399, corr:0.8178
denormed mse:4.9326, mae:1.7278, rmse:2.2210, mape:inf, mspe:inf, corr:0.8178
Epoch: 2, Steps: 1062 | Train Loss: 0.2776401 valid Loss: 0.2147304 Test Loss: 0.1882900
Validation loss decreased (0.214739 --> 0.214730). Saving model ...
Updating learning rate to 0.0027075
iters: 100, epoch: 3 | loss: 0.3098924
speed: 0.2221s/iter; left time: 23097.3052s
iters: 200, epoch: 3 | loss: 0.2770404
speed: 0.0643s/iter; left time: 6683.5361s
iters: 300, epoch: 3 | loss: 0.3261764
speed: 0.0639s/iter; left time: 6627.7702s
iters: 400, epoch: 3 | loss: 0.3463621
speed: 0.0645s/iter; left time: 6690.5621s
iters: 500, epoch: 3 | loss: 0.2016839
speed: 0.0669s/iter; left time: 6931.5046s
iters: 600, epoch: 3 | loss: 0.2773947
speed: 0.0660s/iter; left time: 6829.7108s
iters: 700, epoch: 3 | loss: 0.2436916
speed: 0.0645s/iter; left time: 6672.7252s
iters: 800, epoch: 3 | loss: 0.3100113
speed: 0.0636s/iter; left time: 6570.5793s
iters: 900, epoch: 3 | loss: 0.2406910
speed: 0.0637s/iter; left time: 6570.7141s
iters: 1000, epoch: 3 | loss: 0.2994752
speed: 0.0644s/iter; left time: 6641.0536s
Epoch: 3 cost time: 68.57215976715088
--------start to validate-----------
normed mse:0.0780, mae:0.2130, rmse:0.2793, mape:1.4954, mspe:33.6005, corr:0.8610
denormed mse:6.5708, mae:1.9548, rmse:2.5634, mape:0.1579, mspe:0.0679, corr:0.8610
--------start to test-----------
normed mse:0.0572, mae:0.1855, rmse:0.2392, mape:0.1437, mspe:0.0355, corr:0.8176
denormed mse:4.8174, mae:1.7019, rmse:2.1949, mape:inf, mspe:inf, corr:0.8176
Epoch: 3, Steps: 1062 | Train Loss: 0.2742680 valid Loss: 0.2130265 Test Loss: 0.1854673
Validation loss decreased (0.214730 --> 0.213026). Saving model ...
Updating learning rate to 0.0025721249999999998
iters: 100, epoch: 4 | loss: 0.3014244
speed: 0.2245s/iter; left time: 23101.4108s
iters: 200, epoch: 4 | loss: 0.2271728
speed: 0.0643s/iter; left time: 6606.3076s
iters: 300, epoch: 4 | loss: 0.3784584
speed: 0.0640s/iter; left time: 6569.4517s
iters: 400, epoch: 4 | loss: 0.2752601
speed: 0.0653s/iter; left time: 6696.5213s
iters: 500, epoch: 4 | loss: 0.3025605
speed: 0.0638s/iter; left time: 6536.2859s
iters: 600, epoch: 4 | loss: 0.2795481
speed: 0.0638s/iter; left time: 6538.9267s
iters: 700, epoch: 4 | loss: 0.2788646
speed: 0.0632s/iter; left time: 6465.8545s
iters: 800, epoch: 4 | loss: 0.2323274
speed: 0.0640s/iter; left time: 6545.1154s
iters: 900, epoch: 4 | loss: 0.2965076
speed: 0.0648s/iter; left time: 6620.8814s
iters: 1000, epoch: 4 | loss: 0.2785395
speed: 0.0643s/iter; left time: 6555.9024s
Epoch: 4 cost time: 68.27407765388489
--------start to validate-----------
normed mse:0.0787, mae:0.2138, rmse:0.2805, mape:1.5257, mspe:33.8453, corr:0.8624
denormed mse:6.6266, mae:1.9617, rmse:2.5742, mape:0.1578, mspe:0.0676, corr:0.8624
--------start to test-----------
normed mse:0.0564, mae:0.1850, rmse:0.2375, mape:0.1471, mspe:0.0391, corr:0.8203
denormed mse:4.7516, mae:1.6981, rmse:2.1798, mape:inf, mspe:inf, corr:0.8203
Epoch: 4, Steps: 1062 | Train Loss: 0.2719553 valid Loss: 0.2137705 Test Loss: 0.1850464
EarlyStopping counter: 1 out of 5
Updating learning rate to 0.0024435187499999996
iters: 100, epoch: 5 | loss: 0.3289642
speed: 0.2193s/iter; left time: 22341.4123s
iters: 200, epoch: 5 | loss: 0.2475609
speed: 0.0641s/iter; left time: 6520.0001s
iters: 300, epoch: 5 | loss: 0.2934369
speed: 0.0640s/iter; left time: 6501.5043s
iters: 400, epoch: 5 | loss: 0.3514774
speed: 0.0639s/iter; left time: 6488.8877s
iters: 500, epoch: 5 | loss: 0.3289756
speed: 0.0650s/iter; left time: 6592.7983s
iters: 600, epoch: 5 | loss: 0.3147124
speed: 0.0660s/iter; left time: 6684.8574s
iters: 700, epoch: 5 | loss: 0.2444675
speed: 0.0656s/iter; left time: 6642.7295s
iters: 800, epoch: 5 | loss: 0.2227931
speed: 0.0648s/iter; left time: 6558.8881s
iters: 900, epoch: 5 | loss: 0.2905650
speed: 0.0645s/iter; left time: 6519.5313s
iters: 1000, epoch: 5 | loss: 0.2011140
speed: 0.0643s/iter; left time: 6490.7155s
Epoch: 5 cost time: 68.49428033828735
--------start to validate-----------
normed mse:0.0795, mae:0.2143, rmse:0.2820, mape:1.5023, mspe:32.9689, corr:0.8631
denormed mse:6.6963, mae:1.9665, rmse:2.5877, mape:0.1583, mspe:0.0665, corr:0.8631
--------start to test-----------
normed mse:0.0568, mae:0.1897, rmse:0.2384, mape:0.1513, mspe:0.0402, corr:0.8214
denormed mse:4.7869, mae:1.7409, rmse:2.1879, mape:inf, mspe:inf, corr:0.8214
Epoch: 5, Steps: 1062 | Train Loss: 0.2680931 valid Loss: 0.2143007 Test Loss: 0.1897096
EarlyStopping counter: 2 out of 5
Updating learning rate to 0.0023213428124999992
iters: 100, epoch: 6 | loss: 0.2878237
speed: 0.2231s/iter; left time: 22487.2134s
iters: 200, epoch: 6 | loss: 0.3053960
speed: 0.0642s/iter; left time: 6463.6262s
iters: 300, epoch: 6 | loss: 0.2794231
speed: 0.0657s/iter; left time: 6608.5483s
iters: 400, epoch: 6 | loss: 0.1824071
speed: 0.0658s/iter; left time: 6613.3665s
iters: 500, epoch: 6 | loss: 0.3717845
speed: 0.0657s/iter; left time: 6599.7808s
iters: 600, epoch: 6 | loss: 0.2623390
speed: 0.0657s/iter; left time: 6589.1822s
iters: 700, epoch: 6 | loss: 0.2274510
speed: 0.0651s/iter; left time: 6523.5771s
iters: 800, epoch: 6 | loss: 0.2571564
speed: 0.0665s/iter; left time: 6657.4378s
iters: 900, epoch: 6 | loss: 0.2891446
speed: 0.0667s/iter; left time: 6670.0139s
iters: 1000, epoch: 6 | loss: 0.3868507
speed: 0.0665s/iter; left time: 6639.8010s
Epoch: 6 cost time: 69.7518322467804
--------start to validate-----------
normed mse:0.0777, mae:0.2116, rmse:0.2788, mape:1.5445, mspe:35.4777, corr:0.8636
denormed mse:6.5435, mae:1.9422, rmse:2.5580, mape:0.1556, mspe:0.0658, corr:0.8636
--------start to test-----------
normed mse:0.0489, mae:0.1677, rmse:0.2211, mape:0.1310, mspe:0.0322, corr:0.8212
denormed mse:4.1181, mae:1.5390, rmse:2.0293, mape:inf, mspe:inf, corr:0.8212
Epoch: 6, Steps: 1062 | Train Loss: 0.2668546 valid Loss: 0.2116495 Test Loss: 0.1677159
Validation loss decreased (0.213026 --> 0.211650). Saving model ...
Updating learning rate to 0.0022052756718749992
iters: 100, epoch: 7 | loss: 0.2596520
speed: 0.2240s/iter; left time: 22334.3117s
iters: 200, epoch: 7 | loss: 0.2324577
speed: 0.0640s/iter; left time: 6381.1203s
iters: 300, epoch: 7 | loss: 0.2214808
speed: 0.0651s/iter; left time: 6474.6273s
iters: 400, epoch: 7 | loss: 0.2045112
speed: 0.0632s/iter; left time: 6281.8838s
iters: 500, epoch: 7 | loss: 0.2396872
speed: 0.0636s/iter; left time: 6316.9631s
iters: 600, epoch: 7 | loss: 0.1907633
speed: 0.0644s/iter; left time: 6388.6488s
iters: 700, epoch: 7 | loss: 0.2620018
speed: 0.0747s/iter; left time: 7404.8432s
iters: 800, epoch: 7 | loss: 0.2821859
speed: 0.0652s/iter; left time: 6457.3174s
iters: 900, epoch: 7 | loss: 0.2233998
speed: 0.0641s/iter; left time: 6339.8982s
iters: 1000, epoch: 7 | loss: 0.2333842
speed: 0.0648s/iter; left time: 6402.4253s
Epoch: 7 cost time: 69.41319704055786
--------start to validate-----------
normed mse:0.0770, mae:0.2089, rmse:0.2776, mape:1.3294, mspe:25.3847, corr:0.8637
denormed mse:6.4876, mae:1.9170, rmse:2.5471, mape:0.1582, mspe:0.0734, corr:0.8637
--------start to test-----------
normed mse:0.0935, mae:0.2430, rmse:0.3058, mape:0.1754, mspe:0.0452, corr:0.8126
denormed mse:7.8770, mae:2.2295, rmse:2.8066, mape:inf, mspe:inf, corr:0.8126
Epoch: 7, Steps: 1062 | Train Loss: 0.2652045 valid Loss: 0.2089042 Test Loss: 0.2429530
Validation loss decreased (0.211650 --> 0.208904). Saving model ...
Updating learning rate to 0.0020950118882812493
iters: 100, epoch: 8 | loss: 0.2476663
speed: 0.2239s/iter; left time: 22095.3285s
iters: 200, epoch: 8 | loss: 0.3140231
speed: 0.0660s/iter; left time: 6502.9781s
iters: 300, epoch: 8 | loss: 0.2049506
speed: 0.0647s/iter; left time: 6368.1886s
iters: 400, epoch: 8 | loss: 0.2751644
speed: 0.0648s/iter; left time: 6373.9229s
iters: 500, epoch: 8 | loss: 0.3105533
speed: 0.0664s/iter; left time: 6520.4210s
iters: 600, epoch: 8 | loss: 0.2195727
speed: 0.0640s/iter; left time: 6287.0444s
iters: 700, epoch: 8 | loss: 0.2385361
speed: 0.0644s/iter; left time: 6313.8700s
iters: 800, epoch: 8 | loss: 0.2514920
speed: 0.0661s/iter; left time: 6476.8801s
iters: 900, epoch: 8 | loss: 0.2696154
speed: 0.0639s/iter; left time: 6250.8148s
iters: 1000, epoch: 8 | loss: 0.2994070
speed: 0.0646s/iter; left time: 6314.7932s
Epoch: 8 cost time: 69.0834846496582
--------start to validate-----------
normed mse:0.0786, mae:0.2097, rmse:0.2803, mape:1.3397, mspe:24.8040, corr:0.8618
denormed mse:6.6175, mae:1.9241, rmse:2.5725, mape:0.1595, mspe:0.0748, corr:0.8618
--------start to test-----------
normed mse:0.0803, mae:0.2185, rmse:0.2833, mape:0.1605, mspe:0.0409, corr:0.8191
denormed mse:6.7598, mae:2.0055, rmse:2.6000, mape:inf, mspe:inf, corr:0.8191
Epoch: 8, Steps: 1062 | Train Loss: 0.2626855 valid Loss: 0.2096790 Test Loss: 0.2185455
EarlyStopping counter: 1 out of 5
Updating learning rate to 0.0019902612938671868
iters: 100, epoch: 9 | loss: 0.2035707
speed: 0.2216s/iter; left time: 21633.7680s
iters: 200, epoch: 9 | loss: 0.1929587
speed: 0.0643s/iter; left time: 6273.6812s
iters: 300, epoch: 9 | loss: 0.2027658
speed: 0.0644s/iter; left time: 6275.3628s
iters: 400, epoch: 9 | loss: 0.1670165
speed: 0.0655s/iter; left time: 6372.5070s
iters: 500, epoch: 9 | loss: 0.3162191
speed: 0.0644s/iter; left time: 6264.6492s
iters: 600, epoch: 9 | loss: 0.2530913
speed: 0.0642s/iter; left time: 6233.9174s
iters: 700, epoch: 9 | loss: 0.2298067
speed: 0.0676s/iter; left time: 6557.6573s
iters: 800, epoch: 9 | loss: 0.2281127
speed: 0.0661s/iter; left time: 6406.0165s
iters: 900, epoch: 9 | loss: 0.2107817
speed: 0.0664s/iter; left time: 6426.1786s
iters: 1000, epoch: 9 | loss: 0.2355524
speed: 0.0656s/iter; left time: 6345.4314s
Epoch: 9 cost time: 69.29217648506165
--------start to validate-----------
normed mse:0.0747, mae:0.2069, rmse:0.2732, mape:1.4617, mspe:31.3834, corr:0.8653
denormed mse:6.2872, mae:1.8984, rmse:2.5074, mape:0.1508, mspe:0.0608, corr:0.8653
--------start to test-----------
normed mse:0.0819, mae:0.2269, rmse:0.2861, mape:0.1668, mspe:0.0438, corr:0.7987
denormed mse:6.8943, mae:2.0818, rmse:2.6257, mape:inf, mspe:inf, corr:0.7987
Epoch: 9, Steps: 1062 | Train Loss: 0.2619727 valid Loss: 0.2068796 Test Loss: 0.2268594
Validation loss decreased (0.208904 --> 0.206880). Saving model ...
Updating learning rate to 0.0018907482291738273
iters: 100, epoch: 10 | loss: 0.2934171
speed: 0.2292s/iter; left time: 22127.8739s
iters: 200, epoch: 10 | loss: 0.3041674
speed: 0.0652s/iter; left time: 6284.6789s
iters: 300, epoch: 10 | loss: 0.2558330
speed: 0.0646s/iter; left time: 6225.6793s
iters: 400, epoch: 10 | loss: 0.3225133
speed: 0.0643s/iter; left time: 6186.9587s
iters: 500, epoch: 10 | loss: 0.2021957
speed: 0.0646s/iter; left time: 6214.6386s
iters: 600, epoch: 10 | loss: 0.2634687
speed: 0.0644s/iter; left time: 6184.0262s
iters: 700, epoch: 10 | loss: 0.3601710
speed: 0.0649s/iter; left time: 6231.3502s
iters: 800, epoch: 10 | loss: 0.2715219
speed: 0.0643s/iter; left time: 6158.7077s
iters: 900, epoch: 10 | loss: 0.3285161
speed: 0.0662s/iter; left time: 6337.7003s
iters: 1000, epoch: 10 | loss: 0.2874655
speed: 0.0650s/iter; left time: 6218.2058s
Epoch: 10 cost time: 69.2235357761383
--------start to validate-----------
normed mse:0.0741, mae:0.2067, rmse:0.2721, mape:1.4288, mspe:32.2580, corr:0.8650
denormed mse:6.2362, mae:1.8968, rmse:2.4972, mape:0.1524, mspe:0.0658, corr:0.8650
--------start to test-----------
normed mse:0.1252, mae:0.2862, rmse:0.3538, mape:0.2056, mspe:0.0598, corr:0.7859
denormed mse:10.5420, mae:2.6266, rmse:3.2468, mape:inf, mspe:inf, corr:0.7859
Epoch: 10, Steps: 1062 | Train Loss: 0.2617135 valid Loss: 0.2067047 Test Loss: 0.2862312
Validation loss decreased (0.206880 --> 0.206705). Saving model ...
Updating learning rate to 0.001796210817715136
iters: 100, epoch: 11 | loss: 0.2604572
speed: 0.2211s/iter; left time: 21110.7798s
iters: 200, epoch: 11 | loss: 0.1902495
speed: 0.0639s/iter; left time: 6093.1127s
iters: 300, epoch: 11 | loss: 0.2706100
speed: 0.0645s/iter; left time: 6144.9446s
iters: 400, epoch: 11 | loss: 0.2700502
speed: 0.0641s/iter; left time: 6099.0807s
iters: 500, epoch: 11 | loss: 0.2039715
speed: 0.0644s/iter; left time: 6119.2977s
iters: 600, epoch: 11 | loss: 0.2211753
speed: 0.0644s/iter; left time: 6116.2928s
iters: 700, epoch: 11 | loss: 0.3060542
speed: 0.0641s/iter; left time: 6078.2290s
iters: 800, epoch: 11 | loss: 0.2073108
speed: 0.0636s/iter; left time: 6031.3409s
iters: 900, epoch: 11 | loss: 0.3166656
speed: 0.0642s/iter; left time: 6077.3573s
iters: 1000, epoch: 11 | loss: 0.2857897
speed: 0.0636s/iter; left time: 6020.0385s
Epoch: 11 cost time: 68.08029365539551
--------start to validate-----------
normed mse:0.0780, mae:0.2101, rmse:0.2792, mape:1.4069, mspe:27.8713, corr:0.8645
denormed mse:6.5642, mae:1.9282, rmse:2.5621, mape:0.1562, mspe:0.0672, corr:0.8645
--------start to test-----------
normed mse:0.0486, mae:0.1696, rmse:0.2206, mape:0.1346, mspe:0.0347, corr:0.8196
denormed mse:4.0965, mae:1.5563, rmse:2.0240, mape:inf, mspe:inf, corr:0.8196
Epoch: 11, Steps: 1062 | Train Loss: 0.2599722 valid Loss: 0.2101241 Test Loss: 0.1695920
EarlyStopping counter: 1 out of 5
Updating learning rate to 0.0017064002768293791
iters: 100, epoch: 12 | loss: 0.3560360
speed: 0.2200s/iter; left time: 20776.4963s
iters: 200, epoch: 12 | loss: 0.2906877
speed: 0.0638s/iter; left time: 6017.2672s
iters: 300, epoch: 12 | loss: 0.3136698
speed: 0.0640s/iter; left time: 6029.7119s
iters: 400, epoch: 12 | loss: 0.4521513
speed: 0.0637s/iter; left time: 5992.6421s
iters: 500, epoch: 12 | loss: 0.2683279
speed: 0.0635s/iter; left time: 5973.5166s
iters: 600, epoch: 12 | loss: 0.2095424
speed: 0.0632s/iter; left time: 5932.0850s
iters: 700, epoch: 12 | loss: 0.3217563
speed: 0.0646s/iter; left time: 6062.9666s
iters: 800, epoch: 12 | loss: 0.2670196
speed: 0.0635s/iter; left time: 5954.2641s
iters: 900, epoch: 12 | loss: 0.2306930
speed: 0.0639s/iter; left time: 5977.7809s
iters: 1000, epoch: 12 | loss: 0.2080201
speed: 0.0633s/iter; left time: 5915.4393s
Epoch: 12 cost time: 67.63420724868774
--------start to validate-----------
normed mse:0.0752, mae:0.2047, rmse:0.2742, mape:1.3057, mspe:24.8466, corr:0.8630
denormed mse:6.3301, mae:1.8782, rmse:2.5160, mape:0.1537, mspe:0.0697, corr:0.8630
--------start to test-----------
normed mse:0.1127, mae:0.2651, rmse:0.3357, mape:0.1911, mspe:0.0544, corr:0.7504
denormed mse:9.4922, mae:2.4327, rmse:3.0809, mape:inf, mspe:inf, corr:0.7504
Epoch: 12, Steps: 1062 | Train Loss: 0.2593013 valid Loss: 0.2046781 Test Loss: 0.2651025
Validation loss decreased (0.206705 --> 0.204678). Saving model ...
Updating learning rate to 0.00162108026298791
iters: 100, epoch: 13 | loss: 0.2679453
speed: 0.2223s/iter; left time: 20751.6236s
iters: 200, epoch: 13 | loss: 0.2244501
speed: 0.0640s/iter; left time: 5970.0265s
iters: 300, epoch: 13 | loss: 0.2729070
speed: 0.0638s/iter; left time: 5944.9959s
iters: 400, epoch: 13 | loss: 0.2141117
speed: 0.0642s/iter; left time: 5973.2411s
iters: 500, epoch: 13 | loss: 0.2737395
speed: 0.0649s/iter; left time: 6035.3685s
iters: 600, epoch: 13 | loss: 0.3773285
speed: 0.0655s/iter; left time: 6084.1809s
iters: 700, epoch: 13 | loss: 0.3060603
speed: 0.0651s/iter; left time: 6037.3794s
iters: 800, epoch: 13 | loss: 0.3271270
speed: 0.0639s/iter; left time: 5919.0781s
iters: 900, epoch: 13 | loss: 0.2570842
speed: 0.0644s/iter; left time: 5959.6727s
iters: 1000, epoch: 13 | loss: 0.1967695
speed: 0.0650s/iter; left time: 6008.8446s
Epoch: 13 cost time: 68.51006627082825
--------start to validate-----------
normed mse:0.0754, mae:0.2070, rmse:0.2745, mape:1.3459, mspe:27.9367, corr:0.8626
denormed mse:6.3463, mae:1.8996, rmse:2.5192, mape:0.1547, mspe:0.0686, corr:0.8626
--------start to test-----------
normed mse:0.1270, mae:0.2863, rmse:0.3563, mape:0.2047, mspe:0.0591, corr:0.7302
denormed mse:10.6924, mae:2.6270, rmse:3.2699, mape:inf, mspe:inf, corr:0.7302
Epoch: 13, Steps: 1062 | Train Loss: 0.2578851 valid Loss: 0.2070073 Test Loss: 0.2862706
EarlyStopping counter: 1 out of 5
Updating learning rate to 0.0015400262498385146
iters: 100, epoch: 14 | loss: 0.3828691
speed: 0.2242s/iter; left time: 20692.6053s
iters: 200, epoch: 14 | loss: 0.2255980
speed: 0.0655s/iter; left time: 6034.3079s
iters: 300, epoch: 14 | loss: 0.2057881
speed: 0.0651s/iter; left time: 5999.9542s
iters: 400, epoch: 14 | loss: 0.2044961
speed: 0.0654s/iter; left time: 6012.5291s
iters: 500, epoch: 14 | loss: 0.1950546
speed: 0.0659s/iter; left time: 6053.8845s
iters: 600, epoch: 14 | loss: 0.2513721
speed: 0.0663s/iter; left time: 6081.8917s
iters: 700, epoch: 14 | loss: 0.3617742
speed: 0.0676s/iter; left time: 6199.0184s
iters: 800, epoch: 14 | loss: 0.1818448
speed: 0.0660s/iter; left time: 6047.2192s
iters: 900, epoch: 14 | loss: 0.2921709
speed: 0.0637s/iter; left time: 5831.9500s
iters: 1000, epoch: 14 | loss: 0.4443547
speed: 0.0636s/iter; left time: 5812.0902s
Epoch: 14 cost time: 69.46138763427734
--------start to validate-----------
normed mse:0.0749, mae:0.2054, rmse:0.2737, mape:1.3964, mspe:29.3059, corr:0.8646
denormed mse:6.3101, mae:1.8853, rmse:2.5120, mape:0.1517, mspe:0.0638, corr:0.8646
--------start to test-----------
normed mse:0.0582, mae:0.1803, rmse:0.2413, mape:0.1373, mspe:0.0348, corr:0.7685
denormed mse:4.9051, mae:1.6544, rmse:2.2147, mape:inf, mspe:inf, corr:0.7685
Epoch: 14, Steps: 1062 | Train Loss: 0.2567903 valid Loss: 0.2054437 Test Loss: 0.1802866
EarlyStopping counter: 2 out of 5
Updating learning rate to 0.0014630249373465886
iters: 100, epoch: 15 | loss: 0.2464596
speed: 0.2214s/iter; left time: 20196.9599s
iters: 200, epoch: 15 | loss: 0.3060012
speed: 0.0648s/iter; left time: 5902.4050s
iters: 300, epoch: 15 | loss: 0.3132369
speed: 0.0647s/iter; left time: 5892.1457s
iters: 400, epoch: 15 | loss: 0.3352527
speed: 0.0645s/iter; left time: 5867.1592s
iters: 500, epoch: 15 | loss: 0.2343948
speed: 0.0640s/iter; left time: 5814.1662s
iters: 600, epoch: 15 | loss: 0.1983065
speed: 0.0636s/iter; left time: 5772.0216s
iters: 700, epoch: 15 | loss: 0.1803256
speed: 0.0638s/iter; left time: 5782.6625s
iters: 800, epoch: 15 | loss: 0.2608042
speed: 0.0640s/iter; left time: 5795.1071s
iters: 900, epoch: 15 | loss: 0.3982482
speed: 0.0656s/iter; left time: 5930.5014s
iters: 1000, epoch: 15 | loss: 0.2794555
speed: 0.0649s/iter; left time: 5859.7955s
Epoch: 15 cost time: 68.22825241088867
--------start to validate-----------
normed mse:0.0751, mae:0.2062, rmse:0.2740, mape:1.4129, mspe:29.4700, corr:0.8638
denormed mse:6.3221, mae:1.8922, rmse:2.5144, mape:0.1511, mspe:0.0607, corr:0.8638
--------start to test-----------
normed mse:0.0743, mae:0.2067, rmse:0.2726, mape:0.1542, mspe:0.0414, corr:0.7309
denormed mse:6.2598, mae:1.8971, rmse:2.5020, mape:inf, mspe:inf, corr:0.7309
Epoch: 15, Steps: 1062 | Train Loss: 0.2556333 valid Loss: 0.2062003 Test Loss: 0.2067328
EarlyStopping counter: 3 out of 5
Updating learning rate to 0.001389873690479259
iters: 100, epoch: 16 | loss: 0.4233045
speed: 0.2250s/iter; left time: 20289.9657s
iters: 200, epoch: 16 | loss: 0.2079662
speed: 0.0654s/iter; left time: 5887.5618s
iters: 300, epoch: 16 | loss: 0.2657562
speed: 0.0672s/iter; left time: 6045.0847s
iters: 400, epoch: 16 | loss: 0.1947590
speed: 0.0716s/iter; left time: 6432.6096s
iters: 500, epoch: 16 | loss: 0.2555844
speed: 0.0686s/iter; left time: 6157.6966s
iters: 600, epoch: 16 | loss: 0.2228586
speed: 0.0686s/iter; left time: 6153.4446s
iters: 700, epoch: 16 | loss: 0.2561190
speed: 0.0693s/iter; left time: 6211.2939s
iters: 800, epoch: 16 | loss: 0.2335158
speed: 0.0682s/iter; left time: 6099.8479s
iters: 900, epoch: 16 | loss: 0.1972947
speed: 0.0681s/iter; left time: 6082.7818s
iters: 1000, epoch: 16 | loss: 0.2210546
speed: 0.0684s/iter; left time: 6108.1396s
Epoch: 16 cost time: 72.44369554519653
--------start to validate-----------
normed mse:0.0767, mae:0.2066, rmse:0.2769, mape:1.3287, mspe:26.0432, corr:0.8614
denormed mse:6.4587, mae:1.8956, rmse:2.5414, mape:0.1549, mspe:0.0696, corr:0.8614
--------start to test-----------
normed mse:0.0722, mae:0.2028, rmse:0.2686, mape:0.1513, mspe:0.0397, corr:0.7538
denormed mse:6.0773, mae:1.8609, rmse:2.4652, mape:inf, mspe:inf, corr:0.7538
Epoch: 16, Steps: 1062 | Train Loss: 0.2538822 valid Loss: 0.2065719 Test Loss: 0.2027889
EarlyStopping counter: 4 out of 5
Updating learning rate to 0.001320380005955296
iters: 100, epoch: 17 | loss: 0.2435877
speed: 0.2235s/iter; left time: 19918.0769s
iters: 200, epoch: 17 | loss: 0.2681281
speed: 0.0643s/iter; left time: 5724.9103s
iters: 300, epoch: 17 | loss: 0.2386408
speed: 0.0645s/iter; left time: 5732.2681s
iters: 400, epoch: 17 | loss: 0.1940629
speed: 0.0645s/iter; left time: 5730.9041s
iters: 500, epoch: 17 | loss: 0.1982391
speed: 0.0647s/iter; left time: 5735.3843s
iters: 600, epoch: 17 | loss: 0.2039342
speed: 0.0645s/iter; left time: 5713.2374s
iters: 700, epoch: 17 | loss: 0.2077632
speed: 0.0648s/iter; left time: 5738.5635s
iters: 800, epoch: 17 | loss: 0.3183163
speed: 0.0640s/iter; left time: 5656.7522s
iters: 900, epoch: 17 | loss: 0.2791365
speed: 0.0648s/iter; left time: 5723.3435s
iters: 1000, epoch: 17 | loss: 0.2257991
speed: 0.0642s/iter; left time: 5660.2778s
Epoch: 17 cost time: 68.39220643043518
--------start to validate-----------
normed mse:0.0785, mae:0.2103, rmse:0.2803, mape:1.3704, mspe:27.2711, corr:0.8559
denormed mse:6.6141, mae:1.9294, rmse:2.5718, mape:0.1568, mspe:0.0703, corr:0.8559
--------start to test-----------
normed mse:0.1018, mae:0.2485, rmse:0.3191, mape:0.1800, mspe:0.0498, corr:0.7221
denormed mse:8.5757, mae:2.2802, rmse:2.9284, mape:inf, mspe:inf, corr:0.7221
Epoch: 17, Steps: 1062 | Train Loss: 0.2532137 valid Loss: 0.2102546 Test Loss: 0.2484858
EarlyStopping counter: 5 out of 5
Early stopping
save model in exp/ETT_checkpoints/SCINet_ETTh1_ftS_sl96_ll48_pl48_lr0.003_bs8_hid4.0_s1_l3_dp0.5_invFalse_itr0/ETTh148.bin
testing : SCINet_ETTh1_ftS_sl96_ll48_pl48_lr0.003_bs8_hid4.0_s1_l3_dp0.5_invFalse_itr0<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
test 2833
normed mse:0.1127, mae:0.2651, rmse:0.3357, mape:0.1911, mspe:0.0544, corr:0.7504
TTTT denormed mse:9.4922, mae:2.4327, rmse:3.0809, mape:inf, mspe:inf, corr:0.7504
Final mean normed mse:0.1127,mae:0.2651,denormed mse:9.4922,mae:2.4327`

However, no results folder containing the trues.npy and preds.npy files is created.
Why is the code not executing for the test data? Is there a separate script for that?
Only the model is being saved in the ETT_checkpoints folder.
Please help, so the results can be plotted.

Problems encountered when using a custom dataset

Hello:

I want to use a custom dataset with SCINet. It has 21 dimensions, with 6 records per day; I plan to use the previous 7 days (7 x 6 = 42 records) to predict the following 1 day (1 x 6 = 6 records).

For the train/validation/test split, the last 30 days (30 * 6 = 180 records) are used as the test set, 100 records are used as the validation set, and the rest are used for training.
[image omitted]

The command I used:
python run_ETTh.py --data custom_dataset --features M --seq_len 42 --label_len 6 --pred_len 6 --hidden-size 4 --stacks 1 --levels 1 --lr 3e-3 --batch_size 8 --dropout 0.5 --save True

Question 1:
The ground truth does not match the contents of the true-scales.npy file produced by the model.

Question 2:
The expected test-set length is 180, which does not match the shape of the model output (168, 6, 21).

Question 3:
I tried to skip scaling and use the raw data by changing scale from True to False in class Dataset_Pred(Dataset) in etth_data_loader.py.
[image omitted]
It then fails with the error: TypeError: expected np.ndarray (got float)

How should I adjust this? Thank you!

ETTH result cannot be reproduced

Hi, I wonder whether the ETTh results in the paper are normed or denormed.
If they are denormed, the difference between my result and the paper's is too large.
This is my result; is it correct? (I used the latest version of the code.)
Final mean normed mse:0.3660,mae:0.3998
denormed mse:8.2375,mae:1.5608

Some questions about the results of the PEMS dataset

Hello, your work is very rewarding! But when I reproduced the results on the PEMS dataset using only the model part of the code you provided, my MAE, MAPE, and RMSE deviated somewhat. When I use your code in full, the MAE, MAPE, and RMSE are about the same as in the paper, so I would like to ask whether you use any tricks in data preprocessing.

Questions about your to-do list?

Hello, great author! Will the items on the to-do list add much to SCINet, and when is the update expected? I was so excited about your algorithm; it blew my mind! I'm trying to study your paper.

Hello, I have a question about the interactive-learning modules

Hello, I have a question about the details of the four interactive-learning modules. I see that each module contains two 1D convolutions and a hidden-size parameter. What is the motivation for using two convolutions, and what practical effect do they have? Thanks for your answer~
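For reference, the model printouts elsewhere on this page show that each of the four branches (phi, psi, P, U) is a small two-convolution Sequential. A sketch of that structure, with hidden_size controlling the width of the intermediate channels (function and parameter names here are paraphrased, not the repo's):

    import torch.nn as nn

    def interactor_branch(in_planes, hidden_size=4, kernel=5, dropout=0.5):
        # the first conv expands the channels by hidden_size, the second projects back
        return nn.Sequential(
            nn.ReplicationPad1d((3, 3)),
            nn.Conv1d(in_planes, int(in_planes * hidden_size), kernel_size=kernel),
            nn.LeakyReLU(negative_slope=0.01, inplace=True),
            nn.Dropout(dropout),
            nn.Conv1d(int(in_planes * hidden_size), in_planes, kernel_size=3),
            nn.Tanh(),
        )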

data scaler in etth_data_loader.py

train_data = df_data[border1s[0]:border2s[0]]
self.scaler.fit(train_data.values)
data = self.scaler.transform(df_data.values)

Here, why are the mean and variance of the training data used to scale all of the data?

Confusion about results of ETT dataset

Hello all,

Thanks for your reply to the previous issue. I found that in your paper, in the table of performance on the ETT dataset, LSTNet's MAE on ETTm1 (with horizon 24) is listed as L1700, which is not a number. I am not sure what this means, because I believe LSTNet did not run experiments on the ETT dataset, and you did not mention re-implementing LSTNet on your own.

Thanks.

Matthew

Cannot reproduce the results on the ETTm1 dataset

Hello, I really like the SCINet model, but when I applied SCINet to ETTm1, the results differed greatly from those reported in the paper.
I trained with the set of parameters provided in the readme:
python run_ETTh.py --data ETTm1 --features M --seq_len 48 --label_len 24 --pred_len 24 --hidden-size 4 --stacks 1 --levels 3 --lr 0.005 --batch_size 32 --dropout 0.5 --model_name ettm1_M_I48_O24_lr7e-3_bs16_dp0.25_h8_s1l3

Training log:
(pytorch) yangbs@hdu-lab:~/python_file/SCINet-main$ python run_ETTh.py --data ETTm1 --features M --seq_len 48 --label_len 24 --pred_len 24 --hidden-size 4 --stacks 1 --levels 3 --lr 0.005 --batch_size 32 --dropout 0.5 --model_name ettm1_M_I48_O24_lr7e-3_bs16_dp0.25_h8_s1l3
Args in experiment:
Namespace(INN=1, RIN=False, batch_size=32, c_out=7, checkpoints='exp/ETT_checkpoints/', cols=None, concat_len=0, data='ETTm1', data_path='ETTm1.csv', dec_in=7, detail_freq='h', devices='0', dilation=1, dropout=0.5, embed='timeF', enc_in=7, evaluate=False, features='M', freq='h', gpu=0, groups=1, hidden_size=4.0, inverse=False, itr=0, kernel=5, label_len=24, lastWeight=1.0, levels=3, loss='mae', lr=0.005, lradj=1, model='SCINet', model_name='ettm1_M_I48_O24_lr7e-3_bs16_dp0.25_h8_s1l3', num_decoder_layer=1, num_workers=0, patience=5, positionalEcoding=False, pred_len=24, resume=False, root_path='./datasets/ETT-data/', save=False, seq_len=48, single_step=0, single_step_output_One=0, stacks=1, target='OT', train_epochs=100, use_amp=False, use_gpu=True, use_multi_gpu=False, window_size=12)
SCINet(
(blocks1): EncoderTree(
(SCINet_Tree): SCINet_Tree(
(workingblock): LevelSCINet(
(interact): InteractorLevel(
(level): Interactor(
(split): Splitting()
(phi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(psi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(P): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(U): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
)
)
)
(SCINet_Tree_odd): SCINet_Tree(
(workingblock): LevelSCINet(
(interact): InteractorLevel(
(level): Interactor(
(split): Splitting()
(phi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(psi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(P): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(U): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
)
)
)
(SCINet_Tree_odd): SCINet_Tree(
(workingblock): LevelSCINet(
(interact): InteractorLevel(
(level): Interactor(
(split): Splitting()
(phi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(psi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(P): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(U): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
)
)
)
)
(SCINet_Tree_even): SCINet_Tree(
(workingblock): LevelSCINet(
(interact): InteractorLevel(
(level): Interactor(
(split): Splitting()
(phi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(psi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(P): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(U): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
)
)
)
)
)
(SCINet_Tree_even): SCINet_Tree(
(workingblock): LevelSCINet(
(interact): InteractorLevel(
(level): Interactor(
(split): Splitting()
(phi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(psi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(P): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(U): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
)
)
)
(SCINet_Tree_odd): SCINet_Tree(
(workingblock): LevelSCINet(
(interact): InteractorLevel(
(level): Interactor(
(split): Splitting()
(phi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(psi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(P): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(U): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
)
)
)
)
(SCINet_Tree_even): SCINet_Tree(
(workingblock): LevelSCINet(
(interact): InteractorLevel(
(level): Interactor(
(split): Splitting()
(phi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(psi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(P): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(U): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
)
)
)
)
)
)
)
(projection1): Conv1d(48, 24, kernel_size=(1,), stride=(1,), bias=False)
(div_projection): ModuleList()
)

start training : SCINet_ETTm1_ftM_sl48_ll24_pl24_lr0.005_bs32_hid4.0_s1_l3_dp0.5_invFalse_itr0>>>>>>>>>>>>>>>>>>>>>>>>>>
train 34489
val 11497
test 11497
exp/ETT_checkpoints/SCINet_ETTm1_ftM_sl48_ll24_pl24_lr0.005_bs32_hid4.0_s1_l3_dp0.5_invFalse_itr0
iters: 100, epoch: 1 | loss: 0.3739801
speed: 0.0367s/iter; left time: 3946.8521s
iters: 200, epoch: 1 | loss: 0.3262153
speed: 0.0373s/iter; left time: 4006.7345s
iters: 300, epoch: 1 | loss: 0.3147443
speed: 0.0354s/iter; left time: 3798.6481s
iters: 400, epoch: 1 | loss: 0.2697085
speed: 0.0367s/iter; left time: 3937.1097s
iters: 500, epoch: 1 | loss: 0.3082295
speed: 0.0386s/iter; left time: 4141.7307s
iters: 600, epoch: 1 | loss: 0.3121381
speed: 0.0361s/iter; left time: 3865.2089s
iters: 700, epoch: 1 | loss: 0.2578067
speed: 0.0372s/iter; left time: 3978.2897s
iters: 800, epoch: 1 | loss: 0.2487013
speed: 0.0364s/iter; left time: 3890.3315s
iters: 900, epoch: 1 | loss: 0.3098436
speed: 0.0361s/iter; left time: 3851.4770s
iters: 1000, epoch: 1 | loss: 0.2831023
speed: 0.0366s/iter; left time: 3904.9395s
Epoch: 1 cost time: 39.633970499038696
--------start to validate-----------
normed mse:0.3765, mae:0.3829, rmse:0.6136, mape:1.5322, mspe:98.9865, corr:0.8636
denormed mse:7.5041, mae:1.3557, rmse:2.7394, mape:inf, mspe:inf, corr:0.8636
--------start to test-----------
normed mse:0.4453, mae:0.4023, rmse:0.6673, mape:2.0176, mspe:239.7528, corr:0.6953
denormed mse:9.6947, mae:1.4535, rmse:3.1136, mape:inf, mspe:inf, corr:0.6953
Epoch: 1, Steps: 1077 | Train Loss: 0.3188260 valid Loss: 0.3829074 Test Loss: 0.4023201
Validation loss decreased (inf --> 0.382907). Saving model ...
Updating learning rate to 0.00475
iters: 100, epoch: 2 | loss: 0.2759543
speed: 0.1175s/iter; left time: 12520.1546s
iters: 200, epoch: 2 | loss: 0.2776425
speed: 0.0372s/iter; left time: 3960.5041s
iters: 300, epoch: 2 | loss: 0.2510246
speed: 0.0364s/iter; left time: 3867.7107s
iters: 400, epoch: 2 | loss: 0.2699694
speed: 0.0363s/iter; left time: 3858.9464s
iters: 500, epoch: 2 | loss: 0.2756911
speed: 0.0349s/iter; left time: 3704.1203s
iters: 600, epoch: 2 | loss: 0.2634075
speed: 0.0353s/iter; left time: 3741.4141s
iters: 700, epoch: 2 | loss: 0.2871340
speed: 0.0322s/iter; left time: 3414.6127s
iters: 800, epoch: 2 | loss: 0.2899915
speed: 0.0348s/iter; left time: 3682.8367s
iters: 900, epoch: 2 | loss: 0.2889665
speed: 0.0324s/iter; left time: 3425.9789s
iters: 1000, epoch: 2 | loss: 0.2933974
speed: 0.0337s/iter; left time: 3561.6619s
Epoch: 2 cost time: 37.750977993011475
--------start to validate-----------
normed mse:0.3650, mae:0.3797, rmse:0.6042, mape:1.5390, mspe:105.6250, corr:0.8662
denormed mse:7.2495, mae:1.3509, rmse:2.6925, mape:inf, mspe:inf, corr:0.8662
--------start to test-----------
normed mse:0.4244, mae:0.3921, rmse:0.6515, mape:2.0092, mspe:239.8366, corr:0.7195
denormed mse:9.3696, mae:1.4211, rmse:3.0610, mape:inf, mspe:inf, corr:0.7195
Epoch: 2, Steps: 1077 | Train Loss: 0.2805793 valid Loss: 0.3797391 Test Loss: 0.3921406
Validation loss decreased (0.382907 --> 0.379739). Saving model ...
Updating learning rate to 0.0045125
iters: 100, epoch: 3 | loss: 0.2760756
speed: 0.1095s/iter; left time: 11547.2991s
iters: 200, epoch: 3 | loss: 0.2942024
speed: 0.0349s/iter; left time: 3681.6949s
iters: 300, epoch: 3 | loss: 0.2703648
speed: 0.0358s/iter; left time: 3764.4492s
iters: 400, epoch: 3 | loss: 0.2604382
speed: 0.0321s/iter; left time: 3376.7976s
iters: 500, epoch: 3 | loss: 0.2719360
speed: 0.0346s/iter; left time: 3636.1447s
iters: 600, epoch: 3 | loss: 0.3051446
speed: 0.0332s/iter; left time: 3480.0340s
iters: 700, epoch: 3 | loss: 0.2547747
speed: 0.0357s/iter; left time: 3741.3550s
iters: 800, epoch: 3 | loss: 0.2777190
speed: 0.0370s/iter; left time: 3875.6183s
iters: 900, epoch: 3 | loss: 0.2795755
speed: 0.0360s/iter; left time: 3762.1882s
iters: 1000, epoch: 3 | loss: 0.2347599
speed: 0.0361s/iter; left time: 3775.3991s
Epoch: 3 cost time: 37.68521690368652
--------start to validate-----------
normed mse:0.3767, mae:0.3858, rmse:0.6137, mape:1.6021, mspe:118.2857, corr:0.8611
denormed mse:7.5594, mae:1.3794, rmse:2.7494, mape:inf, mspe:inf, corr:0.8611
--------start to test-----------
normed mse:0.4306, mae:0.3893, rmse:0.6562, mape:2.1228, mspe:263.4662, corr:0.7101
denormed mse:9.6026, mae:1.4040, rmse:3.0988, mape:inf, mspe:inf, corr:0.7101
Epoch: 3, Steps: 1077 | Train Loss: 0.2734880 valid Loss: 0.3857621 Test Loss: 0.3892814
EarlyStopping counter: 1 out of 5
Updating learning rate to 0.004286875
iters: 100, epoch: 4 | loss: 0.2904731
speed: 0.1151s/iter; left time: 12008.3887s
iters: 200, epoch: 4 | loss: 0.2831732
speed: 0.0366s/iter; left time: 3812.8340s
iters: 300, epoch: 4 | loss: 0.2510524
speed: 0.0368s/iter; left time: 3829.7625s
iters: 400, epoch: 4 | loss: 0.2525309
speed: 0.0358s/iter; left time: 3725.9019s
iters: 500, epoch: 4 | loss: 0.3122713
speed: 0.0375s/iter; left time: 3902.5162s
iters: 600, epoch: 4 | loss: 0.2780880
speed: 0.0378s/iter; left time: 3927.4578s
iters: 700, epoch: 4 | loss: 0.2514291
speed: 0.0358s/iter; left time: 3715.8615s
iters: 800, epoch: 4 | loss: 0.2489728
speed: 0.0382s/iter; left time: 3959.9480s
iters: 900, epoch: 4 | loss: 0.2793048
speed: 0.0374s/iter; left time: 3869.3397s
iters: 1000, epoch: 4 | loss: 0.3334440
speed: 0.0360s/iter; left time: 3724.1339s
Epoch: 4 cost time: 39.77986168861389
--------start to validate-----------
normed mse:0.3818, mae:0.3851, rmse:0.6179, mape:1.5316, mspe:107.1045, corr:0.8611
denormed mse:7.7168, mae:1.3784, rmse:2.7779, mape:inf, mspe:inf, corr:0.8611
--------start to test-----------
normed mse:0.4437, mae:0.3990, rmse:0.6661, mape:2.0132, mspe:225.4112, corr:0.7100
denormed mse:9.7923, mae:1.4451, rmse:3.1293, mape:inf, mspe:inf, corr:0.7100
Epoch: 4, Steps: 1077 | Train Loss: 0.2690475 valid Loss: 0.3850551 Test Loss: 0.3990499
EarlyStopping counter: 2 out of 5
Updating learning rate to 0.00407253125
iters: 100, epoch: 5 | loss: 0.2933790
speed: 0.1150s/iter; left time: 11882.3186s
iters: 200, epoch: 5 | loss: 0.2759682
speed: 0.0381s/iter; left time: 3934.1997s
iters: 300, epoch: 5 | loss: 0.2519405
speed: 0.0356s/iter; left time: 3665.0695s
iters: 400, epoch: 5 | loss: 0.2472868
speed: 0.0365s/iter; left time: 3763.1916s
iters: 500, epoch: 5 | loss: 0.2710534
speed: 0.0365s/iter; left time: 3757.1487s
iters: 600, epoch: 5 | loss: 0.2372332
speed: 0.0351s/iter; left time: 3607.9622s
iters: 700, epoch: 5 | loss: 0.2341183
speed: 0.0363s/iter; left time: 3730.8824s
iters: 800, epoch: 5 | loss: 0.2868078
speed: 0.0376s/iter; left time: 3858.2005s
iters: 900, epoch: 5 | loss: 0.2986693
speed: 0.0368s/iter; left time: 3768.4641s
iters: 1000, epoch: 5 | loss: 0.2259211
speed: 0.0368s/iter; left time: 3766.6636s
Epoch: 5 cost time: 39.46049451828003
--------start to validate-----------
normed mse:0.3734, mae:0.3856, rmse:0.6111, mape:1.5616, mspe:108.8203, corr:0.8622
denormed mse:7.4035, mae:1.3714, rmse:2.7209, mape:inf, mspe:inf, corr:0.8622
--------start to test-----------
normed mse:0.4317, mae:0.3961, rmse:0.6570, mape:2.0357, mspe:238.6893, corr:0.7124
denormed mse:9.5342, mae:1.4352, rmse:3.0878, mape:inf, mspe:inf, corr:0.7124
Epoch: 5, Steps: 1077 | Train Loss: 0.2656843 valid Loss: 0.3856070 Test Loss: 0.3961082
EarlyStopping counter: 3 out of 5
Updating learning rate to 0.003868904687499999
iters: 100, epoch: 6 | loss: 0.2594722
speed: 0.1180s/iter; left time: 12056.5803s
iters: 200, epoch: 6 | loss: 0.2736794
speed: 0.0366s/iter; left time: 3740.0203s
iters: 300, epoch: 6 | loss: 0.2381817
speed: 0.0358s/iter; left time: 3650.3643s
iters: 400, epoch: 6 | loss: 0.3105860
speed: 0.0366s/iter; left time: 3728.1291s
iters: 500, epoch: 6 | loss: 0.3017042
speed: 0.0377s/iter; left time: 3843.1407s
iters: 600, epoch: 6 | loss: 0.2300297
speed: 0.0381s/iter; left time: 3871.6219s
iters: 700, epoch: 6 | loss: 0.2827681
speed: 0.0377s/iter; left time: 3831.4567s
iters: 800, epoch: 6 | loss: 0.2552932
speed: 0.0367s/iter; left time: 3725.0346s
iters: 900, epoch: 6 | loss: 0.2631693
speed: 0.0348s/iter; left time: 3524.3761s
iters: 1000, epoch: 6 | loss: 0.2475155
speed: 0.0342s/iter; left time: 3465.3240s
Epoch: 6 cost time: 39.30917811393738
--------start to validate-----------
normed mse:0.3922, mae:0.3959, rmse:0.6263, mape:1.5281, mspe:107.3200, corr:0.8561
denormed mse:7.7303, mae:1.4168, rmse:2.7803, mape:inf, mspe:inf, corr:0.8561
--------start to test-----------
normed mse:0.4305, mae:0.4053, rmse:0.6561, mape:1.9767, mspe:210.2253, corr:0.7186
denormed mse:9.3486, mae:1.4834, rmse:3.0576, mape:inf, mspe:inf, corr:0.7186
Epoch: 6, Steps: 1077 | Train Loss: 0.2634311 valid Loss: 0.3958629 Test Loss: 0.4052754
EarlyStopping counter: 4 out of 5
Updating learning rate to 0.003675459453124999
iters: 100, epoch: 7 | loss: 0.2738683
speed: 0.1171s/iter; left time: 11838.4515s
iters: 200, epoch: 7 | loss: 0.2384376
speed: 0.0355s/iter; left time: 3591.3280s
iters: 300, epoch: 7 | loss: 0.2410344
speed: 0.0386s/iter; left time: 3899.4886s
iters: 400, epoch: 7 | loss: 0.2687979
speed: 0.0369s/iter; left time: 3716.0695s
iters: 500, epoch: 7 | loss: 0.2845374
speed: 0.0362s/iter; left time: 3643.1523s
iters: 600, epoch: 7 | loss: 0.2612757
speed: 0.0376s/iter; left time: 3784.8979s
iters: 700, epoch: 7 | loss: 0.2670177
speed: 0.0373s/iter; left time: 3748.7166s
iters: 800, epoch: 7 | loss: 0.2495043
speed: 0.0365s/iter; left time: 3668.4008s
iters: 900, epoch: 7 | loss: 0.2436933
speed: 0.0351s/iter; left time: 3525.0689s
iters: 1000, epoch: 7 | loss: 0.2172739
speed: 0.0366s/iter; left time: 3670.3609s
Epoch: 7 cost time: 39.5423789024353
--------start to validate-----------
normed mse:0.3880, mae:0.3938, rmse:0.6229, mape:1.6160, mspe:124.2217, corr:0.8533
denormed mse:8.0379, mae:1.4364, rmse:2.8351, mape:inf, mspe:inf, corr:0.8533
--------start to test-----------
normed mse:0.4276, mae:0.3934, rmse:0.6539, mape:2.0801, mspe:240.9516, corr:0.7048
denormed mse:9.4629, mae:1.4191, rmse:3.0762, mape:inf, mspe:inf, corr:0.7048
Epoch: 7, Steps: 1077 | Train Loss: 0.2614641 valid Loss: 0.3938183 Test Loss: 0.3934116
EarlyStopping counter: 5 out of 5
Early stopping
save model in exp/ETT_checkpoints/SCINet_ETTm1_ftM_sl48_ll24_pl24_lr0.005_bs32_hid4.0_s1_l3_dp0.5_invFalse_itr0/ETTm124.bin
testing : SCINet_ETTm1_ftM_sl48_ll24_pl24_lr0.005_bs32_hid4.0_s1_l3_dp0.5_invFalse_itr0<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
test 11497
normed mse:0.4244, mae:0.3921, rmse:0.6515, mape:2.0092, mspe:239.8366, corr:0.7195
TTTT denormed mse:9.3696, mae:1.4211, rmse:3.0610, mape:inf, mspe:inf, corr:0.7195
Final mean normed mse:0.4244,mae:0.3921,denormed mse:9.3696,mae:1.4211

As a result, I found that training stopped after only 7 epochs, and the final results are far from those reported in the paper. What could be going wrong here? I have been stuck on reproducing the ETTm1 results, and I would be very grateful if you could answer me.
Thank you!
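A note on the early stop itself: the log above ends with "EarlyStopping counter: 5 out of 5" followed by "Early stopping", so training halts once the validation loss has not improved for 5 consecutive epochs, not after a fixed epoch budget. Below is a minimal sketch of that patience-based pattern; the class and attribute names are illustrative and not necessarily identical to the repo's EarlyStopping implementation.

class EarlyStopping:
    def __init__(self, patience=5):
        self.patience = patience        # epochs to wait without improvement
        self.counter = 0                # epochs since the last improvement
        self.best_loss = float('inf')
        self.early_stop = False

    def __call__(self, val_loss):
        if val_loss < self.best_loss:   # validation loss improved
            self.best_loss = val_loss   # remember it (and save the checkpoint)
            self.counter = 0
        else:
            self.counter += 1           # "EarlyStopping counter: x out of patience"
            if self.counter >= self.patience:
                self.early_stop = True  # triggers the "Early stopping" message

Raising the patience value (the argument dumps in the other issues show a patience entry, which presumably maps to a --patience argument) lets training run longer, but stopping at epoch 7 by itself only means the validation loss had plateaued.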

Separate testing script

Hi, I couldn't find a way to run only the testing part without running the entire training flow. Is there any way to do this?
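For what it's worth, the argument dumps pasted in the issues below include an evaluate=False entry in the printed Namespace, which suggests the run script already has an evaluate switch that skips training and only runs the test pass on a saved checkpoint. The flag below is an assumption taken from that printed args line rather than something verified against run_ETTh.py, but an invocation along these lines would be the first thing to try:

python run_ETTh.py --data ETTh1 --features S --seq_len 96 --label_len 48 --pred_len 48 --hidden-size 4 --stacks 1 --levels 3 --evaluate True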

unable to replicate ETTM1 results

Hello,

I find the results of this paper very impressive. I tried replicating the ETTm1 horizon-48 results by running the command found in the README. Do you have any suggestions on how to get the mse of 0.126 and mae of 0.229 reported in the paper? My results are an mse of 0.3808 and an mae of 0.3899.

https://github.com/cure-lab/SCINet#for-ettm1-dataset

here are my results:

Epoch: 7 cost time: 714.845440864563
--------start to validate-----------
mid --> normed mse:0.4309, mae:0.4268, rmse:0.6564, mape:1.5992, mspe:116.5968, corr:0.8446
mid --> denormed mse:7.6822, mae:1.4870, rmse:2.7717, mape:inf, mspe:inf, corr:0.8446
final --> normed mse:0.4345, mae:0.4250, rmse:0.6591, mape:1.5075, mspe:102.4135, corr:0.8480
final --> denormed mse:7.5653, mae:1.4714, rmse:2.7505, mape:inf, mspe:inf, corr:0.8480
--------start to test-----------
mid --> normed mse:0.4958, mae:0.4530, rmse:0.7041, mape:1.9340, mspe:219.9087, corr:0.6378
mid --> denormed mse:8.9583, mae:1.5916, rmse:2.9930, mape:inf, mspe:inf, corr:0.6378
final --> normed mse:0.4729, mae:0.4426, rmse:0.6877, mape:1.8262, mspe:176.8408, corr:0.6674
final --> denormed mse:9.1656, mae:1.5982, rmse:3.0275, mape:inf, mspe:inf, corr:0.6674
Epoch: 7, Steps: 2151 | Train Loss: 0.4837916 valid Loss: 0.8518541 Test Loss: 0.8955729
EarlyStopping counter: 5 out of 5
Early stopping
save model in  exp/ETT_checkpoints/SCINet_ETTm1_ftM_sl96_ll48_pl48_lr0.001_bs16_hid4.0_s2_l4_dp0.5_invFalse_itr0/ETTm148.bin
>>>>>>>testing : SCINet_ETTm1_ftM_sl96_ll48_pl48_lr0.001_bs16_hid4.0_s2_l4_dp0.5_invFalse_itr0<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
test 11473
Mid --> normed mse:0.3980, mae:0.3977, rmse:0.6309, mape:1.8941, mspe:239.4249, corr:0.6997
TTTT Final --> denormed mse:0.3808, mae:0.3899, rmse:0.6171, mape:1.8406, mspe:221.7707, corr:0.7211
Final mean normed mse:0.3808,mae:0.3899,denormed mse:6.8933,mae:1.3593

thank you for the help

How to Plot the Results of ETTh1 dataset

I'm trying to run the code for the ETTh1 dataset using the following run command:
!python run_ETTh_10.py --data ETTh1 --features S --seq_len 96 --label_len 48 --pred_len 48 --hidden-size 4 --stacks 1 --levels 3 --lr 3e-3 --batch_size 8 --dropout 0.5 --model_name etth1_M_I48_O24_lr3e-3_bs8_dp0.5_h4_s1l3
and it runs successfully, early stopping at epoch 17, and I get the MAE and MSE values for both normalized and de-normalized data.
`Args in experiment:
Namespace(INN=1, RIN=False, batch_size=8, c_out=1, checkpoints='exp/ETT_checkpoints/', cols=None, concat_len=0, data='ETTh1', data_path='ETTh1.csv', dec_in=1, detail_freq='h', devices='0', dilation=1, dropout=0.5, embed='timeF', enc_in=1, evaluate=False, features='S', freq='h', gpu=0, groups=1, hidden_size=4.0, inverse=False, itr=0, kernel=5, label_len=48, lastWeight=1.0, levels=3, loss='mae', lr=0.003, lradj=1, model='SCINet', model_name='etth1_M_I48_O24_lr3e-3_bs8_dp0.5_h4_s1l3', num_decoder_layer=1, num_workers=0, patience=5, positionalEcoding=False, pred_len=48, resume=False, root_path='./datasets/', save=False, seq_len=96, single_step=0, single_step_output_One=0, stacks=1, target='OT', train_epochs=100, use_amp=False, use_gpu=True, use_multi_gpu=False, window_size=12)
SCINet(
(blocks1): EncoderTree(
(SCINet_Tree): SCINet_Tree(
(workingblock): LevelSCINet(
(interact): InteractorLevel(
(level): Interactor(
(split): Splitting()
(phi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(psi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(P): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(U): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
)
)
)
(SCINet_Tree_odd): SCINet_Tree(
(workingblock): LevelSCINet(
(interact): InteractorLevel(
(level): Interactor(
(split): Splitting()
(phi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(psi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(P): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(U): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
)
)
)
(SCINet_Tree_odd): SCINet_Tree(
(workingblock): LevelSCINet(
(interact): InteractorLevel(
(level): Interactor(
(split): Splitting()
(phi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(psi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(P): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(U): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
)
)
)
)
(SCINet_Tree_even): SCINet_Tree(
(workingblock): LevelSCINet(
(interact): InteractorLevel(
(level): Interactor(
(split): Splitting()
(phi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(psi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(P): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(U): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
)
)
)
)
)
(SCINet_Tree_even): SCINet_Tree(
(workingblock): LevelSCINet(
(interact): InteractorLevel(
(level): Interactor(
(split): Splitting()
(phi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(psi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(P): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(U): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
)
)
)
(SCINet_Tree_odd): SCINet_Tree(
(workingblock): LevelSCINet(
(interact): InteractorLevel(
(level): Interactor(
(split): Splitting()
(phi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(psi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(P): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(U): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
)
)
)
)
(SCINet_Tree_even): SCINet_Tree(
(workingblock): LevelSCINet(
(interact): InteractorLevel(
(level): Interactor(
(split): Splitting()
(phi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(psi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(P): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(U): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(1, 4, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(4, 1, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
)
)
)
)
)
)
)
(projection1): Conv1d(96, 48, kernel_size=(1,), stride=(1,), bias=False)
(div_projection): ModuleList()
)

start training : SCINet_ETTh1_ftS_sl96_ll48_pl48_lr0.003_bs8_hid4.0_s1_l3_dp0.5_invFalse_itr0>>>>>>>>>>>>>>>>>>>>>>>>>>
train 8497
val 2833
test 2833
exp/ETT_checkpoints/SCINet_ETTh1_ftS_sl96_ll48_pl48_lr0.003_bs8_hid4.0_s1_l3_dp0.5_invFalse_itr0
iters: 100, epoch: 1 | loss: 0.2635144
speed: 0.0918s/iter; left time: 9735.2018s
iters: 200, epoch: 1 | loss: 0.2746293
speed: 0.0640s/iter; left time: 6782.9779s
iters: 300, epoch: 1 | loss: 0.2532458
speed: 0.0641s/iter; left time: 6787.8207s
iters: 400, epoch: 1 | loss: 0.2308514
speed: 0.0644s/iter; left time: 6817.4635s
iters: 500, epoch: 1 | loss: 0.3040747
speed: 0.0651s/iter; left time: 6883.0310s
iters: 600, epoch: 1 | loss: 0.2578846
speed: 0.0644s/iter; left time: 6798.4408s
iters: 700, epoch: 1 | loss: 0.2459396
speed: 0.0634s/iter; left time: 6689.3999s
iters: 800, epoch: 1 | loss: 0.2914965
speed: 0.0653s/iter; left time: 6883.0213s
iters: 900, epoch: 1 | loss: 0.2554513
speed: 0.0641s/iter; left time: 6750.0606s
iters: 1000, epoch: 1 | loss: 0.2524573
speed: 0.0650s/iter; left time: 6838.8867s
Epoch: 1 cost time: 71.33614897727966
--------start to validate-----------
normed mse:0.0814, mae:0.2147, rmse:0.2852, mape:1.3623, mspe:25.9642, corr:0.8572
denormed mse:6.8514, mae:1.9706, rmse:2.6175, mape:0.1642, mspe:0.0813, corr:0.8572
--------start to test-----------
normed mse:0.0892, mae:0.2317, rmse:0.2987, mape:0.1735, mspe:0.0476, corr:0.8131
denormed mse:7.5120, mae:2.1258, rmse:2.7408, mape:inf, mspe:inf, corr:0.8131
Epoch: 1, Steps: 1062 | Train Loss: 0.2954810 valid Loss: 0.2147395 Test Loss: 0.2316557
Validation loss decreased (inf --> 0.214739). Saving model ...
Updating learning rate to 0.00285
iters: 100, epoch: 2 | loss: 0.2651560
speed: 0.2249s/iter; left time: 23618.5159s
iters: 200, epoch: 2 | loss: 0.3282328
speed: 0.0653s/iter; left time: 6857.5758s
iters: 300, epoch: 2 | loss: 0.2604796
speed: 0.0648s/iter; left time: 6791.7008s
iters: 400, epoch: 2 | loss: 0.2489426
speed: 0.0648s/iter; left time: 6790.2499s
iters: 500, epoch: 2 | loss: 0.3080887
speed: 0.0640s/iter; left time: 6695.5776s
iters: 600, epoch: 2 | loss: 0.2923984
speed: 0.0637s/iter; left time: 6657.3868s
iters: 700, epoch: 2 | loss: 0.3410122
speed: 0.0639s/iter; left time: 6673.1482s
iters: 800, epoch: 2 | loss: 0.2606724
speed: 0.0651s/iter; left time: 6792.7796s
iters: 900, epoch: 2 | loss: 0.2952042
speed: 0.0646s/iter; left time: 6730.5264s
iters: 1000, epoch: 2 | loss: 0.1823040
speed: 0.0639s/iter; left time: 6656.2354s
Epoch: 2 cost time: 68.4526846408844
--------start to validate-----------
normed mse:0.0799, mae:0.2147, rmse:0.2827, mape:1.4790, mspe:30.6527, corr:0.8621
denormed mse:6.7300, mae:1.9705, rmse:2.5942, mape:0.1594, mspe:0.0701, corr:0.8621
--------start to test-----------
normed mse:0.0586, mae:0.1883, rmse:0.2420, mape:0.1493, mspe:0.0399, corr:0.8178
denormed mse:4.9326, mae:1.7278, rmse:2.2210, mape:inf, mspe:inf, corr:0.8178
Epoch: 2, Steps: 1062 | Train Loss: 0.2776401 valid Loss: 0.2147304 Test Loss: 0.1882900
Validation loss decreased (0.214739 --> 0.214730). Saving model ...
Updating learning rate to 0.0027075
iters: 100, epoch: 3 | loss: 0.3098924
speed: 0.2221s/iter; left time: 23097.3052s
iters: 200, epoch: 3 | loss: 0.2770404
speed: 0.0643s/iter; left time: 6683.5361s
iters: 300, epoch: 3 | loss: 0.3261764
speed: 0.0639s/iter; left time: 6627.7702s
iters: 400, epoch: 3 | loss: 0.3463621
speed: 0.0645s/iter; left time: 6690.5621s
iters: 500, epoch: 3 | loss: 0.2016839
speed: 0.0669s/iter; left time: 6931.5046s
iters: 600, epoch: 3 | loss: 0.2773947
speed: 0.0660s/iter; left time: 6829.7108s
iters: 700, epoch: 3 | loss: 0.2436916
speed: 0.0645s/iter; left time: 6672.7252s
iters: 800, epoch: 3 | loss: 0.3100113
speed: 0.0636s/iter; left time: 6570.5793s
iters: 900, epoch: 3 | loss: 0.2406910
speed: 0.0637s/iter; left time: 6570.7141s
iters: 1000, epoch: 3 | loss: 0.2994752
speed: 0.0644s/iter; left time: 6641.0536s
Epoch: 3 cost time: 68.57215976715088
--------start to validate-----------
normed mse:0.0780, mae:0.2130, rmse:0.2793, mape:1.4954, mspe:33.6005, corr:0.8610
denormed mse:6.5708, mae:1.9548, rmse:2.5634, mape:0.1579, mspe:0.0679, corr:0.8610
--------start to test-----------
normed mse:0.0572, mae:0.1855, rmse:0.2392, mape:0.1437, mspe:0.0355, corr:0.8176
denormed mse:4.8174, mae:1.7019, rmse:2.1949, mape:inf, mspe:inf, corr:0.8176
Epoch: 3, Steps: 1062 | Train Loss: 0.2742680 valid Loss: 0.2130265 Test Loss: 0.1854673
Validation loss decreased (0.214730 --> 0.213026). Saving model ...
Updating learning rate to 0.0025721249999999998
iters: 100, epoch: 4 | loss: 0.3014244
speed: 0.2245s/iter; left time: 23101.4108s
iters: 200, epoch: 4 | loss: 0.2271728
speed: 0.0643s/iter; left time: 6606.3076s
iters: 300, epoch: 4 | loss: 0.3784584
speed: 0.0640s/iter; left time: 6569.4517s
iters: 400, epoch: 4 | loss: 0.2752601
speed: 0.0653s/iter; left time: 6696.5213s
iters: 500, epoch: 4 | loss: 0.3025605
speed: 0.0638s/iter; left time: 6536.2859s
iters: 600, epoch: 4 | loss: 0.2795481
speed: 0.0638s/iter; left time: 6538.9267s
iters: 700, epoch: 4 | loss: 0.2788646
speed: 0.0632s/iter; left time: 6465.8545s
iters: 800, epoch: 4 | loss: 0.2323274
speed: 0.0640s/iter; left time: 6545.1154s
iters: 900, epoch: 4 | loss: 0.2965076
speed: 0.0648s/iter; left time: 6620.8814s
iters: 1000, epoch: 4 | loss: 0.2785395
speed: 0.0643s/iter; left time: 6555.9024s
Epoch: 4 cost time: 68.27407765388489
--------start to validate-----------
normed mse:0.0787, mae:0.2138, rmse:0.2805, mape:1.5257, mspe:33.8453, corr:0.8624
denormed mse:6.6266, mae:1.9617, rmse:2.5742, mape:0.1578, mspe:0.0676, corr:0.8624
--------start to test-----------
normed mse:0.0564, mae:0.1850, rmse:0.2375, mape:0.1471, mspe:0.0391, corr:0.8203
denormed mse:4.7516, mae:1.6981, rmse:2.1798, mape:inf, mspe:inf, corr:0.8203
Epoch: 4, Steps: 1062 | Train Loss: 0.2719553 valid Loss: 0.2137705 Test Loss: 0.1850464
EarlyStopping counter: 1 out of 5
Updating learning rate to 0.0024435187499999996
iters: 100, epoch: 5 | loss: 0.3289642
speed: 0.2193s/iter; left time: 22341.4123s
iters: 200, epoch: 5 | loss: 0.2475609
speed: 0.0641s/iter; left time: 6520.0001s
iters: 300, epoch: 5 | loss: 0.2934369
speed: 0.0640s/iter; left time: 6501.5043s
iters: 400, epoch: 5 | loss: 0.3514774
speed: 0.0639s/iter; left time: 6488.8877s
iters: 500, epoch: 5 | loss: 0.3289756
speed: 0.0650s/iter; left time: 6592.7983s
iters: 600, epoch: 5 | loss: 0.3147124
speed: 0.0660s/iter; left time: 6684.8574s
iters: 700, epoch: 5 | loss: 0.2444675
speed: 0.0656s/iter; left time: 6642.7295s
iters: 800, epoch: 5 | loss: 0.2227931
speed: 0.0648s/iter; left time: 6558.8881s
iters: 900, epoch: 5 | loss: 0.2905650
speed: 0.0645s/iter; left time: 6519.5313s
iters: 1000, epoch: 5 | loss: 0.2011140
speed: 0.0643s/iter; left time: 6490.7155s
Epoch: 5 cost time: 68.49428033828735
--------start to validate-----------
normed mse:0.0795, mae:0.2143, rmse:0.2820, mape:1.5023, mspe:32.9689, corr:0.8631
denormed mse:6.6963, mae:1.9665, rmse:2.5877, mape:0.1583, mspe:0.0665, corr:0.8631
--------start to test-----------
normed mse:0.0568, mae:0.1897, rmse:0.2384, mape:0.1513, mspe:0.0402, corr:0.8214
denormed mse:4.7869, mae:1.7409, rmse:2.1879, mape:inf, mspe:inf, corr:0.8214
Epoch: 5, Steps: 1062 | Train Loss: 0.2680931 valid Loss: 0.2143007 Test Loss: 0.1897096
EarlyStopping counter: 2 out of 5
Updating learning rate to 0.0023213428124999992
iters: 100, epoch: 6 | loss: 0.2878237
speed: 0.2231s/iter; left time: 22487.2134s
iters: 200, epoch: 6 | loss: 0.3053960
speed: 0.0642s/iter; left time: 6463.6262s
iters: 300, epoch: 6 | loss: 0.2794231
speed: 0.0657s/iter; left time: 6608.5483s
iters: 400, epoch: 6 | loss: 0.1824071
speed: 0.0658s/iter; left time: 6613.3665s
iters: 500, epoch: 6 | loss: 0.3717845
speed: 0.0657s/iter; left time: 6599.7808s
iters: 600, epoch: 6 | loss: 0.2623390
speed: 0.0657s/iter; left time: 6589.1822s
iters: 700, epoch: 6 | loss: 0.2274510
speed: 0.0651s/iter; left time: 6523.5771s
iters: 800, epoch: 6 | loss: 0.2571564
speed: 0.0665s/iter; left time: 6657.4378s
iters: 900, epoch: 6 | loss: 0.2891446
speed: 0.0667s/iter; left time: 6670.0139s
iters: 1000, epoch: 6 | loss: 0.3868507
speed: 0.0665s/iter; left time: 6639.8010s
Epoch: 6 cost time: 69.7518322467804
--------start to validate-----------
normed mse:0.0777, mae:0.2116, rmse:0.2788, mape:1.5445, mspe:35.4777, corr:0.8636
denormed mse:6.5435, mae:1.9422, rmse:2.5580, mape:0.1556, mspe:0.0658, corr:0.8636
--------start to test-----------
normed mse:0.0489, mae:0.1677, rmse:0.2211, mape:0.1310, mspe:0.0322, corr:0.8212
denormed mse:4.1181, mae:1.5390, rmse:2.0293, mape:inf, mspe:inf, corr:0.8212
Epoch: 6, Steps: 1062 | Train Loss: 0.2668546 valid Loss: 0.2116495 Test Loss: 0.1677159
Validation loss decreased (0.213026 --> 0.211650). Saving model ...
Updating learning rate to 0.0022052756718749992
iters: 100, epoch: 7 | loss: 0.2596520
speed: 0.2240s/iter; left time: 22334.3117s
iters: 200, epoch: 7 | loss: 0.2324577
speed: 0.0640s/iter; left time: 6381.1203s
iters: 300, epoch: 7 | loss: 0.2214808
speed: 0.0651s/iter; left time: 6474.6273s
iters: 400, epoch: 7 | loss: 0.2045112
speed: 0.0632s/iter; left time: 6281.8838s
iters: 500, epoch: 7 | loss: 0.2396872
speed: 0.0636s/iter; left time: 6316.9631s
iters: 600, epoch: 7 | loss: 0.1907633
speed: 0.0644s/iter; left time: 6388.6488s
iters: 700, epoch: 7 | loss: 0.2620018
speed: 0.0747s/iter; left time: 7404.8432s
iters: 800, epoch: 7 | loss: 0.2821859
speed: 0.0652s/iter; left time: 6457.3174s
iters: 900, epoch: 7 | loss: 0.2233998
speed: 0.0641s/iter; left time: 6339.8982s
iters: 1000, epoch: 7 | loss: 0.2333842
speed: 0.0648s/iter; left time: 6402.4253s
Epoch: 7 cost time: 69.41319704055786
--------start to validate-----------
normed mse:0.0770, mae:0.2089, rmse:0.2776, mape:1.3294, mspe:25.3847, corr:0.8637
denormed mse:6.4876, mae:1.9170, rmse:2.5471, mape:0.1582, mspe:0.0734, corr:0.8637
--------start to test-----------
normed mse:0.0935, mae:0.2430, rmse:0.3058, mape:0.1754, mspe:0.0452, corr:0.8126
denormed mse:7.8770, mae:2.2295, rmse:2.8066, mape:inf, mspe:inf, corr:0.8126
Epoch: 7, Steps: 1062 | Train Loss: 0.2652045 valid Loss: 0.2089042 Test Loss: 0.2429530
Validation loss decreased (0.211650 --> 0.208904). Saving model ...
Updating learning rate to 0.0020950118882812493
iters: 100, epoch: 8 | loss: 0.2476663
speed: 0.2239s/iter; left time: 22095.3285s
iters: 200, epoch: 8 | loss: 0.3140231
speed: 0.0660s/iter; left time: 6502.9781s
iters: 300, epoch: 8 | loss: 0.2049506
speed: 0.0647s/iter; left time: 6368.1886s
iters: 400, epoch: 8 | loss: 0.2751644
speed: 0.0648s/iter; left time: 6373.9229s
iters: 500, epoch: 8 | loss: 0.3105533
speed: 0.0664s/iter; left time: 6520.4210s
iters: 600, epoch: 8 | loss: 0.2195727
speed: 0.0640s/iter; left time: 6287.0444s
iters: 700, epoch: 8 | loss: 0.2385361
speed: 0.0644s/iter; left time: 6313.8700s
iters: 800, epoch: 8 | loss: 0.2514920
speed: 0.0661s/iter; left time: 6476.8801s
iters: 900, epoch: 8 | loss: 0.2696154
speed: 0.0639s/iter; left time: 6250.8148s
iters: 1000, epoch: 8 | loss: 0.2994070
speed: 0.0646s/iter; left time: 6314.7932s
Epoch: 8 cost time: 69.0834846496582
--------start to validate-----------
normed mse:0.0786, mae:0.2097, rmse:0.2803, mape:1.3397, mspe:24.8040, corr:0.8618
denormed mse:6.6175, mae:1.9241, rmse:2.5725, mape:0.1595, mspe:0.0748, corr:0.8618
--------start to test-----------
normed mse:0.0803, mae:0.2185, rmse:0.2833, mape:0.1605, mspe:0.0409, corr:0.8191
denormed mse:6.7598, mae:2.0055, rmse:2.6000, mape:inf, mspe:inf, corr:0.8191
Epoch: 8, Steps: 1062 | Train Loss: 0.2626855 valid Loss: 0.2096790 Test Loss: 0.2185455
EarlyStopping counter: 1 out of 5
Updating learning rate to 0.0019902612938671868
iters: 100, epoch: 9 | loss: 0.2035707
speed: 0.2216s/iter; left time: 21633.7680s
iters: 200, epoch: 9 | loss: 0.1929587
speed: 0.0643s/iter; left time: 6273.6812s
iters: 300, epoch: 9 | loss: 0.2027658
speed: 0.0644s/iter; left time: 6275.3628s
iters: 400, epoch: 9 | loss: 0.1670165
speed: 0.0655s/iter; left time: 6372.5070s
iters: 500, epoch: 9 | loss: 0.3162191
speed: 0.0644s/iter; left time: 6264.6492s
iters: 600, epoch: 9 | loss: 0.2530913
speed: 0.0642s/iter; left time: 6233.9174s
iters: 700, epoch: 9 | loss: 0.2298067
speed: 0.0676s/iter; left time: 6557.6573s
iters: 800, epoch: 9 | loss: 0.2281127
speed: 0.0661s/iter; left time: 6406.0165s
iters: 900, epoch: 9 | loss: 0.2107817
speed: 0.0664s/iter; left time: 6426.1786s
iters: 1000, epoch: 9 | loss: 0.2355524
speed: 0.0656s/iter; left time: 6345.4314s
Epoch: 9 cost time: 69.29217648506165
--------start to validate-----------
normed mse:0.0747, mae:0.2069, rmse:0.2732, mape:1.4617, mspe:31.3834, corr:0.8653
denormed mse:6.2872, mae:1.8984, rmse:2.5074, mape:0.1508, mspe:0.0608, corr:0.8653
--------start to test-----------
normed mse:0.0819, mae:0.2269, rmse:0.2861, mape:0.1668, mspe:0.0438, corr:0.7987
denormed mse:6.8943, mae:2.0818, rmse:2.6257, mape:inf, mspe:inf, corr:0.7987
Epoch: 9, Steps: 1062 | Train Loss: 0.2619727 valid Loss: 0.2068796 Test Loss: 0.2268594
Validation loss decreased (0.208904 --> 0.206880). Saving model ...
Updating learning rate to 0.0018907482291738273
iters: 100, epoch: 10 | loss: 0.2934171
speed: 0.2292s/iter; left time: 22127.8739s
iters: 200, epoch: 10 | loss: 0.3041674
speed: 0.0652s/iter; left time: 6284.6789s
iters: 300, epoch: 10 | loss: 0.2558330
speed: 0.0646s/iter; left time: 6225.6793s
iters: 400, epoch: 10 | loss: 0.3225133
speed: 0.0643s/iter; left time: 6186.9587s
iters: 500, epoch: 10 | loss: 0.2021957
speed: 0.0646s/iter; left time: 6214.6386s
iters: 600, epoch: 10 | loss: 0.2634687
speed: 0.0644s/iter; left time: 6184.0262s
iters: 700, epoch: 10 | loss: 0.3601710
speed: 0.0649s/iter; left time: 6231.3502s
iters: 800, epoch: 10 | loss: 0.2715219
speed: 0.0643s/iter; left time: 6158.7077s
iters: 900, epoch: 10 | loss: 0.3285161
speed: 0.0662s/iter; left time: 6337.7003s
iters: 1000, epoch: 10 | loss: 0.2874655
speed: 0.0650s/iter; left time: 6218.2058s
Epoch: 10 cost time: 69.2235357761383
--------start to validate-----------
normed mse:0.0741, mae:0.2067, rmse:0.2721, mape:1.4288, mspe:32.2580, corr:0.8650
denormed mse:6.2362, mae:1.8968, rmse:2.4972, mape:0.1524, mspe:0.0658, corr:0.8650
--------start to test-----------
normed mse:0.1252, mae:0.2862, rmse:0.3538, mape:0.2056, mspe:0.0598, corr:0.7859
denormed mse:10.5420, mae:2.6266, rmse:3.2468, mape:inf, mspe:inf, corr:0.7859
Epoch: 10, Steps: 1062 | Train Loss: 0.2617135 valid Loss: 0.2067047 Test Loss: 0.2862312
Validation loss decreased (0.206880 --> 0.206705). Saving model ...
Updating learning rate to 0.001796210817715136
iters: 100, epoch: 11 | loss: 0.2604572
speed: 0.2211s/iter; left time: 21110.7798s
iters: 200, epoch: 11 | loss: 0.1902495
speed: 0.0639s/iter; left time: 6093.1127s
iters: 300, epoch: 11 | loss: 0.2706100
speed: 0.0645s/iter; left time: 6144.9446s
iters: 400, epoch: 11 | loss: 0.2700502
speed: 0.0641s/iter; left time: 6099.0807s
iters: 500, epoch: 11 | loss: 0.2039715
speed: 0.0644s/iter; left time: 6119.2977s
iters: 600, epoch: 11 | loss: 0.2211753
speed: 0.0644s/iter; left time: 6116.2928s
iters: 700, epoch: 11 | loss: 0.3060542
speed: 0.0641s/iter; left time: 6078.2290s
iters: 800, epoch: 11 | loss: 0.2073108
speed: 0.0636s/iter; left time: 6031.3409s
iters: 900, epoch: 11 | loss: 0.3166656
speed: 0.0642s/iter; left time: 6077.3573s
iters: 1000, epoch: 11 | loss: 0.2857897
speed: 0.0636s/iter; left time: 6020.0385s
Epoch: 11 cost time: 68.08029365539551
--------start to validate-----------
normed mse:0.0780, mae:0.2101, rmse:0.2792, mape:1.4069, mspe:27.8713, corr:0.8645
denormed mse:6.5642, mae:1.9282, rmse:2.5621, mape:0.1562, mspe:0.0672, corr:0.8645
--------start to test-----------
normed mse:0.0486, mae:0.1696, rmse:0.2206, mape:0.1346, mspe:0.0347, corr:0.8196
denormed mse:4.0965, mae:1.5563, rmse:2.0240, mape:inf, mspe:inf, corr:0.8196
Epoch: 11, Steps: 1062 | Train Loss: 0.2599722 valid Loss: 0.2101241 Test Loss: 0.1695920
EarlyStopping counter: 1 out of 5
Updating learning rate to 0.0017064002768293791
iters: 100, epoch: 12 | loss: 0.3560360
speed: 0.2200s/iter; left time: 20776.4963s
iters: 200, epoch: 12 | loss: 0.2906877
speed: 0.0638s/iter; left time: 6017.2672s
iters: 300, epoch: 12 | loss: 0.3136698
speed: 0.0640s/iter; left time: 6029.7119s
iters: 400, epoch: 12 | loss: 0.4521513
speed: 0.0637s/iter; left time: 5992.6421s
iters: 500, epoch: 12 | loss: 0.2683279
speed: 0.0635s/iter; left time: 5973.5166s
iters: 600, epoch: 12 | loss: 0.2095424
speed: 0.0632s/iter; left time: 5932.0850s
iters: 700, epoch: 12 | loss: 0.3217563
speed: 0.0646s/iter; left time: 6062.9666s
iters: 800, epoch: 12 | loss: 0.2670196
speed: 0.0635s/iter; left time: 5954.2641s
iters: 900, epoch: 12 | loss: 0.2306930
speed: 0.0639s/iter; left time: 5977.7809s
iters: 1000, epoch: 12 | loss: 0.2080201
speed: 0.0633s/iter; left time: 5915.4393s
Epoch: 12 cost time: 67.63420724868774
--------start to validate-----------
normed mse:0.0752, mae:0.2047, rmse:0.2742, mape:1.3057, mspe:24.8466, corr:0.8630
denormed mse:6.3301, mae:1.8782, rmse:2.5160, mape:0.1537, mspe:0.0697, corr:0.8630
--------start to test-----------
normed mse:0.1127, mae:0.2651, rmse:0.3357, mape:0.1911, mspe:0.0544, corr:0.7504
denormed mse:9.4922, mae:2.4327, rmse:3.0809, mape:inf, mspe:inf, corr:0.7504
Epoch: 12, Steps: 1062 | Train Loss: 0.2593013 valid Loss: 0.2046781 Test Loss: 0.2651025
Validation loss decreased (0.206705 --> 0.204678). Saving model ...
Updating learning rate to 0.00162108026298791
iters: 100, epoch: 13 | loss: 0.2679453
speed: 0.2223s/iter; left time: 20751.6236s
iters: 200, epoch: 13 | loss: 0.2244501
speed: 0.0640s/iter; left time: 5970.0265s
iters: 300, epoch: 13 | loss: 0.2729070
speed: 0.0638s/iter; left time: 5944.9959s
iters: 400, epoch: 13 | loss: 0.2141117
speed: 0.0642s/iter; left time: 5973.2411s
iters: 500, epoch: 13 | loss: 0.2737395
speed: 0.0649s/iter; left time: 6035.3685s
iters: 600, epoch: 13 | loss: 0.3773285
speed: 0.0655s/iter; left time: 6084.1809s
iters: 700, epoch: 13 | loss: 0.3060603
speed: 0.0651s/iter; left time: 6037.3794s
iters: 800, epoch: 13 | loss: 0.3271270
speed: 0.0639s/iter; left time: 5919.0781s
iters: 900, epoch: 13 | loss: 0.2570842
speed: 0.0644s/iter; left time: 5959.6727s
iters: 1000, epoch: 13 | loss: 0.1967695
speed: 0.0650s/iter; left time: 6008.8446s
Epoch: 13 cost time: 68.51006627082825
--------start to validate-----------
normed mse:0.0754, mae:0.2070, rmse:0.2745, mape:1.3459, mspe:27.9367, corr:0.8626
denormed mse:6.3463, mae:1.8996, rmse:2.5192, mape:0.1547, mspe:0.0686, corr:0.8626
--------start to test-----------
normed mse:0.1270, mae:0.2863, rmse:0.3563, mape:0.2047, mspe:0.0591, corr:0.7302
denormed mse:10.6924, mae:2.6270, rmse:3.2699, mape:inf, mspe:inf, corr:0.7302
Epoch: 13, Steps: 1062 | Train Loss: 0.2578851 valid Loss: 0.2070073 Test Loss: 0.2862706
EarlyStopping counter: 1 out of 5
Updating learning rate to 0.0015400262498385146
iters: 100, epoch: 14 | loss: 0.3828691
speed: 0.2242s/iter; left time: 20692.6053s
iters: 200, epoch: 14 | loss: 0.2255980
speed: 0.0655s/iter; left time: 6034.3079s
iters: 300, epoch: 14 | loss: 0.2057881
speed: 0.0651s/iter; left time: 5999.9542s
iters: 400, epoch: 14 | loss: 0.2044961
speed: 0.0654s/iter; left time: 6012.5291s
iters: 500, epoch: 14 | loss: 0.1950546
speed: 0.0659s/iter; left time: 6053.8845s
iters: 600, epoch: 14 | loss: 0.2513721
speed: 0.0663s/iter; left time: 6081.8917s
iters: 700, epoch: 14 | loss: 0.3617742
speed: 0.0676s/iter; left time: 6199.0184s
iters: 800, epoch: 14 | loss: 0.1818448
speed: 0.0660s/iter; left time: 6047.2192s
iters: 900, epoch: 14 | loss: 0.2921709
speed: 0.0637s/iter; left time: 5831.9500s
iters: 1000, epoch: 14 | loss: 0.4443547
speed: 0.0636s/iter; left time: 5812.0902s
Epoch: 14 cost time: 69.46138763427734
--------start to validate-----------
normed mse:0.0749, mae:0.2054, rmse:0.2737, mape:1.3964, mspe:29.3059, corr:0.8646
denormed mse:6.3101, mae:1.8853, rmse:2.5120, mape:0.1517, mspe:0.0638, corr:0.8646
--------start to test-----------
normed mse:0.0582, mae:0.1803, rmse:0.2413, mape:0.1373, mspe:0.0348, corr:0.7685
denormed mse:4.9051, mae:1.6544, rmse:2.2147, mape:inf, mspe:inf, corr:0.7685
Epoch: 14, Steps: 1062 | Train Loss: 0.2567903 valid Loss: 0.2054437 Test Loss: 0.1802866
EarlyStopping counter: 2 out of 5
Updating learning rate to 0.0014630249373465886
iters: 100, epoch: 15 | loss: 0.2464596
speed: 0.2214s/iter; left time: 20196.9599s
iters: 200, epoch: 15 | loss: 0.3060012
speed: 0.0648s/iter; left time: 5902.4050s
iters: 300, epoch: 15 | loss: 0.3132369
speed: 0.0647s/iter; left time: 5892.1457s
iters: 400, epoch: 15 | loss: 0.3352527
speed: 0.0645s/iter; left time: 5867.1592s
iters: 500, epoch: 15 | loss: 0.2343948
speed: 0.0640s/iter; left time: 5814.1662s
iters: 600, epoch: 15 | loss: 0.1983065
speed: 0.0636s/iter; left time: 5772.0216s
iters: 700, epoch: 15 | loss: 0.1803256
speed: 0.0638s/iter; left time: 5782.6625s
iters: 800, epoch: 15 | loss: 0.2608042
speed: 0.0640s/iter; left time: 5795.1071s
iters: 900, epoch: 15 | loss: 0.3982482
speed: 0.0656s/iter; left time: 5930.5014s
iters: 1000, epoch: 15 | loss: 0.2794555
speed: 0.0649s/iter; left time: 5859.7955s
Epoch: 15 cost time: 68.22825241088867
--------start to validate-----------
normed mse:0.0751, mae:0.2062, rmse:0.2740, mape:1.4129, mspe:29.4700, corr:0.8638
denormed mse:6.3221, mae:1.8922, rmse:2.5144, mape:0.1511, mspe:0.0607, corr:0.8638
--------start to test-----------
normed mse:0.0743, mae:0.2067, rmse:0.2726, mape:0.1542, mspe:0.0414, corr:0.7309
denormed mse:6.2598, mae:1.8971, rmse:2.5020, mape:inf, mspe:inf, corr:0.7309
Epoch: 15, Steps: 1062 | Train Loss: 0.2556333 valid Loss: 0.2062003 Test Loss: 0.2067328
EarlyStopping counter: 3 out of 5
Updating learning rate to 0.001389873690479259
iters: 100, epoch: 16 | loss: 0.4233045
speed: 0.2250s/iter; left time: 20289.9657s
iters: 200, epoch: 16 | loss: 0.2079662
speed: 0.0654s/iter; left time: 5887.5618s
iters: 300, epoch: 16 | loss: 0.2657562
speed: 0.0672s/iter; left time: 6045.0847s
iters: 400, epoch: 16 | loss: 0.1947590
speed: 0.0716s/iter; left time: 6432.6096s
iters: 500, epoch: 16 | loss: 0.2555844
speed: 0.0686s/iter; left time: 6157.6966s
iters: 600, epoch: 16 | loss: 0.2228586
speed: 0.0686s/iter; left time: 6153.4446s
iters: 700, epoch: 16 | loss: 0.2561190
speed: 0.0693s/iter; left time: 6211.2939s
iters: 800, epoch: 16 | loss: 0.2335158
speed: 0.0682s/iter; left time: 6099.8479s
iters: 900, epoch: 16 | loss: 0.1972947
speed: 0.0681s/iter; left time: 6082.7818s
iters: 1000, epoch: 16 | loss: 0.2210546
speed: 0.0684s/iter; left time: 6108.1396s
Epoch: 16 cost time: 72.44369554519653
--------start to validate-----------
normed mse:0.0767, mae:0.2066, rmse:0.2769, mape:1.3287, mspe:26.0432, corr:0.8614
denormed mse:6.4587, mae:1.8956, rmse:2.5414, mape:0.1549, mspe:0.0696, corr:0.8614
--------start to test-----------
normed mse:0.0722, mae:0.2028, rmse:0.2686, mape:0.1513, mspe:0.0397, corr:0.7538
denormed mse:6.0773, mae:1.8609, rmse:2.4652, mape:inf, mspe:inf, corr:0.7538
Epoch: 16, Steps: 1062 | Train Loss: 0.2538822 valid Loss: 0.2065719 Test Loss: 0.2027889
EarlyStopping counter: 4 out of 5
Updating learning rate to 0.001320380005955296
iters: 100, epoch: 17 | loss: 0.2435877
speed: 0.2235s/iter; left time: 19918.0769s
iters: 200, epoch: 17 | loss: 0.2681281
speed: 0.0643s/iter; left time: 5724.9103s
iters: 300, epoch: 17 | loss: 0.2386408
speed: 0.0645s/iter; left time: 5732.2681s
iters: 400, epoch: 17 | loss: 0.1940629
speed: 0.0645s/iter; left time: 5730.9041s
iters: 500, epoch: 17 | loss: 0.1982391
speed: 0.0647s/iter; left time: 5735.3843s
iters: 600, epoch: 17 | loss: 0.2039342
speed: 0.0645s/iter; left time: 5713.2374s
iters: 700, epoch: 17 | loss: 0.2077632
speed: 0.0648s/iter; left time: 5738.5635s
iters: 800, epoch: 17 | loss: 0.3183163
speed: 0.0640s/iter; left time: 5656.7522s
iters: 900, epoch: 17 | loss: 0.2791365
speed: 0.0648s/iter; left time: 5723.3435s
iters: 1000, epoch: 17 | loss: 0.2257991
speed: 0.0642s/iter; left time: 5660.2778s
Epoch: 17 cost time: 68.39220643043518
--------start to validate-----------
normed mse:0.0785, mae:0.2103, rmse:0.2803, mape:1.3704, mspe:27.2711, corr:0.8559
denormed mse:6.6141, mae:1.9294, rmse:2.5718, mape:0.1568, mspe:0.0703, corr:0.8559
--------start to test-----------
normed mse:0.1018, mae:0.2485, rmse:0.3191, mape:0.1800, mspe:0.0498, corr:0.7221
denormed mse:8.5757, mae:2.2802, rmse:2.9284, mape:inf, mspe:inf, corr:0.7221
Epoch: 17, Steps: 1062 | Train Loss: 0.2532137 valid Loss: 0.2102546 Test Loss: 0.2484858
EarlyStopping counter: 5 out of 5
Early stopping
save model in exp/ETT_checkpoints/SCINet_ETTh1_ftS_sl96_ll48_pl48_lr0.003_bs8_hid4.0_s1_l3_dp0.5_invFalse_itr0/ETTh148.bin
testing : SCINet_ETTh1_ftS_sl96_ll48_pl48_lr0.003_bs8_hid4.0_s1_l3_dp0.5_invFalse_itr0<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
test 2833
normed mse:0.1127, mae:0.2651, rmse:0.3357, mape:0.1911, mspe:0.0544, corr:0.7504
TTTT denormed mse:9.4922, mae:2.4327, rmse:3.0809, mape:inf, mspe:inf, corr:0.7504
Final mean normed mse:0.1127,mae:0.2651,denormed mse:9.4922,mae:2.4327`

After this, a new folder is created under exp: exp/ETT_checkpoints/SCINet_ETTh1_ftS_sl96_ll48_pl48_lr0.003_bs8_hid4.0_s1_l3_dp0.5_invFalse_itr0

and there are 2 files within it: 1.) ETTh148.bin
2.) checkpoint.pth
Upon extracting the latter, we get:
1.) archive/data.pkl

When I try to convert this data.pkl file into a DataFrame by running this code:
import numpy as np
import pandas as pd
import pickle

df = pd.read_pickle('out.pkl')
print(df)
I get the following error:
UnpicklingError: A load persistent id instruction was encountered, but no persistent_load function was specified.

How do I plot the results of ETTh1?
Thank you
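A note on the unpickling error above: checkpoint.pth is written with torch.save, so it is not a plain pickle or pandas file; torch.load is the intended reader, and what comes back is a dictionary of model weights rather than a table of predictions. Plotting therefore means loading those weights into the model, running it on test windows, and plotting the outputs. A minimal sketch, assuming the checkpoint path from the log above and that the file holds a plain state_dict (both assumptions, not verified):

import torch
import matplotlib.pyplot as plt

# checkpoint.pth was produced by torch.save, so torch.load (not pd.read_pickle) reads it.
ckpt_path = 'exp/ETT_checkpoints/SCINet_ETTh1_ftS_sl96_ll48_pl48_lr0.003_bs8_hid4.0_s1_l3_dp0.5_invFalse_itr0/checkpoint.pth'
state_dict = torch.load(ckpt_path, map_location='cpu')  # dict of weight tensors, not a DataFrame

# To plot forecasts, load these weights into the same SCINet model used for training,
# run it over the test set, and compare prediction against ground truth, e.g.:
# model.load_state_dict(state_dict)
# pred = model(batch_x)                        # (batch, pred_len, channels) per the log above
# plt.plot(true[0, :, -1], label='ground truth')
# plt.plot(pred[0, :, -1].detach(), label='SCINet forecast')
# plt.legend(); plt.show()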

Confusion about ETTh dataloader

Hello, I was confused looking at your code for the ETTh dataloader. There is a variable label_len that is normally set to the same length as the prediction, so the y value will have some overlap with the x value. Could you please let me know the purpose of this label_len and whether this overlap will cause issues with evaluation? Thanks!

r_begin = s_end - self.label_len
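For context, the quoted line is the standard Informer-style windowing: the target window starts label_len steps before the end of the input window, so the first label_len points of y deliberately overlap x (decoder warm-up context in the original Informer setup), while only the last pred_len points are the future horizon. A minimal sketch of the index arithmetic implied by that line, using the seq_len/label_len/pred_len values from the run commands above (illustrative, not the repo's exact __getitem__):

# Window indices implied by r_begin = s_end - self.label_len.
seq_len, label_len, pred_len = 96, 48, 48   # values used in the commands above
index = 0                                   # first sample

s_begin = index
s_end = s_begin + seq_len                   # x = data[s_begin:s_end]
r_begin = s_end - label_len                 # y starts label_len steps before x ends
r_end = r_begin + label_len + pred_len      # y = data[r_begin:r_end]

# x covers [0, 96) and y covers [48, 144): the first 48 points of y overlap x,
# while the last 48 points are the future values actually being forecast.
print((s_begin, s_end), (r_begin, r_end))   # (0, 96) (48, 144)

Since the losses and metrics are taken over the pred_len horizon, the overlap itself should not leak future information into the evaluation; it is mainly a convention inherited from the Informer-style dataloader.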

Long Term Prediction implementation

I've been using my own custom dataset with SCINet to great success (great work done here), but I noticed that the financial dataloader has a specific option just for long-term prediction. This option is not present in the ETT dataloader, so I am wondering what its purpose is, and whether I should implement it for long-term prediction on a custom dataset.

The experimental results of ETTH1 cannot be reproduced

Hi, thank you very much for your work. When training on the ETTh1 dataset with the same hyperparameters as yours, the mse and mae I get are 0.3594 and 0.3923, respectively, which are noticeably higher than the 0.311 and 0.348 reported in your paper. Could you tell me whether there are still parameters that need to be modified? Thank you! Below is my log.
Args in experiment:
Namespace(INN=1, RIN=False, batch_size=4, c_out=7, checkpoints='exp/ETT_checkpoints/', cols=None, concat_len=0, data='ETTh1', data_path='ETTh1.csv', dec_in=7, detail_freq='h', devices='0', dilation=1, dropout=0.5, embed='timeF', enc_in=7, evaluate=False, features='M', freq='h', gpu=0, groups=1, hidden_size=4.0, inverse=False, itr=0, kernel=5, label_len=24, lastWeight=1.0, levels=3, loss='mae', lr=0.009, lradj=1, model='SCINet', model_name='etth1_M_I48_O24_lr3e-3_bs8_dp0.5_h4_s1l3', num_workers=0, patience=15, positionalEcoding=False, pred_len=24, resume=False, root_path='./datasets/ETT-small/', save=False, seq_len=48, single_step=0, single_step_output_One=0, stacks=1, target='OT', train_epochs=100, use_amp=False, use_gpu=True, use_multi_gpu=False, window_size=12)
SCINet(
(blocks1): EncoderTree(
(SCINet_Tree): SCINet_Tree(
(workingblock): LevelSCINet(
(interact): InteractorLevel(
(level): Interactor(
(split): Splitting()
(phi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(psi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(P): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(U): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
)
)
)
(SCINet_Tree_odd): SCINet_Tree(
(workingblock): LevelSCINet(
(interact): InteractorLevel(
(level): Interactor(
(split): Splitting()
(phi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(psi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(P): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(U): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
)
)
)
(SCINet_Tree_odd): SCINet_Tree(
(workingblock): LevelSCINet(
(interact): InteractorLevel(
(level): Interactor(
(split): Splitting()
(phi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(psi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(P): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(U): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
)
)
)
)
(SCINet_Tree_even): SCINet_Tree(
(workingblock): LevelSCINet(
(interact): InteractorLevel(
(level): Interactor(
(split): Splitting()
(phi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(psi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(P): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(U): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
)
)
)
)
)
(SCINet_Tree_even): SCINet_Tree(
(workingblock): LevelSCINet(
(interact): InteractorLevel(
(level): Interactor(
(split): Splitting()
(phi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(psi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(P): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(U): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
)
)
)
(SCINet_Tree_odd): SCINet_Tree(
(workingblock): LevelSCINet(
(interact): InteractorLevel(
(level): Interactor(
(split): Splitting()
(phi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(psi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(P): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(U): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
)
)
)
)
(SCINet_Tree_even): SCINet_Tree(
(workingblock): LevelSCINet(
(interact): InteractorLevel(
(level): Interactor(
(split): Splitting()
(phi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(psi): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(P): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
(U): Sequential(
(0): ReplicationPad1d((3, 3))
(1): Conv1d(7, 28, kernel_size=(5,), stride=(1,))
(2): LeakyReLU(negative_slope=0.01, inplace=True)
(3): Dropout(p=0.5, inplace=False)
(4): Conv1d(28, 7, kernel_size=(3,), stride=(1,))
(5): Tanh()
)
)
)
)
)
)
)
)
(projection1): Conv1d(48, 24, kernel_size=(1,), stride=(1,), bias=False)
)

start training : SCINet_ETTh1_ftM_sl48_ll24_pl24_lr0.009_bs4_hid4.0_s1_l3_dp0.5_invFalse_itr0>>>>>>>>>>>>>>>>>>>>>>>>>>
train 8569
val 2857
test 2857
exp/ETT_checkpoints/SCINet_ETTh1_ftM_sl48_ll24_pl24_lr0.009_bs4_hid4.0_s1_l3_dp0.5_invFalse_itr0
iters: 100, epoch: 1 | loss: 0.3651017
speed: 0.0295s/iter; left time: 6318.7703s
iters: 200, epoch: 1 | loss: 0.3505008
speed: 0.0279s/iter; left time: 5976.3270s
iters: 300, epoch: 1 | loss: 0.3929793
speed: 0.0279s/iter; left time: 5962.0694s
iters: 400, epoch: 1 | loss: 0.3301361
speed: 0.0279s/iter; left time: 5966.3788s
iters: 500, epoch: 1 | loss: 0.3228126
speed: 0.0282s/iter; left time: 6027.5348s
iters: 600, epoch: 1 | loss: 0.3718488
speed: 0.0283s/iter; left time: 6044.8674s
iters: 700, epoch: 1 | loss: 0.3676443
speed: 0.0282s/iter; left time: 6011.5804s
iters: 800, epoch: 1 | loss: 0.3627875
speed: 0.0282s/iter; left time: 6025.8960s
iters: 900, epoch: 1 | loss: 0.4119731
speed: 0.0279s/iter; left time: 5941.4842s
iters: 1000, epoch: 1 | loss: 0.3445457
speed: 0.0278s/iter; left time: 5927.6358s
iters: 1100, epoch: 1 | loss: 0.3446910
speed: 0.0279s/iter; left time: 5936.7592s
iters: 1200, epoch: 1 | loss: 0.2979367
speed: 0.0277s/iter; left time: 5891.3081s
iters: 1300, epoch: 1 | loss: 0.3462140
speed: 0.0275s/iter; left time: 5860.2575s
iters: 1400, epoch: 1 | loss: 0.5417344
speed: 0.0276s/iter; left time: 5875.8505s
iters: 1500, epoch: 1 | loss: 0.4214476
speed: 0.0278s/iter; left time: 5909.9664s
iters: 1600, epoch: 1 | loss: 0.2875453
speed: 0.0278s/iter; left time: 5902.4232s
iters: 1700, epoch: 1 | loss: 0.4202584
speed: 0.0278s/iter; left time: 5899.5329s
iters: 1800, epoch: 1 | loss: 0.3184803
speed: 0.0278s/iter; left time: 5903.0943s
iters: 1900, epoch: 1 | loss: 0.4054892
speed: 0.0278s/iter; left time: 5910.5245s
iters: 2000, epoch: 1 | loss: 0.3641047
speed: 0.0279s/iter; left time: 5925.2561s
iters: 2100, epoch: 1 | loss: 0.2887170
speed: 0.0284s/iter; left time: 6015.2713s
Epoch: 1 cost time: 59.9717230796814
--------start to validate-----------
normed mse:0.4301, mae:0.4408, rmse:0.6558, mape:6.1262, mspe:25663.3134, corr:0.8366
denormed mse:7.0964, mae:1.5289, rmse:2.6639, mape:inf, mspe:inf, corr:0.8366
--------start to test-----------
normed mse:0.3644, mae:0.4027, rmse:0.6037, mape:8.4118, mspe:29899.2508, corr:0.7235
denormed mse:7.4835, mae:1.4891, rmse:2.7356, mape:inf, mspe:inf, corr:0.7235
Epoch: 1, Steps: 2142 | Train Loss: 0.3791955 valid Loss: 0.4408401 Test Loss: 0.4026500
Validation loss decreased (inf --> 0.440840). Saving model ...
Updating learning rate to 0.008549999999999999
iters: 100, epoch: 2 | loss: 0.4033299
speed: 0.1533s/iter; left time: 32495.7200s
iters: 200, epoch: 2 | loss: 0.4220750
speed: 0.0276s/iter; left time: 5842.5447s
iters: 300, epoch: 2 | loss: 0.3310422
speed: 0.0275s/iter; left time: 5828.7353s
iters: 400, epoch: 2 | loss: 0.3571222
speed: 0.0277s/iter; left time: 5853.6155s
iters: 500, epoch: 2 | loss: 0.3545492
speed: 0.0280s/iter; left time: 5915.4474s
iters: 600, epoch: 2 | loss: 0.2830701
speed: 0.0280s/iter; left time: 5922.0331s
iters: 700, epoch: 2 | loss: 0.3858727
speed: 0.0280s/iter; left time: 5917.8075s
iters: 800, epoch: 2 | loss: 0.3042993
speed: 0.0280s/iter; left time: 5922.7572s
iters: 900, epoch: 2 | loss: 0.3451869
speed: 0.0278s/iter; left time: 5873.5665s
iters: 1000, epoch: 2 | loss: 0.2723933
speed: 0.0277s/iter; left time: 5853.1839s
iters: 1100, epoch: 2 | loss: 0.4236742
speed: 0.0275s/iter; left time: 5806.4575s
iters: 1200, epoch: 2 | loss: 0.4081782
speed: 0.0276s/iter; left time: 5812.7159s
iters: 1300, epoch: 2 | loss: 0.3291027
speed: 0.0278s/iter; left time: 5868.3560s
iters: 1400, epoch: 2 | loss: 0.4038144
speed: 0.0280s/iter; left time: 5888.6294s
iters: 1500, epoch: 2 | loss: 0.3101422
speed: 0.0279s/iter; left time: 5884.1583s
iters: 1600, epoch: 2 | loss: 0.3712929
speed: 0.0279s/iter; left time: 5879.8941s
iters: 1700, epoch: 2 | loss: 0.3459564
speed: 0.0281s/iter; left time: 5904.1190s
iters: 1800, epoch: 2 | loss: 0.3104483
speed: 0.0281s/iter; left time: 5911.8671s
iters: 1900, epoch: 2 | loss: 0.3965688
speed: 0.0275s/iter; left time: 5785.9043s
iters: 2000, epoch: 2 | loss: 0.4188831
speed: 0.0275s/iter; left time: 5772.6210s
iters: 2100, epoch: 2 | loss: 0.3976253
speed: 0.0275s/iter; left time: 5780.8356s
Epoch: 2 cost time: 59.54820132255554
--------start to validate-----------
normed mse:0.4192, mae:0.4341, rmse:0.6475, mape:5.8769, mspe:24098.6398, corr:0.8388
denormed mse:6.6228, mae:1.4725, rmse:2.5735, mape:inf, mspe:inf, corr:0.8388
--------start to test-----------
normed mse:0.3648, mae:0.4016, rmse:0.6040, mape:7.6375, mspe:23761.7005, corr:0.7234
denormed mse:6.9325, mae:1.4331, rmse:2.6330, mape:inf, mspe:inf, corr:0.7234
Epoch: 2, Steps: 2142 | Train Loss: 0.3706865 valid Loss: 0.4340943 Test Loss: 0.4016285
Validation loss decreased (0.440840 --> 0.434094). Saving model ...
Updating learning rate to 0.0081225
iters: 100, epoch: 3 | loss: 0.3006243
speed: 0.1515s/iter; left time: 31777.6295s
iters: 200, epoch: 3 | loss: 0.3596947
speed: 0.0278s/iter; left time: 5825.7000s
iters: 300, epoch: 3 | loss: 0.2566329
speed: 0.0276s/iter; left time: 5794.7708s
iters: 400, epoch: 3 | loss: 0.3511219
speed: 0.0276s/iter; left time: 5783.3121s
iters: 500, epoch: 3 | loss: 0.3138544
speed: 0.0278s/iter; left time: 5827.6712s
iters: 600, epoch: 3 | loss: 0.2947013
speed: 0.0285s/iter; left time: 5969.1809s
iters: 700, epoch: 3 | loss: 0.3902800
speed: 0.0284s/iter; left time: 5944.6223s
iters: 800, epoch: 3 | loss: 0.3481478
speed: 0.0278s/iter; left time: 5813.9536s
iters: 900, epoch: 3 | loss: 0.4417644
speed: 0.0278s/iter; left time: 5802.4645s
iters: 1000, epoch: 3 | loss: 0.4901435
speed: 0.0278s/iter; left time: 5805.0026s
iters: 1100, epoch: 3 | loss: 0.3575970
speed: 0.0278s/iter; left time: 5809.4056s
iters: 1200, epoch: 3 | loss: 0.3082275
speed: 0.0278s/iter; left time: 5800.3406s
iters: 1300, epoch: 3 | loss: 0.3474898
speed: 0.0278s/iter; left time: 5796.6274s
iters: 1400, epoch: 3 | loss: 0.3873351
speed: 0.0278s/iter; left time: 5801.9911s
iters: 1500, epoch: 3 | loss: 0.3985312
speed: 0.0278s/iter; left time: 5796.4244s
iters: 1600, epoch: 3 | loss: 0.3473285
speed: 0.0279s/iter; left time: 5813.9807s
iters: 1700, epoch: 3 | loss: 0.2953809
speed: 0.0280s/iter; left time: 5837.9339s
iters: 1800, epoch: 3 | loss: 0.2939656
speed: 0.0280s/iter; left time: 5836.6911s
iters: 1900, epoch: 3 | loss: 0.3971220
speed: 0.0280s/iter; left time: 5832.4716s
iters: 2000, epoch: 3 | loss: 0.3230156
speed: 0.0280s/iter; left time: 5817.5040s
iters: 2100, epoch: 3 | loss: 0.3292217
speed: 0.0280s/iter; left time: 5815.1257s
Epoch: 3 cost time: 59.7959942817688
--------start to validate-----------
normed mse:0.4106, mae:0.4224, rmse:0.6408, mape:5.8930, mspe:23804.0522, corr:0.8409
denormed mse:6.5745, mae:1.4459, rmse:2.5641, mape:inf, mspe:inf, corr:0.8409
--------start to test-----------
normed mse:0.3594, mae:0.3923, rmse:0.5995, mape:7.5035, mspe:24301.3721, corr:0.7205
denormed mse:6.9649, mae:1.4154, rmse:2.6391, mape:inf, mspe:inf, corr:0.7205
Epoch: 3, Steps: 2142 | Train Loss: 0.3675621 valid Loss: 0.4224180 Test Loss: 0.3923225
Validation loss decreased (0.434094 --> 0.422418). Saving model ...
Updating learning rate to 0.007716374999999998
iters: 100, epoch: 4 | loss: 0.4576252
speed: 0.1493s/iter; left time: 30998.9868s
iters: 200, epoch: 4 | loss: 0.4031994
speed: 0.0281s/iter; left time: 5830.8540s
iters: 300, epoch: 4 | loss: 0.4278273
speed: 0.0286s/iter; left time: 5938.6052s
iters: 400, epoch: 4 | loss: 0.3313950
speed: 0.0280s/iter; left time: 5805.4616s
iters: 500, epoch: 4 | loss: 0.2830721
speed: 0.0288s/iter; left time: 5969.7668s
iters: 600, epoch: 4 | loss: 0.3394617
speed: 0.0287s/iter; left time: 5953.4050s
iters: 700, epoch: 4 | loss: 0.3362619
speed: 0.0291s/iter; left time: 6016.4815s
iters: 800, epoch: 4 | loss: 0.2766969
speed: 0.0287s/iter; left time: 5939.6163s
iters: 900, epoch: 4 | loss: 0.3397340
speed: 0.0290s/iter; left time: 5996.2590s
iters: 1000, epoch: 4 | loss: 0.3242672
speed: 0.0282s/iter; left time: 5828.7358s
iters: 1100, epoch: 4 | loss: 0.3496055
speed: 0.0284s/iter; left time: 5871.3363s
iters: 1200, epoch: 4 | loss: 0.3492430
speed: 0.0283s/iter; left time: 5851.4826s
iters: 1300, epoch: 4 | loss: 0.3514314
speed: 0.0280s/iter; left time: 5772.2568s
iters: 1400, epoch: 4 | loss: 0.2979621
speed: 0.0294s/iter; left time: 6071.8731s
iters: 1500, epoch: 4 | loss: 0.3437521
speed: 0.0289s/iter; left time: 5961.6856s
iters: 1600, epoch: 4 | loss: 0.2574955
speed: 0.0282s/iter; left time: 5816.3523s
iters: 1700, epoch: 4 | loss: 0.3036486
speed: 0.0276s/iter; left time: 5679.0905s
iters: 1800, epoch: 4 | loss: 0.3060672
speed: 0.0289s/iter; left time: 5955.5722s
iters: 1900, epoch: 4 | loss: 0.3665306
speed: 0.0283s/iter; left time: 5833.5155s
iters: 2000, epoch: 4 | loss: 0.3999368
speed: 0.0280s/iter; left time: 5760.6245s
iters: 2100, epoch: 4 | loss: 0.3088425
speed: 0.0280s/iter; left time: 5756.6908s
Epoch: 4 cost time: 60.87839937210083
--------start to validate-----------
normed mse:0.4124, mae:0.4279, rmse:0.6422, mape:5.9884, mspe:24114.0258, corr:0.8404
denormed mse:6.6824, mae:1.4731, rmse:2.5850, mape:inf, mspe:inf, corr:0.8404
--------start to test-----------
normed mse:0.3601, mae:0.3963, rmse:0.6001, mape:8.4487, mspe:28741.5909, corr:0.7119
denormed mse:7.3766, mae:1.4553, rmse:2.7160, mape:inf, mspe:inf, corr:0.7119
Epoch: 4, Steps: 2142 | Train Loss: 0.3661055 valid Loss: 0.4279180 Test Loss: 0.3962910
EarlyStopping counter: 1 out of 15
Updating learning rate to 0.007330556249999998
iters: 100, epoch: 5 | loss: 0.3216497
speed: 0.1513s/iter; left time: 31094.1250s
iters: 200, epoch: 5 | loss: 0.3528404
speed: 0.0277s/iter; left time: 5697.3340s
iters: 300, epoch: 5 | loss: 0.3661552
speed: 0.0283s/iter; left time: 5803.2182s
iters: 400, epoch: 5 | loss: 0.3049476
speed: 0.0284s/iter; left time: 5837.1130s
iters: 500, epoch: 5 | loss: 0.4457057
speed: 0.0287s/iter; left time: 5893.7462s
iters: 600, epoch: 5 | loss: 0.3698492
speed: 0.0292s/iter; left time: 5985.3711s
iters: 700, epoch: 5 | loss: 0.3416702
speed: 0.0286s/iter; left time: 5867.2302s
iters: 800, epoch: 5 | loss: 0.3614202
speed: 0.0292s/iter; left time: 5972.0056s
iters: 900, epoch: 5 | loss: 0.3991942
speed: 0.0286s/iter; left time: 5851.8310s
iters: 1000, epoch: 5 | loss: 0.2818972
speed: 0.0281s/iter; left time: 5748.2781s
iters: 1100, epoch: 5 | loss: 0.3370062
speed: 0.0281s/iter; left time: 5757.3880s
iters: 1200, epoch: 5 | loss: 0.3217392
speed: 0.0289s/iter; left time: 5908.1417s
iters: 1300, epoch: 5 | loss: 0.2573744
speed: 0.0295s/iter; left time: 6031.2498s
iters: 1400, epoch: 5 | loss: 0.4832221
speed: 0.0283s/iter; left time: 5770.3668s
iters: 1500, epoch: 5 | loss: 0.3038020
speed: 0.0281s/iter; left time: 5735.7644s
iters: 1600, epoch: 5 | loss: 0.4502504
speed: 0.0285s/iter; left time: 5810.1176s
iters: 1700, epoch: 5 | loss: 0.3126244
speed: 0.0289s/iter; left time: 5901.8960s
iters: 1800, epoch: 5 | loss: 0.3371652
speed: 0.0291s/iter; left time: 5924.0287s
iters: 1900, epoch: 5 | loss: 0.3134840
speed: 0.0287s/iter; left time: 5837.1210s
iters: 2000, epoch: 5 | loss: 0.3738162
speed: 0.0286s/iter; left time: 5818.7893s
iters: 2100, epoch: 5 | loss: 0.4856545
speed: 0.0282s/iter; left time: 5739.0360s
Epoch: 5 cost time: 61.17092418670654
--------start to validate-----------
normed mse:0.4397, mae:0.4497, rmse:0.6631, mape:5.5498, mspe:20281.5202, corr:0.8386
denormed mse:6.8925, mae:1.5190, rmse:2.6254, mape:inf, mspe:inf, corr:0.8386
--------start to test-----------
normed mse:0.4051, mae:0.4371, rmse:0.6365, mape:7.2726, mspe:20975.8068, corr:0.7116
denormed mse:8.7119, mae:1.6950, rmse:2.9516, mape:inf, mspe:inf, corr:0.7116
Epoch: 5, Steps: 2142 | Train Loss: 0.3653860 valid Loss: 0.4497384 Test Loss: 0.4371418
EarlyStopping counter: 2 out of 15
Updating learning rate to 0.006964028437499998
iters: 100, epoch: 6 | loss: 0.4057341
speed: 0.1540s/iter; left time: 31326.4922s
iters: 200, epoch: 6 | loss: 0.4183176
speed: 0.0283s/iter; left time: 5756.8585s
iters: 300, epoch: 6 | loss: 0.3801117
speed: 0.0279s/iter; left time: 5662.7553s
iters: 400, epoch: 6 | loss: 0.3694860
speed: 0.0283s/iter; left time: 5748.9176s
iters: 500, epoch: 6 | loss: 0.4518154
speed: 0.0295s/iter; left time: 5995.1529s
iters: 600, epoch: 6 | loss: 0.3504744
speed: 0.0284s/iter; left time: 5764.1939s
iters: 700, epoch: 6 | loss: 0.3283683
speed: 0.0277s/iter; left time: 5621.3028s
iters: 800, epoch: 6 | loss: 0.3374281
speed: 0.0282s/iter; left time: 5724.4058s
iters: 900, epoch: 6 | loss: 0.4623635
speed: 0.0289s/iter; left time: 5860.4811s
iters: 1000, epoch: 6 | loss: 0.4638884
speed: 0.0292s/iter; left time: 5912.6509s
iters: 1100, epoch: 6 | loss: 0.3457366
speed: 0.0294s/iter; left time: 5960.2961s
iters: 1200, epoch: 6 | loss: 0.3443884
speed: 0.0295s/iter; left time: 5971.9475s
iters: 1300, epoch: 6 | loss: 0.3005659
speed: 0.0295s/iter; left time: 5963.8961s
iters: 1400, epoch: 6 | loss: 0.3177886
speed: 0.0295s/iter; left time: 5965.3368s
iters: 1500, epoch: 6 | loss: 0.4176153
speed: 0.0291s/iter; left time: 5885.1216s
iters: 1600, epoch: 6 | loss: 0.3041800
speed: 0.0283s/iter; left time: 5714.1515s
iters: 1700, epoch: 6 | loss: 0.4047523
speed: 0.0283s/iter; left time: 5708.8059s
iters: 1800, epoch: 6 | loss: 0.4489512
speed: 0.0283s/iter; left time: 5717.4653s
iters: 1900, epoch: 6 | loss: 0.3522519
speed: 0.0282s/iter; left time: 5684.8170s
iters: 2000, epoch: 6 | loss: 0.2812566
speed: 0.0289s/iter; left time: 5819.9594s
iters: 2100, epoch: 6 | loss: 0.2865458
speed: 0.0279s/iter; left time: 5614.1825s
Epoch: 6 cost time: 61.42661213874817
--------start to validate-----------
normed mse:0.4341, mae:0.4399, rmse:0.6588, mape:6.3993, mspe:29826.0129, corr:0.8372
denormed mse:7.1085, mae:1.5216, rmse:2.6662, mape:inf, mspe:inf, corr:0.8372
--------start to test-----------
normed mse:0.3834, mae:0.4100, rmse:0.6192, mape:7.4289, mspe:23121.1416, corr:0.7140
denormed mse:7.5249, mae:1.4937, rmse:2.7431, mape:inf, mspe:inf, corr:0.7140
Epoch: 6, Steps: 2142 | Train Loss: 0.3628683 valid Loss: 0.4399267 Test Loss: 0.4100347
EarlyStopping counter: 3 out of 15
Updating learning rate to 0.006615827015624997
iters: 100, epoch: 7 | loss: 0.2912202
speed: 0.1505s/iter; left time: 30285.9419s
iters: 200, epoch: 7 | loss: 0.4448686
speed: 0.0276s/iter; left time: 5547.5923s
iters: 300, epoch: 7 | loss: 0.3186685
speed: 0.0276s/iter; left time: 5557.9615s
iters: 400, epoch: 7 | loss: 0.3669732
speed: 0.0276s/iter; left time: 5550.2082s
iters: 500, epoch: 7 | loss: 0.3279809
speed: 0.0278s/iter; left time: 5585.8188s
iters: 600, epoch: 7 | loss: 0.3229183
speed: 0.0277s/iter; left time: 5565.8671s
iters: 700, epoch: 7 | loss: 0.3569002
speed: 0.0280s/iter; left time: 5610.5575s
iters: 800, epoch: 7 | loss: 0.2938167
speed: 0.0280s/iter; left time: 5606.9675s
iters: 900, epoch: 7 | loss: 0.3984652
speed: 0.0284s/iter; left time: 5685.5962s
iters: 1000, epoch: 7 | loss: 0.3540559
speed: 0.0282s/iter; left time: 5648.0905s
iters: 1100, epoch: 7 | loss: 0.3409511
speed: 0.0283s/iter; left time: 5663.9862s
iters: 1200, epoch: 7 | loss: 0.4427322
speed: 0.0282s/iter; left time: 5638.7087s
iters: 1300, epoch: 7 | loss: 0.2954441
speed: 0.0282s/iter; left time: 5640.2222s
iters: 1400, epoch: 7 | loss: 0.3650829
speed: 0.0282s/iter; left time: 5638.8868s
iters: 1500, epoch: 7 | loss: 0.3450136
speed: 0.0282s/iter; left time: 5637.7205s
iters: 1600, epoch: 7 | loss: 0.2694797
speed: 0.0282s/iter; left time: 5632.1659s
iters: 1700, epoch: 7 | loss: 0.2669967
speed: 0.0282s/iter; left time: 5629.5071s
iters: 1800, epoch: 7 | loss: 0.3467433
speed: 0.0282s/iter; left time: 5630.7933s
iters: 1900, epoch: 7 | loss: 0.3424510
speed: 0.0279s/iter; left time: 5556.7276s
iters: 2000, epoch: 7 | loss: 0.3744892
speed: 0.0279s/iter; left time: 5559.5775s
iters: 2100, epoch: 7 | loss: 0.3527800
speed: 0.0278s/iter; left time: 5548.8207s
Epoch: 7 cost time: 59.94628930091858
--------start to validate-----------
normed mse:0.4303, mae:0.4424, rmse:0.6560, mape:5.9782, mspe:24160.5354, corr:0.8376
denormed mse:7.0387, mae:1.5203, rmse:2.6531, mape:inf, mspe:inf, corr:0.8376
--------start to test-----------
normed mse:0.3878, mae:0.4161, rmse:0.6227, mape:7.4371, mspe:22721.2904, corr:0.7129
denormed mse:7.5770, mae:1.5209, rmse:2.7526, mape:inf, mspe:inf, corr:0.7129
Epoch: 7, Steps: 2142 | Train Loss: 0.3609556 valid Loss: 0.4423997 Test Loss: 0.4160987
EarlyStopping counter: 4 out of 15
Updating learning rate to 0.006285035664843747
iters: 100, epoch: 8 | loss: 0.3686014
speed: 0.1510s/iter; left time: 30068.4973s
iters: 200, epoch: 8 | loss: 0.3988787
speed: 0.0279s/iter; left time: 5556.8577s
iters: 300, epoch: 8 | loss: 0.2740081
speed: 0.0279s/iter; left time: 5549.9116s
iters: 400, epoch: 8 | loss: 0.2599104
speed: 0.0277s/iter; left time: 5513.6021s
iters: 500, epoch: 8 | loss: 0.3056968
speed: 0.0277s/iter; left time: 5509.8173s
iters: 600, epoch: 8 | loss: 0.4316751
speed: 0.0279s/iter; left time: 5533.8005s
iters: 700, epoch: 8 | loss: 0.3636506
speed: 0.0280s/iter; left time: 5548.2945s
iters: 800, epoch: 8 | loss: 0.3442728
speed: 0.0279s/iter; left time: 5543.6797s
iters: 900, epoch: 8 | loss: 0.3701980
speed: 0.0278s/iter; left time: 5512.7161s
iters: 1000, epoch: 8 | loss: 0.4252179
speed: 0.0280s/iter; left time: 5541.9490s
iters: 1100, epoch: 8 | loss: 0.3149583
speed: 0.0277s/iter; left time: 5481.9597s
iters: 1200, epoch: 8 | loss: 0.3407968
speed: 0.0276s/iter; left time: 5473.4407s
iters: 1300, epoch: 8 | loss: 0.3287254
speed: 0.0275s/iter; left time: 5448.2962s
iters: 1400, epoch: 8 | loss: 0.2965896
speed: 0.0275s/iter; left time: 5445.9206s
iters: 1500, epoch: 8 | loss: 0.3934681
speed: 0.0275s/iter; left time: 5443.9127s
iters: 1600, epoch: 8 | loss: 0.4411677
speed: 0.0277s/iter; left time: 5470.0075s
iters: 1700, epoch: 8 | loss: 0.4214006
speed: 0.0276s/iter; left time: 5455.5593s
iters: 1800, epoch: 8 | loss: 0.2960534
speed: 0.0275s/iter; left time: 5427.0669s
iters: 1900, epoch: 8 | loss: 0.3062829
speed: 0.0274s/iter; left time: 5415.3308s
iters: 2000, epoch: 8 | loss: 0.3225315
speed: 0.0275s/iter; left time: 5427.1049s
iters: 2100, epoch: 8 | loss: 0.3319520
speed: 0.0275s/iter; left time: 5426.2641s
Epoch: 8 cost time: 59.34956979751587
--------start to validate-----------
normed mse:0.4230, mae:0.4322, rmse:0.6504, mape:6.2235, mspe:25939.0239, corr:0.8386
denormed mse:6.7046, mae:1.4756, rmse:2.5893, mape:inf, mspe:inf, corr:0.8386
--------start to test-----------
normed mse:0.3464, mae:0.3819, rmse:0.5885, mape:8.2333, mspe:31089.5456, corr:0.7259
denormed mse:6.9707, mae:1.3995, rmse:2.6402, mape:inf, mspe:inf, corr:0.7259
Epoch: 8, Steps: 2142 | Train Loss: 0.3605087 valid Loss: 0.4322181 Test Loss: 0.3819402
EarlyStopping counter: 5 out of 15
Updating learning rate to 0.0059707838816015595
iters: 100, epoch: 9 | loss: 0.4236137
speed: 0.1485s/iter; left time: 29246.6871s
iters: 200, epoch: 9 | loss: 0.3181653
speed: 0.0280s/iter; left time: 5515.5740s
iters: 300, epoch: 9 | loss: 0.3890515
speed: 0.0278s/iter; left time: 5469.9286s
iters: 400, epoch: 9 | loss: 0.3365726
speed: 0.0279s/iter; left time: 5485.1154s
iters: 500, epoch: 9 | loss: 0.3757693
speed: 0.0278s/iter; left time: 5455.4438s
iters: 600, epoch: 9 | loss: 0.3484147
speed: 0.0278s/iter; left time: 5453.4754s
iters: 700, epoch: 9 | loss: 0.4239825
speed: 0.0278s/iter; left time: 5460.2592s
iters: 800, epoch: 9 | loss: 0.2945142
speed: 0.0278s/iter; left time: 5457.7710s
iters: 900, epoch: 9 | loss: 0.4924962
speed: 0.0278s/iter; left time: 5456.6360s
iters: 1000, epoch: 9 | loss: 0.2940640
speed: 0.0279s/iter; left time: 5467.9393s
iters: 1100, epoch: 9 | loss: 0.3161500
speed: 0.0279s/iter; left time: 5475.9595s
iters: 1200, epoch: 9 | loss: 0.3189687
speed: 0.0278s/iter; left time: 5446.5077s
iters: 1300, epoch: 9 | loss: 0.3449011
speed: 0.0281s/iter; left time: 5510.5618s
iters: 1400, epoch: 9 | loss: 0.3816228
speed: 0.0283s/iter; left time: 5529.3776s
iters: 1500, epoch: 9 | loss: 0.4140366
speed: 0.0282s/iter; left time: 5518.1599s
iters: 1600, epoch: 9 | loss: 0.3251882
speed: 0.0282s/iter; left time: 5515.2035s
iters: 1700, epoch: 9 | loss: 0.3331461
speed: 0.0280s/iter; left time: 5475.0170s
iters: 1800, epoch: 9 | loss: 0.4283416
speed: 0.0278s/iter; left time: 5429.8488s
iters: 1900, epoch: 9 | loss: 0.3325794
speed: 0.0278s/iter; left time: 5420.5551s
iters: 2000, epoch: 9 | loss: 0.3564896
speed: 0.0278s/iter; left time: 5425.4002s
iters: 2100, epoch: 9 | loss: 0.3499347
speed: 0.0278s/iter; left time: 5425.6226s
Epoch: 9 cost time: 59.779510259628296
--------start to validate-----------
normed mse:0.4228, mae:0.4387, rmse:0.6502, mape:6.3290, mspe:29452.3138, corr:0.8390
denormed mse:6.6619, mae:1.4925, rmse:2.5811, mape:inf, mspe:inf, corr:0.8390
--------start to test-----------
normed mse:0.3737, mae:0.4073, rmse:0.6113, mape:7.3629, mspe:21858.6990, corr:0.7157
denormed mse:7.2940, mae:1.4876, rmse:2.7007, mape:inf, mspe:inf, corr:0.7157
Epoch: 9, Steps: 2142 | Train Loss: 0.3616719 valid Loss: 0.4386964 Test Loss: 0.4073216
EarlyStopping counter: 6 out of 15
Updating learning rate to 0.0056722446875214815
iters: 100, epoch: 10 | loss: 0.2900885
speed: 0.1486s/iter; left time: 28951.5411s
iters: 200, epoch: 10 | loss: 0.3973311
speed: 0.0275s/iter; left time: 5350.2237s
iters: 300, epoch: 10 | loss: 0.3671376
speed: 0.0276s/iter; left time: 5379.2409s
iters: 400, epoch: 10 | loss: 0.3689273
speed: 0.0280s/iter; left time: 5437.8581s
iters: 500, epoch: 10 | loss: 0.3674505
speed: 0.0280s/iter; left time: 5438.4738s
iters: 600, epoch: 10 | loss: 0.3244604
speed: 0.0278s/iter; left time: 5397.7057s
iters: 700, epoch: 10 | loss: 0.3428321
speed: 0.0279s/iter; left time: 5417.2417s
iters: 800, epoch: 10 | loss: 0.2774915
speed: 0.0276s/iter; left time: 5354.6267s
iters: 900, epoch: 10 | loss: 0.5094953
speed: 0.0276s/iter; left time: 5350.0250s
iters: 1000, epoch: 10 | loss: 0.3412493
speed: 0.0280s/iter; left time: 5430.0755s
iters: 1100, epoch: 10 | loss: 0.3945991
speed: 0.0275s/iter; left time: 5338.2592s
iters: 1200, epoch: 10 | loss: 0.3750900
speed: 0.0275s/iter; left time: 5336.3502s
iters: 1300, epoch: 10 | loss: 0.4605041
speed: 0.0275s/iter; left time: 5332.7286s
iters: 1400, epoch: 10 | loss: 0.4066358
speed: 0.0276s/iter; left time: 5334.1538s
iters: 1500, epoch: 10 | loss: 0.3515254
speed: 0.0275s/iter; left time: 5328.1034s
iters: 1600, epoch: 10 | loss: 0.5100784
speed: 0.0276s/iter; left time: 5332.2505s
iters: 1700, epoch: 10 | loss: 0.4475189
speed: 0.0278s/iter; left time: 5367.5615s
iters: 1800, epoch: 10 | loss: 0.3756194
speed: 0.0280s/iter; left time: 5404.4193s
iters: 1900, epoch: 10 | loss: 0.3614157
speed: 0.0282s/iter; left time: 5435.4784s
iters: 2000, epoch: 10 | loss: 0.4440518
speed: 0.0288s/iter; left time: 5560.7261s
iters: 2100, epoch: 10 | loss: 0.3380460
speed: 0.0285s/iter; left time: 5492.9167s
Epoch: 10 cost time: 59.5790855884552
--------start to validate-----------
normed mse:0.4307, mae:0.4398, rmse:0.6563, mape:6.2669, mspe:29055.0190, corr:0.8386
denormed mse:6.8327, mae:1.5033, rmse:2.6139, mape:inf, mspe:inf, corr:0.8386
--------start to test-----------
normed mse:0.3560, mae:0.3932, rmse:0.5967, mape:7.4114, mspe:24304.7538, corr:0.7213
denormed mse:7.0991, mae:1.4397, rmse:2.6644, mape:inf, mspe:inf, corr:0.7213
Epoch: 10, Steps: 2142 | Train Loss: 0.3579478 valid Loss: 0.4398302 Test Loss: 0.3932088
EarlyStopping counter: 7 out of 15
Updating learning rate to 0.0053886324531454075
iters: 100, epoch: 11 | loss: 0.4010978
speed: 0.1514s/iter; left time: 29166.4668s
iters: 200, epoch: 11 | loss: 0.3511622
speed: 0.0281s/iter; left time: 5411.7574s
iters: 300, epoch: 11 | loss: 0.3938432
speed: 0.0281s/iter; left time: 5401.9186s
iters: 400, epoch: 11 | loss: 0.3107118
speed: 0.0280s/iter; left time: 5395.8313s
iters: 500, epoch: 11 | loss: 0.3301463
speed: 0.0282s/iter; left time: 5427.8730s
iters: 600, epoch: 11 | loss: 0.3875788
speed: 0.0280s/iter; left time: 5389.4144s
iters: 700, epoch: 11 | loss: 0.3090936
speed: 0.0280s/iter; left time: 5383.3654s
iters: 800, epoch: 11 | loss: 0.3182698
speed: 0.0278s/iter; left time: 5345.9382s
iters: 900, epoch: 11 | loss: 0.3655111
speed: 0.0278s/iter; left time: 5331.0441s
iters: 1000, epoch: 11 | loss: 0.2847968
speed: 0.0278s/iter; left time: 5331.1720s
iters: 1100, epoch: 11 | loss: 0.2750012
speed: 0.0282s/iter; left time: 5414.7571s
iters: 1200, epoch: 11 | loss: 0.4303603
speed: 0.0281s/iter; left time: 5391.7382s
iters: 1300, epoch: 11 | loss: 0.2917214
speed: 0.0284s/iter; left time: 5436.3090s
iters: 1400, epoch: 11 | loss: 0.3828793
speed: 0.0280s/iter; left time: 5349.8228s
iters: 1500, epoch: 11 | loss: 0.3677426
speed: 0.0279s/iter; left time: 5340.5779s
iters: 1600, epoch: 11 | loss: 0.2885067
speed: 0.0279s/iter; left time: 5336.3839s
iters: 1700, epoch: 11 | loss: 0.3390763
speed: 0.0280s/iter; left time: 5350.4018s
iters: 1800, epoch: 11 | loss: 0.2992726
speed: 0.0280s/iter; left time: 5343.0944s
iters: 1900, epoch: 11 | loss: 0.3509193
speed: 0.0280s/iter; left time: 5341.6770s
iters: 2000, epoch: 11 | loss: 0.3269296
speed: 0.0280s/iter; left time: 5348.3123s
iters: 2100, epoch: 11 | loss: 0.6610678
speed: 0.0284s/iter; left time: 5410.5713s
Epoch: 11 cost time: 60.09651231765747
--------start to validate-----------
normed mse:0.4126, mae:0.4290, rmse:0.6424, mape:6.2955, mspe:28839.2656, corr:0.8417
denormed mse:6.6323, mae:1.4699, rmse:2.5753, mape:inf, mspe:inf, corr:0.8417
--------start to test-----------
normed mse:0.3609, mae:0.3942, rmse:0.6007, mape:8.2736, mspe:29474.0744, corr:0.7162
denormed mse:6.8909, mae:1.4109, rmse:2.6250, mape:inf, mspe:inf, corr:0.7162
Epoch: 11, Steps: 2142 | Train Loss: 0.3590887 valid Loss: 0.4289519 Test Loss: 0.3941905
EarlyStopping counter: 8 out of 15
Updating learning rate to 0.0051192008304881366
iters: 100, epoch: 12 | loss: 0.3217768
speed: 0.1514s/iter; left time: 28853.4659s
iters: 200, epoch: 12 | loss: 0.3381207
speed: 0.0283s/iter; left time: 5385.6346s
iters: 300, epoch: 12 | loss: 0.3095023
speed: 0.0283s/iter; left time: 5377.6092s
iters: 400, epoch: 12 | loss: 0.3355850
speed: 0.0282s/iter; left time: 5361.3733s
iters: 500, epoch: 12 | loss: 0.4180435
speed: 0.0282s/iter; left time: 5361.3226s
iters: 600, epoch: 12 | loss: 0.2757073
speed: 0.0282s/iter; left time: 5365.0668s
iters: 700, epoch: 12 | loss: 0.3483370
speed: 0.0282s/iter; left time: 5347.4459s
iters: 800, epoch: 12 | loss: 0.2809200
speed: 0.0279s/iter; left time: 5294.3898s
iters: 900, epoch: 12 | loss: 0.3577588
speed: 0.0279s/iter; left time: 5287.8842s
iters: 1000, epoch: 12 | loss: 0.3402998
speed: 0.0285s/iter; left time: 5411.8483s
iters: 1100, epoch: 12 | loss: 0.3226743
speed: 0.0283s/iter; left time: 5354.6314s
iters: 1200, epoch: 12 | loss: 0.3679209
speed: 0.0280s/iter; left time: 5307.4323s
iters: 1300, epoch: 12 | loss: 0.4554799
speed: 0.0280s/iter; left time: 5307.0615s
iters: 1400, epoch: 12 | loss: 0.4203964
speed: 0.0282s/iter; left time: 5334.6971s
iters: 1500, epoch: 12 | loss: 0.3709324
speed: 0.0278s/iter; left time: 5256.8723s
iters: 1600, epoch: 12 | loss: 0.4153709
speed: 0.0282s/iter; left time: 5327.5631s
iters: 1700, epoch: 12 | loss: 0.3395702
speed: 0.0284s/iter; left time: 5367.3621s
iters: 1800, epoch: 12 | loss: 0.3653106
speed: 0.0275s/iter; left time: 5200.3756s
iters: 1900, epoch: 12 | loss: 0.3667592
speed: 0.0276s/iter; left time: 5202.0497s
iters: 2000, epoch: 12 | loss: 0.3408771
speed: 0.0277s/iter; left time: 5229.9565s
iters: 2100, epoch: 12 | loss: 0.3763930
speed: 0.0279s/iter; left time: 5266.1365s
Epoch: 12 cost time: 60.12066292762756
--------start to validate-----------
normed mse:0.4283, mae:0.4382, rmse:0.6545, mape:6.6969, mspe:32954.8826, corr:0.8367
denormed mse:6.7505, mae:1.4951, rmse:2.5982, mape:inf, mspe:inf, corr:0.8367
--------start to test-----------
normed mse:0.3669, mae:0.3961, rmse:0.6057, mape:7.2348, mspe:22615.7119, corr:0.7106
denormed mse:6.8549, mae:1.3993, rmse:2.6182, mape:inf, mspe:inf, corr:0.7106
Epoch: 12, Steps: 2142 | Train Loss: 0.3563409 valid Loss: 0.4382461 Test Loss: 0.3960557
EarlyStopping counter: 9 out of 15
Updating learning rate to 0.00486324078896373
iters: 100, epoch: 13 | loss: 0.4522523
speed: 0.1505s/iter; left time: 28353.4317s
iters: 200, epoch: 13 | loss: 0.3278935
speed: 0.0280s/iter; left time: 5268.7788s
iters: 300, epoch: 13 | loss: 0.3633406
speed: 0.0279s/iter; left time: 5256.5203s
iters: 400, epoch: 13 | loss: 0.4294580
speed: 0.0279s/iter; left time: 5251.6118s
iters: 500, epoch: 13 | loss: 0.3454820
speed: 0.0280s/iter; left time: 5257.8887s
iters: 600, epoch: 13 | loss: 0.3645260
speed: 0.0280s/iter; left time: 5261.4420s
iters: 700, epoch: 13 | loss: 0.3417323
speed: 0.0280s/iter; left time: 5260.9822s
iters: 800, epoch: 13 | loss: 0.3129990
speed: 0.0280s/iter; left time: 5255.6694s
iters: 900, epoch: 13 | loss: 0.2905439
speed: 0.0280s/iter; left time: 5252.0826s
iters: 1000, epoch: 13 | loss: 0.2850313
speed: 0.0281s/iter; left time: 5259.5704s
iters: 1100, epoch: 13 | loss: 0.3172965
speed: 0.0280s/iter; left time: 5246.1584s
iters: 1200, epoch: 13 | loss: 0.3209522
speed: 0.0280s/iter; left time: 5246.7961s
iters: 1300, epoch: 13 | loss: 0.3128294
speed: 0.0280s/iter; left time: 5249.5419s
iters: 1400, epoch: 13 | loss: 0.3069360
speed: 0.0280s/iter; left time: 5244.6790s
iters: 1500, epoch: 13 | loss: 0.2832389
speed: 0.0280s/iter; left time: 5245.1639s
iters: 1600, epoch: 13 | loss: 0.3491664
speed: 0.0279s/iter; left time: 5205.3280s
iters: 1700, epoch: 13 | loss: 0.3340007
speed: 0.0284s/iter; left time: 5309.2219s
iters: 1800, epoch: 13 | loss: 0.3344918
speed: 0.0285s/iter; left time: 5313.6044s
iters: 1900, epoch: 13 | loss: 0.3484406
speed: 0.0285s/iter; left time: 5308.9329s
iters: 2000, epoch: 13 | loss: 0.2734198
speed: 0.0284s/iter; left time: 5304.1585s
iters: 2100, epoch: 13 | loss: 0.3701983
speed: 0.0279s/iter; left time: 5199.2803s
Epoch: 13 cost time: 60.11739897727966
--------start to validate-----------
normed mse:0.4303, mae:0.4390, rmse:0.6560, mape:6.6884, mspe:34114.8247, corr:0.8375
denormed mse:6.7497, mae:1.4995, rmse:2.5980, mape:inf, mspe:inf, corr:0.8375
--------start to test-----------
normed mse:0.3589, mae:0.3925, rmse:0.5991, mape:7.5275, mspe:25406.8618, corr:0.7192
denormed mse:6.9260, mae:1.4134, rmse:2.6317, mape:inf, mspe:inf, corr:0.7192
Epoch: 13, Steps: 2142 | Train Loss: 0.3540810 valid Loss: 0.4389815 Test Loss: 0.3925090
EarlyStopping counter: 10 out of 15
Updating learning rate to 0.004620078749515543
iters: 100, epoch: 14 | loss: 0.3438686
speed: 0.1479s/iter; left time: 27544.2964s
iters: 200, epoch: 14 | loss: 0.2884781
speed: 0.0280s/iter; left time: 5210.4980s
iters: 300, epoch: 14 | loss: 0.3126722
speed: 0.0280s/iter; left time: 5203.0928s
iters: 400, epoch: 14 | loss: 0.3271807
speed: 0.0280s/iter; left time: 5200.1641s
iters: 500, epoch: 14 | loss: 0.4116562
speed: 0.0280s/iter; left time: 5199.4813s
iters: 600, epoch: 14 | loss: 0.3995543
speed: 0.0281s/iter; left time: 5212.3402s
iters: 700, epoch: 14 | loss: 0.3520363
speed: 0.0282s/iter; left time: 5228.8119s
iters: 800, epoch: 14 | loss: 0.3702521
speed: 0.0280s/iter; left time: 5202.5050s
iters: 900, epoch: 14 | loss: 0.3815005
speed: 0.0281s/iter; left time: 5220.2957s
iters: 1000, epoch: 14 | loss: 0.3172183
speed: 0.0281s/iter; left time: 5214.7825s
iters: 1100, epoch: 14 | loss: 0.2710787
speed: 0.0280s/iter; left time: 5182.3521s
iters: 1200, epoch: 14 | loss: 0.3369085
speed: 0.0281s/iter; left time: 5194.1470s
iters: 1300, epoch: 14 | loss: 0.3464781
speed: 0.0281s/iter; left time: 5205.2410s
iters: 1400, epoch: 14 | loss: 0.3249996
speed: 0.0280s/iter; left time: 5172.7727s
iters: 1500, epoch: 14 | loss: 0.3748113
speed: 0.0279s/iter; left time: 5164.6070s
iters: 1600, epoch: 14 | loss: 0.3545220
speed: 0.0282s/iter; left time: 5205.9071s
iters: 1700, epoch: 14 | loss: 0.3138350
speed: 0.0283s/iter; left time: 5226.5786s
iters: 1800, epoch: 14 | loss: 0.2359443
speed: 0.0279s/iter; left time: 5154.5516s
iters: 1900, epoch: 14 | loss: 0.3059377
speed: 0.0280s/iter; left time: 5166.2721s
iters: 2000, epoch: 14 | loss: 0.4641337
speed: 0.0275s/iter; left time: 5076.5863s
iters: 2100, epoch: 14 | loss: 0.5113470
speed: 0.0275s/iter; left time: 5072.0486s
Epoch: 14 cost time: 59.94197106361389
--------start to validate-----------
normed mse:0.4293, mae:0.4387, rmse:0.6552, mape:6.3817, mspe:29298.9563, corr:0.8369
denormed mse:6.6472, mae:1.4853, rmse:2.5782, mape:inf, mspe:inf, corr:0.8369
--------start to test-----------
normed mse:0.3625, mae:0.3948, rmse:0.6021, mape:7.2433, mspe:23448.7457, corr:0.7215
denormed mse:6.9705, mae:1.4218, rmse:2.6402, mape:inf, mspe:inf, corr:0.7215
Epoch: 14, Steps: 2142 | Train Loss: 0.3525640 valid Loss: 0.4387257 Test Loss: 0.3948280
EarlyStopping counter: 11 out of 15
Updating learning rate to 0.004389074812039766
iters: 100, epoch: 15 | loss: 0.2835026
speed: 0.1466s/iter; left time: 26997.5798s
iters: 200, epoch: 15 | loss: 0.3711831
speed: 0.0277s/iter; left time: 5104.0032s
iters: 300, epoch: 15 | loss: 0.2705855
speed: 0.0280s/iter; left time: 5143.2221s
iters: 400, epoch: 15 | loss: 0.3280960
speed: 0.0280s/iter; left time: 5146.2398s
iters: 500, epoch: 15 | loss: 0.3160923
speed: 0.0294s/iter; left time: 5406.9974s
iters: 600, epoch: 15 | loss: 0.3436387
speed: 0.0289s/iter; left time: 5302.4576s
iters: 700, epoch: 15 | loss: 0.3115176
speed: 0.0288s/iter; left time: 5280.9901s
iters: 800, epoch: 15 | loss: 0.3346707
speed: 0.0284s/iter; left time: 5212.5078s
iters: 900, epoch: 15 | loss: 0.3677571
speed: 0.0286s/iter; left time: 5244.5566s
iters: 1000, epoch: 15 | loss: 0.4072943
speed: 0.0284s/iter; left time: 5203.1259s
iters: 1100, epoch: 15 | loss: 0.3804372
speed: 0.0280s/iter; left time: 5128.9054s
iters: 1200, epoch: 15 | loss: 0.3443121
speed: 0.0279s/iter; left time: 5112.3673s
iters: 1300, epoch: 15 | loss: 0.3637579
speed: 0.0283s/iter; left time: 5176.0714s
iters: 1400, epoch: 15 | loss: 0.3499659
speed: 0.0286s/iter; left time: 5227.1227s
iters: 1500, epoch: 15 | loss: 0.5039499
speed: 0.0291s/iter; left time: 5310.4525s
iters: 1600, epoch: 15 | loss: 0.4006560
speed: 0.0283s/iter; left time: 5170.5931s
iters: 1700, epoch: 15 | loss: 0.3319346
speed: 0.0296s/iter; left time: 5400.1587s
iters: 1800, epoch: 15 | loss: 0.2939537
speed: 0.0289s/iter; left time: 5280.7841s
iters: 1900, epoch: 15 | loss: 0.3131467
speed: 0.0288s/iter; left time: 5247.6471s
iters: 2000, epoch: 15 | loss: 0.3423584
speed: 0.0282s/iter; left time: 5135.6943s
iters: 2100, epoch: 15 | loss: 0.3027259
speed: 0.0293s/iter; left time: 5336.4273s
Epoch: 15 cost time: 61.06135439872742
--------start to validate-----------
normed mse:0.4270, mae:0.4357, rmse:0.6535, mape:6.2969, mspe:29361.8301, corr:0.8387
denormed mse:6.7744, mae:1.4934, rmse:2.6028, mape:inf, mspe:inf, corr:0.8387
--------start to test-----------
normed mse:0.3801, mae:0.4064, rmse:0.6165, mape:7.6775, mspe:24234.3983, corr:0.7222
denormed mse:7.0129, mae:1.4326, rmse:2.6482, mape:inf, mspe:inf, corr:0.7222
Epoch: 15, Steps: 2142 | Train Loss: 0.3506703 valid Loss: 0.4357405 Test Loss: 0.4064314
EarlyStopping counter: 12 out of 15
Updating learning rate to 0.004169621071437777
iters: 100, epoch: 16 | loss: 0.4510111
speed: 0.1491s/iter; left time: 27125.3628s
iters: 200, epoch: 16 | loss: 0.2692741
speed: 0.0277s/iter; left time: 5035.8793s
iters: 300, epoch: 16 | loss: 0.3289906
speed: 0.0276s/iter; left time: 5009.9902s
iters: 400, epoch: 16 | loss: 0.3797024
speed: 0.0279s/iter; left time: 5068.8114s
iters: 500, epoch: 16 | loss: 0.3341546
speed: 0.0279s/iter; left time: 5071.4988s
iters: 600, epoch: 16 | loss: 0.3495839
speed: 0.0279s/iter; left time: 5062.5783s
iters: 700, epoch: 16 | loss: 0.4357414
speed: 0.0279s/iter; left time: 5067.4511s
iters: 800, epoch: 16 | loss: 0.3485725
speed: 0.0282s/iter; left time: 5117.5560s
iters: 900, epoch: 16 | loss: 0.3322166
speed: 0.0280s/iter; left time: 5073.7118s
iters: 1000, epoch: 16 | loss: 0.4298945
speed: 0.0278s/iter; left time: 5034.2741s
iters: 1100, epoch: 16 | loss: 0.3356522
speed: 0.0277s/iter; left time: 5019.3092s
iters: 1200, epoch: 16 | loss: 0.3022777
speed: 0.0278s/iter; left time: 5030.0439s
iters: 1300, epoch: 16 | loss: 0.5145088
speed: 0.0279s/iter; left time: 5043.8716s
iters: 1400, epoch: 16 | loss: 0.3351660
speed: 0.0280s/iter; left time: 5053.5629s
iters: 1500, epoch: 16 | loss: 0.2188999
speed: 0.0280s/iter; left time: 5056.2751s
iters: 1600, epoch: 16 | loss: 0.3333143
speed: 0.0280s/iter; left time: 5047.3125s
iters: 1700, epoch: 16 | loss: 0.3249144
speed: 0.0280s/iter; left time: 5042.0314s
iters: 1800, epoch: 16 | loss: 0.3542661
speed: 0.0286s/iter; left time: 5164.6782s
iters: 1900, epoch: 16 | loss: 0.2972439
speed: 0.0282s/iter; left time: 5072.0900s
iters: 2000, epoch: 16 | loss: 0.5136545
speed: 0.0280s/iter; left time: 5043.7353s
iters: 2100, epoch: 16 | loss: 0.4824405
speed: 0.0276s/iter; left time: 4965.6085s
Epoch: 16 cost time: 59.80357551574707
--------start to validate-----------
normed mse:0.4162, mae:0.4301, rmse:0.6451, mape:6.6630, mspe:33023.5124, corr:0.8392
denormed mse:6.6257, mae:1.4663, rmse:2.5741, mape:inf, mspe:inf, corr:0.8392
--------start to test-----------
normed mse:0.3918, mae:0.4191, rmse:0.6259, mape:8.4547, mspe:26315.2061, corr:0.7104
denormed mse:6.8745, mae:1.4420, rmse:2.6219, mape:inf, mspe:inf, corr:0.7104
Epoch: 16, Steps: 2142 | Train Loss: 0.3491714 valid Loss: 0.4301337 Test Loss: 0.4191123
EarlyStopping counter: 13 out of 15
Updating learning rate to 0.003961140017865888
iters: 100, epoch: 17 | loss: 0.3080508
speed: 0.1480s/iter; left time: 26622.6370s
iters: 200, epoch: 17 | loss: 0.3300118
speed: 0.0282s/iter; left time: 5071.8839s
iters: 300, epoch: 17 | loss: 0.3696486
speed: 0.0283s/iter; left time: 5076.6316s
iters: 400, epoch: 17 | loss: 0.3103400
speed: 0.0280s/iter; left time: 5030.2413s
iters: 500, epoch: 17 | loss: 0.2892926
speed: 0.0276s/iter; left time: 4947.6185s
iters: 600, epoch: 17 | loss: 0.2658455
speed: 0.0277s/iter; left time: 4960.8161s
iters: 700, epoch: 17 | loss: 0.3430028
speed: 0.0279s/iter; left time: 5003.9050s
iters: 800, epoch: 17 | loss: 0.4180345
speed: 0.0280s/iter; left time: 5014.6019s
iters: 900, epoch: 17 | loss: 0.3565879
speed: 0.0281s/iter; left time: 5037.1669s
iters: 1000, epoch: 17 | loss: 0.3849063
speed: 0.0282s/iter; left time: 5042.6895s
iters: 1100, epoch: 17 | loss: 0.3202542
speed: 0.0282s/iter; left time: 5041.5153s
iters: 1200, epoch: 17 | loss: 0.2963192
speed: 0.0280s/iter; left time: 5011.3633s
iters: 1300, epoch: 17 | loss: 0.2859437
speed: 0.0281s/iter; left time: 5014.8229s
iters: 1400, epoch: 17 | loss: 0.2909003
speed: 0.0280s/iter; left time: 5007.0248s
iters: 1500, epoch: 17 | loss: 0.3638932
speed: 0.0281s/iter; left time: 5008.2811s
iters: 1600, epoch: 17 | loss: 0.3561967
speed: 0.0280s/iter; left time: 4997.1358s
iters: 1700, epoch: 17 | loss: 0.5093152
speed: 0.0277s/iter; left time: 4935.8631s
iters: 1800, epoch: 17 | loss: 0.2677069
speed: 0.0276s/iter; left time: 4913.5956s
iters: 1900, epoch: 17 | loss: 0.3010083
speed: 0.0276s/iter; left time: 4913.4009s
iters: 2000, epoch: 17 | loss: 0.2638042
speed: 0.0276s/iter; left time: 4907.3877s
iters: 2100, epoch: 17 | loss: 0.4300926
speed: 0.0276s/iter; left time: 4903.8334s
Epoch: 17 cost time: 59.81618595123291
--------start to validate-----------
normed mse:0.4280, mae:0.4377, rmse:0.6542, mape:5.5515, mspe:21868.2175, corr:0.8367
denormed mse:6.6570, mae:1.4786, rmse:2.5801, mape:inf, mspe:inf, corr:0.8367
--------start to test-----------
normed mse:0.3892, mae:0.4184, rmse:0.6239, mape:7.2614, mspe:21184.3688, corr:0.7183
denormed mse:7.3174, mae:1.5167, rmse:2.7051, mape:inf, mspe:inf, corr:0.7183
Epoch: 17, Steps: 2142 | Train Loss: 0.3478450 valid Loss: 0.4376619 Test Loss: 0.4184305
EarlyStopping counter: 14 out of 15
Updating learning rate to 0.0037630830169725934
iters: 100, epoch: 18 | loss: 0.3493851
speed: 0.1469s/iter; left time: 26103.3502s
iters: 200, epoch: 18 | loss: 0.3936039
speed: 0.0283s/iter; left time: 5019.1979s
iters: 300, epoch: 18 | loss: 0.3425503
speed: 0.0282s/iter; left time: 5013.9808s
iters: 400, epoch: 18 | loss: 0.4523201
speed: 0.0283s/iter; left time: 5015.3901s
iters: 500, epoch: 18 | loss: 0.4523338
speed: 0.0282s/iter; left time: 5007.3683s
iters: 600, epoch: 18 | loss: 0.3734182
speed: 0.0276s/iter; left time: 4890.5614s
iters: 700, epoch: 18 | loss: 0.4001013
speed: 0.0276s/iter; left time: 4883.5716s
iters: 800, epoch: 18 | loss: 0.3639956
speed: 0.0276s/iter; left time: 4889.4474s
iters: 900, epoch: 18 | loss: 0.3252257
speed: 0.0281s/iter; left time: 4966.6539s
iters: 1000, epoch: 18 | loss: 0.2928127
speed: 0.0276s/iter; left time: 4877.2264s
iters: 1100, epoch: 18 | loss: 0.3182337
speed: 0.0277s/iter; left time: 4897.8872s
iters: 1200, epoch: 18 | loss: 0.2747648
speed: 0.0278s/iter; left time: 4904.7353s
iters: 1300, epoch: 18 | loss: 0.3395426
speed: 0.0278s/iter; left time: 4898.0745s
iters: 1400, epoch: 18 | loss: 0.3047990
speed: 0.0277s/iter; left time: 4893.5173s
iters: 1500, epoch: 18 | loss: 0.4318493
speed: 0.0280s/iter; left time: 4939.3328s
iters: 1600, epoch: 18 | loss: 0.4360686
speed: 0.0283s/iter; left time: 4994.3697s
iters: 1700, epoch: 18 | loss: 0.3741941
speed: 0.0280s/iter; left time: 4925.0932s
iters: 1800, epoch: 18 | loss: 0.3823816
speed: 0.0278s/iter; left time: 4887.5877s
iters: 1900, epoch: 18 | loss: 0.3651931
speed: 0.0277s/iter; left time: 4874.1016s
iters: 2000, epoch: 18 | loss: 0.2952438
speed: 0.0280s/iter; left time: 4914.5200s
iters: 2100, epoch: 18 | loss: 0.2951702
speed: 0.0279s/iter; left time: 4908.1618s
Epoch: 18 cost time: 59.75731587409973
--------start to validate-----------
normed mse:0.4196, mae:0.4308, rmse:0.6478, mape:5.8832, mspe:25092.2845, corr:0.8388
denormed mse:6.5500, mae:1.4567, rmse:2.5593, mape:inf, mspe:inf, corr:0.8388
--------start to test-----------
normed mse:0.3768, mae:0.4091, rmse:0.6138, mape:7.3974, mspe:21079.9626, corr:0.7191
denormed mse:7.0757, mae:1.4669, rmse:2.6600, mape:inf, mspe:inf, corr:0.7191
Epoch: 18, Steps: 2142 | Train Loss: 0.3477751 valid Loss: 0.4308268 Test Loss: 0.4091339
EarlyStopping counter: 15 out of 15
Early stopping
save model in exp/ETT_checkpoints/SCINet_ETTh1_ftM_sl48_ll24_pl24_lr0.009_bs4_hid4.0_s1_l3_dp0.5_invFalse_itr0/ETTh124.bin
testing : SCINet_ETTh1_ftM_sl48_ll24_pl24_lr0.009_bs4_hid4.0_s1_l3_dp0.5_invFalse_itr0<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
test 2857
normed mse:0.3594, mae:0.3923, rmse:0.5995, mape:7.5035, mspe:24301.3721, corr:0.7205
TTTT denormed mse:6.9649, mae:1.4154, rmse:2.6391, mape:inf, mspe:inf, corr:0.7205
Final mean normed mse:0.3594,mae:0.3923,denormed mse:6.9649,mae:1.4154

How to run SCINet on a custom dataset?

Hello again,
What changes need to be made in run_ETTh.py, exp_ETTh.py and etth_data_loader.py so that this custom dataset can be loaded and the code runs successfully with Voltage as the target column?
Dataset to be used: dataset4.csv

Thank you in advance.
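A minimal sketch of the data preparation this usually involves, assuming the repository follows the Informer-style loader convention in which a custom CSV has a 'date' column first and the forecasting target as the last value column. The file name dataset4.csv and the target Voltage are taken from the question above; the column layout, the output file name and the preprocessing itself are assumptions for illustration, not the repository's confirmed behaviour.

import pandas as pd

# Hypothetical preprocessing sketch: reorder dataset4.csv into the layout that
# Informer-style custom-data loaders typically expect -- a 'date' column first
# and the forecasting target ('Voltage') as the last value column.
df = pd.read_csv('dataset4.csv')

# Make sure the timestamp column is literally named 'date'.
df = df.rename(columns={df.columns[0]: 'date'})
assert 'Voltage' in df.columns, "target column 'Voltage' not found"

# Put the target last; the remaining value columns stay in between.
value_cols = [c for c in df.columns if c not in ('date', 'Voltage')]
df = df[['date'] + value_cols + ['Voltage']]

df.to_csv('dataset4_formatted.csv', index=False)
# Total number of value columns (target included); the model's input-dimension
# argument should match this when training in multivariate mode.
print(len(value_cols) + 1)

Beyond the file layout, the new dataset would typically also need to be registered in run_ETTh.py (so the script knows the file name and target) and mapped to the generic custom-dataset loader in exp_ETTh.py, but the exact dictionary keys and argument names depend on the repository version, so treat this as a starting point rather than a verified recipe.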
