Comments (5)
Hi, having a constant loss throughout training can be a real headache. Let's try a couple of things:
- What is the exact shape of a batch returned by your dataloader? (See the sketch after this comment for a quick check of this and the next point.)
- Have you properly normalized your data?
- Could you try one of the LSTM/GRU models in the benchmark to see if the problem persists?
You should keep the MSELoss for now; the OZELoss is specific to my use case and may not fit your problem anyway.
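A quick way to check the first two points; everything below is a dummy stand-in for your actual dataset, so the shapes are assumptions:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy data standing in for your dataset; the shapes are assumptions.
dataset = TensorDataset(torch.randn(256, 100, 8),   # [samples, time_step, input_size]
                        torch.randn(256, 100, 1))   # targets
dataloader = DataLoader(dataset, batch_size=32)

x, y = next(iter(dataloader))
print("batch shape:", x.shape)  # expected: [batch_size, time_step, input_size]

# Features on very different scales often stall training.
print("per-feature mean:", x.mean(dim=(0, 1)))
print("per-feature std: ", x.std(dim=(0, 1)))
```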
Thanks for your reply.
Firstly, I used an LSTM identical to the one in the benchmark, and the results were good. So now I want to see the results with the Transformer.
Secondly, I have tried both MinMax and log normalization; log normalization worked better with the LSTM, so I used it (both are sketched below).
Thirdly, the shape returned by the dataloader is [batch_size, time_step, input_size].
So should I adjust the other parameters, or does the Transformer simply not fit my data?
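For reference, a minimal sketch of the two normalizations mentioned; the feature layout and the small epsilon are assumptions, not taken from the repo:

```python
import numpy as np

def minmax_normalize(x: np.ndarray) -> np.ndarray:
    """Scale each feature to [0, 1], with stats over samples and time steps."""
    x_min = x.min(axis=(0, 1), keepdims=True)
    x_max = x.max(axis=(0, 1), keepdims=True)
    return (x - x_min) / (x_max - x_min + 1e-8)  # epsilon avoids divide-by-zero

def log_normalize(x: np.ndarray) -> np.ndarray:
    """log1p compresses heavy-tailed non-negative features."""
    return np.log1p(x)

x = np.abs(np.random.randn(256, 100, 8))  # dummy non-negative data
print(minmax_normalize(x).min(), minmax_normalize(x).max())
print(log_normalize(x).mean())
```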
- The LSTM working properly is not a good sign at all.
- Either normalization should be fine.
- That is the correct shape.
One thing you could try is reducing the number of layers of the Transformer: firstly, because smaller models are usually easier to get to converge, and secondly, because it makes it easier to pinpoint where the gradient vanishes or explodes (if that is indeed the reason the model is stuck). See the sketch after this comment.
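A sketch of both suggestions, using a stock PyTorch encoder as a stand-in for this repo's Transformer (whose own layer count is presumably the N constructor argument mentioned in the issues below):

```python
import torch
import torch.nn as nn

# Stand-in model: substitute the repo's Transformer and shrink its
# number of layers to get the "smaller model".
model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True),
    num_layers=1,  # start small; add layers back once it converges
)

x = torch.randn(8, 100, 32)    # [batch_size, time_step, d_model]
loss = model(x).pow(2).mean()  # dummy loss, just to produce gradients
loss.backward()

# Near-zero norms in the early layers suggest vanishing gradients;
# very large norms suggest exploding ones.
for name, p in model.named_parameters():
    if p.grad is not None:
        print(f"{name:55s} grad norm = {p.grad.norm().item():.3e}")
```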
Thanks.
I removed some of the layer norms and residual connections in the sub-layers, and the loss then decreased normally. What's more, the Transformer performs a bit better than the LSTM on my data.
It's funny, since the layers you removed supposedly improve convergence in all networks. I guess the Transformer architecture still isn't that well understood in that regard.
If you want to get to the bottom of this convergence issue, you could now try plotting attention maps (sketched below). You could also add these layers back, one at a time, to isolate the problem.
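If you go down the attention-map route, here is a minimal plotting sketch; how you recover the score matrix depends on the implementation (e.g. a forward hook on the attention module), so the synthetic scores below are a placeholder:

```python
import torch
import matplotlib.pyplot as plt

# Stand-in attention matrix [time_step, time_step]; replace it with the
# scores recovered from your model.
scores = torch.softmax(torch.randn(100, 100), dim=-1)

plt.imshow(scores.numpy(), cmap="viridis")
plt.xlabel("key position")
plt.ylabel("query position")
plt.colorbar(label="attention weight")
plt.title("Attention map")
plt.show()
```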
Related Issues (20)
- how to apply this transformer model to a long univariate time series data HOT 1
- RuntimeError: The size of tensor a (896) must match the size of tensor b (14) at non-singleton dimension 0 HOT 1
- Some questions about the prediction HOT 3
- can you explain more on dimension arguments to transformer class ? HOT 1
- Runtime error: mat1 dim 1 must match mat2 dim 0
- Citation Bibtex HOT 3
- Why the sigmoid in the transformer? HOT 6
- Issue while training the model HOT 1
- Get Error/Applying Univariate Time Series Dataset HOT 3
- Possibility for time series anomaly detection? HOT 3
- Can time series A be used to predict time series B? HOT 4
- Hello, thanks for your great works, I'm confused with the dataset. HOT 10
- Question about input of the decoder HOT 1
- How do I set d_model, q, v, h, N, dropout, attention_size value? HOT 1
- The input dimension??? HOT 10
- A question HOT 3
- How to set Positional encoding HOT 4
- How to change the program to a classification model ? HOT 1
- RuntimeError: Given groups=1, weight of size [48, 37, 11], expected input[8, 691, 18] to have 37 channels, but got 691 channels instead HOT 6
- Position Encoding HOT 1