Comments (4)
Yes, it is actually different. The reason is that the network places no constraint on the number of patches: multi-head self-attention only requires that the sizes of Q, K, and V be consistent, and the FFN operates only along the channel dimension C. So the encoder doesn't care how many patches it receives, as sketched below.
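To make that concrete, here is a minimal sketch (not the mae-pytorch implementation; dimensions and names are illustrative): a single encoder block built from nn.MultiheadAttention and a channel-wise MLP, whose parameter shapes depend only on the embedding dimension C, so the same weights run on 49 visible tokens or on the full 196 tokens.

import torch
import torch.nn as nn

class Block(nn.Module):
    """One Transformer encoder block; no parameter depends on the token count."""
    def __init__(self, dim=768, num_heads=12, mlp_ratio=4.0):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, int(dim * mlp_ratio)),
            nn.GELU(),
            nn.Linear(int(dim * mlp_ratio), dim),
        )

    def forward(self, x):  # x: (B, N, C), N is arbitrary
        h = self.norm1(x)
        x = x + self.attn(h, h, h)[0]    # self-attention: only needs Q, K, V sizes to agree
        x = x + self.mlp(self.norm2(x))  # FFN: acts on the C dimension only
        return x

blk = Block()
x_vis = torch.randn(2, 49, 768)    # e.g. 25% visible patches during pre-training
x_full = torch.randn(2, 196, 768)  # full patch set during fine-tuning
print(blk(x_vis).shape, blk(x_full).shape)  # same weights handle both token counts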
Hello, in the end-to-end fine-tuning stage, the encoder accepts the full set of image tokens without any masking. In fact, the pre-training stage just provides a better initialization for fine-tuning. You can see the difference in my code:
# Pre-training encoder forward: keep only the visible tokens
x = x + self.pos_embed.type_as(x).to(x.device).clone().detach()
B, _, C = x.shape
x_vis = x[~mask].reshape(B, -1, C)  # ~mask means visible
for blk in self.blocks:
    x_vis = blk(x_vis)

# Fine-tuning encoder forward: all tokens, no mask
if self.pos_embed is not None:
    x = x + self.pos_embed.expand(B, -1, -1).type_as(x).to(x.device).clone().detach()
for blk in self.blocks:
    x = blk(x)
Hope this can help you!
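As a hedged follow-up sketch of the "better init" point (module names and shapes are illustrative, not the actual mae-pytorch checkpoint layout): because no encoder weight depends on the number of tokens, the pre-trained blocks can be loaded into the fine-tuning model as-is and then run on the full, unmasked token set.

import torch
import torch.nn as nn

def make_blocks(depth=2, dim=768, heads=12):
    # Stack of standard encoder layers; parameter shapes depend only on dim/heads.
    return nn.ModuleList([
        nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        for _ in range(depth)
    ])

pretrain_blocks = make_blocks()   # trained on ~25% visible tokens
finetune_blocks = make_blocks()   # will see all 196 tokens during fine-tuning
finetune_blocks.load_state_dict(pretrain_blocks.state_dict())  # shapes match exactly

head = nn.Linear(768, 1000)       # new classification head, trained from scratch
x = torch.randn(2, 196, 768)      # full token set, no mask
for blk in finetune_blocks:
    x = blk(x)
logits = head(x.mean(dim=1))      # e.g. mean pooling over tokens
print(logits.shape)               # torch.Size([2, 1000])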
Thanks! I now have one more question. If we look at Transformers for NLP, the sequence dimension is fixed to some arbitrary max_len hyperparameter. However, that is not the case here, where the number of tokens (the sequence length) differs between pre-training and fine-tuning. I wonder if the fixed length in NLP is due to batching, which requires every sample in a dataloader batch to have the same sequence length, rather than a constraint of the network itself.
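Regarding the batching part of this question, a minimal sketch (sizes illustrative): the default collate step simply stacks per-sample tensors, so equal sequence lengths within a batch are a batching requirement; with a fixed mask ratio, every sample keeps the same number of visible tokens, so MAE batches without any max_len-style padding.

import torch

a = torch.randn(49, 768)   # 49 visible tokens (196 patches with 75% masked)
b = torch.randn(60, 768)   # a sample with a different length...
try:
    torch.stack([a, b])    # ...cannot be stacked into one batch tensor
except RuntimeError as e:
    print("stack failed:", e)

# With a fixed mask ratio, every sample has the same number of visible tokens,
# so the default collate works without padding or truncation to a max_len.
batch = torch.stack([torch.randn(49, 768) for _ in range(8)])
print(batch.shape)         # torch.Size([8, 49, 768])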
I have the same question. During pre-training, I understand that only the visible tokens (25% of the original image) are fed to the encoder, but fine-tuning uses no mask, so if the entire image is fed in, isn't the input a different size?
Related Issues (20)
- pil_loader slowly
- typo error local-rank HOT 1
- Question upon MSE loss HOT 1
- Do I need to specify the value of mask_ratio before finetune?
- training with 400 epoch has IndexError when training at the last iteration
- Which dataset is used for the released pretrained model?
- A warning when pretraining HOT 2
- Pretrained weight of vit-S
- Patch size for pretraining
- learning rate curve
- Visualize Problems HOT 2
- How to resume from the checkpoint?
- SimMIM test
- I wonder if you plan to release the mask prediction visualization code?
- RuntimeError: Given normalized_shape=[768], expected input with shape [*, 768], but got input of size[12]
- Visual loading model error HOT 6
- How to implement Layer-wise learning rate decay on ResNet?
- Error reported in code finetune, AttributeError: 'VisionTransformer' object has no attribute 'get_num_layers'.
- The import accimage cannot be parsed
- Is Mixup necessary for MAE fine-tuning?