Comments (5)
I use the EDL loss to train on the mini-ImageNet dataset with 64 classes, but the loss doesn't converge and the accuracy is very low.
I think you can modify the parameters:
def edl_loss(self, func, y, alpha, annealing_step, device="cuda"):
    y = self.one_hot_embedding(y)
    y = y.to(device)
    alpha = alpha.to(device)
    S = torch.sum(alpha, dim=1, keepdim=True)
    A = torch.sum(y * (func(S) - func(alpha)), dim=1, keepdim=True)
    # annealing_coef = torch.min(
    #     torch.tensor(1.0, dtype=torch.float32),
    #     torch.tensor(self.epoch / annealing_step, dtype=torch.float32),
    # )
    annealing_coef = 0.1  # fixed small coefficient instead of the epoch-based annealing above
    kl_alpha = (alpha - 1) * (1 - y) + 1
    kl_div = annealing_coef * self.kl_divergence(kl_alpha, device=device)
    return A + kl_div
Setting annealing_coef to 0.1 or lower will work; do not set it to 1, that's too large.
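For completeness, the kl_divergence helper called above isn't shown in the snippet. Here is a minimal sketch of what it computes (the KL divergence between Dir(alpha) and the uniform Dirichlet Dir(1, ..., 1), following the standard formula from the original EDL paper); the repo's actual signature may differ slightly:

import torch

def kl_divergence(alpha, num_classes, device="cuda"):
    # KL( Dir(alpha) || Dir(1, ..., 1) ): penalizes evidence that
    # pushes the Dirichlet away from the uniform prior.
    ones = torch.ones((1, num_classes), dtype=torch.float32, device=device)
    sum_alpha = torch.sum(alpha, dim=1, keepdim=True)
    first_term = (
        torch.lgamma(sum_alpha)
        - torch.lgamma(alpha).sum(dim=1, keepdim=True)
        + torch.lgamma(ones).sum(dim=1, keepdim=True)
        - torch.lgamma(ones.sum(dim=1, keepdim=True))
    )
    second_term = (
        (alpha - ones)
        .mul(torch.digamma(alpha) - torch.digamma(sum_alpha))
        .sum(dim=1, keepdim=True)
    )
    return first_term + second_term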
Hi, thank you for your answer. I tried setting annealing_coef to 0.1 and 0.05, but it still doesn't work. Did you get it to work successfully?
I haven't tried this repo, but I have trained an 8631-identity face recognition network. ResNet-100 works and reaches the same accuracy on the in-distribution dataset as the softmax training method, but ResNet-50 fails to converge. Another thing we all found is that the KL loss damages accuracy; try decreasing its coefficient or just removing it.
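To make that ablation concrete, here is a minimal sketch that exposes the KL coefficient as a hyperparameter so it can be shrunk or dropped; the name edl_digamma_loss and the kl_weight argument are illustrative, not from the repo (this reuses the kl_divergence sketch above):

import torch

def edl_digamma_loss(y, alpha, kl_weight=0.0, device="cuda"):
    # y: one-hot targets; alpha: Dirichlet parameters (evidence + 1).
    y = y.to(device)
    alpha = alpha.to(device)
    S = torch.sum(alpha, dim=1, keepdim=True)
    # Expected cross-entropy under the Dirichlet (the "A" term above).
    A = torch.sum(y * (torch.digamma(S) - torch.digamma(alpha)), dim=1, keepdim=True)
    if kl_weight == 0.0:
        return A.mean()  # KL term removed entirely
    # Strip the target-class evidence so the KL only penalizes
    # misleading evidence on the wrong classes.
    kl_alpha = (alpha - 1) * (1 - y) + 1
    kl = kl_divergence(kl_alpha, alpha.shape[1], device=device)
    return (A + kl_weight * kl).mean()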
Thank you so much. It still doesn't work. I think I may need to fine-tune other hyperparameters.
Maybe you can refer to https://github.com/RuoyuChen10/FaceTechnologyTool/blob/master/FaceRecognition/evidential_learning.py; I have tried this on face recognition. I also failed at first. I conclude it mainly comes down to:
- removing the KL loss
- the learning rate, which is important
- the depth of the network
The learning rate and the depth of the network may have little influence on the softmax and cross-entropy training method, but they matter here; see the sketch below.
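Putting those three points together, here is an illustrative training setup; the backbone choice, learning rate, and schedule are assumptions to start from, not values tuned for mini-ImageNet (it reuses the edl_digamma_loss sketch above with the KL term removed):

import torch
import torchvision

# Deeper backbone: in our face-recognition runs ResNet-50 failed to
# converge while ResNet-100 did, so prefer a deeper model if compute
# allows (ResNet-101 here is an illustrative stand-in).
model = torchvision.models.resnet101(num_classes=64).cuda()

# EDL training is sensitive to the learning rate; starting lower than a
# typical softmax/cross-entropy schedule is a reasonable first try.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer,
                                                       factor=0.1, patience=5)

def train_step(images, labels):
    logits = model(images.cuda())
    evidence = torch.relu(logits)  # non-negative evidence
    alpha = evidence + 1           # Dirichlet parameters
    y = torch.nn.functional.one_hot(labels, num_classes=64).float().cuda()
    loss = edl_digamma_loss(y, alpha, kl_weight=0.0)  # KL removed
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()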
from pytorch-classification-uncertainty.