The maximum classification performance achieved by ResNet-50 when trained and validated on the CIFAR-10 dataset. This serves as the upper bound for all federated learning methods compared below.
MOON-Prox is an intuitive extension of MOON: we add a proximal term that acts as a regularizer, keeping the local model close to the global model. The loss for each local device can be written as:
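A hedged reconstruction of that objective, assuming MOON's notation ($\ell_{\text{sup}}$ the supervised cross-entropy term, $\ell_{\text{con}}$ the model-contrastive term weighted by $\mu$) plus a FedProx-style proximal coefficient $\lambda$; the symbol choices are assumptions, not fixed by the text above:

```latex
\ell \;=\; \ell_{\text{sup}}(w;\, x, y)
\;+\; \mu \,\ell_{\text{con}}(w;\, w_{\text{glob}},\, w_{\text{prev}})
\;+\; \frac{\lambda}{2}\,\lVert w - w_{\text{glob}} \rVert^2
```

Here $w$ is the local model, $w_{\text{glob}}$ the current global model, and $w_{\text{prev}}$ the local model from the previous round; the final term is the added proximal regularizer.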
Thanks for your work! I've been struggling to find a fine-tuned baseline FL model for my research.
However, I'm a little confused about the beta (degree of non-IID) parameter. Would you mind telling me which paper you follow to get your non-IID distribution? It seems different from https://arxiv.org/abs/1909.06335 (which also uses a Dirichlet distribution).
Hoping to hear from you soon. I would really appreciate it if you could provide some hints.
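For context, one common way to produce such a split is a per-class Dirichlet partition, where each class's samples are divided across clients according to Dirichlet(beta) proportions. This is only a sketch of that general scheme, not necessarily the exact procedure this repo uses:

```python
import numpy as np

def dirichlet_partition(labels, num_clients, beta, seed=0):
    """Split sample indices across clients using a per-class
    Dirichlet(beta) distribution; smaller beta -> more non-IID."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Sample the fraction of class c that each client receives.
        props = rng.dirichlet([beta] * num_clients)
        # Convert cumulative fractions into split points over the indices.
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices
```

With beta around 0.5 the per-client class mixtures are noticeably skewed, while large beta (e.g. 100) approaches an IID split, which is why beta is reported as the degree of non-IID.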