EfficientNet was introduced by Google AI in this paper (https://arxiv.org/abs/1905.11946), which proposes a method that, as its name suggests, is more efficient while also improving state-of-the-art results. Conventionally, models are scaled along a single dimension: made wider, deeper, or given a higher input resolution. Increasing any one of these helps at first, but the gains quickly saturate, and the resulting model simply has more parameters without being more efficient. EfficientNet instead scales all three dimensions together in a principled way, increasing each gradually through a single compound coefficient. I have made a Jupyter notebook that compares all of these models on the ImageNet dataset.
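The compound scaling rule from the paper can be sketched as follows. The base coefficients α = 1.2 (depth), β = 1.1 (width), and γ = 1.15 (resolution) are the values reported in the EfficientNet paper for scaling up from B0; each increment of the compound coefficient φ multiplies depth, width, and resolution by these factors, roughly doubling FLOPs (since α·β²·γ² ≈ 2).

```python
# Sketch of EfficientNet's compound scaling rule.
# alpha, beta, gamma are the paper's grid-searched base coefficients
# for depth, width, and input resolution respectively.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15


def compound_scale(phi: int) -> tuple[float, float, float]:
    """Return (depth, width, resolution) multipliers for compound coefficient phi.

    phi = 0 corresponds to the baseline EfficientNet-B0; larger phi
    values scale all three dimensions up together.
    """
    return (ALPHA ** phi, BETA ** phi, GAMMA ** phi)


# FLOPs grow by roughly (alpha * beta**2 * gamma**2) ** phi, which the
# paper constrains to be close to 2 per unit of phi.
flops_factor_per_phi = ALPHA * BETA ** 2 * GAMMA ** 2
```

Note that the real B1–B7 variants also round the scaled depth and width to valid layer/channel counts; this sketch shows only the raw multipliers.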
This project is forked from abdul-rehman-astro/efficientnet-b0-to-b7.
![](https://miro.medium.com/max/1324/0*09AED_CjE-PUFxKC.png)