[1] O. Vinyals, A. Toshev, S. Bengio and D. Erhan, "Show and tell: A neural image caption generator," CVPR 2015. [pdf] [code]
[2] H. Fang et al., "From captions to visual concepts and back," CVPR 2015. [pdf] [code]
[3] X. Jia, E. Gavves, B. Fernando and T. Tuytelaars, "Guiding the Long-Short Term Memory Model for Image Caption Generation," ICCV 2015. [pdf] [code]
[4] L. Zhou et al., "Watch what you just said: Image captioning with text-conditional attention," Thematic Workshops of ACM Multimedia 2017. [pdf] [code]
[5] Q. Wu, C. Shen, L. Liu, A. Dick and A. van den Hengel, "What Value Do Explicit High Level Concepts Have in Vision to Language Problems?" CVPR 2016. [pdf] [code]
[6] T. Yao et al., "Boosting image captioning with attributes," ICCV 2017. [pdf] [code]
[7] J. Lu et al., "Knowing when to look: Adaptive attention via a visual sentinel for image captioning," CVPR 2017. [pdf] [code]
[8] M. Tanti, A. Gatt and K. P. Camilleri, "Transfer learning from language models to image caption generators: Better models may not transfer better," arXiv preprint arXiv:1901.01216, 2019. [pdf] [code]