floodsung / Deep-Learning-Papers-Reading-Roadmap
Deep Learning papers reading roadmap for anyone who is eager to learn this amazing tech!
We could add documentation on how to use the code to the README.md.
Does anyone have suggestions for state-of-the-art papers on applying deep learning to finance? Algorithmic trading, trade execution, risk modelling, deep reinforcement learning, etc.
Hi, in Deep-Learning-Papers-Reading-Roadmap, inappropriate dependency version constraints can introduce risks.
Below are the dependencies and version constraints the project is currently using:
mistune>=0.7.2
beautifulsoup4>=4.4.1
six>=1.10.0
The version constraint == introduces a risk of dependency conflicts, because it pins the dependency too strictly.
A constraint with no upper bound (or *) introduces a risk of missing-API errors, because the latest version of a dependency may remove some APIs.
After further analysis of this project:
The version constraint of the dependency mistune can be changed to >=0.1.0,<=0.8.4.
This modification reduces dependency conflicts as much as possible while still allowing the most recent versions that do not cause errors in the project.
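Applied to the list above, the requirements file would look like this (a sketch; the bounds for mistune are the ones proposed above, and the other two lines are unchanged):

```
mistune>=0.1.0,<=0.8.4
beautifulsoup4>=4.4.1
six>=1.10.0
```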
The current project invokes the following methods:
mistune.markdown, point.text.split, clean_text, shorten_title, link.endswith, BeautifulSoup.BeautifulSoup, m2.group, time.sleep, point.find, requests.get, f.close, print, replacements.items, clean_pdf_link, get_extension, m1.group, open, f.write, readme_soup.find_all, parser.add_argument, link.replace, print_title, len, os.path.join, join, download_pdf, os.path.exists, reload, os.path.splitext, failures.append, readme.read, os.makedirs, re.search, parser.parse_args, shutil.rmtree, text.replace, argparse.ArgumentParser, sys.setdefaultencoding
@ahmaurya
Could you please help me check this issue?
May I submit a pull request to fix it?
Thank you very much.
Original line 87:
with open('README.md') as readme:
Corrected version of line 87:
with open('README.md','r',encoding='utf-8') as readme:
Explanation:
On Windows, the default encoding is the system code page (e.g. GBK on Chinese-locale systems) rather than UTF-8.
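For illustration, here is a small helper (my own sketch, not code from the repo) that always reads a file as UTF-8 regardless of the platform default:

```python
def read_utf8(path):
    """Read a file as UTF-8 regardless of the platform default.

    Without an explicit encoding, open() uses locale.getpreferredencoding(),
    which on Chinese-locale Windows is typically GBK (cp936), not UTF-8.
    """
    with open(path, "r", encoding="utf-8") as f:
        return f.read()
```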
Hi
I added your repo link to my spreadsheet.
You can check it out; any suggestions are welcome:
https://docs.google.com/spreadsheets/d/13QQinPFhU9DwujctXS0A7un0up4N5BNyUDZwxK4I6hg/edit#gid=1588925720
Thanks
Thank you
While opening the README.md, download.py gave me this:
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 33476: character maps to <undefined>
In case someone has the same problem, you may fix it by changing line 73 to:
with open('README.md', encoding="utf8") as readme:
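A slightly more defensive variant (my own sketch, not part of download.py) replaces undecodable bytes instead of crashing, so a stray byte like 0x9d cannot raise UnicodeDecodeError:

```python
def read_text_lenient(path, encoding="utf-8"):
    """Read a text file, substituting U+FFFD for bytes that don't decode."""
    with open(path, "r", encoding=encoding, errors="replace") as f:
        return f.read()
```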
Can you add more categories in Computer Vision?
Hi, I am a freshman in the deep learning field. I love reading deep learning papers, and your work has helped me so much as a guide. But it seems a long time has passed since the last update.
Is anyone maintaining this project? If not, how can we help you?
[3] "Reducing the dimensionality of data with neur [...] (http://www.cs.toronto.edu/~hinton/science.pdf)
Traceback (most recent call last):
File "download.py", line 102, in <module>
print_title(point.text)
File "download.py", line 50, in print_title
print('\n'.join(("", title, pattern * len(title))))
File "C:\Python27\lib\encodings\cp437.py", line 12, in encode
return codecs.charmap_encode(input,errors,encoding_map)
UnicodeEncodeError: 'charmap' codec can't encode character u'\uff08' in position 23: character maps to <undefined>
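One workaround (a sketch of my own, not code from the repo) is to encode the title for the console's codec with errors='replace' before printing, so a character like the fullwidth parenthesis U+FF08 degrades to '?' on a cp437 console instead of crashing:

```python
import sys

def safe_print(text):
    """Print text, replacing characters the console encoding can't show."""
    enc = sys.stdout.encoding or "utf-8"
    print(text.encode(enc, errors="replace").decode(enc))
```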
I think deep learning techniques for knowledge graphs, especially for relation extraction, are missing. Could the author suggest some papers in these directions? :)
I think the WaveNet paper by DeepMind is worth including. Maybe under "Applications/Audio"?
Sorry, but I must say that Word2Vec shouldn't be here.
Word2Vec is a shallow (1-hidden layer) linear neural-network.
In fact, the very premise of Word2Vec is trading off model complexity (many layers, nonlinear activation) with a much simpler model that is faster to train with much larger datasets.
Omer Levy is one of the main researchers in word embeddings. See the first answer here: Quora: How does Word2Vec work?
Hi, I followed this repository a year ago. It is definitely good work for giving those who want to get into this field an introduction.
However, when I went back to check whether there were updates for recently published articles, I noticed that the latest commit and merged pull request were dated a year ago.
Is this repo still alive? If no, is there other repository recommended for up-to-date paper list?
The new address is http://www.deeplearningbook.org/
I found an answer on Zhihu that is almost the same as this repo's README: "深度学习入门必看的书和论文？有哪些必备的技能需学习？" ("Must-read books and papers for getting started with deep learning? What essential skills need to be learned?"), answered by 景略集智 on Zhihu,
which made me very angry 😡
There is a missing closing quote at the end of 'utf8:
with open('README.md',encoding='utf8) as readme:
It should be:
with open('README.md', encoding='utf8') as readme:
I appreciate 😁 the excellent 🎖 reading list 📚 but it could definitely use ♾ more emojis 🆒. This is not a joke 🤪 or spam 🥷. I am very serious 🧐 🤬 🏴☠️
I hope you will review this paper: Pyramid Scene Parsing Network.
I got this error:
Traceback (most recent call last):
File "c:\Users\jshat\Documents\Code\Machine Learning\Deep-Learning-Papers-Reading-Roadmap\download.py", line 88, in <module>
readme_html = mistune.markdown(readme.read())
File "C:\Python37\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 33411: character maps to <undefined>
This is due to the use of curly quotes “” instead of straight quotes "" in line 325 of README.md.
line 325 (modified):
[1] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation." in CVPR, 2015. [pdf] ⭐⭐⭐⭐⭐
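One way to make download.py tolerant of such typographic characters (an assumption on my part; the repo does not currently do this) is to normalize curly quotes to straight ones before parsing:

```python
def normalize_quotes(text):
    """Replace common typographic quotes with their ASCII equivalents."""
    replacements = {
        "\u201c": '"',  # left double quotation mark
        "\u201d": '"',  # right double quotation mark
        "\u2018": "'",  # left single quotation mark
        "\u2019": "'",  # right single quotation mark
    }
    for curly, straight in replacements.items():
        text = text.replace(curly, straight)
    return text
```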
Hey Flood,
Great work on this page!
Quick question - how do you assign stars to each paper? Is it subjective, or do you rate them by some formula (e.g. citations/age)?
Cheers!
I think these repositories are worth looking at
Thanks for your theoretical approach.
Currently, I am making: Machine Learning for Software Engineers (https://github.com/ZuzooVn/machine-learning-for-software-engineers)
Do you have any suggestions for a top-down learning path into deep learning for software engineers?
Thank you
[0] Bengio, Yoshua, Ian J. Goodfellow, and Aaron Courville. "Deep learning." An MIT Press book. (2015). [pdf] (Deep Learning Bible, you can read this book while reading following papers.) ⭐⭐⭐⭐⭐
[2] Hinton, Geoffrey E., Simon Osindero, and Yee-Whye Teh. "A fast learning algorithm for deep belief nets." Neural computation 18.7 (2006): 1527-1554. [pdf](Deep Learning Eve) ⭐⭐⭐
[3] Hinton, Geoffrey E., and Ruslan R. Salakhutdinov. "Reducing the dimensionality of data with neural networks." Science 313.5786 (2006): 504-507. [pdf] (Milestone, Show the promise of deep learning) ⭐⭐⭐
[4] Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. "Imagenet classification with deep convolutional neural networks." Advances in neural information processing systems. 2012. [pdf] (AlexNet, Deep Learning Breakthrough) ⭐⭐⭐⭐⭐
[5] Simonyan, Karen, and Andrew Zisserman. "Very deep convolutional networks for large-scale image recognition." arXiv preprint arXiv:1409.1556 (2014). [pdf] (VGGNet, Neural Networks become very deep!) ⭐⭐⭐
[6] Szegedy, Christian, et al. "Going deeper with convolutions." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015. [pdf] (GoogLeNet) ⭐⭐⭐
[8] Hinton, Geoffrey, et al. "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups." IEEE Signal Processing Magazine 29.6 (2012): 82-97. [pdf] (Breakthrough in speech recognition) ⭐⭐⭐⭐
[9] Graves, Alex, Abdel-rahman Mohamed, and Geoffrey Hinton. "Speech recognition with deep recurrent neural networks." 2013 IEEE international conference on acoustics, speech and signal processing. IEEE, 2013. [pdf] (RNN) ⭐⭐⭐
[11] Sak, Haşim, et al. "Fast and accurate recurrent neural network acoustic models for speech recognition." arXiv preprint arXiv:1507.06947 (2015). [pdf] (Google Speech Recognition System) ⭐⭐⭐
[13] W. Xiong, J. Droppo, X. Huang, F. Seide, M. Seltzer, A. Stolcke, D. Yu, G. Zweig "Achieving Human Parity in Conversational Speech Recognition." arXiv preprint arXiv:1610.05256 (2016). [pdf] (State-of-the-art in speech recognition, Microsoft) ⭐⭐⭐⭐
[14] Hinton, Geoffrey E., et al. "Improving neural networks by preventing co-adaptation of feature detectors." arXiv preprint arXiv:1207.0580 (2012). [pdf] (Dropout) ⭐⭐⭐
[15] Srivastava, Nitish, et al. "Dropout: a simple way to prevent neural networks from overfitting." Journal of Machine Learning Research 15.1 (2014): 1929-1958. [pdf] ⭐⭐⭐
[16] Ioffe, Sergey, and Christian Szegedy. "Batch normalization: Accelerating deep network training by reducing internal covariate shift." arXiv preprint arXiv:1502.03167 (2015). [pdf] (An outstanding Work in 2015) ⭐⭐⭐⭐
[18] Courbariaux, Matthieu, et al. "Binarized Neural Networks: Training Neural Networks with Weights and Activations Constrained to +1 or −1." [pdf] (New Model, Fast) ⭐⭐⭐
[20] Chen, Tianqi, Ian Goodfellow, and Jonathon Shlens. "Net2net: Accelerating learning via knowledge transfer." arXiv preprint arXiv:1511.05641 (2015). [pdf] (Modify previously trained network to reduce training epochs) ⭐⭐⭐
[25] Han, Song, Huizi Mao, and William J. Dally. "Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding." CoRR, abs/1510.00149 2 (2015). [pdf] (ICLR best paper, new direction to make NN run fast, DeePhi Tech Startup) ⭐⭐⭐⭐⭐
[26] Iandola, Forrest N., et al. "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size." arXiv preprint arXiv:1602.07360 (2016). [pdf] (Also a new direction to optimize NN, DeePhi Tech Startup) ⭐⭐⭐⭐
[27] Le, Quoc V. "Building high-level features using large scale unsupervised learning." 2013 IEEE international conference on acoustics, speech and signal processing. IEEE, 2013. [pdf] (Milestone, Andrew Ng, Google Brain Project, Cat) ⭐⭐⭐⭐
[30] Radford, Alec, Luke Metz, and Soumith Chintala. "Unsupervised representation learning with deep convolutional generative adversarial networks." arXiv preprint arXiv:1511.06434 (2015). [pdf] (DCGAN) ⭐⭐⭐⭐
[35] Cho, Kyunghyun, et al. "Learning phrase representations using RNN encoder-decoder for statistical machine translation." arXiv preprint arXiv:1406.1078 (2014). [pdf] (First Seq-to-Seq Paper) ⭐⭐⭐⭐
[36] Sutskever, Ilya, Oriol Vinyals, and Quoc V. Le. "Sequence to sequence learning with neural networks." Advances in neural information processing systems. 2014. [pdf] (Outstanding Work) ⭐⭐⭐⭐⭐
[37] Bahdanau, Dzmitry, KyungHyun Cho, and Yoshua Bengio. "Neural Machine Translation by Jointly Learning to Align and Translate." arXiv preprint arXiv:1409.0473 (2014). [pdf] ⭐⭐⭐⭐
[47] Wang, Ziyu, Nando de Freitas, and Marc Lanctot. "Dueling network architectures for deep reinforcement learning." arXiv preprint arXiv:1511.06581 (2015). [pdf] (ICLR best paper, great idea) ⭐⭐⭐⭐
[53] Bengio, Yoshua. "Deep Learning of Representations for Unsupervised and Transfer Learning." ICML Unsupervised and Transfer Learning 27 (2012): 17-36. [pdf] (A Tutorial) ⭐⭐⭐
[54] Silver, Daniel L., Qiang Yang, and Lianghao Li. "Lifelong Machine Learning Systems: Beyond Learning Algorithms." AAAI Spring Symposium: Lifelong Machine Learning. 2013. [pdf] (A brief discussion about lifelong learning) ⭐⭐⭐
[57] Parisotto, Emilio, Jimmy Lei Ba, and Ruslan Salakhutdinov. "Actor-mimic: Deep multitask and transfer reinforcement learning." arXiv preprint arXiv:1511.06342 (2015). [pdf] (RL domain) ⭐⭐⭐
[59] Lake, Brenden M., Ruslan Salakhutdinov, and Joshua B. Tenenbaum. "Human-level concept learning through probabilistic program induction." Science 350.6266 (2015): 1332-1338. [pdf] (No Deep Learning, but worth reading) ⭐⭐⭐⭐⭐
[1] Szegedy, Christian, Alexander Toshev, and Dumitru Erhan. "Deep neural networks for object detection." Advances in Neural Information Processing Systems. 2013. [pdf] ⭐⭐⭐
[2] Girshick, Ross, et al. "Rich feature hierarchies for accurate object detection and semantic segmentation." Proceedings of the IEEE conference on computer vision and pattern recognition. 2014. [pdf] (RCNN) ⭐⭐⭐⭐⭐
[3] He, Kaiming, et al. "Spatial pyramid pooling in deep convolutional networks for visual recognition." European Conference on Computer Vision. Springer International Publishing, 2014. [pdf] (SPPNet) ⭐⭐⭐⭐
[5] Ren, Shaoqing, et al. "Faster R-CNN: Towards real-time object detection with region proposal networks." Advances in neural information processing systems. 2015. [pdf] ⭐⭐⭐⭐
[1] Wang, Naiyan, and Dit-Yan Yeung. "Learning a deep compact image representation for visual tracking." Advances in neural information processing systems. 2013. [pdf] (First paper to do visual tracking using Deep Learning, DLT Tracker) ⭐⭐⭐
[3] Wang, Lijun, et al. "Visual tracking with fully convolutional networks." Proceedings of the IEEE International Conference on Computer Vision. 2015. [pdf] (FCNT) ⭐⭐⭐⭐
[4] Held, David, Sebastian Thrun, and Silvio Savarese. "Learning to Track at 100 FPS with Deep Regression Networks." arXiv preprint arXiv:1604.01802 (2016). [pdf] (GOTURN, really fast as a deep learning method, but still far behind non-deep-learning methods) ⭐⭐⭐⭐
[6] Martin Danelljan, Andreas Robinson, Fahad Khan, Michael Felsberg. "Beyond Correlation Filters: Learning Continuous Convolution Operators for Visual Tracking." ECCV (2016) [pdf] (C-COT) ⭐⭐⭐⭐
[7] Nam, Hyeonseob, Mooyeol Baek, and Bohyung Han. "Modeling and Propagating CNNs in a Tree Structure for Visual Tracking." arXiv preprint arXiv:1608.07242 (2016). [pdf] (VOT2016 Winner, TCNN) ⭐⭐⭐⭐
[1] Farhadi, Ali, et al. "Every picture tells a story: Generating sentences from images". In Computer Vision - ECCV 2010. Springer Berlin Heidelberg: 15-29, 2010. [pdf] ⭐⭐⭐
[4] Donahue, Jeff, et al. "Long-term recurrent convolutional networks for visual recognition and description". In arXiv preprint arXiv:1411.4389 ,2014. [pdf]
[5] Karpathy, Andrej, and Li Fei-Fei. "Deep visual-semantic alignments for generating image descriptions". In arXiv preprint arXiv:1412.2306, 2014. [pdf] ⭐⭐⭐⭐⭐
[6] Karpathy, Andrej, Armand Joulin, and Fei Fei F. Li. "Deep fragment embeddings for bidirectional image sentence mapping". In Advances in neural information processing systems, 2014. [pdf] ⭐⭐⭐⭐
[8] Chen, Xinlei, and C. Lawrence Zitnick. "Learning a recurrent visual representation for image caption generation". In arXiv preprint arXiv:1411.5654, 2014. [pdf] ⭐⭐⭐⭐
[10] Xu, Kelvin, et al. "Show, attend and tell: Neural image caption generation with visual attention". In arXiv preprint arXiv:1502.03044, 2015. [pdf] ⭐⭐⭐⭐⭐
[3] Luong, Minh-Thang, Hieu Pham, and Christopher D. Manning. "Effective approaches to attention-based neural machine translation." arXiv preprint arXiv:1508.04025 (2015). [pdf] ⭐⭐⭐⭐
[4] Chung, et al. "A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation". In arXiv preprint arXiv:1603.06147, 2016. [pdf] ⭐⭐
[6] Wu, Schuster, Chen, Le, et al. "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation". In arXiv preprint arXiv:1609.08144v2, 2016. [pdf] (Milestone) ⭐⭐⭐⭐
[1] Koutník, Jan, et al. "Evolving large-scale neural networks for vision-based reinforcement learning." Proceedings of the 15th annual conference on Genetic and evolutionary computation. ACM, 2013. [pdf] ⭐⭐⭐
[3] Pinto, Lerrel, and Abhinav Gupta. "Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours." arXiv preprint arXiv:1509.06825 (2015). [pdf] ⭐⭐⭐
[4] Levine, Sergey, et al. "Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection." arXiv preprint arXiv:1603.02199 (2016). [pdf] ⭐⭐⭐⭐
[5] Zhu, Yuke, et al. "Target-driven Visual Navigation in Indoor Scenes using Deep Reinforcement Learning." arXiv preprint arXiv:1609.05143 (2016). [pdf] ⭐⭐⭐⭐
[6] Yahya, Ali, et al. "Collective Robot Reinforcement Learning with Distributed Asynchronous Guided Policy Search." arXiv preprint arXiv:1610.00673 (2016). [pdf] ⭐⭐⭐⭐
[8] A Rusu, M Vecerik, Thomas Rothörl, N Heess, R Pascanu, R Hadsell."Sim-to-Real Robot Learning from Pixels with Progressive Nets." arXiv preprint arXiv:1610.04286 (2016). [pdf] ⭐⭐⭐⭐
[3] Zhu, Jun-Yan, et al. "Generative Visual Manipulation on the Natural Image Manifold." European Conference on Computer Vision. Springer International Publishing, 2016. [pdf] (iGAN) ⭐⭐⭐⭐
[6] Johnson, Justin, Alexandre Alahi, and Li Fei-Fei. "Perceptual losses for real-time style transfer and super-resolution." arXiv preprint arXiv:1603.08155 (2016). [pdf] ⭐⭐⭐⭐
[7] Vincent Dumoulin, Jonathon Shlens and Manjunath Kudlur. "A learned representation for artistic style." arXiv preprint arXiv:1610.07629 (2016). [pdf] ⭐⭐⭐⭐
Hi, I think the roadmap should add more papers; there are few papers published in 2017.
It would also be more helpful if the citations could be imported into EndNote directly.
Thank you.
Adam: A Method for Stochastic Optimization
https://arxiv.org/abs/1412.6980
I think Speech recognition should also be in the list
This is really great!
I know this isn't really an issue, but since this is a repo about reading papers, I thought someone might be able to help me.
Is there an easy way to grab a list of papers like this and import them into Mendeley? Ideally preserving the order, so that I can read them in the same sequence.
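I don't know of a direct importer, but if each README entry puts the title in straight quotes and links the PDF as `[[pdf]](url)` (an assumption; check against the actual file), a small script can pull titles and links in reading order for pasting into a reference manager:

```python
import re

# Assumed entry format (one paper per line), e.g.:
#   [36] Sutskever, Ilya, et al. "Sequence to sequence ..." [[pdf]](http://...)
ENTRY = re.compile(r'"(?P<title>[^"]+)".*?\[\[pdf\]\]\((?P<url>[^)]+)\)')

def extract_papers(markdown_text):
    """Return (title, url) pairs in the order they appear in the text."""
    return [(m.group("title"), m.group("url"))
            for m in ENTRY.finditer(markdown_text)]
```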
There have been amazing papers in this area starting from last year and it would be awesome if it could be added. I'm thinking specifically of papers beyond NTMs, such as NPI, Neural RAM, Neural GPU, Neural Programmer, etc.
I'm a noob on meta-learning, and would like to read some papers on the topic. @floodsung , would be awesome if you could suggest some! Thanks.
The link for the paper "Sequence to sequence learning with neural networks" [36] appears to be broken. Can you fix it? Thanks.
Maybe useful to the subject?
Graph Isomorphism in Quasipolynomial Time - László Babai (arXiv)
I was trying to access the paper [52] Silver, David, et al. "Mastering the game of Go with deep neural networks and tree search." but the link is broken. 😞
@songrotek Thank you.
'Densely Connected Convolutional Networks' - G Huang, GB Huang, S Song, K You
obtained state-of-the-art results in image classification on CIFAR-10, CIFAR-100, SVHN, ImageNet
This paper also won CVPR 17 best paper award along with Apple's paper
'U-Net: Convolutional Networks for Biomedical Image Segmentation' - O Ronneberger, P Fischer, T Brox
This paper is quite successful in medical image segmentation and enjoyed success in Kaggle too
'The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation' -
obtained state-of-the-art results in segmentation on CamVid and Gatech datasets. This one is by Bengio group
'Wasserstein GAN' - M Arjovsky, S Chintala, L Bottou
This paper solved a lot of issues with GAN stability. The change in loss function and training paradigm makes the loss interpretable. They obtained good results even with a plain MLP on the LSUN dataset.
I really appreciate your original ideas on Zhihu. When all the data can be chosen, 5-shot can be better than 1-shot. Isn't 1-shot easier than 5-shot for finding the optimum weights? So I would think 1-shot should be better than 5-shot, but it is not. Can you give me some advice?
Where is the YOLO paper?
I think this paper is worth reading:
Learning to Communicate with Deep Multi-Agent Reinforcement Learning:
https://arxiv.org/pdf/1605.06676v2.pdf
It is very wonderful!!!!