
STAT 991: Topics in deep learning (UPenn)

STAT 991: Topics in Deep Learning is a seminar class at UPenn started in 2018. It surveys advanced topics in deep learning based on student presentations.

Fall 2019

  • Syllabus.

  • Lecture notes. (~170 pages, file size ~30 MB, mostly covering notes from previous semesters.)

Lectures

Lectures 1 and 2: Introduction and uncertainty quantification (jackknife+, and Pearce et al., 2018), presented by Edgar Dobriban.

Lecture 3: NTK by Jiayao Zhang. Blog post on the off-convex blog.

Lecture 4: Adversarial robustness by Yinjun Wu.

Lecture 5: ELMo and BERT by Dan Deutsch.

Lecture 6: TCAV by Ben Auerbach (adapted from Been Kim's slides).

Lecture 7: Spherical CNN by Arjun Guru and Claudia Zhu.

Lecture 8: DNNs and approximation by Yebiao Jin.

Lecture 9: Deep Learning and PDE by Chenyang Fang.

Bias and Fairness by Chetan Parthiban.

Lecture 10: Generalization by Bradford Lynch.

Double Descent by Junhui Cai, adapted from slides by Misha Belkin and Ryan Tibshirani.

Lecture 11: Deep Learning in Practice by Dewang Sultania, adapting some slides from CIS 700. Colab notebook.

Lecture 12: Hindsight Experience Replay by Achin Jain.

Lecture 13: Deep Learning and Chemistry by Chris Koch.

Text summarization by Jamaal Hay.

Lecture 14: Deep Learning and Langevin Dynamics by Kan Chen, with accompanying lecture notes.

Deep Learning in Asset Pricing by Wu Zhu.

Topics

  • Potential topics: Uncertainty quantification, Adversarial Examples, Symmetry, Theory and Empirics, Interpretation, Fairness, ...

  • Potential papers:

Uncertainty quantification

Predictive inference with the jackknife+. Slides.

High-Quality Prediction Intervals for Deep Learning: A Distribution-Free, Ensembled Approach
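To give a feel for the jackknife+ method from the first paper above, the sketch below builds a prediction interval from leave-one-out residuals. The linear model, the synthetic data, and the plain empirical quantile rule (the paper uses a slightly more careful (1-alpha)(n+1)-type quantile) are illustrative simplifications, not the paper's exact procedure.

```python
# Hedged sketch of a jackknife+-style prediction interval on a toy
# 1-D linear regression; model, data, and quantile rule are illustrative.
import numpy as np

def jackknife_plus_interval(X, y, x_test, alpha=0.1):
    """Return a (lo, hi) prediction interval for x_test at level ~1 - alpha."""
    n = len(y)
    lo_scores, hi_scores = [], []
    for i in range(n):
        # Fit least squares with point i held out.
        mask = np.arange(n) != i
        slope, intercept = np.polyfit(X[mask], y[mask], 1)
        resid = abs(y[i] - (slope * X[i] + intercept))  # leave-one-out residual
        pred = slope * x_test + intercept               # leave-one-out prediction
        lo_scores.append(pred - resid)
        hi_scores.append(pred + resid)
    # Jackknife+ takes quantiles of the residual-shifted LOO predictions.
    return np.quantile(lo_scores, alpha), np.quantile(hi_scores, 1 - alpha)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 50)
y = 2 * X + rng.normal(0, 0.1, 50)          # true regression function: 2x
lo, hi = jackknife_plus_interval(X, y, x_test=0.5)
print(lo, hi)                                # interval around the true value 1.0
```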

Adversarial Examples

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples

Certified Adversarial Robustness via Randomized Smoothing

On Evaluating Adversarial Robustness

VC Classes are Adversarially Robustly Learnable, but Only Improperly

Adversarial Examples Are Not Bugs, They Are Features

See section 6.1 of my lecture notes for a collection of materials.
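For intuition about the attacks these papers study, here is a minimal sketch of the fast gradient sign method (FGSM) on a logistic-regression model. The closed-form input gradient stands in for the autodiff a real attack on a deep network would use, and the weights and data points are made up for illustration.

```python
# Minimal FGSM sketch (Goodfellow et al. style) in plain NumPy on a
# logistic-regression "model"; all numbers here are illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One FGSM step: move x by eps in the sign of the loss gradient."""
    p = sigmoid(w @ x + b)
    # For binary cross-entropy, the gradient of the loss wrt x is (p - y) * w.
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0]); b = 0.0
x = np.array([2.0, 0.5]); y = 1.0       # correctly classified: w @ x + b = 1.0 > 0
x_adv = fgsm_perturb(x, y, w, b, eps=0.6)
# A small L-infinity perturbation flips the prediction.
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))
```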

Symmetry

Spherical CNNs

Learning SO(3) Equivariant Representations with Spherical CNNs

Invariance reduces Variance: Understanding Data Augmentation in Deep Learning and Beyond

Theory and empirical wonders

Understanding deep learning requires rethinking generalization

Spectrally-normalized margin bounds for neural networks

Neural Tangent Kernel: Convergence and Generalization in Neural Networks. GNTK.

Gradient Descent Provably Optimizes Over-parameterized Neural Networks

Mean-field theory of two-layers neural networks. YouTube talk.
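The neural tangent kernel in the Jacot et al. paper above can be computed empirically as the Gram matrix of parameter gradients, K(x, x') = ⟨∇θ f(x), ∇θ f(x')⟩. The sketch below does this for a toy two-layer ReLU network with a fixed second layer (a common simplification in the NTK literature); the width, scaling, and inputs are illustrative choices.

```python
# Hedged sketch: empirical NTK of a two-layer ReLU net with fixed
# second layer, f(x) = a . relu(W x) / sqrt(m); sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d, m = 3, 512                       # input dimension, hidden width
W = rng.normal(size=(m, d))         # trainable first-layer weights
a = rng.choice([-1.0, 1.0], m)      # fixed signs on the second layer

def grad_f(x):
    """Gradient of f(x) wrt W, flattened into one parameter vector."""
    act = (W @ x > 0).astype(float)  # ReLU derivative, one indicator per unit
    # d f / d W[r, :] = a[r] * 1{w_r . x > 0} * x / sqrt(m)
    return ((a * act)[:, None] * x[None, :] / np.sqrt(m)).ravel()

def empirical_ntk(xs):
    G = np.stack([grad_f(x) for x in xs])  # rows: parameter gradients per input
    return G @ G.T                          # Gram matrix = empirical NTK

xs = [rng.normal(size=d) for _ in range(4)]
K = empirical_ntk(xs)
print(K.shape)                              # a 4 x 4 symmetric PSD kernel matrix
```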

Interpretation

Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav)

Sanity checks for saliency maps

Scalability and Federated Learning

Communication-Efficient Learning of Deep Networks from Decentralized Data

Federated Learning: Challenges, Methods, and Future Directions
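The FedAvg algorithm from the first paper above can be sketched in a few lines: each client runs local gradient steps on its own data, and the server takes a data-size-weighted average of the returned weights. The quadratic least-squares losses and synthetic client data below are toy assumptions for illustration.

```python
# Hedged FedAvg sketch: local (full-batch) gradient descent per client,
# then size-weighted averaging on the server; losses and data are toy.
import numpy as np

def client_update(w, X, y, lr=0.1, steps=20):
    """Local gradient descent on the least-squares loss ||Xw - y||^2 / n."""
    w = w.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(w_global, clients, **kw):
    """One communication round: local training + data-size-weighted average."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    local_ws = [client_update(w_global, X, y, **kw) for X, y in clients]
    weights = sizes / sizes.sum()
    return sum(p * w_local for p, w_local in zip(weights, local_ws))

rng = np.random.default_rng(1)
w_true = np.array([1.0, -1.0])
clients = []
for _ in range(3):                      # three clients with private datasets
    X = rng.normal(size=(40, 2))
    clients.append((X, X @ w_true))     # noiseless labels from a shared w_true
w = np.zeros(2)
for _ in range(10):                     # ten communication rounds
    w = fedavg_round(w, clients)
print(w)                                # should approach w_true
```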

Fairness

[TBA]

Applications

Climate, energy, healthcare...

Other resources

Course on Coursera. A good way to learn the basics.

Stanford classes: CS231N (Computer vision). CS224N (NLP). Cheat sheet.

Conferences: NeurIPS, ICML, ICLR

Convenient ways to run code online: https://colab.research.google.com/notebooks/welcome.ipynb, https://www.kaggle.com/kernels

Keras is a user-friendly library for DL. It interfaces to R; see this book.

Foundations of Deep Learning program at the Simons Institute for the Theory of Computing. Workshops: 1, 2, 3. Reading groups and papers.

IAS Special Year on Optimization, Statistics, and Theoretical Machine Learning

Materials from previous editions

Lecture notes

The materials draw inspiration from many sources, including David Donoho's course Stat 385 at Stanford, Andrew Ng's Deep Learning course on deeplearning.ai, CS231n at Stanford, David Silver's RL course, Tony Cai's reading group at Wharton. They may contain factual and typographical errors. Thanks to several people who have provided parts of the notes, including Zongyu Dai, Georgios Kissas, Jane Lee, Barry Plunkett, Matteo Sordello, Yibo Yang, Bo Zhang, Yi Zhang, Carolina Zheng. The images included are subject to copyright by their rightful owners, and are included here for educational purposes.

Compared to other sources, these lecture notes are aimed at people with a basic knowledge of probability, statistics, and machine learning. They start with basic concepts from deep learning, and aim to cover selected important topics up to the cutting edge of research.

The entire latex source is included, encouraging reuse (subject to appropriate licenses).

Spring 2019

Topics: sequential decision-making (from bandits to deep reinforcement learning), distributed learning, AutoML, Visual Question Answering.

Presentations

Fall 2018

Topics: basics (deep feedforward networks, training, CNNs, RNNs). Generative Adversarial Networks, Learning Theory, Sequence Learning, Neuroscience, etc.

Presentations
