
deeplearning.ai-summary's Introduction

DeepLearning.ai Courses Notes

This repository contains my personal notes and summaries on the DeepLearning.ai specialization courses. I've enjoyed every little bit of the course, and I hope you enjoy my notes too.

DeepLearning.ai contains five courses, which can be taken on Coursera. The five course titles are:

  1. Neural Networks and Deep Learning.
  2. Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization.
  3. Structuring Machine Learning Projects.
  4. Convolutional Neural Networks.
  5. Sequence Models.

This is by far the best course series on deep learning that I've taken. Enjoy!

About This Specialization (From the official Deep Learning Specialization page)

If you want to break into AI, this Specialization will help you do so. Deep Learning is one of the most highly sought-after skills in tech. We will help you become good at Deep Learning.

In five courses, you will learn the foundations of Deep Learning, understand how to build neural networks, and learn how to lead successful machine learning projects. You will learn about Convolutional networks, RNNs, LSTM, Adam, Dropout, BatchNorm, Xavier/He initialization, and more. You will work on case studies from healthcare, autonomous driving, sign language reading, music generation, and natural language processing. You will master not only the theory, but also see how it is applied in industry. You will practice all these ideas in Python and in TensorFlow, which we will teach.

You will also hear from many top leaders in Deep Learning, who will share with you their personal stories and give you career advice.

AI is transforming multiple industries. After finishing this specialization, you will likely find creative ways to apply it to your work.

We will help you master Deep Learning, understand how to apply it, and build a career in AI.

Specialization Certificate

At last I've successfully completed the specialization and earned my certificate!

Similar Notes

Reviews

As DeepLearning.ai is one of the most popular specializations in the field of AI/ML/DL, there are some good reviews covering some or all of its courses.

The list of reviews includes:

A good Facebook group that discusses the courses is here: https://www.facebook.com/groups/DeepLearningAISpecialization/.

Group description:

This group is for current, past or future students of Prof Andrew Ng's deeplearning.ai class in Coursera. The purpose is for students to get to know each other, ask questions, and share insights. However, remember the Coursera Honor Code - please do not post any solution in the forum!

Next steps

Taking the fast.ai course series, as it focuses more on practical work.

Acknowledgements

Thanks to VladKha, wangzhenhui1992, jarpit96, and other contributors for helping me revise and fix mistakes in the notes.





Mahmoud Badry @ 2018


deeplearning.ai-summary's Issues

Mistake!

Train / Dev / Test sets

If the size of the dataset is 100 to 1,000,000 ==> 60/20/20
If the size of the dataset is 1,000,000 to INF ==> 99/1/1 or 99.5/0.25/0.25

According to this lesson, it should be 98/1/1, not 99/1/1.
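A minimal sketch of carving out the suggested 98/1/1 split for a large dataset (the dataset size below is a hypothetical example, not from the notes):

```python
import numpy as np

m = 1_000_000                           # hypothetical dataset size
idx = np.random.permutation(m)          # shuffled example indices
# 98% train, 1% dev, 1% test, as the lesson suggests for large datasets
train, dev, test = np.split(idx, [980_000, 990_000])
print(len(train), len(dev), len(test))  # 980000 10000 10000
```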

Mistake

Thanks for your summary.
I'm now at the fourth course of deeplearning.ai.
And I found a mistake.

Convolutional Implementation of Sliding Windows

As you can see in the above image, we turned the FC layer into a Conv layer using a convolution with the width and height of the filter is the same as the with and height of the input.

It should be "width" instead of "with".

Difficult to understand

Language model and sequence generation

The job of a language model is given any sentence give a probability of that sentence. Also what is the next sentence probability given a sentence.

It is difficult to understand.
Is this the intended meaning?

The job of a language model is giving a probability of any given sentence. Also the probability of the next sentence.
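To make the intended reading concrete, here is a minimal sketch of "the probability of a sentence" under the chain rule, using a bigram approximation (all words and probabilities below are made-up illustrations, not from the notes):

```python
# P(w1, ..., wn) = product over t of P(wt | w1, ..., w(t-1));
# a bigram model approximates each factor by P(wt | w(t-1)).
probs = {                          # hypothetical conditional probabilities
    ("<s>", "cats"): 0.01,
    ("cats", "sleep"): 0.3,
    ("sleep", "</s>"): 0.5,
}
sentence = ["<s>", "cats", "sleep", "</s>"]
p = 1.0
for prev, cur in zip(sentence, sentence[1:]):
    p *= probs[(prev, cur)]        # multiply the conditional probabilities
print(p)                           # ~0.0015
```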

download.py cannot pull the image files

pandoc_version: 1.16.0.2
pandoc_path: /usr/bin/pandoc

Converting 05- Sequence Models
Traceback (most recent call last):
  File "./download.py", line 48, in <module>
    main()
  File "./download.py", line 42, in main
    outputfile=(key + ".pdf")
  File "/home/he/.local/lib/python3.5/site-packages/pypandoc/__init__.py", line 140, in convert_file
    outputfile=outputfile, filters=filters)
  File "/home/he/.local/lib/python3.5/site-packages/pypandoc/__init__.py", line 325, in _convert_input
    'Pandoc died with exitcode "%s" during conversion: %s' % (p.returncode, stderr)
RuntimeError: Pandoc died with exitcode "43" during conversion: b"pandoc: Could not find image `Images/01.png', skipping...\npandoc: Could not find image `Images/02.png', skipping...\npandoc: Could not find image `Images/03.png', skipping...\npandoc: Could not find image `Images/04.png', skipping...\npandoc: Could not find image `Images/05.png', skipping...\npandoc: Could not find image ... ! LaTeX Error: File `lmodern.sty' not found.\n\nType X to quit or <RETURN> to proceed,\nor enter new name. (Default extension: sty)\n\nEnter file name: \n! Emergency stop.\n<read *> \n \nl.3 \\usepackage\n\npandoc: Error producing PDF\n"
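The traceback suggests two separate problems: pandoc cannot resolve the relative Images/*.png paths, and the LaTeX install is missing lmodern.sty (on Debian/Ubuntu it ships in the lmodern package). A minimal sketch of a conversion that sidesteps the path issue, assuming each course folder contains a Readme.md (the folder and file names here are assumptions, not verified against download.py):

```python
import os
import pypandoc

course_dir = "05- Sequence Models"     # assumed folder layout
os.chdir(course_dir)                   # pandoc resolves Images/*.png against the cwd
pypandoc.convert_file("Readme.md", "pdf",
                      outputfile="05- Sequence Models.pdf")
```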

Unable to understand

GloVe word vectors

For stop words like this, is, the it gives it a low weight, also for words that doesn't occur so much.
It's difficult to understand, and I don't know what it means.
I think it should be rewritten with correct grammar.
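For context, this sentence is likely describing the GloVe weighting function f(x) from Pennington et al. (2014). A sketch of that function (the constants x_max=100 and alpha=0.75 come from the paper, not from these notes):

```python
def glove_weight(x, x_max=100.0, alpha=0.75):
    """Weight for a word pair with co-occurrence count x (GloVe paper)."""
    return (x / x_max) ** alpha if x < x_max else 1.0

print(glove_weight(0))       # 0.0   -> pairs that never co-occur contribute nothing
print(glove_weight(5))       # ~0.11 -> rare pairs get reduced weight
print(glove_weight(10000))   # 1.0   -> capped, so frequent words like "the" don't dominate
```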

Momentum formulation

Say in iteration t: vdW = beta * vdW + (1 - beta) * dW ---> the first vdW on the right-hand side should actually be vdW exponentially averaged up to iteration t-1. Since we are calculating vdW at iteration t, we cannot use vdW itself but need vdW averaged up to the (t-1)-th iteration.
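The point is easier to see in code. A minimal sketch of the momentum update (all names and values below are hypothetical), where the vdW read on the right-hand side is, by construction, the average up to iteration t-1:

```python
import numpy as np

def sgd_momentum(W, grad_fn, beta=0.9, lr=0.1, iters=100):
    """Gradient descent with momentum; grad_fn(W) returns dW (hypothetical helper)."""
    v_dW = np.zeros_like(W)                 # v at "iteration 0"
    for t in range(1, iters + 1):
        dW = grad_fn(W)
        # the RHS v_dW still holds the exponentially weighted average up to t-1
        v_dW = beta * v_dW + (1 - beta) * dW
        W = W - lr * v_dW
    return W

print(sgd_momentum(np.array([5.0]), lambda W: 2 * W))  # minimizes W^2 -> close to [0.]
```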

Maybe there is something wrong in your note

Title: Normalizing activations in a network
In the last line, the last part:
if gamma = sqrt(variance + epsilon) and beta = mean then Z_tilde[i] = Z_norm[i]
I think it should be: if gamma = sqrt(variance + epsilon) and beta = mean then Z_tilde[i] = Z[i]
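A quick numerical check of the proposed fix (a sketch, not the notes' code): with gamma = sqrt(variance + epsilon) and beta = mean, the batch norm output recovers Z, not Z_norm.

```python
import numpy as np

eps = 1e-8
z = np.array([1.0, 2.0, 3.0, 4.0])
mu, var = z.mean(), z.var()
z_norm = (z - mu) / np.sqrt(var + eps)   # standard batch norm normalization
gamma, beta = np.sqrt(var + eps), mu     # the special choice from the note
z_tilde = gamma * z_norm + beta
print(np.allclose(z_tilde, z))           # True: Z_tilde[i] = Z[i]
```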

mislabeled variable

In "A simple convolution network example" on the third layer, the variable padding "p2" is initialized instead of "p3".
Same goes for "Convolutional neural network example" in the second layer, filter and stride variables are "f1p" and "s1p" instead of "f2p" and "s2p".

If we are reusing the variable then its cool for me 👍

Ambiguity ?

Gradient checking implementation notes

Remember if you use the normal Regularization to add the value of the additional check to your equation
(lamda/2m)sum(W[l])

I think it is difficult to understand what this sentence means because of the ambiguity.

  1. If you use the normal regularization, remember to add the value of the additional term (lamda/2m)sum(W[l]) to your equation.
  2. Remember whether or not you used the normal regularization to add the value of the additional term (lamda/2m)sum(W[l]) to your equation.

Would you like to tell me which one is right?
Thank you very much.
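Whichever reading was intended, the underlying rule is the same, and a minimal sketch may help (all names and values below are hypothetical): when gradient checking with L2 regularization, the cost used for the numerical gradient must include the regularization term, and each analytic gradient must include its derivative.

```python
import numpy as np

lambd, m = 0.7, 100                                     # hypothetical hyperparameters
W_list = [np.random.randn(3, 2), np.random.randn(1, 3)]
data_loss = 0.42                                        # hypothetical unregularized loss

# (lamda/2m) * sum over l of ||W[l]||^2 -- must be included in the cost J...
reg = (lambd / (2 * m)) * sum(np.sum(np.square(W)) for W in W_list)
J = data_loss + reg
# ...and its derivative (lamda/m) * W[l] must be in each analytic gradient dW[l]
dW_reg = [(lambd / m) * W for W in W_list]
print(J)
```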

Error in Course 2

In batch normalization and typical normalization, the batch is divided by standard deviation and not variance.

Standard_deviation = sqrt(variance)
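A one-line check of the corrected relation (the square root of the variance, not of its square):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
print(np.isclose(x.std(), np.sqrt(x.var())))  # True: std = sqrt(variance)
```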

Mistakes?

Regularization

The L2 Regularization version: J(w,b) = (1/m) * Sum(L(y'(i),y'(i))) + (Lmda/2m) * ||W||2
The L1 Regularization version: J(w,b) = (1/m) * Sum(L(y'(i),y'(i))) + (Lmda/2m) * (||W||)
The normal cost function that we want to minimize is: J(W1,b1...,WL,bL) = (1/m) * Sum(L(y'(i),y'(i)))
The L2 Regularization version: J(w,b) = (1/m) * Sum(L(y'(i),y'(i))) + (Lmda/2m) * Sum((||W[l]||) ^2)

The loss function is L(y(i),y'(i)), not L(y'(i),y'(i)), right?
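For reference, with the fix the issue suggests, the L2-regularized cost would read (a restatement in the notes' own notation, where y(i) is the true label and y'(i) the prediction):

J(w,b) = (1/m) * Sum(L(y(i),y'(i))) + (Lmda/2m) * Sum((||W[l]||)^2)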

Limitation

Recurrent Neural Network Model
So limitation of the discussed architecture is that it learns from behind.
I think it should be like this:
So a limitation of the discussed architecture is that it cannot learn from behind.
What is your opinion?

Places to be corrected

I found some places which need to be corrected.

Other regularization methods
incorrect: For example in OCR, you'll need the distort the digits.
correct : For example in OCR, you'll need to distort the digits.

Vanishing / Exploding gradients
incorrect: And If W < I (Identity matrix) The weights will explode
correct : And If W < I (Identity matrix) The weights will vanish

Thank You

CNN number of parameters correction

In the Convolutional Neural Networks course, the full network example that is followed by a table describing the output size and number of parameters in each layer is wrong; it has been updated in the course itself in this section.

@mbadry1 if you don't mind, I may create a pull request for this, by editing the image or adding text for the corrections.
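For reference, the standard parameter count for a conv layer, which is what the updated course table is based on (a sketch; the layer below is one illustrative example, not the full corrected table):

```python
def conv_params(f, n_c_prev, n_c):
    """Parameters in a conv layer: each of the n_c filters has
    f*f*n_c_prev weights plus one bias."""
    return (f * f * n_c_prev + 1) * n_c

# e.g. 8 filters of size 5x5 over a 3-channel input:
print(conv_params(5, 3, 8))   # (5*5*3 + 1) * 8 = 608
```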
