This lesson summarizes the topics we'll be covering in this section and why they'll be important to you as a data scientist.
In the previous section you learned a lot about how neural networks work. In this section, you'll learn why deeper networks sometimes lead to better results, and we'll generalize what you learned before so you can get your matrix dimensions right for deep networks. You'll build deeper neural networks from scratch, and also learn how to build them using Keras.
You'll learn that deep representations are really good at automating what used to be a tedious and time-consuming process of feature engineering. In this section, you'll see that you can actually build a smaller but deeper neural network with exponentially fewer hidden units that performs even better than a network with more hidden units. The reason for this is that learning happens in each layer, and adding more layers (even with fewer units per layer) can lead to very powerful predictions! You'll learn about matrix notation for these deep networks and how to build a network like that from scratch.
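As a preview, the dimension rules for a deep network can be sketched in NumPy. The layer sizes below are hypothetical, but the shape conventions are the standard ones you'll use in this section: W[l] has shape (n[l], n[l-1]), b[l] has shape (n[l], 1), and each activation A[l] has shape (n[l], m) for m training examples.

```python
import numpy as np

# Hypothetical architecture: 4 input features, two hidden layers, 1 output unit
layer_dims = [4, 5, 3, 1]
m = 10  # number of training examples

rng = np.random.default_rng(0)
X = rng.standard_normal((layer_dims[0], m))

# Initialize parameters: W[l] is (n[l], n[l-1]), b[l] is (n[l], 1)
params = {}
for l in range(1, len(layer_dims)):
    params[f"W{l}"] = rng.standard_normal((layer_dims[l], layer_dims[l - 1])) * 0.01
    params[f"b{l}"] = np.zeros((layer_dims[l], 1))

# Forward pass: Z[l] = W[l] A[l-1] + b[l], then a tanh activation
A = X
for l in range(1, len(layer_dims)):
    Z = params[f"W{l}"] @ A + params[f"b{l}"]
    A = np.tanh(Z)
    assert A.shape == (layer_dims[l], m)  # every activation is (n[l], m)

print(A.shape)  # one output row, one column per training example
```

Checking these shapes at every layer is exactly how you'll debug your from-scratch implementation later in the section.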
In short: in this section, you'll extend your deep learning knowledge to deeper neural networks, and you'll learn how to use Keras to build deep learning models!