This course teaches the “magic” of getting deep learning to work well. Rather than treating the deep learning process as a black box, the course explains what drives performance and shows how to produce good results systematically.
This course teaches you to:
- Understand industry best practices for building deep learning applications.
- Be able to effectively use common neural network techniques such as careful weight initialization, L2 and dropout regularization, batch normalization, and gradient checking.
- Be able to implement and apply a variety of optimization algorithms, such as mini-batch gradient descent, momentum, RMSprop, and Adam, and check them for convergence.
- Understand best practices for the deep learning era, such as how to set up train/dev/test sets and analyze bias/variance.
- Be able to implement a neural network in TensorFlow (see the sketch after this list).
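
As a quick illustration of several of these techniques together, here is a minimal sketch (not taken from the course notebooks) of a small Keras model in TensorFlow combining He initialization, L2 regularization, batch normalization, dropout, and mini-batch training; the input dimension and toy data are assumptions made for illustration.

```python
# Minimal sketch: a small Keras model combining several of the techniques above.
# The input dimension (20) and the random data are assumed for illustration only.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,)),
    tf.keras.layers.Dense(
        64, activation="relu",
        kernel_initializer="he_normal",                     # He initialization
        kernel_regularizer=tf.keras.regularizers.l2(0.01),  # L2 regularization
    ),
    tf.keras.layers.BatchNormalization(),                   # batch normalization
    tf.keras.layers.Dropout(0.5),                           # dropout regularization
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])

# Toy data just to show the training call; batch_size=32 gives mini-batch gradient descent.
X = np.random.randn(256, 20).astype("float32")
y = (np.random.rand(256, 1) > 0.5).astype("float32")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```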
Lectures include:
- Optimization algorithms (a NumPy sketch of these updates follows this list)
- Mini-batch gradient descent
- Understanding mini-batch gradient descent
- Exponentially weighted averages
- Understanding exponentially weighted averages
- Bias correction in exponentially weighted averages
- Gradient descent with momentum
- RMSprop
- Adam optimization algorithm
- Learning rate decay
- The problem of local optima
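
To make the lecture topics concrete, here is a minimal NumPy sketch (not from the course materials) showing how the momentum and RMSprop terms, their bias-corrected exponentially weighted averages, learning rate decay, and the Adam update fit together on a toy one-dimensional problem; the hyperparameter values are common defaults, assumed for illustration.

```python
# Minimal sketch: Adam on f(w) = (w - 3)^2, whose gradient is known in closed form.
# Momentum and RMSprop appear as the two exponentially weighted averages that
# Adam combines; hyperparameters are common defaults, not course prescriptions.
import numpy as np

def grad(w):
    return 2.0 * (w - 3.0)          # gradient of (w - 3)^2

w = 0.0
v, s = 0.0, 0.0                     # EWMAs of the gradient (momentum) and its square (RMSprop)
alpha0, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8

for t in range(1, 201):
    g = grad(w)
    v = beta1 * v + (1 - beta1) * g          # momentum: EWMA of gradients
    s = beta2 * s + (1 - beta2) * g**2       # RMSprop: EWMA of squared gradients
    v_hat = v / (1 - beta1**t)               # bias correction of the EWMAs
    s_hat = s / (1 - beta2**t)
    alpha = alpha0 / (1 + 0.01 * t)          # learning rate decay
    w -= alpha * v_hat / (np.sqrt(s_hat) + eps)   # Adam update

print(w)  # converges toward the minimum at w = 3
```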
This is the second course of the Deep Learning Specialization.
The card below has my certification and license information.