Published on July 1, 2019

NB: Please view videos through course.fast.ai for full notes, searchable transcripts, etc. Please use forums.fast.ai for all questions – don’t ask questions in the YouTube comments section!

In the last lesson we had an outstanding question about PyTorch’s CNN default initialization. In order to answer it, Jeremy did a bit of research, and we start today’s lesson by seeing how he went about that research, and what he learned.

Then we do a deep dive into the training loop, and show how to make it concise and flexible. First we look briefly at loss functions and optimizers, including implementing softmax and cross-entropy loss (and the *logsumexp* trick). Then we create a simple training loop, and refactor it step by step to make it more concise and more flexible. In the process we’ll learn about `nn.Parameter` and `nn.Module`, and see how they work with the `torch.optim` classes. We’ll also see how `Dataset` and `DataLoader` really work.
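To make those pieces concrete, here is a minimal sketch (not the lesson’s actual notebooks) of the logsumexp trick for cross-entropy, followed by a tiny training loop using `nn.Module`, `torch.optim`, `Dataset` and `DataLoader`. The toy data and the small two-layer model are hypothetical, just enough to run the loop:

```python
import torch
from torch import nn, optim
from torch.utils.data import TensorDataset, DataLoader

# --- softmax / cross-entropy via the logsumexp trick ---
# log(softmax(x))_i = x_i - logsumexp(x), where
# logsumexp(x) = m + log(sum(exp(x - m))) with m = max(x), for numerical stability.
def logsumexp(x):
    m = x.max(dim=-1, keepdim=True).values
    return m + (x - m).exp().sum(dim=-1, keepdim=True).log()

def log_softmax(x):
    return x - logsumexp(x)

def nll(log_probs, target):
    # negative log likelihood: pick out the log-probability of the correct class
    return -log_probs[range(target.shape[0]), target].mean()

def cross_entropy(logits, target):
    return nll(log_softmax(logits), target)

# --- a minimal training loop, refactored with nn.Module, optim, Dataset, DataLoader ---
x = torch.randn(256, 20)                      # toy inputs (made up for this sketch)
y = torch.randint(0, 2, (256,))               # toy labels
train_dl = DataLoader(TensorDataset(x, y), batch_size=32, shuffle=True)

model = nn.Sequential(nn.Linear(20, 50), nn.ReLU(), nn.Linear(50, 2))
opt = optim.SGD(model.parameters(), lr=0.1)   # parameters() comes for free from nn.Module

for epoch in range(2):
    for xb, yb in train_dl:
        loss = cross_entropy(model(xb), yb)
        loss.backward()
        opt.step()
        opt.zero_grad()
```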

Once we have those basic pieces in place, we’ll look closely at some key building blocks of fastai: *callbacks*, *DataBunch*, and *Learner*. We’ll see how they help, and how they’re implemented. Then we’ll start writing lots of callbacks to implement new functionality and best practices!
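As a rough illustration of the callback idea (this is a simplified sketch, not fastai’s actual API), the training loop calls hooks at fixed points, and each callback implements only the hooks it cares about. The class and function names here are hypothetical:

```python
class Callback:
    # base class: every hook is a no-op, so callbacks override only what they need
    def begin_epoch(self, epoch): pass
    def after_loss(self, loss):   pass
    def after_step(self):         pass

class PrintLoss(Callback):
    def after_loss(self, loss): print(f"loss: {loss.item():.4f}")

def fit(epochs, model, loss_func, opt, train_dl, callbacks=()):
    for epoch in range(epochs):
        for cb in callbacks: cb.begin_epoch(epoch)
        for xb, yb in train_dl:
            loss = loss_func(model(xb), yb)
            for cb in callbacks: cb.after_loss(loss)
            loss.backward()
            opt.step()
            opt.zero_grad()
            for cb in callbacks: cb.after_step()

# usage with the objects from the previous sketch:
# fit(1, model, cross_entropy, opt, train_dl, callbacks=[PrintLoss()])
```

The design choice is that new behaviour (metrics, schedulers, early stopping) is added by writing new callbacks rather than editing the loop itself.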
