CSE559A Lecture 8
Paper review sharing.
Recap: Three ways to think about linear classifiers
Geometric view: Hyperplanes in the feature space
Algebraic view: Linear functions of the features
Visual view: One template per class
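For concreteness, a minimal NumPy sketch connecting these views (the shapes and values below are illustrative assumptions, not from the lecture): each class score is a linear function of the features, and each row of $W$ acts as one class template.

```python
import numpy as np

rng = np.random.default_rng(0)

num_classes, num_features = 10, 3072   # e.g. CIFAR-10 images, flattened
W = rng.normal(size=(num_classes, num_features))  # one template (row) per class
b = np.zeros(num_classes)

x = rng.normal(size=num_features)      # one input image, flattened

# Algebraic view: scores are linear functions of the features.
scores = W @ x + b                     # shape: (num_classes,)

# Visual view: the score for class c is the template-match (dot product)
# between x and the c-th row of W, plus a bias.
c = 3
assert np.isclose(scores[c], W[c] @ x + b[c])
```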
Continuing with linear classification models
Two-layer networks as combinations of templates.
Interpretability is lost as depth increases.
A two-layer network is a universal approximator: it can approximate any continuous function to arbitrary accuracy. But the hidden layer may need to be huge.
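A minimal sketch of such a two-layer network, assuming a ReLU non-linearity and illustrative layer sizes: the first layer matches the input against many hidden templates, and the second layer combines the template responses into class scores.

```python
import numpy as np

def two_layer_forward(x, W1, b1, W2, b2):
    """Two-layer network: combine hidden-template responses into class scores."""
    h = np.maximum(0, W1 @ x + b1)   # hidden layer: one template per hidden unit
    return W2 @ h + b2               # output layer: weighted combination

rng = np.random.default_rng(0)
num_features, num_hidden, num_classes = 3072, 512, 10  # hidden size may need to be huge
W1 = rng.normal(size=(num_hidden, num_features)) * 0.01
b1 = np.zeros(num_hidden)
W2 = rng.normal(size=(num_classes, num_hidden)) * 0.01
b2 = np.zeros(num_classes)

scores = two_layer_forward(rng.normal(size=num_features), W1, b1, W2, b2)
print(scores.shape)  # (10,)
```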
Supervised learning outline
- Collect training data
- Specify model (select hyper-parameters)
- Train model
Hyper-parameter selection
- Number of layers, number of units per layer, learning rate, etc.
- Type of non-linearity, regularization, etc.
- Type of loss function, etc.
- SGD settings: batch size, number of epochs, etc.
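These choices are often collected in a single configuration object that the training code reads. A sketch; the names and values here are hypothetical, not recommendations from the lecture:

```python
# Illustrative hyper-parameter configuration; values are assumptions.
config = {
    "num_layers": 2,
    "units_per_layer": 512,
    "learning_rate": 1e-3,
    "nonlinearity": "relu",
    "weight_decay": 1e-4,     # regularization strength
    "loss": "cross_entropy",
    "batch_size": 128,        # SGD setting
    "num_epochs": 20,         # SGD setting
}
```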
Hyper-parameter search
Use the validation set to evaluate model performance.
Never peek at the test set.
Use the training set for K-fold cross-validation.
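A minimal sketch of K-fold cross-validation over the training set; `train_and_evaluate` is a hypothetical stand-in for fitting a model on one split and scoring it on the held-out fold:

```python
import numpy as np

def k_fold_scores(X, y, k, train_and_evaluate):
    """Split the training set into k folds; each fold serves once as validation."""
    n = len(X)
    indices = np.random.permutation(n)
    folds = np.array_split(indices, k)
    scores = []
    for i in range(k):
        val_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        # train_and_evaluate is a hypothetical helper: fit on the training
        # split, return a validation score on the held-out fold.
        scores.append(train_and_evaluate(X[train_idx], y[train_idx],
                                         X[val_idx], y[val_idx]))
    return np.mean(scores)  # average validation score across the k folds
```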
Backpropagation
Computation graphs
SGD update for each parameter $w$: $w \leftarrow w - \eta \frac{\partial E}{\partial w}$, where $E$ is the error function and $\eta$ is the learning rate.
Using the chain rule: suppose $z = f(y)$ and $y = g(x)$. Then $\frac{\partial z}{\partial x} = \frac{\partial z}{\partial y} \cdot \frac{\partial y}{\partial x}$.
Example: let $E = \frac{1}{2}(y - t)^2$ with $y = wx$. Then $\frac{\partial E}{\partial y} = y - t$ and $\frac{\partial y}{\partial w} = x$, so $\frac{\partial E}{\partial w} = (y - t)\,x$.
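A quick numerical sanity check of this example (the specific values are arbitrary assumptions): the analytic chain-rule gradient should match a finite-difference estimate.

```python
w, x, t = 0.5, 2.0, 3.0

def E(w):
    y = w * x
    return 0.5 * (y - t) ** 2

# Analytic gradient via the chain rule: dE/dw = (y - t) * x
y = w * x
analytic = (y - t) * x

# Central finite-difference estimate of dE/dw
eps = 1e-6
numeric = (E(w + eps) - E(w - eps)) / (2 * eps)

print(analytic, numeric)  # both approximately -4.0
```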
General backpropagation algorithm
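A minimal sketch of the general algorithm as reverse-mode differentiation over a computation graph; the `Node` class and the two operators are illustrative assumptions, not the lecture's implementation. Run the forward pass, then visit nodes in reverse topological order, multiplying each node's upstream gradient by its local derivatives.

```python
class Node:
    """One node in a computation graph: value, gradient, and local derivatives."""
    def __init__(self, value, parents=(), local_grads=()):
        self.value = value
        self.parents = parents          # upstream nodes this one depends on
        self.local_grads = local_grads  # d(self)/d(parent) for each parent
        self.grad = 0.0

def add(a, b):
    return Node(a.value + b.value, (a, b), (1.0, 1.0))

def mul(a, b):
    return Node(a.value * b.value, (a, b), (b.value, a.value))

def backprop(output):
    """Reverse-mode sweep: accumulate dE/dnode for every node in the graph."""
    order, seen = [], set()
    def topo(n):                        # topological order via DFS
        if id(n) not in seen:
            seen.add(id(n))
            for p in n.parents:
                topo(p)
            order.append(n)
    topo(output)
    output.grad = 1.0                   # dE/dE = 1
    for n in reversed(order):
        for parent, local in zip(n.parents, n.local_grads):
            parent.grad += n.grad * local   # chain rule, summed over paths

# Example: E = (x + y) * z
x, y, z = Node(2.0), Node(3.0), Node(-4.0)
E = mul(add(x, y), z)
backprop(E)
print(x.grad, y.grad, z.grad)  # -4.0 -4.0 5.0
```

Each node only stores its local derivatives; the reverse sweep composes them with the chain rule, producing exactly the gradients the SGD update above consumes.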