LinearLR
class torch.optim.lr_scheduler.LinearLR(optimizer, start_factor=0.3333333333333333, end_factor=1.0, total_iters=5, last_epoch=-1)[source]
Decays the learning rate of each parameter group by linearly changing a small multiplicative factor.
The multiplication is done until the number of epochs reaches a pre-defined milestone: total_iters. Notice that such decay can happen simultaneously with other changes to the learning rate from outside this scheduler. When last_epoch=-1, the initial lr is set to lr.
Parameters
- optimizer (Optimizer) – Wrapped optimizer.
- start_factor (float) – The number we multiply the learning rate by in the first epoch. The multiplication factor changes towards end_factor in the following epochs. Default: 1./3.
- end_factor (float) – The number we multiply the learning rate by at the end of the linear changing process. Default: 1.0.
- total_iters (int) – The number of iterations after which the multiplicative factor reaches end_factor. Default: 5.
- last_epoch (int) – The index of the last epoch. Default: -1.
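The resulting multiplier can be written in closed form; a minimal sketch follows (the helper linear_factor is illustrative only, not part of the API):

>>> # Sketch of the closed-form multiplier: interpolate linearly from
>>> # start_factor to end_factor over total_iters epochs, then hold steady.
>>> def linear_factor(epoch, start_factor=1/3, end_factor=1.0, total_iters=5):
>>>     t = min(epoch, total_iters)
>>>     return start_factor + (end_factor - start_factor) * t / total_iters
>>> linear_factor(0)   # 0.333..., so the initial lr is lr * start_factor
>>> linear_factor(5)   # 1.0, the base lr is fully restored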
Example
>>> # Assuming optimizer uses lr = 0.05 for all groups
>>> # lr = 0.0025    if epoch == 0
>>> # lr = 0.003687  if epoch == 1
>>> # lr = 0.004875  if epoch == 2
>>> # lr = 0.006062  if epoch == 3
>>> # ...
>>> # lr = 0.05      if epoch >= 40
>>> scheduler = LinearLR(optimizer, start_factor=0.05, total_iters=40)
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()
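As noted above, the decay can run simultaneously with other schedulers; one way to combine them is torch.optim.lr_scheduler.ChainedScheduler, which calls each wrapped scheduler's step() in sequence. A sketch, with the warmup and decay names chosen for illustration:

>>> from torch.optim.lr_scheduler import ChainedScheduler, ExponentialLR, LinearLR
>>> warmup = LinearLR(optimizer, start_factor=0.1, total_iters=5)
>>> decay = ExponentialLR(optimizer, gamma=0.9)
>>> scheduler = ChainedScheduler([warmup, decay])
>>> for epoch in range(100):
>>>     train(...)
>>>     validate(...)
>>>     scheduler.step()  # applies both the linear warm-up and the exponential decay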
get_last_lr()[source]
Return the last learning rate computed by the current scheduler.
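The result is a list with one entry per parameter group; for the example above, a sketch of what it would return immediately after construction:

>>> scheduler.get_last_lr()  # e.g. [0.0025] for a single parameter group at epoch 0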
load_state_dict(state_dict)[source]
Load the scheduler’s state.
Parameters
- state_dict (dict) – scheduler state. Should be an object returned from a call to state_dict().
state_dict()[source]
Return the state of the scheduler as a dict. It contains an entry for every variable in self.__dict__ which is not the optimizer.
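Together with load_state_dict() above, this supports a checkpoint round trip; a minimal sketch (the checkpoint file name is illustrative):

>>> torch.save({"optimizer": optimizer.state_dict(),
>>>             "scheduler": scheduler.state_dict()}, "checkpoint.pt")
>>> # ... later, after recreating the optimizer and scheduler with the same arguments:
>>> ckpt = torch.load("checkpoint.pt")
>>> optimizer.load_state_dict(ckpt["optimizer"])
>>> scheduler.load_state_dict(ckpt["scheduler"])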
step(epoch=None)[source]
Perform a step.
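With a scheduler like this one, scheduler.step() is typically called once per epoch, after that epoch's optimizer.step() calls; a sketch of the ordering (dataloader and compute_loss are placeholders, not part of the API):

>>> for epoch in range(100):
>>>     for batch in dataloader:          # placeholder iterable
>>>         optimizer.zero_grad()
>>>         loss = compute_loss(batch)    # placeholder training step
>>>         loss.backward()
>>>         optimizer.step()
>>>     scheduler.step()                  # once per epoch, after optimizer.step()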