Callbacks


Training Callbacks

Callbacks that add functionality during the training phase, including callbacks that make decisions depending on how a monitored metric or loss behaves


source

MeanLossGraphCallback


def MeanLossGraphCallback(
    after_create:NoneType=None, before_fit:NoneType=None, before_epoch:NoneType=None, before_train:NoneType=None,
    before_batch:NoneType=None, after_pred:NoneType=None, after_loss:NoneType=None, before_backward:NoneType=None,
    after_cancel_backward:NoneType=None, after_backward:NoneType=None, before_step:NoneType=None,
    after_cancel_step:NoneType=None, after_step:NoneType=None, after_cancel_batch:NoneType=None,
    after_batch:NoneType=None, after_cancel_train:NoneType=None, after_train:NoneType=None,
    before_validate:NoneType=None, after_cancel_validate:NoneType=None, after_validate:NoneType=None,
    after_cancel_epoch:NoneType=None, after_epoch:NoneType=None, after_cancel_fit:NoneType=None,
    after_fit:NoneType=None
):

Update a graph of training and validation loss


source

ShortEpochCallback


def ShortEpochCallback(
    pct:float=0.01, short_valid:bool=True
):

Fit just pct of an epoch, then stop
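The logic can be sketched as follows (a plain-Python illustration, assuming an epoch of `n_batches` batches; `run_short_epoch` is a hypothetical name, not the callback itself):

```python
# Illustrative sketch of ShortEpochCallback's behaviour: cancel the epoch
# once the fraction of batches processed reaches `pct`.
def run_short_epoch(n_batches: int, pct: float = 0.01) -> int:
    """Return how many batches actually run before the epoch is cut short."""
    done = 0
    for i in range(n_batches):
        if i / n_batches >= pct:  # reached pct of the epoch: stop here
            break
        done += 1
    return done
```

This is useful for quickly checking that a training loop runs end to end without paying for a full epoch.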


source

GradientAccumulation


def GradientAccumulation(
    n_acc:int=32
):

Accumulate gradients before updating weights
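The idea can be sketched without any framework (the helper `accumulate_steps` is illustrative, not library code): gradients are summed over successive mini-batches, and the optimizer only steps once at least `n_acc` samples have been seen, mimicking a larger effective batch size.

```python
# Illustrative sketch of gradient accumulation: given the size of each
# mini-batch, return the indices of the batches after which a weight
# update would actually happen.
def accumulate_steps(batch_sizes, n_acc=32):
    steps, count = [], 0
    for i, bs in enumerate(batch_sizes):
        count += bs
        if count >= n_acc:   # enough samples accumulated -> update weights
            steps.append(i)
            count = 0        # restart accumulation for the next update
    return steps
```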


source

EarlyStoppingCallback


def EarlyStoppingCallback(
    monitor:str='valid_loss', # value (usually loss or metric) being monitored.
    comp:NoneType=None, # numpy comparison operator; np.less if monitor is loss, np.greater if monitor is metric.
    min_delta:float=0.0, # minimum delta between the last monitor value and the best monitor value.
    patience:int=1, # number of epochs to wait when training has not improved model.
    reset_on_fit:bool=True, # before model fitting, reset value being monitored to -infinity (if monitor is metric) or +infinity (if monitor is loss).
):

A TrackerCallback that terminates training when the monitored quantity stops improving.
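The stopping rule can be sketched in plain Python (the function `stop_epoch` is illustrative; `operator.lt` stands in for `np.less` so the sketch stays dependency-free): training stops once the monitored value has failed to improve on the best value by at least `min_delta` for `patience` consecutive epochs.

```python
import operator

# Illustrative sketch of the early-stopping decision over a sequence of
# monitored values (one per epoch).
def stop_epoch(values, comp=operator.lt, min_delta=0.0, patience=1):
    """Return the epoch index at which training would stop, or None."""
    best, wait = values[0], 0
    for i, v in enumerate(values[1:], start=1):
        if comp(v - min_delta, best):  # improved enough: reset the counter
            best, wait = v, 0
        else:
            wait += 1
            if wait >= patience:       # out of patience: stop here
                return i
    return None
```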


source

SaveModelCallback


def SaveModelCallback(
    monitor:str='valid_loss', # value (usually loss or metric) being monitored.
    comp:NoneType=None, # numpy comparison operator; np.less if monitor is loss, np.greater if monitor is metric.
    min_delta:float=0.0, # minimum delta between the last monitor value and the best monitor value.
    fname:str='model', # model name to be used when saving model.
    every_epoch:bool=False, # if true, save model after every epoch; else save only when model is better than existing best.
    at_end:bool=False, # if true, save model when training ends; else load best model if there is only one saved model.
    with_opt:bool=False, # if true, save optimizer state (if any available) when saving model.
    reset_on_fit:bool=True, # before model fitting, reset value being monitored to -infinity (if monitor is metric) or +infinity (if monitor is loss).
):

A TrackerCallback that saves the best version of the model during training and loads it at the end.
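The best-model bookkeeping can be sketched as follows (`saved_epochs` is a hypothetical name; `operator.lt` stands in for `np.less`): a save is triggered whenever the monitored value beats the previous best by at least `min_delta`.

```python
import operator

# Illustrative sketch: given the monitored value at each epoch, return the
# indices of the epochs that would trigger a model save (default
# every_epoch=False behaviour: save only on improvement).
def saved_epochs(values, comp=operator.lt, min_delta=0.0):
    best, saved = None, []
    for i, v in enumerate(values):
        if best is None or comp(v - min_delta, best):
            best = v
            saved.append(i)   # the real callback would save the model here
    return saved
```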


source

ReduceLROnPlateau


def ReduceLROnPlateau(
    monitor:str='valid_loss', # value (usually loss or metric) being monitored.
    comp:NoneType=None, # numpy comparison operator; np.less if monitor is loss, np.greater if monitor is metric.
    min_delta:float=0.0, # minimum delta between the last monitor value and the best monitor value.
    patience:int=1, # number of epochs to wait when training has not improved model.
    factor:float=10.0, # the denominator to divide the learning rate by, when reducing the learning rate.
    min_lr:int=0, # the minimum learning rate allowed; learning rate cannot be reduced below this minimum.
    reset_on_fit:bool=True, # before model fitting, reset value being monitored to -infinity (if monitor is metric) or +infinity (if monitor is loss).
):

A TrackerCallback that reduces learning rate when a metric has stopped improving.
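The schedule can be sketched in plain Python (`lr_after` is an illustrative name; `operator.lt` stands in for `np.less`): the learning rate is divided by `factor` each time the monitored value stalls for `patience` epochs, never dropping below `min_lr`.

```python
import operator

# Illustrative sketch: apply the plateau rule over a sequence of monitored
# values and return the learning rate left at the end of training.
def lr_after(values, lr=1e-2, comp=operator.lt, min_delta=0.0,
             patience=1, factor=10.0, min_lr=0.0):
    best, wait = values[0], 0
    for v in values[1:]:
        if comp(v - min_delta, best):  # improved: reset the patience counter
            best, wait = v, 0
        else:
            wait += 1
            if wait >= patience:
                lr = max(lr / factor, min_lr)  # reduce, but not below min_lr
                wait = 0
    return lr
```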

Schedulers

Callback and helper functions to schedule hyper-parameters


source

ParamScheduler


def ParamScheduler(
    scheds
):

Schedule hyper-parameters according to scheds

scheds is a dictionary with one key for each hyper-parameter you want to schedule, and either a scheduler or a list of schedulers as values (in the second case, the list must have the same length as the number of parameter groups of the optimizer).
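For a sense of what a scheduler in this dictionary does, here is a dependency-free sketch (`cos_sched` mimics a cosine schedule; the exact formula is an assumption, not taken from the library source): at each batch the scheduler is called with the fraction of training completed, a value in [0, 1], and the result is written into the optimizer.

```python
import math

# Hypothetical stand-in for a cosine scheduler: maps a position p in
# [0, 1] to a value that anneals smoothly from `start` to `end`.
def cos_sched(start, end):
    return lambda p: start + (1 + math.cos(math.pi * (1 - p))) * (end - start) / 2

scheds = {'lr': cos_sched(1e-3, 1e-5)}          # one entry per hyper-parameter
lrs = [scheds['lr'](i / 9) for i in range(10)]  # value used at each of 10 steps
```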


source

SchedCos


def SchedCos(
    start, end
):

Cosine schedule function from start to end


source

SchedExp


def SchedExp(
    start, end
):

Exponential schedule function from start to end


source

SchedLin


def SchedLin(
    start, end
):

Linear schedule function from start to end


source

SchedNo


def SchedNo(
    start, end
):

Constant schedule function with start value
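The four schedule shapes above can be sketched side by side (hedged re-implementations inferred from the descriptions, not copied from the library; each maps a position p in [0, 1] to a hyper-parameter value):

```python
import math

# Illustrative versions of the four annealing shapes.
def sched_lin(start, end): return lambda p: start + p * (end - start)
def sched_cos(start, end): return lambda p: start + (1 + math.cos(math.pi * (1 - p))) * (end - start) / 2
def sched_exp(start, end): return lambda p: start * (end / start) ** p
def sched_no(start, end):  return lambda p: start  # `end` is ignored
```

All four agree at p=0 (they return `start`), and all but the constant schedule reach `end` at p=1; they differ only in the path taken between the two.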