Deletes old checkpoints.
Compat aliases for migration: `tf.contrib.checkpoint.CheckpointManager`. See the Migration guide for more details.

```python
tf.train.CheckpointManager(
    checkpoint, directory, max_to_keep, keep_checkpoint_every_n_hours=None,
    checkpoint_name='ckpt'
)
```
```python
import tensorflow as tf
checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=model)
manager = tf.contrib.checkpoint.CheckpointManager(
    checkpoint, directory="/tmp/model", max_to_keep=5)
status = checkpoint.restore(manager.latest_checkpoint)
while True:
  # train
  manager.save()
```
`CheckpointManager` preserves its own state across instantiations (see the
`__init__` documentation for details). Only one should be active in a particular directory at a time.
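The usage example above assumes a pre-existing `optimizer` and `model` and trains forever. A self-contained sketch of the same pattern, using trivial stand-in objects (the `tf.Module` with one variable and the directory path are illustrative assumptions, not part of the API):

```python
import tensorflow as tf

# Stand-ins for a real model and optimizer (illustrative only).
model = tf.Module()
model.w = tf.Variable(1.0)

checkpoint = tf.train.Checkpoint(model=model)
manager = tf.train.CheckpointManager(
    checkpoint, directory="/tmp/model_demo", max_to_keep=5)

# Restores the latest checkpoint if one exists; a no-op on the first run,
# when manager.latest_checkpoint is None.
checkpoint.restore(manager.latest_checkpoint)

for _ in range(3):  # stand-in for the training loop
    model.w.assign_add(0.5)
    manager.save()
```

Because the manager records its state in the `checkpoint` file in `directory`, re-running this script resumes numbering and restores `model.w` from the most recent save.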
Args:

| Argument | Description |
| --- | --- |
| `checkpoint` | The `tf.train.Checkpoint` instance to save and manage checkpoints for. |
| `directory` | The path to a directory in which to write checkpoints. A special file named "checkpoint" is also written to this directory (in a human-readable text format) which contains the state of the `CheckpointManager`. |
| `max_to_keep` | An integer, the number of checkpoints to keep. Unless preserved by `keep_checkpoint_every_n_hours`, checkpoints will be deleted from the active set, oldest first, until only `max_to_keep` checkpoints remain. |
| `keep_checkpoint_every_n_hours` | Upon removal from the active set, a checkpoint will be preserved if it has been at least `keep_checkpoint_every_n_hours` since the last preserved checkpoint. The default setting of `None` does not preserve any checkpoints in this way. |
| `checkpoint_name` | Custom name for the checkpoint file. |

Raises:

| Exception | Description |
| --- | --- |
| `ValueError` | If `max_to_keep` is not a positive integer. |
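The `max_to_keep` pruning behavior can be seen directly: save more checkpoints than the limit and the oldest ones drop out of the active set. A minimal sketch (the temporary directory and variable name are assumptions for illustration):

```python
import tempfile
import tensorflow as tf

step = tf.Variable(0)
checkpoint = tf.train.Checkpoint(step=step)
directory = tempfile.mkdtemp()  # any writable directory works
manager = tf.train.CheckpointManager(checkpoint, directory, max_to_keep=2)

# Five saves, but only the two most recent remain in the active set.
for _ in range(5):
    step.assign_add(1)
    manager.save()

print(len(manager.checkpoints))  # 2
```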
Attributes:

| Attribute | Description |
| --- | --- |
| `checkpoints` | A list of managed checkpoints. Note that checkpoints saved due to `keep_checkpoint_every_n_hours` will not show up in this list (to avoid ever-growing filename lists). |
| `latest_checkpoint` | The prefix of the most recent checkpoint in `directory`. Suitable for passing to `tf.train.Checkpoint.restore` to resume training. |
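The `latest_checkpoint` prefix is what a fresh `tf.train.Checkpoint` restores from. A short sketch of that round trip (directory and variable names are illustrative assumptions):

```python
import tempfile
import tensorflow as tf

directory = tempfile.mkdtemp()
step = tf.Variable(0)
manager = tf.train.CheckpointManager(
    tf.train.Checkpoint(step=step), directory, max_to_keep=1)

step.assign(7)
manager.save()

# A fresh Checkpoint resumes from the prefix held by latest_checkpoint.
restored = tf.Variable(0)
tf.train.Checkpoint(step=restored).restore(manager.latest_checkpoint)
print(int(restored))  # 7
```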
```python
save(
    checkpoint_number=None
)
```

Creates a new checkpoint and manages it.

Args:

| Argument | Description |
| --- | --- |
| `checkpoint_number` | An optional integer, or an integer-dtype `Variable` or `Tensor`, used to number the checkpoint. If `None` (default), checkpoints are numbered using `checkpoint.save_counter`. |

Returns: The path to the new checkpoint. It is also recorded in the `checkpoints` and `latest_checkpoint` properties.
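Passing an explicit `checkpoint_number` overrides the automatic numbering, and the returned path is the prefix of the file just written. A brief sketch (directory and names are assumptions for illustration):

```python
import os
import tempfile
import tensorflow as tf

directory = tempfile.mkdtemp()
checkpoint = tf.train.Checkpoint(step=tf.Variable(0))
manager = tf.train.CheckpointManager(checkpoint, directory, max_to_keep=3)

# An explicit number replaces the automatic save_counter numbering.
path = manager.save(checkpoint_number=100)
print(os.path.basename(path))          # ckpt-100
print(path == manager.latest_checkpoint)  # True
```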
© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.