
tf.estimator.experimental.InMemoryEvaluatorHook

Hook that runs evaluation during training, without requiring a checkpoint.

Inherits From: SessionRunHook

Example:

def train_input_fn():
  # Build and return the training dataset (e.g. a tf.data.Dataset).
  ...
  return train_dataset

def eval_input_fn():
  # Build and return the evaluation dataset.
  ...
  return eval_dataset

estimator = tf.estimator.DNNClassifier(...)

# Periodically evaluate on eval_input_fn during training, in process,
# without reloading from a checkpoint.
evaluator = tf.estimator.experimental.InMemoryEvaluatorHook(
    estimator, eval_input_fn)
estimator.train(train_input_fn, hooks=[evaluator])

Current limitations of this approach are:

  • It doesn't support multi-node distributed mode.
  • It doesn't support saveable objects other than variables (such as boosted tree support).
  • It doesn't support custom saver logic (such as ExponentialMovingAverage support).
Args
estimator: A tf.estimator.Estimator instance to call evaluate on.
input_fn: Equivalent to the input_fn arg to estimator.evaluate. A function that constructs the input data for evaluation. See Creating input functions for more information. The function should construct and return one of the following (see the eval_input_fn sketch below this list):
  • A tf.data.Dataset object: the outputs of the Dataset must be a tuple (features, labels) with the same constraints as below.
  • A tuple (features, labels): where features is a Tensor or a dictionary of string feature name to Tensor, and labels is a Tensor or a dictionary of string label name to Tensor. Both features and labels are consumed by model_fn and should satisfy model_fn's expectations of its inputs.
steps: Equivalent to the steps arg to estimator.evaluate. Number of steps for which to evaluate the model. If None, evaluates until input_fn raises an end-of-input exception.
hooks: Equivalent to the hooks arg to estimator.evaluate. List of SessionRunHook subclass instances, used for callbacks inside the evaluation call.
name: Equivalent to the name arg to estimator.evaluate. Name of the evaluation, used when running multiple evaluations on different data sets, such as training data vs. test data. Metrics for different evaluations are saved in separate folders and appear separately in TensorBoard.
every_n_iter: int, runs the evaluator once every N training iterations.
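
For illustration only, a minimal sketch of an input_fn that satisfies the tf.data.Dataset option above; the feature name 'x' and the values are assumptions, not part of the API:

import tensorflow as tf

def eval_input_fn():
  # Hypothetical features/labels; any Dataset yielding a (features, labels)
  # tuple with the constraints listed above will work.
  features = {'x': tf.constant([[0.0], [1.0], [2.0], [3.0]])}
  labels = tf.constant([0, 0, 1, 1])
  return tf.data.Dataset.from_tensor_slices((features, labels)).batch(2)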
Raises
ValueError: if every_n_iter is non-positive, or if the training is not single-machine training.
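
For illustration, building on the example above, a sketch of the hook with all optional arguments set; the specific values (100, 'validation', 1000) are assumptions, not defaults:

evaluator = tf.estimator.experimental.InMemoryEvaluatorHook(
    estimator,
    eval_input_fn,
    steps=100,          # evaluate for at most 100 steps per evaluation run
    hooks=None,         # optional SessionRunHooks to run inside the evaluation call
    name='validation',  # metrics go to a separate folder and appear separately in TensorBoard
    every_n_iter=1000)  # re-run evaluation every 1000 training iterations; must be positive
estimator.train(train_input_fn, hooks=[evaluator])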

Methods

after_create_session

Runs the first evaluation, which reports the eval metrics before training starts.

after_run

Runs evaluator.

before_run

Called before each call to run().

You can return from this call a SessionRunArgs object indicating ops or tensors to add to the upcoming run() call. These ops/tensors will be run together with the ops/tensors originally passed to the run() call. The run args you return can also contain feeds to be added to the run() call.

The run_context argument is a SessionRunContext that provides information about the upcoming run() call: the originally requested op/tensors, the TensorFlow Session.

At this point the graph is finalized and you cannot add ops.

Args
run_context: A SessionRunContext object.
Returns
None or a SessionRunArgs object.
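
As a general illustration of this contract, a sketch of a hypothetical custom hook (not part of InMemoryEvaluatorHook) that requests an extra fetch; the tensor name 'loss:0' is an assumption about the graph:

import tensorflow as tf

class LossLoggingHook(tf.estimator.SessionRunHook):
  """Requests an extra fetch in before_run and reads it back in after_run."""

  def before_run(self, run_context):
    # Ask the upcoming run() call to also fetch the (assumed) loss tensor.
    loss = run_context.session.graph.get_tensor_by_name('loss:0')
    return tf.estimator.SessionRunArgs(fetches={'loss': loss})

  def after_run(self, run_context, run_values):
    # run_values.results contains the extra fetches requested in before_run.
    print('loss:', run_values.results['loss'])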

begin

Builds the eval graph and the restoring op.

end

Runs the evaluator on the final model.

© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/versions/r2.3/api_docs/python/tf/estimator/experimental/InMemoryEvaluatorHook