eval_spec: Configuration for the eval component of 'train_and_evaluate'


View source: R/train_and_evaluate.R

Description

eval_spec() combines details of evaluating the trained model as well as exporting it. Evaluation consists of computing metrics to judge the performance of the trained model. Export writes the trained model out to external storage.

Usage

eval_spec(
  input_fn,
  steps = 100,
  name = NULL,
  hooks = NULL,
  exporters = NULL,
  start_delay_secs = 120,
  throttle_secs = 600
)
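
A minimal sketch of constructing an evaluation spec, assuming the built-in mtcars data set and the package's input_fn() helper for data frames; the row split, step count, and name are illustrative:

library(tfestimators)

# Hold out a few rows of mtcars for evaluation (illustrative split)
eval_data <- mtcars[1:8, ]

# Evaluation spec that runs 10 evaluation steps over the held-out data
spec <- eval_spec(
  input_fn = input_fn(eval_data, features = c("drat", "cyl"), response = "mpg"),
  steps = 10,
  name = "mtcars-eval"
)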

Arguments

input_fn

Evaluation input function returning a tuple of:

  • features - Tensor or dictionary of string feature name to Tensor.

  • labels - Tensor or dictionary of Tensor with labels.

steps

Positive number of steps for which to evaluate the model. If NULL, evaluates until input_fn raises an end-of-input exception.

name

Name of the evaluation if the user needs to run multiple evaluations on different data sets. Metrics for different evaluations are saved in separate folders and appear separately in TensorBoard.

hooks

List of session run hooks to run during evaluation.

exporters

A list of Exporter objects, a single Exporter, or NULL. Exporters are invoked after each evaluation.

start_delay_secs

Start evaluating after waiting for this many seconds.

throttle_secs

Do not re-evaluate unless the last evaluation was started at least this many seconds ago. Evaluation also does not occur if no new checkpoints are available; hence, this is the minimum interval between evaluations.
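
For context, a sketch of how an eval_spec is typically paired with a train_spec and passed to train_and_evaluate(); the linear regressor, feature columns, data split, and timing values are illustrative:

library(tfestimators)

train_data <- mtcars[9:32, ]
eval_data  <- mtcars[1:8, ]

# A simple linear regressor over two numeric features (illustrative model)
model <- linear_regressor(
  feature_columns = feature_columns(column_numeric("drat"), column_numeric("cyl"))
)

train_and_evaluate(
  model,
  train_spec = train_spec(
    input_fn = input_fn(train_data, features = c("drat", "cyl"), response = "mpg"),
    max_steps = 200
  ),
  eval_spec = eval_spec(
    input_fn = input_fn(eval_data, features = c("drat", "cyl"), response = "mpg"),
    steps = 10,
    # Evaluate at most once per minute, starting 10 seconds after training begins
    start_delay_secs = 10,
    throttle_secs = 60
  )
)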

See Also

Other training methods: train_and_evaluate.tf_estimator(), train_spec()
