metrics_f1score: F1Score


View source: R/metrics.R

Description

Computes F-1 Score.

Usage

metrics_f1score(
  num_classes,
  average = NULL,
  threshold = NULL,
  name = "f1_score",
  dtype = tf$float32
)

Arguments

num_classes

Number of unique classes in the dataset.

average

Type of averaging to be performed on data. Acceptable values are NULL, micro, macro and weighted. Default value is NULL. See the sketch after this argument list for example calls.

- NULL: scores for each class are returned.
- micro: true positives, false positives and false negatives are computed globally.
- macro: true positives, false positives and false negatives are computed for each class and their unweighted mean is returned.
- weighted: metrics are computed for each class and their mean, weighted by the number of true instances in each class, is returned.

threshold

Elements of y_pred above threshold are considered to be 1, and the rest 0. If threshold is NULL, the argmax of y_pred is converted to 1, and the rest to 0.

name

(optional) String name of the metric instance.

dtype

(optional) Data type of the metric result. Defaults to 'tf$float32'.
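
As an illustration of the average and threshold arguments (a minimal sketch based only on the argument descriptions above; num_classes = 3 and threshold = 0.5 are arbitrary example values):

# Sketch: selecting the averaging mode described above; values are arbitrary.
f1_per_class <- metrics_f1score(num_classes = 3)                        # average = NULL: one score per class
f1_micro     <- metrics_f1score(num_classes = 3, average = 'micro')     # TP/FP/FN computed globally
f1_macro     <- metrics_f1score(num_classes = 3, average = 'macro')     # unweighted mean over classes
f1_weighted  <- metrics_f1score(num_classes = 3, average = 'weighted')  # mean weighted by class support
f1_thresh    <- metrics_f1score(num_classes = 3, threshold = 0.5)       # binarise y_pred at 0.5 instead of argmax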

Details

The F-1 score is the harmonic mean of precision and recall, with an output range of [0, 1]. It works for both multi-class and multi-label classification.

F-1 = 2 * (precision * recall) / (precision + recall)
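
For instance, a minimal sketch of the formula with assumed values (precision = 0.75 and recall = 0.6 are chosen purely for illustration):

# Illustrative arithmetic only; 0.75 and 0.6 are assumed example values.
precision <- 0.75
recall <- 0.6
2 * (precision * recall) / (precision + recall)   # 0.6666667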

Value

F-1 Score: float

Raises

ValueError: If 'average' has a value other than NULL, micro, macro or weighted.

Examples

## Not run: 
library(keras)
library(tfaddons)

# Track F-1 (3 classes) as a metric while compiling a small model on the iris features.
model = keras_model_sequential() %>%
  layer_dense(units = 10, input_shape = ncol(iris) - 1, activation = activation_lisht) %>%
  layer_dense(units = 3)

model %>% compile(loss = 'categorical_crossentropy',
                  optimizer = optimizer_radam(),
                  metrics = metrics_f1score(num_classes = 3))

## End(Not run)
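
A possible continuation of the example above (not part of the original page; the iris pre-processing and the fit/evaluate calls below are illustrative assumptions):

## Not run: 
x <- as.matrix(iris[, 1:4])
y <- to_categorical(as.integer(iris$Species) - 1, num_classes = 3)

model %>% fit(x, y, epochs = 5, verbose = 0)
model %>% evaluate(x, y)   # reports the loss together with f1_score

## End(Not run)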
