metric_cohen_kappa: Computes Kappa score between two raters


View source: R/metrics.R

Description

Computes Kappa score between two raters

Usage

metric_cohen_kappa(
  num_classes,
  name = "cohen_kappa",
  weightage = NULL,
  sparse_labels = FALSE,
  regression = FALSE,
  dtype = NULL
)

Arguments

num_classes

Number of unique classes in your dataset.

name

(optional) String name of the metric instance.

weightage

(optional) Weighting to be considered for calculating the kappa statistic. Valid values are NULL, 'linear', and 'quadratic'. Defaults to NULL.

sparse_labels

(bool) Valid only for the multi-class scenario. If TRUE, ground-truth labels are expected to be integers rather than one-hot encoded (see the sketch after this argument list).

regression

(bool) If set, the problem is treated as a regression problem in which you regress directly on the predictions. Note: if you are regressing for the values, the output layer should contain a single unit.

dtype

(optional) Data type of the metric result. Defaults to NULL.
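
A minimal construction sketch showing how these arguments combine; the three-class setting, the 'quadratic' weightage, and the sparse labels are illustrative choices, not defaults:

library(tfaddons)

# Unweighted kappa on one-hot encoded ground truth (the default behaviour)
kappa_plain <- metric_cohen_kappa(num_classes = 3)

# Quadratic-weighted kappa with integer (sparse) ground-truth labels
kappa_weighted <- metric_cohen_kappa(
  num_classes = 3,
  weightage = 'quadratic',
  sparse_labels = TRUE
)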

Details

The score lies in the range [-1, 1]. A score of -1 represents complete disagreement between the two raters, whereas a score of 1 represents complete agreement. A score of 0 means the agreement is no better than chance.
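
For intuition, a small base-R illustration of the unweighted kappa statistic on a made-up 3 x 3 confusion matrix (independent of this function; the counts are invented):

# Counts of how two raters labelled 60 items into 3 classes (invented data)
cm  <- matrix(c(20,  5,  0,
                 3, 15,  2,
                 1,  4, 10), nrow = 3, byrow = TRUE)
n   <- sum(cm)
p_o <- sum(diag(cm)) / n                     # observed agreement
p_e <- sum(rowSums(cm) * colSums(cm)) / n^2  # agreement expected by chance
(p_o - p_e) / (1 - p_e)                      # unweighted kappa, roughly 0.62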

Value

A metric instance that can be passed to the metrics argument of compile().

Examples

## Not run: 
library(keras)
library(tfaddons)

model <- keras_model_sequential() %>%
  layer_dense(units = 10, input_shape = ncol(iris) - 1, activation = activation_lisht) %>%
  layer_dense(units = 3, activation = 'softmax')  # softmax so the loss receives probabilities

model %>% compile(loss = 'categorical_crossentropy',
                  optimizer = optimizer_radam(),
                  metrics = metric_cohen_kappa(3))

## End(Not run)
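
As a rough follow-up to the example above (a sketch, assuming the keras package is attached and the model defined above; the epoch and batch-size values are arbitrary), the iris data could be prepared and the model fitted along these lines:

## Not run: 
x <- as.matrix(iris[, 1:4])
# One-hot encode the three species to match categorical_crossentropy
y <- to_categorical(as.integer(iris$Species) - 1, num_classes = 3)

model %>% fit(x, y, epochs = 10, batch_size = 16)

## End(Not run)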
