| metric_cohen_kappa | R Documentation | 
Computes Cohen's kappa score between two raters
metric_cohen_kappa(
  num_classes,
  name = "cohen_kappa",
  weightage = NULL,
  sparse_labels = FALSE,
  regression = FALSE,
  dtype = NULL
)
| num_classes | Number of unique classes in your dataset. | 
| name | (optional) String name of the metric instance. | 
| weightage | (optional) Weighting to be applied when computing the kappa statistic. A valid value is one of NULL (unweighted), 'linear', or 'quadratic'. Defaults to NULL (see the sketch after this table). | 
| sparse_labels | (bool) Valid only for the multi-class scenario. If TRUE, ground-truth labels are expected to be integers rather than one-hot encoded vectors. | 
| regression | (bool) If TRUE, the problem is treated as a regression problem where you regress over the predicted values. Note: when regressing over the values, the output layer should contain a single unit. | 
| dtype | (optional) Data type of the metric result. Defaults to NULL. | 
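The sketch below is a hedged illustration, not part of the package examples, of how weightage and sparse_labels might be combined when the ground truth is stored as integer class ids; the sparse_categorical_crossentropy loss is assumed here to match the sparse labels.

## Not run: 
# Quadratic-weighted kappa with integer (sparse) ground-truth labels.
metric <- metric_cohen_kappa(num_classes = 3,
                             weightage = 'quadratic',
                             sparse_labels = TRUE)
model %>% compile(loss = 'sparse_categorical_crossentropy',
                  optimizer = optimizer_radam(),
                  metrics = metric)
## End(Not run)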
The score lies in the range [-1, 1]. A score of -1 represents complete disagreement between two raters whereas a score of 1 represents complete agreement between the two raters. A score of 0 means agreement by chance.
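For intuition, an unweighted kappa can be computed directly in base R from the agreement table of two hypothetical raters; this is illustration only, during training the metric is computed by TensorFlow.

rater_a <- c(1, 2, 3, 1, 2, 3, 1, 2)          # hypothetical labels from rater A
rater_b <- c(1, 2, 3, 1, 2, 1, 3, 2)          # hypothetical labels from rater B
cm  <- table(rater_a, rater_b)                # confusion (agreement) matrix
n   <- sum(cm)
p_o <- sum(diag(cm)) / n                      # observed agreement
p_e <- sum(rowSums(cm) * colSums(cm)) / n^2   # agreement expected by chance
(p_o - p_e) / (1 - p_e)                       # ~0.62: moderate agreement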
A metric instance that can be passed to the metrics argument of compile().
## Not run: 
library(keras)
library(tfaddons)

# 3-class classifier on the four iris features, tracking Cohen's kappa.
model <- keras_model_sequential() %>%
  layer_dense(units = 10, input_shape = ncol(iris) - 1,
              activation = activation_lisht) %>%
  layer_dense(units = 3, activation = 'softmax')
model %>% compile(loss = 'categorical_crossentropy',
                  optimizer = optimizer_radam(),
                  metrics = metric_cohen_kappa(3))
## End(Not run)