Kappa: Calculates kappa statistic and other classification error statistics

View source: R/Kappa.R


Calculates kappa statistic and other classification error statistics

Description

The kappa statistic, along with user and producer error rates, is conventionally used in remote sensing to describe the effectiveness of ground-cover classifications. Because it simultaneously accounts for errors of both commission and omission, kappa can be considered a more conservative measure of classification accuracy than the percentage of correctly classified items.
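The idea behind kappa can be sketched directly from a contingency table: observed agreement is the proportion on the diagonal, expected (chance) agreement comes from the row and column marginals, and kappa rescales the difference. The function below is a minimal illustration of that formula, not asbio's implementation; `kappa_sketch` is a hypothetical name.

```r
# Minimal sketch of the kappa statistic (assumption: not asbio's actual code).
# p_o = observed agreement (diagonal proportion); p_e = agreement expected by
# chance from the row and column marginals; kappa = (p_o - p_e) / (1 - p_e).
kappa_sketch <- function(class1, reference) {
  tab <- table(class1, reference)
  n <- sum(tab)
  p_o <- sum(diag(tab)) / n                      # observed agreement
  p_e <- sum(rowSums(tab) * colSums(tab)) / n^2  # chance agreement
  (p_o - p_e) / (1 - p_e)
}

reference <- c("hi", "low", "low", "hi", "low", "med", "med")
class1 <- c("hi", "hi", "low", "hi", "med", "med", "med")
kappa_sketch(class1, reference)  # about 0.588 for these data
```

Note that `diag(tab)` assumes both vectors share the same set of categories, so the table is square with matching row and column order.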

Usage

Kappa(class1, reference)

Arguments

class1

A vector describing a classification of experimental units.

reference

A vector describing the "correct" classification of the experimental units in class1.

Value

Returns a list with five items:

ttl_agreement

The percentage of correctly classified items.

user_accuracy

The user accuracy for each category of the classification.

producer_accuracy

The producer accuracy for each category of the classification.

kappa

The kappa statistic.

table

A two-way contingency table comparing the user-supplied classification to the reference classification.
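The per-category accuracies above can be read straight off that contingency table. As a hedged sketch (an assumption about the conventional definitions, not asbio's exact code): with the classification on the rows and the reference on the columns, user accuracy divides each diagonal cell by its row total (commission errors), and producer accuracy divides it by its column total (omission errors).

```r
# Sketch of the per-category accuracies (assumption: standard remote-sensing
# definitions, with class1 on rows and reference on columns).
tab <- table(class1 = c("hi", "hi", "low", "hi", "med", "med", "med"),
             reference = c("hi", "low", "low", "hi", "low", "med", "med"))
user_accuracy <- diag(tab) / rowSums(tab)      # correct / total claimed per category
producer_accuracy <- diag(tab) / colSums(tab)  # correct / total actual per category
ttl_agreement <- 100 * sum(diag(tab)) / sum(tab)  # percent correctly classified
```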

Author(s)

Ken Aho

References

Jensen, J. R. (1996) Introductory Digital Image Processing: A Remote Sensing Perspective, 2nd edition. Prentice-Hall.

Examples

reference <- c("hi", "low", "low", "hi", "low", "med", "med")
class1 <- c("hi", "hi", "low", "hi", "med", "med", "med")
Kappa(class1, reference)

asbio documentation built on May 29, 2024, 5:57 a.m.