calcm: Computation of mass functions from coefficients alpha and beta

View source: R/calcm.R

calcm (R Documentation)

Computation of mass functions from coefficients alpha and beta

Description

calcm transforms the coefficients alpha and beta computed by calcAB into weights of evidence, and then into mass and contour (plausibility) functions. These mass functions can be used to express uncertainty about the predictions of logistic regression or multilayer neural network classifiers (see Denoeux, 2019).

Usage

calcm(x, A, B)

Arguments

x

Matrix (n,d) of feature values, where d is the number of features and n is the number of observations. Can be a vector if d=1.

A

Vector of length d (for M=2) or matrix of size (d,M) (for M>2) of coefficients alpha.

B

Vector of length d (for M=2) or matrix of size (d,M) (for M>2) of coefficients beta.

Details

An error may occur if the absolute values of some coefficients are too high. In that case, it is advised to recompute the coefficients by training the logistic regression or neural network classifier with L2 regularization. With M classes, the output mass functions have 2^M focal sets, so using this function with large M may cause memory problems.
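As a rough guide (an illustrative sketch, not part of the package), the dense (n, 2^M) mass matrix alone requires n * 2^M doubles:

```r
## Illustrative only: storage needed for the (n, 2^M) mass matrix of doubles
n <- 1000                    # number of observations
M <- c(2, 5, 10, 20)         # numbers of classes
cells <- n * 2^M             # entries in the (n, 2^M) mass matrix
mem_gb <- cells * 8 / 2^30   # 8 bytes per double
data.frame(M = M, focal_sets = 2^M, memory_gb = round(mem_gb, 4))
```

With M = 20 and n = 1000, the mass matrix alone takes about 7.8 GB, which is why large M can exhaust memory.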

Value

A list with six elements:

F

Matrix (2^M,M) of focal sets.

mass

Matrix (n,2^M) of mass functions (one in each row).

pl

Matrix (n,M) containing the plausibilities of singletons.

bel

Matrix (n,M) containing the degrees of belief of singletons.

prob

Matrix (n,M) containing the normalized plausibilities of singletons.

conf

Vector of length n containing the degrees of conflict.
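The elements above are linked by simple relations. The following self-contained sketch illustrates them for M = 2; the binary encoding of focal sets and the reading of conf as the mass of the empty set are assumptions made for illustration, not evclass internals:

```r
## Focal sets of {1,2} encoded as binary rows: {}, {1}, {2}, {1,2}
Fmat <- rbind(c(0, 0), c(1, 0), c(0, 1), c(1, 1))
m <- c(0.1, 0.3, 0.2, 0.4)   # one mass function (sums to 1)
## Plausibility of singleton k: total mass of focal sets containing k
pl <- sapply(1:2, function(k) sum(m[Fmat[, k] == 1]))
## Belief of singleton k: total mass of nonempty focal sets included in {k}
bel <- sapply(1:2, function(k) sum(m[Fmat[, k] == 1 & rowSums(Fmat) == 1]))
prob <- pl / sum(pl)         # normalized plausibilities
conf <- m[1]                 # mass of the empty set (degree of conflict)
```

Here pl = (0.7, 0.6), bel = (0.3, 0.2), prob = pl / 1.3, and conf = 0.1.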

Author(s)

Thierry Denoeux.

References

T. Denoeux. Logistic Regression, Neural Networks and Dempster-Shafer Theory: a New Perspective. Knowledge-Based Systems, Vol. 176, Pages 54–67, 2019.

See Also

calcAB

Examples

## Example with 2 classes and logistic regression
data(ionosphere)
x<-ionosphere$x[,-2]
y<-ionosphere$y-1
fit<-glm(y ~ x,family='binomial')
AB<-calcAB(fit$coefficients,colMeans(x))
Bel<-calcm(x,AB$A,AB$B)
Bel$F
Bel$mass[1:5,]
Bel$pl[1:5,]
Bel$conf[1:5]
## Example with K>2 classes and multilayer neural network
library(nnet)
data(glass)
K<-max(glass$y)
d<-ncol(glass$x)
x<-scale(glass$x)
n<-nrow(x)
y<-as.factor(glass$y)
p<-3 # number of hidden units
fit<-nnet(y~x,size=p)  # training a neural network with 3 hidden units
W1<-matrix(fit$wts[1:(p*(d+1))],d+1,p) # Input-to-hidden weights
W2<-matrix(fit$wts[(p*(d+1)+1):(p*(d+1) + K*(p+1))],p+1,K) # hidden-to-output weights
a1<-cbind(rep(1,n),x)%*%W1  # hidden unit activations
o1<-1/(1+exp(-a1)) # hidden unit outputs
AB<-calcAB(W2,colMeans(o1))
Bel<-calcm(o1,AB$A,AB$B)
Bel$F
Bel$mass[1:5,]
Bel$pl[1:5,]
Bel$conf[1:5]
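A hedged follow-up: crisp class predictions can be obtained from the prob element by taking the row-wise maximum. The toy matrix below is illustrative, not computed from the datasets above:

```r
## Toy normalized plausibilities for 3 observations and 2 classes
prob <- rbind(c(0.80, 0.20),
              c(0.30, 0.70),
              c(0.55, 0.45))
ypred <- apply(prob, 1, which.max)  # most plausible class per row
ypred
```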

evclass documentation built on Nov. 9, 2023, 5:08 p.m.