KLR: Kernel Logistic Regression


View source: R/KLR.R

Description

The function performs a kernel logistic regression for binary outputs.

Usage

KLR(X, y, xnew, lambda = 0.01, kernel = c("matern", "exponential")[1],
  nu = 1.5, power = 1.95, rho = 0.1)

Arguments

X

a design matrix with dimension n by d.

y

a response vector with length n. The values in the vector are 0 or 1.

xnew

a testing matrix with dimension n_new by d in which each row corresponds to a predictive location.

lambda

a positive value specifying the tuning parameter for KLR. The default is 0.01.

kernel

"matern" or "exponential" which specifies the matern kernel or power exponential kernel. The default is "matern".

nu

a positive value specifying the order of the Matern kernel when kernel == "matern". The default is 1.5.

power

a positive value (between 1.0 and 2.0) specifying the power of the power exponential kernel when kernel == "exponential". The default is 1.95.

rho

a positive value specifying the scale parameter of the Matern and power exponential kernels. The default is 0.1.

Details

This function performs a kernel logistic regression, where the kernel can be chosen to be either the Matern kernel or the power exponential kernel via the argument kernel. The arguments power and rho are the tuning parameters of the power exponential kernel, and nu and rho are the tuning parameters of the Matern kernel. The power exponential kernel has the form

K_{ij} = \exp\left(-\frac{\sum_{k} |x_{ik}-x_{jk}|^{power}}{\rho}\right),

and the Matern kernel has the form

K_{ij} = \prod_{k} \frac{1}{\Gamma(\nu) 2^{\nu-1}} \left(2\sqrt{\nu}\,\frac{|x_{ik}-x_{jk}|}{\rho}\right)^{\nu} \kappa_{\nu}\!\left(2\sqrt{\nu}\,\frac{|x_{ik}-x_{jk}|}{\rho}\right),

where \kappa_{\nu} is the modified Bessel function of the second kind of order \nu, and power, \nu, and \rho correspond to the arguments power, nu, and rho.

The argument lambda is the tuning parameter that controls the smoothness of the fitted function.
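
For illustration only, these kernel matrices can be written out directly in base R. The helper names below (pow_exp_kernel, matern_kernel) are not part of the package; they are a sketch of the formulas above, with besselK() playing the role of the modified Bessel function \kappa_{\nu}.

## Sketch of the kernels defined above (not the package's internal code).
## X1 and X2 are numeric matrices with the same number of columns.
pow_exp_kernel <- function(X1, X2, power = 1.95, rho = 0.1) {
  K <- matrix(0, nrow(X1), nrow(X2))
  for (i in seq_len(nrow(X1)))
    for (j in seq_len(nrow(X2)))
      K[i, j] <- exp(-sum(abs(X1[i, ] - X2[j, ])^power) / rho)
  K
}

matern_kernel <- function(X1, X2, nu = 1.5, rho = 0.1) {
  K <- matrix(0, nrow(X1), nrow(X2))
  for (i in seq_len(nrow(X1)))
    for (j in seq_len(nrow(X2))) {
      r <- 2 * sqrt(nu) * abs(X1[i, ] - X2[j, ]) / rho  # scaled distance per dimension
      k <- numeric(length(r))
      k[r == 0] <- 1                                    # limit of the Matern form at distance 0
      k[r > 0]  <- r[r > 0]^nu * besselK(r[r > 0], nu) / (gamma(nu) * 2^(nu - 1))
      K[i, j] <- prod(k)
    }
  K
}

For a one-dimensional design such as the one in the Examples, matern_kernel(as.matrix(xp), as.matrix(xp)) would give an n by n matrix of the form shown above (the scaling conventions inside KLR itself may differ).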

Value

Predictive probabilities at given locations xnew.
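
The result can be used directly as the estimated probabilities that the response equals 1, one value per row of xnew. If hard 0/1 labels are needed, one simple post-processing step, not part of the package itself, is to threshold them:

phat <- KLR(X, y, xnew)         # predictive probabilities at the rows of xnew
yhat <- as.integer(phat > 0.5)  # illustrative 0.5 cutoff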

Author(s)

Chih-Li Sung <iamdfchile@gmail.com>

References

Zhu, J. and Hastie, T. (2005). Kernel logistic regression and the import vector machine. Journal of Computational and Graphical Statistics, 14(1), 185-205.

See Also

cv.KLR for performing cross-validation to choose the tuning parameters.

Examples

library(calibrateBinary)

set.seed(1)
np <- 10
xp <- seq(0,1,length.out = np)
eta_fun <- function(x) exp(exp(-0.5*x)*cos(3.5*pi*x)-1) # true probability function
eta_x <- eta_fun(xp)
yp <- rep(0,np)
for(i in 1:np) yp[i] <- rbinom(1,1, eta_x[i])

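# predict probabilities over a fine grid of test locations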
x.test <- seq(0,1,0.001)
etahat <- KLR(xp,yp,x.test)

plot(xp,yp)
curve(eta_fun, col = "blue", lty = 2, add = TRUE)
lines(x.test, etahat, col = 2)

#####   cross-validation with K=5    #####
##### to determine the parameter rho #####

cv.out <- cv.KLR(xp,yp,K=5)
print(cv.out)

etahat.cv <- KLR(xp,yp,x.test,lambda=cv.out$lambda,rho=cv.out$rho)

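# compare the initial fit (red) and the cross-validated fit (green) with the true curve (blue, dashed)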
plot(xp,yp)
curve(eta_fun, col = "blue", lty = 2, add = TRUE)
lines(x.test, etahat, col = 2)
lines(x.test, etahat.cv, col = 3)
