
Description

Fit the linear generalized distance weighted discrimination (DWD) model and the generalized DWD on a reproducing kernel Hilbert space. The solution path is computed at a grid of values of the tuning parameter `lambda`.

Arguments

`x`
A numerical matrix with N rows and p columns; each row is an observation.

`y`
A response vector of length N; class labels are expected to be coded as -1 and 1.

`kern`
A kernel function, such as `vanilladot()` (linear kernel) or `rbfdot()` (Gaussian kernel) used in the examples below.

`lambda`
A user-supplied sequence of `lambda` values.

`qval`
The exponent index q of the generalized DWD. Default value is 1.

`wt`
A vector of length N of weight factors, with w_i weighting the ith observation as in the weighted DWD (see Details).

`eps`
The convergence threshold: the algorithm stops when the change in the coefficients between successive iterations drops below `eps`. The examples below use `1e-5`.

`maxit`
The maximum number of iterations allowed. Default is 1e5.

Details

Suppose that the generalized DWD loss is *V_q(u) = 1 - u* if *u <= q/(q+1)* and *V_q(u) = (1/u)^q * q^q/(q+1)^{(q+1)}* if *u > q/(q+1)*. The value of *λ*, i.e., `lambda`, is user-specified.
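The two branches of *V_q* meet at the cut point *u = q/(q+1)*, where both equal *1/(q+1)*, so the loss is continuous (and in fact differentiable) there. A small base-R sketch of the loss (the function `dwd_loss` below is illustrative and not exported by the package):

```r
# generalized DWD loss V_q(u); illustrative helper, not part of kerndwd
dwd_loss = function(u, q = 1) {
  cut = q / (q + 1)
  ifelse(u <= cut, 1 - u, (1 / u)^q * q^q / (q + 1)^(q + 1))
}

# both branches agree at the cut point u = q/(q+1)
for (q in c(0.5, 1, 2)) {
  cut = q / (q + 1)
  stopifnot(isTRUE(all.equal(dwd_loss(cut, q), 1 / (q + 1))))
}

# with q = 1 the loss is 1 - u for u <= 1/2 and 1/(4u) afterwards
dwd_loss(c(0, 0.5, 1), q = 1)   # 1.00 0.50 0.25
```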

In the linear case (`kern` is the inner product and N > p), `kerndwd` fits a linear DWD by minimizing the L2-penalized DWD loss function,

*(1/N) * sum_i [V_q(y_i(β_0 + X_i'β))] + λ β' β.*

If a linear DWD is fitted when N < p, a kernel DWD with the linear kernel is solved instead. In that case, the linear coefficient *β* can be recovered from *β = X'α.*
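This works because with the linear kernel the kernel matrix is *K = XX'*, so the kernel-side scores *Kα* coincide with the linear scores *Xβ* when *β = X'α*. A quick numerical check with made-up data (base R only, purely illustrative):

```r
set.seed(1)
N = 5; p = 20                       # N < p, the case where the linear kernel is used
X = matrix(rnorm(N * p), N, p)
alpha = rnorm(N)                    # stand-in for fitted kernel coefficients
beta = drop(t(X) %*% alpha)         # beta = X' alpha
K = X %*% t(X)                      # linear (vanilladot) kernel matrix

# the kernel evaluation K alpha reproduces the linear scores X beta
stopifnot(isTRUE(all.equal(drop(K %*% alpha), drop(X %*% beta))))
```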

In the kernel case, `kerndwd` fits a kernel DWD by minimizing

*(1/N) * sum_i [V_q(y_i(β_0 + K_i' α))] + λ α' K α,*

where *K* is the kernel matrix and *K_i* is the ith row.

The weighted linear DWD and the weighted kernel DWD are formulated as follows,

*(1/N) * sum_i [w_i * V_q(y_i(β_0 + X_i'β))] + λ β' β,*

*(1/N) * sum_i [w_i * V_q(y_i(β_0 + K_i' α))] + λ α' K α,*

where *w_i* is the ith element of `wt`. The choice of weight factors is discussed in Qiao et al. (2010), listed in the references below.
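Setting all *w_i = 1* recovers the unweighted objectives above. A sketch of the weighted linear objective (the helper `dwd_objective` is illustrative, not part of the package):

```r
# weighted, L2-penalized linear DWD objective; illustrative sketch only
dwd_objective = function(beta0, beta, X, y, lambda, q = 1, wt = rep(1, nrow(X))) {
  u = y * (beta0 + drop(X %*% beta))
  cut = q / (q + 1)
  V = ifelse(u <= cut, 1 - u, (1 / u)^q * q^q / (q + 1)^(q + 1))
  mean(wt * V) + lambda * sum(beta^2)
}

set.seed(1)
X = matrix(rnorm(40), 10, 4)
y = rep(c(-1, 1), 5)
beta = rnorm(4)

# doubling every weight doubles the loss term (penalty left out via lambda = 0)
o1 = dwd_objective(0, beta, X, y, lambda = 0, wt = rep(2, 10))
o2 = dwd_objective(0, beta, X, y, lambda = 0)
stopifnot(isTRUE(all.equal(o1, 2 * o2)))
```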

Value

An object with S3 class `kerndwd` containing the components listed below.

`alpha`
A matrix of DWD coefficients, one column for each `lambda` value.

`lambda`
The `lambda` sequence used in the fit.

`npass`
The total number of MM iterations over all `lambda` values.

`jerr`
Warnings and errors; 0 if none.

`info`
A list including the parameters of the loss function, among other information about the fit.

`call`
The call that produced this object.

Author(s)

Boxiang Wang and Hui Zou

Maintainer: Boxiang Wang [email protected]

References

Wang, B. and Zou, H. (2018), "Another Look at Distance Weighted Discrimination," *Journal of the Royal Statistical Society, Series B*, **80**(1), 177–198.
https://rss.onlinelibrary.wiley.com/doi/10.1111/rssb.12244

Karatzoglou, A., Smola, A., Hornik, K., and Zeileis, A. (2004), "kernlab – An S4 Package for Kernel Methods in R," *Journal of Statistical Software*, **11**(9), 1–20.
http://www.jstatsoft.org/v11/i09/paper

Friedman, J., Hastie, T., and Tibshirani, R. (2010), "Regularization Paths for Generalized Linear Models via Coordinate Descent," *Journal of Statistical Software*, **33**(1), 1–22.
http://www.jstatsoft.org/v33/i01/paper

Marron, J.S., Todd, M.J., and Ahn, J. (2007), "Distance-Weighted Discrimination," *Journal of the American Statistical Association*, **102**(480), 1267–1271.
https://faculty.franklin.uga.edu/jyahn/sites/faculty.franklin.uga.edu.jyahn/files/DWD3.pdf

Qiao, X., Zhang, H., Liu, Y., Todd, M., and Marron, J.S. (2010), "Weighted Distance Weighted Discrimination and Its Asymptotic Properties," *Journal of the American Statistical Association*, **105**(489), 401–414.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2996856/

See Also

`predict.kerndwd`, `plot.kerndwd`, and `cv.kerndwd`.

Examples

```r
data(BUPA)
# standardize the predictors
BUPA$X = scale(BUPA$X, center=TRUE, scale=TRUE)
# a grid of tuning parameters
lambda = 10^(seq(3, -3, length.out=10))
# fit a linear DWD
kern = vanilladot()
DWD_linear = kerndwd(BUPA$X, BUPA$y, kern,
  qval=1, lambda=lambda, eps=1e-5, maxit=1e5)
# fit a DWD using the Gaussian kernel
kern = rbfdot(sigma=1)
DWD_Gaussian = kerndwd(BUPA$X, BUPA$y, kern,
  qval=1, lambda=lambda, eps=1e-5, maxit=1e5)
# fit a weighted kernel DWD
kern = rbfdot(sigma=1)
weights = c(1, 2)[factor(BUPA$y)]
DWD_wtGaussian = kerndwd(BUPA$X, BUPA$y, kern,
  qval=1, lambda=lambda, wt=weights, eps=1e-5, maxit=1e5)
```
