Description

Infers the problem type and learns the appropriate GELnet model via coordinate descent.

Usage

gelnet(X, y, l1, l2, nFeats = NULL, a, d, P, m, max.iter, eps,
       w.init, b.init, fix.bias = FALSE, silent = FALSE,
       balanced = FALSE, nonneg = FALSE)

Arguments

`X`
n-by-p matrix of n samples in p dimensions

`y`
n-by-1 vector of response values. Must be a numeric vector for regression, a factor with two levels for binary classification, or NULL for a one-class task.

`l1`
coefficient for the L1-norm penalty

`l2`
coefficient for the L2-norm penalty

`nFeats`
alternative parameterization that yields the desired number of non-zero weights. Takes precedence over `l1` when not NULL (default: NULL)

`a`
n-by-1 vector of sample weights (regression only)

`d`
p-by-1 vector of feature weights

`P`
p-by-p feature-association penalty matrix

`m`
p-by-1 vector of translation coefficients

`max.iter`
maximum number of iterations

`eps`
convergence precision

`w.init`
initial parameter estimate for the weights

`b.init`
initial parameter estimate for the bias term

`fix.bias`
set to TRUE to prevent the bias term from being updated (regression only) (default: FALSE)

`silent`
set to TRUE to suppress run-time output to stdout (default: FALSE)

`balanced`
boolean specifying whether the balanced model is being trained (binary classification only) (default: FALSE)

`nonneg`
set to TRUE to enforce non-negativity constraints on the weights (default: FALSE)
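Because a larger L1 penalty drives more weights to exactly zero, an `nFeats`-style parameterization is commonly implemented by searching over the penalty strength. The sketch below (plain Python, not the package's actual implementation; `train` is a hypothetical callback standing in for a single fit at fixed `l1`) shows one such search by bisection:

```python
def n_nonzero(w, tol=1e-8):
    """Count weights whose magnitude exceeds a small tolerance."""
    return sum(1 for wj in w if abs(wj) > tol)

def l1_for_nfeats(train, X, y, n_feats, l1_lo=0.0, l1_hi=10.0, max_iter=50):
    """Bisect on l1 until the fitted model keeps n_feats non-zero weights.

    `train(X, y, l1)` is a hypothetical callback that returns the weight
    vector of a model fit with L1 penalty l1.
    """
    l1 = 0.5 * (l1_lo + l1_hi)
    for _ in range(max_iter):
        l1 = 0.5 * (l1_lo + l1_hi)
        k = n_nonzero(train(X, y, l1))
        if k == n_feats:
            return l1
        if k > n_feats:      # too many features kept -> increase the penalty
            l1_lo = l1
        else:                # too few features kept -> decrease the penalty
            l1_hi = l1
    return l1
```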

Details

The method determines the problem type from the labels argument y. If y is a numeric vector, then a regression model is trained by optimizing the following objective function:

\frac{1}{2n} \sum_i a_i (y_i - (w^T x_i + b))^2 + R(w)
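As a concrete reference, here is a minimal plain-Python sketch of this regression objective (the package itself works in R; `reg` stands in for the regularizer R(w) defined later in this section):

```python
def regression_objective(X, y, a, w, b, reg):
    """1/(2n) * sum_i a_i * (y_i - (w'x_i + b))^2 + R(w)."""
    n = len(y)
    total = 0.0
    for xi, yi, ai in zip(X, y, a):
        s = sum(wj * xij for wj, xij in zip(w, xi)) + b   # s_i = w'x_i + b
        total += ai * (yi - s) ** 2
    return total / (2.0 * n) + reg(w)
```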

If y is a factor with two levels, then the function returns a binary classification model, obtained by optimizing the following objective function:

-\frac{1}{n} \sum_i \left[ y_i s_i - \log( 1 + \exp(s_i) ) \right] + R(w)

where

s_i = w^T x_i + b
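The same objective as a plain-Python sketch, with y_i coded as 0/1 (an assumption for illustration; the function itself takes a two-level factor). `reg` again stands in for the regularizer R(w):

```python
import math

def logreg_objective(X, y, w, b, reg):
    """-1/n * sum_i [ y_i s_i - log(1 + exp(s_i)) ] + R(w)."""
    n = len(y)
    total = 0.0
    for xi, yi in zip(X, y):
        s = sum(wj * xij for wj, xij in zip(w, xi)) + b   # s_i = w'x_i + b
        total += yi * s - math.log1p(math.exp(s))
    return -total / n + reg(w)
```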

Finally, if no labels are provided (y == NULL), then a one-class model is constructed using the following objective function:

-\frac{1}{n} \sum_i \left[ s_i - \log( 1 + \exp(s_i) ) \right] + R(w)

where

s_i = w^T x_i
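The corresponding plain-Python sketch for the one-class objective; note there are no labels and no bias term:

```python
import math

def oneclass_objective(X, w, reg):
    """-1/n * sum_i [ s_i - log(1 + exp(s_i)) ] + R(w), with s_i = w'x_i."""
    n = len(X)
    total = 0.0
    for xi in X:
        s = sum(wj * xij for wj, xij in zip(w, xi))       # s_i = w'x_i
        total += s - math.log1p(math.exp(s))
    return -total / n + reg(w)
```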

In all cases, the regularizer is defined by

R(w) = \lambda_1 \sum_j d_j |w_j| + \frac{\lambda_2}{2} (w - m)^T P (w - m)
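The regularizer combines a weighted L1 term with a quadratic term that penalizes the deviation of w from the translation vector m through the association matrix P. A plain-Python sketch:

```python
def gelnet_regularizer(w, d, P, m, l1, l2):
    """l1 * sum_j d_j |w_j| + (l2 / 2) * (w - m)' P (w - m)."""
    lasso = l1 * sum(dj * abs(wj) for dj, wj in zip(d, w))
    v = [wj - mj for wj, mj in zip(w, m)]                 # v = w - m
    quad = sum(v[i] * sum(P[i][j] * v[j] for j in range(len(v)))
               for i in range(len(v)))                    # v' P v
    return lasso + 0.5 * l2 * quad
```

With P equal to the identity matrix and m = 0, this reduces to the standard elastic-net penalty.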

The training itself is performed through cyclical coordinate descent; the optimization terminates once the desired tolerance is achieved or after the maximum number of iterations is reached.
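The control flow can be sketched generically as follows (illustrative only; the actual GELnet coordinate updates are more involved). `update_j` is a hypothetical single-coordinate update rule; the stopping logic mirrors the `max.iter` and `eps` arguments described above:

```python
def coordinate_descent(w, update_j, max_iter=100, eps=1e-5):
    """Cyclical coordinate descent: update one weight at a time until the
    largest single-coordinate change falls below eps, or until max_iter
    full passes over the coordinates have been made."""
    for _ in range(max_iter):
        delta = 0.0
        for j in range(len(w)):
            w_new = update_j(w, j)
            delta = max(delta, abs(w_new - w[j]))
            w[j] = w_new
        if delta < eps:        # converged to the desired tolerance
            break
    return w
```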

Value

A list with two elements:

- w
p-by-1 vector of model weights

- b
scalar, bias term for the linear model (omitted for one-class models)

See Also

`gelnet.lin.obj`, `gelnet.logreg.obj`, `gelnet.oneclass.obj`
