LauraeML_gblinear: Laurae's Machine Learning (xgboost gblinear helper function)

Description Usage Arguments Value Examples

Description

This function is a demonstration function for using xgboost gblinear in LauraeML without premade folds. It has alpha, lambda, and lambda_bias as tunable hyperparameters. It also supports feature selection and performs full logging (every step is commented in the source), writing to an external file so the hyperparameters and feature count can be tracked.

Usage

LauraeML_gblinear(x, y, mobile, parallelized, maximize, logging, data, label,
  folds)

Arguments

x

Type: vector (numeric). The hyperparameters to use.

y

Type: vector (numeric). The features to use, as binary format (0 for not using, 1 for using).

mobile

Type: environment. The environment passed from LauraeML.

parallelized

Type: parallel socket cluster (makeCluster or similar). The parallelized parameter passed from LauraeML (whether or not to parallelize training per fold).

maximize

Type: boolean. The maximize parameter passed from LauraeML (whether or not to maximize the metric).

logging

Type: character. The logging parameter passed from LauraeML (where to store the log file).

data

Type: data.table (mandatory). The data features. Comes from LauraeML.

label

Type: vector (numeric). The labels. Comes from LauraeML.

folds

Type: list of numeric vectors. The folds, as a list. Comes from LauraeML.

Value

The score of the cross-validated xgboost gblinear model, for the provided hyperparameters and features to use.

Examples

## Not run: 
# To add

## End(Not run)
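Since the package's own example is still marked "To add", here is a hypothetical usage sketch based only on the signature and argument descriptions above. The hyperparameter ordering in x (alpha, lambda, lambda_bias), the fold format (row indices per fold), the dummy data, and passing FALSE for parallelized are all assumptions, not taken from the package:

```r
## Not run:
library(data.table)

# Dummy data in the documented types (data.table features, numeric labels)
data  <- data.table(a = rnorm(100), b = rnorm(100), c = rnorm(100))
label <- rnorm(100)
folds <- list(1:33, 34:66, 67:100)  # assumed format: row indices per fold

score <- LauraeML_gblinear(
  x = c(0.1, 0.1, 0.1),    # assumed order: alpha, lambda, lambda_bias
  y = c(1, 1, 0),          # binary mask: use features a and b, drop c
  mobile = new.env(),      # environment normally supplied by LauraeML
  parallelized = FALSE,    # assumed accepted in place of a socket cluster
  maximize = FALSE,        # minimize the metric
  logging = "log.txt",     # log file for hyperparameters and feature count
  data = data,
  label = label,
  folds = folds
)

## End(Not run)
```

In normal use this function is not called directly; LauraeML supplies x, y, mobile, parallelized, maximize, logging, data, label, and folds itself during hyperparameter optimization.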

Laurae2/Laurae documentation built on May 8, 2019, 7:59 p.m.