Regularized Greedy Forest regressor
# init <- RGF_Regressor$new(max_leaf = 500, test_interval = 100,
# algorithm = "RGF", loss = "LS", reg_depth = 1.0,
# l2 = 0.1, sl2 = NULL, normalize = TRUE,
# min_samples_leaf = 10, n_iter = NULL,
# n_tree_search = 1, opt_interval = 100,
# learning_rate = 0.5, memory_policy = "generous",
# verbose = 0, init_model = NULL)
The fit function builds a regressor from the training set (x, y).

The predict function predicts the regression target for x.

The cleanup function removes the temporary files used by this model. See https://github.com/RGF-team/rgf/issues/75, which explains the cases in which the cleanup function applies.

The get_params function returns the parameters of the model.

The score function returns the coefficient of determination (R^2) of the predictions.

The feature_importances function returns the feature importances of the data.

The dump_model function currently prints information about the fitted model to the console.

The save_model function saves a model to a file from which training can be warm-started in the future.
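A minimal sketch of these methods in use (the data below is simulated, and running it assumes a working installation of the RGF package together with the Python backend it wraps):

```r
require(RGF)

# simulated regression data: 100 observations, 10 features
x <- matrix(runif(1000), nrow = 100, ncol = 10)
y <- rnorm(100)

init <- RGF_Regressor$new(max_leaf = 50)   # small forest for illustration
init$fit(x, y)                             # build the regressor

preds <- init$predict(x)                   # predicted regression targets
r2    <- init$score(x, y)                  # coefficient of determination

params <- init$get_params()                # list of model parameters
imp    <- init$feature_importances()       # per-feature importances

init$cleanup()                             # remove the temporary model files
```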
RGF_Regressor$new(max_leaf = 500, test_interval = 100,
algorithm = "RGF", loss = "LS", reg_depth = 1.0,
l2 = 0.1, sl2 = NULL, normalize = TRUE,
min_samples_leaf = 10, n_iter = NULL,
n_tree_search = 1, opt_interval = 100,
learning_rate = 0.5, memory_policy = "generous",
                  verbose = 0, init_model = NULL)

fit(x, y, sample_weight = NULL)

predict(x)

cleanup()

get_params(deep = TRUE)

score(x, y, sample_weight = NULL)

feature_importances()

dump_model()

save_model(filename)

Super class: RGF::Internal_class -> RGF_Regressor
Method new():

RGF_Regressor$new(max_leaf = 500, test_interval = 100, algorithm = "RGF",
                  loss = "LS", reg_depth = 1, l2 = 0.1, sl2 = NULL,
                  normalize = TRUE, min_samples_leaf = 10, n_iter = NULL,
                  n_tree_search = 1, opt_interval = 100, learning_rate = 0.5,
                  memory_policy = "generous", verbose = 0, init_model = NULL)

Arguments:
max_leaf: an integer. Training terminates when the number of leaf nodes in the forest reaches this value.

test_interval: an integer. Test interval in terms of the number of leaf nodes.

algorithm: a character string specifying the regularization algorithm. One of "RGF" (RGF with L2 regularization on leaf-only models), "RGF_Opt" (RGF with min-penalty regularization) or "RGF_Sib" (RGF with min-penalty regularization with the sum-to-zero sibling constraints).

loss: a character string specifying the loss function. One of "LS" (square loss), "Expo" (exponential loss) or "Log" (logistic loss).

reg_depth: a float. Must be no smaller than 1.0. Meant to be used with the "RGF_Opt" or "RGF_Sib" algorithms. A larger value penalizes deeper nodes more severely.

l2: a float. Controls the degree of L2 regularization.

sl2: a float or NULL. Overrides the L2 regularization parameter l2 for the process of growing the forest. That is, if specified, the weight correction process uses l2 and the forest growing process uses sl2. If NULL, no override takes place and l2 is used throughout training.

normalize: a boolean. If TRUE, training targets are normalized so that their average becomes zero.

min_samples_leaf: an integer or a float. Minimum number of training data points in each leaf node. If an integer, min_samples_leaf is taken as the minimum number itself. If a float, it is treated as a percentage, and ceiling(min_samples_leaf * n_samples) is the minimum number of samples for each node.

n_iter: an integer or NULL. The number of iterations of coordinate descent to optimize weights. If NULL, 10 is used for loss = "LS" and 5 for loss = "Expo" or "Log".

n_tree_search: an integer. The number of trees to be searched for the nodes to split. The most recently grown trees are searched first.

opt_interval: an integer. Weight optimization interval in terms of the number of leaf nodes. For example, by default, weight optimization is performed every time approximately 100 leaf nodes are newly added to the forest.

learning_rate: a float. Step size of the Newton updates used in coordinate descent to optimize weights.

memory_policy: a character string specifying the memory usage policy. One of "conservative" (uses less memory at the expense of longer runtime; try this only when the default uses too much memory) or "generous" (runs faster by keeping the sorted orders of the features in memory for reuse).

verbose: an integer. Controls the verbosity of the tree building process.

init_model: either NULL or a character string, optional (default = NULL). Filename of a previously saved model from which training should warm-start. If the model has been saved into multiple files, do not include the numerical suffixes in the filename. NOTE: make sure to increase the value of the max_leaf parameter to account for the specified warm-start model, because the warm-start model's trees are counted in the overall number of trees.
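The warm-start mechanism described for init_model can be sketched as follows (the file name is illustrative, and the snippet assumes a working RGF installation). Note how the second run raises max_leaf above the first run's value, since the saved model's trees count toward the total:

```r
require(RGF)

x <- matrix(runif(1000), nrow = 100, ncol = 10)
y <- rnorm(100)

# first training run, saved to disk (file name is illustrative)
first <- RGF_Regressor$new(max_leaf = 100)
first$fit(x, y)
first$save_model("rgf_model_dump")

# warm-start from the saved model: max_leaf must exceed the saved
# model's leaf count, because its trees are counted in the total
second <- RGF_Regressor$new(max_leaf = 200, init_model = "rgf_model_dump")
second$fit(x, y)
```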
Method clone():

The objects of this class are cloneable with this method.

RGF_Regressor$clone(deep = FALSE)

Arguments:

deep: whether to make a deep clone.
https://github.com/RGF-team/rgf/tree/master/python-package

Rie Johnson and Tong Zhang, "Learning Nonlinear Functions Using Regularized Greedy Forest"
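A worked end-to-end sketch with a train/test split (the data is simulated; running it assumes the RGF package and its Python backend are installed):

```r
require(RGF)

set.seed(1)

# simulated data: 200 observations, 10 features
x <- matrix(runif(2000), nrow = 200, ncol = 10)
y <- rnorm(200)

# split into training and test sets
idx     <- sample(1:nrow(x), size = round(0.75 * nrow(x)))
x_train <- x[idx, ];  y_train <- y[idx]
x_test  <- x[-idx, ]; y_test  <- y[-idx]

# fit with a smaller forest and per-sample weights left at the default
rgf <- RGF_Regressor$new(max_leaf = 100, test_interval = 50)
rgf$fit(x = x_train, y = y_train)

preds <- rgf$predict(x_test)    # predictions on held-out data
r2    <- rgf$score(x_test, y_test)  # out-of-sample R^2
```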