prune: Post-pruning using a fixed complexity penalty

View source: R/pruning.R

prune {logicDT}    R Documentation

Post-pruning using a fixed complexity penalty

Description

Using a fitted logicDT model and a fixed complexity penalty alpha, its logic decision tree can be (post-)pruned.

Usage

prune(model, alpha, simplify = TRUE)

Arguments

model

A fitted logicDT model

alpha

A fixed complexity penalty value. This value should be determined out-of-sample, e.g., performing hyperparameter optimization on independent validation data.

simplify

Should the pruned model be simplified with regard to the input terms, i.e., should terms that are no longer contained in the tree be removed from the model?

Details

Similar to Breiman et al. (1984), we implement post-pruning by first computing the optimal pruning path and then choosing the tree that is pruned according to the specified complexity penalty.
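For orientation, the cost-complexity criterion underlying this pruning scheme (the standard formulation from Breiman et al., 1984, not quoted from the logicDT source) selects, for the given alpha, the subtree T minimizing

```
R_alpha(T) = R(T) + alpha * |T|
```

where R(T) is the training risk of the subtree T and |T| is its number of leaves. Larger values of alpha therefore penalize tree size more strongly and yield smaller pruned trees.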

If no validation data are available, or if the tree should be pruned to an automatically determined optimal size, cv.prune should be used instead, which employs k-fold cross-validation to find the best complexity penalty value.

Value

The new logicDT model containing the pruned tree
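Examples

A minimal usage sketch. The data objects and the fitting call are illustrative only: it is assumed that X is a binary predictor matrix and y a response vector suitable for logicDT(), and that a sensible alpha has already been chosen on independent validation data.

```r
## Assumed inputs: binary predictor matrix X and response vector y.
## Fit a logicDT model (max_vars/max_conj values are illustrative).
model <- logicDT(X, y, max_vars = 3, max_conj = 3)

## Prune the fitted logic decision tree with a fixed complexity
## penalty. alpha = 0.01 is a placeholder; it should be determined
## out-of-sample, e.g., via hyperparameter optimization on
## independent validation data.
pruned <- prune(model, alpha = 0.01, simplify = TRUE)

## If no validation data are available, cross-validated pruning
## can be used instead to choose alpha automatically:
## pruned.cv <- cv.prune(model)
```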


logicDT documentation built on Jan. 14, 2023, 5:06 p.m.