cuml_fil_load_model: Load an XGBoost or LightGBM model file.


View source: R/fil.R

Description

Load an XGBoost or LightGBM model file using Treelite. The resulting model object can be used to perform high-throughput batch inference on new data points using the GPU acceleration functionality from the CuML Forest Inference Library (FIL).

Usage

cuml_fil_load_model(
  filename,
  mode = c("classification", "regression"),
  model_type = c("xgboost", "lightgbm"),
  algo = c("auto", "naive", "tree_reorg", "batch_tree_reorg"),
  threshold = 0.5,
  storage_type = c("auto", "dense", "sparse"),
  threads_per_tree = 1L,
  n_items = 0L,
  blocks_per_sm = 0L
)

Arguments

filename

Path to the saved model file.

mode

Type of task to be performed by the model. Must be one of "classification", "regression".

model_type

Format of the saved model file. Note that if filename ends with ".json" and model_type is "xgboost", then cuml will assume the model file is in XGBoost JSON (rather than binary) format. Default: "xgboost".

algo

Inference algorithm to use. Must be one of the following.

- "auto": Choose the algorithm automatically. Currently "batch_tree_reorg" is used for dense storage and "naive" for sparse storage.
- "naive": Simple inference using shared memory.
- "tree_reorg": Similar to "naive" but with trees rearranged to be more coalescing-friendly.
- "batch_tree_reorg": Similar to "tree_reorg" but predicting multiple rows per thread block.

Default: "auto".

threshold

Class probability threshold for classification. Ignored for regression tasks. Default: 0.5.

storage_type

In-memory storage format of the FIL model. Must be one of the following.

- "auto": Choose the storage type automatically.
- "dense": Create a dense forest.
- "sparse": Create a sparse forest. Requires algo to be "naive" or "auto".

threads_per_tree

If greater than 1, then multiple (neighboring) threads within a block infer on the same tree, which improves memory bandwidth near the tree root at the cost of consuming more shared memory. Default: 1L.

n_items

Number of input samples each thread processes. If 0, then a value (up to 4) that fits into shared memory is chosen automatically. Default: 0L.

blocks_per_sm

Indicates how CuML should determine the number of thread blocks to launch for the inference kernel.

- 0: Launch a number of blocks proportional to the number of data points.
- >= 1: Attempt to launch blocks_per_sm blocks for each streaming multiprocessor. This will fail if blocks_per_sm blocks result in more threads than the maximum supported number of threads per GPU. Even if successful, it is not guaranteed that blocks_per_sm blocks will run on an SM concurrently.

Value

A GPU-accelerated FIL model that can be used with the 'predict' S3 generic to make predictions on new data points.

Examples

library(cuml)
library(xgboost)

model_path <- file.path(tempdir(), "xgboost.model")

# Train an XGBoost regression model predicting mpg from the other columns.
model <- xgboost(
  data = as.matrix(mtcars[which(names(mtcars) != "mpg")]),
  label = as.matrix(mtcars["mpg"]),
  max.depth = 6,
  eta = 1,
  nthread = 2,
  nrounds = 20,
  objective = "reg:squarederror"
)

xgb.save(model, model_path)

# Load the saved model into FIL for GPU-accelerated inference.
model <- cuml_fil_load_model(
  model_path,
  mode = "regression",
  model_type = "xgboost"
)

preds <- predict(model, mtcars[which(names(mtcars) != "mpg")])

print(preds)
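For classification, the same workflow applies with mode = "classification", where threshold controls how class probabilities are mapped to labels. The sketch below is illustrative rather than definitive: the binary label built from mtcars$am and the file path are hypothetical choices, and a CUDA-capable GPU with the cuml package installed is assumed.

```r
library(cuml)
library(xgboost)

# Hypothetical binary target: transmission type (am) from mtcars.
x <- as.matrix(mtcars[which(names(mtcars) != "am")])
y <- mtcars$am

clf_path <- file.path(tempdir(), "xgboost_clf.model")

# Train a small binary classifier.
clf <- xgboost(
  data = x,
  label = y,
  max.depth = 4,
  nrounds = 10,
  objective = "binary:logistic"
)

xgb.save(clf, clf_path)

# Load into FIL as a classifier; probabilities above `threshold`
# are assigned to the positive class.
fil_clf <- cuml_fil_load_model(
  clf_path,
  mode = "classification",
  model_type = "xgboost",
  threshold = 0.5
)

preds <- predict(fil_clf, x)
```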

cuml documentation built on Sept. 21, 2021, 1:06 a.m.