View source: R/explainability.R
Visualization of 2D PDP vs. unexplained residual predictions.
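In the reference below, explainability is (roughly) the share of the variance of the model's predictions that is captured by the partial dependence function. A minimal sketch of this idea follows; the function name xpy.sketch and the exact formula are illustrative assumptions, not the package's definition:

    # sketch only (assumption): share of the variance of the model's
    # predictions that is reproduced by the partial dependence values
    # evaluated on the same observations
    xpy.sketch <- function(pred, pd) {
      1 - mean((pred - pd)^2) / mean((pred - mean(pred))^2)
    }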
xpy(model, x, vnames, type, depth, alpha, right, top, digits,
    parallel, sample.frac, pfunction, ...)
model
    A model with a corresponding predict function that returns numeric values.

x
    Data frame.

vnames
    Character vector of the variable set for which the partial dependence function is to be computed.

type
    Character, either

depth
    Integer specifying the number of colours in the heat map.

alpha
    Numeric value for alpha blending of the points in the scatter plot.

right
    Position of the legend relative to the range of the x axis.

top
    Position of the legend relative to the range of the y axis.

digits
    Number of digits for rounding in the legend.

parallel
    Logical specifying whether the computation should be run in parallel.

sample.frac
    Fraction of the observations in x to be sampled.

pfunction
    User-generated predict function with arguments

...
    Further arguments to be passed to the
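A minimal call sketch, under assumptions: the Boston housing data and the random forest model are illustrative choices, not taken from this page, and only arguments documented above are used:

    # illustrative only: data set and model choice are assumptions
    library(randomForest)
    data(Boston, package = "MASS")
    mod <- randomForest(medv ~ ., data = Boston)
    # explainability of the 2D partial dependence of an assumed variable pair
    xpy(mod, Boston, vnames = c("lstat", "rm"))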
fw.xpy <- function(model, x, target, parallel = TRUE, sample.frac = 1, pfunction = NULL, ...){
  # initialization and selection of the first variable
  n <- 1
  cat("Step", n, "\n")
  sel   <- NULL
  trace <- NULL
  nms <- nms.full <- names(x)[-which(names(x) == target)]
  xpys <- rep(NA, length(nms))
  names(xpys) <- nms
  # explainability of each single variable
  for(v in nms) xpys[which(names(xpys) == v)] <- xpy(model, x, v, viz = FALSE, ...)
  sel   <- c(sel, which.max(xpys))
  trace <- c(trace, max(xpys, na.rm = TRUE))
  print(xpys)
  cat("\n", nms.full[sel], max(xpys, na.rm = TRUE), "\n\n")

  # forward selection of variables such that explainability is maximized
  while(length(nms) > 1){
    n <- n + 1
    cat("Step", n, "\n")
    nms  <- nms.full[-sel]
    xpys <- cbind(xpys, NA)
    for(v in nms) xpys[which(rownames(xpys) == v), ncol(xpys)] <-
      xpy(model, x, c(names(sel), v), viz = FALSE, ...)
    sel <- c(sel, which.max(xpys[, ncol(xpys)]))
    colnames(xpys) <- paste("Step", 1:n)
    trace <- c(trace, max(xpys[, ncol(xpys)], na.rm = TRUE))
    print(xpys)
    cat("\n", nms.full[sel], max(xpys[, ncol(xpys)], na.rm = TRUE), "\n\n")
  }

  res <- list(selection.order = sel, explainability = trace, details = xpys)
  class(res) <- "vsexp"
  return(res)
}

#' @export
plot.vsexp <- function(x, ...){
  # explainability trace as variables enter the selection
  plot(0:length(x$explainability), c(0, x$explainability), type = "l",
       xaxt = "n", xlab = "", ylim = c(0, 1), ylab = "explainability")
  axis(1, at = 1:length(x$selection.order),
       labels = names(x$selection.order), las = 2)
}

#' @export
print.vsexp <- function(x, ...) print(cbind(x$selection.order, x$explainability))
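A hedged usage sketch of the forward selection, continuing the assumed Boston example above (parallel = FALSE only to keep the sketch self-contained):

    # forward selection of variables by explainability (assumed example data)
    vs <- fw.xpy(mod, Boston, target = "medv", parallel = FALSE)
    plot(vs)   # explainability path, dispatched to plot.vsexp
    vs         # selection order and explainability, dispatched to print.vsexp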
Szepannek, G. (2019): How Much Can We See? A Note on Quantifying Explainability of Machine Learning Models, arXiv:1910.13376 [stat.ML].