* Nothing
* Fixed the `Predictor$predict` method for `mlr3::LearnerRegr` objects (#213)
* Fixed `LocalModel` on a data.frame with a single row (#204)
* Replaced the `prediction::find_data` function with a self-written one
* Now uses `data.table::melt()` (#182)
* Fixed `FeatureEffect` handling of empty levels (#160, @grantirv)
* Fixed `FeatureImp` (#158)
* `FeatureEffect` can now be computed with user-provided grid points. Works for ice, ale and pdp.
* `FeatureImp` gets a new argument `features`, which allows computing feature importance for a subset of features. If a list of character vectors is provided, the joint feature importance per group is calculated (#156, @grantirv)
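A minimal sketch (not part of the original notes) of the `features` argument, assuming a randomForest regression fit on the Boston housing data; the model, the feature subset and the grouping are purely illustrative.

```r
library(iml)
library(randomForest)
data("Boston", package = "MASS")

# Illustrative model and Predictor wrapper
rf <- randomForest(medv ~ ., data = Boston, ntree = 50)
pred <- Predictor$new(rf, data = Boston, y = "medv")

# Importance for a subset of features only
imp_subset <- FeatureImp$new(pred, loss = "mae", features = c("lstat", "rm"))

# Joint importance per group: a list of character vectors
imp_grouped <- FeatureImp$new(
  pred,
  loss = "mae",
  features = list(c("rm", "age"), c("dis", "rad"))
)
imp_grouped$results
```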
* The `data` argument in `Predictor$new()` is always a data.frame (#126)
* Changed columns in the `FeatureEffect$results` data.frame: `.y.hat` and `.ale`
* `FeatureEffects$plot()` is now based on {patchwork}
* Removed the `run` parameter from all interpretation methods.
* Added `FeatureEffects`, which wraps `FeatureEffect` and allows computing feature effects for all features of a model with one call (see the sketch after this list).
* Fixed the `$results` data.frame of `FeatureEffect` when `method = "ale"` and the feature is categorical
* Added `ylim` to `FeatureEffect$plot` to manually set the limits of the y-axis for feature effect plots with one feature.
* Added a `predict` method to `FeatureEffect`, which predicts the marginal effect for data instances.
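A minimal sketch (not from the release notes) of how these pieces fit together, reusing the `pred` object from the sketch above; the y-axis limits and the rows passed to `$predict()` are arbitrary.

```r
# Effects for all features of the model with one call
effs <- FeatureEffects$new(pred, method = "ale")
effs$plot()  # patchwork-based overview of all feature effects

# Single feature: fix the y-axis and predict marginal effects for instances
eff_lstat <- FeatureEffect$new(pred, feature = "lstat", method = "ale")
eff_lstat$plot(ylim = c(-10, 10))
eff_lstat$predict(Boston[1:5, ])
```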
* `FeatureImp`: the `method` argument was removed; only shuffling is now possible. This means the Cartesian product of all data points with all data points is no longer an option. It was never really practical to use, except for toy examples.
* Data and target can be extracted automatically (using the `prediction::find_data` function). Data extraction doesn't work with mlr, but target extraction does.
* `FeatureImp` previously (before 0.7.2) automatically returned the ratio of permuted model error and original model error. With 0.7.2 the user can choose between the ratio (default) and the difference.
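A one-line sketch of that choice, reusing `pred` from the first sketch; the argument name `compare` is taken from the current iml API and is an assumption here, not something stated in the notes above.

```r
# Ratio of permuted vs. original model error (default) or their difference;
# the argument name `compare` is assumed from the current iml API.
imp_ratio <- FeatureImp$new(pred, loss = "mae", compare = "ratio")
imp_diff  <- FeatureImp$new(pred, loss = "mae", compare = "difference")
```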
* The `Partial` class is deprecated and will be removed in future versions. You should use `FeatureEffect` now. Its usage is similar to `Partial`, but the `aggregation` and `ice` arguments are now combined in the new `method` argument, where you can choose between 'ale', 'pdp', 'ice' and 'pdp+ice' (see the sketch below).
* Added accumulated local effects (ALE) plots via the `FeatureEffect` class (`method = 'ale'`). They are now the default instead of PDPs, because they are faster and unbiased. PDPs are still available via `method = 'pdp'`.
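A minimal sketch of the `method` choices (not from the release notes), again reusing `pred` from the first sketch.

```r
# One FeatureEffect class, different estimation methods
FeatureEffect$new(pred, feature = "rm", method = "ale")      # default: ALE
FeatureEffect$new(pred, feature = "rm", method = "pdp")      # partial dependence
FeatureEffect$new(pred, feature = "rm", method = "ice")      # individual curves
FeatureEffect$new(pred, feature = "rm", method = "pdp+ice")  # both combined
```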
* `Interaction`, `FeatureImp` and `Partial` are now computed batch-wise in the background. This prevents these methods from overloading the memory. For that, `Predictor` has a new init argument `batch.size`, which limits the number of rows sent to the model for prediction for the methods `Interaction`, `FeatureImp` and `Partial`.
* `Interaction` and `FeatureImp` additionally allow parallel computation on multiple cores. See `vignette("parallel", package = "iml")` for how to use it.
* `Predictor` can be initialized with a `type` (e.g. `type = "prob"`), which is more convenient than writing a custom `predict.fun`. For caret classification models, the default is now to return the response, so make sure to initialize the `Predictor` with `type = "prob"` for fine-grained results (see the sketch after this list).
* `FeatureImp` supports the `n.repetitions` parameter, which controls the number of repetitions of the feature shuffling.
* Removed the `feature.index` variable from `Partial` and renamed the `.class.name` column in results to `.class`.
* `object$run()` no longer returns `self`. This means that using `object$set.feature()`, for example, no longer automatically prints the object summary.
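A minimal classification sketch (not from the release notes) tying together `type`, `batch.size` and `n.repetitions`; the caret model, the data set and the concrete values are illustrative.

```r
library(iml)
library(caret)

# Illustrative caret classifier on iris
fit <- train(Species ~ ., data = iris, method = "rf", tuneLength = 1)

pred_cls <- Predictor$new(
  fit,
  data = iris,
  y = "Species",
  type = "prob",     # return class probabilities instead of the response
  batch.size = 1000  # cap the number of rows sent to the model per call
)

# Repeat the shuffling 20 times for more stable importance estimates
imp <- FeatureImp$new(pred_cls, loss = "ce", n.repetitions = 20)
imp$results
```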
* Removed the classes `PartialDependence` and `Ice`. Use `Partial` instead.
* `pdp()` is now `PartialDependence$new()`.
* The model and data are now wrapped in `Predictor$new()`.
* The instance to explain for `Shapley` and `LocalModel` can be set with `$explain()` (see the sketch below).
* `Lime` has been renamed to `LocalModel`.
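A minimal sketch (not from the release notes) of setting the instance to explain via `$explain()`, reusing the classification Predictor `pred_cls` from the sketch above; the chosen rows and the `k` value are illustrative.

```r
x_interest <- iris[1, -5]  # features of the first instance

sh <- Shapley$new(pred_cls, x.interest = x_interest)
sh$explain(iris[2, -5])    # switch to a different instance
sh$results

lm_local <- LocalModel$new(pred_cls, x.interest = x_interest, k = 2)
lm_local$explain(iris[2, -5])
lm_local$results
```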
* Initial release