makeMeasure: Construct performance measure.


Description

A measure object encapsulates a function to evaluate the performance of a prediction. Information about already implemented measures can be obtained here: measures.

A learner is trained on a training set d1, resulting in a model m, which then predicts another data set d2 (possibly a different set, possibly the training set itself), producing a prediction. The performance measure can now be defined using all of the information of the original task, the fitted model, and the prediction.
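
A minimal sketch of this flow, assuming the mlr package is attached and using its bundled Boston Housing task bh.task, the regr.lm learner, and the built-in mse measure:

library(mlr)
mod = train("regr.lm", bh.task)       # fit a model on the task's data
pred = predict(mod, task = bh.task)   # predict a data set (here: the training data itself)
performance(pred, measures = mse)     # evaluate the prediction with a measure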

Object slots:

id [character(1)]

See argument.

minimize [logical(1)]

See argument.

properties [character]

See argument.

fun [function]

See argument.

extra.args [list]

See argument.

aggr [Aggregation]

See argument.

best [numeric(1)]

See argument.

worst [numeric(1)]

See argument.

name [character(1)]

See argument.

note [character(1)]

See argument.

Usage

makeMeasure(id, minimize, properties = character(0L), fun,
  extra.args = list(), aggr = test.mean, best = NULL, worst = NULL,
  name = id, note = "")

Arguments

id

[character(1)]
Name of measure.

minimize

[logical(1)]
Should the measure be minimized? Default is TRUE.

properties

[character]
Set of measure properties (a short illustrative sketch follows this list). Some standard property names include:

classif

Is the measure applicable for classification?

classif.multi

Is the measure applicable for multi-class classification?

multilabel

Is the measure applicable for multilabel classification?

regr

Is the measure applicable for regression?

surv

Is the measure applicable for survival?

cluster

Is the measure applicable for cluster analysis?

costsens

Is the measure applicable for cost-sensitive learning?

req.pred

Is prediction object required in calculation? Usually the case.

req.truth

Is truth column required in calculation? Usually the case.

req.task

Is task object required in calculation? Usually not the case.

req.model

Is model object required in calculation? Usually not the case.

req.feats

Are feature values required in calculation? Usually not the case.

req.prob

Are predicted probabilities required in calculation? Usually not the case; an example would be AUC.

Default is character(0).
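
For illustration, a hedged sketch of a measure declaring classification properties from the list above; the identifier my.mmce and the mean-misclassification fun are just examples:

my.mmce = makeMeasure(id = "my.mmce", minimize = TRUE,
  properties = c("classif", "classif.multi", "req.pred", "req.truth"),
  fun = function(task, model, pred, feats, extra.args)
    mean(pred$data$response != pred$data$truth),   # share of wrong predictions
  best = 0, worst = 1)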

fun

[function(task, model, pred, feats, extra.args)]
Calculates the performance value. Usually you will only need the prediction object pred.

task [Task]

The task.

model [WrappedModel]

The fitted model.

pred [Prediction]

Prediction object.

feats [data.frame]

The features.

extra.args [list]

See below.

extra.args

[list]
List of extra arguments which will always be passed to fun. Default is empty list.
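
A hedged sketch of how fun and extra.args work together: fun uses the full signature documented above, and a fixed setting (the hypothetical trimming fraction trim) is stored with the measure via extra.args and read back inside fun:

trimmed.ae = makeMeasure(id = "trimmed.ae", minimize = TRUE,
  properties = c("regr", "req.pred", "req.truth"),
  fun = function(task, model, pred, feats, extra.args)
    mean(abs(pred$data$response - pred$data$truth), trim = extra.args$trim),
  extra.args = list(trim = 0.1))   # always passed as the last argument of fun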

aggr

[Aggregation]
Aggregation function, used to aggregate the values of the measure computed on the test / training sets into a single value. Default is test.mean.

best

[numeric(1)]
Best obtainable value for measure. Default is -Inf or Inf, depending on minimize.

worst

[numeric(1)]
Worst obtainable value for measure. Default is Inf or -Inf, depending on minimize.
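
A hedged sketch of how these arguments interact: aggr is swapped for mlr's exported test.median aggregation, while best and worst are left at their minimize-derived defaults and can then be read off the resulting object's slots:

m = makeMeasure(id = "sae", minimize = TRUE, properties = "regr",
  fun = function(task, model, pred, feats, extra.args)
    sum(abs(pred$data$response - pred$data$truth)),
  aggr = test.median)
m$best    # derived from minimize, since best was not supplied
m$worst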

name

[character(1)]
Name of the measure. Default is id.

note

[character(1)]
Description and additional notes for the measure. Default is "".

Value

[Measure].

See Also

Other performance: ConfusionMatrix, calculateConfusionMatrix, calculateROCMeasures, estimateRelativeOverfitting, makeCostMeasure, makeCustomResampledMeasure, measures, performance

Examples

## Define a custom sum-of-squared-errors measure for regression tasks.
f = function(task, model, pred, feats, extra.args)
  sum((pred$data$response - pred$data$truth)^2)
makeMeasure(id = "my.sse", minimize = TRUE, properties = c("regr", "response"), fun = f)
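
A possible follow-up usage sketch, assuming mlr is attached and using its bundled bh.task and the regr.lm learner; the 3-fold cross-validation setup is just an example:

my.sse = makeMeasure(id = "my.sse", minimize = TRUE,
  properties = c("regr", "response"), fun = f)
rdesc = makeResampleDesc("CV", iters = 3)               # 3-fold cross-validation
resample("regr.lm", bh.task, rdesc, measures = my.sse)  # per-fold values aggregated via the measure's aggr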
