Scoring_system: The main function for algorithms scoring based on accuracy,...


View source: R/Algorithms_assessment.R

Description

The main function for algorithms scoring based on accuracy, precision, and effectiveness.

Usage

Scoring_system(
  Inputs,
  method = "sort-based",
  param_sort = list(decreasing = TRUE, max.score = NULL),
  param_interval = list(trim = FALSE, reward.punishment = TRUE, decreasing = TRUE,
    hundred.percent = FALSE),
  remove.negative = FALSE,
  accuracy.metrics = c("MAE", "CMAPE"),
  precision.metrics = c("BIAS", "CMRPE")
)

Scoring_system_bootstrap(
  Times = 1000,
  Inputs,
  replace = TRUE,
  method = "sort-based",
  metrics_used = 1,
  param_sort = list(decreasing = TRUE, max.score = NULL),
  param_interval = list(trim = FALSE, reward.punishment = TRUE, decreasing = TRUE,
    hundred.percent = FALSE),
  remove.negative = FALSE,
  dont_blend = FALSE,
  verbose = TRUE
)

Arguments

Inputs

The list returned from the function Getting_Asses_results.

method

The method selected to score algorithms:

  • sort-based (default), which scores algorithms by sorting (ranking) their accuracy and precision metrics (see more in Score_algorithms_sort).

  • sort-based2, which also scores algorithms by sorting their accuracy and precision metrics (see more in Score_algorithms_sort).

  • interval-based, which scores algorithms relative to intervals of accuracy and precision (used by Brewin et al. (2015) and Neil et al. (2019)). See more in Score_algorithms_interval.

param_sort

The parameters of function Score_algorithms_sort

param_interval

The parameters of function Score_algorithms_interval

remove.negative

Option to replace negative scores with zero (default FALSE).

accuracy.metrics

Metrics used to measure accuracy. Default is c("MAE", "CMAPE").

precision.metrics

Metrics used to measure precision. Default is c("BIAS", "CMRPE").

Times

Parameter of Scoring_system_bootstrap. The number of bootstrap iterations of Scoring_system (default 1000).

replace

Parameter of Scoring_system_bootstrap. Whether sampling is done with replacement in each bootstrap run. Default is TRUE. See sample for more details.

metrics_used

The metric combination used in the function. Default is 1.

If metrics_used = 1 then the used metrics are c("MAE", "CMAPE", "BIAS", "CMRPE")

If metrics_used = 2 (not recommended) then the used metrics are c("MAE", "CMAPE", "BIAS", "CMRPE", "RATIO")

dont_blend

Whether to skip the algorithm blending process. Default is FALSE. Setting it to TRUE is useful when you only want to score the candidate algorithms.

verbose

Whether to show iteration progress messages.
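
For orientation, a minimal call sketch follows. It assumes an object named Asses_results has already been produced by Getting_Asses_results (as in the Examples below); the option values are illustrative, not recommendations.

# Sketch only: score with the interval-based method instead of the default
# sort-based one, keeping the default metric combinations.
Score_interval <- Scoring_system(
  Inputs = Asses_results,
  method = "interval-based",
  param_interval = list(trim = FALSE, reward.punishment = TRUE,
                        decreasing = TRUE, hundred.percent = FALSE),
  remove.negative = FALSE,
  accuracy.metrics = c("MAE", "CMAPE"),
  precision.metrics = c("BIAS", "CMRPE")
)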

Details

Accuracy and Precision are newly defined in the FCMm package (following Hooker et al. (2005)):

In other words, accuracy is telling a story truthfully, while precision is how consistently the story is told over and over again. Here we use AE, a vector with one value per sample, for instance:

Finally, the function multiplies the total score (Accuracy + Precision) by the effectiveness (i.e., Valid_percent returned by Assessment_via_cluster).
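
As a rough illustration of that last step, the numbers and variable names below are hypothetical and only show how the pieces combine; they are not objects returned by the package.

# Hypothetical values: accuracy score 0.8, precision score 0.7, and valid
# predictions for 90% of the samples (Valid_percent = 0.9).
accuracy_score  <- 0.8
precision_score <- 0.7
valid_percent   <- 0.9
total_score <- (accuracy_score + precision_score) * valid_percent
total_score  # 1.35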

Value

The result of Scoring_system includes:

The result of Scoring_system_bootstrap includes:

Note

Scoring_system_bootstrap is the bootstrap mode of Scoring_system, which is useful when the outcome is unstable for a large number of samples. The default number of bootstrap iterations in Scoring_system_bootstrap is 1000, and its result is a list of several aggregated data.frames and their standard deviations.
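
The aggregation is conceptually similar to the generic sketch below, which bootstraps a toy statistic and summarises the runs by their mean and standard deviation; this uses base R only and is not the package's internal code.

# Generic bootstrap sketch (not FCMm internals): resample with replacement,
# recompute a statistic each time, then aggregate across runs.
set.seed(1234)
obs <- rnorm(50)
Times <- 1000
boot_stat <- replicate(Times, mean(sample(obs, replace = TRUE)))
c(mean = mean(boot_stat), sd = sd(boot_stat))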

References

See Also

Other Algorithm assessment: Assessment_via_cluster(), Getting_Asses_results(), Sampling_via_cluster(), Score_algorithms_interval(), Score_algorithms_sort()

Examples

library(FCMm) 
library(ggplot2) 
library(magrittr)
library(stringr)
data("Nechad2015")
x <- Nechad2015[,3:11]
wv <- gsub("X","",names(x)) %>% as.numeric
set.seed(1234)
w <- sample.int(nrow(x))
x <- x[w, ]
names(x) <- wv
nb = 4 # Obtained from the vignette "Cluster a new dataset by FCMm"
set.seed(1234)
FD <- FuzzifierDetermination(x, wv, do.stand=TRUE)
result <- FCM.new(FD, nb, fast.mode = TRUE)
p.spec <- plot_spec(result, show.stand=TRUE)
print(p.spec$p.cluster.spec)
Chla <- Nechad2015$X.Chl_a..ug.L.[w]
Chla[Chla >= 999] <- NA
dt_Chla <- run_all_Chla_algorithms(x) %>% as.data.frame
dt_Chla <- data.frame(Chla_true = Chla,
                      BR_Gil10  = dt_Chla$BR_Gil10,
                      OC4_OLCI  = dt_Chla$OC4_OLCI,
                      OCI_Hu12  = dt_Chla$OCI_Hu12,
                      NDCI_Mi12 = dt_Chla$NDCI_Mi12) %>% round(3)
w <- which(!is.na(dt_Chla$Chla_true))
dt_Chla <- dt_Chla[w, ]
memb <- result$res.FCM$u[w, ] %>% round(4)
cluster <- result$res.FCM$cluster[w]
Asses_results <- Getting_Asses_results(sample.size = length(cluster),
                                       pred = dt_Chla[, -1],
                                       meas = data.frame(dt_Chla[, 1]),
                                       memb = memb,
                                       cluster = cluster)
Score <- Scoring_system(Asses_results)
# show the total score table
knitr::kable(round(Score$Total_score, 2))

# Examples of `Scoring_system_bootstrap`

set.seed(1234)
Score_boo <- Scoring_system_bootstrap(Times = 3, Asses_results) 
# try to set large `Times` when using your own data

# Show the bar plot of scores
Score_boo$plot_col

# Show the scatter plot of measure-estimation pairs
Score_boo$plot_scatter

# Show error metrics
knitr::kable(round(Score_boo$metric_results$MAE, 2), caption = "MAE")
knitr::kable(round(Score_boo$metric_results$CMAPE, 2), caption = "CMAPE")
knitr::kable(round(Score_boo$metric_results$BIAS, 2), caption = "BIAS")
knitr::kable(round(Score_boo$metric_results$CMRPE, 2), caption = "CMRPE")

# You should see that the blending estimations outperform the other candidates
