View source: R/compute_consensus.R
compute_consensus (R Documentation)
Compute the consensus ranking using either cumulative probability (CP) or
maximum a posteriori (MAP) consensus (Vitelli et al. 2018). For mixture
models, the consensus is given for each mixture component. Consensus of
augmented ranks can also be computed for each assessor, by setting
parameter = "Rtilde".
compute_consensus(model_fit, ...)

## S3 method for class 'BayesMallows'
compute_consensus(
  model_fit,
  type = c("CP", "MAP"),
  parameter = c("rho", "Rtilde"),
  assessors = 1L,
  ...
)

## S3 method for class 'SMCMallows'
compute_consensus(model_fit, type = c("CP", "MAP"), parameter = "rho", ...)
Arguments:

model_fit
  A model fit.

...
  Other arguments passed on to other methods. Currently not used.

type
  Character string specifying which consensus to compute. Either "CP" or
  "MAP". Defaults to "CP".

parameter
  Character string defining the parameter for which to compute the
  consensus. Defaults to "rho". Setting it to "Rtilde" computes the
  consensus of augmented rankings instead.

assessors
  Numeric vector of assessor indices for which to compute the consensus
  of augmented rankings when parameter = "Rtilde". Defaults to 1L.
Other posterior quantities: assign_cluster(), compute_posterior_intervals(),
get_acceptance_ratios(), heat_plot(), plot.BayesMallows(), plot.SMCMallows(),
plot_elbow(), plot_top_k(), predict_top_k(), print.BayesMallows()
# The example datasets potato_visual and potato_weighing contain complete
# rankings of 20 items, by 12 assessors. We first analyse these using the
# Mallows model:
model_fit <- compute_mallows(setup_rank_data(potato_visual))
# See the documentation of compute_mallows for how to assess the convergence
# of the algorithm. Having chosen burnin = 1000, we set it before computing
# the consensus:
burnin(model_fit) <- 1000
# We then compute the CP consensus.
compute_consensus(model_fit, type = "CP")
# And the MAP consensus.
compute_consensus(model_fit, type = "MAP")
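To make the CP idea concrete: at each rank k, the item (among those not yet placed) with the highest posterior probability of being ranked at or better than k is selected. Below is a minimal base-R sketch of this procedure on a toy matrix of posterior samples of rho; the function name and data are illustrative, not part of BayesMallows.

```r
# Illustrative sketch of CP consensus, NOT the package implementation.
# rho_samples: rows = posterior draws of rho, columns = items.
cp_consensus_sketch <- function(rho_samples) {
  items <- colnames(rho_samples)
  n_items <- ncol(rho_samples)
  remaining <- seq_len(n_items)
  result <- character(n_items)
  cumprob <- numeric(n_items)
  for (k in seq_len(n_items)) {
    # Posterior probability that each remaining item has rank <= k
    probs <- vapply(remaining, function(i) mean(rho_samples[, i] <= k),
                    numeric(1))
    best <- remaining[which.max(probs)]
    result[k] <- items[best]
    cumprob[k] <- max(probs)
    remaining <- setdiff(remaining, best)
  }
  data.frame(ranking = seq_len(n_items), item = result, cumprob = cumprob)
}

# Toy posterior with three items; item A is usually ranked first
rho_samples <- matrix(
  c(1, 2, 3,
    1, 3, 2,
    2, 1, 3,
    1, 2, 3), ncol = 3, byrow = TRUE,
  dimnames = list(NULL, c("A", "B", "C")))
cp_consensus_sketch(rho_samples)
```

Note that the cumprob column is non-decreasing in the package output for a single cluster, since the probability of being ranked at or better than k grows with k once an item is clearly placed.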
## Not run:
# CLUSTERWISE CONSENSUS
# We can run a mixture of Mallows models, using the n_clusters argument
# We use the sushi example data. See the documentation of compute_mallows for
# a more elaborate example
model_fit <- compute_mallows(
setup_rank_data(sushi_rankings),
model_options = set_model_options(n_clusters = 5))
# Keeping the burnin at 1000, we can compute the consensus ranking per cluster
burnin(model_fit) <- 1000
cp_consensus_df <- compute_consensus(model_fit, type = "CP")
# We can now make a table which shows the ranking in each cluster:
cp_consensus_df$cumprob <- NULL
stats::reshape(cp_consensus_df, direction = "wide", idvar = "ranking",
timevar = "cluster",
varying = list(sort(unique(cp_consensus_df$cluster))))
## End(Not run)
## Not run:
# MAP CONSENSUS FOR PAIRWISE PREFERENCE DATA
# We use the example dataset with beach preferences.
model_fit <- compute_mallows(setup_rank_data(preferences = beach_preferences))
# We set burnin = 1000
burnin(model_fit) <- 1000
# We now compute the MAP consensus
map_consensus_df <- compute_consensus(model_fit, type = "MAP")
# CP CONSENSUS FOR AUGMENTED RANKINGS
# We use the example dataset with beach preferences.
model_fit <- compute_mallows(
setup_rank_data(preferences = beach_preferences),
compute_options = set_compute_options(save_aug = TRUE, aug_thinning = 2))
# We set burnin = 1000
burnin(model_fit) <- 1000
# We now compute the CP consensus of augmented ranks for assessors 1 and 3
cp_consensus_df <- compute_consensus(
model_fit, type = "CP", parameter = "Rtilde", assessors = c(1L, 3L))
# We can also compute the MAP consensus for assessor 2
map_consensus_df <- compute_consensus(
model_fit, type = "MAP", parameter = "Rtilde", assessors = 2L)
# Caution!
# With very sparse data or with too few iterations, there may be ties in the
# MAP consensus. This is illustrated below for the case of only 5 post-burnin
# iterations. Two MAP rankings are equally likely in this case (and for this
# seed).
model_fit <- compute_mallows(
setup_rank_data(preferences = beach_preferences),
compute_options = set_compute_options(
nmc = 1005, save_aug = TRUE, aug_thinning = 1))
burnin(model_fit) <- 1000
compute_consensus(model_fit, type = "MAP", parameter = "Rtilde",
assessors = 2L)
## End(Not run)
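The tie behaviour cautioned about above can be mimicked with a minimal base-R sketch that treats the MAP consensus as the most frequent complete ranking among the posterior samples; map_consensus_sketch is an illustrative helper, not part of BayesMallows.

```r
# Illustrative sketch of MAP consensus as the modal posterior ranking,
# returning all tied modes (one row per tie). NOT the package implementation.
map_consensus_sketch <- function(rho_samples) {
  # Encode each sampled ranking as a single key, then tabulate
  keys <- apply(rho_samples, 1, paste, collapse = "-")
  tab <- table(keys)
  winners <- names(tab)[tab == max(tab)]
  do.call(rbind, lapply(strsplit(winners, "-"), as.numeric))
}

# Toy posterior with five draws over three items, constructed so that
# two rankings are equally frequent and the MAP is tied
rho_samples <- matrix(
  c(1, 2, 3,
    1, 2, 3,
    1, 3, 2,
    1, 3, 2,
    2, 1, 3), ncol = 3, byrow = TRUE)
map_consensus_sketch(rho_samples)  # two rows: the two tied modal rankings
```

With few post-burnin draws, as in the nmc = 1005 example above, such ties become likely, which is why the documentation recommends checking convergence before interpreting the MAP consensus.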