compute_consensus: Compute Consensus Ranking

View source: R/compute_consensus.R


Compute Consensus Ranking

Description

Compute the consensus ranking using either cumulative probability (CP) or maximum a posteriori (MAP) consensus (Vitelli et al. 2018). For mixture models, the consensus is given for each mixture. Consensus of augmented ranks can also be computed for each assessor, by setting parameter = "Rtilde".

Usage

compute_consensus(model_fit, ...)

Arguments

model_fit

An object of class BayesMallows, as returned from compute_mallows(), or an object of class SMCMallows.

...

other arguments passed to methods.

References

Vitelli V, Sørensen Ø, Crispino M, Frigessi A, Arjas E (2018). "Probabilistic Preference Learning with the Mallows Rank Model." Journal of Machine Learning Research, 18(158), 1-49.

See Also

Other posterior quantities: assign_cluster(), compute_consensus.BayesMallows(), compute_consensus.SMCMallows(), compute_posterior_intervals.BayesMallows(), compute_posterior_intervals.SMCMallows(), compute_posterior_intervals(), heat_plot(), plot.BayesMallows(), plot.SMCMallows(), plot_elbow(), plot_top_k(), predict_top_k(), print.BayesMallowsMixtures(), print.BayesMallows()

Examples

# The example datasets potato_visual and potato_weighing contain complete
# rankings of 20 items, by 12 assessors. We first analyse the visual rankings
# using the Mallows model:
model_fit <- compute_mallows(potato_visual)

# See the documentation of compute_mallows for how to assess the convergence of the algorithm.
# Having chosen burnin = 1000, we set it on the model object:
model_fit$burnin <- 1000
# We then compute the CP consensus.
compute_consensus(model_fit, type = "CP")
# And we compute the MAP consensus
compute_consensus(model_fit, type = "MAP")
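# The consensus is returned as a data frame. The inspection below is an
# illustrative sketch, not part of the original example; the cumprob column
# is assumed to hold the cumulative probability that an item obtains its
# consensus rank or better.
cp_consensus_df <- compute_consensus(model_fit, type = "CP")
head(cp_consensus_df)
# Items with cumprob well below 1 are ranked with less posterior certainty;
# the 0.9 threshold below is purely illustrative.
subset(cp_consensus_df, cumprob < 0.9)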

## Not run: 
  # CLUSTERWISE CONSENSUS
  # We can run a mixture of Mallows models, using the n_clusters argument
  # We use the sushi example data. See the documentation of compute_mallows for a more elaborate
  # example
  model_fit <- compute_mallows(sushi_rankings, n_clusters = 5)
  # Keeping the burnin at 1000, we can compute the consensus ranking per cluster
  model_fit$burnin <- 1000
  cp_consensus_df <- compute_consensus(model_fit, type = "CP")
  # We can now make a table which shows the ranking in each cluster:
  cp_consensus_df$cumprob <- NULL
  stats::reshape(cp_consensus_df, direction = "wide", idvar = "ranking",
                 timevar = "cluster",
                 varying = list(sort(unique(cp_consensus_df$cluster))))
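
  # An equivalent reshaping with tidyr, if preferred. This is a sketch, not
  # part of the original example, and only runs when tidyr is installed.
  if (requireNamespace("tidyr", quietly = TRUE)) {
    tidyr::pivot_wider(cp_consensus_df, names_from = cluster,
                       values_from = item)
  }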

## End(Not run)

## Not run: 
  # MAP CONSENSUS FOR PAIRWISE PREFERENCE DATA
  # We use the example dataset with beach preferences.
  model_fit <- compute_mallows(preferences = beach_preferences)
  # We set burnin = 1000
  model_fit$burnin <- 1000
  # We now compute the MAP consensus
  map_consensus_df <- compute_consensus(model_fit, type = "MAP")
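  # Inspecting the first rows of the result is purely illustrative and not
  # part of the original example:
  head(map_consensus_df)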

## End(Not run)

## Not run: 
  # CP CONSENSUS FOR AUGMENTED RANKINGS
  # We use the example dataset with beach preferences.
  model_fit <- compute_mallows(preferences = beach_preferences, save_aug = TRUE,
                               aug_thinning = 2, seed = 123L)
  # We set burnin = 1000
  model_fit$burnin <- 1000
  # We now compute the CP consensus of augmented ranks for assessors 1 and 3
  cp_consensus_df <- compute_consensus(model_fit, type = "CP",
                                       parameter = "Rtilde", assessors = c(1L, 3L))
  # We can also compute the MAP consensus for assessor 2
  map_consensus_df <- compute_consensus(model_fit, type = "MAP",
                                        parameter = "Rtilde", assessors = 2L)

  # Caution!
  # With very sparse data or with too few iterations, there may be ties in the MAP consensus
  # This is illustrated below for the case of only 5 post-burnin iterations. Two MAP rankings are
  # equally likely in this case (and for this seed).
  model_fit <- compute_mallows(preferences = beach_preferences, nmc = 1005,
                               save_aug = TRUE, aug_thinning = 1, seed = 123L)
  model_fit$burnin <- 1000
  compute_consensus(model_fit, type = "MAP", parameter = "Rtilde", assessors = 2L)
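
  # One way to detect such ties (a sketch, not part of the original example):
  # when several rankings attain the maximal posterior probability, the
  # returned data frame is assumed to contain one row per item for each tied
  # ranking, so it has more rows than there are items.
  tied_map_df <- compute_consensus(model_fit, type = "MAP",
                                   parameter = "Rtilde", assessors = 2L)
  nrow(tied_map_df)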

## End(Not run)
