Recursive partitioning method for the prediction of preference rankings based upon Kemeny distances.
An n by m data matrix, in which there are n judges and m objects to be judged. Each row is a ranking of the objects, which are represented by the columns.
A data frame containing the predictors; it must have n rows.
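A minimal sketch of the expected input format (illustrative data, not taken from the package): rows of the ranking matrix are judges, columns are objects, and equal values within a row denote ties.

```r
# Five judges rank three objects A, B, C (illustrative data)
rankings <- matrix(c(1, 2, 3,
                     1, 3, 2,
                     2, 1, 3,
                     1, 2, 2,   # objects B and C tied in second position
                     3, 2, 1),
                   nrow = 5, byrow = TRUE,
                   dimnames = list(NULL, c("A", "B", "C")))

# Predictors: one row per judge, in the same row order as 'rankings'
predictors <- data.frame(age    = c(23, 35, 41, 29, 52),
                         gender = factor(c("f", "m", "f", "f", "m")))

stopifnot(nrow(rankings) == nrow(predictors))
```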
If prunplot=TRUE, the plot of the pruning sequence is returned. Default value: FALSE
a list of options that control details of the algorithm
arguments passed bypassing control
The user can use any algorithm implemented in the
consrank function from the ConsRank package. All algorithms allow the user to set the option 'full=TRUE'
if the median ranking(s) must be searched in the restricted space of permutations instead of in the unconstrained universe of rankings of n items, which includes all possible ties.
The output consists of an object of class "ranktree". It contains:
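As a hedged sketch of the option described above: the call below assumes the ConsRank package is installed and that its consrank() function accepts the 'full' and 'ps' arguments as documented there; with full=TRUE the median ranking is searched among permutations only (no ties in the solution).

```r
# Small illustrative ranking matrix (three judges, three objects)
R <- matrix(c(1, 2, 3,
              1, 3, 2,
              2, 1, 3), nrow = 3, byrow = TRUE)

# Sketch only: requires the ConsRank package; argument names assumed
# from the ConsRank::consrank() documentation
if (requireNamespace("ConsRank", quietly = TRUE)) {
  cr <- ConsRank::consrank(R, full = TRUE, ps = FALSE)
  print(cr$Consensus)   # median ranking(s) restricted to permutations
}
```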
|Component|Description|
|---|---|
|X|the predictors: a data frame|
|Y|the response variable: the matrix of rankings|
|node|a list containing the tree-based structure:|
|node$terminal|logical: TRUE if the node is terminal|
|node$father|father node number of the current node|
|node$idfather|id of the father node of the current node|
|node$size|sample size within the node|
|node$impur|impurity at the node|
|node$wimpur|weighted impurity at the node|
|node$idatnode|id of the observations within the node|
|node$class|median ranking within the node, in terms of orderings|
|node$nclass|median ranking within the node, in terms of rankings|
|node$mclass|multiple median rankings, if any|
|node$tau|Tau_X rank correlation coefficient at the node|
|node$wtau|weighted Tau_X rank correlation coefficient at the node|
|node$error|error at the node|
|node$werror|weighted error at the node|
|node$varsplit|variable generating the split|
|node$varsplitid|id of the variable generating the split|
|node$children|children nodes generated by the current node|
|node$idchildren|id of the children nodes generated by the current node|
|node$...|other information about the node|
|control|parameters used to build the tree|
|numnodes|number of nodes of the tree|
|tsynt|list containing the synthesis of the tree:|
|tsynt$children|list containing all information about the leaves|
|tsynt$parents|list containing all information about the parent nodes|
|tsynt$genealogy|data frame containing information about all nodes|
|tsynt$idgenealogy|data frame containing information about all nodes in terms of node ids|
|tsynt$idparents|id of the parents of all the nodes|
|goodness|goodness (and badness) of fit measures of the tree: Tau_X, error, impurity|
|nomin|information about the nature of the predictors|
|alpha|alpha parameter for the pruning sequence|
|pruneinfo|list containing information about the pruning sequence:|
|pruneinfo$prunelist|information about the pruning|
|pruneinfo$tau|Tau_X rank correlation coefficient of each subtree|
|pruneinfo$error|error of each subtree|
|pruneinfo$termnodes|number of terminal nodes of each subtree|
|pruneinfo$subtrees|list of the subtrees created by the cost-complexity pruning procedure|
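A hedged sketch of inspecting a fitted object: it assumes this function's package (ConsRankClass on CRAN) is installed and uses the Irish data and the component names listed above; guard and timings aside, treat it as illustrative rather than canonical.

```r
# Sketch only: requires the ConsRankClass package and its Irish dataset
has_pkg <- requireNamespace("ConsRankClass", quietly = TRUE)
if (has_pkg) {
  library(ConsRankClass)
  data(Irish)
  tree <- ranktree(Irish$rankings, Irish$predictors)
  tree$numnodes   # number of nodes of the tree
  tree$goodness   # Tau_X, error and impurity of the tree
  tree$alpha      # alpha parameter for the pruning sequence
}
```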
An object of class "ranktree". See the Details section for more information.
Antonio D'Ambrosio firstname.lastname@example.org
D'Ambrosio, A., and Heiser, W.J. (2016). A recursive partitioning method for the prediction of preference rankings based upon Kemeny distances. Psychometrika, 81(3), 774-794.
data("Univranks")
tree <- ranktree(Univranks$rankings, Univranks$predictors, num=50)

data(Irish)
#build the tree with default options
tree <- ranktree(Irish$rankings, Irish$predictors)
#plot the tree
plot(tree, dispclass=TRUE)
#visualize information
summary(tree)
#get information about the paths leading to terminal nodes (all the paths)
infopaths <- treepaths(tree)
#the terminal nodes
infopaths$leaves
#sample size within each terminal node
infopaths$size
#visualize the path of the second leaf (terminal node number 8)
infopaths$paths[]
#alternatively
nodepath(termnode=8, tree)
set.seed(132) #for reproducibility
#validation of the tree via v-fold cross-validation (default value of V=5)
vtree <- validatetree(tree, method="cv")
#extract the "best" tree
dtree <- getsubtree(tree, vtree$best_tau)
summary(dtree)
#plot the validated tree
plot(dtree, dispclass=TRUE)
#predicted rankings
rankfit <- predict(dtree, newx=Irish$predictors)
#fit of rankings
rankfit$rankings
#fit in terms of orderings
rankfit$orderings
#all info about the fit (id of the leaf, predictor values, and fit)
rankfit$orderings