View source: R/discretize_jointly.R
discretize.jointly    R Documentation
Discretize multivariate continuous data using a grid that captures the joint distribution by preserving clusters in the original data.

Usage
discretize.jointly(
data,
k = c(2:10),
min_level = 1,
max_level = 100,
cluster_method = c("Ball+BIC", "kmeans+silhouette", "PAM"),
grid_method = c("DP approx likelihood 1-way", "DP approx likelihood 2-way",
"DP exact likelihood", "DP Compressed majority", "DP", "Sort+split",
"MultiChannel.WUC"),
eval_method = c("ARI", "purity", "upsilon", "CAIR"),
cluster_label = NULL,
cutoff = 0,
entropy = FALSE,
noise = FALSE,
dim_reduction = FALSE,
scale = FALSE,
variance = 0.5,
nthread = 1
)
Arguments

data: a numeric matrix for multivariate data or a numeric vector for univariate data. In the case of a matrix, columns are continuous variables and rows are observations.

k: either an integer, an integer vector, or Inf, to specify the number of clusters. Default: c(2:10).

min_level: an integer or an integer vector to specify the minimum number of levels along each dimension. If a vector of size equal to the number of dimensions, each element applies to the corresponding dimension.

max_level: an integer or an integer vector to specify the maximum number of levels along each dimension. It works in the same way as min_level.

cluster_method: a character string to specify the clustering method to be used. Ignored if cluster_label is provided. Default: "Ball+BIC".

grid_method: a character string to specify the grid discretization method. Default: "DP approx likelihood 1-way".

eval_method: a character string to specify the method used to evaluate the quality of the discretized data.

cluster_label: a vector of cluster labels for each data point or observation, such as class labels on the input data. If provided, cluster_method is ignored.

cutoff: a numeric value. A grid line is added only when the quality of the line is not smaller than cutoff. Default: 0.

entropy: a logical to choose entropy (TRUE) as the quality measure. Default: FALSE.

noise: a logical to apply jitter noise to the original data if TRUE; see variance. Default: FALSE.

dim_reduction: a logical to turn dimension reduction on or off. Default: FALSE.

scale: a logical to apply linear scaling to the variable in each dimension if TRUE. Default: FALSE.

variance: a numeric value to specify the variance of the noise added to the data when noise = TRUE. Default: 0.5.

nthread: an integer to specify the number of CPU threads to use. Automatically adjusted if invalid or exceeding available cores.
Details

The function implements both the published algorithms described in Wang et al. (2020) and new algorithms for multivariate discretization.
The included grid discretization methods can be summarized into three categories:
By Density
"Sort+split" \insertCiteJwang2020BCBGridOnClusters
sorts clusters by mean in each dimension. It then
splits consecutive pairs only if the sum of error rate of each cluster is
less than or equal to 50%. It is possible that no grid line will be added
in a certain dimension. The maximum number of lines is the number of
clusters minus one.
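The idea can be sketched in a few lines of base R: sort the clusters by their projected means, place a candidate boundary at each midpoint between consecutive means, and keep it only if few points of the two adjacent clusters cross it. This is a simplified illustration, not the package's implementation.

```r
# Illustrative sketch of the "Sort+split" idea in one dimension
# (a simplification, not the package's implementation).
set.seed(1)
x <- c(rnorm(50, 0), rnorm(50, 5), rnorm(50, 10))  # three well-separated clusters
labels <- rep(1:3, each = 50)

means <- tapply(x, labels, mean)
ord <- order(means)                                # sort clusters by mean
m <- means[ord]
boundaries <- (m[-1] + m[-length(m)]) / 2          # midpoints between consecutive means

# Keep a boundary only if the summed error rate of the two
# adjacent clusters does not exceed 50%.
keep <- sapply(seq_along(boundaries), function(i) {
  a <- ord[i]; b <- ord[i + 1]
  err <- mean(x[labels == a] > boundaries[i]) + mean(x[labels == b] < boundaries[i])
  err <= 0.5
})
boundaries[keep]  # at most (number of clusters - 1) grid lines
```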
By SSE (Sum of Squared Errors)
"MultiChannel.WUC" splits each dimension by weighted with-in cluster
sum of squared distances by Ckmeans.1d.dp::MultiChannel.WUC(). Applied in
each projection on each dimension. The channel of each point is defined by
its multivariate cluster label.
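For instance, this method can be selected through the grid_method argument (a usage sketch assuming the GridOnClusters package is installed):

```r
library(GridOnClusters)

set.seed(1)
x <- rnorm(100); y <- sin(x); z <- cos(x)
data <- cbind(x, y, z)

# Discretize using weighted within-cluster sum of squares per dimension
res <- discretize.jointly(data, k = 3, grid_method = "MultiChannel.WUC")
res$grid  # decision boundaries in each of the three dimensions
```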
"DP" orders labels by data in each dimension and then cuts data
into a maximum of max_level bins. It evaluates the quality of each
cut to find a best number of bins.
"DP Compressed majority" orders labels by data in each dimension.
It then compresses labels neighbored by the same label to avoid
discretization within consecutive points of the same cluster label, so as to
greatly reduce runtime of dynamic programming. Then it cuts data into
a maximum of max_level bins, and it evaluates the quality of
each cut by the majority of data to find a best number of bins.
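The compression step can be illustrated with base R's rle(): consecutive points sharing a cluster label collapse into one run, so dynamic programming only needs to consider cuts between runs. This sketch shows the compression only, not the dynamic program itself.

```r
# Sketch of the label-compression step using base R's rle()
# (illustration only, not the package's implementation).
labels <- c(1, 1, 1, 2, 2, 1, 1, 3, 3, 3, 2, 2)  # labels already ordered by data

runs <- rle(labels)
runs$lengths  # run lengths: 3 2 2 3 2
runs$values   # run labels:  1 2 1 3 2

# Candidate cut points fall only between runs: 4 instead of the
# 11 possible cuts between 12 individual points.
cut_positions <- cumsum(runs$lengths)[-length(runs$lengths)]
cut_positions  # 3 5 7 10
```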
By cluster likelihood
"DP exact likelihood" orders labels by data in each dimension.
It then compresses labels neighbored by the same label to avoid
discretization within consecutive points of the same cluster label,
so as to greatly reduce runtime of dynamic programming.
Then cut the data into a maximum of max_level bins.
"DP approx likelihood 1-way" is a sped-up version of the
"DP exact likelihood" method, but it is not always optimal.
"DP approx likelihood 2-way" is a bidirectional variant of the
"DP approx likelihood" method. It performs approximate dynamic
programming in both the forward and backward directions and selects
the better of the two results. This approach provides additional robustness
compared to the one-directional version, but optimality is not always achieved.
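Because all methods share one interface, they can be compared on the same data by varying grid_method (a usage sketch assuming the GridOnClusters package is installed):

```r
library(GridOnClusters)

set.seed(1)
x <- rnorm(100); y <- sin(x)
data <- cbind(x, y)

methods <- c("Sort+split", "DP exact likelihood", "DP approx likelihood 1-way")
grids <- lapply(methods, function(m)
  discretize.jointly(data, k = 3, grid_method = m)$grid)
names(grids) <- methods
grids  # decision boundaries chosen by each method
```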
Value

A list that contains four items:
D: a matrix of discretized values obtained from the original data.

grid: a list of numeric vectors of decision boundaries for each variable/dimension.

clabels: a vector of cluster labels for each observation in data.

csimilarity: a similarity score between the clusters obtained from the joint discretization and the original clusters.
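The four components can be inspected directly on the returned list (a usage sketch assuming the GridOnClusters package is installed):

```r
library(GridOnClusters)

set.seed(1)
data <- cbind(x = rnorm(100), y = rnorm(100))
res <- discretize.jointly(data, k = 3)

str(res$D)          # matrix of discretized values
res$grid            # decision boundaries per dimension
table(res$clabels)  # sizes of the clusters found
res$csimilarity     # similarity between discretization and clusters
```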
Note

The default grid_method has changed
from "Sort+split" (Wang et al. 2020; up to released package version 0.1.0.2)
to "DP approx likelihood 1-way" (since version 0.3.2),
representing a major improvement.
Author(s)

Jiandong Wang, Sajal Kumar, and Mingzhou Song
See Also

See Ckmeans.1d.dp for discretizing univariate continuous data.
Examples

library(GridOnClusters)

# using a specified k
x = rnorm(100)
y = sin(x)
z = cos(x)
data = cbind(x, y, z)
discretized_data = discretize.jointly(data, k=5)$D
# using a range of k
x = rnorm(100)
y = log1p(abs(x))
z = tan(x)
data = cbind(x, y, z)
discretized_data = discretize.jointly(data, k=c(3:10))$D
# using k = Inf
x = c()
y = c()
mns = seq(0,1200,100)
for(i in 1:12){
x = c(x,runif(n=20, min=mns[i], max=mns[i]+20))
y = c(y,runif(n=20, min=mns[i], max=mns[i]+20))
}
data = cbind(x, y)
discretized_data = discretize.jointly(data, k=Inf)$D
# using an alternate clustering method to k-means
library(cluster)
x = rnorm(100)
y = log1p(abs(x))
z = sin(x)
data = cbind(x, y, z)
# pre-cluster the data using partition around medoids (PAM)
cluster_label = pam(x=data, diss = FALSE, metric = "euclidean", k = 5)$clustering
discretized_data = discretize.jointly(data, cluster_label = cluster_label)$D