umap2 (R Documentation)
Carry out dimensionality reduction of a dataset using the Uniform Manifold Approximation and Projection (UMAP) method (McInnes et al., 2018).
umap2(
X,
n_neighbors = 15,
n_components = 2,
metric = "euclidean",
n_epochs = NULL,
learning_rate = 1,
scale = FALSE,
init = "spectral",
init_sdev = "range",
spread = 1,
min_dist = 0.1,
set_op_mix_ratio = 1,
local_connectivity = 1,
bandwidth = 1,
repulsion_strength = 1,
negative_sample_rate = 5,
a = NULL,
b = NULL,
nn_method = NULL,
n_trees = 50,
search_k = 2 * n_neighbors * n_trees,
approx_pow = FALSE,
y = NULL,
target_n_neighbors = n_neighbors,
target_metric = "euclidean",
target_weight = 0.5,
pca = NULL,
pca_center = TRUE,
pcg_rand = TRUE,
fast_sgd = FALSE,
ret_model = FALSE,
ret_nn = FALSE,
ret_extra = c(),
n_threads = NULL,
n_sgd_threads = 0,
grain_size = 1,
tmpdir = tempdir(),
verbose = getOption("verbose", TRUE),
batch = TRUE,
opt_args = NULL,
epoch_callback = NULL,
pca_method = NULL,
binary_edge_weights = FALSE,
dens_scale = NULL,
seed = NULL,
nn_args = list()
)
X
Input data. Can be a data.frame, matrix, dist object or sparse matrix. Matrices and data frames should contain one observation per row.
n_neighbors
The size of the local neighborhood (in terms of number of neighboring sample points) used for manifold approximation. Larger values result in more global views of the manifold, while smaller values result in more local data being preserved. In general, values should be in the range 2 to 100.
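For example (an illustrative sketch using the iris data), a small value emphasizes local structure while a larger value gives a more global view:

X <- as.matrix(iris[, 1:4])
local_view  <- umap2(X, n_neighbors = 5)   # emphasizes local structure
global_view <- umap2(X, n_neighbors = 50)  # more global view of the manifold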
n_components
The dimension of the space to embed into. This defaults to 2 to provide easy visualization, but can reasonably be set to any larger integer value.
metric
Type of distance metric to use to find nearest neighbors. The default is "euclidean"; the set of supported metrics depends on the nearest neighbor method in use. If rnndescent is installed and used as the nearest neighbor method, a much larger range of metrics is available; for more details see the documentation of that package. metric can also be a list mapping metric names to the column names or integer ids of X, so that different metrics are applied to different blocks of columns. Each metric calculation results in a separate fuzzy simplicial set, which are intersected together to produce the final set. Metric names can be repeated. Because non-numeric columns are removed from the data frame, it is safer to use column names than integer ids. Factor columns can also be used by specifying the metric name "categorical". For a given data block, you may override the pca and pca_center parameter values for that block.
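As a sketch (the exact set of supported metric names depends on the nearest neighbor backend installed, and the named-list syntax for per-block metrics is assumed here):

# a single alternative metric
res_cosine <- umap2(as.matrix(iris[, 1:4]), metric = "cosine")

# different metrics for different blocks of columns, using column names
res_mixed <- umap2(
  iris[, 1:4],
  metric = list(
    euclidean = c("Sepal.Length", "Sepal.Width"),
    cosine    = c("Petal.Length", "Petal.Width")
  )
)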
n_epochs
Number of epochs to use during the optimization of the embedded coordinates. By default, this value is set to 500 for datasets containing 10,000 or fewer items, and 200 otherwise.
learning_rate
Initial learning rate used in optimization of the coordinates.
scale
Scaling to apply to X if it is a data frame or matrix. The default is FALSE (no scaling); columns can instead be scaled to zero mean and unit variance, or to a common range.
init
Type of initialization for the coordinates. The default, "spectral", uses a spectral embedding of the fuzzy 1-skeleton; other options include "pca", "random", or a matrix of user-supplied initial coordinates. For spectral initializations, if more than one connected component is identified in the nearest neighbor graph, spectral initialization is not attempted and a PCA-based initialization is used instead.
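A brief sketch of overriding the default initialization, either with the "pca" option or with a user-supplied coordinate matrix:

X <- as.matrix(iris[, 1:4])
res_pca_init <- umap2(X, init = "pca")

# or supply your own starting coordinates (here, the first two principal components)
init_coords <- prcomp(X, rank. = 2)$x
res_custom  <- umap2(X, init = init_coords)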
init_sdev
If non-NULL, scales each dimension of the initialized coordinates (including any user-supplied matrix) to this standard deviation. The default, "range", instead scales each column of the initial coordinates to lie within [0, 10].
spread
The effective scale of embedded points. In combination with min_dist, this determines how clustered/clumped the embedded points are.
min_dist
The effective minimum distance between embedded points. Smaller values will result in a more clustered/clumped embedding where nearby points on the manifold are drawn closer together, while larger values will result in a more even dispersal of points. The value should be set relative to the spread value, which determines the scale at which embedded points will be spread out.
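For example, holding spread fixed and increasing min_dist gives a more evenly dispersed embedding (illustrative values only):

X <- as.matrix(iris[, 1:4])
clumped <- umap2(X, min_dist = 0.01, spread = 1)
even    <- umap2(X, min_dist = 0.5,  spread = 1)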
set_op_mix_ratio
Interpolate between (fuzzy) union and intersection as the set operation used to combine local fuzzy simplicial sets to obtain a global fuzzy simplicial set. Both fuzzy set operations use the product t-norm. The value of this parameter should be between 0.0 and 1.0; a value of 1.0 will use a pure fuzzy union, while 0.0 will use a pure fuzzy intersection.
local_connectivity
The local connectivity required, i.e. the number of nearest neighbors that should be assumed to be connected at a local level. The higher this value, the more connected the manifold becomes locally. In practice this should not be more than the local intrinsic dimension of the manifold.
bandwidth
The effective bandwidth of the kernel if we view the algorithm as similar to Laplacian Eigenmaps. Larger values induce more connectivity and a more global view of the data; smaller values concentrate more locally.
repulsion_strength
Weighting applied to negative samples in low dimensional embedding optimization. Values higher than one will result in greater weight being given to negative samples.
negative_sample_rate
The number of negative edge/1-simplex samples to use per positive edge/1-simplex sample in optimizing the low dimensional embedding.
a
More specific parameters controlling the embedding. If NULL, these values are set automatically as determined by min_dist and spread.
b
More specific parameters controlling the embedding. If NULL, these values are set automatically as determined by min_dist and spread.
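If a and b are left as NULL they are calculated from min_dist and spread; setting them directly looks like the sketch below (the values are illustrative, with a = b = 1 giving a Cauchy-like output kernel):

res_ab <- umap2(as.matrix(iris[, 1:4]), a = 1, b = 1)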
nn_method
Method for finding nearest neighbors. Options include "fnn" (exact nearest neighbors), "annoy", "hnsw" (requires the RcppHNSW package) and "nndescent" (requires the rnndescent package). By default, if RcppHNSW is installed it is used; otherwise rnndescent is used if installed; otherwise Annoy. You may also pass precomputed nearest neighbor data: either a list containing a matrix idx of integer neighbor indices and a matrix dist of distances (as returned when ret_nn = TRUE), or a sparse distance matrix of type dgCMatrix.
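Precomputed neighbor data returned via ret_nn can be reused here to avoid repeating the neighbor search when only optimization settings change, as in this sketch:

X <- as.matrix(iris[, 1:4])
res1 <- umap2(X, ret_nn = TRUE)                        # keep the neighbor data
res2 <- umap2(X, nn_method = res1$nn, min_dist = 0.5)  # reuse it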
n_trees
Number of trees to build when constructing the nearest neighbor index. The more trees specified, the larger the index, but the better the results. Together with search_k, this determines the accuracy of the Annoy nearest neighbor search; it is only used when Annoy is the nearest neighbor method.
search_k
Number of nodes to search during the neighbor retrieval. The larger k, the more accurate the results, but the longer the search takes. Together with n_trees, this determines the accuracy of the Annoy nearest neighbor search; it is only used when Annoy is the nearest neighbor method.
approx_pow
If TRUE, use an approximation to the pow() function in the UMAP gradient, which can speed up the optimization at a small cost in accuracy.
y
Optional target data for supervised dimension reduction. Can be a vector, matrix or data frame. Use the target_metric parameter to specify the metrics to use, using the same syntax as metric. Factor columns are treated as categorical targets.
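A minimal sketch of supervised dimension reduction with a factor target:

# use the species labels to guide the embedding
res_sup <- umap2(iris[, 1:4], y = iris$Species, target_weight = 0.5)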
target_n_neighbors
Number of nearest neighbors to use to construct the target simplicial set. Default value is n_neighbors. Applies only if y is non-NULL and numeric.
target_metric
The metric used to measure distance for y, if using supervised dimension reduction with numeric target data.
target_weight
Weighting factor between data topology and target topology. A value of 0.0 weights entirely on data, a value of 1.0 weights entirely on target. The default of 0.5 balances the weighting equally between data and target. Only applies if y is non-NULL.
pca
If set to a positive integer value, reduce data to this number of columns using PCA. This is not applied if the distance metric is "hamming", or if the data already has no more than this number of columns.
pca_center
If TRUE, center the columns of X before carrying out PCA. For binary data, it is recommended to set this to FALSE.
pcg_rand
If TRUE (the default), use the PCG random number generator (O'Neill, 2014) during optimization. Otherwise, use a faster (but probably less statistically good) generator.
fast_sgd
If TRUE, then several parameters are set to favor optimization speed over accuracy and reproducibility: multiple threads are used during stochastic gradient descent and faster approximations are used in the gradient calculation. For visualization purposes this can give a substantial speed-up with little visible difference in the embedding. The default is FALSE.
ret_model
If TRUE, then return extra data that can be used to add new data to an existing embedding via umap_transform. The embedded coordinates are returned as the list item embedding.
ret_nn
If TRUE, then in addition to the embedding, also return the nearest neighbor data as the list item nn. This can be passed to nn_method in subsequent runs to avoid repeating the neighbor search.
ret_extra
A vector indicating what extra data to return. May contain any combination of the following strings: "model" (the same as setting ret_model = TRUE), "nn" (the same as setting ret_nn = TRUE), "fgraph" (the high dimensional fuzzy graph), "sigma" (the smooth knn distance normalization terms, along with "rho") and "localr" (the estimated local radii).
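For example (a sketch; the component names under nn are assumed to follow the metric name):

X <- as.matrix(iris[, 1:4])
res <- umap2(X, ret_extra = c("nn", "fgraph", "sigma"))
head(res$embedding)              # the optimized coordinates
res$nn$euclidean$idx[1:3, 1:5]   # neighbor indices for the default metric
dim(res$fgraph)                  # high dimensional fuzzy graph (sparse matrix)
head(res$sigma); head(res$rho)   # smooth knn normalization terms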
n_threads
Number of threads to use (except during stochastic gradient descent). Default is half the number of concurrent threads supported by the system. Threads are also used during the nearest neighbor search, depending on the nearest neighbor method in use.
n_sgd_threads
Number of threads to use during stochastic gradient descent. If set to a value greater than 1, then be aware that if batch = FALSE, results will not be reproducible, even if a fixed seed is set.
grain_size
The minimum amount of work to do on each thread. If this value is set high enough, then fewer than n_threads (or n_sgd_threads) threads will be used for processing, which might give a performance improvement if the overhead of thread management outweighs the gain from concurrent processing.
tmpdir
Temporary directory to store nearest neighbor indexes during nearest neighbor search. Default is tempdir(). The index is only written to disk for a multi-threaded Annoy search; otherwise, this parameter is ignored.
verbose
If TRUE, log details to the console.
batch
If TRUE (the default), then embedding coordinates are updated at the end of each epoch rather than during the epoch. In batch mode, results are reproducible with a fixed random seed even with n_sgd_threads > 1, at the cost of a slightly higher memory use.
opt_args
A list of optimizer parameters, used when batch = TRUE. The default optimizer is Adam (Kingma & Ba, 2014).
epoch_callback
A function which will be invoked at the end of every epoch. Its signature should be epoch_callback(epoch, n_epochs, coords), where epoch is the current epoch number (between 1 and n_epochs), n_epochs is the total number of epochs that will be run, and coords is the output coordinates so far, as a matrix.
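A small sketch of a callback, assuming the (epoch, n_epochs, coords) signature described above:

# print progress every 50 epochs
progress_cb <- function(epoch, n_epochs, coords) {
  if (epoch %% 50 == 0) {
    message("epoch ", epoch, "/", n_epochs,
            "; coordinate range: ",
            paste(signif(range(coords), 3), collapse = " to "))
  }
}
res <- umap2(as.matrix(iris[, 1:4]), n_epochs = 200, epoch_callback = progress_cb)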
pca_method
Method to carry out any PCA dimensionality reduction when the pca parameter is specified. If NULL (the default), a suitable method is chosen automatically based on the dimensions of the data and the available packages.
binary_edge_weights
If TRUE, then edge weights in the input graph are treated as binary (0/1) rather than real valued. This affects the sampling frequency of neighbors and is the strategy used by the PaCMAP method (Wang and co-workers, 2021).
dens_scale
A value between 0 and 1. If greater than 0, the output attempts to preserve relative local density around each observation. This uses an approximation to the densMAP method (Narayan and co-workers, 2021). The larger the value of dens_scale, the greater the range of output densities that will be used to map the relative local densities of the input.
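For example (an illustrative value only):

X <- as.matrix(iris[, 1:4])
res_plain <- umap2(X)                    # no density preservation
res_dens  <- umap2(X, dens_scale = 0.5)  # partially preserve relative local density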
seed
Integer seed to use to initialize the random number generator state. Combined with n_sgd_threads = 1 or batch = TRUE, this should give consistent output across runs on a given installation. Setting this value is equivalent to calling set.seed, but it may be more convenient in some situations than having to call a separate function. The default is to not set a seed.
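A sketch of using seed for repeatable output (with batch = TRUE, the default, results should match across runs on a given installation):

X <- as.matrix(iris[, 1:4])
res_a <- umap2(X, seed = 42)
res_b <- umap2(X, seed = 42)
all.equal(res_a, res_b)  # expected to be TRUE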
nn_args
A list containing additional arguments to pass to the nearest neighbor method. The accepted arguments depend on the value of nn_method: for example, Annoy accepts index building and search parameters (such as the number of trees), while HNSW and nearest neighbor descent have their own index and search settings. See the documentation of the relevant backend package for details.
This function behaves like umap except with some updated defaults that make it behave more like the Python implementation and which cannot be added to umap without breaking backwards compatibility. In addition:

- if RcppHNSW is installed, it will be used in preference to Annoy if a compatible metric is requested.
- if RcppHNSW is not present, but rnndescent is installed, it will be used in preference to Annoy if a compatible metric is requested.
- if batch = TRUE, then the default n_sgd_threads is set to the same value as n_threads.
- if the input data X is a sparse matrix, it is interpreted similarly to a dense matrix or data frame, and not as a distance matrix. This requires the rnndescent package to be installed.
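A sketch of passing sparse input data (treated as data, not as distances), which needs the rnndescent package:

if (requireNamespace("rnndescent", quietly = TRUE)) {
  # sparse data input is interpreted like a dense matrix, not a distance matrix
  X_sparse <- Matrix::Matrix(as.matrix(iris[, 1:4]), sparse = TRUE)
  res_sparse <- umap2(X_sparse)
}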
A matrix of optimized coordinates, or:

- if ret_model = TRUE (or ret_extra contains "model"), returns a list containing extra information that can be used to add new data to an existing embedding via umap_transform (see the sketch after this list). In this case, the coordinates are available in the list item embedding. NOTE: The contents of the model list should not be considered stable or part of the public API, and are purposely left undocumented.
- if ret_nn = TRUE (or ret_extra contains "nn"), returns the nearest neighbor data as a list called nn. This contains one list for each metric calculated, itself containing a matrix idx with the integer ids of the neighbors, and a matrix dist with the distances. The nn list (or a sub-list) can be used as input to the nn_method parameter.
- if ret_extra contains "fgraph", returns the high dimensional fuzzy graph as a sparse matrix called fgraph, of type dgCMatrix.
- if ret_extra contains "sigma", returns a vector of the smooth knn distance normalization terms for each observation as "sigma" and a vector "rho" containing the largest distance to the locally connected neighbors of each observation.
- if ret_extra contains "localr", returns a vector of the estimated local radii, the sum of "sigma" and "rho".

The returned list contains the combined data from any combination of specifying ret_model, ret_nn and ret_extra.
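For example, a model returned with ret_model = TRUE can be used to embed new observations via umap_transform (a sketch):

# fit a model on part of the data, then project the rest
train <- as.matrix(iris[1:100, 1:4])
test  <- as.matrix(iris[101:150, 1:4])

model <- umap2(train, ret_model = TRUE)
head(model$embedding)                       # coordinates of the training data
test_coords <- umap_transform(test, model)  # embed the new observations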
Belkin, M., & Niyogi, P. (2002). Laplacian eigenmaps and spectral techniques for embedding and clustering. In Advances in neural information processing systems (pp. 585-591). http://papers.nips.cc/paper/1961-laplacian-eigenmaps-and-spectral-techniques-for-embedding-and-clustering.pdf
Böhm, J. N., Berens, P., & Kobak, D. (2020). A unifying perspective on neighbor embeddings along the attraction-repulsion spectrum. arXiv preprint arXiv:2007.08902. https://arxiv.org/abs/2007.08902
Damrich, S., & Hamprecht, F. A. (2021). On UMAP's true loss function. Advances in Neural Information Processing Systems, 34. https://proceedings.neurips.cc/paper/2021/hash/2de5d16682c3c35007e4e92982f1a2ba-Abstract.html
Dong, W., Moses, C., & Li, K. (2011, March). Efficient k-nearest neighbor graph construction for generic similarity measures. In Proceedings of the 20th international conference on World Wide Web (pp. 577-586). ACM. https://doi.org/10.1145/1963405.1963487
Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. https://arxiv.org/abs/1412.6980
Malkov, Y. A., & Yashunin, D. A. (2018). Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. IEEE transactions on pattern analysis and machine intelligence, 42(4), 824-836.
McInnes, L., Healy, J., & Melville, J. (2018). UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction arXiv preprint arXiv:1802.03426. https://arxiv.org/abs/1802.03426
Narayan, A., Berger, B., & Cho, H. (2021). Assessing single-cell transcriptomic variability through density-preserving data visualization. Nature biotechnology, 39(6), 765-774. https://doi.org/10.1038/s41587-020-00801-7
O’Neill, M. E. (2014). PCG: A family of simple fast space-efficient statistically good algorithms for random number generation (Report No. HMC-CS-2014-0905). Harvey Mudd College.
Tang, J., Liu, J., Zhang, M., & Mei, Q. (2016, April). Visualizing large-scale and high-dimensional data. In Proceedings of the 25th International Conference on World Wide Web (pp. 287-297). International World Wide Web Conferences Steering Committee. https://arxiv.org/abs/1602.00370
Van der Maaten, L., & Hinton, G. (2008). Visualizing data using t-SNE. Journal of Machine Learning Research, 9 (2579-2605). https://www.jmlr.org/papers/v9/vandermaaten08a.html
Wang, Y., Huang, H., Rudin, C., & Shaposhnik, Y. (2021). Understanding How Dimension Reduction Tools Work: An Empirical Approach to Deciphering t-SNE, UMAP, TriMap, and PaCMAP for Data Visualization. Journal of Machine Learning Research, 22(201), 1-73. https://www.jmlr.org/papers/v22/20-1061.html
iris30 <- iris[c(1:10, 51:60, 101:110), ]
iris_umap <- umap2(iris30, n_neighbors = 5)
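The embedding can be plotted directly, for example colored by species (a small illustrative addition to the example above):

plot(iris_umap, col = iris30$Species, pch = 19,
     xlab = "UMAP 1", ylab = "UMAP 2")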