#' @title ZoomOut: Spectral Upsampling for Efficient Correspondence Refinement
#'
#' @description
#' Implements the ZoomOut algorithm described in:
#'
#' **ZoomOut: Spectral Upsampling for Efficient Shape Correspondence**
#' Simone Melzi*, Jing Ren*, Emanuele Rodolà, Abhishek Sharma, Peter Wonka, and Maks Ovsjanikov.
#' ACM Transactions on Graphics (TOG), Volume 38, Number 6, Article 155, 2019.
#'
#' ZoomOut refines a given initial correspondence (encoded as a functional map) between two domains
#' (originally shapes). Here, we present a generalized version that takes two arbitrary undirected graphs
#' with weighted adjacency matrices as input. By leveraging the Laplacian eigen-decomposition of these graphs,
#' ZoomOut iteratively improves the input functional map, upsampling it in the spectral domain.
#'
#' The key idea (see Section 4 of the paper) is to iteratively:
#' 1. Convert the current functional map \eqn{C_k} into a pointwise map \eqn{T} (Eq. (2) in the paper),
#' by matching each node in the source domain to its nearest neighbor in the target domain's spectral embedding.
#' 2. Convert the obtained pointwise map \eqn{T} back into a larger functional map \eqn{C_{k+1}} (Eq. (1) in the paper).
#'
#' Iterating these two steps refines and "upsamples" the functional map, often yielding a more accurate correspondence.
#'
#' @param W_M A symmetric \code{nM x nM} weighted adjacency matrix of the source graph (sparse Matrix recommended).
#' Entries should be nonnegative edge weights, and diagonal entries should be zero.
#' @param W_N A symmetric \code{nN x nN} adjacency matrix of the target graph (sparse Matrix recommended).
#' Same format as W_M.
#' @param C0 A \code{k0 x k0} initial functional map, typically with a small \eqn{k0} (e.g. 4-20).
#' This can be derived from an initial correspondence or from low-frequency descriptor alignment.
#' @param k_max The maximum size of the functional map to upsample to. Must be >= k0.
#' @param step Increment step for increasing the spectral dimension at each iteration. Default is 1.
#' Larger step sizes can speed up computation but may lead to less stable convergence.
#' @param normalized_laplacian Logical, whether to use the normalized graph Laplacian. Default \code{FALSE}.
#' @param ncv Number of Lanczos vectors used in the eigen-decomposition (passed to RSpectra::eigs_sym
#' via its \code{opts} argument and capped at the number of nodes). Default is \code{2*k_max + 10}.
#' Increase it if the eigen-decomposition fails to converge or if you want more stable results.
#' @param approx_nn Logical, whether to allow approximate nearest-neighbour search (RANN::nn2 with a
#' nonzero \code{eps} error bound) instead of exact search. Default \code{FALSE}.
#' @param verbose Logical, print progress messages. Default \code{FALSE}.
#'
#' @details
#' **Key Steps:**
#'
#' 1. Compute the Laplacian eigen-decomposition of both graphs up to \eqn{k_{max}} dimensions.
#' Let \eqn{\Phi_M^k} and \eqn{\Phi_N^k} be the first \eqn{k} eigenvectors of each graph Laplacian.
#'
#' 2. Given the current functional map \eqn{C_k}, compute the pointwise map \eqn{T} as in Eq. (2) of the paper:
#' \deqn{T(p) = \arg\min_q \| C_k (\Phi_N(q,:))^T - \Phi_M(p,:)^T \|_2.}
#' Here \eqn{\Phi_M(p,:)} is the spectral coordinate of node \eqn{p} in the source graph, and similarly for \eqn{\Phi_N(q,:)}.
#'
#' 3. Convert the pointwise map \eqn{T} back into a larger functional map of size \eqn{(k+1) \times (k+1)}
#' (or \eqn{(k+\mathrm{step}) \times (k+\mathrm{step})} for step sizes larger than 1) as in Eq. (1):
#' \deqn{C_{k+1} = (\Phi_M^{k+1})^+ \Pi \Phi_N^{k+1},}
#' where \eqn{\Pi} is the binary matrix encoding \eqn{T} (with \eqn{\Pi(p, T(p)) = 1} and zeros elsewhere),
#' and \eqn{(\cdot)^+} denotes the Moore-Penrose pseudoinverse.
#'
#' 4. Iterate until \eqn{k = k_{max}}.
#'
#' See Section 4 of the paper and especially Eqs. (1) and (2).
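#'
#' A schematic R sketch of one such iteration, mirroring the implementation below
#' (\code{Phi_M} and \code{Phi_N} denote the eigenvector matrices, \code{C_k} the current map,
#' \code{k} the current dimension, and \code{T_map} the recovered pointwise map):
#' \preformatted{
#' # Step 1 (Eq. 2): functional map -> pointwise map, via nearest neighbours in spectral space
#' E      <- tcrossprod(Phi_N[, 1:k], C_k)   # each row: C_k applied to a target spectral coordinate
#' T_map  <- RANN::nn2(E, query = Phi_M[, 1:k], k = 1)$nn.idx[, 1]
#' # Step 2 (Eq. 1): pointwise map -> larger functional map; applying Pi amounts to selecting rows of Phi_N
#' C_next <- crossprod(Phi_M[, 1:(k+1)], Phi_N[T_map, 1:(k+1)])
#' }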
#'
#' **Complexity & Sparse Matrices:**
#' Using sparse matrices for W_M and W_N is recommended, especially for large graphs.
#' The eigen-decomposition step (via RSpectra) will be efficient if W is sparse.
#' The iterative refinement steps mainly involve nearest neighbor queries in \eqn{k}-dimensional space,
#' whose complexity grows with \eqn{k}.
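#'
#' A minimal sketch of preparing sparse inputs before calling \code{zoomout} (assuming dense,
#' symmetric adjacency matrices \code{WM} and \code{WN} as in the example section below):
#' \preformatted{
#' WM_sparse <- Matrix::Matrix(WM, sparse = TRUE)   # coerce dense adjacency to sparse storage
#' WN_sparse <- Matrix::Matrix(WN, sparse = TRUE)
#' res <- zoomout(WM_sparse, WN_sparse, C0 = diag(5), k_max = 15)
#' }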
#'
#' @references
#' Melzi, S., Ren, J., Rodolà, E., Sharma, A., Wonka, P., & Ovsjanikov, M. (2019). "ZoomOut: Spectral Upsampling for Efficient Shape Correspondence." ACM Transactions on Graphics (TOG), 38(6), 155.
#'
#' @return A list with elements:
#' \item{C_final}{A \eqn{k_{max} \times k_{max}} refined functional map.}
#' \item{T_final}{A vector of length \eqn{nM} giving the final pointwise correspondences from graph M to graph N.}
#' \item{Phi_M}{The first \eqn{k_{max}} eigenvectors of the source graph Laplacian.}
#' \item{Phi_N}{The first \eqn{k_{max}} eigenvectors of the target graph Laplacian.}
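#'
#' As a usage sketch, a node signal \code{f_N} (a hypothetical numeric vector of length \code{nN})
#' defined on the target graph can be pulled back to the source graph through the returned
#' correspondences (here \code{res} denotes the returned list):
#' \preformatted{
#' f_on_M <- f_N[res$T_final]   # value of f_N at the node matched to each source node
#' }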
#'
#' @importFrom Matrix Matrix t Diagonal sparseMatrix rowSums
#' @importFrom RSpectra eigs_sym
#' @importFrom RANN nn2
#' @examples
#' \dontrun{
#' # Example with small random graphs
#' set.seed(123)
#' nM <- 50
#' nN <- 60
#' # Random adjacency for M
#' WM <- matrix(runif(nM*nM,0,1), nM, nM)
#' WM[WM<0.95] <- 0
#' WM <- (WM + t(WM))/2
#' diag(WM) <- 0
#' # Random adjacency for N
#' WN <- matrix(runif(nN*nN,0,1), nN, nN)
#' WN[WN<0.95] <- 0
#' WN <- (WN + t(WN))/2
#' diag(WN) <- 0
#'
#' # Initial small functional map (say k0=5)
#' C0 <- diag(5)
#'
#' # Run ZoomOut up to k_max=15
#' res <- zoomout(WM, WN, C0, k_max=15, step=2, verbose=TRUE)
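#'
#' # Inspect the output: C_final is a k_max x k_max refined map, and T_final maps each
#' # node of WM to the index of its matched node in WN
#' dim(res$C_final)
#' head(res$T_final)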
#' }
#'
zoomout <- function(W_M, W_N, C0, k_max, step = 1, normalized_laplacian = FALSE,
                    ncv = 2 * k_max + 10, approx_nn = FALSE, verbose = FALSE) {
  #### Helper functions ####
  # Compute the (optionally normalized) Laplacian eigen-decomposition for the first k eigenpairs.
  # Reference: Sec. 3 of the paper for spectral basis computation.
  # Unnormalized: L = D - W; normalized: L = I - D^{-1/2} W D^{-1/2}.
  compute_eigs <- function(W, k) {
    n <- nrow(W)
    d <- Matrix::rowSums(W)
    # RSpectra requires ncv <= n; cap it for small graphs.
    ncv_use <- min(ncv, n)
    if(normalized_laplacian) {
      # Normalized Laplacian: L = I - M with M = D^{-1/2} W D^{-1/2}.
      # L and M share eigenvectors, and the k smallest eigenvalues of L correspond to the
      # k largest (algebraic) eigenvalues of M, so we decompose the sparse matrix M instead of L.
      inv_sqrt_d <- 1 / sqrt(d + .Machine$double.eps)
      inv_sqrt_D <- Matrix::Diagonal(n, inv_sqrt_d)
      M <- inv_sqrt_D %*% W %*% inv_sqrt_D
      res <- RSpectra::eigs_sym(M, k = k, which = "LA", opts = list(ncv = ncv_use))
      values <- 1 - res$values
      vectors <- res$vectors
    } else {
      # Unnormalized Laplacian: L = D - W (symmetric, positive semi-definite);
      # take the k smallest eigenpairs directly.
      L <- Matrix::Diagonal(n, d) - W
      res <- RSpectra::eigs_sym(L, k = k, which = "SM", opts = list(ncv = ncv_use))
      values <- res$values
      vectors <- res$vectors
    }
    # Sort eigenpairs by increasing Laplacian eigenvalue so that the leading columns form the
    # low-frequency basis used by ZoomOut.
    # Note: the functional map framework assumes a basis orthonormal w.r.t. an inner product
    # (area weights on meshes); with no node weights given, standard orthonormal eigenvectors are used.
    ord <- order(values)
    return(list(values = values[ord], vectors = vectors[, ord, drop = FALSE]))
  }
  # Given a functional map C (k x k) and eigenvectors Phi_M (nM x k), Phi_N (nN x k),
  # compute the pointwise map T: T(p) = argmin_q || C Phi_N(q,:)^T - Phi_M(p,:)^T ||_2
  # (Eq. (2) in the paper).
  compute_pointwise_map <- function(C, Phi_M, Phi_N, approx_nn = FALSE) {
    # Row q of E = Phi_N C^T is the image of Phi_N(q,:) under C, so Eq. (2) becomes a
    # nearest-neighbour search of the rows of Phi_M among the rows of E in k-dimensional space.
    E <- as.matrix(Phi_N %*% t(C))
    Phi_M <- as.matrix(Phi_M)
    if(approx_nn) {
      # Approximate kd-tree search: a nonzero error bound (eps) trades accuracy for speed;
      # the tolerance 0.1 is a heuristic choice.
      nn_res <- RANN::nn2(E, query = Phi_M, k = 1, eps = 0.1)
    } else {
      # Exact kd-tree search (the RANN default, eps = 0). Cost grows with both the number
      # of nodes and the spectral dimension k.
      nn_res <- RANN::nn2(E, query = Phi_M, k = 1)
    }
    return(as.vector(nn_res$nn.idx))
  }
  # Given a pointwise map T_map (length nM), build the binary matrix Pi (nM x nN) with
  # Pi[i, T_map[i]] = 1 and zeros elsewhere (not necessarily a permutation, since nM and nN
  # may differ), then compute the new functional map C = (Phi_M)^+ Pi Phi_N (Eq. (1) in the paper).
  # The eigenvectors returned by RSpectra are orthonormal w.r.t. the standard inner product,
  # so the pseudoinverse (Phi_M)^+ reduces to t(Phi_M).
  update_functional_map <- function(T_map, Phi_M, Phi_N) {
    nM <- length(T_map)
    nN <- nrow(Phi_N)
    Pi <- Matrix::sparseMatrix(i = seq_len(nM), j = T_map, x = 1, dims = c(nM, nN))
    C_new <- as.matrix(t(Phi_M) %*% Pi %*% Phi_N)
    return(C_new)
  }
  #### Main Code ####
  # Coerce adjacency matrices to sparse Matrix form (recommended for large graphs).
  W_M <- Matrix::Matrix(W_M, sparse = TRUE)
  W_N <- Matrix::Matrix(W_N, sparse = TRUE)
  nM <- nrow(W_M)
  nN <- nrow(W_N)
  if(verbose) cat("Computing eigen-decomposition of source graph...\n")
  eigM <- compute_eigs(W_M, k = k_max)
  Phi_M_full <- eigM$vectors
  if(verbose) cat("Computing eigen-decomposition of target graph...\n")
  eigN <- compute_eigs(W_N, k = k_max)
  Phi_N_full <- eigN$vectors
  # Validate the initial functional map C0 (must be square and no larger than k_max)
  if(nrow(C0) != ncol(C0)) stop("C0 must be a square matrix.")
  k0 <- nrow(C0)
  if(k0 > k_max) stop("Initial C0 dimension larger than k_max.")
  # Starting functional map
  C_current <- C0
  k_current <- k0
  if(verbose) cat("Starting ZoomOut refinement from k0 =", k0, "to k_max =", k_max, "with step =", step, "...\n")
  while(k_current < k_max) {
    k_next <- min(k_current + step, k_max)
    if(verbose) cat("Upsampling from dimension k =", k_current, "to", k_next, "...\n")
    # Extract the current sub-bases
    Phi_M <- Phi_M_full[, 1:k_current, drop = FALSE]
    Phi_N <- Phi_N_full[, 1:k_current, drop = FALSE]
    # Step 1: Convert C_current to a pointwise map (Eq. (2) in the paper)
    T_map <- compute_pointwise_map(C_current, Phi_M, Phi_N, approx_nn = approx_nn)
    # Step 2: Convert the pointwise map to the larger map C_{k_next} (Eq. (1) in the paper)
    Phi_M_next <- Phi_M_full[, 1:k_next, drop = FALSE]
    Phi_N_next <- Phi_N_full[, 1:k_next, drop = FALSE]
    C_current <- update_functional_map(T_map, Phi_M_next, Phi_N_next)
    k_current <- k_next
  }
  # Final step: compute the pointwise map induced by the refined functional map
  Phi_M_final <- Phi_M_full[, 1:k_max, drop = FALSE]
  Phi_N_final <- Phi_N_full[, 1:k_max, drop = FALSE]
  T_final <- compute_pointwise_map(C_current, Phi_M_final, Phi_N_final, approx_nn = approx_nn)
  if(verbose) cat("ZoomOut refinement completed.\n")
  return(list(
    C_final = C_current,
    T_final = T_final,
    Phi_M = Phi_M_final,
    Phi_N = Phi_N_final
  ))
}