Introduction to RKeOps

# knitr chunk options for this vignette
knitr::opts_chunk$set(
  collapse = TRUE,
  progress = TRUE,
  warning = FALSE
)

Authors

Feel free to contact us for any bug report or feature request; you can also file an issue on GitHub.

KeOps, PyKeOps, KeOpsLab

RKeOps

Contributors


What is RKeOps?

RKeOps is the R package interfacing the KeOps library. Slides explaining the functionalities of the KeOps library are available online.

KeOps

Seamless Kernel Operations on GPU (or CPU), with auto-differentiation and without memory overflows

The KeOps library (http://www.kernel-operations.io) provides routines to compute generic reductions of large 2d arrays whose entries are given by a mathematical formula. Using a C++/CUDA-based implementation with GPU support, it combines a tiled reduction scheme with an automatic differentiation engine. Relying on online map-reduce schemes, it is perfectly suited to the scalable computation of kernel dot products and the associated gradients, even when the full kernel matrix does not fit into the GPU memory.

KeOps is all about breaking through this memory bottleneck and making GPU power available for seamless standard mathematical routine computations. As of 2019, mainstream GPU frameworks have mostly focused on the operations needed to implement Convolutional Neural Networks: linear algebra routines and convolutions on grids, images and volumes. KeOps provides CPU and GPU support for your custom mathematical operators without the cost of developing a specific CUDA implementation.

To ensure its versatility, KeOps can be used through Matlab, Python (NumPy or PyTorch) and R back-ends.

RKeOps

RKeOps is a library that can compute generic reductions of very large arrays whose entries are given by a mathematical formula, on CPU or GPU, without memory overflow and with automatic differentiation of the corresponding operators.

Applications: RKeOps can be used to implement a wide range of problems encountered in machine learning, statistics and more, such as $k$-nearest neighbor classification (see the sketch below), $k$-means clustering, Gaussian-kernel-based problems (e.g. linear systems with Ridge regularization), etc.
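For instance, nearest-neighbor search boils down to an argmin reduction over squared distances. The following sketch uses the keops_kernel() interface presented later in this vignette; the dimension 3, the variable names and the matrix sizes are purely illustrative:

library(rkeops)
# sketch: for each query point x_i, find the index of the closest reference point y_j
formula_nn <- "ArgMin_Reduction(SqNorm2(x - y), 0)"
args_nn <- c("x = Vi(3)",    # query points, indexed by i (dim 3)
             "y = Vj(3)")    # reference points, indexed by j (dim 3)
op_nn <- keops_kernel(formula_nn, args_nn)
query <- matrix(runif(50 * 3), nrow = 50)        # 50 query points in R^3
reference <- matrix(runif(80 * 3), nrow = 80)    # 80 reference points in R^3
nn_index <- op_nn(list(query, reference))        # 50 x 1 matrix: nearest y_j for each x_i

The matrix of pairwise squared distances is never materialized: KeOps streams the reduction over the reference points.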

Why use RKeOps?

RKeOps provides an R interface to define custom operators based on generic mathematical formulas, to apply them to large data matrices on CPU or GPU without memory overflow, and to compute their gradients.


Matrix reduction and kernel operator

The general framework of RKeOps (and KeOps) is to provide fast and scalable matrix operations on GPU, in particular kernel-based computations of the form $$\underset{i=1,...,M}{\text{reduction}}\ G(\boldsymbol{\sigma}, \mathbf x_i, \mathbf y_j) \ \ \ \ \text{or}\ \ \ \ \underset{j=1,...,N}{\text{reduction}}\ G(\boldsymbol{\sigma}, \mathbf x_i, \mathbf y_j)$$ where $G$ is a user-defined formula, $\boldsymbol{\sigma} \in \mathbb{R}^L$ is a vector of parameters, and $\mathbf x_i \in \mathbb{R}^D$, $\mathbf y_j \in \mathbb{R}^{D'}$ are data vectors indexed by $i = 1,\dots,M$ and $j = 1,\dots,N$ respectively.

RKeOps creates (and compiles on the fly) an operator implementing your formula. You can apply it to your data, or compute its gradient with respect to some of its input variables.

Note: You can use a wide range of reductions, such as sum, min, argmin, max, argmax, etc.

What you need to do

To use RKeOps you only need to express your computation as a formula of the form above.

RKeOps lets you use a wide range of mathematical functions to define your operators (see https://www.kernel-operations.io/keops/api/math-operations.html).

You can use two types of input matrices with RKeOps: data matrices whose rows are indexed by $i = 1,\dots,M$ or by $j = 1,\dots,N$ (such as $\mathbf x_i$ or $\mathbf y_j$ above), and parameter vectors or scalars (such as $\boldsymbol{\sigma}$).

More details about input matrices (size, storage order) are given in the vignette 'Using RKeOps'.

Example in R

We want to implement with RKeOps the following mathematical formula $$\sum_{j=1}^{N} \exp\Big(-\sigma || \mathbf x_i - \mathbf y_j ||_2^{\,2}\Big)\,\mathbf b_j$$ with $\mathbf x_i \in \mathbb{R}^3$, $\mathbf y_j \in \mathbb{R}^3$, $\mathbf b_j \in \mathbb{R}^6$ for $i = 1,\dots,M$ and $j = 1,\dots,N$, and a scalar parameter $\sigma > 0$.

In R, we can define the corresponding KeOps formula as a simple text string:

formula = "Sum_Reduction(Exp(-s * SqNorm2(x - y)) * b, 0)"

and the corresponding arguments of the formula, i.e. parameters or variables indexed by $i$ or $j$ with their corresponding inner dimensions:

args = c("x = Vi(3)",      # vector indexed by i (of dim 3)
         "y = Vj(3)",      # vector indexed by j (of dim 3)
         "b = Vj(6)",      # vector indexed by j (of dim 6)
         "s = Pm(1)")      # parameter (scalar) 

Then we just compile the corresponding operator and apply it to some data:

library(rkeops)   # load the package (see the vignette 'Using RKeOps' for installation)
# compilation
op <- keops_kernel(formula, args)
# data and parameter values
nx <- 100
ny <- 150
X <- matrix(runif(nx*3), nrow=nx)   # matrix 100 x 3
Y <- matrix(runif(ny*3), nrow=ny)   # matrix 150 x 3
B <- matrix(runif(ny*6), nrow=ny)   # matrix 150 x 6
s <- 0.2
# computation (the input list must follow the same order as `args`)
res <- op(list(X, Y, B, s))
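As a quick sanity check (feasible here only because nx and ny are small), the same Gaussian convolution can be computed with dense matrices in base R; the result should match res up to floating-point precision:

# dense computation of the same reduction, for comparison with `res`
D2 <- as.matrix(dist(rbind(X, Y)))[1:nx, nx + (1:ny)]^2   # 100 x 150 squared distances
K <- exp(-s * D2)                                         # Gaussian kernel matrix
res_dense <- K %*% B                                      # 100 x 6, should match `res`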

Generic kernel function

With RKeOps, you can define kernel functions $K: \mathbb R^D \times \mathbb R^D \to \mathbb R$ such as, for some vectors $\mathbf x_i$, $\mathbf y_j\in \mathbb{R}^D$, the Gaussian RBF kernel $K(\mathbf x_i, \mathbf y_j) = \exp(-\sigma\,\|\mathbf x_i - \mathbf y_j\|_2^2)$ or the Laplacian kernel $K(\mathbf x_i, \mathbf y_j) = \exp(-\sigma\,\|\mathbf x_i - \mathbf y_j\|_2)$.

Then you can compute reductions based on such functions, especially when the $M \times N$ matrix $\mathbf K = [K(\mathbf x_i, \mathbf y_j)]$ is too large to fit into memory, such as the kernel matrix-vector product $\sum_j K(\mathbf x_i, \mathbf y_j)\,\mathbf b_j$ from the example above, or row-wise minima and maxima of $\mathbf K$.
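For instance, here is a minimal sketch (reusing X, Y and s from the example above; the operator name op_max is illustrative) computing the row-wise maxima of the Gaussian kernel matrix without ever materializing it:

# sketch: max over j of K(x_i, y_j) for each i, the 100 x 150 matrix K is never formed
op_max <- keops_kernel("Max_Reduction(Exp(-s * SqNorm2(x - y)), 0)",
                       c("x = Vi(3)", "y = Vj(3)", "s = Pm(1)"))
kmax <- op_max(list(X, Y, s))   # 100 x 1 matrix of row-wise maxima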

CPU and GPU computing

Based on your formulae, RKeOps compiles on the fly operators that run the corresponding computations on CPU or GPU. It uses a tiling scheme to decompose the data and avoid (i) useless and costly memory transfers between host and GPU (performance gain) and (ii) memory overflows.

Note: You can use the same code (i.e. define the same operators) for CPU or GPU computing. The only difference will be the compiler used to compile your operators (depending on the availability of CUDA on your system).

To use CPU computing mode, you can call use_cpu() (with an optional argument ncore specifying the number of cores used to run parallel computations).

To use GPU computing mode, you can call use_gpu() (with an optional argument device to choose a specific GPU id to run computations).
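For instance, here is a minimal sketch reusing formula and args from the example above (the ncore and device values are illustrative, and the GPU part requires a CUDA-enabled system):

# select the backend before compiling the operator
use_cpu(ncore = 2)                       # CPU mode, e.g. with 2 cores
op_cpu <- keops_kernel(formula, args)
res_cpu <- op_cpu(list(X, Y, B, s))

use_gpu(device = 0)                      # GPU mode on device id 0 (requires CUDA)
op_gpu <- keops_kernel(formula, args)
res_gpu <- op_gpu(list(X, Y, B, s))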


Installing and using RKeOps

See the dedicated vignette 'Using RKeOps'.


