# lltm

The goal of lltm is to be a minimal example of an extension for torch that interfaces directly with LibTorch, the C++ library underlying torch.

In this package we provide an implementation of a new recurrent unit that is similar to an LSTM but lacks a forget gate and uses an Exponential Linear Unit (ELU) as its internal activation function. Because this unit never forgets, we'll call it LLTM, or Long-Long-Term-Memory unit.
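In equations (using our own symbol names, which are not part of the original tutorial), the forward pass implemented by the C++ code shown further down in this README is roughly:

$$
\begin{aligned}
X &= [h_{t-1},\ x_t] \\
[i_t,\ o_t,\ g_t] &= X W^\top + b \\
c_t &= c_{t-1} + \sigma(i_t) \odot \mathrm{ELU}(g_t) \\
h_t &= \tanh(c_t) \odot \sigma(o_t)
\end{aligned}
$$

where $\sigma$ is the sigmoid function, $\odot$ is element-wise multiplication, and $[\cdot,\ \cdot]$ denotes concatenation along the feature dimension.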

The example implemented here is a port of the official PyTorch tutorial on custom C++ and CUDA extensions.

## High-level overview

Writing C++ extensions for torch requires us to coordinate communication between multiple agents in the torch ecosystem. The following diagram is a high-level overview of how they communicate in this package.

On the torch package side, the agents that appear are:

On the extension side, the actors are:

## Project structure

### csrc: Implementing the operators and their C wrappers

The `// [[torch::export]]` marks allow `torchexport`, which is called when building with CMake, to autogenerate the C wrappers necessary to handle errors and to correctly pass data between this library and the R package.

```cpp
// [[torch::export]]
std::vector<torch::Tensor> lltm_forward(
    torch::Tensor input,
    torch::Tensor weights,
    torch::Tensor bias,
    torch::Tensor old_h,
    torch::Tensor old_cell) {
  auto X = torch::cat({old_h, input}, /*dim=*/1);

  auto gate_weights = torch::addmm(bias, X, weights.transpose(0, 1));
  auto gates = gate_weights.chunk(3, /*dim=*/1);

  auto input_gate = torch::sigmoid(gates[0]);
  auto output_gate = torch::sigmoid(gates[1]);
  auto candidate_cell = torch::elu(gates[2], /*alpha=*/1.0);

  auto new_cell = old_cell + candidate_cell * input_gate;
  auto new_h = torch::tanh(new_cell) * output_gate;

  return {new_h,
          new_cell,
          input_gate,
          output_gate,
          candidate_cell,
          X,
          gate_weights};

}
```

On Windows, the exported C wrappers also need to be listed in a module-definition (`.def`) file so that they are exported from the resulting DLL. For example, the current definition is:

```
LIBRARY LLTM
EXPORTS
  _lltm_forward
  _lltm_backward
```

The library implemented in csrc can be compiled with CMake. We use the following commands to compile and install it locally:

```
cd csrc && mkdir build && cd build
cmake .. && cmake --build . --target install --config Release
```

### src: Wrapping the library with Rcpp

Now that we have implemented the operators, we can write the Rcpp wrappers that will allow us to call them from R.

You can find all the types available in the `torch` namespace when you include `<torch.h>`.

```cpp
#include <Rcpp.h>
#define LLTM_HEADERS_ONLY // should only be defined in a single file
#include <lltm/lltm.h>
#define TORCH_IMPL // should only be defined in a single file
#define IMPORT_TORCH // should only be defined in a single file
#include <torch.h>
```
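To give a flavor of what such a wrapper looks like, the sketch below exposes `lltm_forward` to R with Rcpp. This is only an illustration, not the package's actual wrapper: the wrapper name `rcpp_lltm_forward` and the `std::vector<torch::Tensor>` return type are assumptions, and the sketch relies on `<torch.h>` providing the Rcpp conversions for the torch types.

```cpp
// Illustrative Rcpp wrapper (hypothetical name and return type).
// The [[Rcpp::export]] attribute makes the function callable from R;
// the torch types declared in <torch.h> take care of converting between
// R tensor objects and LibTorch tensors.
// [[Rcpp::export]]
std::vector<torch::Tensor> rcpp_lltm_forward(torch::Tensor input,
                                             torch::Tensor weights,
                                             torch::Tensor bias,
                                             torch::Tensor old_h,
                                             torch::Tensor old_cell) {
  // Simply forward the call to the operator implemented in csrc.
  return lltm_forward(input, weights, bias, old_h, old_cell);
}
```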

### R API

Now that the Rcpp wrappers are implemented and exported, you have access to `lltm_forward` on the R side.

## Packaging

It's not trivial to package torch extensions because they can't be entirely built on CRAN machines. We would need to include pre-built binaries in the package tarball, but for security reasons that's not accepted on CRAN.

In this package we implement a suggested way of packaging torch extensions that makes it easy for users to install your package without custom installation steps or building libraries from source. The diagram below shows an overview of the packaging process.

## Installation

~~You can install the released version of lltm from CRAN with:~~

install.packages("lltm")

And the development version from GitHub with:

# install.packages("devtools")
devtools::install_github("mlverse/lltm")

