The *Repo* R Data Manager

knitr::opts_chunk$set(fig.width=7, fig.height=7, comment="")


This is a getting-started guide to the Repo R package, which implements a repository manager for R objects: a data-centered data flow manager.

The Repo package builds one or more centralized local repositories where R objects are stored together with corresponding annotations, tags, dependency notes, provenance traces, and source code. Once a repository has been populated, the stored objects can be easily searched, navigated, edited, and imported/exported. The annotations can be exploited to reconstruct data flows and perform typical pipeline-management operations.

Additional information can be found in the paper: Napolitano, F. repo: an R package for data-centered management of bioinformatic pipelines. BMC Bioinformatics 18, 112 (2017).

The latest version of Repo can be found at:

Repo is also on CRAN at:


The following command creates a new repository in a temporary path (the default would be "~/.R_repo"). The same function opens existing repositories. The variable rp will be used as the main interface to the repository throughout this guide.

rp <- repo_open(tempdir(), force=T)

This document is produced by a script named index.Rmd. The script itself can be added to the repository and newly created resources annotated as being produced by it. The annotation is made automatic using the options command.

rp$attach("index.Rmd", "Source code for Repo vignette")

Populating the repository

Here is a normalized version of the Iris dataset to be stored in the repository:

myiris <- scale(as.matrix(iris[,1:4]))

The shortest way to permanently store the myiris object in the repository is simply:

rp$put(myiris, "myiris")

However, richer annotation is possible, for example:

## chunk "myiris" {
myiris <- scale(as.matrix(iris[,1:4]))
rp$put(
    obj = myiris,
    name = "myiris",
    description = paste(
        "A normalized version of the iris dataset coming with R.",
        "Normalization is made with the scale function",
        "with default parameters."),
    tags = c("dataset", "iris", "repodemo")
)
## }

The call provides the data to be stored (obj), an identifier (name), a longer description, and a list of tags.

The comment lines (## chunk "myiris" { and ## }) have a special meaning: they associate the enclosed code with the resource. The code can be shown as follows:


The code associated with an item should take care of building and storing it. The build command executes the code in the current environment. It can automatically build dependencies, too.

rp$build("myiris", "index.Rmd")

In this example, the Iris species annotation will be stored separately:

rp$put(iris$Species, "irisLabels",
             "The species annotation of the iris dataset.",
             tags = c("labels", "iris", "repodemo"))

Attaching figures

The following code produces a 2D visualization of the Iris data and shows it:

irispca <- princomp(myiris)
iris2d <- irispca$scores[, c(1,2)]
plot(iris2d, main="2D visualization of the Iris dataset",
     col=as.numeric(rp$get("irisLabels")))

Note that irisLabels is loaded on the fly from the repository.
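As a quick sanity check (plain base R, not part of the Repo API), the share of variance captured by the first two components can be computed from the princomp output, confirming that the 2D view is a faithful summary:

```r
## Share of variance captured by the first two principal components
## (base R only; myiris is the normalized matrix defined above).
myiris <- scale(as.matrix(iris[, 1:4]))
irispca <- princomp(myiris)
vars <- irispca$sdev^2
prop2 <- sum(vars[1:2]) / sum(vars)
round(prop2, 2)  ## about 0.96 for the iris data
```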

It would be nice to store the figure itself in the repo together with the Iris data. This is done using the attach method, which stores any file in the repo as is (as opposed to R objects), plus annotations. Two parameters differ from put: the first argument is a file path instead of an R object, and the optional to argument names the item the file is attached to.

fpath <- file.path(rp$root(), "iris2D.pdf")
pdf(fpath)
plot(iris2d, main="2D visualization of the Iris dataset",
     col=as.numeric(rp$get("irisLabels")))
dev.off()
rp$attach(fpath, "Iris 2D visualization obtained with PCA.",
            c("visualization", "iris", "repodemo"),
            to="myiris")

The attached PDF can be accessed using an external PDF viewer directly from within Repo through the sys command. On a Linux system, this command runs the Evince document viewer and shows iris2D.pdf:

rp$sys("iris2D.pdf", "evince")

The following code makes a clustering of the Iris data and stores it in the repository. There is one parameter to note: depends, which declares that the new item was derived from myiris.

kiris <- kmeans(myiris, 5)$cluster
rp$put(kiris, "iris_5clu", "Kmeans clustering of the Iris data, k=5.",
         c("metadata", "iris", "kmeans", "clustering", "repodemo"),
         depends="myiris")
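Because kmeans starts from random centers, the stored clustering may change across runs (a point this guide revisits when discussing item versions). A minimal base-R sketch, independent of Repo: fixing the RNG seed before the call makes the stored item reproducible.

```r
## kmeans picks random starting centers, so repeated runs may differ;
## fixing the seed makes the clustering reproducible.
myiris <- scale(as.matrix(iris[, 1:4]))
set.seed(42)
clu1 <- kmeans(myiris, 5)$cluster
set.seed(42)
clu2 <- kmeans(myiris, 5)$cluster
identical(clu1, clu2)  ## TRUE
```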

The following shows what the clustering looks like. The figure will be attached to the repository as well.

plot(iris2d, main="Iris dataset kmeans clustering", col=kiris)
fpath <- file.path(rp$root(), "iris2Dclu.pdf")
pdf(fpath)
plot(iris2d, main="Iris dataset kmeans clustering", col=kiris)
dev.off()
rp$attach(fpath, "Iris K-means clustering.",
    c("visualization", "iris", "clustering", "kmeans", "repodemo"),
    to="iris_5clu")

Finally, a contingency table of the Iris classes versus clusters is computed below. The special tag hide prevents an item from being shown unless explicitly requested.

res <- table(rp$get("irisLabels"), kiris)
rp$put(res, "iris_cluVsSpecies",
         paste("Contingency table of the kmeans clustering versus the",
               "original labels of the Iris dataset."),
         c("result", "iris", "validation", "clustering", "repodemo", "hide"),
         src="index.Rmd", depends=c("myiris", "irisLabels", "iris_5clu"))

Looking at the repository

The info command summarizes general information about a repository:

rp$info()

The Repo library supports an S3 print method that shows the contents of the repository. All non-hidden items are shown, together with some details, which by default are: name, dimensions, and size.

rp ## resolves to print(rp)

Hidden items are... hidden. The following will show them too:

print(rp, all=T)

Items can also be filtered. With the following call, only items tagged with "clustering" will be shown:

print(rp, tags="clustering", all=T)

print can show information selectively. This command shows tags and size on disk:


The find command will match a search string against all item fields in the repository:

rp$find("clu", all=T)

It is also possible to obtain a visual summary of the repository by using the pies command:

rp$pies()

Finally, the check command runs an integrity check verifying that the stored data has not been modified or corrupted. The command also checks for extraneous (not indexed) files. Since the rp repository was created in a temporary directory, a few extraneous files will show up.

rp$check()

Showing dependencies

In Repo, the relations "generated by", "attached to" and "dependent on" are summarized in a dependency graph. The formal representation of the graph is a matrix in which the entry (i,j) represents a relation from item i to item j of type 1, 2 or 3 (dependency, attachment or generation, respectively). Here is how it looks:

depgraph <- rp$dependencies(plot=F)
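To make the encoding concrete, here is a small hand-built example in base R. The item names and edge directions are illustrative (this matrix is not produced by Repo):

```r
## Entry (i,j) != 0 encodes an edge from item i to item j:
## 1 = dependency, 2 = attachment, 3 = generation.
items <- c("index.Rmd", "myiris", "iris_5clu", "iris2D.pdf")
depmat <- matrix(0, 4, 4, dimnames = list(items, items))
depmat["iris_5clu",  "myiris"]    <- 1  ## iris_5clu depends on myiris
depmat["iris2D.pdf", "myiris"]    <- 2  ## the figure is attached to myiris
depmat["myiris",     "index.Rmd"] <- 3  ## myiris was generated by index.Rmd
which(depmat == 1, arr.ind = TRUE)      ## locate the dependency edges
```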

If the plot=F parameter is omitted, the dependencies method plots the dependency graph. The plot requires the igraph library.

if(require("igraph", NULL, T, F))
    rp$dependencies()

The three types of edges can be shown selectively; here is how the graph looks without the "generated" edges:


Accessing items in the repo

The get command is used to retrieve items from a repository. In the following, the item myiris is loaded into the variable x in the current environment.

x <- rp$get("myiris")

An even simpler command is load, which uses the item name as the variable name:

rp$load("myiris")
"myiris" %in% ls()

The info command can provide additional information about an entry:

rp$info("myiris")

Item versions, temporary items, remote contents

There are actually 3 different ways of adding an object to a repository:

In addition, item contents for an existing entry can be downloaded if a URL is provided with it (rp$pull).


The K-means algorithm will likely provide different solutions over multiple runs. Alternative solutions can be stored as new versions of the iris_5clu item as follows:

kiris2 <- kmeans(myiris, 5)$cluster
rp$put(kiris2, "iris_5clu",
         "Kmeans clustering of the Iris data, k=5. Today's version!",
         c("metadata", "iris", "kmeans", "clustering", "repodemo"),
         depends="myiris", replace="addversion")

The new repository looks like the old one:

rp

Except that iris_5clu is actually the one just put (look at the description):

rp$info("iris_5clu")

The old one has been renamed and hidden:

print(rp, all=T)


It is also possible to use the repository for caching by means of the lazydo command, which runs an expression and stores the result in the repository. When the same expression is run again, the result is loaded from the repository instead of being computed anew.

## First run: the expression is evaluated and its result is stored
system.time(result <- rp$lazydo({
    Sys.sleep(0.5)
    result <- "This took half a second to compute"
}))

## Second run: the result is loaded from the repository
system.time(result <- rp$lazydo({
    Sys.sleep(0.5)
    result <- "This took half a second to compute"
}))
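The mechanism behind lazydo can be sketched in a few lines of base R: cache results in an environment keyed by the deparsed expression text. This is a deliberate simplification (the lazy_eval helper below is hypothetical; Repo persists results on disk and manages them as regular items):

```r
## A toy version of expression caching: results live in an in-memory
## environment, keyed by the deparsed expression text.
cache <- new.env()
lazy_eval <- function(expr) {
    key <- paste(deparse(substitute(expr)), collapse = " ")
    if (!exists(key, envir = cache, inherits = FALSE))   ## first run: compute and store
        assign(key, eval.parent(substitute(expr)), envir = cache)
    get(key, envir = cache)                              ## later runs: reuse
}
r1 <- lazy_eval({Sys.sleep(0.2); "expensive result"})  ## computed
r2 <- lazy_eval({Sys.sleep(0.2); "expensive result"})  ## loaded from the cache
identical(r1, r2)  ## TRUE
```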


Existing items can feature a URL property. The pull function is meant to update item contents by downloading them from the Internet. This allows for the distribution of "stub" repositories containing all item information without the actual data. The following code creates an item provided with a remote URL. A call to pull overwrites the local stub content with the remote content.

rp$put("Local content", "item1",
    "This points to big data you may want to download",
    "tag", URL="http://exampleURL/repo")
rp$pull("item1", replace=T)
rp$set("item1", obj="Remote content")


The handlers method returns a list of functions with the same names as the items in the repo. Each of these functions can call Repo methods (get by default) on the corresponding item. This way all item names are loaded into one object, which is useful, for example, to exploit the auto-completion features of an editor.

h <- rp$handlers()
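The idea behind handlers can be sketched with plain closures: one accessor function per item name, each remembering its name in the enclosing environment. The make_handlers helper and the store list below are illustrative only (Repo's real handlers forward to any of its methods):

```r
## One closure per item name; each remembers its name and, by default,
## "gets" the corresponding item from the store.
make_handlers <- function(store) {
    hs <- lapply(names(store), function(nm) {
        function() store[[nm]]   ## default action: get the item
    })
    names(hs) <- names(store)
    hs
}
store <- list(myiris = "the dataset", irisLabels = "the labels")
hh <- make_handlers(store)
hh$irisLabels()  ## "the labels"
```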

Handlers call get by default:

x <- h$myiris()  ## equivalent to rp$get("myiris")

The tag command (not yet described) adds a tag to an item:

h$iris_cluVsSpecies("tag", "onenewtag")

One may want to open a repo directly with:

h <- repo_open(rp$root())$handlers()

In that case, the handler to the repo itself will come in handy:

h$repo

If items are removed or added, handlers may need a refresh:

h <- h$repo$handlers()

Further documentation

The repo manual starts at:

?repo

In order to get help on the function "func", try the following:

?repo_func

## cleaning the tempdir causes CRAN checks to fail on some platforms,
## so it is now left behind
## unlink(rp$root(), recursive=T)

Based on Repo build `r packageVersion("repo")`


repo documentation built on March 26, 2020, 8:25 p.m.