analysisPipelines: Compose Interoperable Analysis Pipelines & Put Them in Production

Enables data scientists to compose analysis pipelines consisting of data manipulation, exploratory analysis and reporting, and modeling steps. Data scientists can use tools of their choice through an R interface and compose interoperable pipelines across R, Spark, and Python. Credits to Mu Sigma for supporting the development of the package.

Note: to enable pipelines involving Spark tasks, the package uses the 'SparkR' package, which must be installed before Spark can be used as an engine within a pipeline. SparkR is distributed natively with Apache Spark and is not distributed on CRAN. The SparkR version must map directly to the Spark version (hence the native distribution), so take care to ensure this is configured properly. If you know the Spark version, install SparkR from GitHub with:

devtools::install_github("apache/spark@v2.x.x", subdir = "R/pkg")

Alternatively, if Spark is already installed, install SparkR from the local distribution by running the following in a terminal:

$ export SPARK_HOME=/path/to/spark/directory && cd $SPARK_HOME/R/lib/SparkR/ && R -e "devtools::install('.')"
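For orientation, here is a minimal sketch of composing an R-only pipeline, following the pattern in the package's vignettes: a pipeline object wraps an input data frame, user functions are registered as steps, and the %>>% operator composes them lazily until output is generated. The names AnalysisPipeline, registerFunction, %>>%, generateOutput, and getOutputById are assumed from the package documentation and may differ slightly across versions.

library(analysisPipelines)

# Create a pipeline object around an input data frame.
pipelineObj <- AnalysisPipeline(input = iris)

# A user-defined analysis step: the first argument is assumed to
# receive the pipeline's input data frame at execution time.
getSepalSummary <- function(inputDataset) {
  summary(inputDataset$Sepal.Length)
}

# Register the function so it becomes available as a pipeline step.
registerFunction(functionName = "getSepalSummary")

# Compose lazily with %>>%, then execute and fetch the first result.
pipelineObj <- pipelineObj %>>% getSepalSummary()
pipelineObj <- generateOutput(pipelineObj)
pipelineObj %>>% getOutputById(1)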

Package details

Author: Naren Srinivasan [aut], Zubin Dowlaty [aut], Sanjay [ctb], Neeratyoy Mallik [ctb], Anoop S [ctb], Mu Sigma, Inc. [cre]
Maintainer: "Mu Sigma, Inc." <ird.experiencelab@mu-sigma.com>
License: Apache License 2.0
Version: 1.0.2
URL: https://github.com/Mu-Sigma/analysis-pipelines
Package repository: CRAN
Installation: Install the latest version of this package by entering the following in R:

install.packages("analysisPipelines")

