Man pages for rstudio/sparklyr
R Interface to Apache Spark

checkpoint_directory: Set/Get Spark checkpoint directory
compile_package_jars: Compile Scala sources into a Java Archive (jar)
connection_config: Read configuration values for a connection
connection_is_open: Check whether the connection is open
connection_spark_shinyapp: A Shiny app that can be used to construct a 'spark_connect'...
copy_to.spark_connection: Copy an R Data Frame to Spark
DBISparkResult-class: DBI Spark Result
download_scalac: Downloads default Scala Compilers
ensure: Enforce Specific Structure for R Objects
find_scalac: Discover the Scala Compiler
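
Most of the entries below assume an active Spark connection. A minimal sketch of opening a local connection and copying an R data frame into Spark (the object names sc and mtcars_tbl are illustrative; see copy_to.spark_connection above and spark-connections further down the index):

  library(sparklyr)
  library(dplyr)

  # Connect to a local Spark instance (assumes a local Spark installation).
  sc <- spark_connect(master = "local")

  # Copy the built-in mtcars data frame into Spark as a remote table.
  mtcars_tbl <- copy_to(sc, mtcars, overwrite = TRUE)
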
ft_binarizer: Feature Transformation - Binarizer (Transformer)
ft_bucketizer: Feature Transformation - Bucketizer (Transformer)
ft_count_vectorizer: Feature Transformation - CountVectorizer (Estimator)
ft_dct: Feature Transformation - Discrete Cosine Transform (DCT)...
ft_elementwise_product: Feature Transformation - ElementwiseProduct (Transformer)
ft_hashing_tf: Feature Transformation - HashingTF (Transformer)
ft_idf: Feature Transformation - IDF (Estimator)
ft_index_to_string: Feature Transformation - IndexToString (Transformer)
ft_ngram: Feature Transformation - NGram (Transformer)
ft_one_hot_encoder: Feature Transformation - OneHotEncoder (Transformer)
ft_pca: Feature Transformation - PCA (Estimator)
ft_quantile_discretizer: Feature Transformation - QuantileDiscretizer (Estimator)
ft_regex_tokenizer: Feature Transformation - RegexTokenizer (Transformer)
ft_r_formula: Feature Transformation - RFormula (Estimator)
ft_stop_words_remover: Feature Transformation - StopWordsRemover (Transformer)
ft_string_indexer: Feature Transformation - StringIndexer (Estimator)
ft_tokenizer: Feature Transformation - Tokenizer (Transformer)
ft_vector_assembler: Feature Transformation - VectorAssembler (Transformer)
ft_word2vec: Feature Transformation - Word2Vec (Estimator)
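
The ft_* entries wrap Spark ML feature transformers and estimators. A minimal sketch applying one of them to the table created above (the snake_case argument names input_col/output_col and the threshold value are assumptions based on this version of the ft_* interface):

  # Binarize horsepower: values above 150 become 1, the rest 0.
  mtcars_tbl %>%
    ft_binarizer(input_col = "hp", output_col = "hp_high", threshold = 150)
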
hive_context_config: Runtime configuration interface for Hive
invoke: Invoke a Method on a JVM Object
invoke_method: Generic call interface for the Spark shell
jobj_class: Superclasses of object
livy_config: Create a Spark Configuration for Livy
livy_install: Install Livy
livy_service: Start Livy
ml_aft_survival_regression: Spark ML - Survival Regression
ml_als: Spark ML - ALS
ml_bisecting_kmeans: Spark ML - Bisecting K-Means Clustering
ml_decision_tree: Spark ML - Decision Trees
ml_default_stop_words: Default stop words
ml_evaluate: Spark ML - Evaluate prediction frames with evaluators
ml_evaluator: Spark ML - Evaluators
ml_gaussian_mixture: Spark ML - Gaussian Mixture Clustering
ml_generalized_linear_regression: Spark ML - Generalized Linear Regression
ml_glm_tidiers: Tidying methods for Spark ML linear models
ml_gradient_boosted_trees: Spark ML - Gradient Boosted Trees
ml_isotonic_regression: Spark ML - Isotonic Regression
ml_kmeans: Spark ML - K-Means Clustering
ml_lda: Spark ML - Latent Dirichlet Allocation
ml_linear_regression: Spark ML - Linear Regression
ml_linear_svc: Spark ML - LinearSVC
ml_logistic_regression: Spark ML - Logistic Regression
ml_model_data: Extracts data associated with a Spark ML model
ml_multilayer_perceptron_classifier: Spark ML - Multilayer Perceptron
ml_naive_bayes: Spark ML - Naive-Bayes
ml_one_vs_rest: Spark ML - OneVsRest
ml-params: Spark ML - ML Params
ml-persistence: Spark ML - Model Persistence
ml_pipeline: Spark ML - Pipelines
ml_random_forest: Spark ML - Random Forest
ml_stage: Spark ML - Pipeline stage extraction
ml_summary: Spark ML - Extraction of summary metrics
ml-transform-methods: Spark ML - Transform, fit, and predict methods (ml_...
ml_tree_feature_importance: Spark ML - Feature Importance for Tree Models
ml-tuning: Spark ML - Tuning
ml_uid: Spark ML - UID
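
The ml_* entries expose Spark ML algorithms. A minimal sketch fitting a regression with the formula interface and scoring the same table, continuing the sketch above (the ml_predict call is an assumption based on the ml-transform-methods entry):

  # Fit a linear regression of fuel efficiency on weight and cylinder count.
  lm_model <- ml_linear_regression(mtcars_tbl, mpg ~ wt + cyl)
  summary(lm_model)

  # Score the training table with the fitted model.
  ml_predict(lm_model, mtcars_tbl)
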
na.replace: Replace Missing Values in Objects
pipe: Pipe operator
print_jobj: Generic method for printing a jobj for a connection type
random_string: Random string generation
reexports: Objects exported from other packages
register_extension: Register a Package that Implements a Spark Extension
sdf_along: Create DataFrame for along Object
sdf_bind: Bind multiple Spark DataFrames by row and column
sdf_broadcast: Broadcast hint
sdf_checkpoint: Checkpoint a Spark DataFrame
sdf_coalesce: Coalesces a Spark DataFrame
sdf_copy_to: Copy an Object into Spark
sdf_describe: Compute summary statistics for columns of a data frame
sdf_dim: Support for Dimension Operations
sdf_fast_bind_cols: Fast cbind for Spark DataFrames
sdf_last_index: Returns the last index of a Spark DataFrame
sdf_len: Create DataFrame for Length
sdf_mutate: Mutate a Spark DataFrame
sdf_num_partitions: Gets number of partitions of a Spark DataFrame
sdf_partition: Partition a Spark DataFrame
sdf_persist: Persist a Spark DataFrame
sdf_pivot: Pivot a Spark DataFrame
sdf_project: Project features onto principal components
sdf_quantile: Compute (Approximate) Quantiles with a Spark DataFrame
sdf_read_column: Read a Column from a Spark DataFrame
sdf_register: Register a Spark DataFrame
sdf_repartition: Repartition a Spark DataFrame
sdf_residuals: Model Residuals
sdf_sample: Randomly Sample Rows from a Spark DataFrame
sdf-saveload: Save / Load a Spark DataFrame
sdf_schema: Read the Schema of a Spark DataFrame
sdf_separate_column: Separate a Vector Column into Scalar Columns
sdf_seq: Create DataFrame for Range
sdf_sort: Sort a Spark DataFrame
sdf-transform-methods: Spark ML - Transform, fit, and predict methods (sdf_...
sdf_with_sequential_id: Add a Sequential ID Column to a Spark DataFrame
sdf_with_unique_id: Add a Unique ID Column to a Spark DataFrame
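
The sdf_* entries operate directly on Spark DataFrames. A minimal sketch partitioning the table from the earlier sketches into training and test sets and registering the result (the split weights, seed, and table name are illustrative):

  # Split 80/20 and register the training partition as a named table.
  partitions <- sdf_partition(mtcars_tbl, training = 0.8, test = 0.2, seed = 1099)
  training_tbl <- sdf_register(partitions$training, "mtcars_training")

  # Inspect how many partitions back the training table.
  sdf_num_partitions(training_tbl)
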
spark-api: Access the Spark API
spark_apply: Apply an R Function in Spark
spark_apply_bundle: Create Bundle for Spark Apply
spark_apply_log: Log Writer for Spark Apply
spark_compilation_spec: Define a Spark Compilation Specification
spark_compile: Compile Scala sources into a Java Archive
spark_config: Read Spark Configuration
spark_config_exists: A helper function to check whether a value exists under 'spark_config()'
spark_config_value: A helper function to retrieve values from 'spark_config()'
spark_connection: Retrieve the Spark Connection Associated with an R Object
spark-connections: Manage Spark Connections
spark_context_config: Runtime configuration interface for Spark
spark_dataframe: Retrieve a Spark DataFrame
spark_default_compilation_spec: Default Compilation Specification for Spark Extensions
spark_default_version: Determine the version that will be used by default if version...
spark_dependency: Define a Spark dependency
spark_home_dir: Find the SPARK_HOME directory for a version of Spark
spark_home_set: Set the SPARK_HOME environment variable
spark_install: Find a given Spark installation by version
spark_install_sync: Helper function to sync the sparkinstall project to sparklyr
spark_jobj: Retrieve a Spark JVM Object Reference
spark_load_table: Reads from a Spark Table into a Spark DataFrame
spark_log: View Entries in the Spark Log
spark_read_csv: Read a CSV file into a Spark DataFrame
spark_read_jdbc: Read from a JDBC connection into a Spark DataFrame
spark_read_json: Read a JSON file into a Spark DataFrame
spark_read_libsvm: Read a libsvm file into a Spark DataFrame
spark_read_parquet: Read a Parquet file into a Spark DataFrame
spark_read_source: Read from a generic source into a Spark DataFrame
spark_read_table: Reads from a Spark Table into a Spark DataFrame
spark_read_text: Read a Text file into a Spark DataFrame
spark_save_table: Saves a Spark DataFrame as a Spark table
spark_table_name: Generate a Table Name from Expression
spark_version: Get the Spark Version Associated with a Spark Connection
spark_version_from_home: Get the Spark Version Associated with a Spark Installation
spark_versions: Retrieves a data frame of available Spark versions that can be...
spark_web: Open the Spark web interface
spark_write_csv: Write a Spark DataFrame to a CSV
spark_write_jdbc: Writes a Spark DataFrame into a JDBC table
spark_write_json: Write a Spark DataFrame to a JSON file
spark_write_parquet: Write a Spark DataFrame to a Parquet file
spark_write_source: Writes a Spark DataFrame into a generic source
spark_write_table: Writes a Spark DataFrame into a Spark table
spark_write_text: Write a Spark DataFrame to a Text file
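
The spark_read_* and spark_write_* entries move data between Spark DataFrames and external storage. A minimal sketch writing the table from the earlier sketches to Parquet and reading it back (the path is illustrative, and the mode argument is assumed to be supported by this version):

  # Write the Spark DataFrame out as Parquet, then read it back in.
  spark_write_parquet(mtcars_tbl, path = "/tmp/mtcars_parquet", mode = "overwrite")
  mtcars_parquet <- spark_read_parquet(sc, name = "mtcars_parquet",
                                       path = "/tmp/mtcars_parquet")

  # Disconnect when finished.
  spark_disconnect(sc)
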
sql-transformer: Feature Transformation - SQLTransformer
src_databases: Show database list
tbl_cache: Cache a Spark Table
tbl_change_db: Use specific database
tbl_uncache: Uncache a Spark Table
worker_spark_apply_unbundle: Extracts a bundle of dependencies required by 'spark_apply()'