AFTSurvivalRegressionModel-class | S4 class that represents an AFTSurvivalRegressionModel
approxQuantile | Calculates the approximate quantiles of numerical columns of... |
as-sdf | Convert to a SparkR 'SparkDataFrame' |
avg | Aggregate function: average of the values in a group
awaitTermination | Wait for the termination of a streaming query
BisectingKMeansModel-class | S4 class that represents a BisectingKMeansModel |
coalesce | Reduce the number of partitions, or return the first non-missing element
coalesce.Column | Coalesce 'Columns' |
coalesce.spark_tbl | Coalesce the number of partitions in a 'spark_tbl' |
column | Returns a Column based on the given column name |
column_aggregate_functions | Aggregate functions for Column operations |
column_collection_functions | Collection functions for Column operations |
column_datetime_diff_functions | Date time arithmetic functions for Column operations |
column_datetime_functions | Date time functions for Column operations |
columnfunctions | A set of operations working with SparkDataFrame columns |
Column-functions | Column Functions |
column_math_functions | Math functions for Column operations |
column_misc_functions | Miscellaneous functions for Column operations |
Column-missing | Check missing values in Column objects |
column_nonaggregate_functions | Non-aggregate functions for Column operations |
column_string_functions | String functions for Column operations |
Column-type | Column Type Conversions |
column_window_functions | Window functions for Column operations |
corr | Pearson correlation coefficient of two Columns
cov | Covariance |
covariance | Covariance on a Spark DataFrame |
create_lambda | Create o.a.s.sql.expressions.LambdaFunction corresponding to... |
crosstab | Computes a pair-wise frequency table of the given columns |
DecisionTreeClassificationModel-class | S4 class that represents a DecisionTreeClassificationModel |
DecisionTreeRegressionModel-class | S4 class that represents a DecisionTreeRegressionModel |
explain.spark_tbl | Explain Plan |
first | Return the first item of a column or group
float-type | Float Vectors |
floor_date | Floor Date |
freqItems | Finding frequent items for columns, possibly with false... |
gaussianMixture | Multivariate Gaussian Mixture Model (GMM) |
GaussianMixtureModel-class | S4 class that represents a GaussianMixtureModel |
GBTClassificationModel-class | S4 class that represents a GBTClassificationModel |
GBTRegressionModel-class | S4 class that represents a GBTRegressionModel |
GeneralizedLinearRegressionModel-class | S4 class that represents a generalized linear model |
get_spark_context | Get Spark Context |
get_spark_session | Get Spark Session |
hashCode | Compute the hashCode of an object |
invoke_higher_order_function | Invokes higher order function expression identified by name,... |
isActive | Returns TRUE if a streaming query is actively running
IsotonicRegressionModel-class | S4 class that represents an IsotonicRegressionModel |
isStreaming | Returns TRUE if the data has one or more streaming sources
javacall | Call Java Classes and Methods |
json_sample | Sample JSON Data |
KMeansModel-class | S4 class that represents a KMeansModel |
last | Return the last item of a column or group
lastProgress | Most recent progress update of a streaming query
LDAModel-class | S4 class that represents an LDAModel |
limit | Limit or show a sample of a 'spark_tbl' |
LinearSVCModel-class | S4 class that represents a LinearSVCModel
lit | Create a Column of literal value |
LogisticRegressionModel-class | S4 class that represents a LogisticRegressionModel
lubridate-Column | Lubridate-style Column Functions |
ml_bisectingKmeans | Spark ML - Bisecting K-Means Clustering |
ml_decision_tree | Decision Tree Model for Regression and Classification |
ml_gbt | Gradient Boosted Tree Model for Regression and Classification |
ml_glm | Generalized Linear Models |
ml_isoreg | Isotonic Regression Model |
ml_kmeans | K-Means Clustering Model |
ml_lda | Latent Dirichlet Allocation |
ml_logit | Logistic Regression Model |
ml_mlp | Multilayer Perceptron Classification Model |
ml_naive_bayes | Naive Bayes Models |
ml_random_forest | Random Forest Model for Regression and Classification |
ml_survreg | Accelerated Failure Time (AFT) Survival Regression Model |
ml_svm_linear | Linear SVM Model |
MultilayerPerceptronClassificationModel-class | S4 class that represents a... |
NaiveBayesModel-class | S4 class that represents a NaiveBayesModel |
not | Logical negation ('!') of a Column
n_partitions | Get the Number of Partitions in a 'spark_tbl' |
operations | Column Operations |
orderBy | Ordering Columns in a WindowSpec |
ovarian | Ovarian Cancer Survival Data |
over | Define a windowing Column over a WindowSpec
partitionBy | Define the partitioning columns in a WindowSpec
persist | Storage Functions |
powerIterationClustering | Power Iteration Clustering, a scalable graph clustering algorithm
PowerIterationClustering-class | S4 class that represents a PowerIterationClustering |
predict | Makes predictions from an MLlib model
print.jobj | Print a JVM object reference
queryName | Returns the user-specified name of a streaming query
RandomForestClassificationModel-class | S4 class that represents a RandomForestClassificationModel |
RandomForestRegressionModel-class | S4 class that represents a RandomForestRegressionModel |
rangeBetween | Define the range-based frame boundaries of a WindowSpec
read_ml | Load a fitted MLlib model from the input path
read.stream | Load a streaming SparkDataFrame |
register_temp_view | Create or replace a temporary view |
repartition | Repartition a 'spark_tbl' |
rowsBetween | Define the row-based frame boundaries of a WindowSpec
sampleBy | Returns a stratified sample without replacement |
schema | Get schema object |
schema-types | 'tidyspark' Schema Types |
show | Print class and type information of a Spark object
spark_apply | Apply an R UDF in Spark |
spark_class | Get Spark Class |
SparkContext | The 'SparkContext' Class |
spark_grouped_apply | Apply an R UDF in Spark on Grouped Data |
spark_lapply | Apply a Function over a List or Vector, Distribute operations... |
spark_read_csv | Read a CSV file into a 'spark_tbl' |
spark_read_delta | Read a Delta file into a 'spark_tbl'
spark_read_jdbc | Create spark_tbl from JDBC connection |
spark_read_json | Read a JSON file into a 'spark_tbl'
spark_read_orc | Read an ORC file into a 'spark_tbl'
spark_read_parquet | Read a Parquet file into a 'spark_tbl'
spark_read_source | Read from a generic source into a 'spark_tbl' |
spark_read_table | Read a Spark Managed Table |
spark_session | Get or create a SparkSession |
SparkSession | The 'SparkSession' Class |
spark_session_stop | Stop the Spark Session and Spark Context |
spark_sql | Spark SQL |
spark-tbl | Create a 'spark_tbl' |
spark_write_csv | Write a 'spark_tbl' to CSV format |
spark_write_delta | Write a 'spark_tbl' to a Delta file |
spark_write_insert | Insert into a Spark Managed Table |
spark_write_jdbc | Write to a JDBC table |
spark_write_json | Write a 'spark_tbl' to JSON format |
spark_write_orc | Write a 'spark_tbl' to ORC format |
spark_write_parquet | Write a 'spark_tbl' to Parquet format |
spark_write_table | Write to a Spark table |
spark_write_text | Write a 'spark_tbl' to text file |
status | Current status of a streaming query
stopQuery | Stop a running streaming query
StreamingQuery | S4 class that represents a StreamingQuery |
StructField | A field within a 'StructType' schema
StructType | Schema object describing the structure of a Spark DataFrame
ts-type | Timestamp and Date Vectors |
unresolved_named_lambda_var | Create o.a.s.sql.expressions.UnresolvedNamedLambdaVariable,... |
windowOrderBy | Create a WindowSpec with the ordering defined
windowPartitionBy | Create a WindowSpec with the partitioning defined
WindowSpec | S4 class that represents a WindowSpec |
write_file | Write a 'spark_tbl' to an arbitrary file format |
write_ml | Saves the MLlib model to the input path |
write.stream | Write the streaming SparkDataFrame to a data source
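
The workflow-oriented entries above (spark_session, spark-tbl, register_temp_view, spark_sql, the spark_read_* and spark_write_* families, spark_session_stop) fit together as a small end-to-end pipeline. The sketch below is illustrative only: it assumes SparkR-style signatures, and details such as argument order and the 'path' argument name are assumptions, so check the individual help topics before relying on it.

library(tidyspark)

# Get or create a SparkSession (see 'spark_session').
spark <- spark_session()

# Create a 'spark_tbl' from a local data frame (see 'spark-tbl').
iris_tbl <- spark_tbl(iris)

# Register a temporary view and query it with Spark SQL
# (see 'register_temp_view' and 'spark_sql'); the argument order
# used here is an assumption.
register_temp_view(iris_tbl, "iris_view")
setosa <- spark_sql("SELECT * FROM iris_view WHERE Species = 'setosa'")

# Write the result as Parquet and read it back (see 'spark_write_parquet'
# and 'spark_read_parquet'); the 'path' argument name is an assumption.
spark_write_parquet(setosa, path = "/tmp/setosa_parquet")
setosa2 <- spark_read_parquet("/tmp/setosa_parquet")

# Stop the Spark session and context when finished (see 'spark_session_stop').
spark_session_stop()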
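
The ml_* entries follow a common fit, predict, save pattern shared with write_ml and read_ml. A minimal sketch, assuming a SparkR-style formula interface for ml_kmeans; the formula and 'k' arguments are assumptions, not confirmed signatures.

library(tidyspark)
spark <- spark_session()
mtcars_tbl <- spark_tbl(mtcars)

# Fit a k-means clustering model (see 'ml_kmeans'); the formula and 'k'
# arguments follow SparkR's spark.kmeans convention and are assumptions.
km_model <- ml_kmeans(mtcars_tbl, ~ mpg + wt, k = 3)

# Score data with the fitted model (see 'predict').
scored <- predict(km_model, mtcars_tbl)

# Persist the fitted model and load it back (see 'write_ml' and 'read_ml').
write_ml(km_model, "/tmp/kmeans_model")
km_model2 <- read_ml("/tmp/kmeans_model")

spark_session_stop()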
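
The streaming entries (read.stream, write.stream, isActive, status, lastProgress, awaitTermination, stopQuery) manage a StreamingQuery. Another hedged sketch: the source names, option names, and argument order are borrowed from SparkR conventions and are assumptions here.

library(tidyspark)
spark <- spark_session()

# Open a streaming input (see 'read.stream'); the source, 'path', and
# 'schema' arguments are assumptions based on SparkR, and the schema is
# taken from a batch read of the same directory (see 'schema').
events <- read.stream("json", path = "/tmp/incoming_json",
                      schema = schema(spark_read_json("/tmp/incoming_json")))

# Start a streaming sink (see 'write.stream'); the sink name and option
# names are assumptions.
query <- write.stream(events, "parquet",
                      path = "/tmp/events_out",
                      checkpointLocation = "/tmp/events_chk")

# Monitor and stop the running query (see 'isActive', 'status',
# 'lastProgress', and 'stopQuery').
isActive(query)
status(query)
lastProgress(query)
stopQuery(query)

spark_session_stop()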