Description

This function initializes a new SparkContext.
Usage

sparkR.init(
  master = "",
  appName = "SparkR",
  sparkHome = Sys.getenv("SPARK_HOME"),
  sparkEnvir = list(),
  sparkExecutorEnv = list(),
  sparkJars = "",
  sparkPackages = ""
)
Arguments

master            The Spark master URL.
appName           Application name to register with the cluster manager.
sparkHome         Spark home directory.
sparkEnvir        Named list of environment variables to set on worker nodes.
sparkExecutorEnv  Named list of environment variables to be used when launching executors.
sparkJars         Character vector of jar files to pass to the worker nodes.
sparkPackages     Character vector of package coordinates.
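The two environment-related arguments are easy to confuse: sparkEnvir carries Spark configuration properties (such as spark.executor.memory, as in the Examples below), while sparkExecutorEnv carries OS-level environment variables for the executor processes. A minimal sketch, assuming a local Spark installation on SPARK_HOME; the TZ variable is purely illustrative:

```r
library(SparkR)

sc <- sparkR.init(
  master = "local[2]",
  appName = "SparkR-arg-demo",
  # Spark configuration properties applied on workers
  sparkEnvir = list(spark.executor.memory = "1g"),
  # OS environment variables for launched executors (TZ is a hypothetical example)
  sparkExecutorEnv = list(TZ = "UTC")
)

# Shut the context down when finished
sparkR.stop()
```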
Note

sparkR.init since 1.4.0
See Also

sparkR.session
Examples

## Not run:
sc <- sparkR.init("local[2]", "SparkR", "/home/spark")
sc <- sparkR.init("local[2]", "SparkR", "/home/spark",
                  list(spark.executor.memory = "1g"))
sc <- sparkR.init("yarn-client", "SparkR", "/home/spark",
                  list(spark.executor.memory = "4g"),
                  list(LD_LIBRARY_PATH = "/directory of JVM libraries (libjvm.so) on workers/"),
                  c("one.jar", "two.jar", "three.jar"),
                  c("com.databricks:spark-avro_2.11:2.0.1"))
## End(Not run)
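For newer Spark versions, sparkR.session (listed under See Also) supersedes sparkR.init as the entry point; the equivalent settings are passed through its sparkConfig argument. A sketch, assuming Spark 2.x or later:

```r
library(SparkR)

# sparkR.session replaces sparkR.init from Spark 2.0 onward;
# configuration properties go in sparkConfig instead of sparkEnvir.
spark <- sparkR.session(
  master = "local[2]",
  appName = "SparkR",
  sparkConfig = list(spark.executor.memory = "1g")
)

sparkR.session.stop()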