sdf_schema_json | R Documentation

Description
These functions support flexible schema inspection both algorithmically and in human-friendly ways.
Usage

sdf_schema_json(
  x,
  parse_json = TRUE,
  simplify = FALSE,
  append_complex_type = TRUE
)

sdf_schema_viewer(
  x,
  simplify = TRUE,
  append_complex_type = TRUE,
  use_react = FALSE
)
Arguments

x
An R object wrapping, or containing, a Spark DataFrame.

parse_json
Logical. If TRUE, the JSON schema string returned by Spark is parsed into an R list; otherwise the raw JSON string is returned.

simplify
Logical. If TRUE, the schema is folded into its most compact form.

append_complex_type
Logical. This only matters if parse_json = TRUE and simplify = TRUE. In that case it determines whether complex type labels such as "(array)" and "(struct)" are appended to the names of fields with complex types.

use_react
Logical. If TRUE, the schema viewer renders the schema using reactjson; otherwise jsonedit is used.
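To make the effect of simplify concrete, here is a minimal base-R sketch. This is not the package's implementation, only an illustration of the idea: a verbose schema entry such as list(type = list(string = list())) is folded down to the bare type name "string". The function name simplify_schema and the exact nesting shape are assumptions for illustration.

```r
# Hedged sketch: fold a verbose schema list into a compact name -> type form.
# NOT sparklyr.nested's implementation; just illustrates what "simplify" means.
simplify_schema <- function(schema) {
  lapply(schema, function(field) {
    if (is.list(field) && length(field) == 1 && is.list(field[[1]])) {
      # a field like list(type = list(string = list())) collapses to "string"
      names(field[[1]])[1]
    } else {
      field
    }
  })
}

verbose <- list(aircraft_id = list(type = list(string = list())))
simplify_schema(verbose)  # list(aircraft_id = "string")
```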
See Also

sdf_schema
Examples

## Not run: 
library(testthat)
library(jsonlite)
library(sparklyr)
library(sparklyr.nested)

sample_json <- paste0(
  '{"aircraft_id":["string"],"phase_sequence":["string"],"phases (array)":{"start_point (struct)":',
  '{"segment_phase":["string"],"agl":["double"],"elevation":["double"],"time":["long"],',
  '"latitude":["double"],"longitude":["double"],"altitude":["double"],"course":["double"],',
  '"speed":["double"],"source_point_keys (array)":["[string]"],"primary_key":["string"]},',
  '"end_point (struct)":{"segment_phase":["string"],"agl":["double"],"elevation":["double"],',
  '"time":["long"],"latitude":["double"],"longitude":["double"],"altitude":["double"],',
  '"course":["double"],"speed":["double"],"source_point_keys (array)":["[string]"],',
  '"primary_key":["string"]},"phase":["string"],"primary_key":["string"]},"primary_key":["string"]}'
)

with_mock(
  # I am mocking functions so that the example works without a real spark connection
  spark_read_parquet = function(x, ...){return("this is a spark dataframe")},
  sdf_schema_json = function(x, ...){return(fromJSON(sample_json))},
  spark_connect = function(...){return("this is a spark connection")},
  # the meat of the example is here
  sc <- spark_connect(),
  spark_data <- spark_read_parquet(sc, path = "path/to/data/*.parquet", name = "some_name"),
  sdf_schema_viewer(spark_data)
)
## End(Not run)
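Once a schema has been parsed into an R list (e.g. via sdf_schema_json with parse_json = TRUE), it can also be inspected algorithmically rather than visually. The sketch below is an assumption-laden illustration, not package code: the helper name collect_fields is hypothetical, and it assumes a nested name-to-type list shape like the one in sample_json above.

```r
# Hypothetical helper: recursively collect dotted paths of leaf fields
# from a parsed schema list. Assumes nested name -> type list structure.
collect_fields <- function(schema, prefix = character()) {
  unlist(lapply(names(schema), function(nm) {
    node <- schema[[nm]]
    if (is.list(node) && !is.null(names(node))) {
      # named sub-list: descend into the struct/array field
      collect_fields(node, c(prefix, nm))
    } else {
      # leaf field: record its dotted path
      paste(c(prefix, nm), collapse = ".")
    }
  }))
}

schema <- list(
  aircraft_id = "string",
  phases = list(start_point = list(time = "long", latitude = "double"))
)
collect_fields(schema)
# c("aircraft_id", "phases.start_point.time", "phases.start_point.latitude")
```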