h5dump | R Documentation

Dump the content of an HDF5 file.
h5dump(
file,
recursive = TRUE,
load = TRUE,
all = FALSE,
index_type = h5default("H5_INDEX"),
order = h5default("H5_ITER"),
s3 = FALSE,
s3credentials = NULL,
...,
native = FALSE
)
Arguments

file
The filename (character) of the file in which the dataset is located. You can also provide an object of class H5IdComponent representing an H5 location identifier (file or group). See H5Fcreate(), H5Fopen(), H5Gcreate(), H5Gopen() to create an object of this kind.

recursive
If TRUE (default), the content of the whole group hierarchy is listed. If FALSE, only the content of the top-level group is shown. A positive integer limits the number of hierarchy levels that are listed.

load
If TRUE (default), the datasets are read, not only the header information. If FALSE, only the dataset header information is returned.

all
If TRUE, a more complete set of information on each entry is returned. The default is FALSE.

index_type
See h5ls().

order
See h5ls().

s3
Logical value indicating whether the file argument should be treated as a URL to an Amazon S3 bucket, rather than a local file path.

s3credentials
A list of length three, providing the credentials for accessing files in a private Amazon S3 bucket.

...
Arguments passed to h5ls().

native
An object of class logical. If TRUE, array-like objects are treated as stored in HDF5 row-major rather than R column-major, effectively transposing their dimensions. The default is FALSE.
Value

Returns a hierarchical list structure representing the HDF5 group hierarchy. It either contains the datasets themselves within the list structure (load=TRUE), or a data.frame for each dataset with the dataset header information (load=FALSE).
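The two return modes can be contrasted with a short sketch (it assumes h5File names an existing HDF5 file, such as the one created in the examples below):

```r
library(rhdf5)

# load = TRUE (default): the datasets themselves are read into the list
d <- h5dump(h5File)
str(d)

# load = FALSE: no data is read; only the header information is returned
h <- h5dump(h5File, load = FALSE)
str(h)
```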
Author(s)

Bernd Fischer, Mike L. Smith
See Also

h5ls()
Examples

library(rhdf5)

h5File <- tempfile(pattern = "ex_dump", fileext = ".h5")
h5createFile(h5File)
# create groups
h5createGroup(h5File, "foo")
h5createGroup(h5File, "foo/foobaa")

# write a 3-dimensional array with an attribute
B <- array(seq(0.1, 2.0, by = 0.1), dim = c(5, 2, 2))
attr(B, "scale") <- "liter"
h5write(B, h5File, "foo/B")
# dump the content of the HDF5 file
h5dump(h5File)
# dump the content of an HDF5 file in a public S3 bucket
h5dump(file = "https://rhdf5-public.s3.eu-central-1.amazonaws.com/h5ex_t_array.h5", s3 = TRUE)
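The depth of the dump can be controlled through the recursive argument; a sketch, assuming (as in some rhdf5 listing functions) that a positive integer acts as a depth limit:

```r
# show only the content of the top-level group
h5dump(h5File, recursive = FALSE)

# assumption: a positive integer limits the number of hierarchy levels listed
h5dump(h5File, recursive = 2)

# close any HDF5 handles that may still be open
h5closeAll()
```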