buildPDALPipelineENTWINE (R Documentation)
Generate PDAL pipeline(s) to retrieve point data for a set of polygons. Given the input polygon(s), create a PDAL pipeline to retrieve data for each polygon. The input data must be polygons and must have a field that contains the URL for the ept.json file associated with the ENTWINE data set. Input polygons can be either an sp or sf feature set.
buildPDALPipelineENTWINE(
inputPolygons,
IDColumnLabel,
URLColumnLabel = "url",
pipelineOutputFileBaseName = "",
pipelineOutputFolder = ".",
pipelineTemplateFile = "",
clipOutputFolder = ".",
pipelineScript = "RUNME.bat",
compress = TRUE,
verbose = 0
)
inputPolygons
Sp or sf polygon feature set containing the polygon(s) for which point data will be retrieved.

IDColumnLabel
Character: Name of the column in inputPolygons containing the identifier for each polygon. The identifier is used when building output file names.

URLColumnLabel
Character: Name of the column containing the URL for the ENTWINE data. Default is "url". For ENTWINE data, this is typically the URL for the ept.json file.

pipelineOutputFileBaseName
Character: Base name for the new PDAL pipeline(s). This is used along with the value in the IDColumnLabel column to build the pipeline file name.

pipelineOutputFolder
Character: Full path name for the new PDAL pipeline(s). This should be a fully qualified path. Pipelines will be saved in this folder. If not specified, pipelines will be saved in the current working folder. This folder will be created if it does not exist.

pipelineTemplateFile
Character: Full file specifier for the JSON file used as a template for the output PDAL pipeline. See the description of the required template items below. A simple default template will be used if pipelineTemplateFile is not specified.

clipOutputFolder
Character: Full path name for the point data file(s). This should be a fully qualified path. Point files will be saved in this folder. If not specified, point files will be saved in the current working folder. This folder will be created if it does not exist. The file name for the point data file(s) is created using the value in the IDColumnLabel column and the base name of the value in the URLColumnLabel column.

pipelineScript
Character: File name for the script that runs the pipelines. For Windows systems, this will be a batch file. No other operating systems are supported. The script will be written in the folder defined by pipelineOutputFolder.

compress
Logical: If TRUE (default), point data are stored in LAZ format; if FALSE, point data are stored in LAS format.

verbose
Integer: If > 0, output status information every verbose polygons.
The pipeline template must include at least the following items: for the ept_reader, bounds and placeholder; for the las_writer, filename. A template provided with the library retrieves all data within the bounding box for a feature and stores it in compressed (LAZ) format. Users can supply their own template (in JSON format), but only the values mentioned for the ept_reader and las_writer tags will be modified.
The pipeline templates provided with the USGSlidar package are stored in the install folder for the package under the extdata folder. While you can directly modify these copies of the files, it would be better to copy them to another location, make the desired changes, and then specify your new template using the pipelineTemplateFile parameter.
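As a hedged illustration (this is not the exact template shipped with the package, and the placeholder values are hypothetical), a minimal PDAL pipeline containing the required reader and writer items might look like:

```json
{
  "pipeline": [
    {
      "type": "readers.ept",
      "filename": "placeholder",
      "bounds": "placeholder"
    },
    {
      "type": "writers.las",
      "filename": "placeholder",
      "compression": "laszip"
    }
  ]
}
```

The function replaces the bounds and placeholder values for the reader and the filename for the writer; any other stages or options in a user-supplied template are left untouched.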
The file name for the data clips is formed using the base name of the value in the URLColumnLabel field and the value in the IDColumnLabel field, separated by "_". This helps ensure that clips can be differentiated when locations are covered by more than one lidar project.
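A sketch of the naming scheme described above, using hypothetical values. Since ENTWINE URLs typically end in ept.json, this sketch assumes the distinguishing part of the URL is its parent folder name; the exact rule is implemented inside the package and may differ:

```r
# Hypothetical values from one polygon's attribute table
url <- "https://entwine.example.com/data/MN_RainyLake/ept.json"  # URLColumnLabel value
id  <- 27                                                        # IDColumnLabel value

# Join the project portion of the URL to the polygon ID with "_"
clipName <- paste(basename(dirname(url)), id, sep = "_")
# clipName is "MN_RainyLake_27"; the extension (.laz or .las) depends on compress
```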
Returns (invisibly) an integer: the number of pipeline files created.
## Not run:
buildPDALPipelineENTWINE(plot_polys)
## End(Not run)
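A slightly fuller call is sketched below. The column names (PLOT_ID, ept_url), file names, and folders are hypothetical; only the function signature comes from this page:

```r
## Not run:
library(USGSlidar)
library(sf)

# Hypothetical polygon layer with an ID column and an ENTWINE ept.json URL column
plot_polys <- st_read("plots.gpkg")

n <- buildPDALPipelineENTWINE(
  plot_polys,
  IDColumnLabel = "PLOT_ID",
  URLColumnLabel = "ept_url",
  pipelineOutputFolder = "pipelines",
  clipOutputFolder = "clips",
  compress = TRUE
)
n   # number of pipeline files created (returned invisibly)

## End(Not run)
```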