For pipelines with hundreds of targets, it is impractical to write out every single target by hand in `_targets.R`. That's why the `targets` package supports dynamic branching, a way to write shorthand for large collections of similar targets. `targets` also supports static branching, an alternative kind of shorthand. This course does not cover it, but it is easy to pick up just by reading about it: https://books.ropensci.org/targets/static.html.
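To preview the idea, here is a minimal sketch of dynamic branching with made-up targets `x` and `y` (they are illustrations, not part of this course's pipeline):

# Do not run here. A minimal sketch of dynamic branching:
library(targets)
list(
  tar_target(x, c(1, 2, 3)),
  tar_target(y, x * 2, pattern = map(x)) # one branch of y per element of x
)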
Start with a fresh data store.

library(targets)
tar_destroy()
Copy the starting `_targets.R` file into the working directory.
tmp <- file.copy("6-branching/initial_targets.R", "_targets.R", overwrite = TRUE)
Also load the quiz questions. Try not to peek in advance.
source("R/quiz.R") source("6-branching/answers.R")
Open `_targets.R` for editing.
tar_edit()
What is different about this new pipeline?

A. It has a new `activations` target that returns the names of all the activation functions we want to try.
B. All our previous `run_*` targets are combined into a new target called `run`.
C. All of A, B, and D.
D. The `run` target sets `pattern = map(activations)` to declare a separate model run for every element of `activations`.
answer5_different("E")
Run the pipeline. The `run_*` targets are dynamic branches of `run`, and each is a model fit with a different activation function.
tar_make()
To read the return values of branches, you can use `tar_read()` and `tar_load()` as usual.
tar_read(run)
Above, we see a data frame with two rows, one from each branch. By default, `targets` uses `vctrs::vec_c()` to aggregate the results from branches. `vctrs` is smart enough to detect that the branches are data frames, and it automatically binds the rows into a single data frame.
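You can see the same behavior by calling `vec_c()` directly. The data frames below are made up to mimic two branch results:

library(vctrs)
branch1 <- data.frame(act1 = "relu", accuracy = 0.80)    # hypothetical branch result
branch2 <- data.frame(act1 = "sigmoid", accuracy = 0.75) # hypothetical branch result
vec_c(branch1, branch2) # a single data frame with two rows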
You can select individual branches to read with the `branches` argument.
tar_read(run, branches = 2)
As you develop the workflow, depending on what you change, some branches may run while others are skipped. As an example, add the softmax activation function in `_targets.R`. Sketch:
tar_target(activations, c("relu", "sigmoid", "softmax"))
Now, run the pipeline.
tar_make()
Which target(s) reran? Why?

A. Just the new run with the softmax activation function. The rest of the targets were already up to date, so `tar_make()` skipped them.
B. Just the new run with the sigmoid activation function. The rest of the targets were already up to date, so `tar_make()` skipped them.
C. The new run with the softmax activation function, the `best_run` target, and possibly `best_model` (only if `best_run` changed). `tar_make()` skipped the other model runs because they were already up to date.
D. All the model runs and everything downstream, because the `activations` target changed.
answer5_map_runs("E")
`map()` can branch over more than one argument. As an example, let's vary the size of the first layer of the neural network. Make the following changes to `_targets.R` (a sketch of the result appears after this list):

1. Define a new target `units` which returns `c(16, 32, 64)`.
2. In the `run` target, change `map(activations)` to `map(activations, units)`.
3. Change the command of `run` to `test_model(act1 = activations, units1 = units, churn_data, churn_recipe)`.
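Under those instructions, the relevant targets inside `list()` in `_targets.R` might look like this sketch:

# Do not run here.
tar_target(units, c(16, 32, 64)),
tar_target(
  run,
  test_model(act1 = activations, units1 = units, churn_data, churn_recipe),
  pattern = map(activations, units) # branch over both arguments in parallel
)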
Now, run the pipeline again.
tar_make()
Read the results of the model runs.
tar_read(run)
Which model runs do you see in the output?

A. 3 model runs: 1 for each of the relu, sigmoid, and softmax activation functions in the first layer of the neural net. Each model run has the same value for `units1`.
B. 3 model runs: 1 for each of the relu, sigmoid, and softmax activation functions in the first layer of the neural net. In the first layer, the relu model has 16 neurons, the sigmoid model has 32 neurons, and the softmax model has 64 neurons.
C. 9 model runs: 3 for each of the relu, sigmoid, and softmax activation functions in the first layer of the neural net. Each model run has the same value for `units1`.
D. 9 model runs: one for each combination of activation function and units in the first layer.
answer5_map2("E")
`cross()` is like `map()`, but it defines a new branch for each combination of grouping variables.
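To preview the combinations `cross()` will generate, compare it to `expand.grid()` from base R (an analogy for illustration, not part of `targets`):

# cross(activations, units) creates one branch per combination,
# much like expand.grid() enumerates all the pairs:
activations <- c("relu", "sigmoid", "softmax")
units <- c(16, 32, 64)
expand.grid(act1 = activations, units1 = units) # 9 rows -> 9 branches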
In the `run` target in `_targets.R`, change `pattern = map(activations, units)` to `pattern = cross(activations, units)`. Then, run the pipeline again.
tar_make()
In that call to `tar_make()`, how many `run_*` branches executed?

A. All of them, because we changed the branching structure.
B. None, because all targets stayed up to date.
C. The 3 branches from the earlier `map()` call were still up to date, so they were skipped. `cross()` added 6 additional targets to the pipeline, and those 6 ran for the first time.
D. The 9 branches from the earlier `map()` call were still up to date, so they were skipped. `cross()` added 3 additional targets to the pipeline, and those ran for the first time.
answer5_cross_reruns("E")
View the results.
tar_read(run)
Which model runs do we have now?

A. 3 model runs, 1 for each of the relu, sigmoid, and softmax activation functions in the first layer of the neural net. Each model run has the same value for `units1`.
B. 3 model runs, 1 for each of the relu, sigmoid, and softmax activation functions in the first layer of the neural net. In the first layer, the relu model has 16 neurons, the sigmoid model has 32 neurons, and the softmax model has 64 neurons.
C. 9 model runs, 3 for each of the relu, sigmoid, and softmax activation functions in the first layer of the neural net. Each model run has the same value for `units1`.
D. 9 model runs, one for each combination of activation function and units in the first layer.
answer5_cross("E")
All targets should be up to date now.
tar_make()
tar_outdated()
For this section, let's use just the first part of our crossed pipeline from earlier. Copy it over to `_targets.R`.
tmp <- file.copy("6-branching/iteration_targets.R", "_targets.R", overwrite = TRUE)
Take a look at the new pipeline.
tar_edit()
Sometimes, we may not want to aggregate all the branches into a vector or data frame. Set `iteration = "list"` in `tar_target()` for the `run` target.
# Do not run here.
tar_target(
  run,
  test_model(act1 = activations, units1 = units, churn_data, churn_recipe),
  pattern = cross(activations, units),
  iteration = "list" # list iteration
)
Then, run the pipeline.
tar_make()
Take a look at the results now.
tar_read(run)
What do you see?
A. A data frame of fitted model results.
B. A list of one-row data frames of model results.
C. A data frame of lists of model results.
D. A list of trained Keras models.
answer5_alt_iteration("E")
Now, hopefully you see the magic behind the default `iteration = "vector"`. With vector iteration, `tar_read()` and targets downstream of `run` need only mention the symbol `run`, and all the branches will automatically come together in a friendlier data structure.
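For example, a downstream non-branching target could summarize all the runs at once. The command below is a hypothetical sketch, not the pipeline's actual definition of `best_run`:

# Do not run here. With vector iteration, run arrives as one aggregated
# data frame, so ordinary dplyr code works on it directly:
tar_target(best_run, dplyr::filter(run, accuracy == max(accuracy)))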
Switch back to `iteration = "vector"`.
# Do not run here.
tar_target(
  run,
  test_model(act1 = activations, units1 = units, churn_data, churn_recipe),
  pattern = cross(activations, units),
  iteration = "vector" # vector iteration
)
Then, rerun the pipeline.
tar_make()
When you call `tar_read()`, `vctrs::vec_c()` aggregates the branches into a nice `tibble` behind the scenes.
tar_read(run)
And yes, I know you just had to fit all 9 models all over again, and that took a lot of runtime. How do we save time if we need to frequently switch iteration methods?

A. Split the `run` target into a model-fitting target that returns a Keras model and a summarization target that computes the data frame with accuracy and hyperparameters. That way, the changes are absorbed by computationally cheaper targets.
B. Combine all the `run` branches into a single target that fits all the models together.
C. If your downstream code does not care how the model runs get aggregated, set `cue = tar_cue(iteration = FALSE)` in `tar_target()` so that the change in iteration method does not invalidate all the branches.
D. A or C.
answer5_fickle("E")
`targets` lets you dynamically iterate over arbitrary subsets of a data frame. It's like `dplyr::group_by()`, but with branches. The `tar_group()` function is the key. It looks at your `dplyr` groups and creates a special `tar_group` column to explicitly indicate group membership.
library(tidyverse)
tar_read(run) %>%
  group_by(act1) %>%
  tar_group()
Then, any pattern that branches over it will iterate over those groups. Open `_targets.R` and try it out yourself (a sketch follows the list):

1. Define a new target `grouped_run` with the command `run %>% group_by(act1) %>% tar_group()`.
2. Set `iteration = "group"` in `tar_target()` for `grouped_run`. Do not `map()` over `run`. Thanks to vector iteration, `run` will automatically appear to `grouped_run` as a single data frame with our 9 model runs.
3. Define a new target `nrows` which maps over `grouped_run` and counts the number of rows.
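Under those instructions, the new targets might look like this sketch (the use of `nrow()` is an assumption; any row-counting command works):

# Do not run here.
tar_target(
  grouped_run,
  run %>% group_by(act1) %>% tar_group(), # requires dplyr loaded in the pipeline
  iteration = "group" # downstream patterns branch over the groups
),
tar_target(nrows, nrow(grouped_run), pattern = map(grouped_run))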
Check the graph.

tar_visnetwork()
It should look like this:
readRDS("6-branching/group_graph.rds")
Also check the manifest.
tar_manifest()
It should look like this:
readRDS("6-branching/group_manifest.rds")
Then, run the pipeline.
tar_make()
How many branches of `grouped_run` were created? How many branches of `nrows`?

A. None for `grouped_run`, none for `nrows`.
B. None for `grouped_run`, 3 for `nrows`.
C. 3 for `grouped_run`, none for `nrows`.
D. 3 for `grouped_run`, 3 for `nrows`.
answer5_grouped("E")
Read `nrows`.
tar_read(nrows)
What is in `nrows`? Why?

A. The number 9, because `grouped_run` had 9 rows.
B. A vector of length 9 with elements all equal to 1, because we mapped over `grouped_run`, which has 9 rows.
C. A vector of length 3 with each element equal to 3. We mapped over the groups in `grouped_run` and counted the rows in each group.
D. A list of 3 data frames, each with 3 rows, because we mapped over the groups in `grouped_run`.
answer5_nrows("E")
In `targets`, it is possible to branch over multiple data files. To see this, let's switch to a new pipeline.
tmp <- file.copy("6-branching/files_targets.R", "_targets.R", overwrite = TRUE)
Take a look at the new `_targets.R`.
tar_edit()
This pipeline uses the `tar_files()` helper from the `tarchetypes` package. Here, we use `tar_files()` to ensure we can correctly branch over the files in the `data/` folder. The mechanism requires two different targets to work correctly: `churn_file_files` does the actual work and returns the output files we want to track, and `churn_file` separates the output files into iterable branches.
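In `_targets.R`, the helper might look like the sketch below. The `list.files()` command is an assumption; the actual pipeline may enumerate the files differently:

# Do not run here. tar_files() expands into two targets: churn_file_files
# (tracks the file paths) and churn_file (an iterable pattern over them).
library(tarchetypes)
tar_files(churn_file, list.files("data", full.names = TRUE))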
tar_manifest()
We can now branch over `churn_file`, which actually tracks multiple files (one per branch). Add two targets to the pipeline (see the sketch after this list):

1. A target `churn_data` that maps over `churn_file` and has the command `split_data(churn_file)`.
2. A target `churn_recipe` that maps over `churn_data` and has the command `prepare_recipe(churn_data)`.
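A sketch of those two targets, assuming `map()` patterns as described:

# Do not run here. Each churn_data branch reads one file, and each
# churn_recipe branch preprocesses one churn_data branch.
tar_target(churn_data, split_data(churn_file), pattern = map(churn_file)),
tar_target(churn_recipe, prepare_recipe(churn_data), pattern = map(churn_data))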
Check the abridged graph.

tar_glimpse()
It should look like this:
readRDS("6-branching/files_graph.rds")
Then run `tar_make()`.
tar_make()
How many branches were created for each of `churn_data` and `churn_recipe`? Why?

A. 3 for both. `churn_data` maps over the 3 branches of `churn_file`, so each `churn_data` branch corresponds to one of the CSV files in `data/`. `churn_recipe` produces one branch for each branch of `churn_data` (i.e. each dataset).
B. 3 for both. `churn_data` knows to create 3 branches because it checks the files in `data/`.
C. 1 for both. We can branch over vectors and data frames, but not patterns (branching targets).
D. 1 for both. We do not use any branching after `churn_file`.
answer5_files_branch("E")
If we try to read in the datasets, we get an error.
library(rsample)
tar_read(churn_data)
What is the cause of the error?

A. The saved data in `_targets/` is corrupted.
B. We did not map over `churn_file` correctly.
C. The return value from each individual branch is incorrect.
D. `tar_read()` assumed vector iteration because we did not set the `iteration` argument of `tar_target()`. That means `vctrs::vec_c()` tried to combine a bunch of `rsplit` objects into a vector-like object, which is not possible. We need list iteration for both `churn_data` and `churn_recipe`.
answer5_error("E")
Fix the error in `_targets.R` and rerun the pipeline.
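A sketch of the fix, following answer D above (set list iteration on both branching targets):

# Do not run here. iteration = "list" keeps each rsplit object in its own
# list element instead of making vctrs::vec_c() try to combine them.
tar_target(
  churn_data,
  split_data(churn_file),
  pattern = map(churn_file),
  iteration = "list"
),
tar_target(
  churn_recipe,
  prepare_recipe(churn_data),
  pattern = map(churn_data),
  iteration = "list"
)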
tar_make()
Try again to read the data.
library(rsample)
tar_read(churn_data)
What data structure do you see?

A. A list of data frames.
B. A vector of data frames.
C. A list of `rsplit` objects with training and testing datasets.
D. A vector of `rsplit` objects with training and testing datasets.
answer5_churn_data("E")