MAF.options {Infusion}	R Documentation

Description
Masked Autoregressive Flows (MAFs) can be used by Infusion to infer various densities. This functionality requires the mafR package, and is requested through the using argument of infer_SLik_joint or refine (see Details).

config_mafR is a convenient way to reset the Python session (erasing all results stored in its memory), and in particular to enforce GPU usage.

MAF.options is a wrapper for Infusion.options which facilitates the specification of options that control the design of Masked Autoregressive Flows and their training.
Usage

config_mafR(torch_device, ...)

MAF.options(template = "zuko-like", ...)
Arguments

torch_device
    character: the torch device, e.g. "cpu" or "cuda" (see Examples).

template
    Defines a template list of options, subsequently modified by options specified through the ..., if any. Possible values of template are "zuko-like" and "PSM19" (see Details).

...
    For MAF.options: named options that override the values defined by the template (see Details).
Details

Possible using values:
With using="c.mafR", four different MAFs are computed in every iteration: the density of statistics given parameters (providing the likelihood), the joint density, the instrumental density of parameters, and the “instrumental posterior” density of parameters given the data.

With using="MAFmix", MAFs are computed only for the joint density and the instrumental density. The likelihood is deduced from them, and a multivariate Gaussian mixture model is used to infer the “instrumental posterior” density.
using="mafR" can be used to let Infusion select one of the above options. "MAFmix" is currently selected as it is faster, but this choice is liable to change in the future, so do not use "mafR" if you want a repeatable workflow.
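
For a repeatable workflow, name the method explicitly instead. A minimal sketch, reusing the simuls and Sobs objects constructed in the Examples below:

densv <- infer_SLik_joint(simuls, stat.obs=Sobs, using="c.mafR") # explicit, repeatable choice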
Possible template values for MAF.options:
"PSM19"
defines a template list of control values recommended by Papamakarios et al. (2019, Section 5.1).
"zuko-like"
defines a template list of values inspired from the tutorials of the zuko Python package, with some extra twists. Concretely, their definition is
if (template == "zuko-like") {
    optns <- list(design_hidden_layers = .design_hidden_layers_MGM_like,
                  MAF_patience = 30, MAF_auto_layers = 3L,
                  Adam_learning_rate = 0.001)
} else if (template == "PSM19") {
    optns <- list(design_hidden_layers = .design_hidden_layers_PSM19,
                  MAF_patience = 20, MAF_auto_layers = 5L,
                  Adam_learning_rate = 1e-04)
}
Any replacement value should match the type (function, numeric, integer...) of the value shown, and replacement functions should match the formals of the internal functions, in order to avoid cryptic errors.
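
For instance, a minimal sketch overriding two scalar controls of the "zuko-like" template, matching the types shown above (the chosen values are merely illustrative):

MAF.options(template = "zuko-like",
            MAF_patience = 50,    # numeric, as in the template
            MAF_auto_layers = 4L) # integer (note the L suffix)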
The internal .design_hidden_layers... functions return a vector of numbers H_i of hidden values per layer i of the neural network. The vector has an attribute giving the resulting approximate number P of parameters of the deep-learning model according to Supplementary Table 1 of Papamakarios et al. (2017). H_i = 50L for "PSM19". For "zuko-like", a typically higher value will be used: it is defined as a power of two such that P is of the order of 8 to 16 times the default number of parameters of the multivariate Gaussian mixture model that could be used instead of MAFs.
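
A hypothetical sketch of a user-defined replacement following this return convention (the formals and the attribute name "n_par" below are assumptions for illustration, not a documented API; match the formals of the internal functions before use):

my_design <- function(...) {
  H <- rep(64L, 3L)               # three hidden layers of 64 units each
  attr(H, "n_par") <- NA_integer_ # approximate parameter count P, if computed
  H
}
# then: MAF.options(template = "zuko-like", design_hidden_layers = my_design)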
Other controls which can be modified through the ... are (see the sketch after this list):

* MAF_validasize, a function which returns the size of the validation set. Its default definition returns 5% of its input value nr, the number of samples in the reference table (consistently with Papamakarios et al., 2019);

* MAF_batchsize, a function that returns the batch size for the Adam optimizer. Its default simply returns 100L, but non-default functions can be defined, with at least the ... as formal arguments (more elaborate formals are possible but not part of the API).
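
A minimal sketch of such non-default controls (the exact formals of MAF_validasize are an assumption based on its description above):

MAF.options(template = "zuko-like",
            MAF_validasize = function(nr, ...) as.integer(0.1 * nr), # 10% validation set
            MAF_batchsize = function(...) 200L)                      # larger Adam batches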
Value

config_mafR is used for its side effects. It returns NULL invisibly.

MAF.options returns the result of calling Infusion.options on the arguments defined by the template and the ... . Hence, it is a list of the previous values of the affected options.
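
This return value allows the previous settings to be restored afterwards. A sketch, assuming Infusion.options accepts such a list of old values in the manner of base options():

oldopts <- MAF.options(template = "PSM19") # previous values of affected options
# ... inferences under the PSM19 settings ...
Infusion.options(oldopts)                  # restore the previous settings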
References

Papamakarios, G., T. Pavlakou, and I. Murray (2017) Masked Autoregressive Flow for Density Estimation. Pp. 2335–2344 in I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, eds. Advances in Neural Information Processing Systems 30. Curran Associates, Inc. http://papers.nips.cc/paper/6828-masked-autoregressive-flow-for-density-estimation

Papamakarios, G., D. C. Sterratt, and I. Murray (2019) Sequential Neural Likelihood: Fast Likelihood-free Inference with Autoregressive Flows. Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), PMLR 89:837–848.

Rozet, F., F. Divo, and S. Schnake (2023) Zuko: Normalizing flows in PyTorch. https://doi.org/10.5281/zenodo.7625672
See Also

save_MAFs for saving and loading MAF objects.
Examples

MAF.options(template = "zuko-like",
            Adam_learning_rate=1e-4,
            MAF_batchsize = function(...) 100L)

## Not run: 
## These examples require the mafR package,
## and a Python installation with cuda capabilities.
if (requireNamespace("mafR", quietly=TRUE)) {

  config_mafR() # set default: "cpu", i.e. GPU not used
  config_mafR("cuda") # sets cuda as GPU backend
  config_mafR() # reset python session, keeping latest backend

  config_mafR(torch_device="cuda")

  # function simulating a sample from N(mu, sd=1) and returning its mean:
  toyrnorm <- function(mu) {
    sam <- rnorm(n=40, mean=mu, sd=1)
    return(c(mean1=mean(sam)))
  }

  # simulated data, standing for the actual data to be analyzed:
  set.seed(123)
  Sobs <- toyrnorm(mu=4)

  parsp <- init_reftable(lower=c(mu=2.8),
                         upper=c(mu=5.2))
  simuls <- add_reftable(Simulate="toyrnorm", parsTable=parsp)

  MAF.options(template = "zuko-like")
  densv <- infer_SLik_joint(simuls, stat.obs=Sobs, using="mafR")

  # Usual workflow using inferred surface:
  slik_j <- MSL(densv, eval_RMSEs=FALSE) ## find the maximum of the log-likelihood surface
  # ETC.

  save_MAFs(slik_j, prefix = "toy_") # See its distinct documentation.
}

## End(Not run)