PL: Particle Learning Skeleton Method

View source: R/pl.R


Description

Implements the Particle Learning sequential Monte Carlo algorithm on the data sequence provided, using re-sample and propagate steps.

Usage

PL(dstream, start, end, init, lpredprob, propagate, prior = NULL,
   addpall = NULL, params = NULL, save = NULL, P = 100,
   progress = 10, cont = FALSE, verb = 1)

Arguments

dstream

function generating the data stream; for examples see data.GP

start

a scalar integer specifying the starting “time”; the data entry/sample where PL will start

end

a scalar integer specifying the ending “time”; the data entry/sample where PL will stop

init

function used to initialize the particles at the start of PL; for examples see draw.GP

lpredprob

function used to calculate the predictive probability of an observation (usually the next one in “time”) given a particle. This is the primary function used in the PL re-sample step; for examples see lpredprob.GP

propagate

function used to propagate particles given an observation (usually the next one in “time”); for examples see propagate.GP

prior

function used to generate prior parameters that may be passed into the dstream, init, lpredprob and propagate functions as needed; for examples see prior.GP

addpall

an optional function that adds the new observation (usually the next one in “time”) to the pall variable in the PL.env environment (i.e., PL.env$pall), which stores the sufficient information shared by all particles; for examples see addpall.GP

params

an optional function, called every progress rounds, that collects parameters from the particles for summary and visualization; for examples see params.GP

save

an optional function that is called every round to save information about the particles

P

number of particles to use

progress

number of PL rounds after which to collect params and draw histograms; a non-positive value, or params = NULL, skips the progress meter

cont

if TRUE then PL will try to use the existing set of particles to “continue” where it left off; start and end should be specified appropriately when continuing

verb

if nonzero, then screen prints will indicate the proportion of PL updates finished so far; verb = 1 will cause PL to pause on progress drawings for inspection

Details

Runs the PL SMC algorithm via the functions provided. This function is only a skeleton framework; the hard work lies in specifying the arguments/functions that perform the calculations needed in the re-sample and propagate steps.
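
For concreteness, the GP suite shipped with the package supplies one complete set of such functions. The sketch below is loosely modeled on the plgp_sin1d demo; the sinusoidal data, the formals() configuration of data.GP, and the prior.GP(1) call are assumptions drawn from that demo rather than a definitive recipe.

library(plgp)

## toy 1-d sinusoidal regression data
X <- matrix(seq(0, 1, length = 50), ncol = 1)
Y <- sin(2 * pi * X[, 1]) + rnorm(50, sd = 0.1)

## the demos configure the data stream by fixing data.GP's
## X and Y default arguments via formals() (assumed pattern)
formals(data.GP)$X <- X
formals(data.GP)$Y <- Y

## prior parameters for a 1-d GP (assumed prior.GP usage)
prior <- prior.GP(1)

## run PL over observations 10 through 50 with 100 particles
PL(dstream = data.GP, start = 10, end = 50, init = draw.GP,
   lpredprob = lpredprob.GP, propagate = propagate.GP,
   prior = prior, addpall = addpall.GP, params = params.GP,
   P = 100)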

PL uses the variables stored in the PL.env environment: pall, containing sufficient information common to all particles; peach, containing sufficient information particular to each of the P particles; and psave, containing any saved information. These variables may be accessed as, e.g., PL.env$psave.

Note that PL is designed to be fast for sequential updating (of GPs) when new data arrive. This facilitates efficient sequential design of experiments by active learning techniques, e.g., optimization by expected improvement and sequential exploration of classification label boundaries by the predictive entropy. PL is not optimized for static inference when all of the data arrive at once, in batch.
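
Continuing the sketch above, a sequential update when new data arrive might look as follows, with cont = TRUE and start/end advanced past the previous run (again an assumed pattern, not a definitive recipe):

## ten new observations arrive after the run above
Xnew <- matrix(runif(10), ncol = 1)
Ynew <- sin(2 * pi * Xnew[, 1]) + rnorm(10, sd = 0.1)
formals(data.GP)$X <- rbind(X, Xnew)
formals(data.GP)$Y <- c(Y, Ynew)

## continue from where the previous run left off
PL(dstream = data.GP, start = 51, end = 60, init = draw.GP,
   lpredprob = lpredprob.GP, propagate = propagate.GP,
   prior = prior, addpall = addpall.GP, params = params.GP,
   P = 100, cont = TRUE)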

Value

PL modifies the PL.env$peach variable, containing sufficient information particular to each (of the P) particles.
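
For example, assuming PL.env$peach is a list with one element per particle (an assumption consistent with the description above):

## inspect the particle cloud after a PL run with P = 100
length(PL.env$peach)    # one entry per particle; here 100
str(PL.env$peach[[1]])  # sufficient statistics for one particle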

Author(s)

Robert B. Gramacy, rbg@vt.edu

References

Carvalho, C., Johannes, M., Lopes, H., and Polson, N. (2008). “Particle Learning and Smoothing.” Discussion Paper 2008-32, Duke University Dept. of Statistical Science.

Gramacy, R. and Polson, N. (2011). “Particle learning of Gaussian process models for sequential design and optimization.” Journal of Computational and Graphical Statistics, 20(1), pp. 102-118; arXiv:0909.5262

Gramacy, R. and Lee, H. (2010). “Optimization under unknown constraints.” Bayesian Statistics 9, J. M. Bernardo, M. J. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. M. Smith and M. West (Eds.); Oxford University Press

Gramacy, R. (2020). “Surrogates: Gaussian Process Modeling, Design and Optimization for the Applied Sciences.” Chapman & Hall/CRC; https://bobby.gramacy.com/surrogates/

https://bobby.gramacy.com/r_packages/plgp/

See Also

papply, draw.GP, data.GP, lpredprob.GP, propagate.GP, params.GP, pred.GP

Examples

## See the demos via demo(package="plgp"); it is important to
## run them with the ask=FALSE argument so that the plots
## refresh automatically (without requiring the user to
## press RETURN)
## Not run: 
## Illustrates regression GPs on a simple 1-d sinusoidal
## data generating mechanism
demo("plgp_sin1d", ask=FALSE)

## Illustrates classification GPs on a simple 2-d exponential
## data generating mechanism
demo("plcgp_exp", ask=FALSE)

## Illustrates classification GPs on Ripley's Cushings data
demo("plcgp_cush", ask=FALSE)

## Illustrates active learning via the expected improvement
## statistic on a simple 1-d data generating mechanism
demo("plgp_exp_ei", ask=FALSE)

## Illustrates active learning via entropy with classification
## GPs on a simple 2-d exponential data generating mechanism
demo("plcgp_exp_entropy", ask=FALSE)

## Illustrates active learning via the integrated expected
## conditional improvement statistic for optimization
## under known constraints on a simple 1-d data generating
## mechanism
demo("plgp_1d_ieci", ask=FALSE)

## Illustrates active learning via the integrated expected
## conditional improvement statistic for optimization under
## unknown constraints on a simple 1-d data generating
## mechanism
demo("plconstgp_1d_ieci", ask=FALSE)

## Illustrates active learning via the integrated expected
## conditional improvement statistic for optimization under
## unknown constraints on a simple 2-d data generating
## mechanism
demo("plconstgp_2d_ieci", ask=FALSE)

## End(Not run)
