gpr1sample: Perform one-sample GP regression


View source: R/gpr.R

Description

Computes an optimal GP regression model for a single sample by optimizing the marginal likelihood.

Usage

gpr1sample(x, y, x.targets, noise = NULL, nsnoise = TRUE, nskernel = TRUE,
  expectedmll = FALSE, params = NULL, defaultparams = NULL,
  lbounds = NULL, ubounds = NULL, optim.restarts = 3,
  derivatives = FALSE)

Arguments

x

input points

y

output values (same length as x)

x.targets

target points

noise

observational noise (variance), either NULL, a constant scalar or a vector

nsnoise

estimate non-stationary noise from replicates, if possible (default)

nskernel

use non-stationary kernel

expectedmll

use the alternative expected marginal log likelihood (EMLL) optimization criterion

params

Gaussian kernel parameters: (sigma.f, sigma.n, l, lmin, c)

defaultparams

initial parameters for the optimization (vector of length 5)

lbounds

lower bounds for the parameters (vector of length 5)

ubounds

upper bounds for the parameters (vector of length 5)

optim.restarts

number of restarts for the optimization (default 3)

derivatives

also compute GP derivatives

Details

Parameter optimization is performed with L-BFGS using analytical gradients, with restarts. The input points x and output values y must be vectors of equal length. If replicates are provided, they are used to estimate dynamic observational noise.
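
For example, a minimal sketch (not part of the package examples; the toy data below is made up purely for illustration): replicates are supplied simply by repeating the corresponding x values, and the optimization can be tightened with explicit bounds and extra restarts.

x <- rep(c(0, 4, 8, 12, 16, 20), each = 3)     # three replicates per time point
y <- sin(x / 4) + rnorm(length(x), sd = 0.2)   # synthetic noisy observations
res <- gpr1sample(x, y, seq(0, 20, by = 0.1),
                  nsnoise = TRUE,                            # estimate noise from the replicates
                  lbounds = c(0.1, 0.01, 0.5, 0.1, 0.001),   # lower bounds for (sigma.f, sigma.n, l, lmin, c)
                  ubounds = c(10, 2, 20, 5, 1),              # upper bounds, same order
                  optim.restarts = 5)                        # extra optimization restarts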

The resulting GP model is encapsulated in the returned object. The estimated posterior mean and standard deviation at the target points x.targets are stored in targets$pmean and targets$pstd. Use plot.gp to visualize the GP.
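
As a minimal sketch (assuming a fitted model res as above), the posterior can also be read out directly and plotted with base R instead of plot.gp:

pm <- res$targets$pmean                      # posterior mean at x.targets
ps <- res$targets$pstd                       # posterior standard deviation
plot(res$targets$x, pm, type = "l", xlab = "x", ylab = "posterior mean")
lines(res$targets$x, pm + 2 * ps, lty = 2)   # approximate 95% band
lines(res$targets$x, pm - 2 * ps, lty = 2)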

Value

A gp-object (list) containing the following elements; a short access sketch follows the list.

targets

data frame of predictions, with target points as rows and the following columns:

_$x

points

_$pmean

posterior mean of the gp

_$pstd

posterior standard deviation of the gp

_$noisestd

observational noise standard deviations

_$mll

the MLL log likelihood ratio

_$emll

the EMLL log likelihood ratio

_$pc

the posterior concentration log likelihood ratio

_$npc

the noisy posterior concentration log likelihood ratio

cov

learned covariance matrix

mll

marginal log likelihood value

emll

expected marginal log likelihood value

kernel

the kernel matrix used

ekernel

the EMLL kernel matrix

params

the learned parameter vector:

_$sigma.f

kernel variance

_$sigma.n

kernel noise

_$l

maximum lengthscale

_$lmin

minimum lengthscale

_$c

curvature

x

the input points

y

the output values
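
For instance (a sketch assuming a fitted model res as above), the documented elements can be inspected directly:

res$mll             # marginal log likelihood of the fitted model
res$params          # learned kernel parameters: sigma.f, sigma.n, l, lmin, c
head(res$targets)   # per-point posterior mean, std, noise and scores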

See Also

gpr2sample plot.gp

Examples

# load example data
data(toydata)

## Not run: can take several minutes
 # perform gpr
 res <- gpr1sample(toydata$ctrl$x, toydata$ctrl$y, seq(0, 22, 0.1))
 print(res)
## End(Not run)

# pre-computed toydata model
data(toygps)
print(toygps$ctrlmodel)

Example output

Gaussian process model for 221 timepoints: (0, 0.1, 0.2, ..., 21.9, 22)

           MLL EMLL Avg.posterior.std Avg.noise.std
GP model -5.83 55.2             0.155         0.198

Parameters:
 sigma.f = 0.61 
 sigma.n = 1.00 
       l = 9.73 
    lmin = 0.62 
       c = 0.02 
