OrderedFactorKernel: Ordered Factor Kernel R6 class

Usage:
k_OrderedFactorKernel(
s2 = 1,
D,
nlevels,
xindex,
p_lower = 1e-08,
p_upper = 5,
p_est = TRUE,
s2_lower = 1e-08,
s2_upper = 1e+08,
s2_est = TRUE,
useC = TRUE,
offdiagequal = 1 - 1e-06
)
Arguments:

  s2: Initial variance

  D: Number of input dimensions of the data

  nlevels: Number of levels for the factor

  xindex: Index of the factor (which column of X)

  p_lower: Lower bound for p

  p_upper: Upper bound for p

  p_est: Should p be estimated?

  s2_lower: Lower bound for s2

  s2_upper: Upper bound for s2

  s2_est: Should s2 be estimated?

  useC: Should C code be used? Not implemented for FactorKernel yet.

  offdiagequal: Value to use for off-diagonal covariance entries whose
      factor levels are equal; used to avoid decomposition errors,
      similar to adding a nugget.
Format:

  R6Class object.

Details:

  Use for factor inputs that are considered to have an ordering.

Value:

  Object of R6Class with methods for fitting a GP model.

Super class:

  GauPro::GauPro_kernel -> GauPro_kernel_OrderedFactorKernel
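  Because the levels are mapped to positions in a latent space, levels that
  are farther apart in the assumed ordering should be less correlated. A
  minimal sketch (not part of the package examples; kk is an illustrative
  name, and the p values are copied from the Examples section below):

    kk <- OrderedFactorKernel$new(D = 1, nlevels = 5, xindex = 1)
    kk$p <- (1:10)/100          # latent distances, as in the Examples below
    kk$k(1, 2)                  # correlation of adjacent levels 1 and 2
    kk$k(1, 5)                  # levels 1 and 5 are farther apart, so this
                                # should be no larger than the value above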
Public fields:

  p: Parameter for correlation

  p_est: Should p be estimated?

  p_lower: Lower bound of p

  p_upper: Upper bound of p

  p_length: Length of p

  s2: Variance

  s2_est: Is s2 estimated?

  logs2: Log of s2

  logs2_lower: Lower bound of logs2

  logs2_upper: Upper bound of logs2

  xindex: Index of the factor (which column of X)

  nlevels: Number of levels for the factor

  offdiagequal: Value to use for off-diagonal covariance entries whose
      factor levels are equal; used to avoid decomposition errors,
      similar to adding a nugget.
Methods:

  Method new(): Initialize kernel object

  Usage:
    OrderedFactorKernel$new(
      s2 = 1,
      D = NULL,
      nlevels,
      xindex,
      p_lower = 1e-08,
      p_upper = 5,
      p_est = TRUE,
      s2_lower = 1e-08,
      s2_upper = 1e+08,
      s2_est = TRUE,
      useC = TRUE,
      offdiagequal = 1 - 1e-06
    )

  Arguments:
    s2: Initial variance
    D: Number of input dimensions of the data
    nlevels: Number of levels for the factor
    xindex: Index of X to use the kernel on
    p_lower: Lower bound for p
    p_upper: Upper bound for p
    p_est: Should p be estimated?
    s2_lower: Lower bound for s2
    s2_upper: Upper bound for s2
    s2_est: Should s2 be estimated?
    useC: Should C code be used? Much faster.
    offdiagequal: Value to use for off-diagonal covariance entries whose
        factor levels are equal; used to avoid decomposition errors,
        similar to adding a nugget.
    p: Vector of distances in latent space
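  The R6 generator call above and the k_OrderedFactorKernel() helper shown
  under Usage take the same arguments and are presumably interchangeable
  ways to create the kernel; a brief sketch with illustrative values:

    k_a <- OrderedFactorKernel$new(D = 2, nlevels = 4, xindex = 2)
    k_b <- k_OrderedFactorKernel(D = 2, nlevels = 4, xindex = 2)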
  Method k(): Calculate covariance between two points

  Usage:
    OrderedFactorKernel$k(x, y = NULL, p = self$p, s2 = self$s2, params = NULL)

  Arguments:
    x: vector
    y: vector, optional. If excluded, find correlation of x with itself.
    p: Correlation parameters.
    s2: Variance parameter.
    params: Parameters to use instead of p and s2.
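  A sketch of calling k() directly, using the same setup as the Examples
  section below (kk is an illustrative name):

    kk <- OrderedFactorKernel$new(D = 1, nlevels = 5, xindex = 1)
    kk$p <- (1:10)/100
    kk$k(x = 2, y = 4)          # covariance between levels 2 and 4
    kk$k(x = 2)                 # y omitted: correlation of x with itself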
  Method kone(): Find covariance of two points

  Usage:
    OrderedFactorKernel$kone(
      x,
      y,
      p,
      s2,
      isdiag = TRUE,
      offdiagequal = self$offdiagequal
    )

  Arguments:
    x: vector
    y: vector
    p: Correlation parameters on the regular scale
    s2: Variance parameter
    isdiag: Is this entry on the diagonal of the covariance matrix?
    offdiagequal: Value to use for off-diagonal covariance entries whose
        factor levels are equal; used to avoid decomposition errors,
        similar to adding a nugget.
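  When two distinct observations share the same level, passing
  isdiag = FALSE uses offdiagequal (just under 1) instead of an exact
  correlation of 1, which helps keep the covariance matrix decomposable.
  A sketch (kk is illustrative; exact values depend on the current p and s2):

    kk <- OrderedFactorKernel$new(D = 1, nlevels = 5, xindex = 1)
    kk$p <- (1:10)/100
    kk$kone(x = 2, y = 2, p = kk$p, s2 = kk$s2, isdiag = TRUE)    # true diagonal entry
    kk$kone(x = 2, y = 2, p = kk$p, s2 = kk$s2, isdiag = FALSE)   # same level off the
                                                                  # diagonal: slightly smaller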
  Method dC_dparams(): Derivative of covariance with respect to parameters

  Usage:
    OrderedFactorKernel$dC_dparams(params = NULL, X, C_nonug, C, nug)

  Arguments:
    params: Kernel parameters
    X: Matrix of points in rows
    C_nonug: Covariance matrix without the nugget added to the diagonal
    C: Covariance matrix with the nugget
    nug: Value of the nugget

  Method C_dC_dparams(): Calculate the covariance matrix and its derivative
  with respect to the parameters

  Usage:
    OrderedFactorKernel$C_dC_dparams(params = NULL, X, nug)

  Arguments:
    params: Kernel parameters
    X: Matrix of points in rows
    nug: Value of the nugget
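  These derivatives feed gradient-based optimization of the kernel
  parameters. A sketch of calling C_dC_dparams() directly, with made-up
  inputs (Xf is a random one-column design of factor levels, the nugget
  value 1e-6 is arbitrary, and params is left at its NULL default):

    kk <- OrderedFactorKernel$new(D = 1, nlevels = 5, xindex = 1)
    kk$p <- (1:10)/100
    Xf <- matrix(sample(1:5, 8, replace = TRUE), ncol = 1)
    out <- kk$C_dC_dparams(X = Xf, nug = 1e-6)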
  Method dC_dx(): Derivative of covariance with respect to X

  Usage:
    OrderedFactorKernel$dC_dx(XX, X, ...)

  Arguments:
    XX: Matrix of points
    X: Matrix of points to take the derivative with respect to
    ...: Additional arguments, not used

  Method param_optim_start(): Starting point for parameters for optimization

  Usage:
    OrderedFactorKernel$param_optim_start(
      jitter = F,
      y,
      p_est = self$p_est,
      s2_est = self$s2_est
    )

  Arguments:
    jitter: Should there be a jitter?
    y: Output
    p_est: Is p being estimated?
    s2_est: Is s2 being estimated?

  Method param_optim_start0(): Starting point for parameters for optimization

  Usage:
    OrderedFactorKernel$param_optim_start0(
      jitter = F,
      y,
      p_est = self$p_est,
      s2_est = self$s2_est
    )

  Arguments:
    jitter: Should there be a jitter?
    y: Output
    p_est: Is p being estimated?
    s2_est: Is s2 being estimated?

  Method param_optim_lower(): Lower bounds of parameters for optimization

  Usage:
    OrderedFactorKernel$param_optim_lower(p_est = self$p_est, s2_est = self$s2_est)

  Arguments:
    p_est: Is p being estimated?
    s2_est: Is s2 being estimated?

  Method param_optim_upper(): Upper bounds of parameters for optimization

  Usage:
    OrderedFactorKernel$param_optim_upper(p_est = self$p_est, s2_est = self$s2_est)

  Arguments:
    p_est: Is p being estimated?
    s2_est: Is s2 being estimated?
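  These optimization helpers are normally called by the model-fitting
  routine rather than by the user, but they can be inspected directly; a
  sketch (the kernel, its p values, and the placeholder output vector y
  are illustrative):

    kk <- OrderedFactorKernel$new(D = 1, nlevels = 5, xindex = 1)
    kk$p <- (1:10)/100
    kk$param_optim_lower()                                # lower bounds on the optimization scale
    kk$param_optim_upper()                                # matching upper bounds
    kk$param_optim_start(jitter = FALSE, y = rnorm(20))   # a starting parameter vector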
  Method set_params_from_optim(): Set parameters from optimization output

  Usage:
    OrderedFactorKernel$set_params_from_optim(
      optim_out,
      p_est = self$p_est,
      s2_est = self$s2_est
    )

  Arguments:
    optim_out: Output from optimization
    p_est: Is p being estimated?
    s2_est: Is s2 being estimated?

  Method s2_from_params(): Get s2 from the params vector

  Usage:
    OrderedFactorKernel$s2_from_params(params, s2_est = self$s2_est)

  Arguments:
    params: Parameter vector
    s2_est: Is s2 being estimated?

  Method plotLatent(): Plot the points in the latent space

  Usage:
    OrderedFactorKernel$plotLatent()
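  A sketch of visualizing the latent positions of the levels, reusing the
  setup from the Examples section below (kk is an illustrative name):

    kk <- OrderedFactorKernel$new(D = 1, nlevels = 5, xindex = 1)
    kk$p <- (1:10)/100
    kk$plotLatent()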
  Method print(): Print this object

  Usage:
    OrderedFactorKernel$print()

  Method clone(): The objects of this class are cloneable with this method.

  Usage:
    OrderedFactorKernel$clone(deep = FALSE)

  Arguments:
    deep: Whether to make a deep clone.
References:

  https://stackoverflow.com/questions/27086195/linear-index-upper-triangular-matrix
Examples:

kk <- OrderedFactorKernel$new(D=1, nlevels=5, xindex=1)
kk$p <- (1:10)/100
kmat <- outer(1:5, 1:5, Vectorize(kk$k))
kmat
if (requireNamespace("dplyr", quietly=TRUE)) {
library(dplyr)
n <- 20
X <- cbind(matrix(runif(n,2,6), ncol=1),
matrix(sample(1:2, size=n, replace=TRUE), ncol=1))
X <- rbind(X, c(3.3,3), c(3.7,3))
n <- nrow(X)
Z <- X[,1] - (4-X[,2])^2 + rnorm(n,0,.1)
plot(X[,1], Z, col=X[,2])
tibble(X=X, Z) %>% arrange(X,Z)
k2a <- IgnoreIndsKernel$new(k=Gaussian$new(D=1), ignoreinds = 2)
k2b <- OrderedFactorKernel$new(D=2, nlevels=3, xindex=2)
k2 <- k2a * k2b
k2b$p_upper <- .65*k2b$p_upper
gp <- GauPro_kernel_model$new(X=X, Z=Z, kernel = k2, verbose = 5,
nug.min=1e-2, restarts=0)
gp$kernel$k1$kernel$beta
gp$kernel$k2$p
gp$kernel$k(x = gp$X)
tibble(X=X, Z=Z, pred=gp$predict(X)) %>% arrange(X, Z)
tibble(X=X[,2], Z) %>% group_by(X) %>% summarize(n=n(), mean(Z))
curve(gp$pred(cbind(matrix(x,ncol=1),1)),2,6, ylim=c(min(Z), max(Z)))
points(X[X[,2]==1,1], Z[X[,2]==1])
curve(gp$pred(cbind(matrix(x,ncol=1),2)), add=TRUE, col=2)
points(X[X[,2]==2,1], Z[X[,2]==2], col=2)
curve(gp$pred(cbind(matrix(x,ncol=1),3)), add=TRUE, col=3)
points(X[X[,2]==3,1], Z[X[,2]==3], col=3)
legend(legend=1:3, fill=1:3, x="topleft")
# See which points affect (5.5, 3) the most
data.frame(X, cov=gp$kernel$k(X, c(5.5,3))) %>% arrange(-cov)
plot(k2b)
}