#' @title MCMC Algorithm for Hierarchical Linear Model with Dirichlet Process Prior Heterogeneity
#' @description
#' \code{rhierLinearDPParallel} is an MCMC algorithm for a hierarchical linear model with a Dirichlet Process prior for the distribution of heterogeneity. A base normal model is used so that the DP can be interpreted as allowing for a mixture of normals with as many components as panel units. The function implements a Gibbs sampler on the coefficients for each panel unit. This procedure can be interpreted as a Bayesian semi-parametric method since the DP prior accommodates heterogeneity of unknown form.
#'
#' @usage rhierLinearDPParallel(Data, Prior, Mcmc, verbose = FALSE)
#'
#'
#' @param Data A list of: `regdata` - a list (length \eqn{nreg}) of regression data, and an optional `Z` - \eqn{nreg \times nz} matrix of unit characteristics.
#' @param Prior A list with one required parameter: `ncomp`, and optional parameters: `deltabar`, `Ad`, `mubar`, `Amu`, `nu`, `V`, `nu.e`, and `ssq`.
#' @param Mcmc A list with one required parameter: `R`-number of iterations, and optional parameters: `keep` and `nprint`.
#' @param verbose If \code{TRUE}, prints model parameters and timing information during the run.
#'
#' @details
#' \subsection{Model and Priors}{
#' \code{nreg} regression equations with \code{nvar} as the number of \eqn{X} vars in each equation \cr
#' \eqn{y_i = X_i\beta_i + e_i} with \eqn{e_i} \eqn{\sim}{~} \eqn{N(0, \tau_i)}
#'
#' \eqn{\tau_i} \eqn{\sim}{~} \eqn{nu.e*ssq_i/\chi^2_{nu.e}} where \eqn{\tau_i} is the variance of \eqn{e_i}\cr
#' \eqn{B = Z\Delta + U} or \eqn{\beta_i = \Delta' Z[i,]' + u_i} \cr
#' \eqn{\Delta} is an \eqn{nz \times nvar} matrix \cr
#'
#' \eqn{u_i} \eqn{\sim}{~} \eqn{N(\mu_{ind}, \Sigma_{ind})}\cr
#'
#' \eqn{delta = vec(\Delta)} \eqn{\sim}{~} \eqn{N(deltabar, A_d^{-1})}\cr
#' \eqn{\mu_j} \eqn{\sim}{~} \eqn{N(mubar, \Sigma_j(x) Amu^{-1})}\cr
#' \eqn{\Sigma_j} \eqn{\sim}{~} \eqn{IW(nu, V)} \cr
#'
#' \eqn{\theta_i = (\mu_i, \Sigma_i)} \eqn{\sim}{~} \eqn{DP(G_0(\lambda), alpha)}\cr
#'
#' \eqn{G_0(\lambda):}\cr
#' \eqn{\mu_i | \Sigma_i} \eqn{\sim}{~} \eqn{N(0, \Sigma_i (x) a^{-1})}\cr
#' \eqn{\Sigma_i} \eqn{\sim}{~} \eqn{IW(nu, nu*v*I)}\cr
#' \eqn{delta = vec(\Delta)} \eqn{\sim}{~} \eqn{N(deltabar, A_d^{-1})}\cr
#'
#' \eqn{\lambda(a, nu, v):}\cr
#' \eqn{a} \eqn{\sim}{~} uniform[alim[1], alim[2]]\cr
#' \eqn{nu} \eqn{\sim}{~} dim(data)-1 + exp(z) \cr
#' \eqn{z} \eqn{\sim}{~} uniform[dim(data)-1+nulim[1], nulim[2]]\cr
#' \eqn{v} \eqn{\sim}{~} uniform[vlim[1], vlim[2]]
#'
#' \eqn{Z} should \emph{not} include an intercept and should be centered for ease of interpretation.
#' The mean of each of the \code{nreg} \eqn{\beta}s is the mean of the normal mixture.
#' Use \code{summary()} to compute this mean from the \code{compdraw} output.
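#'
#' For illustration, the following minimal sketch computes this mean manually for the last kept
#' draw. It assumes that each element of \code{compdraw} holds one or more normal components with
#' \code{$mu} vectors, and that the corresponding row of \code{probdraw} holds the matching
#' mixture weights (\code{out} denotes the returned list):
#' \preformatted{
#' r = length(out$compdraw)                     # use the last kept draw as an example
#' comps = out$compdraw[[r]]
#' if (!is.null(comps$mu)) comps = list(comps)  # allow a single component stored directly
#' mus = sapply(comps, "[[", "mu")              # nvar x (number of components at draw r)
#' wts = out$probdraw[r, seq_len(ncol(mus))]    # matching mixture weights (assumed alignment)
#' drop(mus \%*\% (wts / sum(wts)))              # mean of the normal mixture at draw r
#' }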
#'
#' The prior on \eqn{\Sigma_i} is parameterized such that \eqn{mode(\Sigma) = nu/(nu+2) vI}. The support of \eqn{nu} is chosen to enforce a non-degenerate IW density; this requires \eqn{nulim[1] > 0}.
#'
#' The default choices of alim, nulim, and vlim determine the location and approximate size of the candidate "atoms" or possible normal components. The defaults are sensible given a reasonable scaling of the X variables. Ensure that alim spans a wide enough range of values (remember that a is a precision parameter) and that vlim is large enough to propose Sigma matrices wide enough to cover the range of the data.
#'
#' A careful analyst should look at the posterior distribution of a, nu, v to make sure that the support is set correctly in alim, nulim, vlim. In other words, if we see the posterior bunched up at one end of these support ranges, we should widen the range and rerun.
#'
#' If you want to force the procedure to use many small atoms, set nulim to consider only large values and vlim to consider only small scaling constants, and set alphamax to a large number. This will create a very "lumpy" density estimate, somewhat like classical kernel density estimates. This is not advised if you have a prior belief that the densities are relatively smooth.
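#'
#' In this implementation \code{alphamax} is derived from \code{Prioralpha$Istarmax}, so a larger
#' \code{Istarmax} implies a larger \code{alphamax}. A hedged sketch of such a prior specification
#' (the numeric values below are purely illustrative, not recommendations):
#' \preformatted{
#' Prior = list(ncomp = 1,
#'              lambda_hyper = list(alim  = c(0.01, 10),  # support for the precision a
#'                                  nulim = c(2, 5),      # consider only larger nu values
#'                                  vlim  = c(0.1, 0.5)), # small scale constants -> small atoms
#'              Prioralpha = list(Istarmin = 1, Istarmax = 200, power = 0.8))
#' }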
#' }
#'
#' \subsection{Argument Details}{
#' \emph{\code{Data = list(regdata, Z)} [\code{Z} optional]}
#' \tabular{ll}{
#' \code{regdata: } \tab A list (length \eqn{nreg/s_shard}) of regdata \cr
#' \code{regdata[[i]]$X: } \tab \eqn{n_i \times nvar} design matrix for equation \eqn{i} \cr
#' \code{regdata[[i]]$y: } \tab \eqn{n_i \times 1} vector of observations for equation \eqn{i} \cr
#' \code{Z: } \tab A list of \eqn{s} partitions where each partition includes an \eqn{(nreg/s_shard) \times nz} matrix of unit characteristics
#' }
#'
#' \emph{\code{Prior = list(deltabar, Ad, Prioralpha, lambda_hyper, nu, V, nu.e, mubar, Amu, ssq, ncomp)} [all but \code{ncomp} are optional]}
#' \tabular{ll}{
#' \code{deltabar: } \tab \eqn{(nz \times nvar) \times 1} vector of prior means (def: 0) \cr
#' \code{Ad: } \tab prior precision matrix for vec(\eqn{\Delta}) (def: 0.01*I) \cr
#' \code{mubar: } \tab \eqn{nvar \times 1} prior mean vector for normal component mean (def: 0) \cr
#' \code{Amu: } \tab prior precision for normal component mean (def: 0.01) \cr
#' \code{nu.e: } \tab d.f. parameter for regression error variance prior (def: 3) \cr
#' \code{V: } \tab PDS location parameter for IW prior on normal component Sigma (def: nu*I) \cr
#' \code{ssq: } \tab scale parameter for regression error variance prior (def: \code{var(y_i)}) \cr
#' \code{ncomp: } \tab number of components used in normal mixture \cr
#' }
#'
#' \emph{\code{Mcmc = list(R, keep, nprint)} [only \code{R} required]}
#' \tabular{ll}{
#' \code{R: } \tab number of MCMC draws \cr
#' \code{keep: } \tab MCMC thinning parameter -- keep every \code{keep}th draw (def: 1) \cr
#' \code{nprint: } \tab print the estimated time remaining for every \code{nprint}'th draw (def: 100, set to 0 for no print) \cr
#' }
#' }
#'
#'
#' @return A list containing:
#' \item{compdraw}{A list (length: R/keep) of draws of the normal mixture components, where each element contains `mu` (vector, length: nvar) and `rooti` (matrix, shape: nvar \eqn{\times} nvar)}
#' \item{probdraw}{A \eqn{(R/keep) \times (ncomp)} matrix that reports the probability that each draw came from a particular component}
#' \item{Deltadraw}{A \eqn{(R/keep) \times (nz \times nvar)} matrix of draws of Delta, first row is initial value. Deltadraw is NULL if Z is NULL}
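#'
#' For example, draws from a single shard of the \code{parallel::mclapply} call in the examples
#' below can be inspected as follows (a sketch; \code{out2} is defined in the examples section):
#' \preformatted{
#' deltas = out2[[1]]$Deltadraw   # (R/keep) x (nz*nvar) matrix of Delta draws for shard 1
#' colMeans(deltas)               # rough posterior means of vec(Delta)
#' }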
#'
#' @references
#' Bumbaca, Federico (Rico), Sanjog Misra, and Peter E. Rossi (2020), "Scalable Target Marketing:
#' Distributed Markov Chain Monte Carlo for Bayesian Hierarchical Models", Journal of Marketing
#' Research, 57(6), 999-1018.
#'
#' Chapter 5, \emph{Bayesian Statistics and Marketing} by Rossi, Allenby, and McCulloch.
#'
#' @author
#' Federico Bumbaca, Leeds School of Business, University of Colorado Boulder, \email{federico.bumbaca@colorado.edu}
#'
#' @seealso
#' \code{\link{partition_data}},
#' \code{\link{drawPosteriorParallel}},
#' \code{\link{combine_draws}},
#' \code{\link{rheteroLinearIndepMetrop}}
#'
#' @examples
#'
#' \donttest{
#'######### Single Component with rhierLinearDPParallel########
#'R = 1000
#'
#'nreg = 2000
#'nobs = 5 #number of observations
#'nvar = 3 #columns
#'nz = 2
#'
#'Z=matrix(runif(nreg*nz),ncol=nz)
#'Z=t(t(Z)-apply(Z,2,mean))
#'
#'Delta=matrix(c(1,-1,2,0,1,0),ncol=nz)
#'tau0=.1
#'iota=c(rep(1,nobs))
#'
#'#Default
#'tcomps=NULL
#'a=matrix(c(1,0,0,0.5773503,1.1547005,0,-0.4082483,0.4082483,1.2247449),ncol=3)
#'tcomps[[1]]=list(mu=c(0,-1,-2),rooti=a)
#'tpvec=c(1)
#'
#'regdata=NULL
#'betas=matrix(double(nreg*nvar),ncol=nvar)
#'tind=double(nreg)
#'
#'for (reg in 1:nreg) {
#' tempout=bayesm::rmixture(1,tpvec,tcomps)
#' if (is.null(Z)){
#' betas[reg,]= as.vector(tempout$x)
#' }else{
#' betas[reg,]=Delta %*% Z[reg,]+as.vector(tempout$x)}
#' tind[reg]=tempout$z
#' X=cbind(iota,matrix(runif(nobs*(nvar-1)),ncol=(nvar-1)))
#' tau=tau0*runif(1,min=0.5,max=1)
#' y=X %*% betas[reg,]+sqrt(tau)*rnorm(nobs)
#' regdata[[reg]]=list(y=y,X=X,beta=betas[reg,],tau=tau)
#'}
#'
#'
#'Data1=list(list(regdata=regdata,Z=Z))
#'s = 1
#'Data2=scalablebayesm::partition_data(Data1,s=s)
#'
#'Prior1=list(ncomp=1)
#'Mcmc1=list(R=R,keep=1)
#'
#'out2 = parallel::mclapply(Data2, FUN = rhierLinearDPParallel, Prior = Prior1,
#'Mcmc = Mcmc1, mc.cores = s, mc.set.seed = FALSE)
#'
#'######### Multiple Components with rhierLinearDPParallel########
#'R = 1000
#'
#'set.seed(66)
#'nreg=2000
#'nobs=5 #number of observations
#'nvar=3 #columns
#'nz=2
#'
#'Z=matrix(runif(nreg*nz),ncol=nz)
#'Z=t(t(Z)-apply(Z,2,mean))
#'
#'Delta=matrix(c(1,-1,2,0,1,0),ncol=nz)
#'tau0=.1
#'iota=c(rep(1,nobs))
#'
#'#Default
#'tcomps=NULL
#'a=matrix(c(1,0,0,0.5773503,1.1547005,0,-0.4082483,0.4082483,1.2247449),ncol=3)
#'tcomps[[1]]=list(mu=c(0,-1,-2),rooti=a)
#'tcomps[[2]]=list(mu=c(0,-1,-2)*2,rooti=a)
#'tcomps[[3]]=list(mu=c(0,-1,-2)*4,rooti=a)
#'tpvec=c(.4,.2,.4)
#'
#'regdata=NULL
#'betas=matrix(double(nreg*nvar),ncol=nvar)
#'tind=double(nreg)
#'
#'for (reg in 1:nreg) {
#' tempout=bayesm::rmixture(1,tpvec,tcomps)
#' if (is.null(Z)){
#' betas[reg,]= as.vector(tempout$x)
#' }else{
#' betas[reg,]=Delta %*% Z[reg,]+as.vector(tempout$x)}
#' tind[reg]=tempout$z
#' X=cbind(iota,matrix(runif(nobs*(nvar-1)),ncol=(nvar-1)))
#' tau=tau0*runif(1,min=0.5,max=1)
#' y=X %*% betas[reg,]+sqrt(tau)*rnorm(nobs)
#' regdata[[reg]]=list(y=y,X=X,beta=betas[reg,],tau=tau)
#'}
#'
#'
#'Data1=list(list(regdata=regdata,Z=Z))
#'s = 1
#'Data2=scalablebayesm::partition_data(Data1, s=s)
#'
#'Prior1=list(ncomp=3)
#'Mcmc1=list(R=R,keep=1)
#'
#'out2 = parallel::mclapply(Data2, FUN = rhierLinearDPParallel, Prior = Prior1,
#'Mcmc = Mcmc1, mc.cores = s, mc.set.seed = FALSE)
#' }
#' @export
rhierLinearDPParallel = function(Data, Prior, Mcmc, verbose = FALSE) {
# Purpose: run hierarchical linear model with Dirichlet Process prior
# Arguments:
# Data contains a list of (regdata and possibly Z)
# regdata is a list of lists (one list per unit)
# regdata[[i]] = list(y, X)
# y is a vector of continuous outcomes
# X is a length(y) x nvar matrix of covariates including intercepts
# Z is an optional length(regdata) x nz matrix of additional variables
# note: Z should NOT contain an intercept
# Prior contains a list of (deltabar, Ad, lambda_hyper, Prioralpha)
# alpha: starting value
# lambda_hyper: hyperparameters of prior on lambda
# Prioralpha: hyperparameters of alpha prior; a list of (Istarmin, Istarmax, power)
# if elements of the prior don't exist, defaults are assumed
# Mcmc contains a list of (s, c, R, keep, nprint)
#
# Output: as list containing
# Deltadraw R/keep x nz*nvar matrix of draws of Delta, first row is initial value
# betadraw is nreg x nvar x R/keep array of draws of betas
# probdraw is R/keep x 1 matrix of draws of probs of mixture components
# compdraw is a list of list of lists (length R/keep)
# compdraw[[rep]] is the rep-th draw of components for mixtures
# loglike log-likelihood at each kept draw
#Define constants and necessary functions
Constant.A = ScalableBayesmConstant.A
Constant.nuInc = ScalableBayesmConstant.nuInc
Constant.DPalpha = ScalableBayesmConstant.DPalpha
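# append(): augment each unit's regdata with the cross products X'X and X'y used by the sampler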
append=function(l) { l=c(l,list(XpX=crossprod(l$X),Xpy=crossprod(l$X,l$y)))}
#
getvar=function(l) {
v=var(l$y)
if(is.na(v)) return(1)
if(v>0) return (v) else return (1)}
#
if(missing(Data) || !is.list(Data)) {stop("Requires Data argument -- list of regdata and (optionally) Z", call.=FALSE)}
if(missing(Prior) || !is.list(Prior)) {stop("Requires Prior list argument (at least ncomp)", call.=FALSE)}
if(missing(Mcmc) || !is.list(Mcmc)) {stop("Requires Mcmc list argument", call.=FALSE)}
if(is.null(Data$regdata)) {stop("Requires Data element regdata (list of Data for each unit)")}
nreg=length(Data$regdata)
nvar=ncol(Data$regdata[[1]]$X)
nz = ncol(Data$Z)
drawdelta=TRUE
if(is.null(Data$Z)) {
  cat("Z not specified",fill=TRUE);
  drawdelta=FALSE
} else {
  if (nrow(Data$Z) != nreg) {
    stop(paste("nrow(Z) =",nrow(Data$Z),"must equal the number of regressions",nreg), call.=FALSE)
  } else {
    z=Data$Z
  }
}
# check regdata for validity
dimfun=function(l) {c(length(l$y),dim(l$X))}
dims=sapply(Data$regdata,dimfun) #3 x nreg matrix; each column holds length(y), nrow(X), ncol(X) for one unit
dims=t(dims)
nvar=quantile(dims[,3],prob=.5) #median number of X columns across units
for (i in 1:nreg)
{
if(dims[i,1] != dims[i,2] || dims[i,3] !=nvar)
{stop(paste("Incorrect dimensions for unit ",i," dims(y,X) =",dims[i,]))}
}
# check prior
if(missing(Prior))
{Prior = NULL}
if (is.null(Prior$Ad) & drawdelta) {
Ad = ScalableBayesmConstant.A * diag(nvar * nz)
} else {
Ad = Prior$Ad
}
if (is.null(Prior$lambda_hyper)) {
lambda_hyper = list(alim = ScalableBayesmConstant.DPalimdef,
nulim = ScalableBayesmConstant.DPnulimdef,
vlim = ScalableBayesmConstant.DPvlimdef)
} else {
lambda_hyper = Prior$lambda_hyper
if (is.null(lambda_hyper$alim)) {
lambda_hyper$alim = ScalableBayesmConstant.DPalimdef
}
if (is.null(lambda_hyper$nulim)) {
lambda_hyper$nulim = ScalableBayesmConstant.DPnulimdef
}
if (is.null(lambda_hyper$vlim)) {
lambda_hyper$vlim = ScalableBayesmConstant.DPvlimdef
}
}
if (is.null(Prior$Prioralpha)) {
  Prioralpha = list(
    Istarmin = ScalableBayesmConstant.DPIstarmin,
    Istarmax = min(50, 0.1 * nreg),
    power = ScalableBayesmConstant.DPpower
  )
} else {
  Prioralpha = Prior$Prioralpha
  if (is.null(Prioralpha$Istarmin)) {
    Prioralpha$Istarmin = ScalableBayesmConstant.DPIstarmin
  }
  if (is.null(Prioralpha$Istarmax)) {
    Prioralpha$Istarmax = min(50, 0.1 * nreg)
  }
  if (is.null(Prioralpha$power)) {
    Prioralpha$power = ScalableBayesmConstant.DPpower
  }
}
if(is.null(Prior$nu.e)) {
nu.e=ScalableBayesmConstant.nu.e
} else {
nu.e=Prior$nu.e
}
if(is.null(Prior$ssq)) {
ssq=sapply(Data$regdata,getvar)
} else {
ssq=Prior$ssq
}
if(is.null(Prior$mubar)) {
mubar=matrix(rep(0,nvar),nrow=1)
} else {
mubar=matrix(Prior$mubar,nrow=1)
}
if(is.null(Prior$ncomp)) {
ncomp=1
} else {
ncomp=Prior$ncomp
}
gamma= ScalableBayesmConstant.gamma
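# Convert the prior bounds on the expected number of unique components (Istarmin, Istarmax)
# into bounds on the support of the DP concentration parameter alpha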
Prioralpha$alphamin=exp(digamma(Prioralpha$Istarmin)-log(gamma+log(nreg)))
Prioralpha$alphamax=exp(digamma(Prioralpha$Istarmax)-log(gamma+log(nreg)))
Prioralpha$n=nreg
if(ncol(mubar) != nvar) {stop(paste("mubar must have nvar cols, ncol(mubar)= ",
                                    ncol(mubar)), call.=FALSE)}
if(is.null(Prior$Amu)) {
  amu=matrix(ScalableBayesmConstant.A,ncol=1)
} else {
  amu=matrix(Prior$Amu,ncol=1)
}
if(ncol(amu) != 1 | nrow(amu) != 1) {stop("Amu must be a 1 x 1 matrix",
                                          call.=FALSE)}
if(is.null(Prior$nu)) {
nu=nvar+ScalableBayesmConstant.nuInc
} else {
nu=Prior$nu
}
if(nu < 1) {stop("invalid nu value", call.=FALSE)}
if(is.null(Prior$V)) {
  v=nu*diag(nvar)
} else {
  v=Prior$V
}
if(sum(dim(v)==c(nvar,nvar)) !=2) {stop("Invalid V in Prior", call.=FALSE)}
if(drawdelta) {
  if(ncol(Ad) != nvar*nz | nrow(Ad) != nvar*nz) { #check dimension of Ad
    stop("Ad must be nvar*nz x nvar*nz", call.=FALSE)
  }
}
if(is.null(Prior$deltabar) & drawdelta) {
deltabar=matrix(rep(0,nvar*nz), nrow = 1)
} else {
deltabar=Prior$deltabar
}
if(is.null(deltabar)){deltabar = matrix(0)}
#Check dimension of delta
if(drawdelta) {
if(length(deltabar) != nz*nvar) {stop("deltabar must be of length nvar*nz",
                                      call.=FALSE)}
}
if(is.null(Prior$a)) {
a=rep(ScalableBayesmConstant.a,ncomp)
} else {
a=Prior$a
}
if(length(a) != ncomp) {stop("Requires dim(a)= ncomp (no of components)",
                             call.=FALSE)}
bada=FALSE
for(i in 1:ncomp) { if(a[i] < 0) bada=TRUE}
if(bada) {stop("invalid values in a vector", call.=FALSE)}
# check on Mcmc
if(missing(Mcmc)) {
  stop("Requires Mcmc list argument", call.=FALSE)
} else {
  if(is.null(Mcmc$keep)) {keep=ScalableBayesmConstant.keep} else {keep=Mcmc$keep}
  if(is.null(Mcmc$R)) {stop("Requires MCMC element 'R' - number of iterations", call.=FALSE)} else {R=Mcmc$R}
  if(is.null(Mcmc$nprint)) {nprint=ScalableBayesmConstant.nprint} else {nprint=Mcmc$nprint}
  if(is.null(Mcmc$gridsize)) {gridsize=ScalableBayesmConstant.DPgridsize} else {gridsize=Mcmc$gridsize}
  if(nprint<0) {stop('nprint must be an integer greater than or equal to 0',
                     call.=FALSE)}
}
if (is.null(Mcmc$maxuniq)) {
maxuniq = ScalableBayesmConstant.DPmaxuniq
} else {
maxuniq = Mcmc$maxuniq
}
if(verbose){
# print problem
cat(" ",fill=TRUE)
cat("Starting MCMC Inference for Hierarchical Logit:",fill=TRUE)
cat(" Dirichlet Process Prior",fill=TRUE)
cat(paste(" for ",nreg," cross-sectional units"),fill=TRUE)
cat(" ",fill=TRUE)
cat(" Prior Parms: ",fill=TRUE)
cat(" G0 ~ N(mubar,Sigma (x) Amu^-1)",fill=TRUE)
cat(" mubar = ",0,fill=TRUE)
cat(" Sigma ~ IW(nu,nu*v*I)",fill=TRUE)
cat(" Amu ~ uniform[",lambda_hyper$alim[1],",",lambda_hyper$alim[2],"]",fill=TRUE)
cat(" nu ~ uniform on log grid [",nvar-1+exp(lambda_hyper$nulim[1]),
",",nvar-1+exp(lambda_hyper$nulim[2]),"]",fill=TRUE)
cat(" v ~ uniform[",lambda_hyper$vlim[1],",",lambda_hyper$vlim[2],"]",fill=TRUE)
cat(" ",fill=TRUE)
cat(" alpha ~ (1-(alpha-alphamin)/(alphamax-alphamin))^power",fill=TRUE)
cat(" Istarmin = ",Prioralpha$Istarmin,fill=TRUE)
cat(" Istarmax = ",Prioralpha$Istarmax,fill=TRUE)
cat(" alphamin = ",Prioralpha$alphamin,fill=TRUE)
cat(" alphamax = ",Prioralpha$alphamax,fill=TRUE)
cat(" power = ",Prioralpha$power,fill=TRUE)
cat(" ",fill=TRUE)
}
# initialize values
# Create XpX elements of regdata and initialize tau
regdata=lapply(Data$regdata,append) #includes y,x, beta,tau, XpX and Xpy
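# Initialize each unit's error variance tau_i at var(y_i), floored at 1 if missing or non-positive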
tau=sapply(regdata,getvar)
#initialize delta
if(drawdelta)
{
olddelta=matrix(0)
#olddelta = matrix(rep(0,nz*nvar), ncol = nz)
} else { #send placeholders to the _loop function if there is no Z matrix
olddelta = z = Ad = matrix(0)
}
draws = rhierLinearDPParallel_rcpp_loop(regdata = regdata, Z=z, deltabar = deltabar, Ad = Ad, Prioralphalist = Prioralpha,lambda_hyper = lambda_hyper,
mubar = mubar, Amu = amu, nu = nu, V = v, nu_e= nu.e, maxuniq = maxuniq, gridsize = gridsize, ssq = ssq,
R = R, keep = keep,
nprint= nprint, olddelta = olddelta,
a = a, tau = tau,
drawdelta = drawdelta,
BayesmConstantA = ScalableBayesmConstant.A, BayesmConstantnuInc = ScalableBayesmConstant.nuInc,
BayesmConstantDPalpha = ScalableBayesmConstant.DPalpha, verbose = verbose)
#draws = rhierLinearDPParallel_rcpp_loop(regdata, Z, deltabar, Ad, Prioralpha, lambda_hyper,
# mubar, amu, nu, v, nu.e, maxuniq, gridsize, ssq,
# R, keep, nprint, olddelta, a, tau, drawdelta,
# Constant.A, Constant.nuInc, Constant.DPalpha)
if(drawdelta){
attributes(draws$Deltadraw)$class=c("bayesm.mat","Mcmc")
attributes(draws$Deltadraw)$mcpar=c(1,R,keep)
}
return(draws)
}