# samplePosNorm in hbmem: Hierarchical Bayesian Analysis of Recognition Memory

## Description

Samples the posterior of the mean parameters of the positive hierarchical linear normal model with a single variance parameter: $x \sim \mathrm{N}(\exp(\mu + \alpha_i + \beta_j),\ \sigma^2)$.
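Written out in full, the hierarchy this sampler targets looks roughly as follows. This is a sketch pieced together from the argument descriptions and the Examples below; the zero-mean priors on $\mu_k$ are an assumption, and the lag slope $\theta$ enters only when `sampLag` is set:

$$
y_r \sim \mathrm{N}\!\big(\exp(\mu_{\mathrm{cond}[r]} + \alpha_{\mathrm{sub}[r]} + \beta_{\mathrm{item}[r]} + \theta\,\mathrm{lag}_r),\ \sigma^2\big),
$$

$$
\mu_k \sim \mathrm{N}(0,\ \mathtt{sig2mu}), \qquad
\alpha_i \sim \mathrm{N}(0,\ \sigma^2_\alpha), \qquad
\beta_j \sim \mathrm{N}(0,\ \sigma^2_\beta),
$$

$$
\sigma^2_\alpha,\ \sigma^2_\beta \sim \mathrm{Inverse\text{-}Gamma}(a,\ b).
$$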

## Usage

```r
samplePosNorm(sample, y, cond, sub, item, lag, N, I, J, R,
              sig2mu, a, b, met, sigma2, sampLag)
```

## Arguments

- `sample`: Block of linear model parameters from the previous iteration.
- `y`: Vector of data.
- `cond`: Vector of condition indices.
- `sub`: Vector of subject indices, starting at zero (see the coding sketch after this list).
- `item`: Vector of item indices, starting at zero.
- `lag`: Vector of lag values, zero-centered.
- `N`: Number of conditions.
- `I`: Number of subjects.
- `J`: Number of items.
- `R`: Total number of trials.
- `sig2mu`: Prior variance on the grand mean mu; usually set to some large number.
- `a`: Shape parameter of the inverse-gamma prior placed on the effect variances.
- `b`: Rate parameter of the inverse-gamma prior placed on the effect variances. Setting both `a` and `b` to be small (e.g., .01) makes this prior uninformative.
- `met`: Vector of tuning parameters for Metropolis-Hastings sampling: one for each of the N condition means (mu), each of the I alphas, each of the J betas, and one each for s2alpha, s2beta, and theta. Those for s2alpha and s2beta are placeholders, as these parameters are sampled with Gibbs steps.
- `sigma2`: Variance of the distribution.
- `sampLag`: Logical. Whether or not to sample the lag effect.
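The zero-based index coding and the centered lag covariate are easy to get wrong. Here is a minimal illustration of how these vectors are typically built, mirroring the construction in the Examples section (the sizes here are arbitrary):

```r
# Illustration of the index coding samplePosNorm expects
# (sizes are arbitrary; the Examples section shows a full run)
N <- 2; I <- 4; J <- 5                     # conditions, subjects, items
R <- N * I * J                             # total number of trials
cond <- sample(0:(N-1), R, replace=TRUE)   # condition index
sub  <- rep(rep(0:(I-1), each=J), N)       # subject index, starts at zero
item <- rep(rep(0:(J-1), I), N)            # item index, starts at zero
lag  <- as.numeric(scale(rnorm(R)))        # lag covariate, zero-centered
met  <- rep(.1, N + I + J + 3)             # one tuning value per sampled parameter
```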

## Value

The function returns a list. The first element of the list is the newly sampled block of parameters. The second element contains a vector of 0s and 1s indicating which of the decorrelating steps were accepted.
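In use, the two list elements feed the next MCMC iteration and the acceptance bookkeeping. A fragment in the style of the Examples below (it assumes the objects defined there):

```r
tmp <- samplePosNorm(block[m-1, ], y, cond, sub, item, lag,
                     N, I, J, R, 1, .01, .01, met, t.sigma2, 1)
block[m, ] <- tmp[[1]]   # element 1: the newly sampled parameter block
b0 <- b0 + tmp[[2]]      # element 2: 0/1 acceptance indicators, accumulated
b0 / m                   # running acceptance rates, useful for tuning met
```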

## Author(s)

Michael S. Pratte

## References

Not yet published.

## Examples

```r
library(hbmem)

# Simulate data from the model
N=3
I=50
J=100
R=N*I*J
t.sigma2=3
t.mu=c(-1,0,1)
t.sig2alpha=.2
t.sig2beta=.6
t.alpha=rnorm(I,0,sqrt(t.sig2alpha))
t.beta =rnorm(J,0,sqrt(t.sig2beta))
t.theta=-.5
cond=sample((0:(N-1)),R,replace=TRUE)
sub=rep(rep(0:(I-1),each=J),N)
item=rep(rep(0:(J-1),I),N)
lag=scale(rnorm(R,0,sqrt(t.sigma2)/10))
tmean=1:R
for(r in 1:R)
  tmean[r]=exp(t.mu[cond[r]+1]+t.alpha[sub[r]+1]+t.beta[item[r]+1]+t.theta*lag[r])
y=rnorm(R,tmean,sqrt(t.sigma2))

# Run the sampler
M=10 #Way too low for real analysis!
B=N+I+J+3
block=matrix(0,nrow=M,ncol=B)
met=rep(.1,B);jump=.0001
b0=rep(0,B)
keep=2:M
for(m in 2:M)
{
  tmp=samplePosNorm(block[m-1,],y,cond,sub,item,lag,N,I,J,R,1,.01,.01,met,t.sigma2,1)
  block[m,]=tmp[[1]]
  b0=b0+tmp[[2]]
  #Tune Metropolis steps toward acceptance rates between .3 and .5
  if(m<20)  #tuning cutoff; illustrative value
  {
    met=met+(b0/m<.3)*-jump+(b0/m>.5)*jump
    met[met<jump]=jump
  }
}
```
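After the loop, one would typically summarize the retained draws and check recovery of the generating values. A hedged sketch, assuming the parameter block is ordered as mu[1:N], alpha[1:I], beta[1:J], s2alpha, s2beta, theta (consistent with B = N + I + J + 3 above):

```r
est <- colMeans(block[keep, ])       # posterior means of all parameters
plot(est[(N+1):(N+I)], t.alpha,      # recovery of subject effects
     xlab="Estimated alpha", ylab="True alpha")
abline(0, 1)                         # perfect recovery falls on this line
```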