Description

lik_pw conducts a KDE for each observation, sequentially (or concurrently via OpenMP). This function is for testing purposes. If you wish to conduct KDE smoothing point by point, use 'density' in the stats package, which uses an FFT algorithm similar to that of lik_fft.
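The pointwise estimator can be sketched in a few lines of base R. `pw_kde` below is an illustrative helper, not part of the package; it shows the kernel average that a point-by-point Gaussian KDE computes for each observation.

```r
## Illustrative pointwise Gaussian KDE (not a cpda function): for each
## observation y, average a Gaussian kernel centred on every simulated
## value in yhat, using bandwidth h.
pw_kde <- function(y, yhat, h) {
  sapply(y, function(yi) mean(dnorm((yi - yhat) / h)) / h)
}

set.seed(1)
yhat <- rnorm(1e5)
h <- bw.nrd0(yhat)      ## Silverman's rule of thumb
pw_kde(0, yhat, h)      ## close to dnorm(0) = 0.3989423
```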
Usage

    lik_pw(y, yhat, h = 0, m = 0.8, n = 0L, parallel = FALSE)
Arguments

y: a vector storing empirical observations (e.g., response times).

yhat: a vector storing simulated observations.

h: kernel bandwidth. The default (h = 0) uses 0.8 times Silverman's rule of thumb, computed from the simulated data (i.e., yhat).

m: a bandwidth multiplier. Default is 0.8.

n: the number of simulated trials. When n = 0, the function counts the observations in the simulated histogram itself. When simulating a defective distribution, supply the number of simulated trials explicitly.

parallel: a switch for parallel processing via OpenMP. Default is FALSE.
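To illustrate the role of n: in a choice model each response type receives only part of the n simulated trials, so a density estimated from the within-choice sample must be rescaled by that choice's relative frequency to form a defective density. A minimal sketch follows; `defective_density` is an illustrative helper, not a cpda function.

```r
## Sketch of defective-density scaling: estimate the density from the
## within-choice sample, then scale by that choice's share of all n trials.
defective_density <- function(y, yhat_choice, n) {
  d <- density(yhat_choice)
  approx(d$x, d$y, y, rule = 2)$y * length(yhat_choice) / n
}

set.seed(2)
n   <- 1e5
sim <- rnorm(n)
pos <- sim[sim > 0]            ## one "choice" gets roughly half the trials
defective_density(1, pos, n)   ## close to dnorm(1) = 0.2419707
```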
Value

A vector storing the pointwise likelihoods, one per observation.
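When the pointwise likelihoods feed a sampler, they are usually combined on the log scale. A hedged sketch: `sum_loglik` and its floor value are assumptions for illustration, not package defaults; the floor guards against log(0) for observations far outside the simulated range.

```r
## Illustrative helper: combine pointwise likelihoods into a summed
## log-likelihood, flooring zeros so the sum stays finite.
sum_loglik <- function(lik, floor = 1e-10) sum(log(pmax(lik, floor)))

sum_loglik(c(0.40, 0.25, 0))   ## the zero is floored, not -Inf
```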
References

Holmes, W. R. (2015). A practical guide to the Probability Density Approximation (PDA) with improved implementation and error characterization. Journal of Mathematical Psychology, 68-69, 13-24. doi: http://dx.doi.org/10.1016/j.jmp.2015.08.006

Turner, B. M., & Sederberg, P. B. (2014). A generalized, likelihood-free method for posterior estimation. Psychonomic Bulletin & Review, 21(2), 227-250. doi: http://dx.doi.org/10.3758/s13423-013-0530-0

Brown, S. D., & Heathcote, A. (2008). The simplest complete model of choice response time: Linear ballistic accumulation. Cognitive Psychology, 57, 153-178. doi: http://dx.doi.org/10.1016/j.cogpsych.2007.12.002
Examples

## Example 1 tests whether lik_pw matches 'dnorm' and shows how to
## adjust the bandwidth
rm(list=ls())
lik_pw(0, rnorm(1e6)) ## 0.3974386
dnorm(0) ## 0.3989423
h <- 0.8*bw.nrd0(rnorm(1e6)); ## 0.04542646
lik_pw(0, rnorm(1e6), h=h) ## 0.3996198
lik_pw(0, rnorm(1e6), h=h, m=1) ## 0.3985692
## Example 2 demonstrates how to use lik_pw to obtain pLBA likelihoods
data(lba)
str(plba)
## List of 4
## $ DT1 : num [1:695] 0.488 0.801 0.376 0.507 0.532 ...
## $ DT2 : num [1:305] 0.538 0.77 0.568 0.271 0.881 ...
## $ eDT1: num [1:7020] 0.475 0.346 0.42 0.401 0.368 ...
## $ eDT2: num [1:2980] 0.703 0.693 0.704 0.462 0.468 ...
## Use pointwise pda to get likelihoods for each data point
## This algorithm calculates densities directly via a standard Gaussian kernel
## (1) First argument, plba$DT1, is the data.
## (2) Second argument, plba$eDT1, is the simulation.
## (3) The output is likelihood.
output <- lik_pw(plba$DT1, plba$eDT1)
sum(log(output)) ## Get the summed log-likelihood
#########################
## Example 3 ##
#########################
rm(list=ls())
n <- 1e5
x <- seq(-3,3, length.out=100) ## Support
xlabel <- "Observations"
ylabel <- "Density"
## Approximate Gaussian densities
samp <- rnorm(n)
pw1 <- lik_pw(x, samp)
pw2 <- approx(density(samp)$x, density(samp)$y, x)$y
plot(x, pw1, type="l", lty="longdash", xlab=xlabel, ylab=ylabel)
lines(x, pw2, lwd=1.5, lty="dotted")
lines(x, dnorm(x), lwd=2)
samp <- gamlss.dist::rexGAUS(n, mu=-2, sigma=1, nu=1)
system.time(pw1 <- lik_pw(x, samp, parallel=TRUE))
system.time(pw2 <- approx(density(samp)$x, density(samp)$y, x)$y)
plot(x, pw1, type="l", lty="longdash", xlab=xlabel, ylab=ylabel)
lines(x, pw2, lwd=1.5, lty="dotted")
lines(x, gamlss.dist::dexGAUS(x, mu=-2, sigma=1, nu=1), col="red")
## Approximate densities of linear regression with Gaussian noise:
## y = ax + b + N(0, s)
theta <- c(a=7.5, b=3.5, s=5)
y <- rnorm(length(x), mean=theta[2]+theta[1]*x, sd=theta[3])
dat <- cbind(x, y)
plot(x, y, main="Linear Regression")
## -- Because the mean differs for each data point, we must generate a
## separate simulation (i.e., samp) for each data point. That is, the
## likelihood of each data point is calculated from a different simulated
## likelihood function.
## -- We use sapply to gain a little speed-up.
## -- The 'density' function cannot calculate the density of a single data point
pw1 <- sapply(x, function(s) {
samp <- rnorm(n, mean=theta[2]+theta[1]*s, sd=theta[3])
lik_pw(s, samp)
})
plot(x, pw1, type="l", lty="longdash", xlab=xlabel, ylab=ylabel)
lines(x, dnorm(x, theta[2]+theta[1]*x, theta[3]), col="red")
#########################
## Example 4: LBA ##
#########################
rm(list=ls())
## Assuming that this is empirical data
y <- rtdists::rLBA(1e3, A=.5, b=1, t0=.25, mean_v=c(2.4, 1.2), sd_v=c(1,1.2))
rt1 <- y[y$response==1,"rt"]
rt2 <- y[y$response==2,"rt"]
srt1 <- sort(rt1)
srt2 <- sort(rt2)
summary(rt1); summary(rt2)
n <- 1e5
pvec <- c(b=1, A=.5, v1=2.4, v2=1.2, sv1=1, sv2=1.2, t0=.25)
samp <- cpda::rlba1(n, pvec)
samp1 <- samp[samp[,2]==1, 1]
samp2 <- samp[samp[,2]==2, 1]
pw1 <- lik_pw(srt1, samp1, n=n)
pw2 <- lik_pw(srt2, samp2, n=n)
den0 <- rtdists::dLBA(y$rt, y$response, A=.5, b=1, t0=.25,
mean_v=c(2.4, 1.2), sd_v=c(1, 1.2))
df0 <- cbind(y, den0)
df1 <- df0[df0[,2]==1,]
df2 <- df0[df0[,2]==2,]
den1 <- df1[order(df1[,1]),3]
den2 <- df2[order(df2[,1]),3]
plot(srt1, den1, type="l")
lines(srt2, den2)
lines(srt1, pw1, col="red")
lines(srt2, pw2, col="red")
#########################
## Example 5 ##
#########################
n <- 1e5
x <- seq(-3,3, length.out=100) ## Support
sam <- rnorm(n)
## Tested on 12-core CPU
rbenchmark::benchmark(replications=rep(10, 3),
pw <- cpda::lik_pw(x, sam),
pw_omp <- cpda::lik_pw(x, sam, parallel = T),
columns=c('test', 'elapsed', 'replications'))
## test elapsed replications
## 1 pw <- cpda::lik_pw(x, sam) 2.570 10
## 3 pw <- cpda::lik_pw(x, sam) 2.485 10
## 5 pw <- cpda::lik_pw(x, sam) 2.484 10
## 2 pw_omp <- cpda::lik_pw(x, sam, parallel = T) 0.993 10
## 4 pw_omp <- cpda::lik_pw(x, sam, parallel = T) 1.119 10
## 6 pw_omp <- cpda::lik_pw(x, sam, parallel = T) 1.024 10