fbase1.binomial: Single-Parameter Base Log-likelihood Function(s) for Binomial GLM


Description

Vectorized, single-parameter base log-likelihood functions for binomial GLM using various link functions. These base functions can be supplied to the expander function regfac.expand.1par in order to obtain the full, high-dimensional log-likelihood and its derivatives.
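
For instance, a full log-likelihood function for binary logit regression can be obtained with a thin wrapper of the following form (the wrapper name is illustrative; the same pattern is spelled out in the Examples section below):

loglike.logit <- function(beta, X, y, fgh)
  regfac.expand.1par(beta, X, y, fbase1.binomial.logit, fgh, n = 1)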

Usage

fbase1.binomial.logit(u, y, fgh=2, n=1)
fbase1.binomial.probit(u, y, fgh=2, n=1)
fbase1.binomial.cauchit(u, y, fgh=2, n=1)
fbase1.binomial.cloglog(u, y, fgh=2, n=1)

Arguments

u

Varying parameter of the base log-likelihood function. This parameter is intended to be projected onto a high-dimensional space using the familiar regression transformation of u <- X%*%beta. In the typical use case where the caller is regfac.expand.1par, a vector of values is supplied, and the return objects will have the same length as u.

y

Fixed slot of the base distribution, corresponding to the response variable in the regression model. For binomial family, it must be an integer vector with values between 0 and n.

fgh

Integer with possible values 0, 1, 2. If fgh=0, the function calculates and returns only the log-likelihood vector and no derivatives. If fgh=1, it returns the log-likelihood and its first derivative in a list. If fgh=2, it returns the log-likelihood as well as its first and second derivatives in a list. (A sketch of the resulting return structures follows the argument list below.)

n

Number of trials in the binomial model. This parameter is assumed to be fixed and must be supplied by the user. If n==1, the model reduces to binary logit/probit/cauchit/cloglog regression.
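
The following minimal sketch (toy inputs, not part of the package documentation) illustrates the return structures produced by the fgh settings:

# toy inputs: one value of u and y per observation, n = 1 (binary case)
u <- c(-1, 0, 1)
y <- c(0L, 1L, 1L)
f.only <- fbase1.binomial.logit(u, y, fgh = 0, n = 1)  # log-likelihood vector of length(u)
f.g.h  <- fbase1.binomial.logit(u, y, fgh = 2, n = 1)  # list with elements f, g, h
str(f.g.h)  # f, g and h are each vectors of length(u)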

Value

If fgh==0, the logit version returns -(n*log(1+exp(-u))+(n-y)*u), the probit returns y*log(pnorm(u))+(n-y)*log(1-pnorm(u)), the cauchit returns y*log(pcauchy(u))+(n-y)*log(1-pcauchy(u)), and the cloglog returns y*log(1-exp(-exp(u)))-(n-y)*exp(u). If fgh==1, a list is returned with elements f and g, where g is a vector of length length(u), each element being the first derivative of the above expressions with respect to u. If fgh==2, the list also includes an element named h, containing the second derivatives of f with respect to u.
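
Since only additive constants are dropped (see Note below), adding back the binomial-coefficient term recovers the exact log-density; a quick hedged check for the logit case (toy numbers, not from the package documentation):

u <- 0.3; y <- 2L; n <- 5L
f <- fbase1.binomial.logit(u, y, fgh = 0, n = n)
# adding the dropped lchoose(n, y) term should recover dbinom(..., log = TRUE)
all.equal(as.numeric(f) + lchoose(n, y),
          dbinom(y, size = n, prob = 1 / (1 + exp(-u)), log = TRUE))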

Note

In all base log-likelihood functions, we have dropped any additive terms that are independent of the distribution parameter, e.g. constant terms or those terms that are dependent on the response variable only. This is done for computational efficiency. Therefore, these functions cannot be used to obtain the absolute values of log-likelihood functions but only in the context of optimization and/or sampling. Users can write thin wrappers around these functions to add the constant terms to the function value. (Derivatives do not need correction. For binomial family, all factorial terms are ignored since they only depend on n and y.)
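
A minimal sketch of such a wrapper for the logit case (the wrapper name is hypothetical; it assumes the dropped constant is the lchoose(n, y) term mentioned above):

fbase1.binomial.logit.abs <- function(u, y, fgh = 2, n = 1) {
  ret <- fbase1.binomial.logit(u, y, fgh, n)
  const <- lchoose(n, y)      # additive term dropped by the base function
  if (fgh == 0) return(ret + const)
  ret$f <- ret$f + const      # derivatives need no correction
  ret
}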

Author(s)

Alireza S. Mahani, Mansour T.A. Sharabiani

See Also

regfac.expand.1par

Examples

## Not run: 
library(sns)
library(MfUSampler)

# using the expander framework and binomial base log-likelihood
# to define log-likelihood function for binary logit regression
loglike.logit <- function(beta, X, y, fgh) {
  regfac.expand.1par(beta, X, y, fbase1.binomial.logit, fgh, n=1)
}

# generate data for logistic regression
N <- 1000
K <- 5
X <- matrix(runif(N*K, min=-0.5, max=+0.5), ncol=K)
beta <- runif(K, min=-0.5, max=+0.5)
y <- 1*(runif(N) < 1.0/(1+exp(-X%*%beta)))

# obtaining glm coefficients for comparison
beta.glm <- glm(y~X-1, family="binomial")$coefficients

# mcmc sampling of log-likelihood
nsmp <- 100

# Slice Sampler (no derivatives needed)
beta.smp <- array(NA, dim=c(nsmp,K)) 
beta.tmp <- rep(0,K)
for (n in 1:nsmp) {
  beta.tmp <- MfU.Sample(beta.tmp
    , f=function(beta, X, y) loglike.logit(beta, X, y, fgh=0), X=X, y=y)
  beta.smp[n,] <- beta.tmp
}
beta.slice <- colMeans(beta.smp[(nsmp/2+1):nsmp,])

# Adaptive Rejection Sampler
# (only first derivative needed)
beta.smp <- array(NA, dim=c(nsmp,K)) 
beta.tmp <- rep(0,K)
for (n in 1:nsmp) {
  beta.tmp <- MfU.Sample(beta.tmp, uni.sampler="ars"
    , f=function(beta, X, y, grad) {
        if (grad)
          loglike.logit(beta, X, y, fgh=1)$g
        else
          loglike.logit(beta, X, y, fgh=0)
      }
    , X=X, y=y)
  beta.smp[n,] <- beta.tmp
}
beta.ars <- colMeans(beta.smp[(nsmp/2+1):nsmp,])

# SNS (Stochastic Newton Sampler)
# (both first and second derivative needed)
beta.smp <- array(NA, dim=c(nsmp,K)) 
beta.tmp <- rep(0,K)
for (n in 1:nsmp) {
  beta.tmp <- sns(beta.tmp, fghEval=loglike.logit, X=X, y=y, fgh=2)
  beta.smp[n,] <- beta.tmp
}
beta.sns <- colMeans(beta.smp[(nsmp/2+1):nsmp,])

# compare results
cbind(beta.glm, beta.slice, beta.ars, beta.sns)

## End(Not run)
