Description

Sparse logistic regression (SLR) with automatic relevance determination (ARD).

Arguments

T
A vector of binary categorical dependent variable. 
X 
A matrix, each row of which is a vector of independent variables. 
bias 
A logical value specifying whether to include intercept or not. 
method 
The method to be used. See ‘Details’. 
control 
A list of control parameters. See ‘Details’. 
check.lb 
A logical value specifying whether to check the validity of parameter updates.
Defaults to ‘TRUE’.

Hyperparameters:
a0
Shape parameter of the Gamma distribution.
b0
Rate parameter of the Gamma distribution.

Initial values:
mu0 
Coefficients. 
xi0 
Variational parameter. 

Details

Some independent variables can be pruned in each VBEM iteration, and are never used in successive iterations, once the irrelevance of a variable exceeds the threshold determined by ‘control$pruning’. Explicitly set ‘control$pruning = Inf’ to prevent pruning.
Method ‘"VB"’ is a robust variational Bayesian method. See Bishop (2006).
Method ‘"VBMacKay"’ is basically the same as ‘"VB"’, but some parameters are updated using the method of MacKay (1992). Global convergence has not been proven, but this algorithm is faster.
Method ‘"PXVB"’ is based on the Parameter eXpanded VB method proposed by Qi and Jaakkola (2007). Global convergence is proven, and it can be faster than ‘"VB"’.
The ‘control’ argument is a list that can supply any of the following components:
pruning: threshold for pruning independent variables. No variables are pruned if ‘Inf’. Defaults to ‘1e+8’.
See ‘?optim’ for the meanings of the following control parameters.
maxit: Defaults to ‘10^5’.
reltol: Defaults to ‘sqrt(.Machine$double.eps)’.
trace: Defaults to ‘TRUE’.
REPORT: Defaults to ‘floor(control$maxit / 20)’.
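As a sketch of overriding these defaults (assuming the package providing ‘SlrArd’ is attached, and reusing the iris setup from the Examples section):

```r
## Sketch: passing a custom ‘control’ list (assumes SlrArd is available).
data(iris)
tmp <- iris[iris$Species != 'versicolor', ]
T <- tmp$Species == 'setosa'   # binary response
X <- as.matrix(tmp[, 1:4])     # matrix of independent variables

## Disable pruning, cap the iterations, and silence the trace.
res <- SlrArd(T, X, bias = TRUE, method = "VB",
              control = list(pruning = Inf, maxit = 1000, trace = FALSE))
print(res$converged)
```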

Value

coefficients
A named vector of coefficients. 
irrelevance 
A named vector of irrelevance. 
iterations 
A number of iterations. 
converged 
A logical value giving whether the iterations converged or not. 
lower.bound 
The lower bound of the marginal log-likelihood minus a constant term. 
method 
The method used. 
lb.diff 
The difference of the lower bounds in each update. This exists only when ‘check.lb = TRUE’.
fitted.values 
The fitted values. 
residuals 
The residuals, that is, the response minus the fitted values. 
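Assuming a fitted object ‘res’ as in the Examples, these components can be read with the standard accessors (a sketch; the component names follow the Value list above):

```r
## Sketch: inspecting a fitted SlrArd result (assumes SlrArd is available).
data(iris)
tmp <- iris[iris$Species != 'versicolor', ]
T <- tmp$Species == 'setosa'
X <- as.matrix(tmp[, 1:4])
res <- SlrArd(T, X, bias = TRUE, method = "VBMacKay")

print(coefficients(res))  # named vector of coefficients
print(res$irrelevance)    # named vector of irrelevance
print(res$converged)      # TRUE if the VBEM iterations converged
head(residuals(res))      # response minus fitted values
```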

Author(s)

Hiroshi Saito [email protected]

References

Bishop, C. M. (2006) Pattern recognition and machine learning. Springer.
MacKay, D. J. C. (1992) Bayesian interpolation. _Neural Computation_, *4*(3), 415-447.
Qi, Y. and Jaakkola, T. S. (2007) Parameter expanded variational Bayesian methods. _Advances in Neural Information Processing Systems_, *19*, 1097.

Examples

data(iris)
tmp <- iris[iris$Species != 'versicolor',]
T <- tmp$Species == 'setosa'
X <- as.matrix(tmp[,1:4])
res <- SlrArd(T, X, bias=TRUE, method="VB", control = list(maxit=500))
print(coefficients(res))
res <- SlrArd(T, X, bias=TRUE, method="VBMacKay") ## faster
print(coefficients(res))
res <- SlrArd(T, X, bias=FALSE, method="VBMacKay") ## without bias
print(coefficients(res))
