Function to estimate a vector of parameters based on moment conditions using the generalized method of moments (GMM) of Hansen (1982).
gmm(g, x, t0 = NULL, gradv = NULL, type = c("twoStep", "cue", "iterative"),
wmatrix = c("optimal", "ident"), vcov = c("HAC", "MDS", "iid", "TrueFixed"),
kernel = c("Quadratic Spectral", "Truncated", "Bartlett", "Parzen", "Tukey-Hanning"),
crit = 10e-7, bw = bwAndrews, prewhite = 1, ar.method = "ols", approx = "AR(1)",
tol = 1e-7, itermax = 100, optfct = c("optim", "optimize", "nlminb", "constrOptim"),
model = TRUE, X = FALSE, Y = FALSE, TypeGmm = "baseGmm", centeredVcov = TRUE,
weightsMatrix = NULL, traceIter = FALSE, data, eqConst = NULL,
eqConstFullVcov = FALSE, ...)
evalGmm(g, x, t0, tetw = NULL, gradv = NULL, wmatrix = c("optimal", "ident"),
vcov = c("HAC", "iid", "TrueFixed"), kernel = c("Quadratic Spectral", "Truncated",
"Bartlett", "Parzen", "Tukey-Hanning"), crit = 10e-7, bw = bwAndrews,
prewhite = FALSE, ar.method = "ols", approx = "AR(1)", tol = 1e-7,
model = TRUE, X = FALSE, Y = FALSE, centeredVcov = TRUE, weightsMatrix = NULL, data)
gmmWithConst(obj, which, value)

g 
A function of the form g(θ,x) which returns an n \times q matrix with typical element g_i(θ,x_t) for i=1,...,q and t=1,...,n. This matrix is then used to build the q sample moment conditions. It can also be a formula if the model is linear (see details below). 
x 
The matrix or vector of data from which the function g(θ,x) is computed. If "g" is a formula, it is an n \times Nh matrix of instruments or a formula (see details below). 
t0 
A k \times 1 vector of starting values. It is required only when "g" is a function, since a numerical algorithm is then used to minimize the objective function. If the dimension of θ is one, see the argument "optfct". 
tetw 
A k \times 1 vector of parameters at which the weighting matrix is computed. 
gradv 
A function of the form G(θ,x) which returns a q\times k matrix of derivatives of \bar{g}(θ) with respect to θ. By default, the derivatives are computed numerically. 
type 
The GMM method: "twoStep" is the two-step GMM proposed by Hansen (1982), and "cue" and "iterative" are respectively the continuously updated and the iterative GMM proposed by Hansen, Heaton and Yaron (1996). 
wmatrix 
Which weighting matrix should be used in the objective function. By default, it is the inverse of the covariance matrix of g(θ,x). The other choice is the identity matrix which is usually used to obtain a first step estimate of θ 
vcov 
Assumption on the properties of the random vector x. By default, x is a weakly dependent process. The "iid" option avoids computing the HAC matrix, which speeds up the estimation if one is prepared to make that assumption. The option "TrueFixed" is used only when the weighting matrix provided is the optimal one. 
kernel 
Type of kernel used to compute the covariance matrix of the vector of sample moment conditions (see kernHAC in the sandwich package for more details). 
crit 
The stopping rule for the iterative GMM. It can be reduced to increase the precision. 
bw 
The method to compute the bandwidth parameter in the HAC weighting matrix. The default is bwAndrews, the method proposed by Andrews (1991). 
prewhite 
logical or integer. Should the estimating functions be prewhitened? If TRUE or greater than 0, a VAR model of order as.integer(prewhite) is fitted to the moment conditions before the HAC matrix is computed. 
ar.method 
character. The method argument passed to ar when fitting the prewhitening model. 
approx 
A character specifying the approximation method if the bandwidth has to be chosen by bwAndrews. 
tol 
Weights that exceed tol are used for computing the covariance matrix; all other weights are treated as 0. 
itermax 
The maximum number of iterations for the iterative GMM. It is unlikely that the algorithm fails to converge, but itermax is kept as a safeguard. 
optfct 
The optimization function to use. optimize can be chosen only when the dimension of θ is 1; otherwise choose among optim, nlminb and constrOptim. 
model, X, Y 
logical. If TRUE, the corresponding components of the fit (the model frame, the model matrix, the response) are returned if "g" is a formula. 
TypeGmm 
The name of the class object created by the method getModel. It allows developers to extend the package with new GMM methods. 
centeredVcov 
Should the moment function be centered when computing its covariance matrix. Doing so may improve inference. 
weightsMatrix 
It allows users to provide gmm with a fixed weighting matrix. This matrix must be q \times q, symmetric and strictly positive definite. 
traceIter 
Tracing information for GMM of type "iterative". 
data 
A data.frame or a matrix with column names (Optional). 
eqConst 
Either a named vector (if "g" is a function), a simple vector for the nonlinear case indicating which elements of θ_0 are restricted, or a q \times 2 matrix defining equality constraints of the form θ_i = c_i. See the examples below. 
which, value 
The equality constraint is of the form which=value. "which" can be a vector of type characters with the names of the coefficients being constrained, or a vector of type numeric with the position of the coefficient in the whole vector. 
obj 
Object of class "gmm" 
eqConstFullVcov 
If FALSE, the constrained coefficients are assumed to be fixed and only the covariance of the unconstrained coefficients is computed. If TRUE, the covariance matrix of the full set of coefficients is computed. 
... 
More options to give to optim. 
If we want to estimate a model like Y_t = θ_1 + X_{2t}θ_2 + \cdots + X_{kt}θ_k + ε_t using the moment conditions Cov(ε_t H_t)=0, where H_t is a vector of Nh instruments, then we can define "g" as we do for lm. We would have g = y ~ x2 + x3 + \cdots + xk, and the argument "x" above would become the matrix H of instruments. As for lm, Y_t can be an Ny \times 1 vector, which would imply that k = Nh \times Ny. The intercept is included by default, so you do not have to add a column of ones to the matrix H. You do not need to provide the gradient in that case, since for linear models it is embedded in gmm. The intercept can be removed by adding -1 to the formula; in that case, the column of ones needs to be added manually to H. It is also possible to express "x" as a formula. For example, if the instruments are \{1,z_1,z_2,z_3\}, we can set "x" to ~ z1+z2+z3. By default, a column of ones is added; to remove it, set "x" to ~ z1+z2+z3-1.
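To make the mapping concrete, here is a hedged base-R sketch of the kind of n \times q moment matrix the formula interface builds for a linear IV model. All names (z1, z2, gLin, etc.) and the simulated data are illustrative, not part of the package:

```r
# Illustrative simulated data (not from the package)
set.seed(1)
n  <- 200
z1 <- rnorm(n); z2 <- rnorm(n)        # instruments
x2 <- z1 + z2 + rnorm(n)              # regressor correlated with the instruments
y  <- 1 + 0.5 * x2 + rnorm(n)         # outcome, true theta = (1, 0.5)
H  <- cbind(1, z1, z2)                # instrument matrix, intercept included

# g(theta, .) returns the n x q matrix with row t equal to eps_t * H_t
gLin <- function(theta) {
  eps <- y - theta[1] - theta[2] * x2 # residuals at the candidate theta
  H * eps                             # each column of H scaled by eps
}
gbar <- colMeans(gLin(c(1, 0.5)))     # sample moments, near zero at true theta
```

At the true parameters the column means of this matrix are close to zero; GMM chooses θ to make their weighted norm as small as possible.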
The following explains the last example below. Thanks to Dieter Rozenich, a student from the Vienna University of Economics and Business Administration, who suggested that it would help to understand the implementation of the Jacobian.
For the two parameters of a normal distribution (μ,σ) we have the following three moment conditions:
m_{1} = μ - x_{i}
m_{2} = σ^2 - (x_{i}-μ)^2
m_{3} = x_{i}^{3} - μ (μ^2+3σ^{2})
m_{1} and m_{2} can be obtained directly from the definition of (μ,σ). The third moment condition comes from the third derivative of the moment generating function (MGF)
M_{X}(t) = exp\Big(μ t + \frac{σ^{2}t^{2}}{2}\Big)
evaluated at (t=0).
Note that we have more equations (3) than unknown parameters (2).
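As a sanity check (our own base-R sketch, not package code), the three moment conditions should average to roughly zero when evaluated at the true (μ, σ) on simulated data:

```r
set.seed(123)
x   <- rnorm(5000, mean = 4, sd = 2)   # simulated N(4, 2) sample
tet <- c(4, 2)                         # true (mu, sigma)
m1  <- tet[1] - x
m2  <- tet[2]^2 - (x - tet[1])^2
m3  <- x^3 - tet[1] * (tet[1]^2 + 3 * tet[2]^2)
gbar <- colMeans(cbind(m1, m2, m3))    # all three close to zero
```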
The Jacobian of these three moment conditions with respect to (μ,σ) is the 3 \times 2 matrix:
1~~~~~~~~~~ 0
-2μ+2\bar{x} ~~~~~ 2σ
-3μ^{2}-3σ^{2} ~~~~ -6μσ
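The signs above are easy to get wrong, so here is a self-contained base-R check (our own sketch) comparing this analytic Jacobian of the averaged moments with a central-difference numerical derivative:

```r
set.seed(1)
x <- rnorm(1000, mean = 4, sd = 2)

# Averaged moment conditions as a function of tet = (mu, sigma)
gbar <- function(tet) c(tet[1] - mean(x),
                        tet[2]^2 - mean((x - tet[1])^2),
                        mean(x^3) - tet[1] * (tet[1]^2 + 3 * tet[2]^2))

# Analytic 3 x 2 Jacobian, matching the matrix displayed above
Dg <- function(tet) matrix(c(1, 2 * (mean(x) - tet[1]), -3 * tet[1]^2 - 3 * tet[2]^2,
                             0, 2 * tet[2],             -6 * tet[1] * tet[2]),
                           nrow = 3, ncol = 2)

# Central-difference derivative, column j = d gbar / d tet_j
numDg <- function(tet, h = 1e-5) {
  sapply(1:2, function(j) {
    e <- replace(numeric(2), j, h)
    (gbar(tet + e) - gbar(tet - e)) / (2 * h)
  })
}
err <- max(abs(Dg(c(4, 2)) - numDg(c(4, 2))))  # tiny if the signs are right
```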
gmmWithConst()
re-estimates an unrestricted model by adding an
equality constraint.
evalGmm()
creates an object of class '"gmm"' for a given
parameter vector. If no vector "tetw" is provided and the weighting
matrix needs to be computed, "t0" is used.
'gmm' returns an object of 'class' '"gmm"'.
The function 'summary' is used to obtain and print a summary of the results. It also computes the J-test of overidentifying restrictions.
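For intuition only, the statistic behind that J-test can be sketched in base R as J = n \bar{g}' S^{-1} \bar{g}, compared to a χ² with q - k degrees of freedom. This is our own illustration on simulated data, with tet a crude stand-in for the GMM estimate, not the package's computation:

```r
set.seed(42)
n <- 2000
x <- rnorm(n, mean = 4, sd = 2)
tet <- c(mean(x), sd(x))               # crude estimate of (mu, sigma)

# q = 3 moment conditions from the normal-distribution example
gmat <- cbind(tet[1] - x,
              tet[2]^2 - (x - tet[1])^2,
              x^3 - tet[1] * (tet[1]^2 + 3 * tet[2]^2))
gbar <- colMeans(gmat)
S    <- var(gmat)                      # iid estimate of the moment covariance
J    <- n * drop(t(gbar) %*% solve(S, gbar))
pval <- pchisq(J, df = ncol(gmat) - length(tet), lower.tail = FALSE)
```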
The object of class "gmm" is a list containing at least:
coefficients 
k\times 1 vector of coefficients 
residuals 
the residuals, that is response minus fitted values if "g" is a formula. 
fitted.values 
the fitted mean values if "g" is a formula. 
vcov 
the covariance matrix of the coefficients 
objective 
the value of the objective function \| var(\bar{g})^{-1/2}\bar{g}\|^2 
terms 
the terms object used when "g" is a formula. 
call 
the matched call. 
y 
if requested, the response used (if "g" is a formula). 
x 
if requested, the model matrix used if "g" is a formula or the data if "g" is a function. 
model 
if requested (the default), the model frame used if "g" is a formula. 
algoInfo 
Information produced by either optim or nlminb about convergence when "g" is a function. 
Zeileis A (2006), Object-Oriented Computation of Sandwich Estimators. Journal of Statistical Software, 16(9), 1–16. URL http://www.jstatsoft.org/v16/i09/.
Pierre Chausse (2010), Computing Generalized Method of Moments and Generalized Empirical Likelihood with R. Journal of Statistical Software, 34(11), 1–35. URL http://www.jstatsoft.org/v34/i11/.
Andrews DWK (1991), Heteroskedasticity and Autocorrelation Consistent Covariance Matrix Estimation. Econometrica, 59, 817–858.
Newey WK & West KD (1987), A Simple, Positive Semi-Definite, Heteroskedasticity and Autocorrelation Consistent Covariance Matrix. Econometrica, 55, 703–708.
Newey WK & West KD (1994), Automatic Lag Selection in Covariance Matrix Estimation. Review of Economic Studies, 61, 631–653.
Hansen, L.P. (1982), Large Sample Properties of Generalized Method of Moments Estimators. Econometrica, 50, 1029–1054.
Hansen, L.P., Heaton, J. and Yaron, A. (1996), Finite-Sample Properties of Some Alternative GMM Estimators. Journal of Business and Economic Statistics, 14, 262–280.
## CAPM test with GMM
data(Finance)
r <- Finance[1:300, 1:10]
rm <- Finance[1:300, "rm"]
rf <- Finance[1:300, "rf"]
z <- as.matrix(r - rf)
t <- nrow(z)
zm <- rm - rf
h <- matrix(zm, t, 1)
res <- gmm(z ~ zm, x = h)
summary(res)
## linear tests can be performed using linearHypothesis from the car package
## The CAPM can be tested as follows:
library(car)
linearHypothesis(res,cbind(diag(10),matrix(0,10,10)),rep(0,10))
# The CAPM of Black
g <- function(theta, x) {
  e <- x[, 2:11] - theta[1] - (x[, 1] - theta[1]) %*% matrix(theta[2:11], 1, 10)
  gmat <- cbind(e, e * c(x[, 1]))
  return(gmat)
}
x <- as.matrix(cbind(rm, r))
res_black <- gmm(g, x = x, t0 = rep(0, 11))
summary(res_black)$coefficients
## APT test with Fama-French factors and GMM
f1 <- zm
f2 <- Finance[1:300, "hml"]
f3 <- Finance[1:300, "smb"]
h <- cbind(f1, f2, f3)
res2 <- gmm(z ~ f1 + f2 + f3, x = h)
coef(res2)
summary(res2)$coefficients
## Same result with x defined as a formula:
res2 <- gmm(z ~ f1 + f2 + f3, ~ f1 + f2 + f3)
coef(res2)
## The following example has been provided by Dieter Rozenich (see details).
# It generates normal random numbers and uses the GMM to estimate
# mean and sd.
#
# Random numbers of a normal distribution
# First we generate normally distributed random numbers and compute the two parameters:
n <- 1000
x <- rnorm(n, mean = 4, sd = 2)
# Implementing the 3 moment conditions
g <- function(tet, x)
{
  m1 <- (tet[1] - x)
  m2 <- (tet[2]^2 - (x - tet[1])^2)
  m3 <- x^3 - tet[1] * (tet[1]^2 + 3 * tet[2]^2)
  f <- cbind(m1, m2, m3)
  return(f)
}
# Implementing the Jacobian
Dg <- function(tet, x)
{
  jacobian <- matrix(c(1, 2 * (-tet[1] + mean(x)), -3 * tet[1]^2 - 3 * tet[2]^2,
                       0, 2 * tet[2], -6 * tet[1] * tet[2]),
                     nrow = 3, ncol = 2)
  return(jacobian)
}
# Now we want to estimate the two parameters using the GMM.
gmm(g, x, c(0, 0), grad = Dg)
# Two-stage least squares (2SLS), or IV with iid errors.
# The model is:
# Y(t) = b[0] + b[1]C(t) + b[2]Y(t-1) + e(t)
# e(t) is an MA(1)
# The instruments are Z(t) = {1, C(t), Y(t-2), Y(t-3), Y(t-4)}
getdat <- function(n) {
  e <- arima.sim(n, model = list(ma = .9))
  C <- runif(n, 0, 5)
  Y <- rep(0, n)
  Y[1] <- 1 + 2*C[1] + e[1]
  for (i in 2:n) {
    Y[i] <- 1 + 2*C[i] + 0.9*Y[i-1] + e[i]
  }
  Yt <- Y[5:n]
  X <- cbind(1, C[5:n], Y[4:(n-1)])
  Z <- cbind(1, C[5:n], Y[3:(n-2)], Y[2:(n-3)], Y[1:(n-4)])
  return(list(Y = Yt, X = X, Z = Z))
}
d <- getdat(5000)
res4 <- gmm(d$Y ~ d$X - 1, ~ d$Z - 1, vcov = "iid")
res4
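With vcov = "iid" and the optimal weighting matrix, linear GMM coincides with two-stage least squares. Here is a self-contained base-R cross-check of that closed form on freshly simulated data (names like b2sls and the design are ours, not from the package):

```r
set.seed(7)
n  <- 2000
zi <- matrix(rnorm(n * 3), n, 3)           # three instruments
x1 <- drop(zi %*% c(1, 1, 1)) + rnorm(n)   # regressor driven by the instruments
y  <- 1 + 0.5 * x1 + rnorm(n)              # true coefficients (1, 0.5)
X  <- cbind(1, x1)
Z  <- cbind(1, zi)

# 2SLS: regress X on Z, then y on the first-stage fitted values
Xhat  <- Z %*% solve(crossprod(Z), crossprod(Z, X))
b2sls <- drop(solve(crossprod(Xhat, X), crossprod(Xhat, y)))
```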
### Examples with equality constraint
######################################
# Random numbers of a normal distribution
## Not run:
# The following works but produces a warning message because the dimension of coef is 1
# Brent should be used
# without named vector
gmm(g, x, c(4, 0), grad = Dg, eqConst=1)
# with named vector
gmm(g, x, c(mu=4, sig=2), grad = Dg, eqConst="sig")
## End(Not run)
gmm(g, x, c(4, 0), grad = Dg, eqConst=1,method="Brent",lower=0,upper=6)
gmm(g, x, c(mu=4, sig=2), grad = Dg, eqConst="sig",method="Brent",lower=0,upper=6)
# Example with formula
# first coef = 0 and second coef = 1
# Only available for one-dimensional yt
z <- z[, 1]
res2 <- gmm(z ~ f1 + f2 + f3, ~ f1 + f2 + f3, eqConst = matrix(c(1,2,0,1), 2, 2))
res2
# CUE with starting t0 requires eqConst to be a vector
res3 <- gmm(z ~ f1 + f2 + f3, ~ f1 + f2 + f3, t0 = c(0,1,.5,.5), type = "cue", eqConst = c(1,2))
res3
### Examples with equality constraints, where the constrained coefficients are used to compute
### the covariance matrix.
### Useful when some coefficients have been estimated before; they are just identified in GMM
### and don't need to be re-estimated.
### Use with caution, because the covariance won't be valid if the coefficients do not solve
### the GMM FOC.
######################################
res4 <- gmm(z ~ f1 + f2 + f3, ~ f1 + f2 + f3, t0 = c(0,1,.5,.5), eqConst = c(1,2),
eqConstFullVcov=TRUE)
summary(res4)
### Examples with equality constraint using gmmWithConst
###########################################################
res2 <- gmm(z ~ f1 + f2 + f3, ~ f1 + f2 + f3)
gmmWithConst(res2,c("f2","f3"),c(.5,.5))
gmmWithConst(res2,c(2,3),c(.5,.5))
## Creating an object without estimation for a fixed parameter vector
###################################################################
res2_2 <- evalGmm(z ~ f1 + f2 + f3, ~ f1 + f2 + f3,
t0=res2$coefficients, tetw=res2$coefficients)
summary(res2_2)
