This routine tests the equality of a nonparametric regression curve, m, and a given function, m_0, from a sample {(Y_i, t_i): i = 1, ..., n}, where:
Y_i = m(t_i) + ε_i.
The unknown function m is smooth, a fixed equally spaced design is considered, and the random errors, {ε_i}, are allowed to be time series. The test statistic used for testing the null hypothesis, H0: m = m_0, derives from a Cramér-von Mises-type functional distance between a nonparametric estimator of m and m_0.
data
data[, 1] contains the values of the response variable, Y; data[, 2] contains the values of the explanatory variable, t.
m0 
the considered function in the null hypothesis. If NULL (the default), the zero function is considered.
h.seq 
the test statistic is computed using each bandwidth in the vector h.seq. If NULL (the default), a suitable sequence of bandwidths is selected internally.
w 
support interval of the weight function in the test statistic. If NULL (the default), an interval that excludes the boundary region of the design is considered.
estimator 
allows the choice between "NW" (Nadaraya-Watson) or "LLP" (Local Linear Polynomial). The default is "NW".
kernel 
allows the choice among "gaussian", "quadratic" (Epanechnikov kernel), "triweight" or "uniform" kernel. The default is "quadratic".
time.series 
denotes whether the data are independent (FALSE) or a time series (TRUE). The default is FALSE.
Tau.eps 
contains the sum of autocovariances associated with the random errors of the regression model. If NULL (the default), the function tries to estimate it: it fits an ARMA model (selected according to an information criterion) to the residuals from the fitted nonparametric regression model and then obtains the sum of the autocovariances of that ARMA model.
h0 
if Tau.eps=NULL, h0 contains the pilot bandwidth used for obtaining the residuals from which the default for Tau.eps is constructed.
lag.max 
if Tau.eps=NULL, lag.max contains the maximum lag considered when constructing the default for Tau.eps. The default is 50.
p.max 
if Tau.eps=NULL, the ARMA model fitted to the residuals is selected among models ARMA(p, q) with 0 <= p <= p.max. The default is 3.
q.max 
if Tau.eps=NULL, the ARMA model fitted to the residuals is selected among models ARMA(p, q) with 0 <= q <= q.max. The default is 3.
ic 
if Tau.eps=NULL, ic contains the information criterion used to select the ARMA model: "AIC", "AICC" or "BIC". The default is "BIC".
num.lb 
if Tau.eps=NULL, the suitability of the fitted ARMA model is checked by means of Ljung-Box tests performed with up to num.lb lags, together with a t.test. The default is 10.
alpha 
if Tau.eps=NULL, alpha contains the significance level at which the suitability of the fitted ARMA model is checked. The default is 0.05.
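The default construction of Tau.eps can be sketched as follows. This is an illustrative simplification, not the routine's internal code: an AR(1) fitted with stats::arima stands in for the information-criterion-based ARMA selection, and the residuals are simulated rather than obtained from a nonparametric fit.

```r
# Sketch of the Tau.eps default: fit a low-order model to the residuals
# and sum its autocovariances over all lags (hypothetical stand-in for
# the routine's ARMA order selection).
set.seed(123)
res <- arima.sim(list(ar = 0.7), n = 500, sd = 0.01)   # stand-in residuals
fit <- arima(res, order = c(1, 0, 0), include.mean = FALSE)
phi <- as.numeric(fit$coef["ar1"])
sigma2 <- fit$sigma2
# For an AR(1), sum_{k = -Inf}^{Inf} gamma(k) = sigma2 / (1 - phi)^2
Tau.eps <- sigma2 / (1 - phi)^2
```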
A weight function (specifically, the indicator function 1_{[w[1], w[2]]}) is introduced in the test statistic to eliminate, or at least significantly reduce, boundary effects in the estimate of m(t_i).
If Tau.eps=NULL and the routine is not able to suggest an approximation for Tau.eps, it warns the user with a message saying that the model might not be appropriate, and then it shows the results. To construct Tau.eps, the procedures suggested in Müller and Stadtmüller (1988) and Herrmann et al. (1992) can be followed.
The implemented test statistic particularizes the one in González-Manteiga and Vilar-Fernández (1995) to the case where the class considered in the null hypothesis contains only one element.
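As a rough illustration of the distance underlying the test (not the routine's exact implementation; the bandwidth, kernel and normalisation below are arbitrary choices), a Cramér-von Mises-type statistic with the indicator weight can be computed as:

```r
# Illustrative Cramer-von Mises-type distance between a Nadaraya-Watson
# estimate and m0, weighted by the indicator of [w[1], w[2]].
set.seed(1)
n <- 100
t <- ((1:n) - 0.5) / n                  # fixed equally spaced design
m <- function(u) 0.25 * u * (1 - u)     # true regression function
y <- m(t) + rnorm(n, sd = 0.01)
h <- 0.1                                # an arbitrary bandwidth
m.hat <- sapply(t, function(u) {        # Nadaraya-Watson, gaussian kernel
  k <- dnorm((u - t) / h)
  sum(k * y) / sum(k)
})
w <- c(0.1, 0.9)                        # support of the weight function
inside <- (t >= w[1]) & (t <= w[2])     # indicator weight 1_{[w[1], w[2]]}
m0 <- m(t)                              # null function (here, the true m)
Q <- sum((m.hat - m0)^2 * inside) / n   # small when H0: m = m0 holds
```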
A list with a data frame containing:
h.seq 
sequence of bandwidths used in the test statistic. 
Q.m 
values of the test statistic (one for each bandwidth in h.seq).
Q.m.normalised 
normalised value of Q.m. 
p.value 
p-values of the corresponding statistic tests (one for each bandwidth in h.seq).
Moreover, if data is a time series and Tau.eps is not specified:
pv.Box.test 
p-values of the Ljung-Box test for the model fitted to the residuals.
pv.t.test 
p-values of the t.test for the model fitted to the residuals.
ar.ma 
ARMA orders for the model fitted to the residuals. 
Germán Aneiros-Pérez ganeiros@udc.es
Ana López-Cheda ana.lopez.cheda@udc.es
Biedermann, S. and Dette, H. (2000) Testing linearity of regression models with dependent errors by kernel based methods. Test 9, 417-438.
González-Manteiga, W. and Aneiros-Pérez, G. (2003) Testing in partial linear regression models with dependent errors. J. Nonparametr. Statist. 15, 93-111.
González-Manteiga, W. and Cao, R. (1993) Testing the hypothesis of a general linear model using nonparametric regression estimation. Test 2, 161-188.
González-Manteiga, W. and Vilar-Fernández, J. M. (1995) Testing linear regression models using nonparametric regression estimators when errors are non-independent. Comput. Statist. Data Anal. 20, 521-541.
Herrmann, E., Gasser, T. and Kneip, A. (1992) Choice of bandwidth for kernel regression when residuals are correlated. Biometrika 79, 783-795.
Müller, H.-G. and Stadtmüller, U. (1988) Detecting dependencies in smooth regression models. Biometrika 75, 639-650.
Other related functions are np.est, par.gof and plrm.gof.
# EXAMPLE 1: REAL DATA
data <- matrix(10, 120, 2)
data(barnacles1)
barnacles1 <- as.matrix(barnacles1)
data[, 1] <- barnacles1[, 1]
data <- diff(data, 12)
data[, 2] <- 1:nrow(data)
np.gof(data)
# EXAMPLE 2: SIMULATED DATA
## Example 2a: dependent data
set.seed(1234)
# We generate the data
n <- 100
t <- ((1:n) - 0.5) / n
m <- function(t) {0.25*t*(1-t)}
f <- m(t)
f.function <- function(u) {0.25*u*(1-u)}
epsilon <- arima.sim(list(order = c(1,0,0), ar = 0.7), sd = 0.01, n = n)
y <- f + epsilon
data <- cbind(y, t)
## Example 2a.1: true null hypothesis
np.gof(data, m0=f.function, time.series=TRUE)
## Example 2a.2: false null hypothesis
np.gof(data, time.series=TRUE)
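In Example 2a the error process is known, so Tau.eps could also be supplied directly instead of being estimated. The closed form below assumes the AR(1) errors generated above; the np.gof call is shown commented out since it needs the data from Example 2a.

```r
# For the AR(1) errors of Example 2a (phi = 0.7, innovation sd = 0.01),
# gamma(k) = sigma2 * phi^|k| / (1 - phi^2), so the sum of the
# autocovariances over all lags k is sigma2 / (1 - phi)^2.
phi <- 0.7
sigma2 <- 0.01^2
Tau.eps <- sigma2 / (1 - phi)^2
## np.gof(data, m0 = f.function, time.series = TRUE, Tau.eps = Tau.eps)
```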
