This function estimates maximum likelihood models (e.g., Poisson or Logit) and efficiently handles fixed effects (i.e., cluster variables). It also allows for nonlinear right-hand sides.


`fml`
A formula. This formula must provide the dependent variable as well as the non-linear part of the right-hand side (RHS). It can be, for instance, `z ~ a*log(x) + b*log(y)`, as in the examples below.

`data`
A data.frame containing the variables necessary to run the model. The variables of the non-linear right-hand side of the formula are identified by the names in this data.frame. Note that no NAs are allowed.

`family`
Character scalar giving the likelihood family; the examples below use `family = "poisson"` (abbreviations such as `"poi"` are also accepted).

`linear.fml`
A formula with no left-hand side. This formula states the linear parameters (such as the constant). Putting parameters that enter linearly in this formula greatly enhances the performance of the algorithm. Example: `linear.fml = ~log(x) + log(y)`.

`start`
A list of starting values for the non-linear parameters. ALL the parameters must be named and given a starting value. Example: `start = list(a = 1, b = 2)`.

`lower`
A list giving the lower bound for each non-linear parameter that requires one. Example: `lower = list(a = 0)`.

`upper`
A list giving the upper bound for each non-linear parameter that requires one. Example: `upper = list(b = 10)`.

`env`
An environment. You can provide an environment in which the non-linear part will be evaluated. (May be useful for some particular non-linear functions.)

`dummy`
Character vector. The name(s) of one or more variables within the dataset. These variables should contain the identifier of each observation (e.g., a panel identifier).

`start.init`
Numeric scalar. The starting value assigned to the non-linear parameters when the argument `start` is not provided.

`nl.gradient`
A formula. The user can provide a function that computes the gradient of the non-linear part. The formula should be one-sided and call that function, e.g., `~myGrad(a, x, b, y)`, as in the examples below.

`linear.start`
Numeric named vector. The starting values of the linear part.

`jacobian.method`
Character scalar. The method used to numerically compute the Jacobian.

`useHessian`
Logical. Should the Hessian be computed and used in the optimization? (Only relevant when the optimization method can exploit a Hessian, as `nlminb` can.)

`d.hessian`
Numeric scalar. The step size passed to the function `hessian` when numerically computing the Hessian (see the discussion of bounded parameters below).

`opt_method`
Character scalar. Which optimization method should be used: either `"nlminb"` or `"optim"`.

`opt.control`
List of elements to be passed to the optimization method (`nlminb` or `optim`).

`optim.method`
Character scalar. When `opt_method = "optim"`, the `method` argument passed to `optim`.

`debug`
Logical. If `TRUE`, information useful for debugging is printed during the estimation.

`...`
Not currently used.
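As a sketch of how the bound-related arguments fit together, a hypothetical call with box constraints might look as follows. The data-generating step mirrors the examples below; the package name `FENmlm` is an assumption, as it is not stated in this page.

```r
library(FENmlm)  # assumed package providing feNmlm

# Illustrative data, mirroring the examples below
n = 100
x = rnorm(n, 1, 5)**2
y = rnorm(n, -1, 5)**2
base = data.frame(x, y, z2 = rpois(n, x + y))

# Hypothetical: keep the non-linear parameters a and b non-negative
est_bounded = feNmlm(z2 ~ log(a*x + b*y), base,
                     start = list(a = 1, b = 2),
                     lower = list(a = 0, b = 0),
                     family = "poisson")
```

Here the bounds also keep `a*x + b*y` positive, so that the `log()` in the non-linear part stays well-defined during the optimization.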

When the parameters are at their lower or upper bound, there can be problems when computing the Hessian, because the parameter values are shifted to compute the Hessian numerically. The default step is 0.1 (see the help pages of `hessian`). Thus, when the non-linear part CANNOT be evaluated with a parameter beyond its bound, the Hessian cannot be computed numerically. The most straightforward ways to circumvent this problem are to raise the lower bound (resp. lower the upper bound) by more than 0.1, or to set `d.hessian` to a smaller value (while slightly raising/lowering the bound).
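This workaround can be sketched as follows; the specific values are illustrative, not defaults, and the data `base` is as in the examples below.

```r
# Hypothetical fix when the optimum sits near the bound a = 0:
# shrink the Hessian step well below the 0.1 default and nudge the
# bound up, so that evaluating at a +/- d.hessian stays feasible.
# ('base', x, y, z2 are as in the examples below.)
est_safe = feNmlm(z2 ~ log(a*x + b*y), base,
                  start = list(a = 1, b = 2),
                  lower = list(a = 0.01, b = 0),
                  d.hessian = 0.005,
                  family = "poisson")
```

With `lower = list(a = 0.01)` and `d.hessian = 0.005`, the shifted value `a - d.hessian` stays strictly positive, so the non-linear part remains computable.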

An `feNmlm` object, with the following elements:

`coef`
The coefficients.

`coeftable`
The table of the coefficients with their standard errors, z-values and p-values.

`loglik`
The log-likelihood.

`iterations`
Number of iterations of the algorithm.

`n`
The number of observations.

`k`
The number of parameters of the model.

`call`
The call.

`nonlinear.fml`
The non-linear formula of the call. It also contains the dependent variable.

`linear.formula`
The linear formula of the call.

`ll_null`
The log-likelihood of the null model.

`pseudo_r2`
The adjusted pseudo-R2.

`naive.r2`
The R2 as if the expected predictor were the linear predictor in OLS.

`message`
The convergence message from the optimization procedure.

`sq.cor`
The squared correlation between the dependent variable and its expected value as given by the optimization.

`expected.predictor`
The expected value of the dependent variable.

`cov.unscaled`
The variance-covariance matrix of the parameters.

`sd`
The standard errors of the parameters.
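Since the return value is a list-like object holding the elements above, they can be inspected directly with `$`. A minimal sketch, assuming `est` is a fitted model obtained from a `feNmlm()` call like those in the examples below:

```r
# Hypothetical inspection of a fitted feNmlm object 'est'
# (est comes from a feNmlm() call as in the examples below)
est$coef       # the estimated coefficients
est$coeftable  # coefficients with standard errors, z- and p-values
est$loglik     # the log-likelihood at the optimum
est$sq.cor     # squared correlation of y and its expected value
```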

Author: Laurent Berge

```
# The data
n = 100
x = rnorm(n, 1, 5)**2
y = rnorm(n, -1, 5)**2
z = rpois(n, x*y)
base = data.frame(x, y, z)

# Comparing the results of a 'linear' function
est0L = feNmlm(z ~ 0, base, ~log(x) + log(y), family = "poi")
est0NL = feNmlm(z ~ a*log(x) + b*log(y), base, start = list(a = 0, b = 0),
                family = "poisson", linear.fml = ~1)
est0NL_hess = feNmlm(z ~ a*log(x) + b*log(y), base, start = list(a = 0, b = 0),
                     family = "poisson", linear.fml = ~1, useHessian = TRUE)

# Generating a non-linear relation
z2 = rpois(n, x + y)
base$z2 = z2

# Using a non-linear form
est1L = feNmlm(z2 ~ 0, base, ~log(x) + log(y), family = "poi")
est1NL = feNmlm(z2 ~ log(a*x + b*y), base, start = list(a = 1, b = 2),
                family = "poisson")
est1NL_hess = feNmlm(z2 ~ log(a*x + b*y), base, start = list(a = 1, b = 2),
                     family = "poisson", useHessian = TRUE)

# Using a custom Jacobian
myGrad = function(a, x, b, y){
  # gradient of log(a*x + b*y) with respect to a and b
  s = a*x + b*y
  data.frame(a = x/s, b = y/s)
}
est1NL_grad = feNmlm(z2 ~ log(a*x + b*y), base, start = list(a = 1, b = 2),
                     family = "poisson", nl.gradient = ~myGrad(a, x, b, y))
```
