tadaFit: Fit a time of acquisition diffusion analysis (TADA) model


View source: R/tadaFit.R

Description

A continuous TADA (cTADA) is fitted if an nbdaData object (nbdaData) or a list of nbdaData objects (for multiple diffusions) is provided. A discrete TADA (dTADA) is fitted if a dTADAData object (dTADAData) or a list of dTADAData objects (for multiple diffusions) is provided.

Usage

tadaFit(nbdadata, type = "social", startValue = NULL, upper = NULL,
  lower = NULL, interval = c(0, 999), method = "nlminb",
  gradient = T, iterations = 150, standardErrors = T,
  baseline = "constant", noHazFunctPars = NULL, hazFunct = function()
  return(NULL), cumHaz = function() return(NULL))
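
A minimal sketch of a typical call, assuming nbdaData1 is a previously constructed nbdaData object (the object and model names here are illustrative, not part of the package):

## Fit a cTADA with a constant baseline rate, including social transmission
model_social <- tadaFit(nbdaData1)

## Fit the corresponding asocial model (all s parameters constrained to 0)
model_asocial <- tadaFit(nbdaData1, type = "asocial")

## Supplying a dTADAData object (or a list of them) instead fits a dTADA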

Arguments

nbdadata

For cTADA: an object of class nbdaData (nbdaData) to fit a model to a single diffusion, or a list of nbdaData objects to fit a model to multiple diffusions. For dTADA: an object of class dTADAData (dTADAData) to fit a model to a single diffusion, or a list of dTADAData objects to fit a model to multiple diffusions.

type

a string specifying whether a "social" or "asocial" model is fitted. Usually asocial models have all s parameters constrained to 0 and all ILVs affecting only the rate of social learning removed (i.e. those in the int_ilv slot(s) of the nbdaData object(s)). However, if a non-zero offset is present on the social transmission component, e.g. when constraining all s parameters to a specific value using constrainedNBDAdata, the int_ilv variables are retained. This situation occurs most commonly when the function is called internally by the profLikCI function.

startValue

optional numeric vector giving start values for the maximum likelihood optimization. Its length must match the number of parameters fitted in the model.

upper

optional numeric vector giving upper bounds for the maximum likelihood optimization. Its length must match the number of parameters fitted in the model. By default taken to be Inf for all parameters.

lower

optional numeric vector giving lower bounds for the maximum likelihood optimization. Its length must match the number of parameters fitted in the model. By default taken to be 0 for all s parameters and -Inf for coefficients of ILVs.

interval

currently non-functioning argument: can be ignored.

method

character string determining which optimization algorithm is used, defaulting to "nlminb", which uses the nlminb function. If set to "both", the optim function is also used and results are returned for both optimization procedures.

gradient

logical indicating whether the gradient function should be used during optimization.

iterations

numeric giving the maximum number of iterations used during optimization. Increasing this may solve convergence issues.

standardErrors

logical indicating whether standard errors should be calculated.

baseline

string giving the baseline rate (hazard) function to be fitted. "constant" assumes that the baseline rate does not change over time, fitting a single scale parameter controlling the reference rate of asocial learning. "gamma" and "weibull" both assume that the baseline rate of learning can increase or decrease over time, as determined by a second (shape) parameter: shape < 1 indicates a decreasing baseline rate and shape > 1 an increasing baseline rate. "custom" allows the user to provide their own baseline rate function (see below, and the sketch after the argument descriptions).

hazFunct

a function returning the hazard function for the baseline rate of learning over time. This must return a rate as a function of time, taking the form hazFunct(parameters, time). Only necessary if baseline = "custom".

cumHaz

a function giving the cumulative hazard function for the baseline rate of learning over time. This must return a cumulative hazard as a function of time, taking the form cumHaz(parameters, time). Only necessary if baseline = "custom".

noHazFunctPars

numeric giving the number of parameters in the baseline rate (hazard) function. Only necessary if baseline = "custom"; a sketch of a custom baseline is given after the argument descriptions.
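
As an illustration of the baseline-related arguments, the sketch below compares baselines by AICc and then supplies a hand-written Weibull-form hazard as a custom baseline. The object names are illustrative, and the Weibull parameterisation is shown only to demonstrate the required hazFunct(parameters, time) / cumHaz(parameters, time) form; it is not necessarily the parameterisation used internally for baseline = "weibull".

## Compare baseline rate functions by AICc (nbdaData1 is an assumed nbdaData object)
model_const   <- tadaFit(nbdaData1, baseline = "constant")
model_gamma   <- tadaFit(nbdaData1, baseline = "gamma")
model_weibull <- tadaFit(nbdaData1, baseline = "weibull")
c(constant = model_const@aicc, gamma = model_gamma@aicc, weibull = model_weibull@aicc)

## A custom baseline: a Weibull-form hazard written out by hand.
## parameters[1] is treated as a scale and parameters[2] as a shape,
## so noHazFunctPars = 2.
myHaz <- function(parameters, time) {
  (parameters[2] / parameters[1]) * (time / parameters[1])^(parameters[2] - 1)
}
myCumHaz <- function(parameters, time) {
  (time / parameters[1])^parameters[2]
}
model_custom <- tadaFit(nbdaData1, baseline = "custom",
                        hazFunct = myHaz, cumHaz = myCumHaz,
                        noHazFunctPars = 2)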

Details

The model is fitted using maximum likelihood methods (for OADA models use oadaFit instead). The ILVs included in the model are determined by those present in the nbdaData or dTADAData object(s). All nbdaData/dTADAData objects must contain the same social networks (assMatrix must match in the third dimension) and the same individual level variables (ILVs) in each of the asoc_ilv, int_ilv and multi_ilv slots. Random effects are not included: if random effects are required then an OADA (oadaFit) or a Bayesian TADA is recommended (the latter is not implemented in the NBDA package).
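
For multiple diffusions, the data objects are simply wrapped in a list, e.g. (nbdaData1 and nbdaData2 being illustrative objects that share networks and ILVs):

## Fit a single cTADA across two diffusions
model_multi <- tadaFit(list(nbdaData1, nbdaData2))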

Value

An object of class tadaFit is returned.

tadaFit components

The following components of the tadaFit object are of key importance for interpreting the output:

@outputPar

The maximum likelihood estimates (MLEs) for the model parameters

@varNames

The name of the variable corresponding to each of the parameter estimates. These are numbered so the user can easily identify parameters when obtaining confidence intervals using profLikCI. The s parameters are labelled "Social transmission N", with N giving the number of the network. ILV effects on asocial learning are preceded by "Asocial:". ILV effects on social learning are preceded by "Social:". "Multiplicative" ILV effects constrained to be equal on asocial and social learning are preceded by "Social=Asocial". "Scale" gives the parameter estimating the reference rate of asocial learning (scale = 1/rate). If gamma or weibull baseline functions are used a "Shape" parameter is also fitted: shape < 1 indicates a decreasing baseline rate and shape > 1 an increasing baseline rate.

@se

The standard error for each parameter. These cannot always be derived, so may be NaN. The user is advised to obtain confidence intervals for parameters using profLikCI (see the example following this list).

@aic

The AIC for the model.

@aicc

The AICc for the model: AIC adjusted for sample size, with sample size taken to be the number of acquisition events.

@loglik

The -log-likelihood for the model. Can be used to conduct likelihood ratio tests to test hypotheses.
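
For example, the key components can be combined as follows (model_social and model_asocial are assumed to be social and asocial tadaFit objects fitted to the same data with a single network, as sketched in the Usage section; the profLikCI arguments shown are indicative only and left commented out):

## Tabulate parameter numbers, names, MLEs and standard errors
data.frame(parameter = model_social@varNames,
           estimate  = model_social@outputPar,
           se        = model_social@se)

## Profile likelihood CI for parameter 1 (argument names assumed here)
# profLikCI(which = 1, model = model_social)

## Likelihood ratio test for social transmission:
## @loglik stores the -log-likelihood, so the test statistic is
LRT <- 2 * (model_asocial@loglik - model_social@loglik)
pchisq(LRT, df = 1, lower.tail = FALSE)  # df = number of s parameters (1 network assumed)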

The tadaFit object also contains the following components:

@nbdadata

The data the model is fitted to, as a list of nbdaData or dTADAData objects.

@optimisation

The output of the nlminb optimization algorithm, useful for assessing convergence of the model (see the example after this list).

@optim

The output of the optim optimization algorithm, where used, useful for assessing convergence of the model.

@hessian

The Hessian matrix, giving the values of the second partial derivatives of the -log-likelihood with respect to the model parameters at the maximum likelihood estimates. Used to derive the standard errors.

@type

The model type: "asocial" or "social".

@baseline

The baseline function used.

@noHazFunctPars

The number of parameters fitted for the baseline (hazard) function.

@hazFunct

The custom hazard function used, if appropriate.

@cumHaz

The custom cumulative hazard function used, if appropriate.
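
Convergence can be checked from the stored optimiser output, for example (model_social being an assumed fitted model; nlminb reports a convergence code of 0 on success):

## nlminb output: convergence code 0 indicates successful convergence
model_social@optimisation$convergence
model_social@optimisation$message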

See Also

For OADA models use oadaFit. To obtain confidence intervals see profLikCI. For further details about cTADA see https://www.sciencedirect.com/science/article/pii/S0022519310000081 and https://royalsocietypublishing.org/doi/full/10.1098/rstb.2016.0418. For further details about dTADA (the original version of NBDA) see https://royalsocietypublishing.org/doi/10.1098/rspb.2008.1824. For further details about modelling increasing or decreasing baseline rates see https://link.springer.com/article/10.3758/LB.38.3.243

