## Description

Function that searches for the global maximum of the log-likelihood of different models and selects the optimal number of states.

## Usage

```r
lmestSearch(responsesFormula = NULL, latentFormula = NULL,
            data, index, k = 1:4,
            version = c("categorical", "continuous"),
            weights = NULL, nrep = 2,
            tol1 = 10^-5, tol2 = 10^-10,
            out_se = FALSE, seed = NULL, ...)
```

## Arguments

| Argument | Description |
|---|---|
| `responsesFormula` | a symbolic description of the model to fit. A detailed description is given in the 'Details' section |
| `latentFormula` | a symbolic description of the model to fit. A detailed description is given in the 'Details' section |
| `data` | a `data.frame` in long format |
| `index` | a character vector with two elements: the first indicating the name of the unit identifier, the second the name of the time occasions |
| `k` | a vector of integer values for the number of latent states |
| `weights` | an optional vector of weights for the available responses |
| `version` | type of responses for the LM model: `"categorical"` or `"continuous"` |
| `nrep` | number of repetitions of each random initialization |
| `tol1` | tolerance level for checking convergence of the algorithm in the random initializations |
| `tol2` | tolerance level for checking convergence of the algorithm in the last deterministic initialization |
| `out_se` | whether to compute the information matrix and standard errors (`FALSE` is the default) |
| `seed` | an integer value used to set the seed of the random number generator |
| `...` | additional arguments to be passed to the underlying functions |
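The `data` and `index` arguments expect the panel in long format, with one row per unit-occasion pair. As a minimal sketch (the column names `id`, `time`, and `y` are illustrative, not required), the identifiers named in `index` can be laid out as follows:

```r
# Toy long-format panel: 3 units observed at 2 occasions each.
# The two columns named in `index` identify the unit and the occasion.
long <- data.frame(
  id   = rep(1:3, each = 2),   # unit identifier (first element of `index`)
  time = rep(1:2, times = 3),  # time occasion   (second element of `index`)
  y    = c(1, 2, 1, 1, 2, 2)   # a categorical response
)
# A matching call would then pass index = c("id", "time")
```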

## Details

The function combines a deterministic and a random initialization strategy to reach the global maximum of the model log-likelihood. It uses one deterministic initialization (`start = 0`) and a number of random initializations (`start = 1`) proportional to the number of latent states. The tolerance level for these runs is set equal to 10^-5. Starting from the best solution obtained in this way, a final run is performed (`start = 2`) with a default tolerance level equal to 10^-10.

Missing responses are allowed according to the model to be estimated.
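The multi-start strategy described above is not specific to latent Markov models. As a rough illustrative sketch (not the LMest implementation), the same idea of one deterministic start, several random starts, and a tighter final refinement can be mimicked with `optim` on a stand-in objective:

```r
# Illustrative sketch of a multi-start search (NOT the LMest code):
# minimize a stand-in negative log-likelihood from several starting points.
set.seed(1)
negloglik <- function(p) (p[1] - 2)^2 + (p[2] + 1)^2  # toy objective, minimum at (2, -1)

starts <- c(list(c(0, 0)),                              # deterministic start (start = 0)
            replicate(5, rnorm(2), simplify = FALSE))   # random starts (start = 1)

# Run the optimizer from each start with a looser tolerance
fits <- lapply(starts, function(s)
  optim(s, negloglik, control = list(reltol = 1e-5)))

# Keep the best solution and refine it with a tighter tolerance (start = 2)
best  <- fits[[which.min(sapply(fits, `[[`, "value"))]]
final <- optim(best$par, negloglik, control = list(reltol = 1e-10))
```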

## Value

Returns an object of class `'LMsearch'` with the following components:

| Component | Description |
|---|---|
| `out.single` | output of every LM model estimated for each number of latent states given in input |
| `Aic` | values of the Akaike Information Criterion for each number of latent states given in input |
| `Bic` | values of the Bayesian Information Criterion for each number of latent states given in input |
| `lkv` | values of the log-likelihood for each number of latent states given in input |
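The number of latent states is typically chosen at the minimum of `Bic` (or `Aic`). A sketch, using placeholder criterion values in place of a real `'LMsearch'` object fitted with `k = 1:3`:

```r
# Hypothetical criterion values for k = 1:3 (placeholders, not real output)
out <- list(Bic = c(3100.4, 2954.8, 2961.2),
            Aic = c(3080.1, 2920.5, 2915.7))
k <- 1:3

# Select the number of states minimizing the BIC
k_opt <- k[which.min(out$Bic)]
```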

## Author(s)

Francesco Bartolucci, Silvia Pandolfi, Fulvia Pennoni, Alessio Farcomeni, Alessio Serafini

## References

Bartolucci, F., Pandolfi, S. and Pennoni, F. (2017) LMest: An R Package for Latent Markov Models for Longitudinal Categorical Data, *Journal of Statistical Software*, **81**(4), 1-38.

Bartolucci, F., Farcomeni, A. and Pennoni, F. (2013) *Latent Markov Models for Longitudinal Data*, Chapman and Hall/CRC Press.

## Examples

```r
### Example with data on drug use in wide format
data("data_drug")
long <- data_drug[,-6]
# add labels referred to the identifier
long <- data.frame(id = 1:nrow(long), long)
# reshape data from the wide to the long format
long <- reshape(long, direction = "long",
                idvar = "id",
                varying = list(2:ncol(long)))
out <- lmestSearch(data = long,
                   index = c("id", "time"),
                   version = "categorical",
                   k = 1:3,
                   weights = data_drug[,6],
                   modBasic = 1,
                   seed = 123)
out
summary(out$out.single[[3]])

## Not run:
### Example with data on self-rated health
# LM model with covariates in the measurement model
data("data_SRHS_long")
SRHS <- data_SRHS_long[1:1000,]
# Categories rescaled to vary from 1 ("poor") to 5 ("excellent")
SRHS$srhs <- 5 - SRHS$srhs
out1 <- lmestSearch(data = SRHS,
                    index = c("id", "t"),
                    version = "categorical",
                    responsesFormula = srhs ~ -1 +
                      I(gender - 1) +
                      I(0 + (race == 2) + (race == 3)) +
                      I(0 + (education == 4)) +
                      I(0 + (education == 5)) + I(age - 50) +
                      I((age - 50)^2 / 100),
                    k = 1:2,
                    out_se = TRUE,
                    seed = 123)
summary(out1)
summary(out1$out.single[[2]])
## End(Not run)
```
