mte.learning: Fitting mixtures of truncated exponentials.

View source: R/mte.R

mte.learning {MoTBFs}    R Documentation

Fitting mixtures of truncated exponentials.

Description

These functions fit mixtures of truncated exponentials (MTEs). Least squares optimization is used to minimize the quadratic error between the empirical cumulative distribution function and the estimated one.
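As a simple illustration of this criterion (an assumption for illustration only, not the package's internal code), the sketch below computes the quadratic error between the empirical CDF and a candidate CDF evaluated at the observed data points:

## Illustration only: quadratic error between the empirical CDF and a
## candidate CDF, evaluated at the observed data points (not the
## package's internal implementation).
cdfError <- function(X, candidateCDF) {
  Fn <- ecdf(X)                       # empirical cumulative distribution function
  sum((Fn(X) - candidateCDF(X))^2)    # quadratic error to be minimized
}
## e.g. cdfError(rexp(100), pexp) is small when the candidate CDF fits well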

Usage

mte.learning(X, nparam, domain)

bestMTE(X, domain, maxParam = NULL)

Arguments

X

A "numeric" data vector.

nparam

Number of parameters of the resulting density function.

domain

A "numeric" containing the domain if the function to estimate.

maxParam

A "numeric" value indicating the maximum number of coefficients in the function. By default it is NULL; otherwise, the output is the function which gets the best BIC with at most this number of parameters.

Details

mte.learning(): The returned element $Function is the only one printed by default and contains the algebraic expression. The names of the other elements can be listed with attributes(), and each element can be extracted with $. The summary of the fitted object also shows all these elements.
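For instance, using a small simulated data set, the elements of a fitted object can be inspected as follows:

data <- rchisq(500, df = 3)
fx <- mte.learning(data, nparam = 5, domain = range(data))
attributes(fx)   # names of the elements: Function, Subclass, Domain, Iterations, Time
fx$Domain        # any element can be extracted with $
summary(fx)      # prints all the elements of the fitted object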

bestMTE(): The first returned element, $bestPx, contains the output of mte.learning() for the number of parameters that achieves the best BIC score, where the Bayesian information criterion (BIC) is used to penalize the complexity of the functions. The search evaluates the next two candidate functions; if the BIC does not improve, the function with the last best BIC is returned.
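The search strategy can be sketched roughly as follows. This is an illustrative skeleton only, not the package's code; the scoring function is supplied by the caller and is not part of the package:

## Illustrative skeleton of the stopping rule (not the package's code).
## 'score' is a caller-supplied function returning a BIC-style score,
## where larger values are assumed to be better.
searchBestMTE <- function(X, domain, score, maxParam = 15) {
  best <- NULL; bestScore <- -Inf; worse <- 0
  for (k in 2:maxParam) {
    fit <- mte.learning(X, nparam = k, domain = domain)
    s <- score(fit, X)
    if (s > bestScore) {
      best <- fit; bestScore <- s; worse <- 0
    } else {
      worse <- worse + 1
      if (worse == 2) break           # stop after two consecutive non-improving fits
    }
  }
  best
}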

Value

mte.learning() returns a list with the following elements:

Function

An "motbf" object of the 'mte' subclass.

Subclass

'mte'.

Domain

The range where the function is defined to be a legal density function.

Iterations

The number of iterations used by the optimization routine to minimize the error.

Time

The CPU time consumed.

bestMTE() returns a list including the MTE function with the best BIC score, its number of parameters, the best BIC value, and an array containing the BIC values of all the evaluated functions.

See Also

univMoTBF, a complete function for learning MoTBFs that includes extra options.

Examples


## 1. EXAMPLE
data <- rchisq(1000, df=3)

## MTE with a fixed number of parameters
fx <- mte.learning(data, nparam=7, domain=range(data))
hist(data, prob=TRUE, main="")
plot(fx, col=2, xlim=range(data), add=TRUE)

## Best MTE in terms of BIC
fMTE <- bestMTE(data, domain=range(data))
attributes(fMTE)
fMTE$bestPx
hist(data, prob=TRUE, main="")
plot(fMTE$bestPx, col=2, xlim=range(data), add=TRUE)

## 2. EXAMPLE
data <- rexp(1000, rate=1/3)

## MTE with a fixed number of parameters
fx <- mte.learning(data, nparam=8, domain=range(data))
## Message: the nearest function with an odd number of coefficients is fitted
hist(data, prob=TRUE, main="")
plot(fx, col=2, xlim=range(data), add=TRUE)

## Best MTE in terms of BIC
fMTE <- bestMTE(data, domain=range(data), maxParam=10)
attributes(fMTE)
fMTE$bestPx
attributes(fMTE$bestPx)
hist(data, prob=TRUE, main="")
plot(fMTE$bestPx, col=2, xlim=range(data), add=TRUE)
