# minimize_MSEHat: Minimization of Estimated MSE in kader: Kernel Adaptive Density Estimation and Regression

## Description

Minimization of the estimated MSE as a function of σ, in four steps.

## Usage

```r
minimize_MSEHat(VarHat.scaled, BiasHat.squared, sigma, Ai, Bj, h, K, fnx,
  ticker = FALSE, plot = FALSE, ...)
```

## Arguments

- `VarHat.scaled`: Vector of estimates of the scaled variance (for the values of σ in `sigma`).
- `BiasHat.squared`: Vector of estimates of the squared bias (for the values of σ in `sigma`).
- `sigma`: Numeric vector (σ_1, …, σ_s) with s ≥ 1.
- `Ai`: Numeric vector expecting (x_0 - X_1, …, x_0 - X_n) / h, where (usually) x_0 is the point at which the density is to be estimated for the data X_1, …, X_n with h = n^{-1/5}.
- `Bj`: Numeric vector expecting (-J(1/n), …, -J(n/n)) in case of the rank-transformation method, but (\hat{θ} - X_1, …, \hat{θ} - X_n) in case of the non-robust Srihera-Stute method. (Note that this is the same as the argument `Bj` of `adaptive_fnhat`!)
- `h`: Numeric scalar, where (usually) h = n^{-1/5}.
- `K`: Kernel function with vectorized input and output.
- `fnx`: f_n(x_0) = mean(K(Ai))/h, where here typically h = n^{-1/5}.
- `ticker`: Logical; determines whether a "ticker" documents the iteration progress through `sigma`. Defaults to `FALSE`.
- `plot`: Should graphical output be produced? Defaults to `FALSE`.
- `...`: Currently ignored.
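To illustrate how the arguments fit together, here is a minimal sketch constructing `Ai`, `Bj`, `h`, and `fnx` for a Gaussian kernel; the data and the evaluation point `x0` are arbitrary placeholders chosen for this illustration.

```r
set.seed(1)
n <- 50; X <- sort(rnorm(n))   # sample data
x0 <- 0                        # point at which the density is estimated
h <- n^(-1/5)                  # the usual bandwidth choice
Ai <- (x0 - X) / h             # argument Ai
Bj <- mean(X) - X              # Bj for the non-robust Srihera-Stute method
fnx <- mean(dnorm(Ai)) / h     # Parzen-Rosenblatt estimate f_n(x0)
```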

## Details

1. Determine the first (i.e., smallest) maximizer of `VarHat.scaled` (!) on the grid in `sigma`.
2. Determine the first (i.e., smallest) minimizer of the estimated MSE on the part of the σ-grid to the LEFT of the first maximizer of `VarHat.scaled`.
3. Determine a range around the (discrete) minimizer of the estimated MSE found so far, within which a finer search for the "true" minimum is continued using numerical minimization.
4. Check whether the numerically determined minimum is indeed better, i.e., smaller than the discrete one; if not, keep the discrete one.
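The four steps above can be sketched as follows. This is a simplified illustration, not the package's implementation: `four_step_min`, the generic (vectorized) criterion function `msehat`, and the toy grid values are all hypothetical names invented here, and the numerical refinement uses base R's `optimize` in a bracket around the discrete minimizer.

```r
# Sketch of the four-step search, assuming msehat is a vectorized
# function of sigma and varhat.scaled holds its scaled-variance
# estimates on the same grid. All names here are hypothetical.
four_step_min <- function(sigma, varhat.scaled, msehat) {
  # Step 1: first (smallest) maximizer of the scaled variance on the grid.
  i.max <- which.max(varhat.scaled)
  # Step 2: first minimizer of the estimated MSE left of that maximizer.
  i.min <- which.min(msehat(sigma[seq_len(i.max)]))
  # Step 3: refine numerically in a bracket around the discrete minimizer.
  lo <- sigma[max(1, i.min - 1)]
  hi <- sigma[min(i.max, i.min + 1)]
  num <- optimize(msehat, interval = c(lo, hi))
  # Step 4: keep the numerical result only if it beats the discrete one.
  discrete.val <- msehat(sigma[i.min])
  if (num$objective < discrete.val) {
    list(sigma.adap = num$minimum, msehat.min = num$objective,
         discr.min.smaller = FALSE)
  } else {
    list(sigma.adap = sigma[i.min], msehat.min = discrete.val,
         discr.min.smaller = TRUE)
  }
}
```

For example, with a quadratic toy criterion `msehat <- function(s) (s - 2.2)^2` and a monotonically increasing `varhat.scaled`, the discrete grid minimizer near 2.2 is refined by `optimize` to roughly 2.2.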

## Value

A list with components sigma.adap, msehat.min and discr.min.smaller whose meanings are as follows:

- `sigma.adap`: Found minimizer of the MSE estimator.
- `msehat.min`: Found minimum of the MSE estimator.
- `discr.min.smaller`: `TRUE` iff the numerically found minimum was smaller than the discrete one.

## Examples

```r
require(stats)

set.seed(2017); n <- 100; Xdata <- sort(rnorm(n))
x0 <- 1; Sigma <- seq(0.01, 10, length = 11)

h <- n^(-1/5)
Ai <- (x0 - Xdata)/h
fnx0 <- mean(dnorm(Ai)) / h   # Parzen-Rosenblatt estimator at x0.

# For non-robust method:
Bj <- mean(Xdata) - Xdata
# # For rank transformation-based method (requires sorted data):
# Bj <- -J_admissible(1:n / n)   # rank trafo

BV <- kader:::bias_AND_scaledvar(sigma = Sigma, Ai = Ai, Bj = Bj, h = h,
  K = dnorm, fnx = fnx0, ticker = TRUE)

kader:::minimize_MSEHat(VarHat.scaled = BV$VarHat.scaled,
  BiasHat.squared = (BV$BiasHat)^2, sigma = Sigma,
  Ai = Ai, Bj = Bj, h = h, K = dnorm, fnx = fnx0,
  ticker = TRUE, plot = FALSE)
```

kader documentation built on May 1, 2019, 10:13 p.m.