# Kiefer-Wolfowitz NPMLE for Gaussian Location Mixtures

### Description

Kiefer-Wolfowitz nonparametric maximum likelihood estimation for Gaussian location mixtures.

### Usage

```r
GLmix(x, v = 300, sigma = 1, hist = FALSE, histm = 300, weights = NULL, ...)
```

### Arguments

`x` |
Data: sample observations |

`v` |
Grid values for the mixing distribution; when `v` is a scalar it specifies the number of equally spaced grid points |

`sigma` |
Scale parameter of the Gaussian noise; may be a vector of length `length(x)` for heterogeneous noise |

`hist` |
If TRUE, aggregate `x` into histogram bins; when `sigma` is vector valued this option is inappropriate unless there are only a small number of distinct `sigma` values |

`histm` |
Histogram bin boundaries; when `histm` is a scalar it specifies the number of equally spaced bins |

`weights` |
Replicate weights for the `x` observations; should sum to 1 |

`...` |
Other parameters passed to `KWDual` to control the optimization |

### Details

Kiefer-Wolfowitz MLE as proposed by Jiang and Zhang (2009) for the
Gaussian compound decision problem. The histogram option is intended
for large problems, say n > 1000, where reducing the sample size dimension
is desirable. When `sigma` is heterogeneous and `hist = TRUE` the
procedure attempts separate histogram binning for each distinct value of
`sigma`; however, this is feasible only when there are a small number of
distinct `sigma` values. By default the grid for the binning is
equally spaced on the support of the data. This function solves the normal
convolution problem; for gamma mixtures of variances see `GVmix`, and
for mixtures of both means and variances see `TLVmix`.
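As a minimal sketch of a typical call, assuming the REBayes package (and its Rmosek solver dependency) is installed; the two-point mixing distribution here is illustrative, not from the source:

```r
library(REBayes)

set.seed(1)
n <- 2000
theta <- sample(c(-1, 1), n, replace = TRUE)  # latent means from a two-point mixture
x <- theta + rnorm(n)                         # Gaussian noise with sigma = 1

# For n > 1000 the histogram option reduces the effective problem size
f <- GLmix(x, v = 300, sigma = 1, hist = TRUE)

# The estimated mixing density should concentrate mass near -1 and 1
plot(f$x, f$y, type = "l", xlab = "mu", ylab = "estimated mixing density")
```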

The predict method for `GLmix` objects computes means, medians, or
modes of the posterior according to whether the `Loss` argument is 2, 1,
or 0, and posterior quantiles when `Loss` is in (0,1).
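A hedged sketch of the predict method, assuming a fitted object `f` as above and that new observation points are passed as the second argument:

```r
# Posterior summaries at a few hypothetical new observation points
xnew <- c(-2, 0, 2)

pmean <- predict(f, xnew, Loss = 2)    # posterior means (squared-error loss)
pmed  <- predict(f, xnew, Loss = 1)    # posterior medians (absolute-error loss)
q90   <- predict(f, xnew, Loss = 0.9)  # posterior 0.9 quantiles (Loss in (0,1))
```

Shrinkage toward the mass points of the estimated mixing distribution is the expected behavior of these posterior rules.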

### Value

An object of class `density` with components:

`x` |
points of evaluation on the domain of the density |

`y` |
estimated mixing density values at the points `v` |

`g` |
the estimated mixture density function values at x |

`logLik` |
Log likelihood value at the proposed solution |

`dy` |
prediction of mean parameters for each observed x value via Bayes Rule |

`status` |
exit code from the optimizer |
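The components above can be inspected directly; a brief sketch, assuming a fitted object `f <- GLmix(x)`:

```r
f$logLik     # log likelihood at the solution
f$dy[1:5]    # Bayes rule posterior means for the first five observations
f$status     # optimizer exit code; worth checking that it indicates convergence
```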

### Author(s)

Roger Koenker

### References

Kiefer, J. and J. Wolfowitz, Consistency of the Maximum
Likelihood Estimator in the Presence of Infinitely Many Incidental
Parameters, *Ann. Math. Statist.*, Volume 27, Number 4 (1956), 887-906.

Jiang, Wenhua and Cun-Hui Zhang, General maximum likelihood empirical Bayes
estimation of normal means, *Ann. Statist.*, Volume 37, Number 4 (2009),
1647-1684.

Koenker, R. and I. Mizera (2014), “Convex Optimization, Shape Constraints,
Compound Decisions, and Empirical Bayes Rules,” *JASA*, 109, 674-685.