nougad | R Documentation

Run a gradient descent for each (row) measurement in `mixed`, extracting how much of `spectra` is contained in each measurement. Gradient descent runs for `iters` iterations, with learning rate `alpha` and AdaProp-style acceleration factor `accel` in each dimension.

nougad(mixed, spectra, rnw = 1, rpw = 1, nw = 1, start = 0, alpha = 0.01, accel = 1, iters = 250L, threads = 0L)

`mixed`: n*d matrix of measurements

`spectra`: k*d matrix of spectra; rows must have unit norm

`rnw`: negative weights for the residual; converted to a vector of length d

`rpw`: positive weights for the residual; converted to a vector of length d

`nw`: weights of the non-negative learning factor; converted to a vector of length k

`start`: starting points for the gradient descent

`alpha`: learning rate, preferably low to prevent numeric problems

`accel`: acceleration factor applied independently in each dimension whenever the convergence direction in that dimension is the same as in the previous iteration

`iters`: number of iterations

`threads`: number of threads to use for computation; defaults to 0 (auto-detection), and 1 disables all threading
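The per-dimension behaviour of `accel` can be sketched as follows. This is an assumption about the exact rule (the description above only states that the factor applies independently per dimension while the convergence direction stays the same); the function name `accel_rates` is hypothetical:

```r
# Sign-based per-dimension acceleration sketch: each dimension keeps its own
# step size, multiplied by `accel` while the gradient keeps the same sign as
# in the previous iteration, and reset to the base rate `alpha` on a sign flip.
accel_rates <- function(grad, prev_grad, rates, alpha, accel) {
  same <- sign(grad) == sign(prev_grad)
  ifelse(same, rates * accel, alpha)
}
```

With `accel = 1` (the default) every step keeps the plain learning rate, so the scheme degrades gracefully to ordinary gradient descent.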

Additionally, the result may be weighted towards the non-negative region in each result dimension by the weights `nw`. The influence of each input measurement on each output parameter is weighted by the matrices `rnw` (where the residual in that dimension is negative) and `rpw` (where the residual is positive). The latter allows one to implicitly force a non-negative or non-positive residual.
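The asymmetric residual weighting described above can be sketched directly. This is a minimal illustration; `weight_residual` is a hypothetical helper name, and the actual implementation applies the per-dimension weights inside the descent loop:

```r
# Scale each residual component by `rnw` where it is negative and by `rpw`
# where it is positive, so the two signs of mis-fit can be penalised unequally.
weight_residual <- function(residual, rnw, rpw) {
  ifelse(residual < 0, rnw * residual, rpw * residual)
}
```

Setting `rpw` much larger than `rnw` penalises positive residuals more strongly, implicitly pushing the fit towards a non-positive residual (and vice versa).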

The method should behave like OLS for `rnw = rpw = 1` and `nw = 0`.
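The OLS correspondence can be checked with a small self-contained sketch of the unweighted descent in plain R (no package calls; the matrix sizes and all variable names here are illustrative):

```r
set.seed(1)
k <- 3; d <- 8
S <- matrix(rnorm(k * d), k, d)
S <- S / sqrt(rowSums(S^2))            # rows must have unit norm
x_true <- c(2, 0.5, 1)
y <- as.vector(x_true %*% S)           # a noiseless mixed measurement

alpha <- 0.05; iters <- 10000
x <- rep(0, k)                         # start = 0
for (i in seq_len(iters)) {
  residual <- y - as.vector(x %*% S)
  x <- x + alpha * as.vector(S %*% residual)   # plain gradient step
}

x_ols <- as.vector(solve(S %*% t(S), S %*% y)) # ordinary least squares
max(abs(x - x_ols))                    # should be tiny
```

With neutral weights the weighted update reduces to exactly this squared-error gradient step, which is why the documented OLS-like behaviour is expected.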

Caveat: row and column names of all matrices are ignored; the correct order of channels/markers must be ensured manually.

A list with an n*k matrix `unmixed` and an n*d matrix `residuals`, such that

`mixed = unmixed %*% spectra + residuals`
