nougad: Non-linear unmixing by gradient descent

View source: R/nougad.R

nougad    R Documentation

Non-linear unmixing by gradient descent

Description

Run a gradient descent for each measurement (row) in mixed, extracting how much of each spectrum in spectra is contained in that measurement. The gradient descent runs for iters iterations, with learning rate alpha and an AdaProp-style acceleration factor accel applied in each dimension.
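
For instance, a minimal call on simulated data might look like the following sketch (the simulated inputs are purely illustrative; the list component names unmixed and residuals follow the Value section below):

  library(nougad)

  set.seed(1)
  n <- 100; k <- 3; d <- 8
  spectra <- matrix(abs(rnorm(k * d)), k, d)
  spectra <- spectra / sqrt(rowSums(spectra^2))   # rows must have unit norm
  truth <- matrix(rexp(n * k), n, k)              # illustrative true abundances
  mixed <- truth %*% spectra + 0.01 * matrix(rnorm(n * d), n, d)

  res <- nougad(mixed, spectra)
  dim(res$unmixed)     # n*k estimated abundances
  dim(res$residuals)   # n*d residuals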

Usage

nougad(
  mixed,
  spectra,
  rnw = 1,
  rpw = 1,
  nw = 1,
  start = 0,
  alpha = 0.01,
  accel = 1,
  iters = 250L,
  threads = 0L
)

Arguments

mixed

n*d matrix of measurements

spectra

k*d matrix of spectra; each row must have unit norm.

rnw

weights for negative residuals; will be converted to a vector of length d

rpw

weights for positive residuals; will be converted to a vector of length d

nw

weights of the non-negativity factor; gets converted to a vector of length k

start

starting points for the gradient descent

alpha

learning rate; keep it low to prevent numerical problems

accel

acceleration factor applied independently in each dimension whenever the convergence direction in that dimension is the same as in the previous iteration.

iters

number of iterations

threads

number of threads to use for the computation; defaults to 0 (auto-detection), while 1 disables all threading.

Details

Additionally, the result may be weighted towards the non-negative region in each output dimension by the weights nw. The influence of each input measurement on each output parameter is weighted by rnw (where the residual in that dimension is negative) and rpw (where the residual is positive). The latter allows one to implicitly force a non-negative or non-positive residual.
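
As an illustrative sketch (reusing the simulated mixed and spectra from the example above, and assuming that increasing rnw penalizes negative residuals more strongly, as described here), residuals can be pushed towards the non-negative region like this:

  # Penalize negative residuals 10x more than positive ones; by the
  # description above this should push residuals towards non-negativity.
  res_nn <- nougad(mixed, spectra, rnw = 10, rpw = 1)
  mean(res_nn$residuals < 0)   # expected to be lower than with the defaults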

The method should behave like ordinary least squares (OLS) for rnw = rpw = 1 and nw = 0.
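
A rough sanity check of that correspondence (a sketch reusing mixed and spectra from above, not a test shipped with the package) compares the result against the ordinary least-squares solution from the normal equations:

  # OLS solution of mixed = X %*% spectra:  X = mixed S' (S S')^{-1}
  ols <- mixed %*% t(spectra) %*% solve(tcrossprod(spectra))
  res_ols <- nougad(mixed, spectra, rnw = 1, rpw = 1, nw = 0, iters = 1000L)
  max(abs(res_ols$unmixed - ols))   # should be small once the descent converges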

Caveat: Row and column names of all matrices are ignored; the correct order of channels/markers must be ensured manually.

Value

a list with the n*k matrix unmixed and the n*d matrix residuals, such that mixed = unmixed %*% spectra + residuals
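
Continuing the sketch from the Description, that identity can be verified numerically on the returned list (up to floating-point error):

  reconstruction <- res$unmixed %*% spectra + res$residuals
  max(abs(mixed - reconstruction))   # should be (near) zero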

