demeaning_algo: Controls the parameters of the demeaning procedure

demeaning_algo {fixest}    R Documentation

Controls the parameters of the demeaning procedure

Description

Fine control of the demeaning procedure. The defaults are sensible, so only use this function when convergence is difficult (e.g. in feols or demean). To diagnose this, look at the slot ⁠$iterations⁠ of the returned object: if it is high (over 50), it may be worth playing with these settings.
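For instance, the sketch below (using simulated data; it assumes the control object is passed to feols through an argument named fixef.algo) shows how one might diagnose slow convergence and then adjust the settings:

library(fixest)
# simulated data, for illustration only
base = data.frame(y = rnorm(100), x = rnorm(100),
                  id = rep(1:10, 10), year = rep(1:10, each = 10))
est = feols(y ~ x | id + year, data = base)
est$iterations   # number of demeaning iterations
# if the count is high (say over 50), try custom settings
# (assuming the argument accepting this control object is named fixef.algo)
est2 = feols(y ~ x | id + year, data = base,
             fixef.algo = demeaning_algo(extraProj = 1))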

Usage

demeaning_algo(
  extraProj = 0,
  iter_warmup = 15,
  iter_projAfterAcc = 40,
  iter_grandAcc = 4,
  internal = FALSE
)

Arguments

extraProj

Integer scalar, default is 0. Should there be more plain projection steps in between two accelerations? By default there is not. Each integer value adds 3 simple projections. This can be useful in cases where the acceleration algorithm does not work well but simple projections do.
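For example (the value is illustrative):

demeaning_algo(extraProj = 2)   # 2 x 3 = 6 extra plain projections between two accelerations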

iter_warmup

Integer scalar, default is 15. Only used in the presence of 3 or more fixed-effects (FE), ignored otherwise. For 3+ FEs, the algorithm is as follows:

  1. iter_warmup iterations are run on all FEs. If convergence is reached, the algorithm stops.

  2. Otherwise: a) the demeaning is run over the two largest FEs only, until convergence, then b) the demeaning is run over all FEs until convergence.

  To skip the demeaning over 2 FEs, use a very high value of iter_warmup. To go directly to the demeaning over 2 FEs, set iter_warmup to a value lower than or equal to 0.
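For instance:

demeaning_algo(iter_warmup = 0)      # go directly to the 2-FE demeaning
demeaning_algo(iter_warmup = 1000)   # very high value: effectively skip the 2-FE demeaning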

iter_projAfterAcc

Integer scalar, default is 40. After iter_projAfterAcc iterations of the standard algorithm, a simple projection is performed right after the acceleration step. Use very high values to skip this step, or low values to apply this procedure right from the start.
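For example:

demeaning_algo(iter_projAfterAcc = 1)      # apply the extra projection from the start
demeaning_algo(iter_projAfterAcc = 1000)   # very high value: effectively never apply it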

iter_grandAcc

Integer scalar, default is 4. The regular fixed-point algorithm applies an acceleration at each iteration; this acceleration is for f(X) (with f the projection). This setting controls a grand acceleration, which is instead for f^k(X), where k is the value of iter_grandAcc and f^k(X) means the function f applied k times (e.g. f^2(X) = f(f(X))). By default, an additional acceleration is performed for h(X) = f^4(X) every 8 iterations (2 times 4, the number of iterations needed to gather h(X) and h(h(X))).
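To make the notation concrete, the short sketch below (plain R, unrelated to fixest internals) shows what "f applied k times" means:

apply_k_times = function(f, X, k) {
  # computes f^k(X) = f(f(...f(X)...)), i.e. f applied k times
  for (i in seq_len(k)) X = f(X)
  X
}
f = function(x) 0.5 * x + 1   # a toy contraction
apply_k_times(f, 10, 4)       # f^4(10)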

internal

Logical scalar, default is FALSE. If TRUE, no check on the arguments is performed and the returned object is a plain list. For internal use only.

Details

The demeaning algorithm is a fixed-point algorithm: a function f is applied repeatedly until ⁠|f(X) - X| = 0⁠, i.e. until there is no difference between X and its image. In what follows, one application of f is called a "projection".
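As a toy illustration of this fixed point (plain R, not fixest's actual implementation), demeaning a variable over two fixed-effects can be written as alternating projections on group means:

set.seed(1)
id   = rep(1:10, each = 5)
year = rep(1:5, times = 10)
x    = rnorm(50) + id / 2 + year / 3

f = function(X) {
  X = X - ave(X, id)   # center on the first fixed-effect
  X - ave(X, year)     # then center on the second one
}

X = x
repeat {
  X_new = f(X)
  if (max(abs(X_new - X)) < 1e-8) break   # |f(X) - X| ~ 0: fixed point reached
  X = X_new
}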

For well-behaved problems, the algorithm in its simplest form, i.e. just applying f repeatedly, works fine and only a few iterations are needed to reach convergence.

Difficulties arise with poorly behaved problems: simply applying the function f can then lead to extremely slow convergence. To handle these cases, the algorithm uses a fixed-point acceleration, namely the "Irons and Tuck" acceleration.
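A minimal sketch of one common statement of the Irons and Tuck update is given below (illustration only: the exact variant used internally may differ in its details):

irons_tuck_step = function(f, X) {
  # one accelerated step: combines f(X) and f(f(X)) into a better guess
  GX  = f(X)
  GGX = f(GX)
  d1 = GGX - GX             # first difference
  d2 = GGX - 2 * GX + X     # second difference
  vv = sum(d2 * d2)
  if (vv < .Machine$double.eps) return(GGX)   # already (numerically) at the fixed point
  GGX - sum(d1 * d2) / vv * d1
}

Iterating such a step with a projection f (like the toy one above) can reach the fixed point in fewer evaluations of f than plain projections when plain convergence is slow.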

The main algorithm combines regular projections with accelerations. Unfortunately this is sometimes not enough, so additional internal tweaks, detailed below, are also used.

Sometimes the acceleration in its simplest form does not work well and even degrades the convergence properties. In those cases:

  • the argument extraProj adds several standard projections in between two accelerations, which can improve the performance of the algorithm. By default there are no extra projections. Note that while it can reduce the total number of iterations until convergence, each iteration is almost twice as expensive in terms of computing time.

  • the argument iter_projAfterAcc controls whether, and when, to apply a simple projection right after the acceleration step. This projection increases the computing time per iteration by roughly 33%, but can improve the convergence properties and speed. By default this step starts at iteration 40 (by which point the convergence rate is already poor).

On top of this, in case of very difficult convergence, a "grand" acceleration is added to the algorithm. The regular acceleration is over f. Say g is the function equivalent to one regular iteration (which combines one acceleration with several projections). By default the grand acceleration is over ⁠h = g o g o g o g⁠, that is, g applied four times. The grand acceleration is controlled with the argument iter_grandAcc, which gives the number of iterations of the regular algorithm defining h.
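For instance, a more aggressive configuration combining these levers could look like this (illustrative values only):

demeaning_algo(extraProj = 1,           # 3 extra plain projections between accelerations
               iter_projAfterAcc = 1,   # projection right after the acceleration, from the start
               iter_grandAcc = 2)       # grand acceleration over g applied twice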

Finally, with 3 or more fixed-effects (FE), convergence generally takes more iterations. When quick convergence does not occur, a first demeaning over the two largest FEs, before demeaning over all FEs, can improve convergence speed. This is controlled with the argument iter_warmup, which gives the number of iterations over all FEs to run before switching to the 2-FE demeaning. By default, the demeaning over all FEs is run for 15 iterations before switching to the 2-FE case.

The above defaults are the outcome of extended empirical applications, and try to strike a balance across a majority of cases. Of course you can always get better results by tailoring the settings to your problem at hand.

Value

This function returns a list of 4 integers, equal to the arguments passed by the user. That list is of class demeaning_algo.

References

B. M. Irons, R. Tuck, "A version of the Aitken accelerator for computer iteration", International Journal for Numerical Methods in Engineering 1 (1969), 275–277.
