control_earl: Control arguments for Efficient Augmentation and Relaxation Learning

View source: R/earl.R

control_earl    R Documentation

Control arguments for Efficient Augmentation and Relaxation Learning

Description

control_earl sets the default control arguments for efficient augmentation and relaxation learning, type = "earl". The arguments are passed directly to DynTxRegime::earl() if not specified otherwise.

Usage

control_earl(
  moPropen,
  moMain,
  moCont,
  regime,
  iter = 0L,
  fSet = NULL,
  lambdas = 0.5,
  cvFolds = 0L,
  surrogate = "hinge",
  kernel = "linear",
  kparam = NULL,
  verbose = 0L
)

Arguments

moPropen

Propensity model of class "ModelObj", see modelObj::modelObj.

moMain

Main effects outcome model of class "ModelObj".

moCont

Contrast outcome model of class "ModelObj".

regime

An object of class formula specifying the design of the policy/regime.

iter

Integer. Maximum number of iterations for the iterative outcome regression; 0 fits moMain and moCont in a single combined regression.

fSet

A function defining the subset structure of the treatment options, or NULL if no subsets exist.

lambdas

Numeric or numeric vector of candidate penalty parameters.

cvFolds

Integer. Number of cross-validation folds used to select among candidate tuning parameters; 0 disables cross-validation.

surrogate

The surrogate for the 0-1 loss function. The options are "logit", "exp", "hinge", "sqhinge", "huber".

kernel

The kernel function used for the decision rule. The options are "linear", "poly", "radial".

kparam

Numeric. Kernel parameter; required for kernel "poly" and "radial".

verbose

Integer. The level of progress detail printed during fitting; 0 suppresses output.

Value

List of (default) control arguments.
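A brief sketch of how the returned list is typically used, passed as the control argument of polle::policy_learn() with type = "earl". The data variables, the propensity model formula, and the regime formula below are illustrative assumptions, not part of this page; the modelObj::buildModelObj() call mirrors the "ModelObj" class expected by moPropen.

```r
library(polle)
library(modelObj)

## Hypothetical propensity model for a binary treatment regressed on a
## covariate Z (variable names are assumed for illustration):
moPropen <- buildModelObj(
  model = ~ Z,
  solver.method  = "glm",
  solver.args    = list(family = "binomial"),
  predict.method = "predict.glm",
  predict.args   = list(type = "response")
)

## Policy learner using EARL with the defaults set by control_earl():
pl <- policy_learn(
  type    = "earl",
  control = control_earl(
    moPropen  = moPropen,
    regime    = ~ Z,        # design of the policy/regime
    surrogate = "hinge",    # default surrogate loss
    kernel    = "linear"    # default kernel
  )
)
```

The resulting learner pl can then be applied to a policy_data object; any control argument not set here falls through to DynTxRegime::earl() as described above.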


polle documentation built on May 29, 2024, 1:15 a.m.