knitr::opts_chunk$set(
    error = TRUE,
    fig.height = 4,
    fig.path = "figs/",
    background = "#F0F0F0",
    cache = FALSE,
    include = TRUE,
    size = "scriptsize"
)
rm(list = ls())

Introduction

The lpinfer package for R contains a collection of tools for estimation and inference on the solutions to linear programs. Our motivation comes from the econometrics literature on partial identification or "bounds" approaches. A string of recent research has shown that bounds in many interesting models can be constructed by solving two linear programs: \begin{equation} \minmax_{x \in \re^{d}_{+}} \Atarget x \quad \text{s.t.} \quad \Ashape x = \betashape \quad \text{and} \quad \Aobs x = \betaobs. \label{eq:linear-programs} \end{equation} The notation in $\eqref{eq:linear-programs}$ means the following: $x$ is a $d$-dimensional vector of non-negative parameters; $\Atarget x$ is the scalar target parameter of interest; $\Ashape x = \betashape$ collects the shape constraints, which are known to hold; and $\Aobs x = \betaobs$ collects the data-matching constraints, where $\betaobs$ is a vector of quantities that can be estimated from the data.

A concrete example of how $\eqref{eq:linear-programs}$ arises from an econometric model is given in the next section.

Although the shape and data-matching constraints enter $\eqref{eq:linear-programs}$ in the same way, their conceptual difference will be important in the procedures ahead. The shape constraints will be required to hold. Subject to that requirement, $x$ will be chosen so that the data-matching constraints are met as well as possible.

Often, it will not be possible to meet the data-matching constraints exactly. This is no different here than in other statistical models. The estimation procedure described in Section \@ref(estimation) modifies $\eqref{eq:linear-programs}$ to account for this possibility. Applying this estimation procedure yields bound estimates for the target parameter, $[\betahatlb, \betahatub]$. These estimates can be expected to be consistent for the "true" population bounds, $[\betalb, \betaub]$, that would arise if the model were correctly-specified and the population data were known; see @mogstadsantostorgovitsky2018e for more detail.

In addition to estimating the population bounds, we might also want to conduct frequentist statistical inference on them, either by testing the null hypothesis $H_{0}: \betatarget \in [\betalb, \betaub]$ for some pre-specified value $\betatarget$, or by constructing a confidence interval $[\clb(\alpha), \cub(\alpha)]$ that contains $[\betalb, \betaub]$ in repeated samples with probability $1-\alpha$. This problem falls into the large and ever-expanding literature on inference in partially identified models; see the recent surveys by @canayshaikh2017ic or @molinari2020ae for extensive reviews.

For the lpinfer package, we focus on the subset of these procedures that scale well both in terms of $d$ (the dimension of the parameter vector, $x$) and in terms of the number of rows in $\Aobs$ and $\Ashape$. This restricts our attention to procedures specifically designed for projection inference. We limit our focus even further to procedures that can be implemented reliably, in the sense that they involve only calculations for which certificates are available. This means that we do not consider methods that involve solving unstructured nonlinear, non-convex optimization problems. We would, however, be amenable to considering methods that involve non-convex optimization problems for which certificates of optimality are available, such as linear integer programs or bilinear programs.

Even with these restrictions, there are still many procedures. Currently, we have implemented four in lpinfer; see Section \@ref(statistical-inference-testing) for details. We have a list of other applicable procedures that we intend to implement in the future, and we plan to continually add new procedures as they are proposed.

Example: The Mixed Logit Model

In this section, we give a simple example of how $\eqref{eq:linear-programs}$ arises in an empirically-relevant econometric model. For more examples, see @fangsantosshaikhetal2020wp [, section 2] and the references cited therein.

@bajarifoxryan2007aer and @foxkimryanetal2011qe consider a class of discrete choice random coefficient models. The binary choice version of this model has the structure \begin{equation} \label{eq:mixedlogit} Y_{i} = \IndicSmall{ V_{i}'W_{i} \geq U_{i} }, \end{equation} where $i$ denotes an individual, $Y_{i}$ is an observed binary random variable, $W_{i}$ is an observed random vector, and both $V_{i}$ and $U_{i}$ are unobserved random variables that are assumed to be independent of $W_{i}$. The unobservable $U_{i}$ is assumed to have a standard logistic distribution and be independent of $V_{i}$. Model $\eqref{eq:mixedlogit}$ is like the latent variable representation of the well-known logit model for binary outcomes, with the important difference that $V_{i}$ are unobserved random variables, not degenerate parameters.

@foxkimryanetal2011qe consider approximating the distribution of $V_{i}$ with a discrete distribution. This discrete distribution is assumed to have known support points $(v_{1},\ldots,v_{d})$. The unknown parameters of the model are the masses that the distribution places on each support point, which are the $x = (x_{1},\ldots,x_{d})$ in $\eqref{eq:linear-programs}$. These masses must be non-negative, so that $x \in \re^{d}_{+}$. They also must sum to unity, so that \begin{equation} \sum_{j=1}^{d} x_{j} \equiv \Ashape x = \betashape \equiv 1. \label{eq:mixedlogit-shape} \end{equation} These are the shape constraints of this simple model. More complicated models might have many more shape constraints.

If $\eqref{eq:mixedlogit}$ were indeed the way that $Y_{i}$ were generated, then we would have \begin{equation} \label{eq:data-matching} \Prob[Y_{i} = 1 \vert W_{i} = w] = \sum_{j=1}^{d}x_{j}\ell(v_{j}'w), \end{equation} where $\ell(v_{j}'w)$ are the logit probabilities conditional on $V_{i} = v_{j}$, that is,

$$ \ell(z) \equiv \frac{1}{ 1 + \exp(-z) }. $$ The left-hand side of $\eqref{eq:data-matching}$ is a population moment, which is not observed but can be estimated. For example, if $W_{i}$ is discretely distributed, a simple estimator is $$ \betaobs(w) \equiv \frac{\sum_{i : W_{i} = w} Y_{i}}{\sum_{i : W_{i} = w} 1}. $$ If we choose a set of support points from the distribution of $W_{i}$, say $w_{1},\ldots,w_{p}$, then we obtain the data-matching constraints \begin{equation} \betaobs \equiv \begin{bmatrix} \betaobs(w_{1}) \\ \vdots \\ \betaobs(w_{p}) \end{bmatrix} = \begin{bmatrix} \ell(v_{1}'w_{1}) & \cdots & \ell(v_{d}'w_{1}) \\ \vdots & & \vdots \\ \ell(v_{1}'w_{p}) & \cdots & \ell(v_{d}'w_{p}) \end{bmatrix} \begin{bmatrix} x_{1} \\ \vdots \\ x_{d} \end{bmatrix} \equiv \Aobs x. \label{eq:logit-data-matching} \end{equation} If $p$ is smaller than $d$, then we would generally expect there to be multiple solutions to $\eqref{eq:logit-data-matching}$, even under the shape constraint $\eqref{eq:mixedlogit-shape}$. This is the source of partial identification in this model, and the reason that the solutions to the minimization and maximization problems defined in $\eqref{eq:linear-programs}$ might be different.
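As a minimal illustration of the simple estimator of $\betaobs(w)$ above (not part of lpinfer, and assuming for simplicity a data.frame d with a binary column Y and a scalar discrete regressor column W; the example code used below handles the vector-valued case):

# Hypothetical sketch: estimate beta.obs(w) as the sample frequency of
# Y = 1 among observations with W equal to w.
betaobs_hat <- function(d, w) mean(d$Y[d$W == w])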

There are many target parameters that might be of interest in this model. One example is the probability that $Y_{i} = 1$ at some value $w^{\star}$ not in the support of $W_{i}$: $$ \Prob[Y_{i} = 1 \vert W_{i} = w^{\star}] = \sum_{j=1}^{d}x_{j}\ell(v_{j}'w^{\star}) = \begin{bmatrix} \ell(v_{1}'w^{\star}) & \cdots & \ell(v_{d}'w^{\star}) \end{bmatrix} \begin{bmatrix} x_{1} \\ \vdots \\ x_{d} \end{bmatrix} \equiv \Atarget x. $$ For a more elaborate example, consider the elasticity of the probability that $Y_{i} = 1$ with respect to $w_{k}$ at $W_{i} = w^{\star}$, and conditional on $V_{i} = v_{j}$:

$$ \epsilon_{j}(w^{\star}) = \left( \frac{\partial}{\partial w_{k}} \ell(v_{j}'w^{\star}) \right) \frac{w_{k}^{\star}}{\ell(v_{j}'w^{\star})} = v_{jk}w_{k}^{\star}(1 - \ell(v_{j}'w^{\star})). $$ We might want to conduct inference on the average of this elasticity over $V_{i}$: $$ \sum_{j=1}^{d} x_{j} \epsilon_{j}(w^{\star}) \equiv \Atarget x, $$ or perhaps on the probability that it lies below some pre-specified value $e$: \begin{equation} \label{eq:dfelast} \sum_{j=1}^{d} x_{j} \IndicSmall{\epsilon_{j}(w^{\star}) \leq e} \equiv \Atarget x. \end{equation} There are many other choices of target parameter that may be interesting, depending on the application.
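As a small illustrative helper (not part of lpinfer), the elasticity formula above can be evaluated directly for a given support point $v_{j}$, evaluation point $w^{\star}$, and coordinate $k$:

# Logistic cdf and the elasticity eps_j(w*) of P[Y = 1 | W = w*, V = v_j]
# with respect to the k-th component of w*, following the formula above.
logit_cdf <- function(z) 1 / (1 + exp(-z))
elasticity <- function(v, wstar, k) v[k] * wstar[k] * (1 - logit_cdf(sum(v * wstar)))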

The lpinfer package includes example code for the mixed logit. This example code includes a function that generates the matrix $\Aobs$ and the vector $\betaobs$ in $\eqref{eq:logit-data-matching}$, a function that draws a sample of data, and a function that generates the $\Atarget$ vector that corresponds to the distribution of elasticity in $\eqref{eq:dfelast}$. The following snippet illustrates these functions. They will be used ahead to demonstrate the core features of lpinfer.

source("/Users/conroylau/Library/R/3.6/library/lpinfer/example/dgp_mixedlogit.R")
dgp <- mixedlogit_dgp(dimw = 4, dimv = 16)
dgp$wdist
head(dgp$vdist)
dim(dgp$vdist)

mixedlogit_Aobs(dgp)
set.seed(67)
df <- mixedlogit_draw(dgp, n = 4000)
mixedlogit_betaobs(df, dgp)

Atgt <- mixedlogit_Atgt_dfelast(dgp, w2eval = 1, eeval = -2)
Atgt %*% dgp$vdist[,1] # The actual (unknown) value of the target parameter

# Change the target parameter slightly
Atgt <- mixedlogit_Atgt_dfelast(dgp, w2eval = 1, eeval = -1)
Atgt %*% dgp$vdist[,1]

The lpmodel Class

The lpinfer package is built around an S3 class called lpmodel. This class simply consists of the five objects necessary to specify $\eqref{eq:linear-programs}$: $\Atarget, \Ashape, \betashape, \Aobs, \betaobs$. An instance of the class is created with the following syntax:

library("lpinfer")

lpm <- lpmodel(A.obs = mixedlogit_Aobs(dgp),
               beta.obs = function(d) mixedlogit_betaobs(d, dgp),
               A.shp = rep(1, nrow(dgp$vdist)),
               beta.shp = 1,
               A.tgt = mixedlogit_Atgt_dfelast(dgp, w2eval = 1, eeval = -1))

All of the statistical inference procedures utilize either the bootstrap or subsampling. The procedures differ on which components of an lpmodel are allowed to change across resampling, that is, which components are treated as "stochastic". Table \@ref(tab:stochastic) catalogues these differences. In implementing the procedures, we have tried to allow for as much flexibility as possible. If the paper does not explicitly require a given object to be deterministic, then we allow it to be stochastic. In some cases, there may be additional restrictions on the form of some of the components of an lpmodel.

Stochastic components can be specified in two ways. The first is to pass a function that accepts a data.frame as an argument and returns the realization of the object. This is what is being done for beta.obs in the syntax above, but it can be done for any of the other components as well. For beta.obs only, the function can also return two objects in a list, with the first object being a vector realization of $\betaobs$ and the second object being an estimate of the asymptotic variance matrix of $\betaobs$. This is useful in some procedures that make use of the variance--covariance matrix of $\betaobs$; if one is not passed, then lpinfer will estimate it via nonparametric bootstrap.
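For instance, here is a hedged sketch of a beta.obs function that returns both the point estimate and a variance estimate. The bootstrap-based variance below is purely illustrative (and assumes the data.frame has one observation per row); any consistent estimator of the asymptotic variance of $\betaobs$ could be supplied instead.

# Illustrative sketch: return beta.obs together with a bootstrap estimate
# of its variance matrix (the number of draws B = 50 is arbitrary).
beta.obs.with.var <- function(d) {
    bhat <- mixedlogit_betaobs(d, dgp)
    B <- 50
    boot <- replicate(B, {
        idx <- sample(nrow(d), replace = TRUE)
        mixedlogit_betaobs(d[idx, , drop = FALSE], dgp)
    })
    list(bhat, var(t(boot)))  # first element: vector; second: variance matrix
}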

Alternatively, the user can create a list of objects ahead of time and pass this in lieu of a function. For example,

R <- 100 # number of bootstrap replications
beta.obs.list <- vector(mode = "list", length = R)
for (i in 1:R) {
    d <- mixedlogit_draw(dgp, n = 4000)
    beta.obs.list[[i]] <- mixedlogit_betaobs(d, dgp)
}

lpm2 <- lpmodel(A.obs = mixedlogit_Aobs(dgp),
                beta.obs = beta.obs.list,
                A.shp = rep(1, nrow(dgp$vdist)),
                beta.shp = 1,
                A.tgt = mixedlogit_Atgt_dfelast(dgp, w2eval = 1, eeval = -1))

\noindent When using this list syntax, the number of bootstrap replications is inferred from the length of beta.obs.list. This makes the list syntax more error-prone for procedures in which multiple components can be stochastic, since each component's list would need to have the same length. We include the list syntax to accommodate models in which it might be costly to compute beta.obs; with the list approach, the user can calculate these once at the outset before running various testing procedures. When using the function syntax, the number of bootstrap replications can be specified in the testing procedure itself.

library(kableExtra)

df.estb <- matrix(c("$\\texttt{estbounds}$", rep("$\\checkmark$", 5), "--"), nrow = 1)
df.fsst <- matrix(c("$\\texttt{fsst}$", rep("", 4), "$\\checkmark$", "--"), nrow = 1)
df.dkqs <- matrix(c("$\\texttt{dkqs}$", rep("", 4), "$\\checkmark$", "Must have $A_{\\rm shp} = 1_{1 \\times d}$ and $\\beta_{\\rm shp} = 1$"), nrow = 1)
df.subs <- matrix(c("$\\texttt{subsample}$", rep("$\\checkmark$", 5), "--"), nrow = 1)
df.cr <- matrix(c("$\\texttt{chorussell}$", rep("$\\checkmark$", 5), "--"), nrow = 1)

df.lpm <- rbind(df.estb, df.fsst, df.dkqs, df.subs, df.cr)
colnames(df.lpm) <- c(" ",
                      "$A_{\\rm tgt}$",
                      "$A_{\\rm shp}$",
                      "$\\beta_{\\rm shp}$",
                      "$A_{\\rm obs}$",
                      "$\\beta_{\\rm obs}$",
                      " ")
kbl(df.lpm, caption = "Requirements for `lpmodel` by procedure", escape = FALSE, booktabs = TRUE) %>%
   add_header_above(c("Procedure",
                      "Can it be a function?" = 5,
                      "Other notes"),
                    escape = FALSE,
                    bold = TRUE) %>%
  kable_styling(position = "center")

Estimation

Estimation is conducted using the procedure developed in @mogstadsantostorgovitsky2018e. The command is called estbounds:

estbounds(data = df, lpmodel = lpm)

\noindent For a full description of the procedure and its theoretical justification, we refer the reader to @mogstadsantostorgovitsky2018e. However, the basic idea is intuitive. First we search over all $x \in \re^{d}_{+}$ that satisfy $\Ashape x = \betashape$ to try to minimize the distance between $\Aobs x$ and $\betaobs$. By default, this distance is measured using the Euclidean ($\ell_{2}$) norm, although in computationally difficult problems one might want to switch to the $\ell_{1}$ norm using the norm option:

r <- estbounds(data = df, lpmodel = lpm, norm = 1)
print(r)

\noindent The minimized value of this criterion is stored in the field mincriterion:

print(r$mincriterion)

\noindent Next, $\eqref{eq:linear-programs}$ is solved with the hard constraint $\Aobs x = \betaobs$ replaced by the soft constraint that $x$ is such that the norm of $\Aobs x - \betaobs$ is within some tolerance of mincriterion. This tolerance is determined by a non-negative regularization parameter kappa. By default, this parameter is set to $0$, but in some problems it may be theoretically important to set it to a small positive number.

estbounds(data = df, lpmodel = lpm, kappa = 1e-3)

Statistical Inference: Testing

In this section, we briefly outline the syntax and options for the testing procedures that are currently available in lpinfer. For details on the theory behind each procedure, please consult the paper cited in the corresponding subsection heading. All of the tests are for the null hypothesis discussed in Section \@ref(introduction).

An important comment---applicable to all of the procedures---is that the number of bootstrap or subsampling replications is, by default, set to $100$. This should be increased to a conventional size for production runs. The optional argument for all procedures is called R. Examples of changing it are given ahead.

@fangsantosshaikhetal2020wp [, "FSST"]

The basic syntax for testing $\betatarget = .2 \in [\betalb, \betaub]$ is this:

fsst(data = df, lpmodel = lpm, beta.tgt = .2)
fsst(data = df, lpmodel = lpm, beta.tgt = .2, R = 500) # more bootstraps

\noindent The tuning parameter in the procedure is called lambda, which is a number between $0$ and $1$. By default, a data-driven choice of lambda is used, which corresponds to lambda = NA. The test can be run for multiple values of lambda simultaneously by passing a vector:

fsst(data = df, lpmodel = lpm, beta.tgt = .2, lambda = c(.1, .2, .5, NA, .9))

\noindent Another choice that one might want to change in some settings is the weighting matrix used in an internal step of fsst. By default, it is set to the inverse of the diagonal of the asymptotic variance matrix of $\betaobs$, but one can also use the identity matrix, or the full asymptotic variance matrix.

set.seed(25)
fsst(data = df, lpmodel = lpm, beta.tgt = .2, weight.matrix = "diag")
set.seed(25)
fsst(data = df, lpmodel = lpm, beta.tgt = .2, weight.matrix = "identity")
set.seed(25)
fsst(data = df, lpmodel = lpm, beta.tgt = .2, weight.matrix = "avar")

\noindent The function used to compute the matrix square root can be specified through the sqrtm.method option. This function must accept a single argument (the matrix whose square root is to be computed) and return a matrix. An example using pracma::sqrtm with kmax = 100 is as follows.

set.seed(25)
fsst(data = df, lpmodel = lpm, beta.tgt = .2,
     sqrtm.method = function(m) pracma::sqrtm(m, kmax = 100)$B)

\noindent More detailed information on test statistics and critical values can be obtained by calling summary on the output of fsst.

@debkitamuraquahetal2018wp [, "DKQS"]

The basic syntax for testing $\betatarget = .2 \in [\betalb, \betaub]$ is similar to fsst:

dkqs(data = df, lpmodel = lpm, beta.tgt = .2)
dkqs(data = df, lpmodel = lpm, beta.tgt = .2, R = 500) # more bootstraps

\noindent The tuning parameter in DKQS is called tau, which is a number between $0$ and $1$. @kamat2018wp observed that a key optimization problem solved in the DKQS procedure can be infeasible unless tau is smaller than some maximum value. This maximum value can be computed, and the default behavior of dkqs is to do so and use it. Alternatively, the user can pass their own values of tau as a vector:

set.seed(34)
dkqs(data = df, lpmodel = lpm, beta.tgt = .2, tau = c(0, .5, .75))

\noindent One thing to note about the DKQS procedure is that it is only designed for a specific choice of $\Ashape$ and $\betashape$, as shown in Table \@ref(tab:stochastic). For this reason, dkqs ignores these fields of the lpmodel:

lpm2 <- lpm
lpm2$A.shp <- NULL
set.seed(34)
dkqs(data = df, lpmodel = lpm2, beta.tgt = .2, tau = .5) # still works

Profiled Subsampling [@romanoshaikh2008jospai]

Profiled subsampling is based on a criterion function that measures the distance between $\Aobs x$ and $\betaobs$ while constraining $x \in \re^{d}_{+}$, $\Ashape x = \betashape$ and $\Atarget x = \betatarget$ for a hypothetical $\betatarget$. The syntax is this:

subsample(data = df, lpmodel = lpm, beta.tgt = .2)
subsample(data = df, lpmodel = lpm, beta.tgt = .2, R = 500) # more bootstraps

\noindent As in estbounds, the user can change the norm used to measure distance to the $\ell_{1}$ norm, although for subsample there are no computational benefits from doing so. The main tuning parameter is phi, a number between $0$ and $1$ that determines the subsample size: if $n$ is the original number of observations, then the subsample size is taken to be $n^{\phi}$. The default value is phi = 2/3. The user can also change replace from its default of FALSE to TRUE to draw with replacement. Note that replace = TRUE together with phi = 1 corresponds to a standard nonparametric bootstrap.
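For instance, the subsample size implied by the default choice of phi can be computed directly (whether subsample rounds or truncates this value is an implementation detail not shown here):

# With n = 4000 observations and the default phi = 2/3, the implied
# subsample size is 4000^(2/3), roughly 252.
nrow(df)^(2/3)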

subsample(data = df, lpmodel = lpm, beta.tgt = .2, phi = 6/9)
subsample(data = df, lpmodel = lpm, beta.tgt = .2, phi = 1,
          replace = TRUE) # bootstrap

@chorussell2019ae [, "CR"]

The basic syntax for testing $\betatarget = .2 \in [\betalb, \betaub]$ is similar to the previous methods:

chorussell(data = df, lpmodel = lpm, beta.tgt = .2)
chorussell(data = df, lpmodel = lpm, beta.tgt = .2, R = 500) # more bootstraps

\noindent The CR procedure directly bootstraps estimates of the bounds. To handle situations when $\Aobs x = \betaobs$ cannot be met perfectly in sample, we use estbounds to construct these estimates. Thus, the kappa and norm tuning parameters from estbounds are also tuning parameters for chorussell.

chorussell(data = df, lpmodel = lpm, beta.tgt = .2, kappa = 1e-3, norm = 1)

\noindent The CR procedure also differs from the others in that it is designed to construct confidence intervals directly, rather than to test specific points. The examples above actually perform the reverse operation, inverting confidence intervals to obtain $p$--values for a specific point. Alternatively, the user can directly obtain a level $1-\alpha$ confidence interval by simply not passing beta.tgt, as in

chorussell(data = df, lpmodel = lpm, alpha = .1)

Statistical Inference: Confidence Intervals

\noindent All of the procedures described in the previous section (other than CR) are designed for testing a given null hypothesis. The lpinfer package also provides a bisection routine that constructs a level $1 - \alpha$ confidence interval by inverting level $\alpha$ tests. For example,

set.seed(5)
invertci(f = fsst, farg = list(data = df, lpmodel = lpm), progress = FALSE)

\noindent The first argument gives the function that implements the test, while the second argument passes any arguments needed to run the test as a list. The third argument, progress, is optional and is set to FALSE here for brevity; when progress = TRUE, the sequential progress of the bisection procedure is printed. The default is to construct a level 95\% ($= 1 - .05$) confidence interval, but this can be changed with the alpha option:

set.seed(5)
invertci(f = fsst, farg = list(data = df, lpmodel = lpm), progress = FALSE,
         alpha = .10)

\noindent The bisection procedure attempts to find appropriate initial brackets, but the user can override this to help speed up computation. Although this is not visible here because progress = FALSE, the following call requires many fewer iterations than the previous one.

set.seed(5)
invertci(f = fsst, farg = list(data = df, lpmodel = lpm), progress = FALSE,
         init.lb = c(.1, .5), init.ub = c(.75, .8))

\noindent invertci also returns the $p$--values it has computed, which can then be passed to future calls of invertci in order to speed up computation. For example, the second invertci call here runs much more quickly with the optional pvals argument than it would without it:

set.seed(5)
ci <- invertci(f = fsst, farg = list(data = df, lpmodel = lpm), progress = FALSE)
set.seed(5)
invertci(f = fsst, farg = list(data = df, lpmodel = lpm), progress = FALSE,
         alpha = .10, pvals = ci$pvals) # this runs more quickly

Parallelization

All of the procedures discussed in Sections \@ref(statistical-inference-testing) and \@ref(statistical-inference-confidence-intervals) can be parallelized in lpinfer using the future package.

library("future")

# 3 workers
plan(multisession, workers = 3)
t_start <- Sys.time()
set.seed(1)
fsst(df, lpm, .2, R = 5000)
print(sprintf("That took %s seconds.",
              round(difftime(Sys.time(), t_start, units = "secs"),
                    digits = 3)))

# 1 worker
plan(multisession, workers = 1)
t_start <- Sys.time()
set.seed(1)
fsst(df, lpm, .2, R = 5000)
print(sprintf("That took %s seconds.",
              round(difftime(Sys.time(), t_start, units = "secs"),
                    digits = 3)))
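
Once the parallel runs are finished, it can be good practice (although not required by lpinfer) to return to sequential execution:

# Reset the future plan so that subsequent code runs on a single process
plan(sequential)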

Converting Linear Programs to "Standard Form"

All linear programs can be converted to a form that looks like $\eqref{eq:linear-programs}$ by including appropriate slack variables. We call this "standard form" with scare quotes because it is not the standard "standard form" seen in most textbooks on linear programming. At the same time, it is not unheard of to see $\eqref{eq:linear-programs}$ called "standard form"; one reference we know of is @bertsimastsitsiklis1997 [, page 4].

Semantics aside, we have included a function that helps translate linear programs written in a more expressive, extensive form into the format $\eqref{eq:linear-programs}$ required by lpinfer. The idea is to use another S3 class in lpinfer called lpmodel.natural, which allows one to explicitly specify bounds on variables and declare different "senses" of equality and inequality constraints. This functionality is still experimental, but here is an example of how it works:

### Step 1: Create an object in the `lpmodel.natural` class
# Obs
Aobs0 <- matrix(c(1, 2), nrow = 1)
bobs0 <- c(10)

# Shp
Ashp0 <- matrix(c(3, 4, 5, 6), nrow = 2, byrow = TRUE)
bshp0 <- matrix(c(15, 100))
sshp0 <- matrix(c(">=", "<="))

# Tgt
Atgt0 <- matrix(c(1, 1), nrow = 1)

# Upper bounds
xub0 <- c(200, 200)

# Lower bounds
xlb0 <- c(0.1, 0.1)

# Formulate the `lpmodel.natural` object
lpmn0 <- lpmodel.natural(A.obs = Aobs0,
                         A.shp = Ashp0,
                         A.tgt = Atgt0,
                         beta.obs = bobs0,
                         beta.shp = bshp0,
                         sense.shp = sshp0,
                         x.ub = xub0,
                         x.lb = xlb0)

### Step 2: Apply the `standard.lpmodel` function
lpm1 <- standard.lpmodel(lpmn0)

\noindent The new object, lpm1, has updated A.shp and beta.shp components that incorporate the appropriate slack variables to fit the form $\eqref{eq:linear-programs}$.

print(lpm1$A.shp)
print(lpm1$beta.shp)

\noindent For example, the first row corresponds to the <= components in the original Ashp0 and bshp0 specification, while the second and third rows correspond to the upper bounds on x. The lpm1 instance can now be used as an lpmodel for all of the routines in lpinfer.

Further Examples

Further syntax examples are contained in the installation directory for lpinfer under the example subdirectory.

References {-}


