IndexNumR: A Package for Index Number Calculation

library(IndexNumR)


Introduction

IndexNumR is a package for computing indices of aggregate prices or quantities using information on the prices and quantities of multiple products over multiple time periods. Such numbers are routinely computed by statistical agencies to measure, for example, the change in the general level of prices, production inputs and productivity for an economy. Well known examples are consumer price indices and producer price indices.

In recent years, advances have been made in index number theory to address biases in many well known and widely used index number methods. One area of development has been the adaptation of multilateral methods, commonly used in cross-sectional comparisons, to the time series context. This typically involves more computational complexity than bilateral methods. IndexNumR provides functions that make it easy to estimate indices using common index number methods, as well as multilateral methods.

Packages providing related functionality

• micEconIndex: produces price or quantity indices using the Paasche, Laspeyres or Fisher index methods.
• multilaterals: provides multilateral indices for cross-section and panel data. Can also produce bilateral indices using Paasche, Laspeyres, Fisher and Tornqvist methods.
• productivity: calculates indices of productivity and profitability using common index number methods.
• IndexNumber: calculates Laspeyres, Paasche or Fisher indexes.
• PriceIndices: produces indices for a wide range of bilateral and multilateral methods, but limited to a monthly frequency. Provides splicing methods for extending multilateral indices as well as functions for preparation of the data.

The IndexNumR package

Data organisation

This first section covers the inputs into the main index number functions and how the data are to be organised to use these functions.

Index number input dataframe

The index number functions such as priceIndex, quantityIndex and GEKSIndex all take a dataframe as their first argument. This dataframe should contain everything needed to compute the index. In general this includes columns for,

• prices
• quantities
• a time period variable (more on this below)
• a product identifier that uniquely identifies each product.

The dataframe must have column names, since character strings are used in other arguments to the index number functions to specify which columns contain the data listed above. Column names can be set with the colnames function of base R. The sample dataset CES_sigma_2 is an example of the minimum dataframe required to compute an index.

head(CES_sigma_2)


In this case, the dataframe is sorted by the product identifier prodID, but it need not be sorted at all.
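For illustration, a minimal input dataframe can be constructed directly in base R. The data below are made up, and the column names simply mirror those of CES_sigma_2; any names can be used, since they are passed explicitly to the index number functions.

```r
# Hypothetical data: two products observed over two time periods,
# with the four columns required by the index number functions.
df <- data.frame(
  time = c(1, 1, 2, 2),
  prices = c(1.00, 2.50, 1.10, 2.40),
  quantities = c(10, 5, 9, 6),
  prodID = c(1, 2, 1, 2)
)
df
```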

The time period variable

To be able to compute indices, the data need to be subset in order to extract all observations on products for given periods. The approach used in IndexNumR is to require a time period variable as an input into many of its functions that will be used for subsetting. This time period variable must satisfy the following,

• start at 1
• increase in integer increments of 1
• be continuous (that is, have no gaps).

The variable may, and in fact likely will, have many observations for a given time period, since there are generally multiple items with price and quantity information. For example, the CES_sigma_2 dataset has observations on 4 products for each time period. We can see this by observing the first few rows of the dataset sorted by the time period.
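The three conditions above can be checked with a couple of lines of base R; validTimeVar below is a hypothetical helper, not an IndexNumR function.

```r
# Check the three conditions on a time period variable: starts at 1,
# increases in integer increments of 1, and has no gaps.
validTimeVar <- function(x) {
  periods <- sort(unique(x))
  periods[1] == 1 && all(diff(periods) == 1)
}

validTimeVar(c(1, 1, 2, 2, 3, 3))  # TRUE
validTimeVar(c(1, 2, 4))           # FALSE: period 3 is missing
```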

head(CES_sigma_2[order(CES_sigma_2$time),])

The user can provide their own time variable, or if a date variable is available, IndexNumR has four functions that can compute the required time variable: yearIndex, quarterIndex, monthIndex and weekIndex. Users should be aware that if there are a very large number of observations then these functions can take some time to compute, but once the variable has been computed it is easier and faster to work with than dates.

Time aggregation

A related issue is that of aggregating data collected at some higher frequency to a lower frequency. When computing index numbers, this is often done by computing a unit value as follows,

$$UV_{t} = \frac{\sum_{n=1}^{N}p^{t}_{n}q^{t}_{n}}{\sum_{n=1}^{N}q^{t}_{n}}$$

That is, sum up total expenditure on each item over the required period, and divide by the total quantity. Provided that a time period variable as described above is available, the unit values can be computed using the function unitValues. This function returns the unit values, along with the aggregate quantities for each time period and each product. The output will also include the product identifier and time period variable, so the output dataframe from the unitValues function contains everything needed to compute an index number.

Sample data

IndexNumR provides a sample dataset, CES_sigma_2, that contains prices and quantities on four products over twelve time periods that are consistent with consumers displaying CES preferences with an elasticity of substitution equal to two. This dataset is calculated using the method described in [@df:2017]. We start with prices for each of $n$ products in each of $T$ time periods, an n-dimensional vector of preference parameters $\alpha$, and a T-dimensional vector of total expenditures.
Then calculate the expenditure shares for each product in each time period using,

$$s_{tn} = \frac{\alpha_{n}p_{tn}^{1-\sigma}}{\sum_{n=1}^{N}\alpha_{n}p_{tn}^{1-\sigma}}$$

and use those shares to calculate the quantities,

$$q_{tn} = \frac{e_{t}s_{tn}}{p_{tn}}$$

IndexNumR provides the function CESData to produce datasets assuming CES preferences as above for any elasticity of substitution $\sigma$, using the prices, $\alpha$, and expenditure values assumed in [@df:2017]. The vector $\alpha$ is,

$$\alpha = \begin{bmatrix} 0.2 & 0.2 & 0.2 & 0.4 \end{bmatrix}$$

and the prices and expenditures are,

t     p1    p2    p3    p4    e
1     2.00  1.00  1.00  0.50  10
2     1.75  0.50  0.95  0.55  13
3     1.60  1.05  0.90  0.60  11
4     1.50  1.10  0.85  0.65  12
5     1.45  1.12  0.40  0.70  15
6     1.40  1.15  0.80  0.75  13
7     1.35  1.18  0.75  0.70  14
8     1.30  0.60  0.72  0.65  17
9     1.25  1.20  0.70  0.70  15
10    1.20  1.25  0.40  0.75  18
11    1.15  1.28  0.70  0.75  16
12    1.10  1.30  0.65  0.80  17

Matched-sample indexes

A common issue when computing index numbers is that the sample of products over which the index is computed changes over time. Since price and quantity information is generally needed on the same set of products for each pair of periods being compared, the index calculation functions provided in IndexNumR provide the option sample="matched" to use only a matched sample of products. How the matching is performed depends on whether the index is bilateral or multilateral. For bilateral indices the price and quantity information is extracted for a pair of periods, any non-overlapping products are removed, and the index is computed over these matched products. This is repeated for each pair of periods over which the index is being computed. For multilateral indexes it is somewhat different. For the GEKS index, the matching is performed for each bilateral comparison that enters into the calculation of the multilateral index (see the section on the GEKS index below).
For the Geary-Khamis and Weighted-Time-Product-Dummy methods, matching can be performed over each window of data. That is, only products that appear in all time periods within each calculation window are kept. For these two indexes a matched sample is not required; by default, IndexNumR will set price and quantity to zero for all missing observations, to allow the index to be computed. For the WTPD index, this can be shown to give the same result as running a weighted least squares regression on the available pooled data.

Matched-sample indexes may suffer from bias. As a simple assessment of the potential bias, the function evaluateMatched calculates the proportion of total expenditure that the matched sample covers in each time period. The function provides output for expenditure as well as counts, and can evaluate overlap using either a chained or fixed base index. The first four columns of the output present the base period information: base_index (the time index of the base period), base (total base period expenditure or count), base_matched (the expenditure or count of the base period for matched products) and base_share (the share of total expenditure in the base period that remains after matching). Columns 5-8 report the same information for the current period. Columns 4 and 8 can be expressed as,

$$\lambda_{t} = \frac{\sum_{n\in I(1)\cap I(0)}p_{n}^{t}q_{n}^{t}}{\sum_{n\in I(t)}p_{n}^{t}q_{n}^{t}} \quad \text{for } t \in \{1,0\},$$

where $I(t)$ is the set of products available in period $t$, $t=1$ refers to the current period (used to compute column 8) and $t=0$ refers to the comparison period (used to compute column 4).

The count matrix has two additional columns, "new" and "leaving". The new column gives the number of products that exist in the current period but not the base period (products entering the sample). The leaving column gives the count of products that exist in the base period but not the current period (products leaving the sample).
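As a sketch of the calculation behind the expenditure share columns, the matched expenditure share for the current period can be computed directly in base R; matchedShare is an illustrative helper, not part of IndexNumR.

```r
# Illustrative helper (not part of IndexNumR): the share of current period
# expenditure covered by products that are also present in the base period.
matchedShare <- function(df, t) {
  base <- df[df$time == t - 1, ]
  curr <- df[df$time == t, ]
  m <- curr[curr$prodID %in% base$prodID, ]
  sum(m$prices * m$quantities) / sum(curr$prices * curr$quantities)
}

# Hypothetical data where product 3 enters the sample in period 2
df <- data.frame(time = c(1, 1, 2, 2, 2),
                 prices = c(1, 2, 1, 2, 3),
                 quantities = c(1, 1, 1, 1, 1),
                 prodID = c(1, 2, 1, 2, 3))
matchedShare(df, 2)  # 0.5: half of period 2 expenditure is on the new product
```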
Matching removes both of these types of products.

Bilateral index numbers

Bilateral index numbers are those that use data for two periods to compute each value of the index. All of the bilateral index numbers can be computed as period-on-period, chained or fixed base. Period-on-period simply measures the change from one period to the next. The chained index gives the cumulative change, and is calculated as the cumulative product of the period-on-period index. The fixed base index compares each period to the base period. This is also called a direct index, because unlike a chained index, it does not go through other periods to measure the change since the base period. The formulae used to compute the bilateral index numbers from period t-1 to period t are given below.

Elementary indexes

• Carli index [@carli:1804],

\begin{equation}
  P(p^{t-1},p^{t}) = \frac{1}{N}\sum_{n=1}^{N}\left(\frac{p^{t}_{n}}{p^{t-1}_{n}}\right)
\end{equation}

• Jevons index [@jevons:1865],

\begin{equation}
  P(p^{t-1},p^{t}) = \prod_{n=1}^{N}\left(\frac{p^{t}_{n}}{p^{t-1}_{n}}\right)^{(1/N)}
\end{equation}

• Dutot index [@dutot:1738],

\begin{equation}
  P(p^{t-1},p^{t}) = \frac{\sum_{n=1}^{N}p^{t}_{n}}{\sum_{n=1}^{N}p^{t-1}_{n}}
\end{equation}

Fixed-basket indexes

• Laspeyres index [@lasp:1871],

\begin{equation}
  P(p^{t-1},p^{t},q^{t-1}) = \frac{\sum_{n=1}^{N}p^{t}_{n}q^{t-1}_{n}}{\sum_{n=1}^{N}p^{t-1}_{n}q^{t-1}_{n}}
\end{equation}

• Paasche index [@paasche:1874],

\begin{equation}
  P(p^{t-1},p^{t},q^{t}) = \frac{\sum_{n=1}^{N}p^{t}_{n}q^{t}_{n}}{\sum_{n=1}^{N}p^{t-1}_{n}q^{t}_{n}}
\end{equation}

• Geometric Laspeyres index,

\begin{equation}
  P(p^{t-1},p^{t},q^{t-1}) = \prod_{n=1}^{N}\left(\frac{p^{t}_{n}}{p^{t-1}_{n}}\right)^{s^{t-1}_{n}},
\end{equation}

where $s^{t}_{n} = \frac{p^{t}_{n}q^{t}_{n}}{\sum_{n=1}^{N}p^{t}_{n}q^{t}_{n}}$ is the share of period $t$ expenditure on good $n$.
• Geometric Paasche index,

\begin{equation}
  P(p^{t-1},p^{t},q^{t}) = \prod_{n=1}^{N}\left(\frac{p^{t}_{n}}{p^{t-1}_{n}}\right)^{s^{t}_{n}},
\end{equation}

where $s^{t}_{n}$ is defined as above for the geometric Laspeyres index.

Superlative and pseudo-superlative indexes

• Fisher index [@fisher:1921],

\begin{equation}
  P(p^{t-1},p^{t},q^{t-1},q^{t}) = [P_{P}P_{L}]^{\frac{1}{2}},
\end{equation}

where $P_{P}$ is the Paasche index and $P_{L}$ is the Laspeyres index. The Fisher index has other representations, but this is the one used by IndexNumR in its computations.

• Tornqvist index [@torn:1936; @torntorn:1937],

\begin{equation}
  P(p^{t-1},p^{t},q^{t-1},q^{t}) = \prod_{n=1}^{N}\left(\frac{p^{t}_{n}}{p^{t-1}_{n}}\right)^{\left(s^{t-1}_{n}+s^{t}_{n}\right)/2},
\end{equation}

where $s^{t}_{n}$ is defined as above for the geometric Laspeyres index.

• Walsh index,

\begin{equation}
  P(p^{t-1},p^{t},q^{t-1},q^{t}) = \frac{\sum_{n=1}^{N}\sqrt{q^{t-1}_{n}q^{t}_{n}}\cdot p^{t}_{n}}{\sum_{n=1}^{N}\sqrt{q^{t-1}_{n}q^{t}_{n}}\cdot p^{t-1}_{n}}
\end{equation}

• Sato-Vartia index [@sato:1976; @vartia:1976],

\begin{equation}
  P(p^{t-1},p^{t},q^{t-1},q^{t}) = \prod_{n=1}^{N}\left(\frac{p^{t}_{n}}{p^{t-1}_{n}}\right)^{w_{n}},
\end{equation}

where the weights are normalised to sum to one,

\begin{equation}
  w_{n} = \frac{w^{*}_{n}}{\sum_{n=1}^{N}w^{*}_{n}},
\end{equation}

and $w^{*}_{n}$ is the logarithmic mean of the shares,

\begin{equation}
  w^{*}_{n} = \frac{s^{t}_{n}-s^{t-1}_{n}}{\log (s^{t}_{n}) - \log (s^{t-1}_{n})}
\end{equation}

• Geary-Khamis index [@khamis:1972],

\begin{equation}
  P(p^{t-1},p^{t},q^{t-1},q^{t}) = \frac{\sum_{n=1}^{N}h(q^{t-1}_{n}, q^{t}_{n})p^{t}_{n}}{\sum_{n=1}^{N}h(q^{t-1}_{n}, q^{t}_{n})p^{t-1}_{n}},
\end{equation}

where h() is the harmonic mean.
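Several of the formulas above translate directly into a few lines of base R. These are illustrative helpers operating on raw price and quantity vectors, not the IndexNumR implementations.

```r
# Illustrative implementations of some bilateral formulas above
# (not the IndexNumR implementations).
laspeyres <- function(p0, p1, q0) sum(p1 * q0) / sum(p0 * q0)
paasche   <- function(p0, p1, q1) sum(p1 * q1) / sum(p0 * q1)
fisher    <- function(p0, p1, q0, q1) {
  sqrt(laspeyres(p0, p1, q0) * paasche(p0, p1, q1))
}
tornqvist <- function(p0, p1, q0, q1) {
  s0 <- p0 * q0 / sum(p0 * q0)  # period t-1 expenditure shares
  s1 <- p1 * q1 / sum(p1 * q1)  # period t expenditure shares
  prod((p1 / p0)^((s0 + s1) / 2))
}

# Hypothetical two-product example
p0 <- c(1, 1); p1 <- c(2, 1); q0 <- c(1, 1); q1 <- c(1, 2)
laspeyres(p0, p1, q0)   # 1.5
paasche(p0, p1, q1)     # 1.333...
fisher(p0, p1, q0, q1)  # sqrt(2), between the Laspeyres and Paasche values
```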
Constant elasticity of substitution (CES) framework indexes

• CES index, also known as the Lloyd-Moulton index [@lloyd:1975; @Moult:1996],

\begin{equation}
  P(p^{t-1},p^{t},q^{t-1}) = \left[\sum_{n=1}^{N}s_{n}^{t-1}\left(\frac{p^{t}_{n}}{p^{t-1}_{n}}\right)^{(1-\sigma)}\right]^{\left(\frac{1}{1-\sigma}\right)},
\end{equation}

where $\sigma$ is the elasticity of substitution.

Time dummy methods

• Time-product-dummy

This is a regression model approach where log prices are modelled as a function of time and product dummies. The regression equation is given by,

\begin{equation}
  \ln{p_{n}^{t}} = \alpha + \beta_{1} D^{t} + \sum_{n = 2}^{N}\beta_{n}D_{n} + \epsilon_{n}^{t},
\end{equation}

where $D^{t}$ is equal to 1 in period $t$ and 0 in period $t-1$, and $D_{n}$ is equal to 1 if the product is product $n$ and 0 otherwise. The price index is then given by,

\begin{equation}
  P(p^{t-1},p^{t},q^{t-1},q^{t}) = \exp({\hat{\beta_{1}}})
\end{equation}

However, this is a biased estimate [@kennedy:1981], so IndexNumR optionally calculates the following adjusted estimate,

\begin{equation}
  P(p^{t-1},p^{t},q^{t-1},q^{t}) = \exp({\hat{\beta_{1}} - 0.5 \times Var(\hat{\beta_{1}})})
\end{equation}

The time-product-dummy equation can be estimated in IndexNumR using three methods via the weights parameter: ordinary least squares; weighted least squares where the weights are the product expenditure shares; or weighted least squares where the weights are the average of the expenditure shares in the two periods. In the first case, the index produced is the same as the matched-sample Jevons index, which does not use quantity information. The second option produces a matched-sample harmonic share weights index, and the last option produces the matched-sample Tornqvist index. See [@diewert:2005b] for a discussion of these results.
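The ordinary least squares case of the time-product-dummy regression can be sketched with lm() from base R; as noted above, with a matched sample it reproduces the Jevons index. The prices here are hypothetical.

```r
# Sketch of the OLS time-product-dummy regression for one pair of periods.
p0 <- c(1.00, 2.00, 3.00)   # prices in period t-1
p1 <- c(1.10, 2.10, 3.30)   # prices in period t
tpd <- data.frame(
  logp = log(c(p0, p1)),
  D    = rep(c(0, 1), each = 3),      # time dummy
  prod = factor(rep(1:3, times = 2))  # product dummies
)
fit <- lm(logp ~ D + prod, data = tpd)
unname(exp(coef(fit)["D"]))  # the time-product-dummy index
prod(p1 / p0)^(1 / 3)        # the Jevons index: identical with OLS
```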
Examples

To estimate a simple chained Laspeyres price index,

priceIndex(CES_sigma_2, pvar = "prices", qvar = "quantities", pervar = "time", prodID = "prodID", indexMethod = "laspeyres", output = "chained")

Estimating multiple different index numbers on the same data is straightforward,

methods <- c("laspeyres","paasche","fisher","tornqvist")
prices <- lapply(methods, function(x) {priceIndex(CES_sigma_2, pvar = "prices", qvar = "quantities", pervar = "time", prodID = "prodID", indexMethod = x, output = "chained")})
as.data.frame(prices, col.names = methods)

This illustrates the Laspeyres index's substantial positive bias, the Paasche index's substantial negative bias, and the similar estimates produced by the Fisher and Tornqvist superlative index numbers.

The elasticity of substitution parameter

The CES index number method requires an elasticity of substitution parameter in order to be calculated. IndexNumR provides a function elasticity to estimate the elasticity of substitution parameter, following the method of [@balk:2000]. The basic method is to solve for the value of the elasticity of substitution that equates the CES index to a comparison index. One comparison index noted by Balk is the 'current period' CES index,

$$\left[\sum_{n=1}^{N}s_{n}^{t}\left(\frac{p^{t}_{n}}{p^{t-1}_{n}}\right)^{-(1-\sigma)}\right]^{\left(\frac{-1}{1-\sigma}\right)}.$$

Therefore, we numerically calculate the value of $\sigma$ that solves,

$$\left[\sum_{n=1}^{N}s_{n}^{t-1}\left(\frac{p^{t}_{n}}{p^{t-1}_{n}}\right)^{(1-\sigma)}\right]^{\left(\frac{1}{1-\sigma}\right)} - \left[\sum_{n=1}^{N}s_{n}^{t}\left(\frac{p^{t}_{n}}{p^{t-1}_{n}}\right)^{-(1-\sigma)}\right]^{\left(\frac{-1}{1-\sigma}\right)} = 0.$$

This is done using the uniroot function of the stats package distributed with base R. Note that this equation can be used to solve for sigma for any $t=2,\cdots,T$, so there are $T-1$ potential estimates of sigma.
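The root-finding step can be sketched directly with uniroot. The data below are hypothetical, constructed so that the expenditure shares are consistent with CES preferences and $\sigma = 2$, so solving the equation above should recover that value; this is an illustrative sketch, not the elasticity implementation.

```r
# Hypothetical CES-consistent data with sigma = 2
alpha <- c(0.2, 0.2, 0.2, 0.4)
p0 <- c(2.00, 1.00, 1.00, 0.50)
p1 <- c(1.75, 0.50, 0.95, 0.55)
s0 <- alpha / p0 / sum(alpha / p0)  # s_n = alpha_n p_n^(1-sigma) / sum(...), sigma = 2
s1 <- alpha / p1 / sum(alpha / p1)
r  <- p1 / p0

# CES index minus the 'current period' CES index
gap <- function(sigma) {
  sum(s0 * r^(1 - sigma))^(1 / (1 - sigma)) -
    sum(s1 * r^(-(1 - sigma)))^(-1 / (1 - sigma))
}
uniroot(gap, lower = -20, upper = 20)$root  # approximately 2
```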
The elasticity function will return all $T-1$ estimates as well as the arithmetic mean of the estimates. In addition to the current period CES index, Balk also notes that the Sato-Vartia index can be used, while [@idf:2010] note that a Fisher index could be used. Any of these three indexes can be used as the comparison index by specifying the compIndex option as either "fisher", "ces" or "satovartia". The current period CES index is the default.

The dataset available with IndexNumR, CES_sigma_2, was calculated assuming a CES cost function with an elasticity of substitution equal to 2. Running the elasticity function on this dataset,

elasticity(CES_sigma_2, pvar="prices", qvar="quantities", pervar="time", prodID="prodID", compIndex="ces")

recovers the value of $\sigma$ used to construct the dataset. There is one additional item of output labelled 'diff'. This is the value of the difference between the CES index and the comparison index, and is returned so that the user can check that this difference is indeed zero. If it is non-zero then it may indicate that uniroot was not able to find a solution within the specified upper and lower bounds for $\sigma$. These bounds can be changed with the options upper and lower of the elasticity function. The defaults are 20 and -20 respectively.

Chain-linked indices and the linking period problem

One problem with chain-linked indices is that, depending on the index number method chosen, the index will likely suffer from chain drift. Take an example where prices increase in one period and then return to their original level in the next period. An index suffering from chain drift will increase when prices increase, but won't return to its original level when prices do. In the above examples, it was noted that there is substantial positive bias in the Laspeyres index and substantial negative bias in the Paasche index. Part of this is due to chain drift.
One way of reducing the amount of chain drift is to choose linking periods that are 'similar' in some sense (alternatively, use a multilateral method). This method of linking has been mentioned by Diewert and Fox [@df:2017], and Hill [@hill:2001] takes the concept further to choose the link period based on a minimum cost spanning tree. To choose the linking period we need a measure of the similarity between two periods. For each period we have information on prices and quantities. The Hill (2001) method compares the two periods based on the Paasche-Laspeyres spread,

$$PL(p^{t},p^{T+1},q^{t},q^{T+1}) = \Bigg|\ln\Bigg(\frac{P_{T+1,t}^{L}}{P_{T+1,t}^{P}}\Bigg)\Bigg|,$$

where $P^{L}$ is a Laspeyres price index and $P^{P}$ is a Paasche price index. Since the Laspeyres and Paasche indices are biased in opposite directions, this choice of similarity measure is designed to choose linking periods that minimise the influence of index number method choice.

Alternative measures exist that compute the dissimilarity of two vectors. Two such measures, recommended by Diewert [@Diewert:2002], are the weighted log-quadratic index of relative price dissimilarity and the weighted asymptotically linear index of relative price dissimilarity, given by the following,

\begin{align}
  LQ(p^{t},p^{T+1},q^{t},q^{T+1}) = \sum_{n=1}^{N}\frac{1}{2}&(s_{T+1,n} + s_{t,n})[\ln(p_{T+1,n}/P(p^{t},p^{T+1},q^{t},q^{T+1})p_{t,n})]^{2} \label{eq:logQuadratic} \\
  AL(p^{t},p^{T+1},q^{t},q^{T+1}) = \sum_{n=1}^{N}\frac{1}{2}&(s_{T+1,n} + s_{t,n})[(p_{T+1,n}/P(p^{t},p^{T+1},q^{t},q^{T+1})p_{t,n}) + \nonumber \\
  &(P(p^{t},p^{T+1},q^{t},q^{T+1})p_{t,n}/p_{T+1,n}) - 2]
\end{align}

where $P(p^{t},p^{T+1},q^{t},q^{T+1})$ is a superlative index number.
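As a sketch, the log-quadratic measure for one pair of periods can be computed in a few lines of base R, here using a Fisher index for the superlative index $P$; the data are hypothetical and this is not the relativeDissimilarity implementation.

```r
# Log-quadratic dissimilarity between two periods, with a Fisher deflator.
p0 <- c(1.00, 2.00); q0 <- c(10, 5)
p1 <- c(1.20, 1.90); q1 <- c(9, 6)
s0 <- p0 * q0 / sum(p0 * q0)  # expenditure shares, period t
s1 <- p1 * q1 / sum(p1 * q1)  # expenditure shares, period T+1
Pf <- sqrt((sum(p1 * q0) / sum(p0 * q0)) * (sum(p1 * q1) / sum(p0 * q1)))
LQ <- sum(0.5 * (s1 + s0) * log(p1 / (Pf * p0))^2)
LQ  # zero only if prices moved proportionally between the two periods
```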
A final measure, proposed by Fox, Hill and Diewert [@fhd:2004], is a measure of absolute dissimilarity given by,

$$AD(x_{j},x_{k}) = \frac{1}{M+N}\sum_{l=1}^{M+N}\Bigg[\ln\Bigg(\frac{x_{kl}}{x_{jl}}\Bigg) - \frac{1}{M+N}\sum_{i=1}^{M+N}\ln\Bigg(\frac{x_{ki}}{x_{ji}}\Bigg)\Bigg]^{2} + \Bigg[\frac{1}{M+N}\sum_{i=1}^{M+N}\ln\Bigg(\frac{x_{ki}}{x_{ji}}\Bigg)\Bigg]^{2},$$

where $M+N$ is the total number of items in the vector, and $x_{j}$ and $x_{k}$ are the two vectors being compared. The authors use this in the context of detecting outliers, but it can be used to compare the price and quantity vectors of two time periods. One way to do this is to use only price information, or only quantity information. There are two ways to use both price and quantity information: stack the price and quantity vectors for each time period into a single vector and compare the two 'stacked' vectors; or calculate separate measures of absolute dissimilarity for prices and quantities before combining these into a single measure. The former method is simple to implement, but augments the price vector with a quantity vector that may be of considerably different magnitude and variance. Another option is to compute the absolute dissimilarity using prices and quantities separately, then combine them by taking the geometric average.

IndexNumR provides two functions enabling the estimation of all six dissimilarity measures above. The first function, relativeDissimilarity, calculates the Paasche-Laspeyres spread, log-quadratic and asymptotically linear measures, and the second function, mixScaleDissimilarity, computes the mix, scale and absolute measures of dissimilarity. Both functions provide the same output: a data frame with three columns containing the indices of the pairs of periods being compared in the first two columns and the value of the dissimilarity measure in the third column.
Once these have been computed, the function maximumSimilarityLinks can take the output data frame from these two functions and compute the maximum similarity linking periods. Those linking periods are then used by priceIndex as follows,

• Compute the measure of dissimilarity between all possible combinations of time periods.
• Set the price index to 1 in the first period.
• Compute the price index for the second period and chain it with the first period,

\begin{equation}
  P_{chain}^{2} = P_{chain}^{1} \times P(p^{1},p^{2},q^{1},q^{2}),
\end{equation}

where $P(p^{1},p^{2},q^{1},q^{2})$ is any bilateral index number formula.

• For each period $t$ from $3,\dots,T$, find the period $t^{min}$ with the minimum dissimilarity, comparing period $t$ to all periods $1, \dots, t-1$.
• Compute the similarity chain-linked index number,

\begin{equation}
  P_{chain}^{t} = P_{chain}^{t^{min}} \times P(p^{t^{min}},p^{t},q^{t^{min}},q^{t})
\end{equation}

Examples

Using the log-quadratic measure of relative dissimilarity, the dissimilarity between the periods in the CES_sigma_2 dataset is as follows,

lq <- relativeDissimilarity(CES_sigma_2, pvar="prices", qvar="quantities", pervar = "time", prodID = "prodID", indexMethod = "fisher", similarityMethod = "logquadratic")
head(lq)

The output from estimating the dissimilarity between periods can then be used to estimate the maximum similarity links,

maximumSimilarityLinks(lq)

To estimate a chained Laspeyres index linking together the periods with maximum similarity as estimated above,

priceIndex(CES_sigma_2, pvar = "prices", qvar = "quantities", pervar = "time", prodID = "prodID", indexMethod = "laspeyres", output = "chained", chainMethod = "logquadratic")

Multilateral index numbers

Multilateral index number methods use data from multiple periods to compute each term in the index.
IndexNumR provides the functions GEKSIndex, GKIndex and WTPDIndex to use the GEKS, Geary-Khamis or Weighted-Time-Product-Dummy multilateral index number methods respectively.

The GEKS method

The GEKS method is attributable to Gini [@gini:1931], Eltetö and Köves [@ek:1964], and Szulc [@szulc:1964] in the cross-sectional context. The idea of adapting the method to the time series context is due to Balk [@balk:1981], and it was developed further by Ivancic, Diewert and Fox [@idf:2011]. The user must choose the size of the window over which to apply the GEKS method, typically one or two years of data plus one period to account for seasonality. Denote this as $w$. The basic method followed by the function GEKSIndex is as follows. Choose a period, denoted period $k$, within the window as the base period. Calculate a bilateral index number between period $k$ and every other period in the window. Repeat this for all possible choices of $k$. This gives a matrix of size $w\times w$ of bilateral indexes between all possible pairs of periods within the window. Then compute the GEKS indexes for the first $w$ periods as,

$$\left[ \prod_{k=1}^{w}P^{k,1} \right]^{1/w}, \left[ \prod_{k=1}^{w}P^{k,2} \right]^{1/w}, \cdots, \left[ \prod_{k=1}^{w}P^{k,w} \right]^{1/w},$$

where the term $P^{k,t}$ is the bilateral index between period $t$ and base period $k$. IndexNumR offers the Fisher, Tornqvist, Walsh, Jevons and time-product-dummy index number methods for the index $P$ via the indexMethod option. The Tornqvist index method is the default. The $w\times w$ matrix of bilateral indexes is as follows,

$$P = \begin{pmatrix} P^{1,1} & \cdots & P^{1,w} \\ \vdots & \ddots & \vdots \\ P^{w,1} & \cdots & P^{w,w} \end{pmatrix}$$

so that the first term of the GEKS index is the geometric mean of the elements in the first column of the above matrix, the second term is the geometric mean of the second column, and so on.
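The construction above can be sketched in a few lines of base R, here using a bilateral Jevons index for $P$ (IndexNumR's default is the Tornqvist). The price matrix is hypothetical.

```r
# GEKS over a window of w periods: rows of the price matrix are periods,
# columns are products.
prices <- matrix(c(1.0, 2.0,
                   1.1, 1.9,
                   1.2, 2.1), nrow = 3, byrow = TRUE)
w <- nrow(prices)
N <- ncol(prices)
jevons <- function(p0, p1) prod((p1 / p0)^(1 / N))
# w x w matrix of bilateral indexes from base period k (row) to period t (column)
P <- outer(1:w, 1:w,
           Vectorize(function(k, t) jevons(prices[k, ], prices[t, ])))
geks <- apply(P, 2, function(col) prod(col)^(1 / w))  # geometric column means
geks / geks[1]  # normalise so the index starts at 1
```

Because the Jevons index is transitive, the GEKS result here coincides with the direct bilateral Jevons comparisons; with a non-transitive index such as the Tornqvist, GEKS imposes transitivity via the geometric means.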
Note that IndexNumR makes use of two facts about the matrix above to speed up computation: it is (inversely) symmetric, so that $P^{j,k} = 1/P^{k,j}$; and the diagonal elements are 1.

The Geary-Khamis method

The Geary-Khamis, or GK method, was introduced by Geary [@geary:1958] and extended by Khamis [@khamis:1970; @khamis:1972]. This method involves calculating a set of quality adjustment factors, $b_{n}$, simultaneously with the price levels, $P_{t}$. The two equations that determine both of these are:

$$b_{n} = \sum_{t=1}^{T}\left[\frac{q_{tn}}{q_{n}}\right]\left[\frac{p_{tn}}{P_{t}}\right]$$

$$P_{t} = \frac{p^{t} \cdot q^{t}}{b \cdot q^{t}}$$

These equations can be solved by an iterative method, where a set of $b_{n}$ are arbitrarily chosen and used to calculate an initial vector of price levels. This vector of price levels is then used to generate a new $b$ vector, and so on until the changes become smaller than some threshold. IndexNumR instead uses an alternative method. As discussed in [@df:2017] and following Diewert [@diewert:1999], the problem of finding the $b$ vector can be solved using the following system of equations:

$$\left[I_{N} - C\right]b = 0_{N},$$

where $I_{N}$ is the $N \times N$ identity matrix, $0_{N}$ is an N-dimensional vector of zeros, and the $C$ matrix is given by,

$$C = \hat{q}^{-1} \sum_{t=1}^{T}s^{t}q^{t\textbf{T}},$$

where $\hat{q}^{-1}$ is the inverse of the $N \times N$ diagonal matrix $\hat{q}$ whose diagonal elements are the total quantities purchased for each good over all time periods, $s^{t}$ is a vector of the expenditure shares for time period $t$, and $q^{t\textbf{T}}$ is the transpose of the vector of quantities purchased in time period $t$.

It can be shown that the matrix $[I_{N}-C]$ is singular, so a normalisation is required to solve for $b$. IndexNumR follows the method discussed by Irwin Collier Jr. in his comment on [@diewert:1999] and assumes the following normalisation,

$$\sum_{n=1}^{N}b_{n}q_{n} = 1,$$

which is, in matrix form,

$$c = R\begin{bmatrix} b_{1}q_{1} \\ \vdots \\ b_{n}q_{n} \end{bmatrix},$$

where $c$ is the $N \times 1$ vector $\begin{bmatrix} 1 & 0 & \dots & 0 \end{bmatrix}^{\textbf{T}}$, and $R$ is the $N \times N$ matrix,

$$R = \begin{bmatrix} 1 & 1 & \dots & 1 \\ 0 & \dots & \dots & 0 \\ \vdots & & & \vdots \\ 0 & \dots & \dots & 0 \end{bmatrix}$$

Adding the constraint to the original equation, we now have the solution for $b$,

$$b = [I_{N} - C + R]^{-1}c.$$

Once the $b$ vector has been calculated, the price levels can be computed from the GK equations above.

The Weighted-Time-Product-Dummy method

The weighted-time-product-dummy method can be seen as the country-product-dummy method [@summers:1973] adapted to the time-series context, and supposes the following model for prices:

$$p_{tn} = \alpha_{t}b_{n}e_{tn},$$

where $\alpha_{t}$ can be interpreted as the price level in period $t$, $b_{n}$ is the quality adjustment factor for product $n$, and $e_{tn}$ is a stochastic error term. The problem is to solve for $\alpha$ and $b$ using least squares minimisation. Following [@rao:1995], it is formulated as a weighted least squares minimisation, where the weights are based on economic importance.
Diewert and Fox show that this can be written as the solution to the system of equations,

$$[I_{N} - F]\beta = f,$$

where $I_{N}$ is the $N \times N$ identity matrix and $F$ is the following $N \times N$ matrix,

$$F = \begin{bmatrix} f_{11} & \dots & f_{1N} \\ \vdots & \dots & \vdots \\ f_{N1} & \dots & f_{NN} \end{bmatrix},$$

the elements of which are the following,

$$f_{nj} = w_{nj}/\sum_{k=1}^{N}w_{nk} \quad n,j = 1, \dots, N,$$

with the $w_{nj}$ given by,

$$w_{nj} = \sum_{t=1}^{T}w_{tnj} \quad n,j = 1, \dots, N,$$

and the $w_{tnj}$ given by,

$$w_{tnj} = s_{tn}s_{tj} \quad n \neq j, n = 1, \dots, N; j = 1, \dots, N; t = 1, \dots, T.$$

The vector $f$ on the right-hand side is the following,

$$f = [f_{1}, \dots, f_{N}]^{\textbf{T}},$$

where the $f_{n}$ are given by,

$$f_{n} = \sum_{t=1}^{T}\sum_{j=1}^{N}f_{tnj}(y_{tn} - y_{tj}) \quad \text{for } n = 1, \dots, N,$$

and $y_{tn} = \log(p_{tn})$. The matrix $[I_{N} - F]$ is singular, so a normalisation must be used to solve the system of equations. IndexNumR uses the method discussed in [@df:2017]: $\beta_{N}$ is assumed to be zero and the last equation is dropped to solve for the remaining coefficients.

Extending multilateral indexes

The multilateral indexes are normalised by dividing by the first term, to give an index for the first $w$ periods that starts at 1. If the index only covers $w$ periods then no further calculation is required. However, if there are $T>w$ periods in the dataset then the index must be extended. Extending a multilateral index can be done in a multitude of ways. Statistical agencies generally do not revise price indices like the consumer price index, so the methods offered by IndexNumR to extend multilateral indexes are methods that do not lead to revisions.
More specifically, these are called splicing methods, and the options available are the movement, window, half, mean, fbew (Fixed Base Expanding Window), fbmw (Fixed Base Moving Window), wisp (window splice on published data), hasp (half splice on published data) and mean splice on published data. The idea behind most of these methods is that we start by moving the window forward by one period and calculating the index for the new window. There will be $w-1$ overlapping periods between the initial index and the index computed on the window that has been rolled forward one period. Any one of these overlapping periods can be used to extend the multilateral index. The variants of the window, half and mean splice that are on published data use the same method as their classical counterparts, but splice onto the published series instead of the previously calculated window.

Let $P_{OLD}$ be the index computed over periods $1$ to $w$, and let $P_{NEW}$ be the index computed over the window rolled forward one period, from periods $2$ to $w+1$. Let the final index simply be $P$. For the first $w$ periods $P = P_{OLD}$; then $P^{w+1}$ is computed using the splicing methods as follows.

• Movement splice [@idf:2011] $$P^{w+1} = P^{w} \times \frac{P_{NEW}^{w+1}}{P_{NEW}^{w}}$$ That is, the movement between the final two periods of the index computed over the new window is used to extend the original index from period $w$ to $w+1$.

• Window splice [@krsinich:2016] $$P^{w+1} = P^{w} \times \frac{P_{NEW}^{w+1}/P_{NEW}^{2}}{P_{OLD}^{w}/P_{OLD}^{2}}$$ In this case, the ratio of the movement between the first and last periods computed using the new window, to the movement between the first and last periods using the old window is used to extend the original index.

• Half splice $$P^{w+1} = P^{w} \times \frac{P_{NEW}^{w+1}/P_{NEW}^{\frac{w-1}{2}+1}}{P_{OLD}^{w}/P_{OLD}^{\frac{w-1}{2}+1}}$$ The half splice uses the period in the middle of the window as the overlapping period to calculate the splice.

• Mean splice [@idf:2011] $$P^{w+1} = P^{w} \times \left( \prod_{t=1}^{w-1} \frac{P_{NEW}^{w+1}/P_{NEW}^{t+1}}{P_{OLD}^{w}/P_{OLD}^{t+1}} \right)^{\frac{1}{(w-1)}}$$ The mean splice uses the geometric mean of the movements between the last period and every other period in the window to extend the original index.

• FBMW [@lamboray:2017] $$P^{w+1} = P^{base} \times \frac{P_{NEW}^{w+1}}{P_{NEW}^{base}}$$ This method uses a fixed base period that is updated periodically. For example, if the data are monthly then the base period could be each December, which would be achieved by ensuring that December is the first period in the data and specifying a window length of 13. The splice is calculated using the movement between the final data point and the base period in the new window to extend the index. If the new data point being calculated is the first period after the base period, then this method produces the same price growth as the movement splice. Using the same example, if each December is the base period, then this method will produce the same price growth for January on December as the movement splice.

• FBEW [@chessa:2016]

This method uses the same calculation as FBMW, but uses a different set of data for the calculation. It expands the size of the window used to compute the new data point each period to include the latest period of data. If the data are monthly and the base period is each December, then the window used to compute the new data point in January includes only the December and January months. In February it includes the December, January and February months, and so on until the next December where it includes the full 13 months (assuming a window length of 13). This method produces the same result as the FBMW method when the new period being calculated is the base period. Using the same example, if each December is the base period, then each December this will produce the same result as the FBMW method.

The splicing methods are used in this fashion to extend the series up to the final period in the data.
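To make the mechanics concrete, the movement and window splices can be sketched directly. This is a toy illustration with made-up index values, not the package's internal implementation:

```r
# Toy sketch of the movement and window splices using made-up index values.
# p_old covers periods 1..w, p_new covers periods 2..(w+1), so p_new[k]
# holds the index value for period k + 1.
w <- 5
p_old <- c(1, 1.02, 1.05, 1.04, 1.08)
p_new <- c(1, 1.03, 1.02, 1.06, 1.07)

# Movement splice: extend using the last movement of the new window
p_movement <- p_old[w] * p_new[w] / p_new[w - 1]

# Window splice: extend using growth over the full overlap of the two windows
p_window <- p_old[w] * (p_new[w] / p_new[1]) / (p_old[w] / p_old[2])

round(c(movement = p_movement, window = p_window), 4)
```

Note that the two methods generally give different values for $P^{w+1}$, since they use different overlapping periods to link the windows.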

# Assume that the data in CES_sigma_2 are quarterly data with time period
# 1 corresponding to the December quarter.

splices <- c("window", "half", "movement", "mean", "fbew", "fbmw",
             "wisp", "hasp", "mean_pub")

# Estimate a GEKS index using the different splicing methods. Under
# the above assumptions, the window must be 5 to ensure the base period is
# each December quarter.
result <- as.data.frame(lapply(splices, function(x) {
  GEKSIndex(CES_sigma_2,
            pvar = "prices",
            qvar = "quantities",
            pervar = "time",
            prodID = "prodID",
            indexMethod = "tornqvist",
            window = 5,
            splice = x)
}))

colnames(result) <- splices
result


On the assumptions in the above example, periods 1, 5 and 9 are Decembers. Periods 1-5 are computed using full information and periods 6-12 are computed using the splicing methods. Notice that fbew = fbmw in period 9 (December) and fbmw = movement in period 6 (January).
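The equality of fbmw and the movement splice in the period after the base can be seen directly from the formulas: when the base period is the final period $w$ of the old window, the two coincide. A small sketch with made-up index values:

```r
# When base = w, FBMW and the movement splice give the same extension.
# Index values are made up; p_new[k] holds the value for period k + 1.
w <- 5
base <- w                                  # e.g. the December quarter
p_old <- c(1, 1.02, 1.05, 1.04, 1.08)      # periods 1..w
p_new <- c(1, 1.03, 1.02, 1.06, 1.07)      # periods 2..(w+1)

fbmw     <- p_old[base] * p_new[w] / p_new[base - 1]
movement <- p_old[w]    * p_new[w] / p_new[w - 1]

all.equal(fbmw, movement)  # TRUE
```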

The differences approach to index numbers

The above index number methods are derived from a ratio approach, which decomposes the value change from one period to the next into the product of a price index and a quantity index. An alternative approach is to decompose the value change into the sum of a price indicator and a quantity indicator. The theory dates back to the 1920s, and an excellent paper on this approach is Diewert [@diewert:2005a]. A number of methods are available for computing the indicators, and IndexNumR exposes the following via the priceIndicator function:

• Laspeyres indicator $$I(p^{t-1}, p^{t}) = \sum_{n=1}^{N}q_{n}^{t-1}\times(p_{n}^{t}-p_{n}^{t-1})$$

• Paasche indicator $$I(p^{t-1}, p^{t}) = \sum_{n=1}^{N}q_{n}^{t}\times(p_{n}^{t}-p_{n}^{t-1})$$

• Bennet indicator [@bennet:1920] $$I(p^{t-1}, p^{t}) = \sum_{n=1}^{N} \frac{(q_{n}^{t}+q_{n}^{t-1})}{2} \times(p_{n}^{t}-p_{n}^{t-1})$$

• Montgomery indicator [@montgomery:1929] $$I(p^{t-1}, p^{t}) = \sum_{n=1}^{N} \frac{p_{n}^{t}q_{n}^{t}-p_{n}^{t-1}q_{n}^{t-1}}{\log(p_{n}^{t}q_{n}^{t}) - \log(p_{n}^{t-1}q_{n}^{t-1})} \times\log\left(\frac{p_{n}^{t}}{p_{n}^{t-1}}\right)$$

Examples

Price indicators for the CES_sigma_2 dataset are as follows:

methods <- c("laspeyres", "paasche", "bennet", "montgomery")

p <- lapply(methods, function(x) {
  priceIndicator(CES_sigma_2,
                 pvar = "prices",
                 qvar = "quantities",
                 pervar = "time",
                 prodID = "prodID",
                 method = x)
})

as.data.frame(p, col.names = methods)


Quantity indicators can also be produced using the same methods as outlined above via the quantityIndicator function. This allows for the value change from one period to the next to be decomposed into price and quantity movements. To facilitate this, IndexNumR contains the valueDecomposition function, which can be used as follows to produce a decomposition of the value change for CES_sigma_2 using a Bennet indicator:

valueDecomposition(CES_sigma_2,
                   pvar = "prices",
                   qvar = "quantities",
                   pervar = "time",
                   prodID = "prodID",
                   priceMethod = "bennet")

Note that for this decomposition, the method is specified for the price indicator and IndexNumR uses the appropriate quantity indicator. For the Bennet and Montgomery indicators, the same method is used for the quantity indicator as for the price indicator. If a Laspeyres price indicator is requested then the corresponding quantity indicator is a Paasche indicator; the reverse is true if the Paasche indicator is used for prices.
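The additivity of the differences approach can be checked by hand: for the Bennet decomposition, the price and quantity indicators sum exactly to the value change. A minimal sketch with made-up prices and quantities, not using the package functions:

```r
# Verify that Bennet price + quantity indicators equal the value change.
# Prices and quantities are made up for illustration.
p0 <- c(2.0, 3.0, 1.5); q0 <- c(10, 4, 6)  # period t - 1
p1 <- c(2.2, 2.9, 1.6); q1 <- c(9, 5, 7)   # period t

bennet_price    <- sum((q1 + q0) / 2 * (p1 - p0))
bennet_quantity <- sum((p1 + p0) / 2 * (q1 - q0))
value_change    <- sum(p1 * q1) - sum(p0 * q0)

all.equal(bennet_price + bennet_quantity, value_change)  # TRUE
```

The identity holds for every product, since the cross terms in the two averages cancel exactly.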

Development

IndexNumR is hosted on Github at https://github.com/grahamjwhite/IndexNumR. There, users can find instructions for installing the development version directly from Github, as well as report bugs or suggest improvements.

References
