```r
# Load all necessary information without showing it
library(productConfig)
load("knit_data.RData")
library(devtools)
devtools::load_all(".")
```

First, let us look at the data:

```r
tail.matrix(camera_data)
```

As you can see, our data contains 1828 rows from around 63 different users in a rather complex format, which makes it difficult to work with directly. This is the reason we need the basic function cluster \texttt{GetFunctions}. For example, it is quite necessary to know how many attributes there are in our data:

```r
get_attrs_ID(dataset=camera_data)
```

Given that our functions are mostly vectorized and assuming all users have the same attributes, we can ask for the unique values of each \texttt{attr}:

```r
getAttrValues(dataset=camera_data, attr = c(1,2,3,4))
temp <- lapply(getAttrValues(dataset=camera_data), round, 8)
lapply(temp, unique)
```

Now that we know how many attributes there are, we also know how many columns the decision matrices are going to have. The number of rows depends on how many times each user interacted with the product configurator and again, since the functions are vectorized, we can calculate the number of rows for all users using \texttt{getRoundsById}.

```r
all.rounds <- getRoundsById(camera_data, userid = getAllUserIds(camera_data))
head(all.rounds, 3) # Display only the results for the first three users
```

We can now easily observe that user 10 interacted with the configurator four times before making a decision.

The three functions presented above are necessary to create more complex structures, such as the decision matrix. To build it, we just need to use the right function with the right parameters. As mentioned earlier, the fourth attribute (\texttt{attr=4}) is price, which means it is a cost attribute (lower values are better). To handle this, we pass the corresponding attribute ID to \texttt{cost_ids}. Choosing a user from our table at random, we calculate their decision matrix.

```r
decisionMatrix(camera_data, 33, rounds="all", cost_ids=4)
```

Notice how we did not specify the \texttt{attr} argument. As suggested before, aside from \texttt{dataset} and \texttt{userid}, almost all arguments have a default value and perform a default behavior. When no input is given, \texttt{attr} calculates with all recognized attributes and \texttt{rounds} with only the first and the last round, which is why we explicitly specified \texttt{"all"} above. Our next step is to determine the reference points. For the \texttt{refps} of PT we will use the default settings of user \texttt{33}, which are:

```r
decisionMatrix(camera_data, 33, rounds="first", cost_ids=4)
```

This result should correspond to and validate our PT-reference-point function \texttt{referencePoints}.

```r
referencePoints(camera_data, 33, cost_ids=4)
```

Now that we have determined the decision matrix and the reference points for user 33, we can proceed to compute the value matrix.

[Insert quick figure]
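For intuition, each entry of the value matrix reflects prospect theory's value function applied to the gains and losses relative to the reference points, with the parameters used below ($\alpha = \beta = 0.88$, $\lambda = 2.25$). The following is a minimal illustrative sketch of that function, not the package's internal implementation:

```r
# Illustrative sketch of the prospect theory value function:
# concave for gains, convex and steeper for losses (loss aversion, lambda).
pt_value <- function(x, alpha = 0.88, beta = 0.88, lambda = 2.25) {
  ifelse(x >= 0, x^alpha, -lambda * (-x)^beta)
}
pt_value(c(-1, -0.5, 0, 0.5, 1)) # losses weigh heavier than equal-sized gains
```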

However, since we have demonstrated how the functions build on each other and to avoid repetitiveness, we will calculate these matrices using only one function. 

```r
pvMatrix(camera_data, 33, attr=1:4, rounds="all", cost_ids = 4,
         alpha = 0.88, beta=0.88, lambda=2.25)
```

Finally, for the last step of calculating the overall prospect values, we need to determine the attribute weights using the \texttt{WeightFunctions} cluster. Within it we have three weighting functions at our disposal, each mirroring a method described in section \ref{ch:Content1:sec:Section3}: \texttt{differenceToIdeal}, \texttt{entropy} and \texttt{highAndStandard}. Using the entropy method, we first need to normalize the \emph{decision matrix} and then pass it on to the respective function.

```r
decisionMatrix33 <- decisionMatrix(camera_data, 33, attr=1:4, rounds="all", cost_ids = 4)
norm.DM33 <- normalize.sum(decisionMatrix33[[1]])
entropy(norm.DM33)
```

Since most functions are vectorized, they return a list. For this reason, we use \texttt{[[1]]} to subset the matrix from the list, which can then be passed on to \texttt{normalize.sum}. Keeping such details in mind, in addition to having to know which normalizing function to use with each weighting method, can be quite inefficient. Alternatively, we can compute the above result with an `interface' function called \texttt{weight.entropy}; analogous interface functions exist for the other weighting methods under their corresponding names.

```r
weight.entropy(camera_data, 33, attr=1:4, rounds="all", cost_ids = 4)
```

The weight vector reflects the relative importance each attribute has for the user, which is why the weights sum to 1. Furthermore, each row of the value matrix represents how much value one specific product setting (or product alternative) has for the user, relative to its reference points. Multiplying each row of the value matrix by the weight vector returns the overall prospect value of each alternative.
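As a quick check, this arithmetic can be sketched by hand, reusing the calls shown above (a hypothetical sketch, assuming both functions return lists as described; the package does not require this step):

```r
# Hypothetical manual computation, reusing the calls from above
pvMatrix33 <- pvMatrix(camera_data, 33, attr=1:4, rounds="all", cost_ids = 4)
w33 <- weight.entropy(camera_data, 33, attr=1:4, rounds="all", cost_ids = 4)
sum(w33[[1]])                            # the weights sum to 1
as.vector(pvMatrix33[[1]] %*% w33[[1]])  # one overall prospect value per alternative
```

To achieve the same result directly, we can find at least two equivalent ways.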
```r
overallPV(camera_data, 33, rounds="all", cost_ids = 4, weightFUN ="entropy")
```

```r
overallPV(camera_data, 33, rounds="all", cost_ids = 4, weightFUN ="entropy",
          alpha=0.88, beta=0.88, lambda=2.25)
```

The last command illustrates many of the implementation principles described in section \ref{ch:Content2:sec:Section2}. The function \texttt{overallPV} works without the user having to calculate anything first (a). It is built upon many of the functions we used before, as you can notice by the many shared parameters it accepts (b). It calculates our final results with just one command (c) and it does so by returning a list (e). Throughout this demonstration we also showed how some functions are vectorized in the \texttt{userid} parameter (d) and how they handle cost type attributes by using the \texttt{cost\_ids} argument.

Lastly, ranking the alternatives by value reveals that the second alternative has the highest \emph{prospect value} for user 33. This is quite interesting, since the chosen product configuration, i.e.\ the last round, has a value of -0.3715795, which is not even the second highest option. Nevertheless, we made three important assumptions which influence our results: (1) the default values as reference points, (2) prospect theory's assumptions and value function and (3) the entropy weighting function.
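Such a ranking can be obtained, for example, by ordering the output of \texttt{overallPV} (a hypothetical one-liner, again assuming the list output described above):

```r
# Hypothetical ranking sketch: alternative indices, best to worst
ov33 <- overallPV(camera_data, 33, rounds="all", cost_ids = 4, weightFUN ="entropy")
order(ov33[[1]], decreasing = TRUE)
```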

The overall prospect values for the DRP approach and the TRP theory can be computed in a similar manner with \texttt{overallDRP} and \texttt{overallTRP}. As mentioned before, for multiple reference point approaches there is no theoretical framework on how to read all the reference points from the data. Thus, for DRP and TRP we need to specify \texttt{SQ, G} and \texttt{MR, SQ, G}, respectively, for each of the four attributes.
```r
# Note: names inside c() are dropped by matrix(), so we set colnames explicitly
dualPoints <- matrix(c(sq=1.5,g=2.5,  1.5,2.5,  1.5,2.5,  0.17,-0.10),
                     nrow=4, ncol=2, byrow=TRUE)
triPoints <-  matrix(c(mr=0.5,sq=1.5,g=2.5,  0.5,1.5,2.5,  0.5,1.5,2.5,
                       0.40,0.17,-0.10), nrow=4, ncol=3, byrow=TRUE)
colnames(dualPoints) <- c("sq", "g")
colnames(triPoints) <- c("mr", "sq", "g")
```

For the fourth attribute we chose a smaller G than the SQ and the MR (MR only for TRP), since it will be converted to a benefit type attribute. Furthermore, since attributes 1 to 3 have the same possible values, they should have the same reference points. Now we can calculate the overall prospect values for user 33. For simplicity, we will use the default parameter values for DRP and TRP (see pages 14, 15).
```r
overallDRP(camera_data, 33, rounds="all", cost_ids = 4, weightFUN="entropy",
           dual.refps = dualPoints, lambda = 2.25, delta = 0.8)
overallTRP(camera_data, 33, rounds="all", cost_ids = 4, weightFUN="entropy",
           tri.refps = triPoints, beta_f = 5, beta_l = 1, beta_g = 1, beta_s = 3)
```

The weighting functions are easier to use, since they basically require the same information. Analogous to \texttt{weight.entropy}, we call our two other weight functions with the same arguments.

```r
weight.differenceToIdeal(camera_data, 33, attr=1:4, rounds="all", cost_ids = 4)
weight.highAndStandard(camera_data, 33, attr=1:4, rounds="all", cost_ids = 4)
```
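For a quick side-by-side comparison of the three methods, the weight vectors could be stacked into one matrix (a hypothetical sketch, assuming each interface function returns a list of weight vectors as above):

```r
# Hypothetical comparison: one row of attribute weights per weighting method
w_methods <- list(
  entropy   = weight.entropy(camera_data, 33, attr=1:4, rounds="all", cost_ids = 4)[[1]],
  diffIdeal = weight.differenceToIdeal(camera_data, 33, attr=1:4, rounds="all", cost_ids = 4)[[1]],
  highStd   = weight.highAndStandard(camera_data, 33, attr=1:4, rounds="all", cost_ids = 4)[[1]]
)
do.call(rbind, w_methods)
```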

The goal of this chapter was solely to introduce and familiarize the reader with the mechanics and parameters of `productConfig'. Now that we know... we would like to briefly illustrate how we can use productConfig together with visualization packages to gain interesting insights into product configurators. Consider...


