Analyzing network scale-up data using the networkreporting package

Introduction

The networkreporting package has several tools for analyzing survey data that have been collected using the network scale-up method.

This introduction will assume that you already have the networkreporting package installed. If you don't, please refer to the introductory vignette ("getting started") for instructions on how to do this.

Review of the network scale-up method

For the purposes of this vignette, we'll assume that you have conducted a survey using network scale-up questions in order to estimate the size of a hidden population. Analytically, using the scale-up estimator involves two steps:

1. estimating the size of each survey respondent's personal network (her degree)
2. using the estimated degrees to produce an estimate of the size of the hidden population

We'll quickly review each of these steps, and then we'll show how to use the package to carry the estimation out.

Step 1: estimating network sizes

Here, we will use the known population estimator for respondents' degrees (Killworth et al., 1998; Feehan and Salganik, 2016). In order to estimate the degree of the $i$th survey respondent, we use

$$ \begin{align} \label{eqn:kpdegree} \hat{d_i} = \sum_{j=1}^{K} y_{ij} \times \frac{N}{\sum_{j=1}^{K} N_j}, \end{align} $$

where $N$ is the total size of the population, $N_j$ is the size of the $j$th population of known size, and $y_{ij}$ is the number of connections that survey respondent $i$ reports between herself and members of the $j$th population of known size.
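
As a concrete illustration of this formula, here is a minimal by-hand calculation for a single invented respondent (the numbers are made up for illustration and are not part of the example data used later in this vignette):

## invented example: respondent i's reported connections to 3 groups of known size
y.i <- c(2, 0, 5)             # y_ij: reports about each known population
N.j <- c(10000, 25000, 5000)  # N_j: sizes of the known populations
N <- 10e6                     # N: total population size

## known population estimate of respondent i's degree
d.hat.i <- sum(y.i) * N / sum(N.j)
d.hat.i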

Step 2: estimating hidden population sizes

Once we have the estimates of the respondents' degrees, we use them to produce an estimate for the size of the hidden population:

$$ \begin{align} \label{eqn:nsum} \hat{N}_h = \frac{ \sum_{i \in s} y_{ih} }{ \sum_{i \in s} \hat{d_i} } \times N, \end{align} $$

where $N_h$ is the size of the population of interest (which we want to estimate), $s$ is the set of respondents in our sample, $\hat{d_i}$ is the estimate of respondent $i$'s degree obtained using the known population method, and $N$ is, as before, the total size of the population.
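
Continuing the invented illustration, the estimator is a ratio of two sums, scaled up by the total population size. (Again, these numbers are made up; later we compute everything from the example data using the package's functions.)

## invented example: a tiny sample of 4 respondents
y.ih <- c(1, 0, 2, 0)                 # reported connections to the hidden population
d.hat.toy <- c(1750, 900, 2300, 1200) # estimated degrees from step 1
N <- 10e6                             # total population size

## basic scale-up estimate of the hidden population's size
N.h.hat <- sum(y.ih) / sum(d.hat.toy) * N
N.h.hat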

Preparing data

In order to use the package, we will assume that you start with two datasets: the first is a survey containing information collected from respondents about their personal networks; the second has the sizes of several populations whose totals are known.

The example data for this vignette are provided with the networkreporting package, and can be loaded by typing

library(networkreporting)
library(surveybootstrap)

## column names for connections to hidden population numbers
hidden.q <- c("sex.workers", "msm", "idu", "clients")

## column names for connections to groups of known size
hm.q <- c("widower", "nurse.or.doctor", "male.community.health", "teacher", 
          "woman.smoke", "priest", "civil.servant", "woman.gave.birth", 
          "muslim", "incarcerated", "judge", "man.divorced", "treatedfortb", 
          "nsengimana", "murekatete", "twahirwa", "mukandekezi", "nsabimana", 
          "mukamana", "ndayambaje", "nyiraneza", "bizimana", "nyirahabimana", 
          "ndagijimana", "mukandayisenga", "died")

## size of the entire population
tot.pop.size <- 10718378

The example data include two datasets: one has all of the responses from a network scale-up survey, and the other has the known population sizes for use with the known population estimator.

Preparing the known population data

The demo known population data are in example.knownpop.dat:

example.knownpop.dat

example.knownpop.dat is very simple: one column has a name for each known population, and the other has its total size. We expect that users will typically start with a small dataset like this one. When using the networkreporting package, it is more useful to have a vector whose entries are the known population sizes and whose names are the known population names. The df.to.kpvec function makes it easy to create one:

kp.vec <- df.to.kpvec(example.knownpop.dat, kp.var="known.popn", kp.value="size")

kp.vec

Finally, we also need to know the total size of the population we are making estimates about. In this case, let's assume that we're working in a country of 10 million people (this replaces the value we assigned to tot.pop.size when we loaded the example data above):

# total size of the population
tot.pop.size <- 10e6

Preparing the survey data

Now let's take a look at the demo survey dataset, which is called example.survey:

head(example.survey)

The columns fall into a few categories:

information about the respondent and the sampling design, including the primary sampling unit (cluster), the stratum (region), and the sampling weight (indweight)
responses to the questions about populations of known size (the columns listed in hm.q above, such as widower and teacher)
responses to the questions about the hidden populations (the columns listed in hidden.q above: sex.workers, msm, idu, and clients)

This is the general form that your survey dataset should have.

Topcoding

Many network scale-up studies have topcoded the responses to the aggregate relational data questions. This means that researchers considered any response above a certain value, called the topcode, to be implausible. Before proceeding with the analysis, researchers substitute the maximum plausible value for the implausible ones. For example, in many studies, researchers replaced any response of 31 or higher with the value 30 before conducting their analysis (see Zheng, Salganik, and Gelman 2006).
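
Conceptually, topcoding just truncates each reported value at the chosen maximum. Here is a minimal base-R sketch of the idea, using an invented vector of responses rather than the package's topcode.data function:

## invented example: five reported network connections
reported <- c(0, 2, 31, 95, 7)

## topcode at 30: any value above 30 is replaced by 30
pmin(reported, 30)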

We won't discuss whether or not this is advisable here, but this is currently a common practice in scale-up studies. If you wish to follow it, you can use the topcode.data function. For example, let's topcode the responses to the questions about populations of known size to the value 30. First, we'll examine the distribution of the responses before topcoding:

## make a vector with the list of known population names from
## our dataset of known population totals
known.popn.vars <- paste(example.knownpop.dat$known.popn)

## before topcoding: max. response for several popns is > 30
summary(example.survey[,known.popn.vars])

Several populations, including widower, male.community.health, teacher, woman.smoke, muslim, and incarcerated, have maximum values that are very high. (It turns out that 95 is the highest value that could be recorded during the interviews; if respondents said that they were connected to more than 95 people in the group, the interviewers wrote 95 down.)

Now we use the topcode.data function to topcode all of the responses at 30:

example.survey <- topcode.data(example.survey,
                               vars=known.popn.vars,
                               max=30)

## after topcoding: max. response for all popns is 30
summary(example.survey[,known.popn.vars])

If you look at the help page for topcode.data, you'll see that it can also handle situations where the variables can take on special codes for missing values, refusals, and so forth.
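
For instance, suppose a questionnaire used codes such as 98 and 99 to mean "don't know" or "refused". These particular codes are hypothetical (our example data do not use them), and the topcode.data help page documents the arguments the function itself provides for this; but as a rough base-R sketch of the idea, you could recode such values to NA before topcoding:

## hypothetical special codes meaning "don't know" / "refused"
special.codes <- c(98, 99)

## recode those values to NA in each known population column...
for (v in known.popn.vars) {
  example.survey[[v]][example.survey[[v]] %in% special.codes] <- NA
}

## ...and then topcode as before
example.survey <- topcode.data(example.survey,
                               vars=known.popn.vars,
                               max=30)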

Estimating network sizes

Now that we have finished preparing the data, we turn to estimating the size of each respondent's personal network. To do this using the known population estimator, we use the kp.degree.estimator function:

d.hat <- kp.degree.estimator(survey.data=example.survey,
                             known.popns=kp.vec,
                             total.popn.size=tot.pop.size,
                             missing="complete.obs")

summary(d.hat)

We can examine the results with a histogram:

library(ggplot2) # we'll use qplot from ggplot2 for plots
theme_set(theme_minimal())
qplot(d.hat, binwidth=25)

Now let's append the degree estimates to the survey reports dataframe:

example.survey$d.hat <- d.hat

Missing data in known population questions

TODO

Estimating hidden population size

Now that you have estimated degrees, you can use them to produce estimates of the size of the hidden population. Here, we'll take the example of injecting drug users, idu:

idu.est <- nsum.estimator(survey.data=example.survey,
                          d.hat.vals=d.hat,
                          total.popn.size=tot.pop.size,
                          y.vals="idu",
                          missing="complete.obs")

Note that we had to tell the estimator to use only the rows of our dataset with no missing values (the missing = "complete.obs" option), and that we had to pass in the total population size (the total.popn.size option). The resulting estimate is

idu.est

This returns the estimate, and also the numerator and denominator used to compute it.
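
To see exactly how those pieces are stored, you can inspect the structure of the returned object (a plain base-R check; the element names may differ across package versions):

## look at the components of the returned estimate
str(idu.est)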

Variance estimation

In order to estimate the sampling uncertainty of our estimated totals, we can use the rescaled bootstrap technique; see Feehan and Salganik 2016 for more about the rescaled bootstrap and how it can be applied to the network scale-up method. In order to use the rescaled bootstrap, you need to be able to specify the sampling design of your study. In particular, you need to be able to describe the stratification (if any) and the primary sampling units used in the study.

idu.est <- bootstrap.estimates(## this describes the sampling design of the
                               ## survey; here, the PSUs are given by the
                               ## variable cluster, and the strata are given
                               ## by the variable region
                               survey.design = ~ cluster + strata(region),
                               ## the number of bootstrap resamples to obtain
                               ## (NOTE: in practice, you should use more than 100.
                               ##  this keeps building the package relatively fast)
                               num.reps=100,
                               ## this is the name of the function
                               ## we want to use to produce an estimate
                               ## from each bootstrapped dataset
                               estimator.fn="nsum.estimator",
                               ## these are the sampling weights
                               weights="indweight",
                               ## this is the name of the type of bootstrap
                               ## we wish to use
                               bootstrap.fn="rescaled.bootstrap.sample",
                               ## our dataset
                               survey.data=example.survey,
                               ## other parameters we need to pass
                               ## to the nsum.estimator function
                               d.hat.vals=d.hat,
                               total.popn.size=tot.pop.size,
                               y.vals="idu",
                               missing="complete.obs")

By default, bootstrap.estimates produces a list with num.reps entries; each entry is the result of calling the estimator function on one bootstrap resample.

Next, we write a bit of code to put all of these results together for plotting and summarizing:

library(plyr)
## combine the estimates together in one data frame
## (bootstrap.estimates gives us a list)
all.idu.estimates <- ldply(idu.est,
                           function(x) { data.frame(estimate=x$estimate) })

We can examine the combined results with a histogram or with summary:

## look at a histogram of the results
qplot(all.idu.estimates$estimate, binwidth=50)

## summarize the results
summary(all.idu.estimates$estimate)

To produce 95% intervals using the percentile method, you can do something like this:

quantile(all.idu.estimates$estimate, probs=c(0.025, 0.975))
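
If you also want an approximate standard error, a common convention (a sketch of standard bootstrap practice, not a function the package runs for you here) is to take the standard deviation of the bootstrap estimates:

## approximate standard error from the bootstrap replicates
sd(all.idu.estimates$estimate)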

Internal validity checks

If you want to run internal validation checks (see, e.g., Salganik et al., 2011, Fig 3), you can use the nsum.internal.validation function. We specify that we wish to use only complete observations (i.e., we will remove rows that have any missing values from our calculations).

iv.result <- nsum.internal.validation(survey.data=example.survey,
                                      known.popns=kp.vec,
                                      missing="complete.obs",
                                      killworth.se=TRUE,
                                      total.popn.size=tot.pop.size,
                                      kp.method=TRUE,
                                      return.plot=TRUE)

Now iv.result is a list with a summary of the results in its results entry:

iv.result$results

Since we passed the argument return.plot=TRUE to the function, we also get a plot:

print(iv.result$plot)

This plot is a ggplot2 object, so we can customize it if we want. As a very simple example, we can change the title:

print(iv.result$plot + ggtitle("internal validation checks"))

The ggplot2 website has more information on modifying ggplot2 objects.

Attaching known population totals to the dataframe

Several of the functions we demonstrated above required us to pass in the vector containing the known population sizes and also the size of the total population. We can avoid this step by attaching these two pieces of information to the survey dataframe using the add.kp function:

example.survey <- add.kp(example.survey, kp.vec, tot.pop.size)

d.hat.new <- kp.degree.estimator(survey.data=example.survey,
                                 # we don't need these arguments anymore, since we
                                 # attached them to survey.data's attributes using add.kp
                                 #known.popns=kp.vec,
                                 #total.popn.size=tot.pop.size,
                                 missing="complete.obs")

summary(d.hat.new)

This is exactly the same result we obtained before.
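
If you want to confirm this programmatically, a quick base-R check is:

## the two sets of degree estimates should agree
all.equal(d.hat, d.hat.new)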


