Optimal CO2 emission policy given relocation risk

user.name = '' # set to your user name

library(RTutor)
check.problem.set('RTutorOptimalCO2', ps.dir, ps.file, user.name=user.name, reset=FALSE)

# Run the Addin 'Check Problemset' to save and check your solution

title: "RTutorOptimalCO2" author: "Andreas Unkauf" date: "19.3.2016" output: html_document


In this problem set we want to analyze industry compensation under the risk of relocation. To do so, we gradually reproduce the paper "Industry Compensation under Relocation Risk: A Firm-Level Analysis of the EU Emissions Trading Scheme" by Ralf Martin, Mirabelle Muûls, Laure B. de Preux and Ulrich J. Wagner, published in 2014, in which the authors derive efficient permit allocation rules that reduce carbon leakage and job risk in the scenario of the EU ETS without increasing compensation. The original paper and the data can be obtained from the website of the American Economic Association (both can be found here: link).

Exercise Overview

  1. Introduction

  2. Exempted Firms

    2.1 Graphical approach

  3. Vulnerability Score

  4. Optimal Permit Allocation

  5. Exploring the Results

  6. Conclusion

  7. References

Note: This is the recommended order for working through the problem set, but individual parts can be skipped if required. Bear in mind that later exercises rely on knowledge and information acquired in earlier exercises. Within each exercise, however, you need to solve the tasks in the given order.

Exercise 1 - Introduction

In this exercise we just want to establish some basic knowledge about the considered scenario and take a first peek at the data we are going to use.

The EU Emission Trading Scheme

Since we want to analyze how permit allocation throughout the EU Emissions Trading Scheme (EU ETS) can be optimized, we first need to understand how the EU ETS works. The objective of the EU ETS is to reduce greenhouse gas emissions in all 31 participating countries by setting an overall cap on CO2 emissions from all stationary sources. All affected emitters can subsequently trade their emission permits with each other and thereby lower their individual production costs. As the overall cap on CO2 is gradually reduced, the price of permits rises. For firms that can reduce their emissions easily and cheaply, this is an incentive to do so in order to protect their profits; firms with high abatement costs, on the other hand, face substantial additional costs. For many firms these additional costs are an incentive to relocate to an unregulated country in order to stay competitive on the market (see Ellerman et al. (2010, p. 193-195)). This risk of relocation, which comprises job losses and carbon leakage, is the main reason that forced politicians to weaken the policy and give sectors and sub-sectors with a high risk of relocation a higher share of free permits. In phases I and II of the EU ETS this was mainly done by grandfathering, while since phase III the allocation is made by benchmarking. If you need more information on grandfathering and benchmarking, take a look at Ellerman et al. (2010, p. 60-67) or check the info box below.

info("Grandfathering and Benchmarking") # Run this line (Strg-Enter) to show info

While firms in sectors with an average risk of relocation receive a share of their benchmarked product permits for free in phase III (80% in 2013, declining to 30% in 2020), firms in a sector with a high risk of relocation are granted 100% of their benchmarked product permits for free and therefore need to buy only a small amount of permits at auction (see EU ETS Handbook (2015, p. 24)). Since the European Commission (EC) considers a sector to be at a high risk of relocation if it surpasses certain threshold values for carbon intensity (CI) and/or trade intensity (TI), we first want to analyze which sectors are exempted. For more information about CI and TI, check the info box below.

info("CI and TI") # Run this line (Strg-Enter) to show info

About the Data

The data for this paper provides a rather unique firm-level dataset, since it combines classical economic performance measures from the ORBIS database, data on CO2 emissions from the EU ETS, and interview data obtained by interviewing manufacturing firms from six European countries: Belgium, France, Germany, Hungary, Poland and the United Kingdom. The interview data supplies us with a proxy for how vulnerable firms are to carbon pricing, and we will use this source later to optimize the allocation of free permits.

On the problem set itself

Throughout this problem set you'll have to solve different code chunks and quizzes. Before you start with the first code chunk in each exercise you need to click the edit panel. Be sure to check the info boxes for further information needed to solve the code chunks. Moreover, you can check the hint panel at each code chunk to get a little tip for solving the task. Finally, if you don't know how to solve a task at all, you can check the solution panel to finish it.

So before we investigate the dataset in more detail, here's a little quiz to get warmed up:

! addonquizThe EU ETS

Exercise 2 - Exempted Firms

In this exercise we want to find out more about the sectors that are exempted. First of all we load the data necessary for this purpose. To do so we use the command read.dta() from the package foreign to load the dataset basicdata.dta, which contains firm-level data on manufacturing firms in the EU ETS, into our workspace. Remember to download the file basicdata.dta from where you downloaded this problem set and to save it in your working directory.

info("read.dta()") # Run this line (Strg-Enter) to show info

Task:

Load the library foreign and load the dataset basicdata.dta with the read.dta() command. Save the data in the variable basic. If you need further information, just check the info-box above about the read.dta() command. Click check after you finished.

# Enter your code here

Now with the data loaded, we want to see how the sectors are distributed in terms of carbon and trade intensity. Therefore we need to group the firms by their NACE four-digit level code (a system to classify industries) and calculate their average trade and carbon intensities as well as their total number of installations. The variables we need for this task are sec4dig, vv_xxx0, tt_xxx0, ninstallations and nonmanufacturing. You can check the info box for further information on these variables.

info("Variables of interest 1") # Run this line (Strg-Enter) to show info

First we want to look at some basic characteristics of the variables we are interested in.

Task:

Just press check to see some characteristics of the data.

library(stargazer)
stargazer(data.frame(basic$tt_xxx0,basic$vv_xxx0,basic$ninstallations),
          type="html", style="aer", title="Basic Firm Characteristics",
          covariate.labels = c("Trade intensity" , 
                               "Carbon intensity" , "Installations"),
          digits=2,flip=TRUE)


One of the most noticeable properties of the data is the fact that we have a different number of observations for each of these variables (here reported in the row N). This means that some observations have missing data, and we first need to clean up our data set in order to get rid of this missing data problem.

Task:

Create a new dataset Ex1 where you remove
- firms that have no entry for sec4dig, i.e. firms that have no NACE four-digit classification
- firms where nonmanufacturing is 1, i.e. firms that are not manufacturing
- firms where ninstallations is missing, i.e. firms where we have missing information about their number of installations

from the original dataset basic. Since this is your first real exercise you get the solution straightaway, but try to understand what happens and how we delete entries, because we will need the same pattern later on.

# We filter out the missing data from our data set basic
# and save the result in the new dataset Ex1.
# filter() comes from the package dplyr, so we load it first.
library(dplyr)
# Note: nonmanufacturing is apparently coded 1 or NA, so keeping
# only the NA entries removes the non-manufacturing firms.
Ex1 = filter(basic, !is.na(sec4dig),
             is.na(nonmanufacturing), !is.na(ninstallations))

Now it's time to group our firms by their NACE four-digit level and calculate the averages and sums. To accomplish this task you will learn about the group_by() and summarize() routines from the package dplyr.

Task:

Load the library dplyr and group the dataset Ex1 by its NACE four-digit level stored in sec4dig with the group_by() command. Save this grouping in the variable by_nace. If you are not familiar with the group_by() command, go and check the info-box below.

info("group_by()") # Run this line (Strg-Enter) to show info

# Enter your code here

With the grouping by_nace we are able to do calculations for firms of the same group. As said earlier we now want to calculate the mean of CI and TI as well as the sum of installations in each group of our dataset.

Task:

Use the summarize() command on your grouping by_nace and calculate the following:
- the mean of vv_xxx0 and save it into CI.mean; add na.rm=TRUE in the mean command
- the mean of tt_xxx0 and save it into TI.mean; add na.rm=TRUE in the mean command
- the sum of ninstallations and save it into sum.installations

Save the results into the variable NACE.

info("summarize()") # Run this line (Strg-Enter) to show info

# Enter your code here

Let's have a look at what we have calculated so far. Just press check.

stargazer(round(NACE[1:3,],2), type="html",style="aer",summary=FALSE, 
          title="Excerpt of NACE",rownames=FALSE,
          covariate.labels = c("NACE Code","CI","TI","Installations"))


As you can see, for each NACE four-digit sector we have the means of CI and TI as well as the sum of installations. If you find yourself asking why sum.installations is not a whole number, keep in mind that a firm may produce more than one type of product, so its installations can belong to different NACE four-digit levels. Let's look up the values for a specific NACE sector. For example the NACE four-digit code 2051 stands for "Manufacture of explosives". To get the values, you could simply look through the full dataset, but this wouldn't be very efficient. An easier way to get the data is to use the filter() command from the package dplyr as you have seen earlier. If you're not familiar with this function, check the info box.

info("filter()") # Run this line (Strg-Enter) to show info

Task:

We use the filter() command to get the values of CI.mean, TI.mean and sum.installations for the sector "Manufacture of explosives" with the NACE four-digit code 2051. Uncomment the code and fill out the missing commands.

# Uncomment the following code and fill out the ???
# stargazer(round(filter(???,sec4dig=="???"),2),type="html",
#         summary=FALSE,rownames = FALSE,
#         title="Sector 2051")


So the sector 2051: Manufacture of explosives consists of 17 installations (in our dataset) and has a CI of 2.50 and a TI of 26.03. Now back to our dataset NACE. We can now use the package ggplot2, which gives us a nice set of functions for visualization, to get a first insight into our data. So let's plot each NACE sector on a graph with TI on the x-axis and CI on the y-axis. Moreover, let each circle be proportional to the number of installations in this sector.

Task:

For this task the code is given, so just press check and get an insight into the data.

# Load the required package 'ggplot2' from the library
library(ggplot2)

# Define which data you want to use and its aesthetics.
# In our case the x and y arguments
MyPlot=ggplot(NACE,aes(x=TI.mean,y=CI.mean))

# With geom_point we define that we want a scatter plot;
# size lets us scale each point by the sector's number of installations,
# scale_size_area defines the maximal size of a point,
# with theme we turn off the legend,
# and with scale_x/scale_y we define our x- and y-axes
MyPlot=MyPlot+geom_point(aes(size=sum.installations),shape=1,alpha=0.8)+
  scale_size_area(max_size=20)+
  theme(legend.position="none")+
  scale_x_continuous(limits=c(0,100),name="Trade intensity")+
  scale_y_continuous(limits=c(0,80),name="Carbon intensity")

# Show the plot
MyPlot

As we can see, many NACE sectors are located in the lower left corner with very low CI and TI. We also have many trade-intensive sectors and only a few very carbon-intensive sectors. As we said in the beginning, the Directive 2009/29/EC defines exact threshold values in order to decide whether a sector is at a high risk of relocation:
- TI or CI is greater than 30%
- TI is greater than 10% and CI is greater than 5%

If one of these criteria is met, a sector is considered to be at risk of carbon leakage. For our sector 2051: Manufacture of explosives this means it is not exempted. Now let's add these thresholds to our plot to see whether many sectors are considered to be at a high risk of relocation.

Task:

To represent the threshold values, we want to add two vertical and two horizontal lines to our plot with the two routines geom_vline and geom_hline. Check the info box for information on these routines. Add the following lines to MyPlot:
- a horizontal line with intercept at 30% CI in red
- a horizontal line with intercept at 5% CI in blue
- a vertical line with intercept at 30% TI in green
- a vertical line with intercept at 10% TI in black

info("geom_vline and geom_hline") # Run this line (Strg-Enter) to show info

# Enter your code here

With these thresholds now set, we want to use the same classification for the sectors as in the underlying paper:
- $\textrm{Group A: } CI \gt 30$, i.e. everything above the red line, representing sectors with a high carbon intensity
- $\textrm{Group B: } TI \gt 30 \ \& \ CI \lt 30$, i.e. everything to the right of the green line but below the red line, representing sectors with high trade intensity and mediocre carbon intensity
- $\textrm{Group C: } 5 \lt CI \lt 30 \ \& \ 10 \lt TI \lt 30$, i.e. the square between all lines, representing sectors with mediocre trade and carbon intensity

The remaining sectors are the ones that are not exempted from auctioning. We can represent this classification with the following graphic:

Furthermore, since B is a very big group, we want to separate it into two parts:
- Group B with CI<5: sectors with high trade intensity and low to no carbon intensity
- Group B with CI between 5 and 30: sectors with high trade intensity and mediocre carbon intensity
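As an illustration, here is a minimal sketch of how this classification could be reproduced in code from our dataset NACE (the group labels are our own; in the later exercises the data already ships the final groups pre-computed in the variable xquad):

# Assign each NACE sector to a group according to the thresholds above
library(dplyr)
NACE.groups = mutate(NACE, group = case_when(
  CI.mean > 30               ~ "A",          # high carbon intensity
  TI.mean > 30 & CI.mean < 5 ~ "B, CI<5",    # high TI, low CI
  TI.mean > 30               ~ "B, CI>5",    # high TI, mediocre CI
  CI.mean > 5 & TI.mean > 10 ~ "C",          # mediocre TI and CI
  TRUE                       ~ "not exempt"
))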

Now with this classification set, we can move on to the next exercise and analyze the characteristics of these exempted sectors. Before that, let's conclude this exercise with a quiz.

Task:

You can enter whatever you think is needed to resolve the quiz here.

# Enter your command here

! addonquizNACE

Exercise 2.1 - Graphical approach

As we said in the beginning, the risk of relocation has two different sides: if a company decides to relocate to an unregulated country, the country it leaves faces job losses, and since the company's pollution is now unregulated, we probably get an increase in greenhouse gas emissions, which are a global bad (Ellerman et al. (2010, p. 193-195)). The policy accounts for this by granting firms in sectors with high CI and/or TI all of their benchmarked permits for free in order to lower their burden from auctioning. But did this exemption undermine the effectiveness of the policy? Are only firms with a small amount of greenhouse gas (GHG) emissions exempted? Is the bulk of polluters still forced to trade the majority of their permits or to reduce their emissions? Let's calculate which of our groups has the biggest share of emissions, employees and firms.

info("Variables of interest 2") # Run this line (Strg-Enter) to show info

Task:

As in the previous exercise we begin with loading the dataset. Just press edit and check afterwards to load it.

library(foreign)
Dat = read.dta("basicdata.dta")

Before we calculate the shares, we need to clean up our data with respect to missing values. If you are not sure how this works, just check the previous exercise.

Task:

Manipulate the dataset Dat by removing
- firms where nonmanufacturing is 1, i.e. firms that are not manufacturing
- firms where notETS is 1, i.e. firms that are not in the EU ETS
- firms where xquad is NA, i.e. firms we couldn't derive a group for (A, B&CI<5, B&CI>5, C, not exempt)
- firms where countid is NA
- firms where empBigorbis is NA, i.e. firms where we have no information about their employees

from the original dataset. Use the filter() command.

# Enter your code here

Now that our dataset is cleaned up, let's think about the shares. Before we calculate the total shares of each group we need to calculate them on the firm level. Our dataset contains total numbers of emissions and employees as well as each firm's share in a sector (not every firm produces in only one sector). If we divide these numbers by their respective total sums, we get the share of each firm.
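The pattern is simply "value divided by column total". A minimal generic sketch (using a hypothetical data frame df, not our actual variables):

# Dividing each value by the column total yields that row's share
library(dplyr)
df = data.frame(x = c(2, 3, 5))
mutate(df, share = x / sum(x))  # shares: 0.2, 0.3, 0.5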

Task:

In the dataset Dat, divide each of the variables countid, empBigorbis and surr by its total sum in order to get the shares. Use the mutate() command for this task. Save the share of countid in oneOsum, the share of empBigorbis in empOsum and the share of surr in surrOsum. For more info on the mutate() command check the info box below.

info("mutate() and transmute()") # Run this line (Strg-Enter) to show info

# Enter your code here

Since we are interested in the shares of each group we established in Exercise 2, we need to group by this classification. The variable xquad contains the information about the groups we derived.

Task:

Use the group_by() command in order to group the dataset Dat by the variable xquad. Save the result in the variable classification.

# Enter your code here

With this classification set we can now calculate the shares of each group stated in xquad. As described, our aim is to calculate for each group the total share of firms, employees and emissions.

Task:

Use the summarize() routine on the grouping classification. Calculate the following three variables:
- firms: the sum over the variable oneOsum
- employees: the sum over the variable empOsum
- emission: the sum over the variable surrOsum

Save it all in the data set Groups.

# Enter your code here

So let's have a first look at what we have calculated so far.

Task:

Just press check to see the results.

library(stargazer)
Groups[,2:4]=round(Groups[,2:4],4)
stargazer(Groups[,2:4], type="html",summary=FALSE, 
          title="Characteristics of groups",flip=TRUE,
          covariate.labels = c("Statistic","not exempt",
                               "A","B,CI>5","B,CI<5","C"))


With the values calculated we can again use the ggplot2 package to get a visual understanding of the data.

Task:

Just execute the code by clicking check and have a look at the resulting plot.

# Load the package 'ggplot2' and the package 'reshape2',
# which provides the melt() function
library(ggplot2)
library(reshape2)
# We need to melt the data because it is currently in a wide format,
# but for our plot we need the long format
DatMelt=melt(Groups,id.vars="xquad")
# For our barplot we want the different sectors on the x-Axis
# and their respective values as the `bar`
barplot = ggplot(DatMelt,aes(x=xquad,y=value,fill=variable))
# For each sector we want their values of firms, employment and emission
# displayed side by side. Position="dodge" allows us to do this
barplot+geom_bar(stat="identity",position="dodge")+
  scale_y_continuous("") + 
  scale_x_discrete("") +
  theme(legend.position=c(1,1),legend.justification=c(1,1)) +
  scale_fill_discrete(name ="",
      labels=c("Share of firms", "Share of employment","Share of emissions"))

Now this plot confirms the worry we expressed at the beginning of this exercise: while nearly 50% of the firms are not exempted from auctioning, they only represent around 15% of all CO2 emissions. In other words, the bulk of the European industry's pollution rights are handed out for free, which strongly weakens the principle of full auctioning as stated by the ETS. As we said, one reason for this dilemma is the fear of the risk of relocation, which the industry uses in order to get exempted. But how can we modify this policy and get a larger amount of emission permits to be fully auctioned?

One approach is the "Optimal Permit Allocation", the idea of which is "that payments be distributed across firms so as to equalize marginal relocation probabilities, weighted by the damage caused by relocation" (Martin, Muûls, de Preux and Wagner (2014)). We will try to apply this idea in the upcoming exercises, but for now let's finish this exercise with a quiz.

Task:

Enter what you want in this part in order to solve the quiz.

# Enter your command here

! addonquizQuiz 2.1

Exercise 3 - Vulnerability score

As we said in the previous exercise, the idea of optimal permit allocation is to hand out payments, i.e. free permits, to firms in order to equalize their marginal relocation probabilities (weighted by the damage caused by their relocation). But how do we obtain those relocation probabilities? The approach taken in the paper is to estimate them from interview data collected from firms. In the telephone interview, one of the questions to managers was:

"Do you expect that government efforts to put a price on carbon emissions will force you to outsource part of the production of this business site in the foreseeable future, or close down completely?"

The answers were then matched to an ordinal score, the so-called "vulnerability score" (VS), which ranges from 1 to 5. These scores were set as follows:
- VS=1: the managers expected no outsourcing at all
- VS=3: at least 10% of production and/or employment is expected to be outsourced
- VS=5: the plant will probably be shut down completely
- VS=2 and VS=4 were assigned to responses in between

If you want to know more about the interview design, be sure to check the info box below.

info("About the interview") # Run this line (Strg-Enter) to show info

Let's investigate this score a little before we use it in the model. First, we want to have a look at the characteristics of the firms that responded to the interview before we look at country-specific and sectoral differences. Check the info box below for more information on the variables we are using.

info("Variables of interest 3") # Run this line (Strg-Enter) to show info

Task:

As usual we begin with loading the dataset. Just click edit and then check to load it.

library(foreign)
basic = read.dta("basicdata.dta")

The variable unique contains the information whether a firm accepted or declined the interview. If the firm accepted, unique is 1 and otherwise NA. It is also NA if a firm was sampled twice.

Task:

Erase the entries where unique is NA and remove the entries where fimpact_score_clean is NA. Since you already know how this works, the solution is already given.

Dat=filter(basic, !is.na(unique), !is.na(fimpact_score_clean))

Overall characteristics

We want to begin by looking at some characteristics of the firms that were interviewed. For this task we use the stargazer() command from the package stargazer, which allows us to design nice tables out of data and regressions, as you have seen in exercise 2. If you want more information on the stargazer package, check the info box below.

info("Stargazer") # Run this line (Strg-Enter) to show info

Task:

For this task the code is already given, so just press check.

# transmute() comes from dplyr, stargazer() from stargazer
library(dplyr)
library(stargazer)

characteristics = transmute(Dat, age=company_age_clean, 
                    turnover=firmturnover2007_EUR/1000,
                    employees=firmemployees2007,
                    EBIT=earningsbeforeI_T_2007_EUR/1000,
                    shareholder=noshareholders2007, 
                    subsidiaries=nosubsidiaries2007)

stargazer(characteristics, type="html",style="aer",
          summary.stat=c("mean","sd","p25","median","p75","n"),
          digits=0,title="Firm Characteristics",
          covariate.labels = c("Age (years)","Turnover (millions EUR)",
                               "Number of employees",
                               "EBIT (millions USD)",
                               "Number of Shareholders",
                               "Number of subsidiaries"))


We can see that our sample data is nicely spread and, as previously said, quite reliable. Let's continue by calculating the overall mean and standard deviation of the VS across all firms.

Task:

Calculate the mean and the standard deviation of fimpact_score_clean using the basic commands mean() and sd().

# Enter your code here

This implies a rather low impact of carbon pricing across all participating firms. We will later see whether this holds true for country-level and sector-level data.

Next, let's see whether the sample is spread evenly across the considered countries. To do so we need to group our dataset by the variable country and calculate the sum over the variable unique, since this is now 1 for every entry.

Task:

Use the group_by() command from the dplyr package to group the dataset Dat by country. Save the result in by.country. Uncomment the summarize(...) command and run the code.

# Enter your code here
# summarize(by.country,count=sum(unique))

Although the UK has an obvious peak in observations, overall we have a nice base for our study.

Differences by country

In this part we want to investigate whether there are any major differences in the vulnerability score across countries. For this task we use the variable country in our dataset, which represents the origin of each firm. In the previous part we already grouped our dataset by this variable, so we can directly move on to calculating the average VS in each country.

Task:

Use the summarize() command on the grouping by.country to calculate the new variable VS which shall be the mean of fimpact_score_clean. Save the result into VS.by.country.

# Enter your code here

Now let's have a look at our results so far:

Task:

Just click check to see the data.

VS.by.country$VS=round(VS.by.country$VS,2)
stargazer(VS.by.country,type="html",summary=FALSE,
          title= "Average VS for Countries",
          covariate.labels = c("Country","VS"), rownames=FALSE )


While a table with the exact values is always a sound choice, a nice plot is a more vivid way to look at data. In this case a barplot is a good choice, since it makes the differences easy to see. In this task we will use the pirateplot() routine from the package yarrr. If you want to know more about the pirateplot() routine, be sure to check the info box.

info("pirateplot()") # Run this line (Strg-Enter) to show info

Task:

Just press check and have a look at the resulting plot.

# pirateplot() comes from the package yarrr
library(yarrr)
pirateplot(data=Dat,formula=fimpact_score_clean~country, inf="ci",
           theme.o=1,pal="appletv", main="VS by country", 
           xlab="Country", ylab="VS",point.o = 0)

Note: While the thick horizontal line represents a country's mean VS, the translucent box around it represents the 95% confidence interval.

It seems that firms in Germany, France and Poland are more affected by carbon pricing than firms in the other countries, but the differences are quite small. Moreover, this effect levels out if we consider the following: in countries like Germany and France the debate about emission permits has a larger presence in the media than in some other countries. This higher presence can make managers more sensitive to the topic and therefore slightly raise the estimated VS. This leaves us with the conclusion that we don't need to treat countries differently when we try to optimize the permit allocation. Now let's see whether there are sectoral differences.

Differences by sector

As in the previous part, we want to find out whether some sectors are more affected by carbon pricing than others. For this task our dataset contains the variable mcetsdig, which assigns a sector to each firm. Let's see which sectors mcetsdig contains.

Task:

Just click check to see which sectors we want to investigate.

unique(Dat$mcetsdig)

A natural assumption would be that energy-intensive sectors like Iron & Steel, Cement or Glass are a lot more vulnerable to carbon pricing than the other sectors. So let's find out the actual values.

Task:

Calculate the average vulnerability score for each group in mcetsdig. Use again the dataset Dat to compute what you need for this task. Save your grouping in by.sector and your final calculation in VS.by.sector and print it out. Use the same style as in the previous exercise.

# Enter your code here
# 
# and uncomment afterwards 
# VS.by.sector$VS=round(VS.by.sector$VS,2)
# stargazer(VS.by.sector,type="html",summary=FALSE, rownames=FALSE,
#           title= "Average VS for Sectors",
#           covariate.labels = c("Sector","VS"))


In this case, too, a barplot is the most convenient choice to investigate the results, but this time you need to use the ggplot() routine. Check the info box for more information on ggplot() and geom_bar().

info("ggplot() and geom_bar()") # Run this line (Strg-Enter) to show info

Task:

Create a barplot out of the data VS.by.sector. To do so, save your ggplot() command into the variable barplot.sector and add geom_bar() and coord_flip() afterwards.

# Enter your code here

As we can see, this gives us a slightly different picture. Some sectors with high energy consumption like Iron & Steel, Fuels, Glass and especially Other Minerals seem to be more vulnerable. We will take this into account when we optimize the permit allocation across firms and across sectors.

Correlation to other interview variables

Since we want to use the VS as an estimate of the vulnerability of a firm to different forms of carbon pricing, we need to ensure that this score is consistent. Therefore we want to check whether the score correlates with other interview variables in the expected way. In this task we want to see whether the variables costpass_percent_z, emonitor_score_z, fp_competitorsMeuOcomp_z and ghgtargets_score_z correlate with our VS fimpact_score_z as expected. Be sure to check the info box below for a brief explanation of these variables. Furthermore, we want to detect whether there are differences if we only take EU ETS firms into account instead of the whole interview set.

info("Variables of interest 4") # Run this line (Strg-Enter) to show info

Let's first formulate our expectations for these variables:
- costpass_percent_z: Our VS should be negatively correlated with a high ability to pass costs through to customers, since such a firm can pass on the cost of carbon pricing and thereby protect itself from the negative effects of the additional costs.
- fp_competitorsMeuOcomp_z: It is plausible that this should be positively correlated, since a high share of non-EU competitors that are not affected by the carbon pricing policy does not allow the regulated firm to pass the costs on to its customers, as this would decrease its international competitiveness.
- emonitor_score_z and ghgtargets_score_z: Both should be positively correlated with the VS, since firms that face high disadvantages from carbon pricing are strongly urged to monitor and decrease their emissions as well as the number of permits they need.

In the upcoming task we want to verify these expectations by regressing the VS on the variables discussed above with simple linear regression models. Moreover, we will see whether the results are significant enough to confirm our expectations. If you want to know more about regressions, I recommend Stock and Watson (2015, p. 155 and following).

Task:

Just run the code below by pressing check.

# coeftest() and vcovHC() come from the packages lmtest and sandwich
library(lmtest)
library(sandwich)
# Filter the data
VS.all = filter(basic, !is.na(interviewset))
VS.ets = filter(VS.all,ETS_idORphase3==1)
# Robust regression for cost pass-through
costpass.all = lm(fimpact_score_z~costpass_percent_z,data=VS.all)
costpass.all.rob = coeftest(costpass.all, 
                            vcov = vcovHC(costpass.all, "HC1"))
costpass.ets = lm(fimpact_score_z~costpass_percent_z, data=VS.ets)
costpass.ets.rob = coeftest(costpass.ets, 
                            vcov = vcovHC(costpass.ets, "HC1"))
# Robust regression for share of non-EU competitors
nonEU.all = lm(fimpact_score_z~fp_competitorsMeuOcomp_z,data=VS.all)
nonEU.all.rob = coeftest(nonEU.all, 
                         vcov = vcovHC(nonEU.all, "HC1"))
nonEU.ets = lm(fimpact_score_z~fp_competitorsMeuOcomp_z, data=VS.ets)
nonEU.ets.rob = coeftest(nonEU.ets, 
                         vcov = vcovHC(nonEU.ets, "HC1"))
# Robust regression for energy monitoring
emonitor.all = lm(fimpact_score_z~emonitor_score_z,data=VS.all)
emonitor.all.rob = coeftest(emonitor.all, 
                            vcov = vcovHC(emonitor.all, "HC1"))
emonitor.ets = lm(fimpact_score_z~emonitor_score_z,data=VS.ets)
emonitor.ets.rob = coeftest(emonitor.ets, 
                            vcov = vcovHC(emonitor.ets, "HC1"))
# Robust regression for GHG targets
ghg.all = lm(fimpact_score_z~ghgtargets_score_z,data=VS.all)
ghg.all.rob = coeftest(ghg.all, 
                       vcov = vcovHC(ghg.all, "HC1"))
ghg.ets = lm(fimpact_score_z~ghgtargets_score_z, data=VS.ets)
ghg.ets.rob = coeftest(ghg.ets, 
                       vcov = vcovHC(ghg.ets, "HC1"))
# Generate a nice table
stargazer(costpass.all.rob, nonEU.all.rob,
          emonitor.all.rob, ghg.all.rob,
          type = "html", style="aer", omit="Constant",
          covariate.labels = c("Cost pass-through",
                               "Share of non-EU competitors",
                               "Energy monitoring",
                               "GHG targets"),
          title="Correlation VS and other interview variables (all firms)",
          column.labels="All firms",
          dep.var.labels = "Vulnerability Score",omit.stat="all",
          column.separate = 4)
stargazer(costpass.ets.rob, nonEU.ets.rob,
          emonitor.ets.rob, ghg.ets.rob,
          type = "html", style="aer", omit="Constant",
          covariate.labels = c("Cost pass-through",
                               "Share of non-EU competitors",
                               "Energy monitoring",
                               "GHG targets"),
          title="Correlation VS and other interview variables (EU ETS firms)",
          column.labels="EU ETS Firms",
          dep.var.labels = "Vulnerability Score",omit.stat="all",
          column.separate = 4)


These results confirm the expectations we discussed above with remarkably high significance (in the case of all firms) of at least the 1% level. Although the statistical significance is lower for some variables if we only consider EU ETS firms, we can conclude that the VS is a quite consistent estimator of the vulnerability of a firm to carbon pricing.

Before we set up our model for the optimal permit allocation we finish this exercise with a quiz.

Task:

Enter what you want in this command block in order to solve the quiz.

# Enter your command here

! addonquizQuiz 3

Exercise 4 - Optimal Permit Allocation

In this chapter we, as the legislator, want to derive a model which improves the actual law. As we have seen so far, our biggest concern is the risk of relocation, i.e. firms leaving the country because the burden of GHG regulation is too high and they can be a lot more profitable in less strict countries. As we said, this risk of relocation has two downsides: on one hand we have job losses and on the other hand the risk of carbon leakage. We will take this into account when we derive the risk of relocation for each individual firm.

A feasible model

For a firm $i$ located in a country which is regulated through the policy, we denote the firm's profit by $\pi_i(p,q_i)$, where $q_i$ is the number of free permits allocated to firm $i$ and $p$ the current permit price. Since free permits amount to a subsidy to the firm, we can expect that $\frac{\partial \pi_i(p,q_i)}{\partial q_i} \gt 0 \, \forall \, p \gt 0$, i.e. that each additional free permit increases the firm's profit. If firm $i$ were to leave its current country for an unregulated country $f$, it would earn a profit of $\pi_{if}$ there, but would have to pay a relocation cost of $\kappa_i$. Hence a profit-maximizing firm relocates if $\pi_i(p,q_i) \lt \pi_{if} - \kappa_i$. While the government knows the profit of a company in its own country quite accurately, it does not know the net cost of relocation $\epsilon_i \equiv \kappa_i - \pi_{if}$. The government does know, however, that the net cost $\epsilon_i$ is an independent and identically distributed random variable that follows a continuously differentiable distribution function $F_i(\cdot)$ with mean $\mu_{\epsilon}$ and standard deviation $\sigma_{\epsilon}$. With these assumptions made, we can state a binary relocation variable as follows: $$ y_i \equiv \mathbf{1}\{\epsilon_i \lt -\pi_i(p,q_i)\}$$ which is $1$ if a firm relocates and $0$ otherwise. The government's estimate of the probability that firm $i$ relocates is then given by $P(y_i=1|p,q_i)=F_i(-\pi_i(p,q_i))$.

As we said in the beginning, the aim of the free permits for polluting industries is to minimize the risk of relocation and to keep those industries internationally competitive. For each individual firm $i$ we define its relocation risk as its probability to relocate times the damage this relocation causes, i.e. $$\tag{1} r_i(q_i)=F_i(-\pi_i(p,q_i)) \cdot (\alpha l_i(p) + (1-\alpha) e_i(p))$$ where we define $l_i(p)$ and $e_i(p)$ as the levels of employment and emissions at permit price $p$ for firm $i$. Furthermore, $\alpha$ is the weight the government assigns to these attributes. This also implies that when a firm $i$ decides to relocate to an unregulated country, all of its emissions leak to this country and all of its jobs are lost. With the individual relocation risk defined, the overall relocation risk $R$ is simply the sum of the individual relocation risks $(1)$, i.e. $R=\sum_{i=1}^N r_i(q_i)$, where $N$ denotes the number of firms.

One last assumption lets us set up our optimization problem: the overall cap $\bar{Q}$ on free permits is exogenously fixed. This implies that the carbon price is constant and can therefore be excluded from the calculation. The aim of the government is then to minimize the total damage inflicted by relocation. To achieve this goal it can hand out free permits to each firm without exceeding the overall cap $\bar{Q}$. Mathematically, this yields the following optimization problem (primal program): $$\tag{2} \min_{q_i \ge 0} \sum_{i=1}^N r_i(q_i) \quad s.t. \quad \sum_{i=1}^N q_i \le \bar{Q}$$ By assumption we know that an additional free permit reduces the probability of relocation (this is because $F_i(-\pi_i(p,q_i))$ is a continuously differentiable distribution function). This means that the shadow price $\lambda$ has to be positive and, furthermore, that the constraint in $(2)$ has to hold with equality in an optimum, so we get the following first-order condition: $$F_i'(-\pi_i(q_i)) \frac{\partial \pi_i(q_i)}{\partial q_i} \cdot (\alpha l_i + (1-\alpha) e_i) = \lambda \quad \forall i$$ This equation says that the marginal reduction in relocation risk from the last free permit assigned has to be equalized across firms.

Let's review the idea of marginal relocation probabilities a little further. If we observed two firms with the same levels of employment and emissions but different relocation probabilities, we shouldn't give the free permits to the firm with the higher relocation probability, but to the firm where the free permits bring the biggest reduction in the relocation probability. To fully state the optimization problem we also need to frame the dual program. Its aim is to minimize the number of free permits given out while keeping the overall relocation risk below the level $\bar{R}$. Again mathematically: $$\tag{3} \min_{q_i \ge 0} \sum_{i=1}^N q_i \quad s.t. \quad \sum_{i=1}^N r_i(q_i) \le \bar{R}$$
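For completeness, here is a short sketch of how the first-order condition follows from the Lagrangian of $(2)$ (a standard Kuhn-Tucker argument, added for the curious reader): $$\mathcal{L}=\sum_{i=1}^N r_i(q_i) + \lambda \left( \sum_{i=1}^N q_i - \bar{Q} \right) \quad \Rightarrow \quad \frac{\partial \mathcal{L}}{\partial q_i}=r_i'(q_i)+\lambda=0 \quad \forall i$$ Differentiating $(1)$ gives $r_i'(q_i)=-F_i'(-\pi_i(q_i))\frac{\partial \pi_i(q_i)}{\partial q_i}(\alpha l_i + (1-\alpha) e_i)$, and substituting this into the stationarity condition yields exactly the first-order condition stated above.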

Solving numerically

With our assumptions made and our model set up, it is time to solve the programs. Since we want to take into account different relocation probability functions, a numerical approach using dynamic programming is a good choice, as it provides us with the necessary framework. Based on this, we want to take a closer look at the primal program $(2)$.

Programs with the properties discussed above are also known as cake-eating problems, since we want to distribute a fixed 'cake' (in our case the caps $\bar{R}$ and $\bar{Q}$) optimally. But instead of distributing the cake over time as in the original cake-eating problem, we distribute it across firms. First of all we can rewrite $(2)$ as $$\begin{align} \min_{0 \le q_i \le s_i} &\sum_{i=1}^N r_i(q_i) \\ s.t. \quad &s_{i+1}=s_i-q_i \\ &q_i \ge 0 \ , \ s_{i+1} \ge 0 \\ &s_1=\bar{Q} \end{align}$$ where $s_i$ denotes the amount of permits left when reaching firm $i$ in the sequence. This formulation and an arbitrary ordering of the firms yield the Bellman equation $$V_i(s_i)=\min_{0 \le q_i \le s_i} \left[ r_i(q_i)+V_{i+1}(s_i-q_i) \right]$$ for our program, where $V_i(\cdot)$ is the so-called value function and $V_{i+1}(s_i-q_i)$ is the value of leaving $s_i-q_i$ permits to the firms remaining in the sequence (see Sniedovich (1992, p. 303 and following)). We can solve this system by beginning with the last firm $N$ in the sequence, whose value function is given by $V_N(s_N)=F_N(-\pi_N(q_N))(\alpha l_N + (1-\alpha) e_N)$, and then iterating backwards to get the optimal $q_i$'s.

In the same fashion we can derive a recursive formulation of the dual program $(3)$. By inverting the relocation risk $r_i(q_i)$ of an individual firm $i$ we get $q_i=\pi_i^{-1}\left[-F_i^{-1}\left(\frac{r_i}{\alpha l_i + (1-\alpha)e_i}\right)\right]$. We can plug that into the dual program $(3)$ and rewrite it as a Bellman equation: $$W_i(s_i)=\min_{0 \le r_i \le s_i} \pi_i^{-1}\left[-F_i^{-1}\left(\frac{r_i}{\alpha l_i + (1-\alpha)e_i}\right)\right] + W_{i+1}(s_i-r_i) $$ where the state $s_i$ now denotes the risk budget left when reaching firm $i$, with $s_1=\bar{R}$. Again we can solve this problem with the same approach as for the primal program. If you want to read more about cake-eating problems and dynamic programming approaches, I recommend Sniedovich (1992) or Adda and Cooper (2003).

Now let's illustrate this idea with a little numerical example. Assume we have 3 firms, 3 permits to give out ($\bar{Q}=3$) and we want to minimize the damage inflicted by relocation by distributing these permits optimally. The following table represents the risk of each firm for each amount of permits $q_i$ it receives.

Task:

Just press edit and check afterwards.

# Grid of permits
q_i=seq(0,3)
# Risk associated with number of permits for each firm
risk.1=c(10,8,7,5); risk.2=c(8,7,6,6); risk.3=c(5,5,4,3)
risk.table=data.frame(q_i,risk.1,risk.2,risk.3)
risk.table

For example, if firm $1$ receives zero permits, the damage inflicted by its relocation is $10$, and for larger amounts it drops. Now let's evaluate the Bellman equation as stated above. Obviously $V_3$ equals $r_3$, since this is the last firm in our sequence. For $V_2$ we need to distinguish 4 cases: $s_2$ can equal 0, 1, 2 or 3. For $V_2(s_2=0)$ we have no permits to distribute and therefore get $V_2(s_2=0,q_2=0)=r_2(0)+V_3(0)=13$. If we have 1 permit to distribute we get $V_2(s_2=1,q_2=1)=r_2(1)+V_3(0)=12$ or $V_2(s_2=1,q_2=0)=r_2(0)+V_3(1)=13$, and since we want to minimize, we set $V_2(s_2=1)=12$. With the same approach we get the optimal $V_2(s_2=2)=r_2(2)+V_3(0)=11$ and $V_2(s_2=3)=r_2(2)+V_3(1)=11$.

Now we can calculate the value function of the first firm in our sequence, $V_1$, and thereby derive the optimal distribution of the permits. If we give firm 1 zero free permits, all 3 free permits go to the remaining firms in the sequence and we get $V_1(s_1=3,q_1=0)=r_1(0)+V_2(s_2=s_1-q_1=3)=10+11=21$. With the same calculation we get $V_1(s_1=3,q_1=1)=19$, $V_1(s_1=3,q_1=2)=19$ and $V_1(s_1=3,q_1=3)=18$. Again, optimality requires choosing the smallest value, so $V_1(s_1=3)=18$, which implies that firm 1 gets all 3 free permits while the other two firms get none, since this gives us the highest reduction in relocation risk (weighted by the damage).
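Since the risk table from the chunk above is still in memory, here is a minimal sketch that automates this backward induction (the names risk, V, policy, N and S are our own and not part of the paper's code):

# Backward induction for the 3-firm example (a sketch, not the paper's solver)
risk = cbind(risk.1, risk.2, risk.3)  # rows: q_i = 0,...,3; columns: firms
N = 3; S = 3                          # number of firms and of free permits
V = matrix(NA, S + 1, N)              # V[s+1, i] stores V_i(s)
policy = matrix(NA, S + 1, N)         # optimal q_i when s permits are left
# Last firm: V_N(s) is the smallest risk reachable with at most s permits
for (s in 0:S) {
  V[s + 1, N] = min(risk[1:(s + 1), N])
  policy[s + 1, N] = which.min(risk[1:(s + 1), N]) - 1
}
# Iterate backwards over the remaining firms
for (i in (N - 1):1) {
  for (s in 0:S) {
    vals = sapply(0:s, function(q) risk[q + 1, i] + V[s - q + 1, i + 1])
    V[s + 1, i] = min(vals)
    policy[s + 1, i] = which.min(vals) - 1
  }
}
V[S + 1, 1]       # minimal total risk: 18
policy[S + 1, 1]  # optimal q_1 = 3: firm 1 receives all three permits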

Note, however, that while this is easily solvable by hand for 3 firms and 3 free permits, it can get quite complex for a set of hundreds of firms and thousands of free permits distributed on a fine grid. This forces us to build an efficient numerical solver for this problem that works in the same fashion as described above. For the sake of readability we skip the implementation of the solver and move on to how we can estimate the relocation probability from the interview data.

Estimating $F_i$

Before we can solve our programs for different scenarios, we need to estimate the marginal propensity to relocate. To do so we will use our VS from the interview responses, but first we need to make some assumptions:
- We assume a linear approximation of the profit function, i.e. $\pi_i(q_i)=\delta_{0i}+\delta_{1i}q_i$
- We assume that the unobserved net cost of relocation $\epsilon_i$ follows a logistic distribution

With these assumptions made, we can rewrite the relocation probability as $$P(y_i=1|q_i)=F_i(-\pi_i(q_i))=\frac{1}{1+exp(\beta_{0i}+\beta_{1i}q_i)}$$ where we define $\beta_{0i}=\frac{\delta_{0i}+\mu_{\epsilon}}{\sigma_{\epsilon}}$ and $\beta_{1i}=\frac{\delta_{1i}}{\sigma_{\epsilon}}$. Our VS reflects the probability that a firm relocates if it did not get any free allocation at all. Another interview question asked how the VS would change if the firm got free allocation for 80 percent of its emissions. With the mapping from VS to probabilities we get values for $P(y_i=1|q_i=0)$ and $P(y_i=1|q_i=0.8e_i)$ for each firm. From these we can calculate $\beta_{0i}$ and $\beta_{1i}$ by rearranging the relocation probability to $\beta_{0i}=\ln{\left( \frac{1-P(y_i=1|q_i=0)}{P(y_i=1|q_i=0)} \right)}$ and $\beta_{1i}=\frac{1}{0.8e_i}\left(\ln{\left( \frac{1-P(y_i=1|q_i=0.8e_i)}{P(y_i=1|q_i=0.8e_i)} \right)}-\beta_{0i}\right)$, and thus get our estimate of $F_i(\cdot)$ for each firm.
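To make this rearrangement concrete, here is a minimal sketch with purely hypothetical numbers (p0 and p80 stand in for the VS-implied relocation probabilities and e for a firm's emissions; none of these values come from the actual data):

# Hypothetical inputs: relocation probability without free permits (p0),
# with free allocation for 80% of emissions (p80), and emissions e
p0 = 0.30; p80 = 0.10; e = 1000
# Rearranging P(y=1|q) = 1/(1+exp(beta0+beta1*q)) yields:
beta0 = log((1 - p0) / p0)
beta1 = (log((1 - p80) / p80) - beta0) / (0.8 * e)
# The implied relocation probability for an arbitrary allocation q
F.hat = function(q) 1 / (1 + exp(beta0 + beta1 * q))
F.hat(0)        # 0.30 by construction
F.hat(0.8 * e)  # 0.10 by construction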

With all these assumptions made and the model set up, we can pass our problem to a numerical solver for the cake-eating problems described above, where a fixed pie (in our case the threshold value for risk or permits, i.e. $\bar{R}$ or $\bar{Q}$) is distributed optimally (in our case across firms or sectors).

Since executing 16 different optimization problems with 344 observations each is quite time consuming, we will only investigate the results in the next exercise. But first let's finish with a quiz.

! addonquizQuiz 4

Exercise 5 - Exploring the Results

In this chapter we want to find out whether our efforts to optimize the allocation of free permits were successful. Because our model allows different assumptions, we want to split this up into two major topics. First, the aim of the government shall be cost reduction, i.e. minimizing the amount of free permits handed out. The second aim shall be to minimize the relocation risk. For both cases we assume different weights for the damage of job losses and carbon leakage (represented in the formula by $\alpha$) and optimize both on the firm level and on the sector level. Furthermore, we want to compare the results to phase II (grandfathering) and phase III (benchmarking) of the EU ETS to get a better understanding.

Minimizing the cost

The idea of minimizing the permits allocated for free can be seen as minimizing the amount of taxpayers' money allocated to the firms while keeping the risk of relocation at a constant level. It also means that the government could earn additional revenue, since a lot more permits would be traded on the market. That being said, we need to load the data to get a better insight.

Task:

Load the dataset optimal_firm_done.dta and assign it to the variable firm.

# Enter your code here

In this dataset we have standard information about the firms that took part in the interview. Moreover, we have information about the optimized free permits allocated to these firms under different assumptions. We want to investigate these assumptions one after another. To start, we need our reference scenario. The variable allo contains the costs of the free permits given away to the firms. We need to sum up these costs, as this will be our reference level. Additionally, we sum up the costs of the permits given out for free in 5 other cases and divide them by our reference scenario. These five cases are stored in the following variables:
- opti_emp_allo_cost: we fixed the risk of job losses ($\alpha=1$) at the level resulting from grandfathering and then optimized to calculate the cost of the free permits given out to each firm
- opti_co2_allo_cost: this time we fixed the risk of emission leakage ($\alpha=0$) at the level resulting from grandfathering
- bench: the cost of the free permits given out under the benchmark decision
- opti_emp_bench_cost: the cost of free permits given out when fixing the job risk at the level resulting from benchmarking
- opti_co2_bench_cost: the cost of the free permits given out when fixing the risk of emission leakage at the level resulting from benchmarking

The difference in the risk levels of grandfathering and benchmarking arises from the fact that under benchmarking only around half as many free permits were given out as under grandfathering. This results in a higher overall risk of relocation (in terms of job losses or emission leakage) and gives us a higher cap $\bar{R}$ in the optimization problem.

Task:

Sum up each of the variables described above and save the results in the variables sum.allo, sum.emp.allo, sum.co2.allo, sum.bench, sum.emp.bench and sum.co2.bench accordingly. Use the summarise() command for this task and save the result in cost.min.

# Enter your code here

Before we look at what we have calculated, we want to transform the costs of the permits into shares of the costs of the permits given out under grandfathering. This can easily be done by calculating $share.scenario=\frac{cost.scenario}{cost.reference} \cdot 100$

Task:

Divide the dataset by the value of sum.allo and multiply everything by $100$ to get the shares in percent. Afterwards we print them out.

# Uncomment the code lines and fill out the ???
#cost.min=100*???/???

#stargazer(cost.min[1:3],
#          title="Permits distributed for free
#          (in percent of emissions), Risk level: Grandfathering",
#          type="html",summary=FALSE, style="aer",rownames=FALSE,
#          covariate.labels = c("Grandfathering",
#                               "Objective: Jobs","Objective: CO2"))
#stargazer(cost.min[4:6],
#          title="Permits distributed for free 
#          (in percent of emissions), Risk level: Benchmarking",
#          type="html",summary=FALSE, style="aer",rownames=FALSE,
#          covariate.labels = c("Benchmarking",
#                               "Objective: Jobs","Objective: CO2"))


So let's interpret the first table. As said earlier, these results are based on the risk of relocation associated with grandfathering and a firm-level optimization of the free permits. To keep the same level of risk as under grandfathering, it would be enough to allocate between $14.3$ and $24.5$ percent of the permits given out under grandfathering, depending on our assumptions about the weight of each risk (job loss and carbon leakage). This would be a huge increase in efficiency.

Now to the other table: in the benchmarking scenario, around $52.3$ percent of the permits given out in the grandfathering scenario are handed out for free. This shortage raises the overall risk in this scenario and therefore pushes our improvements even further. With the higher risk threshold, we can reduce the amount of free permits to between $1.6$ and $13.0$ percent of the free permits used in the grandfathering scenario. We thus see that there is quite a big opportunity to raise revenue from auctioning emission permits while not exceeding the risk of carbon leakage or job loss.

Reducing the relocation risk

Another aim of the government can be to reduce the overall risk of relocation in order to be a more attractive location for business. We took this into account in our optimization by keeping the free permits given out to firms or sectors at a fixed value $\bar{Q}$. As before, we want to compare different scenarios (grandfathering and benchmarking) as well as different aims of the government (reducing job losses or carbon leakage). First up, we want to reduce the risk of job losses. Our dataset contains the allocation in each scenario and the estimates for $\beta_{0i}$ and $\beta_{1i}$. We plug these values into the formula we derived in the previous chapter, $$ F_i(-\pi_i(q_i))=\frac{1}{1+exp(\beta_{0i}+\beta_{1i}q_i)} $$ to calculate the relocation probability, and with that calculate the individual risk of relocation for each firm by $$ r_i(q_i)=F_i(-\pi_i(q_i))(\alpha l_i+(1-\alpha)e_i)$$ Finally we take the sum over the individual risks to get an estimate of the overall risk of relocation in each scenario. We begin by calculating the risk of job losses under grandfathering.

info("Variables of interest 5") # Run this line (Strg-Enter) to show info

Task:

Calculate the individual relocation probability for each firm in the grandfathering scenario. The variable allo contains the number of free permits given out to each firm in this scenario (the $q_i$). The variables beta0 and beta1 contain the estimates for each firm. Use the mutate() command for this task, save your calculation in the variable prob.grand and your whole dataset in firm.

# Enter your code here

With the relocation probability calculated, we need to weight it according to the government's objective. As said, we first want to reduce job losses, therefore our $\alpha$ equals 1 and we need to multiply by the level of employment $l_i$, which is the employment at firm $i$ divided by the total sum of employment $\left(l_i = \frac{{\textrm{employment}}_i}{\sum_i \textrm{employment}_i} \right)$.

Task:

Use again the mutate() command to calculate the level of employment $l_i$. Save the result in level.emp and save the dataset in firm.

# Enter your code here

In our dataset firm we now have, for each firm, the probability to relocate under grandfathering (prob.grand) and the level of employment (level.emp). To get the risk associated with the relocation of a firm we simply need to multiply, i.e. $\textrm{relocation risk of firm } i = \textrm{probability to relocate of firm } i \times \textrm{level of employment at firm } i$. Moreover, by summing these risks over all firms we get an estimate of the overall risk in a scenario.

Task:

Use the summarize() command to sum up the risk in the grandfathering scenario. Save it in the variable risk.grand and your whole dataset in risk. Finally print out the risk.

# Enter your code here

Let us interpret this estimate. In the grandfathering scenario (phase II of the EU ETS), and when the only concern of the government is the risk of job losses, around $0.04156413 \cdot 100 = 4.16\%$ of all jobs in our sample are at risk of relocation. Now we want to compare this to what we can achieve through optimal permit allocation across firms and sectors. To do so we need to load a second dataset which contains the estimates for optimal permit allocation across sectors.

Task:

Load the dataset optimal_sector_done.dta and save it into the variable sector.

# Enter your code here

Our next step is to merge the two datasets firm and sector into one dataset. For this task you will need the merge() command. If you're not familiar with this command, go ahead and check the info box.

info("merge()") # Run this line (Strg-Enter) to show info

Task:

Merge the two datasets by the variable id and save the new dataset into optimal.

# Enter your code here

Again we want to calculate the risk of relocation. Due to the merging, some variables have slightly changed their names, so watch out. For now we will only look at the two variables opti_emp_allo_risk and sopti_emp_allo_risk, where we optimized the allocation in the grandfathering setup (a slightly higher $\bar{Q}$ than under benchmarking) across firms and sectors in order to minimize the risk of job loss.

Task:

Calculate the risk that arises through optimizing across firms and across sectors in the same fashion as above. Be aware that these variables changed their names in the dataset optimal:
- emp is now emp.x
- beta0 is now beta0.x
- beta1 is now beta1.x

Try to do it all with a single summarize() command. Save the firm-level risk in risk.firm and the sector-level risk in risk.sector, and save the whole result in emp.allo.

# Uncomment the code lines and fill out the "???"
#emp.allo=summarize(optimal,
#              risk.firm=sum((1/(1+exp(???+beta1.x*???)))
#                            *emp.x/sum(???)),
#              risk.sector=sum((1/(1+exp(???+beta1.x*???)))
#                            *emp.x/sum(???)))

# We multiply by 100 to get percentage
#emp.allo=emp.allo*100

#stargazer(round(emp.allo,4), 
#          title="Share of employment at risk, Reference: Grandfathering",
#          type="html", rownames=FALSE, summary=FALSE, style="aer",
#          covariate.labels = c("Optimal Firm","Optimal Sector"))
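If you get stuck, a completed version that mirrors the pattern of the benchmarking code further below could look like this sketch; it plugs the optimal allocations opti_emp_allo_risk and sopti_emp_allo_risk into the relocation probability and weights by employment shares:

# Completed sketch: risk of job losses under optimal allocation
# (grandfathering reference), across firms and across sectors
emp.allo = summarize(optimal,
              risk.firm = sum((1 / (1 + exp(beta0.x + beta1.x * opti_emp_allo_risk)))
                            * emp.x / sum(emp.x)),
              risk.sector = sum((1 / (1 + exp(beta0.x + beta1.x * sopti_emp_allo_risk)))
                            * emp.x / sum(emp.x)))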


For the firm-level optimization around $2.93\%$ and for the sector-level optimization around $3.23\%$ of jobs are at risk. This means that with the same amount of permits given out as in the grandfathering scenario, the risk drops by around one percentage point. Now we want to see whether the same holds when the reference scenario is benchmarking, which leaves us with a lower amount of permits to distribute.

Task:

This time the code is already given. Just press check and see what we achieved.

emp.bench=summarize(optimal,
               risk.bench=sum((1/(1+exp(beta0.x+beta1.x*bench.x)))
                              *emp.x/sum(emp.x)),
               risk.firm=sum((1/(1+exp(beta0.x+beta1.x*opti_emp_bench_risk)))
                              *emp.x/sum(emp.x)),
               risk.sector=sum((1/(1+exp(beta0.x+beta1.x*sopti_emp_bench_risk)))
                              *emp.x/sum(emp.x)))
# We multiply by 100 to get percentage
emp.bench=emp.bench*100
stargazer(round(emp.bench,4),
          title="Share of employment at risk, Reference: Benchmarking",
          type="html",rownames=FALSE, summary=FALSE, style="aer",
          covariate.labels = c("Benchmarking","Optimal Firm","Optimal Sector"))


While in the benchmarking scenario around $6.92\%$ of all jobs are at risk of relocation, we can lower that risk with optimal permit allocation to $2.9\%$ on the firm level and still to $4.51\%$ on the sector level, which is quite an improvement. To close out, we want to look at what our calculations imply if the government's main concern were carbon leakage. In our formula this means that $\alpha=0$ throughout, so we weight by the level of emissions $e_i$, which is defined as $e_i = \frac{{\textrm{emission}}_i}{\sum \textrm{emission}_i}$.

Task:

By now you know how it all works, so simply press check and investigate the data.

co2.allo=summarize(optimal,
              risk.allo=sum((1/(1+exp(beta0.x+beta1.x*allo.x)))
                            *co2.x/sum(co2.x)),
              risk.firm=sum((1/(1+exp(beta0.x+beta1.x*opti_co2_allo_risk)))
                            *co2.x/sum(co2.x)),
              risk.sector=sum((1/(1+exp(beta0.x+beta1.x*sopti_co2_allo_risk)))
                            *co2.x/sum(co2.x)))
# We multiply by 100 to get percentage
co2.allo=co2.allo*100

co2.bench=summarize(optimal,
              risk.bench=sum((1/(1+exp(beta0.x+beta1.x*bench.x)))
                             *co2.x/sum(co2.x)),
              risk.firm=sum((1/(1+exp(beta0.x+beta1.x*opti_co2_bench_risk)))
                             *co2.x/sum(co2.x)),
              risk.sector=sum((1/(1+exp(beta0.x+beta1.x*sopti_co2_bench_risk)))
                             *co2.x/sum(co2.x)))
# We multiply by 100 to get percentage
co2.bench=co2.bench*100


stargazer(round(co2.allo,4), 
          title="Share of emissions at risk, Reference: Grandfathering",
          type="html", rownames=FALSE, summary=FALSE, style="aer", 
          covariate.labels=c("Grandfathering","Optimal Firm","Optimal Sector"))

stargazer(round(co2.bench,4),
          title="Share of emissions at risk, Reference: Benchmarking",
          type="html", rownames=FALSE, summary=FALSE, style="aer", 
          covariate.labels=c("Benchmarking","Optimal Firm","Optimal Sector"))


As we can see, we get a nice improvement when the reference scenario is grandfathering: optimal permit allocation across sectors or firms decreases the share of emissions at risk by between $1.32$ and $2.51$ percentage points. When the reference scenario is benchmarking, which has a higher baseline risk due to the lower amount of permits given out, we get a huge decrease of $9.59$ percentage points by distributing the permits with optimal allocation across firms, but only a decrease of $0.88$ percentage points if we optimize across sectors. This success of benchmarking is mainly driven by its within-sector allocation.

Now before we finish this problem set with a conclusion, it's time for one last quiz.

Task:

You can enter any commands you need to solve the quiz here.

# Enter your command here

! addonquizQuiz 5

Now that we are finished with the optimal permit allocation we want to recap in the next exercise what we achieved throughout this problem set and state some final thoughts.

Exercise 6 - Conclusion

In our problem set we were interested in finding out how governments can optimally subsidize firms that lose international competitiveness due to laws regulating negative side effects of industry. Since these firms argue that they have to relocate to unregulated countries, governments are forced to intervene in the market, because the relocation of firms means job losses, lost tax revenue and leakage of emissions, which were in our case the very thing we wanted to regulate. We therefore derived a model that minimizes the expected damage arising through relocation. This model distributes subsidies across firms or sectors so as to equalize their impact on the government's objective function. Subsequently, we compared our model to the EU ETS, the biggest cap-and-trade scheme for emissions, where subsidies have been given out as free permits in order to decrease the risk of relocation. We found that optimal allocation achieves large reductions in job risk and emission leakage in comparison with the actual compensation rules. Moreover, our model needs far fewer free permits to maintain the same risk of relocation as the current rules. This also implies higher auction revenue, since more permits would be traded in auctions, and therefore the social cost of the policy would fall.

While our approach takes the objectives stated by the EU ETS (prevention of carbon leakage and job losses through relocation) and achieves significant improvements over the actual allocation rules, it is debatable whether these are the only objectives of the policy and whether the free permits were deliberately given out more generously. It is plausible that this happened in order to build stronger political support by subsidizing carbon- and trade-intensive sectors at the start of the cap-and-trade system. Moreover, since the data we use to estimate the relocation probabilities originates from interviews, these values could be manipulated by the firms in order to feign a higher relocation probability and thereby obtain more free permits if this scheme were applied in practice. A possible solution would be to derive a mechanism based on publicly available firm characteristics.

Independent of these caveats, our results show that further research on optimal permit allocation and on the relocation propensities of firms will be beneficial for the government's objectives as well as for the taxpayers. More generally, the approach of optimal permit allocation can be adapted to a variety of scenarios in which compensation schemes are in place, in order to establish a more efficient policy.

If you want to see the awards you have collected, just press edit and check afterwards. In total there were $12$ awards you could achieve.

awards()

I hope you had fun solving this problem set. If so, make sure you check github.com/skranz/RTutor for more interesting problem sets.

Exercise 7 - References

Bibliography

R and Packages in R


