knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)

library(nmecr)
library(dplyr)
library(lubridate)
library(ggplot2)
library(tidyr)
The nmecr package was written to integrate past efforts of the energy efficiency industry and to streamline meter-based Measurement & Verification of EE projects. It brings together simple linear regression with outside air temperature, the change-point models from ASHRAE 1050-RP, and the Time-of-Week and Temperature model developed by the Lawrence Berkeley National Laboratory, and adds enhancements that accurately predict the energy use profiles of facilities with multiple operating modes.
Meter-based energy efficiency programs are proliferating across the globe. This proliferation is driven by the widespread availability of high-frequency energy use measurements from new metering technology, as well as by advancements in statistical regression and other empirical energy data modeling methodologies. Program administrators may report savings as the overall reduction in normalized metered energy consumption (NMEC). This method of determining savings is based on analysis of meter data collected before and after energy efficiency and conservation measures are installed. Referred to in the industry as advanced measurement and verification (M&V), this combination of time-granular data and updated modeling methods provides several advantages over other methods used to quantify the benefits of energy efficiency:
It reliably determines the actual savings achieved at the meter
It provides fast feedback on the facility's energy performance and savings progress
It enables the identification and troubleshooting of issues that prevent savings realization
The nmecr package streamlines the application of the NMEC approach by enabling the:
Management of high frequency, high volume data.
Execution of advanced, state-of-the-art time-series data modeling algorithms.
Comprehensive assessment of the validity of energy data models in specific applications.
Quantification of uncertainties and risks associated with the energy savings projections in accordance with ASHRAE Guideline 14.
For an overview of nmecr, please refer to nmecr_overview.pdf.
nmecr includes two datasets for this vignette: temp and eload. These contain three years (03/01/2012 - 02/28/2015) of energy use and outside temperature data from a commercial building in North America. We will use the first year (03/01/2012 - 02/28/2013) of energy use and temperature data for this analysis, since at the start of a project we can generally collect only about a year of data. Additionally, a TMY3 dataset is provided as TMY3.
# Load data into an R session:

data(eload)
data(temp)
data(TMY3)
eload and temp are data frames with two variables each. When using nmecr functions, ensure that the column headers of your datasets match those shown below.
Eload:
head(eload, n=5)
Temp:
head(temp, n=5)
create_dataframe() combines the eload and temp dataframes into one, filters by the specified start and end dates, and aggregates to an hourly, daily, or monthly data interval. It aligns the data such that each timestamp represents the attributes up until that point in time, e.g. an eload value with the timestamp 03/02/2012 represents the energy consumption up until then, i.e. the energy consumption of 03/01/2012. If operating mode data is supplied, this information is added to the dataframe created by create_dataframe().
# Baseline Dataframe

baseline_df <- create_dataframe(eload_data = eload,
                                temp_data = temp,
                                start_date = "03/01/2012 00:00",
                                end_date = "02/28/2013 23:59",
                                convert_to_data_interval = "Daily")

head(baseline_df, 5)
# Performance Period Dataframe

performance_df <- create_dataframe(eload_data = eload,
                                   temp_data = temp,
                                   start_date = "03/01/2014 00:00",
                                   end_date = "02/28/2015 23:59",
                                   convert_to_data_interval = "Daily")

head(performance_df, 5)
The baseline period dataframe can be used for energy data modeling with one of the four modeling algorithms available in nmecr (a call sketch follows this list):

model_with_TOWT(): Time-of-Week & Temperature and Time-Only algorithms
model_with_CP(): 3-Parameter Heating, 3-Parameter Cooling, 4-Parameter, and 5-Parameter algorithms
model_with_SLR(): Simple Linear Regression algorithm
model_with_HDD_CDD(): Heating Degree Day only, Cooling Degree Day only, and combined Heating Degree Day and Cooling Degree Day algorithms
The common arguments for these four algorithms are the training dataframe, training_data (output from create_dataframe()), and the model configuration, model_input_options (output from assign_model_inputs(), described below).

model_with_TOWT() has two additional arguments: prediction_data and occupancy_info. prediction_data (output from create_dataframe()) provides the independent variable data over which the model can be applied to compute predictions. occupancy_info is an optional parameter to manually define the occupancy for the time-of-week aspect of the algorithm.
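As a sketch of how prediction_data might be used (not a required step in this workflow), the baseline model can be asked to compute predictions over the performance period's independent variables:

# Sketch: fit a TOWT model on the baseline period and generate predictions
# over the performance period's time and temperature data.
TOWT_with_predictions <- model_with_TOWT(training_data = baseline_df,
                                         prediction_data = performance_df,
                                         model_input_options =
                                           assign_model_inputs(regression_type = "TOWT"))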
model_with_HDD_CDD() has two additional optional arguments: HDD_balancepoint and CDD_balancepoint. As the names suggest, these define the balance points for the heating and cooling degree day calculations. If left undefined, HDD and CDD values are calculated using a balance point of 65.
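For example, a degree-day model with explicitly chosen balance points might look like the sketch below; the balance point values are illustrative only, and the regression_type string is an assumption to be checked against ?assign_model_inputs.

# Sketch: an HDD/CDD model with user-defined balance points (60 and 70 are
# illustrative values, not recommendations).
HDD_CDD_baseline_model <- model_with_HDD_CDD(training_data = baseline_df,
                                             model_input_options =
                                               assign_model_inputs(regression_type = "HDD-CDD Multivariate Regression"),
                                             HDD_balancepoint = 60,
                                             CDD_balancepoint = 70)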
The four modeling algorithms share many specifications, which can be set using the assign_model_inputs() function. The following code chunk shows the default values for these model inputs:
model_input_options <- assign_model_inputs(timescale_days = NULL,
                                           has_temp_knots_defined = FALSE,
                                           equal_temp_segment_points = TRUE,
                                           temp_segments_numeric = 6,
                                           temp_knots_value = c(40, 45, 50, 60, 65, 90),
                                           initial_breakpoints = c(50, 65),
                                           regression_type = "TOWT",
                                           occupancy_threshold = 0.65,
                                           day_normalized = FALSE)
As noted in the nmecr_predictability.pdf vignette, the Time-of-Week and Temperature model is best suited for this facility.
Time of Week and Temperature:
TOWT_baseline_model <- model_with_TOWT(training_data = baseline_df,
                                       model_input_options =
                                         assign_model_inputs(regression_type = "TOWT"))
TOWT_baseline_df <- TOWT_baseline_model$training_data %>%
  tidyr::gather(key = "variable", value = "value",
                -c("time", "temp", "CDD", "HDD"))

ggplot2::ggplot(TOWT_baseline_df, aes(x = time, y = value)) +
  geom_line(aes(color = variable), size = 1) +
  xlab("Time") +
  scale_y_continuous(name = "Energy Data & Model Fit (kWh)", labels = scales::comma) +
  theme_minimal() +
  theme(legend.position = "bottom") +
  theme(legend.title = element_blank())
Figure 1: Baseline Energy Use modeled with Time-of-week and Temperature over Time
TOWT_performance_model <- model_with_TOWT(training_data = performance_df,
                                          model_input_options =
                                            assign_model_inputs(regression_type = "TOWT"))
TOWT_performance_df <- TOWT_performance_model$training_data %>%
  tidyr::gather(key = "variable", value = "value",
                -c("time", "temp", "CDD", "HDD"))

ggplot2::ggplot(TOWT_performance_df, aes(x = time, y = value)) +
  geom_line(aes(color = variable), size = 1) +
  xlab("Time") +
  scale_y_continuous(name = "Energy Data & Model Fit (kWh)", labels = scales::comma) +
  theme_minimal() +
  theme(legend.position = "bottom") +
  theme(legend.title = element_blank())
Figure 2: Performance Period Energy Use modeled with Time-of-week and Temperature over Time
Model Statistics
To ensure that the models meet goodness-of-fit criteria, we can run the calculate_summary_statistics() function:
TOWT_baseline_stats <- calculate_summary_statistics(TOWT_baseline_model)
TOWT_performance_stats <- calculate_summary_statistics(TOWT_performance_model)

all_stats <- bind_rows(TOWT_baseline_stats, TOWT_performance_stats)

model_names <- c("TOWT Baseline", "TOWT Performance")
all_stats <- bind_cols("Model Name" = model_names, all_stats)

all_stats
Using the models' statistics and the uncertainty associated with 10% savings, we can determine the validity of the NMEC approach for a given project. For the building described here, the key values for this assessment are listed below (a quick look at these metrics in all_stats is sketched after the list):
CVRMSE (should be < 25%):
Baseline Model: 6.8%
Performance Period Model: 7.3%
NMBE (should be < 0.5%):
Baseline Model: ~0.00%
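To compare the computed statistics against these thresholds, the relevant columns can be pulled out of all_stats as sketched below. The exact column names in the summary statistics dataframe may differ between nmecr versions, so the loose contains() matching is an assumption.

# Sketch: view only the goodness-of-fit columns for the two models.
# Column names are matched loosely because they are assumed, not verified.
all_stats %>%
  dplyr::select(`Model Name`, dplyr::contains("CVRMSE"), dplyr::contains("NMBE"))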
The California Public Utilities Commission requires savings to be detectable above model variations. nmecr interprets this using ASHRAE Guideline 14-2014's formulation for savings uncertainty, which relates the savings uncertainty to the model goodness-of-fit metric CV(RMSE), the confidence level, the amount of savings, the amount of data used to develop the model, and the amount of data used to report savings. The formulation includes the correction polynomial developed by Sun and Baltazar[^1] for daily and monthly models. It also includes a correction for autocorrelation, which is present mainly in models developed from daily and hourly data. LBNL has shown that this formulation, even with the autocorrelation correction, underestimates the savings uncertainty; more work on this issue is needed. Until a better formulation is available, nmecr uses ASHRAE's method as an estimate only.
[^1]: Sun, Y. and Baltazar, J.C., "Analysis and Improvement of the Estimation of Building Energy Savings Uncertainty," ASHRAE Transactions, vol. 119, May 2013
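For orientation, the sketch below implements only the base form of the ASHRAE Guideline 14 fractional savings uncertainty described above, omitting the Sun and Baltazar polynomial and the autocorrelation correction that nmecr applies; it illustrates the quantities involved and is not nmecr's internal implementation. All names are illustrative.

# A minimal sketch of the base ASHRAE Guideline 14 fractional savings
# uncertainty (FSU): larger CV(RMSE) and smaller savings fractions increase
# uncertainty; more baseline (n) and reporting (m) data points decrease it.
fsu_sketch <- function(cv_rmse, savings_fraction, n_baseline, m_reporting,
                       confidence_level = 90) {
  # two-sided t-statistic at the chosen confidence level
  t_stat <- qt(1 - (1 - confidence_level / 100) / 2, df = n_baseline - 1)
  (1.26 * t_stat * cv_rmse / savings_fraction) *
    sqrt((1 + 2 / n_baseline) / m_reporting)
}

# e.g. a 6.8% CV(RMSE) daily model, 10% savings, one year of baseline data
# and one year of reporting data:
fsu_sketch(cv_rmse = 0.068, savings_fraction = 0.1,
           n_baseline = 365, m_reporting = 365)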
TOWT_baseline_savings_10 <- calculate_savings_and_uncertainty(prediction_df = NULL,
                                                              savings_fraction = 0.1,
                                                              modeled_object = TOWT_baseline_model,
                                                              model_summary_statistics = TOWT_baseline_stats,
                                                              confidence_level = 90)

TOWT_baseline_savings_10$savings_summary_df
In addition to evaluating the model metrics, it is essential to ensure that the modeled profile follows the actual energy use closely (see Figures 1 and 2 above) and that the four assumptions of linear regression are met by the model. For a step-by-step process of evaluating the model against the four assumptions of regression, read nmecr_predictability.pdf.
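As a starting point for those checks, the baseline model's residuals can be inspected directly. The sketch below assumes, as in the plots above, that the model's training_data contains eload and model_fit columns.

# Sketch: basic residual diagnostics for the baseline TOWT model.
# Residuals are computed as actual energy use minus the model fit.
residuals_df <- TOWT_baseline_model$training_data %>%
  dplyr::mutate(residual = eload - model_fit)

par(mfrow = c(1, 3))
plot(residuals_df$model_fit, residuals_df$residual,
     xlab = "Fitted (kWh)", ylab = "Residual (kWh)")            # linearity / constant variance
qqnorm(residuals_df$residual); qqline(residuals_df$residual)    # normality
acf(residuals_df$residual, main = "Residual autocorrelation")   # independence
par(mfrow = c(1, 1))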
Normalized savings represent the energy savings that the measures would achieve under a 'normal' set of conditions, which in this case is represented by TMY3 weather data.
Inspecting the TMY3 data, we see that it has hourly data intervals:
head(TMY3, n = 5)
Since we have our models in daily time intervals, it will be best to aggregate the TMY3 data to the daily level as well:
TMY3_daily <- aggregate(eload_data = NULL,
                        temp_data = TMY3,
                        convert_to_data_interval = "Daily",
                        temp_balancepoint = 65)
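As a quick sanity check (not part of the original workflow), the aggregated dataframe can be inspected the same way as the others:

head(TMY3_daily, n = 5)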
Previously, nmecr did not provide a direct calculation for normalized savings, and users had to calculate them manually. The calculate_norm_savings_and_uncertainty() function, added in a recent release, now performs this calculation.
norm_savings <- calculate_norm_savings_and_uncertainty(baseline_model = TOWT_baseline_model,
                                                       baseline_stats = TOWT_baseline_stats,
                                                       performance_model = TOWT_performance_model,
                                                       performance_stats = TOWT_performance_stats,
                                                       normalized_weather = TMY3_daily,
                                                       confidence_level = 90)

as.data.frame(norm_savings$normalized_savings_df)
The Normalized Savings are found to be 9.1% with an associated uncertainty of 30%.