As the number of cyber-attacks grows daily, so does the delay in detecting them. For instance, in 2015 the Office of Personnel Management (OPM) discovered that approximately 21.5 million records of Federal employees and contractors had been stolen. On average, more than 200 days pass between an attack and its discovery; in the case of the OPM breach, the attack had been underway for almost a year. Cyber analysts are expected to inspect numerous potential incidents every day, yet rarely have the time or resources to evaluate them all. anomalyDetection aims to shorten the window in which cyber-attacks go unnoticed and to aid in their discovery among the millions of daily logged events, while minimizing false positives and negatives. By combining a tabulated vector approach with multivariate analysis functionality, anomalyDetection gives cyber analysts the ability to efficiently identify time periods associated with suspected anomalies for further evaluation.

Functions

anomalyDetection provides the following functions to aid in the detection of potential cyber anomalies:

| Function | Purpose |
|:-----------|:--------------------------------------------------------|
| tabulate_state_vector | Employs a tabulated vector approach to transform security log data into unique counts of data attributes based on time blocks. |
| block_inspect | Creates a list where the original data has been divided into the blocks denoted in the state vector. |
| mc_adjust | Handles issues with multicollinearity. |
| mahalanobis_distance | Calculates the distance between the elements in data and the mean vector of the data for outlier detection. |
| bd_row | Indicates which variables in data are driving the Mahalanobis distance for a specific row, relative to the mean vector of the data. |
| horns_curve | Computes Horn's Parallel Analysis to determine the factors to retain within a factor analysis. |
| factor_analysis | Reduces the structure of the data by relating the correlation between variables to a set of factors, using the eigen-decomposition of the correlation matrix. |
| factor_analysis_results | Provides easy access to factor analysis results. |
| kaisers_index | Computes scores designed to assess the quality of a factor analysis solution. It measures the tendency towards unifactoriality for both a given row and the entire matrix as a whole. |
| principal_components | Relates the data to a set of components through the eigen-decomposition of the correlation matrix, where each component explains some variance of the data. |
| principal_components_result | Provides easy access to principal component analysis results. |
| get_all_factors | Finds all factor pairs for a given integer. |

anomalyDetection also incorporates the pipe operator (%>%) from the magrittr package for streamlining function composition. To illustrate the functionality of anomalyDetection we will use the security_logs data set, which ships with anomalyDetection and mimics common information that appears in security logs.

library(dplyr)            # common data manipulations
library(tidyr)            # common data manipulations
library(tibble)           # turning output into convenient tibble
library(ggplot2)          # visualizations
library(anomalyDetection)

security_logs <- as_tibble(anomalyDetection::security_logs)
security_logs

State Vector Creation

To apply the statistical methods that we'll see in the sections that follow, we employ the tabulated vector approach. This approach transforms the security log data into unique counts of data attributes based on pre-defined time blocks. As each time block is generated, the categorical fields are separated by their levels and a count of occurrences for each level is recorded into a vector. All numerical fields, such as bytes in and bytes out, are recorded as a summation within the time block. The result is what we call a "state vector matrix".

Thus, for our security_logs data we can create our state vector matrix by grouping the logs into time blocks of 10 events each. What results is the summary of instances for each categorical level in our data for each time block. For example, row one represents the first time block, in which there were 2 instances of CISCO as the device vendor, 1 instance of IBM, etc.

tabulate_state_vector(security_logs, 10)

Multicollinearity Adjustment

Prior to proceeding with any multivariate statistical analyses, we should inspect the state vector for multicollinearity, to avoid issues such as matrix singularity, rank deficiency, and strongly correlated values, and remove any columns that pose a problem. We can use mc_adjust to handle multicollinearity by first removing any columns whose variance is close to or less than a minimum level of variance (min_var), then removing linearly dependent columns, and finally removing any columns whose absolute correlation with another column is equal to or greater than max_cor.

(state_vec <- security_logs %>%
  tabulate_state_vector(10) %>%
  mc_adjust())

By default, mc_adjust removes all columns that violate the variance, dependency, and correlation thresholds. Alternatively, we can supply action = "select", which lets the user interactively choose which of the variables that violate the correlation threshold should be removed.
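For example, a call such as the one sketched below would prompt the user to choose which correlated columns to drop. The threshold values shown are purely illustrative, not necessarily the package defaults.

security_logs %>%
  tabulate_state_vector(10) %>%
  # illustrative thresholds; action = "select" prompts interactively for
  # which of the highly correlated columns to remove
  mc_adjust(min_var = 0.1, max_cor = 0.9, action = "select")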

Multivariate Statistical Analyses

With our data adjusted for multicollinearity we can now proceed with multivariate analyses to identify anomalies. First we'll use the mahalanobis_distance function to measure how far each observation lies from the mean vector of the data, independent of scale. This is computed as

$$MD = \sqrt{(x - \bar{x})^T C^{-1}(x-\bar{x})} \tag{1}$$

where $x = (x_1, \dots, x_p)$ is a vector of observations on $p$ variables, $\bar{x} = (\bar{x}_1, \dots, \bar{x}_p)$ is the mean vector of the data, and $C^{-1}$ is the inverse of the data covariance matrix.
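As a point of reference, Eq. (1) can be reproduced directly with base R. The sketch below computes raw (non-normalized) distances, so it corresponds roughly to mahalanobis_distance with output = "md" and normalize = FALSE; whether the package handles the covariance matrix in exactly this way is an assumption on our part.

# hand computation of Eq. (1) using base R
X  <- as.matrix(state_vec)
md <- sqrt(stats::mahalanobis(X, center = colMeans(X), cov = cov(X)))
head(md)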

Here, we include output = "both" to return both the Mahalanobis distance and the absolute breakdown distances and normalize = TRUE so that we can compare relative magnitudes across our data.

state_vec %>%
  mahalanobis_distance("both", normalize = TRUE) %>%
  as_tibble()

We can use this information in a modified heatmap visualization to identify outlier values across our security log attributes and time blocks. The larger and brighter the dot, the more extreme the outlier and the more it deserves attention.

state_vec %>%
  mahalanobis_distance("both", normalize = TRUE) %>%
  as_tibble() %>%
  mutate(Block = 1:n()) %>% 
  gather(Variable, BD, -c(MD, Block)) %>%
  ggplot(aes(factor(Block), Variable, color = MD, size = BD)) +
  geom_point()

We can build on this with bd_row to identify which security log attributes in the data are driving the Mahalanobis distance. bd_row measures the relative contribution of each variable, $x_i$, to $MD$ by computing

$$BD_i = \Bigg|\frac{x_i - \bar{x}_i}{\sqrt{C_{ii}}} \Bigg| \tag{2}$$

where $C_{ii}$ is the variance of $x_i$. Furthermore, bd_row will look at a specified row and rank-order the columns by those that are driving the Mahalanobis distance. For example, the plot above identified block 17 as having the largest Mahalanobis distance, suggesting some abnormal activity may be occurring during that time block. We can drill down into that block and look at the top 10 security log attributes driving the Mahalanobis distance, as these may be areas that require further investigation.

state_vec %>%
  mahalanobis_distance("bd", normalize = TRUE) %>%
  bd_row(17, 10)

Next, we can use factor analysis. Factor analysis is a dimensionality reduction technique designed to identify the underlying structure of the data; it relates the correlations between variables through a set of factors to link together seemingly unrelated variables. We first explore the factor loadings (the correlations between the columns of the state vector matrix and the suggested factors) and then compare the factor scores against one another for anomaly detection. The basic factor analysis model is

$$X = \Lambda f + e \tag{3}$$

where $X$ is the vector of responses $X = (x_1, \dots, x_p)$, $f$ is the vector of common factors $f = (f_1, \dots, f_q)$, $e$ is the vector of unique factors $e = (e_1, \dots, e_p)$, and $\Lambda$ is the matrix of factor loadings. For the desired results, anomalyDetection uses the correlation matrix. Factor loadings are correlations between the factors and the original data and can thus range from -1 to 1, indicating how much each factor affects each variable. Values close to 0 imply a weak effect on the variable.

The factor loadings matrix describes how each original data variable is related to the resultant factors and can be computed as

$$\Lambda = \bigg[\sqrt{\lambda_1}e_1, \dots, \sqrt{\lambda_p}e_p \bigg] \tag{4}$$

where $\lambda_i$ is the eigenvalue for each factor, $e_i$ is the eigenvector for each factor, and $p$ is the number of columns. Factor scores are used to examine the behavior of the observations relative to each factor and can be used for anomaly detection. Factor scores are calculated as

$$\hat{f} = X_s R^{-1} \Lambda \tag{5}$$

where $X_s$ is the standardized observations, $R^{-1}$ is the inverse of the correlation matrix, and $\Lambda$ is the factor loadings matrix. To simplify the results for interpretation, the factor loadings can undergo an orthogonal or oblique rotation. Orthogonal rotations assume independence between the factors, while oblique rotations allow the factors to correlate. anomalyDetection uses the most common rotation option, known as varimax. Varimax rotates the factors orthogonally to maximize the variance of the squared factor loadings, which forces large loadings to increase and small ones to decrease, providing easier interpretation.
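To make Eqs. (4) and (5) concrete, the loadings and scores can also be computed by hand from the eigen-decomposition of the correlation matrix. The following is a minimal sketch: the choice of three factors is purely illustrative (the package selects this via horns_curve), and factor_analysis additionally applies the varimax rotation, so its output will differ.

R_mat <- cor(as.matrix(state_vec))                            # correlation matrix
eig   <- eigen(R_mat)                                         # eigen-decomposition
q     <- 3                                                    # illustrative number of factors
L_mat <- eig$vectors[, 1:q] %*% diag(sqrt(eig$values[1:q]))   # Eq. (4): factor loadings
X_s   <- scale(as.matrix(state_vec))                          # standardized observations
f_hat <- X_s %*% solve(R_mat) %*% L_mat                       # Eq. (5): factor scores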

To begin using factor analysis, the reduced state vector matrix is first passed to the horns_curve function, which uses its dimensions to perform Horn's Parallel Analysis and find the recommended set of eigenvalues.

horns_curve(state_vec)

Next, the dimensionality is determined by finding the eigenvalues of the correlation matrix of the state vector matrix and retaining only those factors whose eigenvalues are greater than or equal to those produced by horns_curve. We use factor_analysis to reduce the state vector matrix into resultant factors. The factor_analysis function generates a list containing five outputs:

  1. fa_loadings: numerical matrix with the original factor loadings
  2. fa_scores: numerical matrix with the row scores for each factor
  3. fa_loadings_rotated: numerical matrix with the varimax rotated factor loadings
  4. fa_scores_rotated: numerical matrix with the row scores for each varimax rotated factor
  5. num_factors: numeric vector identifying the number of factors

state_vec %>%
  horns_curve() %>%
  factor_analysis(state_vec, hc_points = .) %>%
  str()

For easy access to these results we can use the factor_analysis_results parsing function, which will parse the results either by their list name or by location. For instance, to extract the rotated factor scores you can use factor_analysis_results(data, results = fa_scores_rotated) or factor_analysis_results(data, results = 4), as demonstrated below.

state_vec %>%
  horns_curve() %>%
  factor_analysis(state_vec, hc_points = .) %>%
  factor_analysis_results(4) %>%
  # show the first 10 rows and 5 columns for brevity
  .[1:10, 1:5]

To evaluate the quality of a factor analysis solution, Kaiser proposed the Index of Factorial Simplicity (IFS). The IFS is computed as

$$IFS = \frac{\sum_j\Big[q\sum_s v_{js}^4 - \big(\sum_s v_{js}^2\big)^2\Big]}{\sum_j\Big[(q-1)\big(\sum_s v_{js}^2\big)^2\Big]} \tag{6}$$

where $q$ is the number of factors, $j$ the row index, $s$ the column index, and $v_{js}$ the value in the loadings matrix. Kaiser also proposed the following interpretation of the IFS score:

  1. In the .90s: Marvelous
  2. In the .80s: Meritorious
  3. In the .70s: Middling
  4. In the .60s: Mediocre
  5. In the .50s: Miserable
  6. < .50: Unacceptable

Thus, to assess the quality of our factor analysis results we apply kaisers_index to the rotated factor loadings. As shown below, the output value of 0.702 suggests that our results are "middling".

state_vec %>%
  horns_curve() %>%
  factor_analysis(data = state_vec, hc_points = .) %>%
  factor_analysis_results(fa_loadings_rotated) %>%
  kaisers_index()
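For reference, Eq. (6) can also be computed directly from a loadings matrix. The helper below, ifs(), is a hypothetical sketch of ours (not part of the package); assuming kaisers_index implements Eq. (6), applying ifs() to the same rotated loadings should give essentially the same value.

# sketch of Eq. (6): V is a loadings matrix with rows = variables and
# columns = q factors; ifs() is a hypothetical helper, not a package function
ifs <- function(V) {
  q   <- ncol(V)
  num <- sum(q * rowSums(V^4) - rowSums(V^2)^2)
  den <- sum((q - 1) * rowSums(V^2)^2)
  num / den
}

state_vec %>%
  horns_curve() %>%
  factor_analysis(state_vec, hc_points = .) %>%
  factor_analysis_results(fa_loadings_rotated) %>%
  ifs()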

We can visualize the factor analysis results to show the correlation between the columns of the reduced state vector and the rotated factor loadings. Strong negative correlations are depicted in red while strong positive correlations are shown in blue. This helps to identify which factors are correlated with each security log data attribute, and also to spot two or more attributes whose occurrences appear to be related. For example, this shows that Russia is highly correlated with IP address 223.70.128. If there were an abnormally large number of instances involving Russia, this would be the logical IP address to start investigating.

fa_loadings <- state_vec %>%
  horns_curve() %>%
  factor_analysis(state_vec, hc_points = .) %>%
  factor_analysis_results(fa_loadings_rotated)

row.names(fa_loadings) <- colnames(state_vec)

gplots::heatmap.2(fa_loadings, dendrogram = 'both', trace = 'none', 
            density.info = 'none', breaks = seq(-1, 1, by = .25), 
            col = RColorBrewer::brewer.pal(8, 'RdBu'))

We can also visualize the rotated factor score plots to see which time blocks appear to be outliers and deserve closer attention.

state_vec %>%
  horns_curve() %>%
  factor_analysis(state_vec, hc_points = .) %>%
  factor_analysis_results(fa_scores_rotated) %>%
  as_tibble(.name_repair = "unique") %>%
  mutate(Block = 1:n()) %>%
  gather(Factor, Score, -Block) %>%
  mutate(Absolute_Score = abs(Score)) %>%
  ggplot(aes(Factor, Absolute_Score, label = Block)) +
  geom_text() +
  geom_boxplot(outlier.shape = NA)

This allows us to look across the factors and identify outlier blocks that may require further intra-block analysis. If we assume that an absolute rotated factor score $\geq$ 2 represents our outlier cut-off, then we see that time blocks 4, 13, 15, 17, 24, 26, and 27 require further investigation. We saw block 17 highlighted by mahalanobis_distance earlier, but these other time blocks were not as obvious; by performing and comparing multiple anomaly detection approaches we gain greater insight and confirmation.
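As one possible follow-up, the same bd_row drill-down we used for block 17 can be repeated for any of the other flagged blocks; block 24 is chosen below purely for illustration.

state_vec %>%
  mahalanobis_distance("bd", normalize = TRUE) %>%
  bd_row(24, 10)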

An alternative, yet similar, approach to factor analysis is principal component analysis. The goal of factor analysis is to explain the covariances or correlations between the variables; by contrast, the goal of principal component analysis is to explain as much of the total variance in the variables as possible. Thus, the first principal component of a set of features $X_1, X_2,\dots,X_p$ is the normalized linear combination of the features

$$ Z_1 = \phi_{11}X_1 + \phi_{21}X_2 + \cdots + \phi_{p1}X_p \tag{7}$$

that has the largest variance. By normalized, we mean that $\sum^p_{j=1} \phi^2_{j1} = 1$. We refer to the elements $\phi_{11},\dots,\phi_{p1}$ as the loadings of the first principal component; together, the loadings make up the principal component loading vector, $\phi_1 = (\phi_{11}, \phi_{21}, \dots, \phi_{p1})^T$. The loadings are constrained so that their sum of squares is equal to one, since otherwise setting these elements to be arbitrarily large in absolute value could result in an arbitrarily large variance. After the first principal component $Z_1$ of the features has been determined, we can find the second principal component $Z_2$. The second principal component is the linear combination of $X_1,\dots,X_p$ that has maximal variance out of all linear combinations that are uncorrelated with $Z_1$. The second principal component scores $z_{12}, z_{22},\dots,z_{n2}$ take the form

$$z_{i2} = \phi_{12}x_{i1} + \phi_{22}x_{i2} + \cdots + \phi_{p2}x_{ip} \tag{8}$$

where $\phi_2$ is the second principal loading vector, with elements $\phi_{12}, \phi_{22}, \dots, \phi_{p2}$. This continues until all principal components have been computed. To perform a principal components analysis we use principal_components which will create a list containing:

  1. pca_sdev: the standard deviations of the principal components (i.e., the square roots of the eigenvalues of the covariance/correlation matrix, though the calculation is actually done with the singular values of the data matrix).
  2. pca_loadings: the matrix of variable loadings (i.e., a matrix whose columns contain the eigenvectors).
  3. pca_rotated: the value of the rotated data (the centered, and scaled if requested, data multiplied by the rotation matrix) is returned.
  4. pca_center: the centering used.
  5. pca_scale: a logical response indicating whether scaling was used.

principal_components(state_vec) %>% str()
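To connect this output back to Eqs. (7) and (8), the component scores can also be computed by hand as the centered data multiplied by the eigenvectors of its covariance matrix. The sketch below assumes centering without scaling (check the pca_center and pca_scale elements for what was actually used), and the hand-computed scores may differ from pca_rotated by the sign of each component.

X_c <- scale(as.matrix(state_vec), center = TRUE, scale = FALSE)  # centered data
phi <- eigen(cov(as.matrix(state_vec)))$vectors                   # loading vectors (Eq. 7)
Z   <- X_c %*% phi                                                # scores; column k holds the k-th component (Eq. 8)
Z[1:5, 1:3]                                                       # first few scores for inspection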

For easy access to these results we can use the principal_components_result parsing function, which will parse the results either by their list name or by location. For instance, to extract the computed component scores as outlined in Eq. 8 you can use principal_components_result(data, results = pca_rotated) or principal_components_result(data, results = 3), as demonstrated below.

state_vec %>%
  principal_components() %>%
  principal_components_result(pca_rotated) %>%
  as_tibble()

We could then follow up the principal component analysis with similar visualization activities as performed post factor analysis to identify potential anomalies.
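For instance, a sketch mirroring the earlier rotated factor score plot, but using the principal component scores instead, might look like the following; the outlier cut-off and interpretation would proceed as before.

state_vec %>%
  principal_components() %>%
  principal_components_result(pca_rotated) %>%
  as_tibble(.name_repair = "unique") %>%
  mutate(Block = 1:n()) %>%
  gather(Component, Score, -Block) %>%
  mutate(Absolute_Score = abs(Score)) %>%
  ggplot(aes(Component, Absolute_Score, label = Block)) +
  geom_text() +
  geom_boxplot(outlier.shape = NA)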


