Introduction

First off, thank you for downloading my package; its name is simply my NetID, xw547. To load the package, you may use:

library(xw547)

As required by Professor Basu, the package provides three main functions, along with several helper functions. You can use the command below to search the package documentation:

??xw547

By clicking xw547::overview in the search results, you will find an overview of my functions.
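If you prefer not to click through the search results, the same page can be opened directly. This assumes overview is a documented help topic in the package, as the search results suggest:

### Open the overview help topic directly
help("overview", package = "xw547")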

We now proceed to test each function in the package.

solve_ols

As the name indicates, this function implements the Gauss-Seidel and Jacobi methods we mentioned in class.
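As a quick reminder of the underlying idea, here is a minimal sketch of a single Jacobi sweep and a single Gauss-Seidel sweep. This is purely illustrative and is not the package's internal code:

### One Jacobi sweep: x_new = D^{-1} (b - (A - D) x), all coordinates updated at once
jacobi_step <- function(A, b, x){
  D <- diag(A)
  as.vector(b - (A - diag(D)) %*% x) / D
}

### One Gauss-Seidel sweep: coordinates are updated in place, so later updates reuse earlier ones
gauss_seidel_step <- function(A, b, x){
  for (i in seq_along(x)){
    x[i] <- (b[i] - sum(A[i, -i] * x[-i])) / A[i, i]
  }
  x
}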

As always, you can check the help documentation with

?solve_ols

We can run a simulation similar to the one in HW1:

### Helper function that generates the tridiagonal matrix A from HW1
### (alpha on the diagonal, -1 on the sub- and super-diagonals)
generate_A <- function(n, alpha){
  A <- diag(rep(alpha,n))
  for (i in 1:(n-1)){
    A[i+1, i] <- -1
    A[i, i+1] <- -1
  }
  return(A)
}

### Generate the data matrices and the responses
A_1 = generate_A(100, 1)
A_2 = generate_A(100, 2)
A_3 = generate_A(100, 3)

v = rep(c(1, 0), 50)
b_1 = A_1 %*% v
b_2 = A_2 %*% v
b_3 = A_3 %*% v
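Before running the solvers, it helps to recall why the alpha = 1 case fails: both Jacobi and Gauss-Seidel are guaranteed to converge when A is strictly diagonally dominant, which holds for alpha = 3 but not for alpha = 1. A quick check on the matrices generated above:

### Strict diagonal dominance: |a_ii| > sum of |a_ij| over j != i, for every row
is_diag_dominant <- function(A){
  off_diag <- rowSums(abs(A)) - abs(diag(A))
  all(abs(diag(A)) > off_diag)
}
is_diag_dominant(A_1)   # FALSE: the iterations may diverge
is_diag_dominant(A_3)   # TRUE: both methods converge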

### A_1: both methods diverge, so these calls are commented out
# solve_ols(A_1,b_1)
# solve_ols(A_1, b_1, method = "Jacobi")

### A_3: both methods converge
#Gauss-Seidel Method
solve_ols(A_3, b_3, method = "Gauss")
#Sequential Jacobi Method
solve_ols(A_3, b_3, method = "Jacobi")
#Parallel Jacobi
solve_ols(A_3, b_3, method = "Jacobi", parallel = TRUE)

The divergent system $A_1, b_1$ should return a warning, which is why those calls are commented out in the code chunk above. We can also output the full result of every iteration as a matrix, where each row holds the estimate from one iteration:

solve_ols(A_3, b_3, method = "Gauss", full_result = TRUE)

algo_leverage

Again, you can find the documentation with

?algo_leverage

Again, we can run a simulation similar to the one in HW1:

### Generate data: heavy-tailed predictor, y = -X + noise
n = 500
X = rt(n, 6)
epsilon = rnorm(n, 0, 1)
y = -X + epsilon
sol_beta = lm(y ~ X)$coeff   # full-data OLS fit for reference

### Run the subsampling simulation
### Note that both X and y should be one-dimensional here
beta_unif_10 = algo_leverage(y, X, size = 10, draws = 600, method = "uniform")
beta_weighted_10 = algo_leverage(y, X, size = 10, draws = 600, method = "weighted")
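For intuition, here is a minimal sketch of what a single leverage-weighted subsample does for this one-predictor least-squares problem. It is illustrative only and is not the package's internal code:

### Leverage scores for the no-intercept design, then one weighted subsample of size 10
h = X^2 / sum(X^2)                            # leverage of each observation
idx = sample(length(X), size = 10, prob = h)  # sample rows proportional to leverage
w = 1 / h[idx]                                # reweight to keep the estimator roughly unbiased
beta_sub = sum(w * X[idx] * y[idx]) / sum(w * X[idx]^2)
beta_sub                                      # compare with the slope in sol_beta (about -1)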

elnet_coord

In a similar manner, we have:

### Generate data for the elastic-net example: 20 correlated predictors, sparse true coefficients
generate_data <- function(n){
  para = c(c(2, 0, -2, 0, 1, 0, -1, 0), rep(0, 12))
  covmat = diag(rep(1, 20))
  covmat[1, 2] = .8
  covmat[2, 1] = .8
  covmat[5, 6] = .8
  covmat[6, 5] = .8
  X = matrix(NA, n, 20)
  for (i in 1:n){
    X[i, ] = mvtnorm::rmvnorm(1, rep(0, 20), sigma = covmat)
  }
  eps = rnorm(n, 0, 1)
  y = X %*% para + eps
  return(list(y, X))
}

data_20 = generate_data(20)
data_50 = generate_data(50)
data_100 = generate_data(100)

### Fit with elnet_coord and with glmnet for comparison
b = elnet_coord(data_100[[1]], data_100[[2]], rep(0, 20), alpha = .5, lambda = .5)
library(glmnet)
c = glmnet(data_100[[2]], data_100[[1]], alpha = .5, lambda = .5)$beta

We can see that it works well. Although the exact beta values differ slightly, that is due to glmnet's use of sparse matrices, and my result is almost identical to the official MATLAB implementation.
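To quantify the agreement, and assuming elnet_coord returns a length-20 coefficient vector, we can put the two fits side by side and look at the largest discrepancy:

### Side-by-side comparison of the two coefficient vectors
round(cbind(elnet_coord = b, glmnet = as.numeric(c)), 3)
max(abs(b - as.numeric(c)))   # largest absolute difference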


