# SpicyMKL: DALMKL In RMKL: Multiple Kernel Learning for Classification or Regression Problems

## Description

This function runs DALMKL on precomputed Gram matrices.

## Usage

```r
SpicyMKL(K, y, loss = "hinge", C = 0.5, tolOuter = 0.01, tolInner = 1e-06,
         OuterMaxiter = 500, InnerMaxiter = 500, calpha = 10)
```

## Arguments

- `K`: The multiple kernel cube (3-d array) of precomputed Gram matrices.
- `y`: The outcome variable; must be coded -1/1.
- `loss`: The loss function to be used; must be either 'hinge' or 'logistic'. Defaults to 'hinge'.
- `C`: Tuning parameter for the block one-norm penalty. Defaults to 0.5.
- `tolOuter`: If the change between two iterations is smaller than this, the outer loop is considered to have converged. Defaults to 0.01.
- `tolInner`: If the change between two iterations is smaller than this, the inner loop is considered to have converged. Defaults to 1e-06.
- `OuterMaxiter`: Maximum number of iterations allowed for the outer loop. Defaults to 500.
- `InnerMaxiter`: Maximum number of iterations allowed for the inner loop. Defaults to 500.
- `calpha`: Augmented Lagrangian parameter. Defaults to 10.

## Value

- `b`: Estimated intercept.
- `alpha`: Coefficients of the dual of MKL.
- `weight`: Estimated between-kernel weights.
- `rho`: Estimated within-kernel weights.

## Examples

```r
data(benchmark.data)
data.mkl = benchmark.data[[1]]
kernels = rep('radial', 2)
sigma = c(2, 1/20)
train.samples = sample(1:nrow(data.mkl), floor(0.7*nrow(data.mkl)), replace = FALSE)
degree = sapply(1:length(kernels), function(a) ifelse(kernels[a] == 'p', 2, 0))
# kernels.gen splits the data into a training and test set,
# and generates the desired kernel matrices.
# Here we generate two Gaussian kernel matrices with sigma hyperparameters 2 and 0.05
K = kernels.gen(data = data.mkl[, 1:2], train.samples = train.samples,
                kernels = kernels, sigma = sigma, degree = degree,
                scale = rep(0, length(kernels)))
C = 0.05  # Cost parameter for DALMKL
K.train = K$K.train
K.test = K$K.test
# Parameter set up
ytr = data.mkl[train.samples, 3]
# Convert the list of kernel matrices into an array, which is appropriate for the C++ code
k.train = simplify2array(K.train)
k.test = simplify2array(K.test)
# Implement DALMKL with the hinge loss function
spicy_svmb1n = SpicyMKL(K = k.train, y = ytr, loss = 'hinge', C = C)
# Implement DALMKL with the logistic loss function
spicy_logistic = SpicyMKL(K = k.train, y = ytr, loss = 'logistic', C = C)
```
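The fitted components can, in principle, be combined into the MKL decision function f(x) = Σ_m weight_m · K_m(x, train) α + b. RMKL ships its own prediction helpers, so the snippet below is only an illustrative sketch of that decision function, assuming `spicy_svmb1n` and `k.test` exist as in the example above, that `alpha` holds the raw dual coefficients, and that `k.test` is a test-by-train-by-kernel array.

```r
# Illustrative sketch (not the package's prediction API): evaluate the MKL
# decision function on the test Gram matrices.
# Assumes spicy_svmb1n and k.test as created in the example above.
f.test = spicy_svmb1n$b
for (m in 1:dim(k.test)[3]) {
  f.test = f.test +
    spicy_svmb1n$weight[m] * (k.test[, , m] %*% spicy_svmb1n$alpha)
}
pred = sign(f.test)  # predicted -1/1 labels
```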


RMKL documentation built on May 2, 2019, 7:55 a.m.