MTL	R Documentation
Description

Train a multi-task learning model.

Usage

MTL(
  X,
  Y,
  type = "Classification",
  Regularization = "L21",
  Lam1 = 0.1,
  Lam1_seq = NULL,
  Lam2 = 0,
  opts = list(init = 0, tol = 10^-3, maxIter = 1000),
  G = NULL,
  k = 2
)
Arguments

X: A set of feature matrices, one per task.

Y: A set of responses, which can be binary (classification problem) or continuous (regression problem). The valid values of a binary outcome are in {1, -1}.

type: The type of problem; must be "Classification" or "Regression".

Regularization: The type of MTL algorithm (cross-task regularizer). The value must be one of {"L21", "Lasso", "Trace", "Graph", "CMTL"}.

Lam1: A positive constant λ_1 that controls the strength of the cross-task regularization.

Lam1_seq: A positive, decreasing sequence of λ_1 values. If this parameter is given, the model is trained with the warm-start technique; otherwise, the model is trained using Lam1 alone, starting from the initial search point (opts$init).

Lam2: A non-negative constant λ_2 to improve the generalization performance, with a default value of 0 (except for Regularization = "CMTL").

opts: Options of the optimization procedure. One can set the initial search point, the tolerance, and the maximum number of iterations through this parameter. The default value is list(init = 0, tol = 10^-3, maxIter = 1000).

G: A matrix encoding the task-network information. This parameter is only used in MTL with graph structure (Regularization = "Graph").

k: A positive number that modulates the structure of the clusters, with a default of 2. This parameter is only used in MTL with clustering structure (Regularization = "CMTL").
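To illustrate how the arguments fit together, the following sketch builds two small simulated regression tasks by hand and fits a graph-regularized model. The task sizes, the shared weight vector, and the matrix G below are invented for the example; consult the package vignette for the exact G encoding expected by your RMTL version.

```r
library(RMTL)

# Two regression tasks sharing the same 20 underlying coefficients
# (sizes and noise levels chosen arbitrarily for illustration)
set.seed(1)
X <- list(matrix(rnorm(50 * 20), 50, 20),
          matrix(rnorm(60 * 20), 60, 20))
w <- rnorm(20)
Y <- list(matrix(X[[1]] %*% w + rnorm(50), ncol = 1),
          matrix(X[[2]] %*% w + rnorm(60), ncol = 1))

# G encodes the task network for Regularization = "Graph".
# One common encoding for a single edge between task 1 and task 2 is an
# incidence-style column; this is an assumption for the sketch, not the
# only valid construction.
G <- matrix(c(1, -1), nrow = 2, ncol = 1)

model <- MTL(X, Y, type = "Regression", Regularization = "Graph",
             Lam1 = 0.1, Lam2 = 0.01, G = G)
```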
Value

The trained model, including the coefficient matrix W, the intercepts C, and related meta-information.
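The returned components can be inspected directly. A brief sketch, assuming a model fitted as in the examples below (the new-data object newX is hypothetical):

```r
# W: a p x t coefficient matrix (one column per task); C: the intercepts
dim(model$W)
model$C

# A per-task linear predictor for new data could then be formed as, e.g.:
# yhat1 <- newX[[1]] %*% model$W[, 1] + model$C[1]
```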
Examples

# create the example data
data <- Create_simulated_data(Regularization = "L21", type = "Regression")

# train a MTL model
# cold-start
model <- MTL(data$X, data$Y, type = "Regression", Regularization = "L21",
             Lam1 = 0.1, Lam2 = 0,
             opts = list(init = 0, tol = 10^-6, maxIter = 1500))

# warm-start
model <- MTL(data$X, data$Y, type = "Regression", Regularization = "L21",
             Lam1 = 0.1, Lam1_seq = 10^seq(1, -4, -1), Lam2 = 0,
             opts = list(init = 0, tol = 10^-6, maxIter = 1500))

# meta-information
str(model)

# plot the historical objective values
plotObj(model)