Description

A distance metric learning algorithm that learns a linear transformation of the data by minimizing the expected leave-one-out error of a stochastic k-nearest-neighbors classifier.
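For reference, the objective optimized by NCA, as formulated in Goldberger et al. (see References below), can be summarized as follows. Given a linear map A, each sample x_i selects a neighbor x_j with probability

p_{ij} = \frac{\exp\left(-\lVert A x_i - A x_j \rVert^2\right)}{\sum_{k \neq i} \exp\left(-\lVert A x_i - A x_k \rVert^2\right)}, \qquad p_{ii} = 0,

and NCA maximizes the expected number of correctly classified points,

f(A) = \sum_i \sum_{j : c_j = c_i} p_{ij},

where c_i is the class label of x_i. Maximizing f(A) by gradient ascent is equivalent to minimizing the expected leave-one-out kNN error; the descent_method, learning_rate and eta0 arguments below control this optimization.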
Arguments

num_dims
    Desired dimensionality for the reduction. If None, the transformed data will have the same dimension as the original. Integer.

learning_rate
    Type of learning rate update for gradient descent. Possible values are:
    - 'adaptive' : the learning rate increases if the gradient step is successful, and decreases otherwise.
    - 'constant' : the learning rate stays constant over all gradient steps.

eta0
    The initial value of the learning rate. Float.

initial_transform
    If an array or matrix, it is used as the starting linear map for gradient descent, mapping the d original features to d' dimensions, where d' is the dimension specified in num_dims. If None, the Euclidean distance will be used. If a string, the following values are allowed:
    - 'euclidean' : the Euclidean distance.
    - 'scale' : a diagonal matrix that normalizes each attribute according to its range.

max_iter
    Maximum number of gradient descent iterations. Integer.

prec
    Precision stop criterion (gradient norm). Float.

tol
    Tolerance stop criterion (difference between two consecutive iterations). Float.

descent_method
    The descent method to use. Allowed values are:
    - 'SGD' : stochastic gradient descent.
    - 'BGD' : batch gradient descent.

eta_thres
    A learning rate threshold stop criterion. Float.

learn_inc
    Increase factor for the learning rate. Ignored if learning_rate is not 'adaptive'. Float.

learn_dec
    Decrease factor for the learning rate. Ignored if learning_rate is not 'adaptive'. Float.
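To illustrate how these arguments fit together, here is a minimal construction sketch in R. The package and constructor names (rDML, nca) and all argument values are assumptions chosen for illustration; only the argument names are taken from this page.

library(rDML)  # assumed package name

# Build an NCA transformer (constructor name `nca` is assumed):
# project to 2 dimensions, start from a range-normalizing diagonal map,
# and use adaptive-rate batch gradient descent.
nca_transformer <- nca(
  num_dims          = 2,           # target dimensionality
  learning_rate     = "adaptive",  # grow eta after a successful step, shrink it otherwise
  eta0              = 0.3,         # initial learning rate
  initial_transform = "scale",     # diagonal map normalizing each attribute by its range
  max_iter          = 100,         # cap on gradient descent iterations
  prec              = 1e-8,        # stop when the gradient norm falls below this
  tol               = 1e-8,        # stop when two iterations differ by less than this
  descent_method    = "BGD",       # batch gradient descent
  eta_thres         = 1e-8,        # stop when the learning rate drops below this
  learn_inc         = 1.01,        # eta increase factor (adaptive only)
  learn_dec         = 0.5          # eta decrease factor (adaptive only)
)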
Value

The NCA transformer, structured as a named list.
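Assuming a transformer was built as in the sketch above, the components of this named list can be inspected with base R; the element names themselves are not documented on this page, so none are shown here.

str(nca_transformer)  # list the components of the returned NCA transformer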
References

Jacob Goldberger et al. "Neighbourhood components analysis". In: Advances in Neural Information Processing Systems. 2005, pages 513-520.