RLScore is a Regularized Least-Squares (RLS) based machine learning package. It contains implementations of the RLS and RankRLS learners, allowing the optimization of performance measures for regression, ranking, and classification tasks. Efficient cross-validation algorithms are integrated into the package, together with functionality for fast parallel learning of multiple outputs.
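To illustrate the kind of efficiency the cross-validation algorithms build on, here is a minimal NumPy sketch (not RLScore's own API) of the classical shortcut for RLS: leave-one-out residuals can be read off from the hat matrix in one pass, instead of refitting the model once per held-out example. The data and regularization value are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(40)
lam = 1.0
n, d = X.shape

# Closed-form RLS solution: w = (X^T X + lam*I)^{-1} X^T y
w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Fast leave-one-out residuals via the hat matrix H = X (X^T X + lam*I)^{-1} X^T:
# e_i^LOO = (y_i - yhat_i) / (1 - H_ii), avoiding n separate refits.
H = X @ np.linalg.solve(X.T @ X + lam * np.eye(d), X.T)
yhat = H @ y
loo_residuals = (y - yhat) / (1.0 - np.diag(H))

# Sanity check against one naive refit with example i held out
i = 0
mask = np.arange(n) != i
w_i = np.linalg.solve(X[mask].T @ X[mask] + lam * np.eye(d), X[mask].T @ y[mask])
naive = y[i] - X[i] @ w_i
print(np.allclose(loo_residuals[i], naive))  # prints True
```

The shortcut is exact for regularized least squares, which is what makes cross-validation cheap enough to combine with grid search over the regularization parameter.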
The package offers unique benefits for learning tasks where nonlinear kernels are used and the quality of predictions is measured with AUC or other ranking performance measures. In this setting RLScore can be trained to optimize these metrics on datasets consisting of several thousand examples, and cross-validation combined with grid search can be used to efficiently choose the value of the regularization parameter for the learner.
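One reason grid search over the regularization parameter can be made cheap in the kernel setting is a standard trick (sketched here in plain NumPy under an illustrative Gaussian kernel, with a hypothetical width parameter gamma, not RLScore's own interface): eigendecompose the kernel matrix once, after which the dual solution for any regularization value follows from a diagonal rescaling.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((60, 2))
y = np.sign(X[:, 0])  # toy binary labels

# Gaussian kernel matrix; gamma is an illustrative width parameter
gamma = 0.5
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-gamma * sq)

# Eigendecompose K once; the dual solution for any lambda is then cheap:
# a(lam) = V diag(1 / (evals + lam)) V^T y
evals, V = np.linalg.eigh(K)
Vty = V.T @ y
for lam in [0.1, 1.0, 10.0]:
    a = V @ (Vty / (evals + lam))   # dual coefficients for this lambda
    # agrees with solving (K + lam*I) a = y directly
    assert np.allclose(a, np.linalg.solve(K + lam * np.eye(len(y)), y))
```

The O(n^3) eigendecomposition is paid once; each additional grid point then costs only O(n^2), which is what makes dense regularization grids practical on datasets of several thousand examples.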
For tasks where linear kernels are used and the number of features is small, RLScore uses the highly scalable primal formulations of the RLS and RankRLS algorithms. For example, if the dimensionality of the feature space is below fifty, the package can be expected to scale up to tens or hundreds of thousands of training examples.
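The scalability of the primal formulation is easy to see from the normal equations: only a d x d system is solved, so the cost is linear in the number of training examples. A minimal NumPy sketch with illustrative sizes (not RLScore code):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 100_000, 10          # many examples, few features
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)
lam = 1.0

# Primal RLS: forming X^T X costs O(n d^2) and the solve O(d^3),
# so the total cost grows only linearly in n when d is small.
G = X.T @ X + lam * np.eye(d)   # d x d, independent of n in size
w = np.linalg.solve(G, X.T @ y)
print(w.shape)  # prints (10,)
```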
Sparse approximations suitable for large-scale learning, based on the subset of regressors approximation, are also included. In this setting the approximation is carried over to the cross-validation methods as well.
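The subset of regressors idea can be sketched as follows (a plain NumPy illustration of the standard formulation, not RLScore's implementation; the kernel, basis size, and regularization value are illustrative): the dual coefficients are restricted to m chosen basis examples, so only an m x m system is solved instead of an n x n one.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 500, 25              # n training examples, m basis vectors
X = rng.standard_normal((n, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)
lam = 0.1
gamma = 1.0

def kernel(A, B):
    # Gaussian kernel with an illustrative width parameter gamma
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

basis = rng.choice(n, size=m, replace=False)   # random subset of regressors
K_nm = kernel(X, X[basis])          # n x m
K_mm = kernel(X[basis], X[basis])   # m x m

# Subset of regressors: a = (K_mn K_nm + lam K_mm)^{-1} K_mn y,
# an m x m system instead of the full n x n one.
a = np.linalg.solve(K_nm.T @ K_nm + lam * K_mm, K_nm.T @ y)

# Predictions need kernel evaluations against the basis set only
yhat = K_nm @ a
print(a.shape, yhat.shape)  # prints (25,) (500,)
```

Since both training and prediction then depend on the n x m kernel block only, the same structure can be exploited when approximating the cross-validation computations.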
The latest version of RLScore can be downloaded from the project's webpage.