The main area considered in this package is semi-supervised learning (SSL). SSL is a suitable approach when only a limited number of labeled examples is available together with a large amount of unlabeled data (Chapelle, 2006). Specifically, semi-supervised classification (SSC) focuses on training a classifier so that it outperforms a supervised classifier trained on the labeled data alone. In semi-supervised classification, the dataset can be divided into two parts, *L* and *U*. Let *L* be the set of instances *X*_{l} = (*x*_{1}, ..., *x*_{l}) for which the labels *Y*_{l} = (*y*_{1}, ..., *y*_{l}) are provided. Let *U* be the set of instances *X*_{u} = (*x*_{l+1}, ..., *x*_{l+u}) for which the labels are not known. We follow the typical assumption that there is much more unlabeled than labeled data, i.e., *u* >> *l*. The whole set *L* ∪ *U* forms the training set.
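The labeled/unlabeled split described above can be sketched in a few lines of base R. This is only an illustrative example, not part of the package's API: it uses the standard `iris` data set and an arbitrarily chosen labeled-set size of 15 to build *X*_{l}, *Y*_{l}, and *X*_{u}.

```r
# Minimal sketch of the L / U split, using only base R and the iris data.
data(iris)

set.seed(1)
n <- nrow(iris)                      # total training instances, n = l + u
labeled.idx <- sample(n, size = 15)  # indices of the labeled set L (l = 15)

X <- iris[, -5]                      # instances (features only)
Y <- iris[, 5]                       # labels (revealed only for L)

X.l <- X[labeled.idx, ]              # X_l: instances with known labels
Y.l <- Y[labeled.idx]                # Y_l: the provided labels
X.u <- X[-labeled.idx, ]             # X_u: instances whose labels are hidden

# Typical SSC assumption: u >> l
c(l = nrow(X.l), u = nrow(X.u))
```

The vector `Y` is kept only to show where the provided labels *Y*_{l} come from; in a real SSC setting the labels of the instances in *X*_{u} would simply not exist.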

A specific family of SSC methods, known as self-labeled techniques (Triguero, 2015), aims to enlarge the original labeled set by using the most confident predictions to classify unlabeled data. In contrast to other approaches, self-labeled techniques do not make any special assumptions about the distribution of the input data. All methods implemented in this R package belong to this family of SSC methods.