The iterative supervised learning setting in which the learning algorithm can actively query an oracle for labels, e.g., a human annotator familiar with the problem, is called active learning. Because the learner is allowed to interactively choose the data from which it learns, it is expected to reach better performance with less training data. Active learning is well suited to machine learning applications where labels are costly to obtain but unlabeled data is abundant. Although active learning has been widely studied for single-label learning, this is not the case for multi-label learning, in which objects can have more than one class label and a multi-label learner is trained to assign multiple labels simultaneously to an object. There are different scenarios in which the annotator can be queried. This work focuses on the scenario in which the unlabeled data are evaluated in order to select the object to be labeled. For this scenario, several multi-label active learning algorithms were identified in the literature. These algorithms were implemented in a common framework and experimentally evaluated on two multi-label datasets with different properties. The influence of the dataset properties on the results obtained by the multi-label active learning algorithms is highlighted.
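To illustrate the selection scenario considered here, the sketch below shows a minimal uncertainty-sampling loop for multi-label data, in which the learner evaluates the unlabeled pool and queries the instance it is least confident about. It uses scikit-learn with a synthetic dataset; the uncertainty score, seed-set construction, and query budget are illustrative assumptions and this is not one of the algorithms evaluated in this work.

```python
# A minimal sketch of pool-based multi-label active learning with
# uncertainty sampling (illustrative only, not the evaluated algorithms).
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

# Synthetic multi-label data standing in for the unlabeled pool.
X, Y = make_multilabel_classification(n_samples=500, n_features=20,
                                      n_classes=5, random_state=0)

rng = np.random.default_rng(0)

# Seed labeled set; ensure every label has at least one positive and one
# negative example so the per-label classifiers can be fitted.
labeled = []
for idx in rng.permutation(len(X)):
    labeled.append(int(idx))
    sub = Y[labeled]
    if (len(labeled) >= 20
            and (sub.min(axis=0) == 0).all()
            and (sub.max(axis=0) == 1).all()):
        break
unlabeled = [i for i in range(len(X)) if i not in labeled]

model = OneVsRestClassifier(LogisticRegression(max_iter=1000))

for _ in range(30):                      # query budget (illustrative)
    model.fit(X[labeled], Y[labeled])
    proba = model.predict_proba(X[unlabeled])
    # Uncertainty per instance: mean closeness of the per-label
    # probabilities to 0.5; the most uncertain instance is queried.
    uncertainty = -np.abs(proba - 0.5).mean(axis=1)
    query = unlabeled[int(np.argmax(uncertainty))]
    # The oracle (a human annotator in practice) reveals the label vector.
    labeled.append(query)
    unlabeled.remove(query)

model.fit(X[labeled], Y[labeled])        # final model on all queried labels
```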