Nkululeko: ensemble learners with late fusion

Since version 0.88.0, nkululeko can combine the results of several experiments and report on the outcome, using the ensemble module.

For example, you might want to know whether a combination of expert features and learned embeddings works better than either alone. You could then run

python -m nkululeko.ensemble \
--method majority_voting \
tests/exp_emodb_praat_xgb.ini \
tests/exp_emodb_ast_xgb.ini \
tests/exp_emodb_wav2vec_xgb.ini

(or all in one line, without the backslashes)
and would then get the result of a majority vote over the three classifiers trained on Praat, AST and Wav2vec2 features.
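
Each of the .ini files above is an ordinary nkululeko experiment configuration; the three differ essentially in their [FEATS] section. As a rough sketch (the paths, labels and split settings here are placeholders, not taken from this post; see the nkululeko documentation for the full set of options), the Praat variant could look like this:

[EXP]
root = ./results/
name = exp_emodb_praat_xgb
[DATA]
databases = ['emodb']
emodb = ./data/emodb/
target = emotion
labels = ['anger', 'happiness', 'neutral', 'sadness']
[FEATS]
type = ['praat']
[MODEL]
type = xgb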

Besides majority_voting, the available methods are mean, max, sum, max_class, uncertainty_threshold, uncertainty_weighted and confidence_weighted (a small sketch of some of these rules follows the list):

  • majority_voting: the mode function for classification: predict the category that most classifiers agree on.
  • mean: for classification: compute the arithmetic mean of the probabilities from all predictors for each label and predict the label with the highest mean probability.
  • max: for classification: take the maximum of the probabilities from all predictors for each label and predict the label with the highest resulting value.
  • sum: for classification: sum the probabilities from all predictors for each label and predict the label with the highest sum.
  • max_class: for classification: compare the highest probabilities of all models across classes (instead of within the same class, as in max) and predict the class with the single highest probability.
  • uncertainty_threshold: for classification: predict the class of the model with the lowest uncertainty if it is below a threshold (defaults to 1.0, meaning no threshold); otherwise compute the mean uncertainty per class over all models and predict the class with the lowest value.
  • uncertainty_weighted: for classification: weight each class by the inverse of its uncertainty (1/uncertainty), normalize the weights per model, multiply each model's class probabilities by its normalized weights, and predict the label with the highest resulting value.
  • confidence_weighted: weighted ensemble based on confidence (1 - uncertainty), normalized over all samples per model; like uncertainty_weighted, but with confidence instead of the inverse of the uncertainty as the weight.
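
To make the differences concrete, here is a minimal numpy sketch of three of these rules for a single sample. This is illustrative only, not nkululeko's internal code, and the probability values are invented:

import numpy as np

labels = ["anger", "happiness", "neutral", "sadness"]

# Hypothetical class probabilities of three models for one sample,
# shape (n_models, n_classes); each row sums to 1.
probs = np.array([
    [0.55, 0.05, 0.25, 0.15],  # e.g. Praat + XGB
    [0.10, 0.45, 0.25, 0.20],  # e.g. AST + XGB
    [0.20, 0.40, 0.25, 0.15],  # e.g. Wav2vec2 + XGB
])

# majority_voting: every model votes for its argmax class,
# the most frequent vote wins.
votes = probs.argmax(axis=1)
majority = labels[np.bincount(votes, minlength=len(labels)).argmax()]

# mean: average the probabilities per class, predict the argmax.
mean_label = labels[probs.mean(axis=0).argmax()]

# max_class: find the single highest probability any model assigns
# to any class and predict that class.
max_class_label = labels[probs.max(axis=0).argmax()]

print(majority, mean_label, max_class_label)
# -> happiness happiness anger

Note how max_class can disagree with the majority vote: a single very confident model (here the hypothetical Praat one) can dominate the decision.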
