Since version 0.94, nkululeko explicitly visualizes (aleatoric) uncertainty, i.e. how confident the model is about its predictions. After running an experiment, you simply find a corresponding plot in the image folder, like so:
The plot shows the distribution of uncertainty for correct vs. incorrect predictions; in this case it worked out quite well, because less uncertain predictions are usually correct.
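To make this concrete, here is a minimal sketch (outside of nkululeko, with placeholder data) of how such a plot can be produced. It assumes one minus the winning softmax probability as a simple uncertainty proxy; nkululeko's exact measure may differ:

import numpy as np
import matplotlib.pyplot as plt

# placeholder data: in a real experiment, probs would be the model's
# softmax outputs and y_true the ground-truth class indices
probs = np.random.dirichlet(np.ones(4), size=500)
y_true = np.random.randint(0, 4, size=500)

y_pred = probs.argmax(axis=1)
uncertainty = 1.0 - probs.max(axis=1)  # simple proxy: 1 - winning probability
correct = y_pred == y_true

# uncertainty distributions for correct vs. incorrect predictions
plt.hist(uncertainty[correct], bins=30, alpha=0.6, label="correct")
plt.hist(uncertainty[~correct], bins=30, alpha=0.6, label="incorrect")
plt.xlabel("uncertainty")
plt.ylabel("count")
plt.legend()
plt.savefig("uncertainty_distribution.png")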
The approach is described in our paper Uncertainty-Based Ensemble Learning For Speech Classification.
You can use this to tweak your results by specifying an uncertainty threshold, i.e. you refuse to predict samples whose uncertainty is above that threshold:
[PLOT]
uncertainty_threshold = .4
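After adding the option, simply re-run the experiment as usual (the config file name is just an example):

python -m nkululeko.nkululeko --config my_experiment.ini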
You will then additionally get a confusion matrix plot that takes only the selected samples into account.
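Conceptually, the thresholding amounts to the following sketch (again outside of nkululeko, with placeholder arrays): keep only the samples whose uncertainty is at or below the threshold and compute the confusion matrix on that subset.

import numpy as np
from sklearn.metrics import confusion_matrix

# placeholder labels, predictions and per-sample uncertainties
y_true = np.random.randint(0, 4, size=500)
y_pred = np.random.randint(0, 4, size=500)
uncertainty = np.random.rand(500)

threshold = 0.4
keep = uncertainty <= threshold  # refuse to predict the rest

cm = confusion_matrix(y_true[keep], y_pred[keep])
coverage = keep.mean()  # fraction of samples we still predict on
print(f"coverage: {coverage:.1%}")
print(cm)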
This might feel like cheating, but especially in critical use cases it might be better to deliver no prediction at all than a wrong one.