
Nkululeko: how to compare classifiers, features and databases using multiple runs

With nkululeko since version 0.98, there is functionality to compare the outcomes of several runs across experiments.

Say you would like to know whether the difference between using acoustic (opensmile) features and linguistic embeddings (bert) as features for some classifier is significant. You could then use the outcomes of several runs of one MLP (multi-layer perceptron) as tests that represent all possible runs (disclaimer: afaik this approach is disputable according to some statisticians).

You would set up your experiment like this:

[EXP]
...
runs = 10
epochs = 100
[FEATS]
type = ['bert']
#type = ['os']
#type = ['os', 'bert']
[MODEL]
type = mlp
...
patience = 5
[EXPL]
# turn on extensive statistical output
print_stats = True
[PLOT]
runs_compare = features

and run this three times, each time changing the feature type that is being used (bert, os, or the combination of both), so that in the end your results folder contains three different run_results text files.

Using these, nkululeko prints a plot that compares the three feature sets; here's an example (having used only 5 runs):

The title states the overall significance for all differences, as well as the largest one for pair-wise comparison. If your number of runs is larger than 30, t-tests will be used instead of Mann-Whitney.
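
To illustrate the kind of test that is applied, here is a minimal Python sketch that compares two hypothetical lists of run results (e.g. UAR values per run) with scipy, switching from Mann-Whitney to a t-test for more than 30 runs. The values and the threshold logic are made up for illustration; this is not nkululeko's actual code.

from scipy import stats

# hypothetical UAR values per run for two feature sets
uar_bert = [0.62, 0.65, 0.61, 0.66, 0.63]
uar_os = [0.58, 0.60, 0.57, 0.61, 0.59]

# for small numbers of runs use the non-parametric Mann-Whitney U test,
# for more than 30 runs use a t-test (as described above)
if len(uar_bert) > 30:
    stat, p = stats.ttest_ind(uar_bert, uar_os)
else:
    stat, p = stats.mannwhitneyu(uar_bert, uar_os)
print(f"statistic: {stat:.3f}, p-value: {p:.3f}")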

Nkululeko tutorial: voice of wellness workshop

Context

In September 2025, we held the Voice of Wellness workshop.

In this post I try out the nkululeko experiments I use for the tutorials there.

Prepare the Database

I use the Androids corpus, paper here

The first thing you should probably do is check the data formats and re-sample if necessary.

[RESAMPLE]
# which of the data splits to re-sample: train, test or all (both)
sample_selection = all
replace = True
target = data_resampled.csv
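
You can then apply it by calling the resample module on this configuration, for example:

python -m nkululeko.resample --config data/androids/exp.ini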

Explore

Check the database distributions

python -m nkululeko.explore --config data/androids/exp.ini

Transcribe and translate

  • transcribe (note: this should be done on a GPU; see the sketch below)
  • translate (no GPU required, as it uses a Google service)
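
One way to obtain the transcriptions is the predict module (described in a post further down), which lists 'text' among its possible targets. A minimal sketch, assuming that this target yields the transcriptions for all samples:

[PREDICT]
targets = ['text']
sample_selection = all

python -m nkululeko.predict --config data/androids/exp.ini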

Segment

Androids database samples are quite long sometimes.
It makes sense to check if approaches work better on shorter speech segments.

python -m nkululeko.segment --config data/androids/exp.ini

Filter the data

[DATA]
data.limit_samples_per_speaker = 8
data.filter = [['task', 'interview']]
check_size = 1000

Define splits

Either use pre-defined folds:

[MODEL]
logo=5

or randomly define splits, but stratify them:

[DATA]
data.split_strategy = balanced
data.balance = {'depression':2, 'age':1, 'gender':1}
data.age_bins = 2

Add additional training data

More details here

[DATA]
databases = ['data', 'emodb']
data.split_strategy = speaker_split
# add German emotional data
emodb = ./data/emodb/emodb
# rename emotion to depression
emodb.colnames = {"emotion": "depression"}
# only use neutral and sad samples
emodb.filter = [["depression", ["neutral", "sadness"]]]
# map them to depression
emodb.mapping = {"neutral": "control", "sadness": "depressed"}
# and put everything to the training
emodb.split_strategy = train
target = depression
labels = ['depressed', 'control']
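
With the data configured, the experiment itself is run with the usual nkululeko module, e.g.:

python -m nkululeko.nkululeko --config data/androids/exp.ini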

Nkululeko: ensemble learners with late fusion

With nkululeko since version 0.88.0, you can combine experiment results and report on the outcome by using the ensemble module.

For example, you would like to know whether the combination of expert features and learned embeddings works better than either of those alone. You could then do

python -m nkululeko.ensemble \
--method majority_voting \
tests/exp_emodb_praat_xgb.ini \
tests/exp_emodb_ast_xgb.ini \
tests/exp_emodb_wav2vec_xgb.ini

(all in one line)
and would then get the results for a majority voting of the three results for Praat, AST and Wav2vec2 features.

Other methods are mean, max, sum, max_class, uncertainty_threshold, uncertainty_weighted and confidence_weighted (a toy sketch of two of them follows the list):

  • majority_voting: The mode function for classification: predict the category that most classifiers agree on.
  • mean: For classification: compute the arithmetic mean of the probabilities from all predictors for each label and use the highest probability to infer the label.
  • max: For classification: use the maximum value of the probabilities from all predictors for each label and use the highest probability to infer the label.
  • sum: For classification: use the sum of the probabilities from all predictors for each label and use the highest probability to infer the label.
  • max_class: For classification: compare the highest probabilities of all models across classes (instead of within the same class as in max) and return the highest probability and its class.
  • uncertainty_threshold: For classification: predict the class with the lowest uncertainty if it is lower than a threshold (defaults to 1.0, meaning no threshold), else calculate the mean of the uncertainties for all models per class and predict the class with the lowest value.
  • uncertainty_weighted: For classification: weight each class with the inverse of its uncertainty (1/uncertainty), normalize the weights per model, then multiply each model's class probability with its normalized weight and use the maximum to infer the label.
  • confidence_weighted: Weighted ensemble based on confidence (1 - uncertainty), normalized over all samples per model. Like before, but use confidence (instead of the inverse of uncertainty) as weights.
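
To make the difference between, e.g., mean and max_class fusion concrete, here is a toy Python sketch on made-up probabilities for three models and three classes. It only illustrates the idea and is not nkululeko's implementation.

import numpy as np

# hypothetical class probabilities of three models for one sample
probs = np.array([
    [0.80, 0.20, 0.00],  # model 1
    [0.10, 0.60, 0.30],  # model 2
    [0.05, 0.65, 0.30],  # model 3
])
labels = ["angry", "happy", "neutral"]

# mean fusion: average the probabilities per class, take the argmax
mean_pred = labels[int(np.argmax(probs.mean(axis=0)))]

# max_class fusion: take each model's single most probable class and
# let the class with the overall highest probability win
best_class = probs.argmax(axis=1)   # best class per model
best_prob = probs.max(axis=1)       # its probability
max_class_pred = labels[int(best_class[np.argmax(best_prob)])]

print(mean_pred, max_class_pred)    # happy angry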

Nkululeko: export acoustic features

With nkululeko since version 0.85.0 the acoustic features for the test and the train (aka dev) set are exported to the project store.

If you specify the store_format:

[FEATS]
store_format = csv

they will be exported to CSV (comma separated value) files, else PKL (readable by the Python pickle module).
I.e., after execution of any nkululeko module that computes features, your store should contain the two files:

  • feats_test.csv
  • feats_train.csv

If you specified scaling the features:

[FEATS]
scale = standard # or speaker

you will have two additional files with features:

  • feats_test_scaled.csv
  • feats_train_scaled.csv

In contrast to the other feature stores, these contain the exact features that are used for training or feature importance exploration, so they might be combined from different feature types and selected via the features value. An example:

[FEATS]
type = ['praat', 'os']
features = ['speechrate_nsyll_dur', 'F0semitoneFrom27.5Hz_sma3nz_amean']
scale = standard
store_format = csv

results in the following feats_test.csv:

file,start,end,speechrate_nsyll_dur,F0semitoneFrom27.5Hz_sma3nz_amean
./data/emodb/emodb/wav/11b03Wb.wav,0 days,0 days 00:00:05.213500,4.028004219813945,34.42206
./data/emodb/emodb/wav/16b10Td.wav,0 days,0 days 00:00:03.934187500,3.0501850763340586,31.227554

....
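
To work with the exported features outside of nkululeko, they can simply be loaded with pandas. A minimal sketch, assuming the file lies in the experiment's store folder and has the layout shown above:

import pandas as pd

# load the exported test features; file, start and end identify the segment
df = pd.read_csv("store/feats_test.csv", index_col=["file", "start", "end"])
print(df.columns.tolist())   # the selected feature names
print(df.describe())         # basic statistics per feature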

How to use train, dev and test splits with Nkululeko

Usually in machine learning, you train your predictor on a train set, tune meta-parameters on a dev set (development or validation set) and evaluate on a test set.
With nkululeko, the test set is currently not supported directly, as there are only two sets that can be specified: a train and an evaluation set.
A work-around is to use the test module to evaluate your best model on a hold-out test set at the end of your experiments.
All you need to do is to specify the name of the test data in your [DATA] section, like so (let's call it myconf.ini):

[EXP]
save = True
....
[DATA]
databases =  ['my_train-dev_data']
... 
tests = ['my_test_data']
my_test_data = ./data/my_test_data/
my_test_data.split_strategy = test
...

You can run the experiment module with your config:

python -m nkululeko.nkululeko --config myconf.ini

and then, after optimization (of predictors, feature sets and meta-parameters), use the test module:

python -m nkululeko.test --config myconf.ini

The results will appear in the same place as all other results, but the files are named with test and the test database as a suffix.

If you need to compare several predictors and feature sets, you can use the nkuluflag module.
All you need to do in your main script is to pass a parameter (named --mod) when you call the nkuluflag module, to tell it to use the test module:

cmd = 'python -m nkululeko.nkuluflag --config myconf.ini  --mod test '
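
A minimal sketch of such a main script, looping over several (hypothetical) configuration files and calling the nkuluflag module with the test module for each of them:

import subprocess

# hypothetical configurations, e.g. one per feature set or predictor
configs = ["myconf_os.ini", "myconf_bert.ini", "myconf_praat.ini"]

for config in configs:
    cmd = f"python -m nkululeko.nkuluflag --config {config} --mod test"
    # run the test module for this configuration
    subprocess.run(cmd.split(), check=True)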

Nkululeko: how to bin/discretize your feature values

With nkululeko since version 0.77.8, you have the possibility to convert all feature values into the discrete classes low, mid and high.

Simply state

[FEATS]
type = ['praat']
scale = bins
store_format = csv

in your config to use Praat features.
With the store format set to csv, you will be able to look at the train and test features in the store folder.

The binning is done based on the 33rd and 66th percentiles of the training feature values.
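
As an illustration of the idea (not nkululeko's actual code), here is a small Python sketch that derives the two thresholds from the training values and applies them to test values:

import numpy as np

train_values = np.array([2.1, 3.5, 1.8, 4.2, 2.9, 3.1, 5.0])
test_values = np.array([1.5, 3.0, 4.8])

# thresholds taken from the 33rd and 66th percentiles of the *training* data
low_thresh, high_thresh = np.percentile(train_values, [33, 66])

def to_bins(values):
    # map each value to one of the discrete classes low, mid, high
    return np.select(
        [values < low_thresh, values < high_thresh],
        ["low", "mid"],
        default="high",
    )

print(to_bins(test_values))   # ['low' 'mid' 'high']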

Nkululeko: compare several databases

With nkululeko since version 0.77.7 there is a new interface named multidb which lets you compare several databases.

You can state their names in the [EXP] section and they will then be processed one after the other and against each other; the results are stored in a file called heatmap.png in the experiment folder.

Mind: YOU NEED TO OMIT THE PROJECT NAME!

Here is an example for such an ini file:

[EXP]
root = ./experiments/emodbs/
#  DON'T give it a name, 
# this will be the combination 
# of the two databases: 
# traindb_vs_testdb
epochs = 1
databases = ['emodb', 'polish']
[DATA]
root_folders = ./experiments/emodbs/data_roots.ini
target = emotion
labels = ['neutral', 'happy', 'sad', 'angry']
[FEATS]
type = ['os']
[MODEL]
type = xgb

You can (but don't have to) state the specific dataset values in an external file like above.
data_roots.ini:

[DATA]
emodb = ./data/emodb/emodb
emodb.split_strategy = specified
emodb.test_tables = ['emotion.categories.test.gold_standard']
emodb.train_tables = ['emotion.categories.train.gold_standard']
emodb.mapping = {'anger':'angry', 'happiness':'happy', 'sadness':'sad', 'neutral':'neutral'}
polish = ./data/polish_emo
polish.mapping = {'anger':'angry', 'joy':'happy', 'sadness':'sad', 'neutral':'neutral'}
polish.split_strategy = speaker_split
polish.test_size = 30

With respect to the mapping, you can also specify super categories by giving a list as a source category. Here's an example:

emodb.mapping = {'anger, sadness':'negative', 'happiness': 'positive'}
labels = ['negative', 'positive']

Call it with:

python -m nkululeko.multidb --config my_conf.ini

The default behavior is that each database is used as a whole when it is the test or training database. If you would rather have the splits used, you can add a flag for this:

[EXP]
use_splits = True

Here's a result with two databases:

and this is the same experiment, but with augmentations:

In order to add augmentation, simply add an [AUGMENT] section:

[EXP]
root = ./experiments/emodbs/augmented/
epochs = 1
databases = ['emodb', 'polish']
[DATA]
...
[AUGMENT]
augment = ['traditional', 'random_splice']
[FEATS]
...

In order to add an additional training database to all experiments, you can use:

[CROSSDB]
train_extra = [meta, emodb]

to add two databases to all training data sets, where meta and emodb should then be declared in the root_folders file.

Nkululeko: generate a latex/pdf report

With nkululeko since version 0.66.3, a report document formatted in LaTeX and compiled as a PDF file can automatically be generated, basically as a compilation of the images that are generated.
There is a dedicated REPORT section in the config file for this; here is an example:

[REPORT]
# should the report be shown in the terminal at the end?
show = False 
# should a latex/pdf file be printed? if so, state the filename
latex = emodb_report
# name of the experiment author (default "anon")
author = Felix
# title of the report (default "report")
title = EmoDB

NOTE:
with each run of a nkululeko module in the same experiment environment, the details of the report will be added.
So a typical use would be to first run the general module and then more specialized ones:

# first run a segmentation 
python -m nkululeko.segment --config myconf.ini 
# then rename the data-file in the config.ini and
# run some data exploration
python -m nkululeko.explore --config myconf.ini 
# then run a machine learning experiment
python -m nkululeko.nkululeko --config myconf.ini 

Each run will add some content to the report.

How to fix different sampling rates in a dataset with Nkululeko

With nkululeko since version 0.62.0 you can automatically adjust the sampling rate to the standard of 16 kHz, which is required by most models that might need to process your data.

A special module can be configured in the configuration file like this:

[RESAMPLE]
# which of the data splits to re-sample: train, test or all (both)
sample_selection = all
replace = True
target = data_resampled.csv

and then you call it like this:

python -m nkululeko.resample --config my_config.ini

WARNING: if replace = True, this changes (overwrites) ALL files in the splits directly on your hard disk. Make sure to make a safety copy of your database beforehand, in case the results are undesired or you still need the data at other sampling rates.

The default value, though, is replace = False. Then the target value will be used as the filename for the new dataframe, with filenames that indicate that the sampling rate has been changed.

As stated above, only files in the test and train splits are affected. This means that you can use all filtering, e.g. limiting the samples per speaker to 20, to pre-select samples.

Nkululeko: how to predict labels for your data from existing models and check them

With nkululeko since version 0.58.0, you can predict labels automatically for a given database, and then perhaps use these predictions to check on bias within your data.
One example:
You have a database labeled with smokers/non-smokers. You evaluate a machine learning model, check on the features and find, to your astonishment, that the mean pitch is the most important feature to distinguish between smokers and non-smokers, with a very high accuracy.
You suspect foul play and auto-label the data with a public model predicting biological sex (called gender in Nkululeko).
After a data exploration you see that most of the smokers are female and most of the non-smokers are male.
The machine learning model detected biological sex and not smoking behaviour.

How do you do this?
Firstly, you need to predict labels. In a configuration file, state the annotations you'd like to be added to your data like this:

[DATA]
databases = ['mydata']
mydata = ... # location of the data
mydata.split_strategy = random # not important for this 
...
[PREDICT]
# the label names that should be predicted; possible are: 'text', 'speaker', 'gender', 'age', 'snr', 'valence', 'arousal', 'dominance', 'pesq', 'mos'
targets = ['gender']
# the split selection, use "all" for all samples in the database
sample_selection = all

You can then call the predict module with python:

python -m nkululeko.predict --config my_config.ini

The resulting new database file in CSV format will appear in the experiment folder.
The newly predicted values will be named with a trailing _pred, e.g. "gender_pred" for "gender".
You can then configure the explore module to visualize the correlation between the new labels and the original target:

[DATA]
databases = ['predicted']
predicted = ./my_exp/mydata_predicted.csv
predicted.type = csv
predicted.absolute_path = True
predicted.split_strategy = random
...
[EXPL]
# which labels to investigate in context with target label
value_counts = [['gender_pred']]
# the split selection
sample_selection = all

and then call the explore module:

python -m nkululeko.explore --config my_config.ini

The resulting visualizations are in the image folder of the experiment folder.
Here is an example of the correlation between emotion and estimated PESQ (Perceptual Evaluation of Speech Quality)

The effect size is stated as Cohen's d for the pair of categories with the largest value; in this case, the difference in estimated speech quality is largest between the categories neutral and angry.
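
For reference, Cohen's d is simply the difference of the two group means divided by their pooled standard deviation. A minimal Python sketch on made-up PESQ values (the numbers are hypothetical):

import numpy as np

def cohens_d(a, b):
    # difference of means divided by the pooled standard deviation
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

# hypothetical estimated PESQ values per emotion category
pesq_neutral = [3.2, 3.5, 3.1, 3.4, 3.3]
pesq_angry = [2.6, 2.9, 2.7, 2.8, 2.5]

print(f"Cohen's d: {cohens_d(pesq_neutral, pesq_angry):.2f}")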