Category Archives: nkululeko

Nkululeko: how to predict many samples

There are three ways to predict a number of samples:

  1. If you want to save the predictions of an experiment for later use, you can do so by stating in the EXP section

    [EXP]
    save_test = ./my_saved_test_predictions.csv

    The output format is CSV (comma-separated values).

  2. Alternatively, you can test an existing database against the best model you trained before, by stating the databases as tests in the DATA section:

    [DATA]
    tests = ['my_testdb']
    my_testdb = /mypath/my_testdb
    ...

    and then calling Nkululeko's test module

    python -m nkululeko.test --config myconfig.ini --outfile myresults.csv
  3. Run the demo module simply for a set of files:

    python -m nkululeko.demo --config myconfig.ini --list my_filelist.txt

Nkululeko

This is the entry post for Nkululeko: a framework to do machine learning experiments on audio data based on configuration files.

Here's an overview of the tutorials:

How to import features from outside the Nkululeko software

Since version 0.29.1 it is possible to directly import acoustic features into the Nkululeko framework.

You can specify a file to be imported in the FEATS section:

[FEATS]
type = ['import']
import_file = ['/home/.../my_features_1.csv']

Of course the imported features can still be combined with other feature sets and will be assigned to the training and test splits accordingly.
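A minimal sketch of such a combination, here mixing the imported features with the opensmile feature set (the path is just a placeholder):

[FEATS]
type = ['import', 'os']
import_file = ['/path/to/my_features_1.csv']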

There can be several feature files (e.g. for train and dev separately), and they must be in CSV format (comma-separated values) in audformat with a segmented index.
Here is an example:

file,start,end,voice segments,HNR Mean (dB),F1 Mean (Hz)
/home/.../a42_1.wav,0 days,0 days 00:00:07.815875,4.13,45,7.48
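If you create such a file yourself, e.g. with pandas, a minimal sketch could look like this (the file path, feature names and values are just placeholders, not taken from a real extraction):

import pandas as pd

# segmented index in audformat style: one row per (file, start, end) segment
index = pd.MultiIndex.from_tuples(
    [("/path/to/a42_1.wav", pd.Timedelta(0), pd.Timedelta(seconds=7.8))],
    names=["file", "start", "end"],
)
# one column per acoustic feature (dummy values)
df = pd.DataFrame({"HNR Mean (dB)": [4.1], "F1 Mean (Hz)": [550.0]}, index=index)
df.to_csv("my_features_1.csv")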

Nkululeko: How to evaluate a test set with a given best model

Nkululeko has two modules for testing an unknown data set, in addition to the train and development/evaluation sets.

Let's recap the concept of train/dev/test splits:

  • train is used to train a supervised model
  • dev is a set to evaluate this model, i.e. to know when it is a good model (one that doesn't overfit)
  • test is a set to be used ONLY once: for the real use of the model. If you used the test set as a dev set, you couldn't be sure you're not overfitting again (because you would have used the dev set to adjust the meta parameters of your model).

So, in order to evaluate a third dataset (besides train and dev), you might be in one of two situations:
a) you have a labeled test set and want to evaluate it, or
b) you have an unlabeled test set and want to add predictions (without evaluation).

For a),
you can use the test module, and set a tests entry in the configuration [DATA] section like so:

[DATA]
tests = ['my_testdb']
my_testdb = /mypath/my_testdb
my_testdb.split_strategy = test
...

and then call Nkululeko's test module

python -m nkululeko.test --config my_config.ini --outfile myresults.csv

For b),
you can use the demo module and state your test set as a list of files like so:

python -m nkululeko.demo --config my_config.ini --list my_testsamples.csv --outfile my_results.csv

In order to use a model, you of course need to have trained and saved it beforehand, so you need a prior run of the nkululeko module:

python -m nkululeko.nkululeko --config my_config.ini

with my_config.ini containing:

[EXP]
save = True
[MODEL]
save = True

Nkululeko FAQ

This post is the start of a troubleshooting/FAQ list.

The number of speakers in my test/train splits seems wrong

  • Did you check whether perhaps the speakers from several datasets have the same IDs? If so, you can add dataname.rename_speakers = True to the configuration. This will prepend the dataset name to the speaker names (see the sketch below).
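A minimal sketch of how this could look in the config (the database names here are placeholders):

[DATA]
databases = ['dataA', 'dataB']
dataA.rename_speakers = True
dataB.rename_speakers = True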

How to combine feature sets with Nkululeko

If you want to combine several acoustic parameter (feature) sets with Nkululeko, you might state

[FEATS]
type = ['mld', 'praat']
features = ['JitterPCA', 'meanF0Hz', 'hld_sylRate']

This would combine the

  • hld_sylRate feature from MLD,
  • JitterPCA feature from Feinberg's Praat features, and
  • meanF0Hz feature from Feinberg's Praat features

Of course you could omit the features entry and simply use all of them.

It's interesting to see how many emotions from Berlin EmoDB can still be recognized with only these three parameters.

How to use selected features from Praat with Nkululeko

If you want to use acoustic parameters extracted by the wonderful Praat software with Nkululeko, you state

[FEATS]
type=['praat']

in the feature section of your config file.
If you would like to use only some of the features that are extracted by David R. Feinberg's Praat scripts, you can look at the output and select them in the FEATS section, e.g.

type = ['praat']
praat.features = ['speechrate(nsyll / dur)']

You can do the same with opensmile features:

type = ['os']
os.features = ['F0semitoneFrom27.5Hz_sma3nz_amean']

or even combine them

type = ['praat', 'os']
praat.features = ['speechrate(nsyll / dur)']
os.features = ['F0semitoneFrom27.5Hz_sma3nz_amean']

this is actually the same as

type = ['praat', 'os']
features = ['speechrate(nsyll / dur)', 'F0semitoneFrom27.5Hz_sma3nz_amean']

If you wanted to combine all opensmile eGeMAPS features with selected Praat features, you would do:

type = ['praat', 'os']
praat.features = ['speechrate(nsyll / dur)']

It is interesting to see how many emotions of Berlin EmoDB still get recognized with only mean F0 and Jitter as features.


What kind of features are there, you might ask yourself?
Here's a list:
'duration', 'meanF0Hz', 'stdevF0Hz', 'HNR', 'localJitter',
'localabsoluteJitter', 'rapJitter', 'ppq5Jitter', 'ddpJitter',
'localShimmer', 'localdbShimmer', 'apq3Shimmer', 'apq5Shimmer',
'apq11Shimmer', 'ddaShimmer', 'f1_mean', 'f2_mean', 'f3_mean',
'f4_mean', 'f1_median', 'f2_median', 'f3_median', 'f4_median',
'JitterPCA', 'ShimmerPCA', 'pF', 'fdisp', 'avgFormant', 'mff',
'fitch_vtl', 'delta_f', 'vtl_delta_f'
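For example, to restrict an experiment to mean F0 and one of the jitter measures from this list, a config could look like this (just a sketch; pick whichever jitter variant you prefer):

[FEATS]
type = ['praat']
praat.features = ['meanF0Hz', 'localJitter']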

How to test a trained model on a new test set with Nkululeko

Sometimes you might want to test your already trained model(s) on a new dataset, e.g. because the training took a lot of resources.
If you stored your models during training, this is possible:

[DATA]
databases = ['emodb']
....
[MODEL]
save = True

In a new config file for your experiment that uses a different test set, you set

[DATA]
databases = ['emodb', 'polish']
trains = ['emodb']
tests = ['polish']
strategy = cross_data....
[MODEL]
only_test = True

In the example above, emodb is used as the training database, and polish is used later, in a second experiment, as the test database.
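You then run this second configuration with the usual nkululeko module (the file name here is a placeholder):

python -m nkululeko.nkululeko --config my_second_config.ini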

How to compare several MLP layer layouts with each other

Some days ago I showed how you can run several experiments in one go.
Obviously this can be used to compare several ANN layer architectures, as an alternative to the approach discussed in this (much earlier) post.

There is an example configuration shipped with Nkululeko, and you can simply specify your layer specifications per experiment like this:

classifiers = [
    {'--model': 'mlp',
    '--layers': '\"{\'l1\':16,\'l2\':4}\"'},
    {'--model': 'mlp',
    '--layers': '\"{\'l1\':64,\'l2\':16}\"'},
    {'--model': 'mlp',
    '--layers': '\"{\'l1\':128,\'l2\':32}\"',
    '--learning_rate': '.0001',
    '--drop': '.3',},
    {'--model': 'xgb',
    '--epochs':1},
    {'--model': 'svm',
    '--epochs':1},
]

i.e., in this example three MLP classifiers are specified with these architectures:

  • (hidden) layer 1 with 16 neurons, and (hidden) layer 2 with 4 neurons
  • one layer with 64 and one with 16 neurons
  • and a third one with
    • one layer with 128 and a second one with 32 neurons,
    • learning rate of .0001 and
    • dropout probability of 30%

and, for comparison:

  • an XGB classifier and
  • an SVM classifier,

both of which need to be trained for only one epoch because there are no weights to be adapted.
The MLP classifiers are trained with the epoch number that is specified in the skeleton config file.
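For reference, the epoch number sits in the [EXP] section of that config, e.g. (the value here is just an example):

[EXP]
epochs = 100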

How to run multiple experiments in one go with Nkululeko

Sometimes you will want to run several experiments without having to start them manually one after the other, e.g. if you want to run them overnight.
This post shows you one way to do this.

You need two files: a configuration file and a script that runs the experiments. Examples of these files are part of the Nkululeko distribution.

Since version 0.82.0, there's a module named nkuluflag to call nkululeko with many command-line options in addition to the config file (which you still need as a basis).

The configuration file

A Nkululeko config file with the constant values for all experiments (to be adapted to your needs and paths):

[EXP]
root = ./
name = exp
runs = 1
epochs = 1
[DATA]
root_folders = ../data_roots.ini
databases = ['mydata']
target = mytarget
labels = ['label1', 'label2']
[FEATS]
scale = standard
[MODEL]
C_val = .001

The script to specify and run all experiments

Lastly, you need a script to specify and start the experiments. Here's an example that combines four classifiers and eight feature sets, resulting in 32 experiments; let's call it do_experiments.py:

import os

classifiers = [
    {"--model": "mlp", "--layers": "\"{'l1':64,'l2':16}\"", "--epochs": 100},
    { "--model": "mlp",
        "--layers": "\"{'l1':128,'l2':64,'l3':16}\"",
        "--learning_rate": ".01",
        "--drop": ".3",
        "--epochs": 100,
    },
    {"--model": "xgb"},
    {"--model": "svm", "C_val": 10},

features = [
    {'--feat': 'os'},
    {'--feat': 'os', 
    '--set': 'ComParE_2016',
    },
    {'--feat': 'wavlm'},
    {'--feat': 'audmodel'},
    {'--feat': 'hubert'},
    {'--feat': 'trill'},
    {'--feat': 'whisper'},
    {'--feat': 'wav2vec'},
]

for c in classifiers:
    for f in features:
        cmd = 'python -m nkululeko.nkuluflag --config myconf.ini  '
        for item in c:
            cmd += f'{item} {c[item]} '
        for item in f:
            cmd += f'{item} {f[item]} '
        print(cmd)
        os.system(cmd)

You can then simply call your script with Python:

python do_experiments.py
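If you want to leave this running overnight, e.g. on a remote machine, one generic way (not specific to Nkululeko) is to detach the script from your terminal and log its output:

nohup python do_experiments.py > experiments.log 2>&1 &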