
How to soft-label a database with Nkululeko

Soft-labeling means annotating data with labels that were predicted by a machine classifier.
As they were not evaluated by a human, you might call them "soft".

Two steps are necessary:
1) save a test/evaluation set as a new database
2) load this new database in a new experiment as training data

Within nkululeko, you would do it like this:

step 1: save a new database

You simply specify a name in the EXP section to save the test predictions, like this:

[EXP]
...
save_test = ./my_test_predictions.csv

You need model saving to be turned on, because nkululeko will look for the best-performing model:

[MODEL]
...
save = True

This will store the new database in a file called my_test_predictions.csv in the folder from which Python was called.
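The result is a plain CSV table holding the predicted labels. A purely hypothetical excerpt (the actual column names depend on your nkululeko version, so check the generated file) might look like this:

file,emotion
audio/clip_0001.wav,happy
audio/clip_0002.wav,neutral
audio/clip_0003.wav,angry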

step 2: load as training data

Here is an example configuration showing how to load this data as additional training data (in this case in addition to emodb):

[EXP]
root = ./tests/
name = exp_add_learned
runs = 1
epochs = 1
[DATA]
strategy = cross_data
databases = ['emodb', 'learned', 'test_db']
trains = ['emodb', 'learned']
tests = ['test_db']
emodb = /path-to-emodb/
emodb.split_strategy = speaker_split
emodb.mapping = {'anger':'angry', 'happiness':'happy', 'sadness':'sad', 'neutral':'neutral'}
test_db = /path to test database/
test_db.mapping = <if any mapping to the target categories is needed>
learned = ./my_test_predictions.csv
learned.type = csv
target = emotion
labels = ['angry', 'happy', 'neutral', 'sad']
[FEATS]
type = os
[MODEL]
type = xgb
save = True
[PLOT]

Nkululeko: try out / demo a trained model

This is another Nkululeko post, showing you how to demo a model that you trained before.

First you need to train a model, e.g. on emodb as shown here.

The difference is that you need to set the parameters

[EXP]
...
save = True

in the general section and

[MODEL]
...
save = True

in the MODEL section of the configuration file.
Background: we need both the experiment and all model files to be saved on disk.

Here is an example script showing how to call the demo mode:

# set the path to the nkululeko sources
import sys
sys.path.append("./src")
# import classes you need from there
import experiment as exp
import configparser
from util import Util

def main(config_file):
    # load the configuration that created the experiment
    config = configparser.ConfigParser()
    config.read(config_file)
    util = Util()
    # set up the experiment
    expr = exp.Experiment(config)
    print(f'running {expr.name}')
    # load the experiment from the name that is suggested
    expr.load(f'{util.get_save_name()}')
    # run the demo
    expr.demo()

if __name__ == "__main__":
    # example call with a configuration file
    main('tests/exp_emodb.ini')
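Assuming you saved this script as my_demo.py (the file name is just an example), you can start the demo with:

python my_demo.py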

The best-performing model will be used automatically.

How to set up wav2vec embedding for nkululeko

Since version 0.10, nkululeko supports Facebook's wav2vec 2.0 embeddings as acoustic features.
This post shows you how to set this up.

1) extend your environment

First, set up your normal nkululeko installation.
Then, within the activated environment, install the additional modules needed by the wav2vec model:

pip install -r requirements_wav2vec.txt

2) download the pretrained model

Wav2vec 2.0 is an end-to-end architecture for speech-to-text, but its pretrained models can be used as embeddings (taken from the penultimate hidden layer) to represent speech audio with features usable for speaker classification.
Facebook published several pretrained models that can be accessed from Hugging Face.

a) get git-lfs

First, you should get the git version that supports large file downloads (the model we target here is about 1.5 GB).
On Ubuntu Linux, I do

sudo apt install git-lfs

On other operating systems, you should google how to get git-lfs.

b) download the model

Go to the Hugging Face repository for the model and copy the URL.

Clone the repository somewhere on your computer:

git-lfs clone https://huggingface.co/facebook/wav2vec2-large-robust-ft-swbd-300h
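Note: newer versions of git-lfs deprecate the dedicated git-lfs clone command; in that case, run git lfs install once and then use a plain git clone with the same URL.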

3) set up nkululeko

In your nkululeko configuration (*.ini) file, set the feature extractor to wav2vec and state the path to the model like this:

[FEATS]
type = wav2vec
model = /my path to the huggingface model/

That's all!
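For background, here is a minimal sketch of how such an embedding could be computed with the transformers library, assuming it is installed; this is just an illustration of the idea, not nkululeko's actual code:

import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# hypothetical path: the same one you state in the ini file
model_path = '/my path to the huggingface model/'
extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_path)
model = Wav2Vec2Model.from_pretrained(model_path)
model.eval()

# one second of silence as a stand-in for a real audio signal
signal = np.zeros(16000, dtype=np.float32)
inputs = extractor(signal, sampling_rate=16000, return_tensors='pt')
with torch.no_grad():
    out = model(inputs.input_values, output_hidden_states=True)
# mean-pool the penultimate hidden layer over time: one vector per audio file
embedding = out.hidden_states[-2].mean(dim=1).squeeze()
print(embedding.shape)  # torch.Size([1024]) for the large model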

Nkululeko: perform cross database experiments

This is one of a series of posts about how to use nkululeko.
If you're unfamiliar with nkululeko, you might want to start here.

This post is about cross-database experiments, i.e. training a classifier on one database and testing it on another, something that happens quite often in real-life situations.

In this post I will only talk about the config file; the python file can be re-used.

I'll walk you through the sections of the config file (all options here):
The first section deals with general setup:

[EXP]
# root is the base directory for the experiment relative to the python call
root = ./experiment_1/
# mainly a name for the top folder to store results (inside root)
name = cross_data

Next, the DATA section is in this case more complex than usual:

[DATA]
# list all databases
databases = ['polish', 'emodb']
# strategy as opposed to train_test
strategy = cross_data
# state which databases to use for training
trains = ['emodb']
# state which databases to use as a test
tests = ['polish']
# what is the target label?
target = emotion
# what are the category names?
labels = ['neutral', 'happy', 'sad', 'angry', 'fright.']
# for each database:
# where is it?
polish = PATH/polish-emotional-speech
# map the database's categories to a common set
polish.mapping = {'anger':'angry', 'joy':'happy', 'sadness':'sad', 'fear':'fright.', 'neutral':'neutral'}
# plot the distribution of categories
polish.value_counts = True
# and for the second database:
emodb = PATH/emodb
emodb.mapping = {'anger':'angry', 'happiness':'happy', 'sadness':'sad', 'fear':'fright.', 'neutral':'neutral'}
emodb.value_counts = True

The features section (better explained in this post):

[FEATS]
type = os

The classifier section (better explained in this post):

[MODEL]
type = xgb

Again, you might want to plot the final distribution of categories per train and test set:

[PLOT]
value_counts = True

Nkululeko: comparing classifiers and features

This is one of a series of posts about how to use nkululeko.

Although Nkululeko is meant as a programming library, many experiments can be done simply by adapting the configuration file of the experiment. If you're unfamiliar with nkululeko, you might want to start here.

This post is about machine classification (as opposed to regression problems) and gives an introduction to combining different feature sets with different classifiers.

In this post I will only talk about the config file; the python file can be re-used.

I'll walk you through the sections of the config file (all options here):
The first section deals with general setup:

[EXP]
# root is the base directory for the experiment relative to the python call
root = ./experiment_1/
# mainly a name for the top folder to store results (inside root)
name = exp_A
# needed only for neural net classifiers
#epochs = 100
# needed only for classifiers with random initialization
# runs = 3 

The DATA section deals with the data sets:

[DATA]
# list all the databases  you will be using
databases = ['emodb']
# state the path to the audformat root folder
emodb = /home/felix/data/audb/emodb
# split train and test based on different random speakers
emodb.split_strategy = speaker_split
# state the percentage of test speakers (in this case 4 speakers, as emodb only has 10 speakers)
emodb.testsplit = 40
# for a subsequent run you might want to skip the speaker selection, as it requires extracting features for each run
# emodb.split_strategy = reuse # then comment out the strategy above
# the target label that should be classified
target = emotion
# the categories for this label
labels = ['anger', 'boredom', 'disgust', 'fear', 'happiness', 'neutral', 'sadness']

The next section deals with the features that should be used by the classifier.

[FEATS]
# the type of features to use
type = os

The following alternatives are currently implemented (only os and trill are open source):

  • type = os # opensmile features
  • type = mld # mid level descriptors, to be published
  • type = trill # TRILL features, requires keras to be installed
  • type = spectra # log mel spectra, for convolutional ANNs

Next comes the MODEL section which deals with the classifier:

[MODEL]
# the main thing to specify is the kind of classifier:
type = xgb

Choices are:

  • type = xgb # XG-boost algorithm, based on classification trees
  • type = svm # Support Vector Machines, a classifier based on decision planes
  • type = mlp # Multi-Layer-Perceptron, needs a layer layout to be specified, e.g. layers = {'l1':64} (see the example below)
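For example, a hypothetical MLP configuration with two layers could look like this (the layer names and sizes are just an illustration; remember that neural net classifiers also need the epochs value in the EXP section):

[MODEL]
type = mlp
layers = {'l1':64, 'l2':16}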

And finally, the PLOT section specifies possible additional visualizations (a confusion matrix is always plotted):

[PLOT]
tsne = True

A t-SNE plot can be useful to estimate whether the selected features separate the categories at all.

Setting up a base nkululeko experiment

This is one of a series of posts on how to use nkululeko and deals with setting up the "hello world" of nkululeko: performing classification on the Berlin emodb emotional database.

Typically nkululeko experiments are defined by two files:

  • a python file that is called by the interpreter
  • an initialization file that is interpreted by the nkululeko framework

First we'll take a look at the python file:

# my_experiment.py
# Demonstration code to use the Nkululeko framework

import sys
sys.path.append("TO BE ADAPTED/nkululeko/src")
import configparser # to read the ini file
import experiment as exp # central nkululeko class
from util import Util # mainly for logging

def main(config_file):
    # load one configuration per experiment
    config = configparser.ConfigParser()
    config.read(config_file) # read in the ini file, the experiment is defined there
    util = Util() # init the logging and global stuff

    # create a new experiment
    expr = exp.Experiment(config)
    util.debug(f'running {expr.name}')

    # load the data sets (specified in ini file)
    expr.load_datasets()

    # split into train and test sets
    expr.fill_train_and_tests()
    util.debug(f'train shape : {expr.df_train.shape}, test shape:{expr.df_test.shape}')

    # extract features
    expr.extract_feats()
    util.debug(f'train feats shape : {expr.feats_train.df.shape}, test feats shape:{expr.feats_test.df.shape}')

    # initialize a run manager and run the experiment
    expr.init_runmanager()
    expr.run()
    print('DONE')

if __name__ == "__main__":
    main('PATH TO INI FILE/exp_emodb.ini') 
    # main(sys.argv[1]) # alternatively read it from command line
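If you use the commented-out sys.argv variant instead, you can pass the configuration file on the command line:

python my_experiment.py exp_emodb.ini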

And this would be a minimal nkululeko configuration file (tested with version 0.8):

[EXP]
root = ./emodb/
name = exp_emodb
[DATA]
databases = ['emodb']
emodb = TO BE ADAPTED/emodb
emodb.split_strategy = speaker_split
emodb.testsplit = 40
target = emotion
labels = ['anger', 'boredom', 'disgust', 'fear', 'happiness', 'neutral', 'sadness']
[FEATS]
type = os
[MODEL]
type = svm

I hope the names of the entries are self-explanatory; here's the link to the config file description.

Nkululeko: meta parameter optimization

Here's one idea for finding the optimal values for the two layers of an MLP net with nkululeko:

  • store your meta-parameters in arrays
  • loop over them and initialize an experiment each time
  • keep the experiment name but change your parameters and the plot name
  • this way you can re-use your extracted features and don't clutter your hard disk.

Here's some python code to illustrate this idea:

# assuming the same imports as in the basic experiment script
import sys
sys.path.append("./src")
import configparser
import experiment as exp
from util import Util

def main(config_file):
    # load one configuration per experiment
    config = configparser.ConfigParser()
    config.read(config_file)
    util = Util()
    l1s = [32, 64, 128]
    l2s = [16, 32, 64]
    for l1 in l1s:
        for l2 in l2s:
            # create a new experiment
            expr = exp.Experiment(config)

            plotname = f'{util.get_exp_name()}_{l1}_{l2}'
            util.set_config_val('PLOT', 'name', plotname)

            print(f'running {expr.name} with layers {l1} and {l2}')

            layers = {'l1':l1, 'l2':l2}
            util.set_config_val('MODEL', 'layers', layers)

            # load the data
            expr.load_datasets()

            # split into train and test
            expr.fill_train_and_tests()
            util.debug(f'train shape : {expr.df_train.shape}, test shape:{expr.df_test.shape}')

            # extract features
            expr.extract_feats()
            util.debug(f'train feats shape : {expr.feats_train.df.shape}, test feats shape:{expr.feats_test.df.shape}')

            # initialize a run manager
            expr.init_runmanager()

            # run the experiment
            expr.run()

    print('DONE')

Keep in mind, though, that meta-parameter optimization as done here is in itself a learning problem. It is usually not feasible to systematically try out all combinations of possible values, so some kind of stochastic approach is preferable, as sketched below.
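As a sketch of that idea (re-using the l1s and l2s value lists from above), you could sample a random subset of the grid instead of looping over all of it:

import random

random.seed(23)  # make the sampled combinations reproducible
l1s = [32, 64, 128]
l2s = [16, 32, 64]
n_trials = 4  # evaluate only a subset of the 9 possible combinations
combinations = set()
while len(combinations) < n_trials:
    combinations.add((random.choice(l1s), random.choice(l2s)))
for l1, l2 in sorted(combinations):
    # each pair would configure and run one experiment as shown above
    print(f'would run experiment with layers {l1} and {l2}')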

How to set up your first nkululeko project

Nkululeko is a framework to build machine learning models that recognize speaker characteristics on a very high level of abstraction (i.e. you can start without programming experience).

This post is meant to help you with setting up your first experiment, based on the Berlin Emodb.

1) Set up python

Nkululeko is written in Python, so first you have to set up a Python environment.
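For example, with Python's built-in venv module:

python3 -m venv venv
source venv/bin/activate  # on Windows: venv\Scripts\activate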

2) Get a database

Download the Berlin emodb database to some location on your hard drive, as discussed in this post. I will refer to this location as "emodb root" from now on.

3) Download nkululeko

Navigate with a browser to the nkululeko GitHub page, click on the "code" button and download the zip, or (better) clone the repository with your git software.

Unpack (if zip file) to some location on your hard disk that I will call "nkululeko root" from now on.

4) Install the required python packages

Do this inside the virtual environment that you created!

Navigate with a shell to the nkululeko root and install the python packages needed by nkululeko with

pip install -r requirements.txt

5) Adapt the ini file

Use your favourite editor, e.g. Visual Studio Code, and open the nkululeko root. If you use Visual Studio Code, set the path to the environment as the Python interpreter path and store this (nkululeko root and Python environment path) as a workspace configuration, so next time you can simply open the workspace and you're set up.

Open the exp_emodb.ini file and put your nkululeko root as the root value; for me this looks like this:

root = /home/felix/data/research/nkululeko/

Put the emodb root folder as the emodb value; for me this looks like this:

emodb = /home/felix/data/audb/emodb

An overview of all nkululeko options should be here.

6) Run the experiment

Inside a shell (or VSC's integrated terminal), start the process with

python my_experiment.py exp_emodb.ini

7) Inspect the results

If all goes well, the program should start by extracting opensmile features, and, when it's done, you should be able to inspect the results in the folder named after the experiment: exp_emodb.
There should be a subfolder named images with a confusion matrix and a subfolder named results for the textual results.

What to do next?

You might be interested in the "hello world" of nkululeko.
