Nkululeko: inspect your data with Spotlight

With nkululeko since version 0.67.0, the spotlight software is directly integrated into the explore module.

You can simply run your data filters, augmentations, machine learning experiments, segmentations and model predictions as usual, and then call the spotlight software by adding this to your configuration file:

[EXPL]
sample_selection = all # or train or test 
spotlight = True 

and running the explore module:

python -m nkululeko.explore --config myconfig.ini

Note that you might need to install an extra package:

pip install renumics-spotlight

A new web browser window should open as an interface to spotlight:

Nkululeko: generate a latex/pdf report

With nkululeko since version 0.66.3, a report document formatted in LaTeX and compiled as a PDF file can automatically be generated, essentially as a compilation of the images that the experiment generates.
There is a dedicated REPORT section in the config file for this; here is an example:

[REPORT]
# should the report be shown in the terminal at the end?
show = False 
# should a latex/pdf file be generated? if so, state the filename
latex = emodb_report
# name of the experiment author (default "anon")
author = Felix
# title of the report (default "report")
title = EmoDB

NOTE:
with each run of a nkululeko module in the same experiment environment, details will be added to the report.
So a typical use would be to first run the general module and then more specialized ones:

# first run a segmentation 
python -m nkululeko.segment --config myconf.ini 
# then rename the data-file in the config.ini and
# run some data exploration
python -m nkululeko.explore --config myconf.ini 
# then run a machine learning experiment
python -m nkululeko.nkululeko --config myconf.ini 

Each run will add some content to the report.

Torchaudio

If you use modules, feature extractors or models that rely on torchaudio with Nkululeko, e.g. the Resampler module or the Squim model, you need to install the nightly version:

pip uninstall -y torch torchvision torchaudio
pip install --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu

Nkululeko: get some statistics on correlation and effect size

With nkululeko since version 0.64.0, some statistics are printed as part of the plot titles.
With the explore module, you can plot correlations between the target (e.g. emotion or age) and other variables that are in the database, e.g. gender or duration, or anything you might have predicted with the predict module.
You need to distinguish whether your variables are categorical/nominal (strings) or real-valued (numbers).

If you plot the distribution of two categorical variables, the Chi-squared statistic is used to estimate whether the correlation is significant. A p-value is given in the title, as e.g. in this plot:

If you plot the distribution of two real-valued variables, the correlation will be estimated by Pearson's correlation coefficient:

If the target is categorical and the variable real-valued, we use Cohen's d to report the largest effect size between any pair of categories with respect to the variable:

If the target is also real-valued, it will be binned (made categorical) by default:

If you want to prevent that, you can set a value in the configuration:

[EXPL]
bin_reals = False

and you get a different plot:
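
For reference, here is a minimal sketch of how these statistics can be computed with pandas, scipy and numpy; the toy dataframe and its column names are made up for illustration, this is not Nkululeko's own code:

# toy illustration of the statistics mentioned above (not Nkululeko's code)
import numpy as np
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "emotion":  ["angry", "neutral", "angry", "neutral", "angry", "neutral"],
    "gender":   ["female", "male", "male", "female", "female", "male"],
    "age":      [25, 31, 42, 28, 35, 50],
    "duration": [2.1, 3.4, 1.8, 4.0, 2.5, 3.9],
})

# two categorical variables: Chi-squared test on the contingency table
chi2, p_chi2, dof, _ = stats.chi2_contingency(pd.crosstab(df["emotion"], df["gender"]))

# two real-valued variables: Pearson's correlation coefficient
r, p_pearson = stats.pearsonr(df["age"], df["duration"])

# categorical target vs. real-valued variable: Cohen's d between two categories
def cohens_d(x, y):
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1) +
                         (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / pooled_sd

d = cohens_d(df.loc[df["emotion"] == "angry", "duration"],
             df.loc[df["emotion"] == "neutral", "duration"])
print(f"chi2 p={p_chi2:.3f}, pearson r={r:.2f}, cohen's d={d:.2f}")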

How to fix different sampling rates in a dataset with Nkululeko

With nkululeko since version 0.62.0, you can automatically adjust the sampling rate to the 16 kHz standard, which is required by most models that might process your data.

A special module can be configured in the configuration file like this:

[RESAMPLE]
# which of the data splits to re-sample: train, test or all (both)
sample_selection = all
replace = True
target = data_resampled.csv

and then you call it like this:

python -m nkululeko.resample --config my_config.ini

WARNING: if replace = True, this changes (overwrites) ALL files in the splits, directly on your hard disk. Make sure to keep a backup copy of your database beforehand, in case the results are undesired or you still need the data at other sampling rates.

The default value, though, is replace = False. In that case, the target value will be used as the filename for the new dataframe, with file names that indicate that the sampling rate has been changed.

As stated above, only files in the test and train splits are affected. This means that you can apply any filtering beforehand, e.g. limiting the number of samples per speaker to 20, to pre-select samples.
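
Under the hood, resampling a single file to 16 kHz amounts to something like the following torchaudio sketch; this is only an illustration, not Nkululeko's actual code, and the file paths are made up:

# illustration only, not Nkululeko's implementation; paths are made up
import torchaudio

waveform, sr = torchaudio.load("data/sample.wav")
if sr != 16000:
    resampler = torchaudio.transforms.Resample(orig_freq=sr, new_freq=16000)
    torchaudio.save("data/sample_16khz.wav", resampler(waveform), 16000)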

Nkululeko: how to predict labels for your data from existing models and check them

With nkululeko since version 0.58.0, you can predict labels automatically for a given database, and then perhaps use these predictions to check on bias within your data.
One example:
You have a database labeled with smokers/non-smokers. You evaluate a machine learning model, inspect the features and find, to your astonishment, that the mean pitch is the most important feature to distinguish between smokers and non-smokers, with very high accuracy.
You suspect foul play and auto-label the data with a public model predicting biological sex (called gender in Nkululeko).
After a data exploration you see that most of the smokers are female and most of the non-smokers are male.
The machine learning model detected biological sex and not smoking behaviour.

How do you do this?
First, you need to predict the labels. In a configuration file, state the annotations you'd like to add to your data like this:

[DATA]
databases = ['mydata']
mydata = ... # location of the data
mydata.split_strategy = random # not important for this 
...
[PREDICT]
# the label names that should be predicted; possible are: 'gender', 'age', 'snr', 'valence', 'arousal', 'dominance', 'pesq', 'mos'
targets = ['gender']
# the split selection, use "all" for all samples in the database
sample_selection = all

You can then call the predict module with python:

python -m nkululeko.predict --config my_config.ini

The resulting new database file in CSV format will appear in the experiment folder.
The newly predicted values will be named with a trailing _pred, e.g. "gender_pred" for "gender".
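
For a quick first check, you can also load the resulting CSV directly with pandas; in this sketch the file name follows the explore example below, and the "smoker" column stands for the hypothetical original label:

# quick look at the predicted labels (sketch; "smoker" is a hypothetical label)
import pandas as pd

df = pd.read_csv("./my_exp/mydata_predicted.csv")
print(pd.crosstab(df["smoker"], df["gender_pred"]))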
You can then configure the explore module to visualize the correlation between the new labels and the original target:

[DATA]
databases = ['predicted']
predicted = ./my_exp/mydata_predicted.csv
predicted.type = csv
predicted.absolute_path = True
predicted.split_strategy = random
...
[EXPL]
# which labels to investigate in context with target label
value_counts = [['gender_pred']]
# the split selection
sample_selection = all

and then call the explore module:

python -m nkululeko.explore --config my_config.ini

The resulting visualizations are in the image folder of the experiment folder.
Here is an example of the correlation between emotion and estimated PESQ (Perceptual Evaluation of Speech Quality):

The effect size is stated as Cohen's d for the pair of categories with the largest value; in this case, the difference in estimated speech quality is largest between the categories neutral and angry.

Nkululeko: segmenting a database

Segmenting a database means splitting the audio samples of a database into smaller segments or chunks. With speech data, this is usually done on the basis of VAD (voice activity detection), meaning that the pauses between speech in the audio samples are used as segment borders.

One reason for segmenting could be to label the data with something that does not remain constant over the whole sample, e.g. emotional state.
Another motivation to segment audio data might be that the acoustic features are targeted at a specific stretch of audio, e.g. 3-5 seconds long.

Within nkululeko this would be done with the segment module, which is currently based on the silero software.

You simply call your experiment configuration with the segment module, and the train split, the test split or both will be segmented.
The advantage is that you can apply all filters on your data that might make sense beforehand; for example, in the android corpus only the reading task samples are not yet segmented.
You can select them like so:

[DATA]
filter = [['task', 'reading']]

and then call the segment module:

python -m nkululeko.segment --config my_conf.ini

The output is a new database file in CSV format.

If you want, you can specify whether only the training split, the test split, or both should be segmented, as well as the string that is appended to the name of the resulting CSV file (by default, the name consists of the database names):

[SEGMENT]
# name postfix
target = _segmented
# which model to use
method = silero
# which split: train, test or all (both)
sample_selection = all
# the minimum length of rest-samples (in seconds)
min_length = 2
# the maximum length of segments; longer ones are cut here (in seconds)
max_length = 10
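
For reference, the underlying voice activity detection looks roughly like this silero-vad sketch; it is only an illustration, not Nkululeko's actual code, and the file path is made up:

# illustration of silero VAD, not Nkululeko's code; the path is made up
import torch

model, utils = torch.hub.load("snakers4/silero-vad", "silero_vad")
get_speech_timestamps, _, read_audio, _, _ = utils

wav = read_audio("data/sample.wav", sampling_rate=16000)
# list of dicts with 'start' and 'end' sample indices of detected speech
segments = get_speech_timestamps(wav, model, sampling_rate=16000)
print(segments)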

Nkululeko: check your dataset

Within nkululeko, since version 0.53.0, you can perform automatic data checks, which means that some of your data might be filtered out if it doesn't fulfill certain requirements.

Currently two checks are implemented:

[DATA]
# check the filesize of all samples in train and test splits, in bytes
check_size = 1000
# check if the files contain speech with voice activity detection (VAD)
check_vad = True

The VAD check uses silero VAD.
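
Conceptually, the size check amounts to something like the following sketch (not Nkululeko's code; the file list is made up):

# sketch of a file-size check: drop samples whose audio file is too small
import os
import pandas as pd

df = pd.DataFrame({"file": ["data/a.wav", "data/b.wav"]})  # hypothetical index
check_size = 1000  # bytes, as in the config above
df = df[df["file"].apply(lambda f: os.path.getsize(f) >= check_size)]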

Nkululeko: how to visualize your data distribution

If you just want to see how your data is distributed over the target with nkululeko, you can do a value_counts plot with the explore module.

In your config, you would specify it like this:

[EXPL]
# all samples, or only test or train split?
sample_selection = all 
# activate the plot
value_counts = [['age'], ['gender'], ['duration'], ['duration', 'age']] 

and then run this with the explore module:

python -m nkululeko.explore --config myconfig.ini

The results, for a dataset with target=depression, look similar to this for all samples:


and this for the speakers (if there is a speaker annotation):

If you prefer a kernel density estimation over a histogram, you can select this with:

[EXPL]
dist_type = kde

which, for duration, would result in:

Nkululeko distinguishes between categorical and continuous properties; this would be the output for gender:

You can show the distribution of two sample properties at once by using a scatter plot:

In addition, this module will automatically plot the distribution of samples per speaker, per gender (if annotated):
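
For orientation, the difference between a histogram and a kernel density estimate corresponds roughly to the following seaborn calls; this is just a sketch on a made-up duration column, not Nkululeko's plotting code:

# histogram vs. kernel density estimate (sketch, not Nkululeko's plotting code)
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.DataFrame({"duration": [1.2, 2.4, 2.5, 3.1, 3.3, 4.0, 5.2]})
fig, axes = plt.subplots(1, 2)
sns.histplot(df["duration"], ax=axes[0])  # the default histogram
sns.kdeplot(df["duration"], ax=axes[1])   # corresponds to dist_type = kde
plt.show()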

Nkululeko: visualize clusters of your acoustic features

It can be very interesting to reduce the dimensionality of your acoustic or learned features to two or three dimensions and then color the individual samples with their label.

Nkululeko supports three different ways to reduce the dimensionality (see the sketch after this list):

  • pca: Principal Component Analysis
  • tsne: t-distributed stochastic neighbor embedding
    • perplexity=30, learning_rate=200
  • umap: Uniform Manifold Approximation and Projection
    • n_neighbors=10, random_state=0
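
The listed parameters correspond roughly to the following calls; this is a sketch with scikit-learn and umap-learn, not Nkululeko's code, and feats is a placeholder feature matrix:

# sketch of the three reductions with the parameters listed above
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
import umap

feats = np.random.rand(100, 64)  # placeholder: 100 samples, 64 features

pca_2d = PCA(n_components=2).fit_transform(feats)
tsne_2d = TSNE(n_components=2, perplexity=30, learning_rate=200).fit_transform(feats)
umap_2d = umap.UMAP(n_neighbors=10, random_state=0).fit_transform(feats)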

To do this, you simply state your data and features as usual. The approaches you want to use can be set in the scatter field of the EXPL section:

[EXPL]
scatter = ['umap', 'tsne', 'pca']

(of course, you don't have to use all of them) and then call the explore module:

python -m nkululeko.explore --config myconfig.ini

You can do this for all columns in your data, not only the target value.
If you want a scatter plot for a different target, state it like this (example):

[EXPL]
scatter = ['pca']
scatter.target = ['gender', 'age', 'likability']

And you can do it in two or three dimensions:

[EXPL]
scatter = ['pca']
scatter.dim = 3

The images appear in the image folder of your experiment and might look like this (all from the same data):

PCA

T-SNE

UMAP