
Nkululeko: perform cross database experiments

This is one of a series of posts about how to use nkululeko.
If you're unfamiliar with nkululeko, you might want to start here.

This post is about cross-database experiments, i.e. training a classifier on one database and testing it on another, something that happens quite often in real-life situations.
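
Conceptually, this is just: train on corpus A, test on corpus B. Outside of nkululeko, it might look like the following sketch (the variable names are hypothetical, assuming features and mapped labels have already been extracted for both corpora):

from sklearn.svm import SVC
from sklearn.metrics import recall_score

# train on one database (e.g. emodb) ...
clf = SVC().fit(feats_emodb, labels_emodb)
# ... and test on the other (e.g. polish)
preds = clf.predict(feats_polish)
print(recall_score(labels_polish, preds, average='macro'))  # UAR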

In this post I will only talk about the config file; the Python file can be re-used.

I'll walk you through the sections of the config file (all options here):
The first section deals with general setup:

[EXP]
# root is the base directory for the experiment relative to the python call
root = ./experiment_1/
# mainly a name for the top folder to store results (inside root)
name = cross_data

Next, the DATA section is in this case more complex than usual:

[DATA]
# list all databases
databases = ['polish', 'emodb']
# use the cross_data strategy (as opposed to the default train_test)
strategy = cross_data
# state which databases to use for training
trains = ['emodb']
# state which databases to use as test
tests = ['polish']
# what is the target label?
target = emotion
# what are the category names?
labels = ['neutral', 'happy', 'sad', 'angry', 'fright.']
# for each database:
# where is it?
polish = PATH/polish-emotional-speech
# map the database's categories to a common set
polish.mapping = {'anger':'angry', 'joy':'happy', 'sadness':'sad', 'fear':'fright.', 'neutral':'neutral'}
# plot the distribution of categories
polish.value_counts = True
# and for the second database:
emodb = PATH/emodb
emodb.mapping = {'anger':'angry', 'happiness':'happy', 'sadness':'sad', 'fear':'fright.', 'neutral':'neutral'}
emodb.value_counts = True
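
Under the hood, such a mapping simply translates each database's own label set to the common one; in plain pandas terms it amounts to something like this (an illustrative sketch, not nkululeko code):

import pandas as pd

emotions = pd.Series(['anger', 'joy', 'fear', 'neutral'])
mapping = {'anger': 'angry', 'joy': 'happy', 'fear': 'fright.', 'neutral': 'neutral'}
print(emotions.map(mapping).tolist())
# ['angry', 'happy', 'fright.', 'neutral']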

The features section, which is explained in more detail in this post:

[FEATS]
type = os

The classifier section, which is explained in more detail in this post:

[MODEL]
type = xgb

Again, you might want to plot the final distribution of categories per train and test set:

[PLOT]
value_counts = True

Nkululeko: comparing classifiers and features

This is one of a series of posts about how to use nkululeko.

Although Nkululeko is meant as a programming library, many experiments can be done simply by adapting the configuration file of the experiment. If you're unfamiliar with nkululeko, you might want to start here.

This post is about machine classification (as opposed to regression problems) and gives an introduction to combining different feature sets with different classifiers.

In this post I will only talk about the config file; the Python file can be re-used.

I'll walk you through the sections of the config file (all options here):
The first section deals with general setup:

[EXP]
# root is the base directory for the experiment relative to the python call
root = ./experiment_1/
# mainly a name for the top folder to store results (inside root)
name = exp_A
# needed only for neural net classifiers
#epochs = 100
# needed only for classifiers with random initialization
# runs = 3 

The DATA section deals with the data sets:

[DATA]
# list all the databases you will be using
databases = ['emodb']
# state the path to the audformat root folder
emodb = /home/felix/data/audb/emodb
# split train and test based on different random speakers
emodb.split_strategy = speaker_split
# state the percentage of test speakers (in this case 4 speakers, as emodb only has 10 speakers)
emodb.testsplit = 40
# for a subsequent run you might want to reuse the splits, as a new speaker selection requires the features to be extracted again
# emodb.split_strategy = reuse # (comment out the strategy above then)
# the target label that should be classified
target = emotion
# the categories for this label
labels = ['anger', 'boredom', 'disgust', 'fear', 'happiness', 'neutral', 'sadness']

The next section deals with the features that should be used by the classifier.

[FEATS]
# the type of features to use
type = os

The following alternatives are currently implemented (only os and trill are open source):

  • type = os # opensmile features
  • type = mld # mid level descriptors, to be published
 • type = trill # TRILL features, requires keras to be installed
  • type = spectra # log mel spectra, for convolutional ANNs
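
For reference, the os features are opensmile acoustic features; extracting them yourself would look roughly like the following sketch (assuming the opensmile package is installed and files is a list of audio paths; the exact feature set nkululeko uses may differ):

import opensmile

smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,
    feature_level=opensmile.FeatureLevel.Functionals,
)
feats = smile.process_files(files)  # one row of functionals per file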

Next comes the MODEL section which deals with the classifier:

[MODEL]
# the main thing to specify is the kind of classifier:
type = xgb

Choices are:

  • type = xgb # XG-boost algorithm, based on classification trees
  • type = svm # Support Vector Machines, a classifier based on decision planes
  • type = mlp # Multi-Layer-Perceptron, needs a layer-layout to be specified, e.g. layers = {'l1':64}
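
For orientation, these roughly correspond to the following classifiers outside of nkululeko (an illustrative sketch, not nkululeko's internal code):

from sklearn.svm import SVC                       # type = svm
from sklearn.neural_network import MLPClassifier  # similar in spirit to type = mlp
from xgboost import XGBClassifier                 # type = xgb

# e.g. an MLP with one hidden layer of 64 neurons, like layers = {'l1':64}
clf = MLPClassifier(hidden_layer_sizes=(64,))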

And finally, the PLOT section specifies possible additional visualizations (a confusion matrix is always plotted):

[PLOT]
tsne = True

A t-SNE plot can be useful to estimate if the selected features separate the categories at all.
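
If you want to experiment with such a plot outside of nkululeko, a minimal sketch with scikit-learn could look like this (assuming feats is a samples-times-features array and labels holds the corresponding category labels):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# project the high-dimensional features to two dimensions
points = TSNE(n_components=2, random_state=42).fit_transform(feats)
# one scatter color per category
for label in np.unique(labels):
    mask = np.array(labels) == label
    plt.scatter(points[mask, 0], points[mask, 1], label=label, s=10)
plt.legend()
plt.show()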

Setting up a base nkululeko experiment

This is one of a series of posts on how to use nkululeko and deals with setting up the "hello world" of nkululeko: performing classification on the Berlin emodb emotional database.

Typically nkululeko experiments are defined by two files:

  • a python file that is called by the interpreter
  • an initialization file that is interpreted by the nkululeko framework

First we'll take a look at the python file:

# my_experiment.py
# Demonstration code to use the Nkululeko framework

import sys
sys.path.append("TO BE ADAPTED/nkululeko/src")
import configparser # to read the ini file
import experiment as exp # central nkululeko class
from util import Util # mainly for logging

def main(config_file):
    # load one configuration per experiment
    config = configparser.ConfigParser()
    config.read(config_file) # read in the ini file, the experiment is defined there
    util = Util() # init the logging and global stuff

    # create a new experiment
    expr = exp.Experiment(config)
    util.debug(f'running {expr.name}')

    # load the data sets (specified in ini file)
    expr.load_datasets()

    # split into train and test sets
    expr.fill_train_and_tests()
    util.debug(f'train shape : {expr.df_train.shape}, test shape:{expr.df_test.shape}')

    # extract features
    expr.extract_feats()
    util.debug(f'train feats shape : {expr.feats_train.df.shape}, test feats shape:{expr.feats_test.df.shape}')

    # initialize a run manager and run the experiment
    expr.init_runmanager()
    expr.run()
    print('DONE')

if __name__ == "__main__":
    main('PATH TO INI FILE/exp_emodb.ini') 
    # main(sys.argv[1]) # alternatively read it from command line

and this would be a minimal nkululeko configuration file (tested with version 0.8):

[EXP]
root = ./emodb/
name = exp_emodb
[DATA]
databases = ['emodb']
emodb = TO BE ADAPTED/emodb
emodb.split_strategy = speaker_split
emodb.testsplit = 40
target = emotion
labels = ['anger', 'boredom', 'disgust', 'fear', 'happiness', 'neutral', 'sadness']
[FEATS]
type = os
[MODEL]
type = svm

I hope the names of the entries are self-explanatory; here's the link to the config file description.

How to set up your first nkululeko project

Nkululeko is a framework to build machine learning models that recognize speaker characteristics on a very high level of abstraction (i.e. starting without programming experience).

This post is meant to help you with setting up your first experiment, based on the Berlin Emodb.

1) Set up python

Nkululeko is written in Python, so first you have to set up a Python environment.

2) Get a database

Load the Berlin emodb database to some location on your hard drive, as discussed in this post. I will refer to this location as "emodb root" from now on.

3) Download nkululeko

Navigate with a browser to the nkululeko github page and click on the "code" button; download the zip or (better) clone with your git software.

Unpack (if zip file) to some location on your hard disk that I will call "nkululeko root" from now on.

4) Install the required python packages

Inside the virtual environment that you created!

Navigate with a shell to the nkululeko root and install the python packages needed by nkululeko with

pip install -r requirements.txt

5) Adapt the ini file

Use your favourite editor, e.g. Visual Studio Code, and open the nkululeko root. If you use Visual Studio Code, set the path to the environment as the Python interpreter path and store this (nkululeko root and Python environment path) as a workspace configuration, so next time you can simply open the workspace and you're set up.

Open the exp_emodb.ini file and put your nkululeko root as the root value, for me this looks like this:

root = /home/felix/data/research/nkululeko/

Put the emodb root folder as the emodb value, for me this looks like this

emodb = /home/felix/data/audb/emodb

An overview of all nkululeko options can be found here.

6) Run the experiment

Inside a shell (or from VSC), start the process with

python my_experiment.py exp_emodb.ini

7) Inspect the results

If all goes well, the program should start by extracting opensmile features, and, once it's done, you should be able to inspect the results in the folder named after the experiment: exp_emodb.
There should be a subfolder named `images` containing a confusion matrix, and a subfolder named `results` for the textual results.

What to do next?

You might be interested in the hello world of nkululeko.

Get all information from emodb

When you load the Berlin emodb, as has been done in numerous postings of this blog, by default you will only get information on file name, speaker id, text id and emotion.

But there is more information contained in the audformat file, and this post shows you how to access it.

If not already somewhere on your computer, start by downloading the emodb:

import os

if not os.path.isdir('./emodb/'):
    !wget -c https://tubcloud.tu-berlin.de/s/LzPWz83Fjneb6SP/download
    !mv download emodb_audformat.zip
    !unzip emodb_audformat.zip
    !rm emodb_audformat.zip

This code will then load the database, prepare a single dataframe with all information and store it to disk for later use:

# load the database to memory
import audformat

root = './emodb/'
db = audformat.Database.load(root)
# map the file paths to the absolute audio locations
db.map_files(lambda x: os.path.join(root, x))
# access speaker gender and age, and transcription, from the speaker dictionaries
df = db.tables['files'].get(map={'speaker': ['speaker', 'gender', 'age'], 'transcription': ['transcription']})
# copy the emotion label from the emotion dataframe to the files dataframe
df['emotion'] = db.tables['emotion'].df['emotion']
# add a column with the word count
df['wordcount'] = df['transcription'].apply(lambda row: len(row.split()))
# store to disk for later use (create the folder if needed)
os.makedirs('store', exist_ok=True)
df.to_pickle('store/emodb.pkl')

df.head(1)
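
Later, the stored dataframe can simply be read back without parsing the database again:

import pandas as pd

df = pd.read_pickle('store/emodb.pkl')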

Predict emodb emotions with a Multi Layer Perceptron ANN

This post shows you how to classify emotions with a Multi Layer Perceptron (MLP) artificial neural net based on the torch framework (a different very famous ANN framework would be Keras).

Here's a complete jupyter notebook for your convenience.

We start with some imports, you need to install these packages, e.g. with pip, before you run this code:

import audformat
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import torch
import os
import opensmile
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import recall_score

Then we need to download and prepare our sample dataset, the Berlin emodb:

# get and unpack the Berlin Emodb emotional database if not already there
if not os.path.isdir('./emodb/'):
    !wget -c https://tubcloud.tu-berlin.de/s/LzPWz83Fjneb6SP/download
    !mv download emodb_audformat.zip
    !unzip emodb_audformat.zip
    !rm emodb_audformat.zip
# prepare the dataframe
db = audformat.Database.load('./emodb')
root = './emodb/'
db.map_files(lambda x: os.path.join(root, x))    
df_emotion = db.tables['emotion'].df
df = db.tables['files'].df
# copy the emotion label from the emotion dataframe to the files dataframe
df['emotion'] = df_emotion['emotion']

As neural nets can only deal with numbers, we need to encode the target emotion labels with numbers:

# Encode the emotion words as numbers and use this as target 
target = 'enc_emo'
encoder = LabelEncoder()
encoder.fit(df['emotion'])
df[target] = encoder.transform(df['emotion'])
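
The encoder can later be used to translate numeric predictions back to emotion names, e.g.:

# map encoded values back to the original emotion labels
print(encoder.inverse_transform([0, 1]))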

Now the dataframe should look like this:

df.head()

To ensure that we learn about emotions and not speaker idiosyncrasies, we need speaker-disjunct training and development sets:

# define fixed speaker disjunct train and test sets
train_spkrs = df.speaker.unique()[5:]
test_spkrs = df.speaker.unique()[:5]
df_train = df[df.speaker.isin(train_spkrs)]
df_test = df[df.speaker.isin(test_spkrs)]

print(f'#train samples: {df_train.shape[0]}, #test samples: {df_test.shape[0]}')
#train samples: 292, #test samples: 243

Next, we need to extract some acoustic features:

# extract (or get) GeMAPS features
if os.path.isfile('feats_train.pkl'):
    feats_train = pd.read_pickle('feats_train.pkl')
    feats_test = pd.read_pickle('feats_test.pkl')
else:
    smile = opensmile.Smile(
        feature_set=opensmile.FeatureSet.GeMAPSv01b,
        feature_level=opensmile.FeatureLevel.Functionals,
    )
    feats_train = smile.process_files(df_train.index)
    feats_test = smile.process_files(df_test.index)
    feats_train.to_pickle('feats_train.pkl')
    feats_test.to_pickle('feats_test.pkl')

Because neural nets are sensitive to large numbers, we scale all features to a mean of 0 and a standard deviation of 1. Note that the scaler is fitted on the training set only, so no information from the test set leaks into training:

# Perform a standard scaling / z-transformation on the features (mean=0, std=1)
scaler = StandardScaler()
scaler.fit(feats_train)
feats_train_norm = pd.DataFrame(scaler.transform(feats_train))
feats_test_norm = pd.DataFrame(scaler.transform(feats_test))

Next we define two torch dataloaders, one for the training and one for the dev set:

def get_loader(df_x, df_y):
    data = []
    for i in range(len(df_x)):
        # cast the features to float32, as torch layers expect this by default
        data.append([df_x.values[i].astype('float32'), df_y[target].iloc[i]])
    return torch.utils.data.DataLoader(data, shuffle=True, batch_size=8)
trainloader = get_loader(feats_train_norm, df_train)
testloader = get_loader(feats_test_norm, df_test)

We can then define the model, in this example with one hidden layer of 16 neurons:

class MLP(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Sequential(
            torch.nn.Linear(feats_train_norm.shape[1], 16),
            torch.nn.ReLU(),
            torch.nn.Linear(16, len(encoder.classes_))
        )
    def forward(self, x):
        # remove a possible channel dimension: (batch_size, 1, features) -> (batch_size, features)
        x = x.squeeze(dim=1)
        return self.linear(x)

We define two functions to train and evaluate the model:

def train_epoch(model, loader, device, optimizer, criterion):
    model.train()
    losses = []
    for features, labels in loader:
        logits = model(features.to(device))
        loss = criterion(logits, labels.to(device))
        losses.append(loss.item())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return (np.asarray(losses)).mean()

def evaluate_model(model, loader, device, encoder):
    logits = torch.zeros(len(loader.dataset), len(encoder.classes_))
    targets = torch.zeros(len(loader.dataset))
    model.eval()
    with torch.no_grad():
        for index, (features, labels) in enumerate(loader):
            start_index = index * loader.batch_size
            end_index = (index + 1) * loader.batch_size
            if end_index > len(loader.dataset):
                end_index = len(loader.dataset)
            logits[start_index:end_index, :] = model(features.to(device))
            targets[start_index:end_index] = labels

    predictions = logits.argmax(dim=1)
    uar = recall_score(targets.numpy(), predictions.numpy(), average='macro')
    return uar, targets, predictions
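
As a reminder, the unweighted average recall (UAR) is simply the mean of the per-class recalls, which makes it robust against unbalanced class distributions; a toy example:

from sklearn.metrics import recall_score

# class 0: recall 1.0 (2 of 2), class 1: recall 0.5 (1 of 2) -> UAR = 0.75
print(recall_score([0, 0, 1, 1], [0, 0, 1, 0], average='macro'))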

Next we initialize the model and set the loss function (criterion) and optimizer:

device = 'cpu'
model = MLP().to(device)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)
epoch_num = 250
uars_train = []
uars_dev = []
losses = []

We can then do the training loop over the epochs:

for epoch in range(0, epoch_num):
    loss = train_epoch(model, trainloader, device, optimizer, criterion)
    losses.append(loss)
    acc_train = evaluate_model(model, trainloader, device, encoder)[0]
    uars_train.append(acc_train)
    acc_dev, truths, preds = evaluate_model(model, testloader, device, encoder)
    uars_dev.append(acc_dev)
# scale the losses so they fit on the picture
losses = np.asarray(losses)/2

Next we might want to take a look at how the net performed with respect to unweighted average recall (UAR):

plt.figure(dpi=200)
plt.plot(uars_train, 'green', label='train set') 
plt.plot(uars_dev, 'red', label='dev set')
plt.plot(losses, 'grey', label='losses/2')
plt.xlabel('epochs')
plt.ylabel('UAR')
plt.legend()
plt.show()

And perhaps see the resulting confusion matrix:

from sklearn.metrics import ConfusionMatrixDisplay
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(truths, preds, normalize='true')
disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=encoder.classes_).plot(cmap='gray')

How to set up a python project

These are some general best-practice tips on how to organize your seminar project.

Set up a git account

git is a software that saves your work on the internet so you can always go back to earlier versions if something goes wrong. A bit like a backup system, but also great for collaborative work.

  • install the "git" software on your computer
  • go to github.com (or try gitlab.org) and get yourself an account.
  • create a new repository there and name it e.g. my-sample-project
  • if it's a Python project, select the Python template for the .gitignore file (this will ignore typical Python temporary files for upload).
  • go to the main repository page, open the "code" dropdown button and copy the "clone" URL.
  • On your computer, in a shell/terminal/console, go to where your project should reside (I strongly encourage you to use a path without whitespace in it) and type
    git clone <URL>

    and the project folder should be created and is linked with the git repository.

  • learn about the basic git commands by searching for a quick tutorial (git cheat sheet).

install python

  • install a python version, use version >= 3

set up a virtual environment

  • enter your project folder,
    cd my-sample-project
  • create a virtual environment that will contain all the python packages that you use in your project:
    virtualenv -p python3 my-project_env

    If virtualenv is not installed, you can either install it or create the environment with

    python3 -m venv my-project_env
  • then activate the environment
    source my-project_env/bin/activate

    (might be different for other operating systems)

  • you should recognize the activated environment by its name in brackets preceding the prompt, e.g. something like
    (my-project_env) user@system:/bla/path/$

make the kernel explicit for jupyter

If you use jupyter notebooks, it's safer to explicitly state the python kernel of your environment.
Within the activated environment:

python -m ipykernel install --name my-project_env

If the module ipykernel is not found, you can install it simply with pip:

pip install ipykernel

Get yourself a python IDE

IDE means 'integrated development environment' and is something like a very comfortable editor for python source files. If you already know and use one of the many, I wouldn't know a reason to switch. If not, I'd suggest you take a look at VSC, the visual studio code editor, as it's free of cost, available on many platforms and can be extended with many available plugins.

I've made a screencast (in German) on how to install python and jupyter notebooks on Windows

How to install Speechalyzer/Labeltool

I wrote a Java tool to annotate/transcribe speech data and would like to show in this blog how to run it on your system.

First of all, the software is programmed in Java, so you need a Java installation on your system. There are two flavours:

  • a JDK (Java development kit) would be the one to use if you plan to program in Java,
  • a JRE (Java runtime environment) is sufficient to run programs written in Java such as the Speechalyzer, so both (JDK or JRE) work.

To test whether you got Java on your system, you might want to open a shell/terminal/console (i.e. a window where you can type in system commands) and type

java -version

which should either output a response from the Java interpreter displaying the version, or an error message that the program is not installed. As Java is required to run Speechalyzer, please make sure it is installed.

The next step would be to download the Speechalyzer, which actually comes as two programs:

  • Speechalyzer is the main program which acts as a server to process audio files and actually can be run standalone.
  • Labeltool is the GUI client for Speechalyzer and can be started when Speechalyzer is running to interact with the program via point and click.

To install the programs, click on the links above, open the "code" dropdown menus on the github pages and select either "as zip file" or use git. If you don't know git, I strongly recommend learning about it and using it, as it's a mighty tool to version and back up your work, but for now let's assume you use zip.

Save the zip files somewhere on your computer's hard disk; perhaps create a folder "programs" or "research" in your user home folder.

Unzip both folders.

Both of them have configuration files which should be edited with an arbitrary text editor.

Speechalyzer has a file called "speechalyzer.properties" which is located in the "res" folder inside the main folder. So if you work with a Linux system, you might want to type

cd Speechalyzer-master
pico res/speechalyzer.properties

and change at least the values for "file type" and "sample rate" to something that makes sense for your audio files.

Adapting the Labeltool to your needs is a bit more complicated, so I wrote a separate blog post on this.

If all went well, you're set up and can try the Speechalyzer by printing out its usage in the shell:

java -jar Speechalyzer.jar -h

There are two options to load audio files:
1) copy them to the "recording" directory in the Speechalyzer folder
2) specify the path at startup:

java -jar Speechalyzer.jar -rd /path/to/my/audio/files

either way, you should see a startup message from the program stating how many files were loaded.

You might then want to open another shell/console/terminal, navigate to the Labeltool folder and start the program with

java -jar Labeltool.jar

which should result in a startup window with loaded audio files:

How to adapt Speechalyzer/Labeltool to your own labels/experiments

I wrote a Java tool to annotate/transcribe speech data and would like to show in this blog how to edit the configuration for an adapted layout of the GUI (Labeltool is the GUI of Speechalyzer).
If you start Labeltool without a Speechalyzer server running, it should give an error, but for this demonstration it can be ignored:

java -jar Labeltool.jar 

Your GUI might look like this or different, depending on the configuration. The configuration is a text file called labeltool.config that should reside in the same folder as Labeltool.jar. You can open it with a text editor of your choice:

and in the upper section you can try to hide or show GUI panels by setting the switches to true or false.
You cannot switch off the Label panel, as this is the most basic part of Labeltool and always there. In the lower section you will find some button configurations:

So the categoryNames field decides which button series is shown (I hope the rest is self explanatory).
In the example config I depicted above, the Labeltool would look like this (if you closed and re-opened the GUI):

Disclaimer:
If you set

withRecoderControl=false

you will not be able to play any sound (because this logic is behind the then-hidden play button).

How to create an audformat Database from a pandas Dataframe

This tutorial explains how to initialize an audformat database object from a data collection that's stored in a pandas dataframe.
You can also find an official example using emodb here.

First you need the necessary imports:

import os                       # file operations
import pandas as pd             # work with tables
pd.set_option('display.max_rows', 10)

import audformat.define as define  # some definitions
import audformat.utils as utils    # util functions
import audformat
import pickle

We load a sample pandas dataframe from a speech collection labeled with age and gender.

df = pickle.load(open('../files/sample_df.pkl', 'rb'))
df.head(1)


We can then construct an audformat Database object from this data like this:

# remove the absolute path to the audio samples 
root = '/my/example/path/'
files = [file.replace(root, '') for file in df.index.get_level_values('file')]

# start with a general description
db = audformat.Database(
    name='age-gender-samples',
    source='intern',
    usage=audformat.define.Usage.RESEARCH,
    languages=[audformat.utils.map_language('de')],
    description=(
        'Short snippets annotated by '
        'speaker and speaker age and gender.'
    ),
)
# add audio format information
db.media['microphone'] = audformat.Media(
    type=audformat.define.MediaType.AUDIO,
    sampling_rate=16000,
    channels=1,
    format='wav',
)
# Describe the age data
db.schemes['age'] = audformat.Scheme(
    dtype=audformat.define.DataType.INTEGER,
    minimum=0,
    maximum=100,
    description='Speaker age in years',
)
# describe the gender data
db.schemes['gender'] = audformat.Scheme(
    labels=[
        audformat.define.Gender.FEMALE,
        audformat.define.Gender.MALE,
    ],
    description='Speaker sex',
)
# describe the speaker id data
db.schemes['speaker'] = audformat.Scheme(
    dtype=audformat.define.DataType.STRING,
    description='Name of the speaker',
)
# initialize a data table with an index which corresponds to the file names
db['files'] = audformat.Table(
    audformat.filewise_index(files),
    media_id='microphone',
)
# now add columns to the table for each data item of interest (age, gender and speaker id)
db['files']['age'] = audformat.Column(scheme_id='age')
db['files']['age'].set(df['age'])
db['files']['gender'] = audformat.Column(scheme_id='gender')
db['files']['gender'].set(df['gender'])
db['files']['speaker'] = audformat.Column(scheme_id='speaker')
db['files']['speaker'].set(df['speaker'])

and finally inspect the result

db

name: age-gender-samples
description: Short snippets annotated by speaker and speaker age and gender.
source: intern
usage: research
languages: [deu]
media:
  microphone: {type: audio, format: wav, channels: 1, sampling_rate: 16000}
schemes:
  age: {description: Speaker age in years, dtype: int, minimum: 0, maximum: 100}
  gender:
    description: Speaker sex
    dtype: str
    labels: [female, male]
  speaker: {description: Name of the speaker, dtype: str}
tables:
  files:
    type: filewise
    media_id: microphone
    columns:
      age: {scheme_id: age}
      gender: {scheme_id: gender}
      speaker: {scheme_id: speaker}

and perhaps, as a test, get the unique values of all speakers:

    db.tables['files'].df.speaker.unique()

Important: note that the path to the audio files needs to be relative to where the database.yaml file resides and is not allowed to start with "./", so if you do

db.files[0]

this should result in something like

audio/mywav_0001.wav
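
To actually use this database later (and make the relative file paths valid), you would store it to disk next to the audio files; a minimal sketch, assuming ./age-gender-samples/ as the database folder:

# store the header (yaml) and the tables to the database folder
db.save('./age-gender-samples/')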