Setting up a base nkululeko experiment

This is one of a series of posts on how to use nkululeko and deals with setting up the "hello world" of nkululeko: performing classification on the Berlin emodb emotional database.

Nkululeko experiments are defined by one file:

  • an initialization file that is interpreted by the nkululeko framework

This would be a minimal nkululeko configuration file (tested with version 0.8):

[EXP]
root = ./results
name = exp_emodb
[DATA]
databases = ['emodb']
emodb = TO BE ADAPTED/emodb
emodb.split_strategy = speaker_split
emodb.testsplit = 40
target = emotion
labels = ['anger', 'boredom', 'disgust', 'fear', 'happiness', 'neutral', 'sadness']
[FEATS]
type = os
[MODEL]
type = svm

I hope the names of the entries are self-explanatory; here's the link to the config file description.

Nkululeko: meta parameter optimization

With classifiers that are derived from sklearn, you can simply state your variants for a meta parameter in the ini file:

[MODEL]
type = svm
tuning_params = ['C']
scoring = recall_macro
C = [10, 1, 0.1, 0.01, 0.001, 0.0001]

This will iterate the C parameter of the SVM classifier over the stated values and choose the best performing model.
You can specify several "tuning_params"; then a grid search (combining all values with each other) will be performed.

Here's an example for an XGB classifier:

[MODEL]
type = xgb
tuning_params = ['subsample', 'n_estimators', 'max_depth']
subsample = [.5, .7]
n_estimators = [50, 80, 200]
max_depth = [1, 6]

Here's one idea of how to find the optimal values for two layers of an MLP net with nkululeko:

  • store your meta parameters in arrays
  • loop over them and initialize an experiment each time
  • keep the experiment name but change your parameters and the plot name
  • this way you can re-use your extracted features and don't clutter your hard disk.

Here's some Python code to illustrate this idea (it assumes the nkululeko Experiment class, imported here as exp, and the Util helper class are available; the exact import paths depend on your nkululeko version):

import configparser

def main(config_file):
    # load one configuration per experiment
    config = configparser.ConfigParser()
    config.read(config_file)
    util = Util()
    l1s = [32, 64, 128]
    l2s = [16, 32, 64]
    for l1 in l1s:
        for l2 in l2s:
            # create a new experiment
            expr = exp.Experiment(config)

            plotname = f'{util.get_exp_name()}_{l1}_{l2}'
            util.set_config_val('PLOT', 'name', plotname)

            print(f'running {expr.name} with layers {l1} and {l2}')

            layers = {'l1':l1, 'l2':l2}
            util.set_config_val('MODEL', 'layers', layers)

            # load the data
            expr.load_datasets()

            # split into train and test
            expr.fill_train_and_tests()
            util.debug(f'train shape : {expr.df_train.shape}, test shape:{expr.df_test.shape}')

            # extract features
            expr.extract_feats()
            util.debug(f'train feats shape : {expr.feats_train.df.shape}, test feats shape:{expr.feats_test.df.shape}')

            # initialize a run manager
            expr.init_runmanager()

            # run the experiment
            expr.run()

    print('DONE')

Keep in mind though that meta parameter optimization as done here is in itself a learning problem. It is usually not feasible to systematically try out all combinations of possible values, so some kind of stochastic approach is preferable.
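For example, instead of the full grid over the two layer sizes from the code above, you could sample a random subset of the combinations; a minimal sketch (the value ranges are just an illustration):

import itertools
import random

l1s = [16, 32, 64, 128, 256]
l2s = [8, 16, 32, 64, 128]

# instead of all 25 combinations, try a random sample of 8
random.seed(42)
for l1, l2 in random.sample(list(itertools.product(l1s, l2s)), k=8):
    print(f'next trial: layers {l1} and {l2}')
    # ... set up and run the experiment as in the loop above ...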

How to set up your first nkululeko project

Nkululeko is a framework to build machine learning models that recognize speaker characteristics on a very high level of abstraction (i.e. starting without programming experience).

This post is meant to help you with setting up your first experiment, based on the Berlin Emodb.

1) Set up python

It's written in Python, so first you have to set up a Python environment.
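One common way to do this, assuming Python 3 is already installed, is to create and activate a virtual environment:

python3 -m venv venv
source venv/bin/activate      # on Windows: venv\Scripts\activate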

2) Get a database

Load the Berlin emodb database to some location on your hard drive, as discussed in this post. I will refer to that location as "emodb root" from now on.

3) Install nkululeko

Inside your virtual environment, run

pip install nkululeko

This should install nkululeko and all required modules.
The first installation can take a while and quite some disk space.

4) Adapt the ini file

Use your favourite editor, e.g. Visual Studio Code, to edit the file that defines your experiment. You might start with this demo sample.
You can find more templates to start from here and an overview of all the options you can set here.

Put the emodb root folder as the emodb value; for me this looks like this:

emodb = /home/felix/data/audb/emodb

An overview of all nkululeko options can be found here.

5) Run the experiment

Inside a shell (or using VSC's integrated terminal), start the process with

python -m nkululeko.nkululeko --config exp_emodb.ini

6) Inspect the results

If all goes well, the program should start by extracting opensmile features and, when it's done, you should be able to inspect the results in the folder named after the experiment: exp_emodb.
There should be a subfolder with a confusion matrix named `images` and a subfolder for the textual results named `results`.

What to do next?

You might be interested in the hello world of nkululeko


Get all information from emodb

When you load the Berlin emodb, as has been done in numerous posts of this blog, you will by default only get information on file name, speaker id, text id and emotion.

But there is more information contained in the audformat database, and this post shows you how to access it.

If not already somewhere on your computer, start by downloading the emodb:

import os

if not os.path.isdir('./emodb/'):
    !wget -c https://tubcloud.tu-berlin.de/s/LfkysdXJfiobiEG/download
    !mv download emodb_audformat.zip
    !unzip emodb_audformat.zip
    !rm emodb_audformat.zip

This code will then load the database, prepare a single dataframe with all information and store it to disk for later use:

import audformat

# load the database to memory
root = './emodb/'
db = audformat.Database.load(root)
# map the file paths to the audio
db.map_files(lambda x: os.path.join(root, x))
# access speaker gender, age and transcription from the speaker dictionaries
df = db.tables['files'].get(map={'speaker': ['speaker', 'gender', 'age'], 'transcription': ['transcription']})
# copy the emotion label from the emotion dataframe to the files dataframe
df['emotion'] = db.tables['emotion'].df['emotion']
# add a column with the word count
df['wordcount'] = df['transcription'].apply(lambda row: len(row.split()))
# store to disk for later use
os.makedirs('store', exist_ok=True)
df.to_pickle('store/emodb.pkl')

df.head(1)
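Later you can then simply read the stored dataframe back from disk instead of re-loading the database:

import pandas as pd
df = pd.read_pickle('store/emodb.pkl')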

Machine learning experiment framework

Currently I'm working on (yet another) framework for machine learning, i.e. a Python-coded set of classes that can be used to run machine learning experiments in a flexible but reusable way.

I'm not sure where this is heading yet, but a first runnable version exists; if you're interested, check it out at my github account, I'll post updates there.

The general idea looks something like this:
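Very roughly, and with purely illustrative class and method names (not the actual API), a sketch of that idea in code:

class Experiment:
    """Ties together data, features and a model (illustrative sketch only)."""
    def __init__(self, config):
        self.config = config

    def load_datasets(self):
        # read the databases listed in the config into dataframes
        ...

    def extract_feats(self):
        # compute (or load cached) acoustic features for train and test sets
        ...

    def run(self):
        # train the configured model and evaluate it on the test set
        ...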

Predict emodb emotions with a Multi Layer Perceptron ANN

This post shows you how to classify emotions with a Multi Layer Perceptron (MLP) artificial neural net based on the torch framework (a different very famous ANN framework would be Keras).

Here's a complete jupyter notebook for your convenience.

We start with some imports; you need to install these packages, e.g. with pip, before you run this code:

import audformat
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import torch
import os
import opensmile
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import recall_score

Then we need to download and prepare our sample dataset, the Berlin emodb:

# get and unpack the Berlin Emodb emotional database if not already there
if not os.path.isdir('./emodb/'):
    !wget -c https://tubcloud.tu-berlin.de/s/8Td8kf8NXpD9aKM/download
    !mv download emodb_audformat.zip
    !unzip emodb_audformat.zip
    !rm emodb_audformat.zip
# prepare the dataframe
db = audformat.Database.load('./emodb')
root = './emodb/'
db.map_files(lambda x: os.path.join(root, x))    
df_emotion = db.tables['emotion'].df
df = db.tables['files'].df
# copy the emotion label from the emotion dataframe to the files dataframe
df['emotion'] = df_emotion['emotion']

As neural nets can only deal with numbers, we need to encode the target emotion labels with numbers:

# Encode the emotion words as numbers and use this as target 
target = 'enc_emo'
encoder = LabelEncoder()
encoder.fit(df['emotion'])
df[target] = encoder.transform(df['emotion'])

Now the dataframe should look like this:

df.head()

To ensure that we learn about emotions and not speaker idiosyncrasies we need to have speaker disjunct training and development sets:

# define fixed speaker disjunct train and test sets
train_spkrs = df.speaker.unique()[5:]
test_spkrs = df.speaker.unique()[:5]
df_train = df[df.speaker.isin(train_spkrs)]
df_test = df[df.speaker.isin(test_spkrs)]

print(f'#train samples: {df_train.shape[0]}, #test samples: {df_test.shape[0]}')
#train samples: 292, #test samples: 243

Next, we need to extract some acoustic features:

# extract (or get) GeMAPS features
if os.path.isfile('feats_train.pkl'):
    feats_train = pd.read_pickle('feats_train.pkl')
    feats_test = pd.read_pickle('feats_test.pkl')
else:
    smile = opensmile.Smile(
        feature_set=opensmile.FeatureSet.GeMAPSv01b,
        feature_level=opensmile.FeatureLevel.Functionals,
    )
    feats_train = smile.process_files(df_train.index)
    feats_test = smile.process_files(df_test.index)
    feats_train.to_pickle('feats_train.pkl')
    feats_test.to_pickle('feats_test.pkl')

Because neural nets are sensitive to large numbers, we scale all features to a mean of 0 and a standard deviation of 1:

# Perform a standard scaling / z-transformation on the features (mean=0, std=1)
scaler = StandardScaler()
scaler.fit(feats_train)
feats_train_norm = pd.DataFrame(scaler.transform(feats_train))
feats_test_norm = pd.DataFrame(scaler.transform(feats_test))

Next we define two torch dataloaders, one for the training and one for the dev set:

def get_loader(df_x, df_y):
    data = []
    for i in range(len(df_x)):
        # features as float32 (torch's default dtype), labels as encoded integers
        data.append([df_x.values[i].astype('float32'), df_y[target].values[i]])
    return torch.utils.data.DataLoader(data, shuffle=True, batch_size=8)
trainloader = get_loader(feats_train_norm, df_train)
testloader = get_loader(feats_test_norm, df_test)

We can then define the model, in this example with one hidden layer of 16 neurons:

class MLP(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Sequential(
            torch.nn.Linear(feats_train_norm.shape[1], 16),
            torch.nn.ReLU(),
            torch.nn.Linear(16, len(encoder.classes_))
        )
    def forward(self, x):
        # x: (batch_size, num_features); squeeze removes a channel dimension if one is present
        x = x.squeeze(dim=1)
        return self.linear(x)

We define two functions to train and evaluate the model:

def train_epoch(model, loader, device, optimizer, criterion):
    model.train()
    losses = []
    for features, labels in loader:
        logits = model(features.to(device))
        loss = criterion(logits, labels.to(device))
        losses.append(loss.item())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return (np.asarray(losses)).mean()

def evaluate_model(model, loader, device, encoder):
    logits = torch.zeros(len(loader.dataset), len(encoder.classes_))
    targets = torch.zeros(len(loader.dataset))
    model.eval()
    with torch.no_grad():
        for index, (features, labels) in enumerate(loader):
            start_index = index * loader.batch_size
            end_index = (index + 1) * loader.batch_size
            if end_index > len(loader.dataset):
                end_index = len(loader.dataset)
            logits[start_index:end_index, :] = model(features.to(device))
            targets[start_index:end_index] = labels

    predictions = logits.argmax(dim=1)
    uar = recall_score(targets.numpy(), predictions.numpy(), average='macro')
    return uar, targets, predictions

Next we initialize the model and set the loss function (criterion) and optimizer:

device = 'cpu'
model = MLP().to(device)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)
epoch_num = 250
uars_train = []
uars_dev = []
losses = []

We can then do the training loop over the epochs:

for epoch in range(0, epoch_num):
    loss = train_epoch(model, trainloader, device, optimizer, criterion)
    losses.append(loss)
    acc_train = evaluate_model(model, trainloader, device, encoder)[0]
    uars_train.append(acc_train)
    acc_dev, truths, preds = evaluate_model(model, testloader, device, encoder)
    uars_dev.append(acc_dev)
# scale the losses so they fit on the picture
losses = np.asarray(losses)/2

Next we might want to take a look at how the net performed with respect to unweighted average recall (UAR):

plt.figure(dpi=200)
plt.plot(uars_train, 'green', label='train set') 
plt.plot(uars_dev, 'red', label='dev set')
plt.plot(losses, 'grey', label='losses/2')
plt.xlabel('epochs')
plt.ylabel('UAR')
plt.legend()
plt.show()

And perhaps see the resulting confusion matrix:

from sklearn.metrics import ConfusionMatrixDisplay
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(truths, preds,  normalize = 'true')
disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=encoder.classes_).plot(cmap='gray')

Make a t-SNE plot

This post shows you how to generate a t-distributed stochastic neighbor embedding (t-SNE) plot with the opensmile features extracted from emodb data (which is explained in more detail in a previous blog post).

A t-SNE plot is a very useful visualization, as it condenses your feature space into two dimensions (so it can be plotted) and then uses colors to represent the class membership. This means that if you can identify clusters of same-colored dots in your data cloud, the features are able to separate the classes.

We need the following imports:

import audformat
from sklearn.manifold import TSNE
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import os
import opensmile

First, you download and prepare emodb:

# get and unpack the Berlin Emodb emotional database
!wget -c https://tubcloud.tu-berlin.de/s/LzPWz83Fjneb6SP/download
!mv download emodb_audformat.zip
!unzip emodb_audformat.zip
!rm emodb_audformat.zip
# prepare the dataframe
db = audformat.Database.load('./emodb')
root = './emodb/'
db.map_files(lambda x: os.path.join(root, x))
df = db.tables['emotion'].df

Then, you extract the GeMAPS features:

smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.GeMAPSv01b,
    feature_level=opensmile.FeatureLevel.Functionals,
)
feats_df = smile.process_files(df.index)

And finally, you generate the t-SNE plot with the sklearn library like this:

# Plot a t-SNE projection of the features, colored by label
def plotTsne(feats, labels, perplexity=30, learning_rate=200):
    model = TSNE(n_components=2, random_state=0, perplexity=perplexity, learning_rate=learning_rate)
    tsne_data = model.fit_transform(feats)
    tsne_df = pd.DataFrame({'Dim_1': tsne_data[:, 0], 'Dim_2': tsne_data[:, 1], 'label': labels.values})
    sns.FacetGrid(tsne_df, hue='label', height=6).map(plt.scatter, 'Dim_1', 'Dim_2').add_legend()
    plt.show()
plotTsne(feats_df, df['emotion'], 30, 200)

It seems that these features are useful to distinguish at least the category anger from the rest.

You might want to fiddle around with the two main parameters of the algorithm: perplexity and learning rate.
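For example, you could loop over a few perplexity values and compare the resulting plots:

# compare the effect of different perplexity values (learning rate kept fixed)
for perplexity in [5, 30, 50]:
    plotTsne(feats_df, df['emotion'], perplexity, 200)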

A python class to predict your emotions

This is a post to introduce you to the idea of encapsulating functionality with object-oriented programming.

We simply put the emotional classification of speech that was demonstrated in this post in a python class like this:

import opensmile
import os
import audformat
from sklearn import svm
import sounddevice as sd
import soundfile as sf
from scipy.io.wavfile import write

class EmoRec():
    root = './emodb/'
    clf = None
    filename = 'emorec.wav'
    sr = 16000
    def __init__(self):
        self.smile = opensmile.Smile(
            feature_set=opensmile.FeatureSet.GeMAPSv01b,
            feature_level=opensmile.FeatureLevel.Functionals,
        )
        if not os.path.isdir(self.root):
            self.download_emodb()
        db = audformat.Database.load(self.root)
        db.map_files(lambda x: os.path.join(self.root, x))
        self.df_emo = db.tables['emotion'].df
        self.df_files = db.tables['files'].df
        if not self.clf:
            self.train_model()

    def download_emodb(self):
        os.system('wget -c https://tubcloud.tu-berlin.de/s/LzPWz83Fjneb6SP/download')
        os.system('mv download emodb_audformat.zip')
        os.system('unzip emodb_audformat.zip')
        os.system('rm emodb_audformat.zip')

    def train_model(self):
        print('training a model...')
        df_feats = self.smile.process_files(self.df_emo.index)
        train_labels = self.df_emo.emotion
        train_feats =  df_feats
        self.clf = svm.SVC(kernel='linear', C=.001)
        self.clf.fit(train_feats, train_labels)
        print('done')

    def classify(self, wavefile):
        test_feats = self.smile.process_file(wavefile)
        return self.clf.predict(test_feats)

    def classify_from_micro(self, seconds):
        self.record(seconds)
        return self.classify(self.filename)[0]

    def record(self, seconds):
        data = sd.rec(int(seconds * self.sr), samplerate=self.sr, channels=1)
        sd.wait()  
        write(self.filename, self.sr, data)

def main():
    test = EmoRec()
    print(test.classify_from_micro(3))

if __name__ == "__main__":
    main()

To try this out you could store the above in a file called, for example, 'emorec.py', and then, in a jupyter notebook, call the constructor

import emorec
emoRec = emorec.EmoRec()

and use the functionality

result = emoRec.classify_from_micro(3)
print(f'emodb thinks your emotion is {result}')

Seminar: Analyze speech for emotional expression

This post is a seminar idea sketch. I try to think up a concept for a seminar and link other blog posts from here if they can help to solve the tasks.

Here's a collection of software recommendations that you might want to install/try before the seminar.

Tasks

Get a recording

  • Record a speech of yourself of 3-5 minutes length on a topic that you find emotionally challenging, meaning something you feel strongly about. Try to express your feelings while you speak.
  • Obviously you might also use other emotional recordings that you collect.
  • Convert everything into a common audio format; usually 16 kHz sample rate, mono channel and >= 16 bit quantization should suffice (see the sketch after this list).
  • You might consider storing your data in audformat to be compatible with further investigations.
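As an illustration of the conversion step, here is one way to resample a recording to 16 kHz mono with librosa and soundfile (the file names are just placeholders; any converter such as ffmpeg works just as well):

import librosa
import soundfile as sf

# load the recording, resampled to 16 kHz and mixed down to mono
signal, sr = librosa.load('my_recording.wav', sr=16000, mono=True)
# write it back as a 16 bit PCM wav file
sf.write('my_recording_16khz.wav', signal, sr, subtype='PCM_16')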

Segment the recording

Perform a segmentation of your recording into parts that have about the right size to carry an emotional expression.

  • In a dialog situation a segment would come naturally as it would correspond to the speech segments alternating between the dialog partners (and then would be called a "turn").
  • A typical length is about 3-7 seconds.
  • The segmentation can be done manually, using a segmentation tool like Praat, Wavesurfer or even Audacity.
  • An alternative approach is to segment the speech automatically, e.g. with a VAD (voice activity detection) algorithm. A quick search delivers e.g. this software based on Praat or the ina speech segmenter (see the sketch below).

I wrote a new tutorial on how to segment the data using the ina speech segmenter
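As a rough sketch, using the ina speech segmenter from Python could look like this (assuming the inaSpeechSegmenter package is installed; the file name is a placeholder):

from inaSpeechSegmenter import Segmenter

# the segmenter labels regions as e.g. 'male', 'female', 'music' or 'noEnergy'
seg = Segmenter()
segmentation = seg('my_recording_16khz.wav')
# keep only the speech parts
speech_segments = [(start, end) for label, start, end in segmentation
                   if label in ('male', 'female')]
print(speech_segments)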

Annotate the recording

Decide on a target

  • Which emotion(s) should be analysed?
  • Typically, with emotions, you distinguish between categories (like anger, friendliness, sadness) and dimensions like pleasure, arousal or dominance (also known as the PAD space).
  • If you want to compare across participants it's important you have the same concept of what is your target. Typical candidates would be interest, nervousness or valence.

Decide on a scale

  • There's a whole standard recommendation on the topic of how to describe emotional states.
  • Basically, for this seminar you have to decide if it's binary (0/1, on/off, true/false) or graded, like a discrete value on a Likert scale or simply a continuous value in the range [0, 1] or [-1, 1] (also a surprisingly difficult question).
  • Related to that: with respect to Likert scales, the most important question is whether there's a neutral value or not.

Do the annotations

  • The process of assigning a value to a recording is called annotating, labeling or judging.
  • As you decide on a subjective value that depends on yourself (both momentarily and in general), in a real world scenario this needs to be done by as many people as possible (a number between 5 and 20 is quite common).
  • How well this works depends on the target and can be quantified by the inter-rater agreement, i.e. the degree to which the labelers agree with each other. Typical measures for this would be the Kappa value or Krippendorff's alpha (a small example follows this list).
  • The result of the labeling is a list of the segments with their labels, usually a csv file with as many lines as segments.
  • You can do this manually (listen to all segments with your favourite audio player and fill the list) or use a tool, e.g. Praat, or (obviously I recommend my own tool) the Speechalyzer, which has been developed to support the annotation of very large datasets.
  • An alternative to annotation would be to use a physiological measure that corresponds well with arousal as a reference, e.g. blood pressure, skin conductivity or respiration rate.
  • Of course there's also the possibility to do a continuous annotation, i.e. disregard segments in favour of a fixed frame size (typically below a second).
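As a small example for such an agreement measure, Cohen's kappa between two raters can be computed with sklearn (the label lists here are made up):

from sklearn.metrics import cohen_kappa_score

# hypothetical binary labels from two raters for ten segments
rater_1 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
rater_2 = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
print(f"Cohen's kappa: {cohen_kappa_score(rater_1, rater_2):.2f}")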

Load your data with a data processing environment

  • With respect to an environment to run the experiments in, I'd recommend python and jupyter notebooks
  • There's a great python module named pandas that you should get familiar with. You will benefit not only in this seminar, but be able to process any data on a computer for the rest of your existence! A tiny example follows this list.
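For instance, the label list from the annotation step (a csv file with one line per segment) could be loaded like this (file and column names are just an assumption):

import pandas as pd

# hypothetical csv with columns: file, start, end, label
df = pd.read_csv('my_labels.csv')
print(df.head())
print(df['label'].value_counts())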

Extract acoustic features

  • To perform an acoustic analysis you need to extract some kind of features related to acoustics.
  • I distinguish here acoustic from linguistic, i.e. I'd treat transcribed words (and their sentiment) as a different modality.
  • I differentiate between three kinds of features:
    • expert features: manually selected features that should make sense for the target at hand, e.g. more or less everything you would compute with Praat, or the about 80 GeMAPS features.
    • brute-force features: everything you've got at hand, usually a combination of frame-based low-level descriptors (one frame: ~10-25 msec, a series of values) and statistical functionals, e.g. the 6000+ ComParE16 features. Leave the decision on what is important to an algorithmic approach, e.g. factor analysis.
    • learned features: embeddings computed by an ANN (artificial neural net) encoder. These features can usually not be interpreted, but they can be used in machine learning and are an example of representation learning, end-to-end learning and transfer learning, e.g. the TRILL features.
  • You can extract/describe features manually (e.g. get speaking time, number of pauses, etc.)
  • or use automated software, for example openSMILE, as sketched below.
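A minimal sketch of such brute-force extraction with the opensmile package (here the 6000+ ComParE 2016 functionals; the file names are placeholders):

import opensmile

smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.ComParE_2016,
    feature_level=opensmile.FeatureLevel.Functionals,
)
# extract the functionals for a list of (segmented) wav files
feats_df = smile.process_files(['segment_001.wav', 'segment_002.wav'])
print(feats_df.shape)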

Analysis

You might want to collect all data from the seminar participants in a common pandas dataframe to be able to generalize your findings across individual speakers.

  • The most obvious question you can try to answer is: is there a correlation between my emotion value (the dependent variable) and the features that I observe?
  • Another one would be to look at the effect of independent variables, i.e. other attributes of the speech like speaker, speaker traits (age, sex, dialect) or language.

Statistical measures

  • Perform analyses on the most important features for the target
  • Compute correlation coefficients for these features
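For example, with a combined dataframe of features and labels, a correlation between one feature and a continuous target could be computed like this (the column names are made up):

from scipy import stats

# hypothetical columns: a GeMAPS feature and a continuous arousal annotation
r, p = stats.pearsonr(df['F0semitoneFrom27.5Hz_sma3nz_amean'], df['arousal'])
print(f'Pearson r: {r:.2f} (p = {p:.3f})')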

Visualization

  • Find good visualizations for correlations
    • scatter plots
    • box/violin plot per level of target (see the sketch after this list)
    • cluster plots, clustering the levels of target expressed in color values in a two dimensional feature space (simply use two features or perform a dimensionality reduction algorithm on the features, e.g. a PCA)
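A box or violin plot per level of the target, for example, is a one-liner with seaborn (again with made-up column names):

import seaborn as sns
import matplotlib.pyplot as plt

# distribution of a feature per emotion category
sns.violinplot(data=df, x='emotion', y='F0semitoneFrom27.5Hz_sma3nz_amean')
plt.show()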

Machine learning

  • Try automatic prediction of your dependent variable, either using all data as test data or splitting into train and test sets if you have enough. If you split up the data, be sure not to have the same speakers in the train and test sets, because otherwise you will mainly learn the idiosyncratic expression of those speakers. A sketch of such a split follows below.
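One way to get such a speaker disjunct split is sklearn's GroupShuffleSplit; a sketch, assuming a feature dataframe feats_df and a label dataframe df with 'emotion' and 'speaker' columns as in the emodb examples above:

from sklearn.model_selection import GroupShuffleSplit
from sklearn import svm
from sklearn.metrics import recall_score

# split so that no speaker appears in both train and test
splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=42)
train_idx, test_idx = next(splitter.split(feats_df, df['emotion'], groups=df['speaker']))

clf = svm.SVC(kernel='linear')
clf.fit(feats_df.iloc[train_idx], df['emotion'].iloc[train_idx])
pred = clf.predict(feats_df.iloc[test_idx])
print('UAR:', recall_score(df['emotion'].iloc[test_idx], pred, average='macro'))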