All posts by felix

Recording and transcribing a speech sample on Google Colab

Set up the recording method using JavaScript:

# all imports
from IPython.display import Javascript
from google.colab import output
from base64 import b64decode

RECORD = """
const sleep  = time => new Promise(resolve => setTimeout(resolve, time))
const b2text = blob => new Promise(resolve => {
  const reader = new FileReader()
  reader.onloadend = e => resolve(e.srcElement.result)
  reader.readAsDataURL(blob)
})
var record = time => new Promise(async resolve => {
  stream = await navigator.mediaDevices.getUserMedia({ audio: true })
  recorder = new MediaRecorder(stream)
  chunks = []
  recorder.ondataavailable = e => chunks.push(e.data)
  recorder.start()
  await sleep(time)
  recorder.onstop = async ()=>{
    blob = new Blob(chunks)
    text = await b2text(blob)
    resolve(text)
  }
  recorder.stop()
})
"""

def record(fn, sec):
  # run the JavaScript recorder for `sec` seconds
  display(Javascript(RECORD))
  s = output.eval_js('record(%d)' % (sec*1000))
  # the recording comes back as a base64-encoded data URL
  b = b64decode(s.split(',')[1])
  with open(fn, 'wb') as f:
    f.write(b)
  return fn

Record something:

filename = 'felixtest.wav'
record(filename, 5)

Play it back:

import IPython
IPython.display.Audio(filename)

Install SpeechBrain:

%%capture
!pip install speechbrain
import speechbrain as sb

Load the ASR model trained on LibriSpeech:

from speechbrain.pretrained import EncoderDecoderASR
asr_model = EncoderDecoderASR.from_hparams(source="speechbrain/asr-crdnn-rnnlm-librispeech", savedir="pretrained_model")

And get a transcript on your audio:

asr_model.transcribe_file(filename)
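
Note that in a plain Python script (as opposed to a notebook cell) the return value is not displayed automatically; transcribe_file returns the transcript as a string, so you can store and print it:

transcript = asr_model.transcribe_file(filename)
print(transcript)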

Use Speechalyzer to walk through a large set of audio files

I wrote Speechalyzer in Java to process a large set of audio files. Here's how you could use this on your audio set.

Install and configure

1) Get it and put it somewhere on your file system; don't forget to also install its GUI, the Labeltool.
2) Make sure you have Java on your system.
3) Configure both programs by editing the resource files.

Run

The easiest case is if all of your files are in one directory. You would simply start the Speechalyzer like so (you need to be in the same directory):

java -jar Speechalyzer.jar -rd <path to folder with audio files> &

Make sure you configured the right audio extension and sampling rate in the config file (wav format, 16 kHz is the default).
Then change to the Labeltool directory and start it simply like this:

java -jar Labeltool.jar &

Again you might have to adapt the sample rate in the config file (or set it in the GUI); note that you need to be inside the Labeltool directory. The Labeltool then displays the loaded files, which can be annotated, labeled or simply played in a chain.

How to compare formant tracks extracted with opensmile vs. Praat

Note: install the package praat-parselmouth, not parselmouth:

!pip install praat-parselmouth

First, some imports

import pandas as pd
import parselmouth 
from parselmouth import praat
import opensmile
import audiofile

Then, a test file:

testfile = '/home/felix/data/data/audio/testsatz.wav'
signal, sampling_rate = audiofile.read(testfile)
print('length in seconds: {}'.format(len(signal)/sampling_rate))

Get the opensmile formant tracks, taking the feature names from the official GeMAPS config file:

smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.GeMAPSv01b,
    feature_level=opensmile.FeatureLevel.LowLevelDescriptors,
)
result_df = smile.process_file(testfile)
centerformantfreqs = ['F1frequency_sma3nz', 'F2frequency_sma3nz', 'F3frequency_sma3nz']
formant_df = result_df[centerformantfreqs].copy()  # copy so we can add columns later

Get the Praat tracks (the smile configuration computes values every 10 ms with a frame length of 20 ms):

sound = parselmouth.Sound(testfile) 
formants = praat.call(sound, "To Formant (burg)", 0.01, 4, 5000, 0.02, 50)
f1_list = []
f2_list = []
f3_list = []
for i in range(2, formants.get_number_of_frames()+1):
    f1 = formants.get_value_at_time(1, formants.get_time_step()*i)
    f2 = formants.get_value_at_time(2, formants.get_time_step()*i)
    f3 = formants.get_value_at_time(3, formants.get_time_step()*i)
    f1_list.append(f1)
    f2_list.append(f2)
    f3_list.append(f3)

To be sure: compare the size of the output:

print('{}, {}'.format(result_df.shape[0], len(f1_list)))

Combine and inspect the result:

formant_df['F1_praat'] = f1_list
formant_df['F2_praat'] = f2_list
formant_df['F3_praat'] = f3_list
formant_df.head()
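
To eyeball how well the two trackers agree, you could also plot the tracks on top of each other; here's a minimal sketch with matplotlib, assuming the formant_df from above (column names as defined there):

import matplotlib.pyplot as plt

# plot the opensmile and the Praat track for each of the three formants
fig, axes = plt.subplots(3, 1, figsize=(8, 9), sharex=True)
for ax, num in zip(axes, [1, 2, 3]):
    ax.plot(formant_df['F{}frequency_sma3nz'.format(num)].values, label='opensmile')
    ax.plot(formant_df['F{}_praat'.format(num)].values, label='Praat')
    ax.set_ylabel('F{} [Hz]'.format(num))
    ax.legend()
axes[-1].set_xlabel('frame index')
plt.show()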

How to extract formant tracks with Praat and python

This tutorial was adapted from the examples by David R. Feinberg.

This tutorial assumes you started a Jupyter notebook. If you don't know what this is, here's a tutorial on how to set one up (first part).

First you should install the parselmouth package, which interfaces Praat with python:

!pip install -U praat-parselmouth

which you would then import:

import parselmouth 
from parselmouth import praat

You do need some audio input (wav format, 16 kHz sample rate):

testfile = '/home/felix/data/data/audio/testsatz.wav'

And would then read in the sound with parselmouth like this:

sound = parselmouth.Sound(testfile) 

Here's the code to extract the first three formant tracks; it's more or less self-explanatory if you know Praat.

First, compute the occurrences of periodic instances in the signal:

f0min=75
f0max=300
pointProcess = praat.call(sound, "To PointProcess (periodic, cc)", f0min, f0max)

then, compute the formants:

formants = praat.call(sound, "To Formant (burg)", 0.0025, 5, 5000, 0.025, 50)

And finally, get the formant values at the times where they make sense (the periodic instances):

numPoints = praat.call(pointProcess, "Get number of points")
f1_list = []
f2_list = []
f3_list = []
for point in range(1, numPoints + 1):
    t = praat.call(pointProcess, "Get time from index", point)
    f1 = praat.call(formants, "Get value at time", 1, t, 'Hertz', 'Linear')
    f2 = praat.call(formants, "Get value at time", 2, t, 'Hertz', 'Linear')
    f3 = praat.call(formants, "Get value at time", 3, t, 'Hertz', 'Linear')
    f1_list.append(f1)
    f2_list.append(f2)
    f3_list.append(f3)
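
If you prefer the values in tabular form, you could also collect the corresponding point times and combine everything in a pandas dataframe; a minimal sketch (the pandas import is an addition to the code above):

import pandas as pd

# collect the point times and combine them with the formant values
times = [praat.call(pointProcess, "Get time from index", p) for p in range(1, numPoints + 1)]
formant_df = pd.DataFrame({'time': times, 'F1': f1_list, 'F2': f2_list, 'F3': f3_list})
print(formant_df.head())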

How to synthesize text to speech with the Google speech API

This tutorial assumes you started a Jupyter notebook. If you don't know what this is, here's a tutorial on how to set one up (first part).

There is a library for this that's based on the Google translation service and still seems to work: gtts.
You would start by installing the packages used in this tutorial:

!pip install -U gtts python-vlc

Then you can import the package:

from gtts import gTTS

Define a text and a configuration:

text = 'Das ist jetzt mal was ich so sage, ich finde das Klasse!'
tts = gTTS(text, lang='de')

and synthesize to a file on disk:

audio_file = './hello.mp3'
tts.save(audio_file)

which you could then play back with vlc:

import vlc
p = vlc.MediaPlayer(audio_file)
p.play()
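
Alternatively, if you are in a notebook anyway, you can simply play the file inline like in the other tutorials:

import IPython
IPython.display.Audio(audio_file)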

How to get my speech recognized with Google ASR and python

To do this, you first need to get yourself a Google API key:

  • you need to register with the Google speech APIs, i.e. get a Google cloud platform account
  • you need to share payment details, but (at the time of writing, I think) the first 60 minutes of processed speech per month are free.

I export my API key each time I want to use this, like so:

export GOOGLE_APPLICATION_CREDENTIALS="/home/felix/data/research/Google/api_key.json"
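
If you'd rather do this from within the notebook, you can also set the environment variable in Python before instantiating the client (same path as above, adapt it to your key file):

import os
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = '/home/felix/data/research/Google/api_key.json'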

This tutorial assumes you did that and that you started a Jupyter notebook. If you don't know what this is, here's a tutorial on how to set one up (first part).

Before you can import the Google speech API, make sure it's installed:

!pip install google-cloud
!pip install --upgrade google-cloud-speech

Then you would import the Google Cloud client library:

from google.cloud import speech
import io

Instantiate a client

client = speech.SpeechClient()

And load yourself a recorded speech file; it should be in wav format with a 16 kHz sample rate:

speech_file = '/home/felix/tmp/google_speech_api_test.wav'

If you run into problems recording one, here is the code that worked for me:

import sounddevice as sd
import numpy as np
from scipy.io.wavfile import write
sr = 16000  # Sample rate
seconds = 3  # Duration of recording
data = sd.rec(int(seconds * sr), samplerate=sr, channels=1)
sd.wait()  # Wait until recording is finished
# Convert `data` to 16 bit integers:
y = (np.iinfo(np.int16).max * (data/np.abs(data).max())).astype(np.int16)
write(speech_file, sr, y)

Then get yourself an audio object:

with io.open(speech_file, "rb") as audio_file:
    content = audio_file.read()
audio = speech.RecognitionAudio(content = content)

Configure the ASR

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="de-DE",
)

Detect speech in the audio file:

response = client.recognize(config=config, audio=audio)

and show what you got (in my trial only the first alternative was filled):

for result in response.results:
    for index, alternative in enumerate(result.alternatives):
        print("Transcript {}: {}".format(index, alternative.transcript))

Bio

Dr. Felix Burkhardt does teaching, consulting, research and development on speech communication, human-machine dialog systems, text-to-speech synthesis, speaker classification, ontology based natural language modeling and emotional human-machine interfaces.

Originally an expert in speech synthesis at the Technical University of Berlin, he wrote his PhD thesis on the simulation of emotional speech by machines, recorded the Berlin acted emotions database "EmoDB", and maintains several open source projects, including the emotional speech synthesizer "Emofilt", the speech labeling and annotation tool "Speechalyzer" and the machine learning framework "Nkululeko". Since 2018 he has been the research director at audEERING, after having worked for Deutsche Telekom AG for 18 years. From 2020 to 2022 he worked as a full professor at the institute of communication science of the Technical University of Berlin.

He was a member of the European Network of Excellence HUMAINE on emotion-oriented computing, is the editor of the W3C Emotion Markup Language specification, and serves on the program committees of numerous conferences, including ACII, AVEC, EmoSPACE, FLAIR, IASTED CI, ICASSP, ICMI, ICPhS, Interspeech, IVA, IWSDS, LREC, Paraling, Prosico, SLSP and WS3P; he reviews for the journals Specom, CSL, JASA, EURASIP, SIGPRO, IEEE-TAFFC, IEEE-TMM, IEEE-TASL, IEEE-TIP, IEEE-ISSI, IJSE, ETRI, Journal of Phonetics, Neural Processing Letters and UMUAI, as well as for the publisher Wiley.

How to extract formant center frequencies (or other acoustic features) from speech data with opensmile in python

There is a framework called OpenSMILE, published on GitHub, that can be used to extract high-level acoustic features from audio signals, and I'd like to show you how to use it with Python.

I’ve set up a notebook for this here.

First you need to install opensmile.

pip install opensmile

General procedure

There are two ways to extract a specific acoustic feature with opensmile:

1) Use an existing config that contains your target feature and filter it from the results
2) Write your own config file and extract only your target feature directly

Method 1 is easier, but obviously not resource efficient; method 2 is better, but learning the opensmile config syntax and all the existing modules is not trivial.

Using formants as an example acoustic feature, we'll do it both ways. The documentation for the python wrapper of opensmile is here.

The following assumes you have a test wave file recorded and stored somewhere:

import IPython
import opensmile

testfp = '/kaggle/input/testdata/testsatz.wav'
IPython.display.Audio(testfp)

Method 1): Use an existing config file that includes the first three formant frequencies

We start by instantiating the main extractor class, Smile, with a configuration that includes formants. The GeMAPSv01b feature set has been derived from the GeMAPS feature set.

smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.GeMAPSv01b,
    feature_level=opensmile.FeatureLevel.LowLevelDescriptors,
)

Extract this for our test sentence; out comes a pandas dataframe:

result_df = smile.process_file(testfp)
print(result_df.shape)

Now use only the three center formant frequencies

centerformantfreqs = ['F1frequency_sma3nz', 'F2frequency_sma3nz', 'F3frequency_sma3nz']
formant_df = result_df[centerformantfreqs]
formant_df.head()

This should be your output: three values per frame, the center frequencies of the formants.

Method 2): Write your own config file

The documentation for opensmile config files is here.
Most often it is probably easier to look at an existing config file and copy/paste the components you need.

You could edit the opensmile config in a string:

formant_conf_str = '''
[componentInstances:cComponentManager]
instance[dataMemory].type=cDataMemory


;;; source

\{\cm[source{?}:include external source]}

;;; main section

[componentInstances:cComponentManager]
instance[framer].type = cFramer
instance[win].type = cWindower
instance[fft].type = cTransformFFT
instance[resamp].type = cSpecResample
instance[lpc].type = cLpc
instance[formant].type = cFormantLpc

[framer:cFramer]
reader.dmLevel = wave
writer.dmLevel = frames
copyInputName = 1
frameMode = fixed
frameSize = 0.025000
frameStep = 0.010000
frameCenterSpecial = left
noPostEOIprocessing = 1

[win:cWindower]
reader.dmLevel=frames
writer.dmLevel=win
winFunc=gauss
gain=1.0

[fft:cTransformFFT]
reader.dmLevel=win
writer.dmLevel=fft

[resamp:cSpecResample]
reader.dmLevel=fft
writer.dmLevel=outpR
targetFs = 11000

[lpc:cLpc]
reader.dmLevel=outpR
writer.dmLevel=lpc
p=11
method=acf
lpGain=1
saveLPCoeff=1
residual=0
forwardFilter=0
lpSpectrum=0
lpSpecBins=128

[formant:cFormantLpc]
reader.dmLevel=lpc
writer.dmLevel=formant
saveIntensity=1
saveBandwidths=0
maxF=5500.0
minF=50.0
nFormants=3
useLpSpec=0
medianFilter=0
octaveCorrection=0

;;; sink

\{\cm[sink{?}:include external sink]}
'''

which you can save as a config file:

with open('formant.conf', 'w') as fp:
    fp.write(formant_conf_str)

Now we reinstantiate our smile object with the custom config:

smile = opensmile.Smile(
    feature_set='formant.conf',
    feature_level='formant',
)

and extract again

formant_df_2 = smile.process_file(testfp)
formant_df_2.head()

Voilà! The output should be similar to the one you got with the first method.
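
As a quick sanity check, you could compare the number of frames both methods produced; since both configurations use a 10 ms frame step, the row counts should (roughly) match:

# both dataframes should have (about) the same number of rows, i.e. frames
print(formant_df.shape, formant_df_2.shape)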