Training Femke (Voice)

Skills: Machine Learning
Status: Active
Niche: Geinig (amusing)
Purpose: Fun
Tool: No


What the hark?

One of the text-to-speech voices that we like using a lot in the space is Acapela Group's Femke, a Dutch-Flemish sounding voice. Recently the people working on Home Assistant released Piper, a fast, local, neural-network text-to-speech system. Using Piper has allowed us to use generative voices much more easily: Piper can synthesize a short sentence in a matter of milliseconds while inferring on a CPU. And thus, I would love to be able to train a model based on Femke. This wiki entry shows some of the things I did to make my own Femke dataset and train on it.
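
As an aside, once a model is trained, running Piper is a one-liner. A minimal sketch, assuming a hypothetical nl-femke.onnx model file:

echo 'Hallo wereld, dit is Femke.' | piper --model nl-femke.onnx --output_file hallo.wav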

Preparing the text data

I downloaded the Dutch CC-100 corpus from https://data.statmt.org/cc-100/. Extracted, it is a 30 GB(!) text file containing a lot of Dutch sentences, which is perfect for our dataset.

Of course, I don't plan on feeding the entire 30 GB of text through Femke. Instead, I wrote a simple script that iterates over every line and picks out lines under 160 characters (which, in hindsight, was still a lot; the Femke API I am using has a character limit).

The code I wrote only let through sentences containing words such as "computer", "hack", and a few vulgar Dutch words (because we all know people want to use Femke to swear). This eventually brought the 30 GB file down to a 62 MB text file, though only because I stopped the script at that point, believing I had more than enough.
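
A minimal sketch of that filter (the file names and keyword list here are made up for illustration):

# filter_corpus.py - keep short Dutch lines that mention our keywords
KEYWORDS = ("computer", "hack")  # plus a few vulgar words, omitted here

with open("nl.txt", encoding="utf-8") as src, \
        open("femke_lines.txt", "w", encoding="utf-8") as dst:
    for line in src:
        line = line.strip()
        # Skip empty lines and anything at or over 160 characters.
        if not line or len(line) >= 160:
            continue
        # Only keep lines that contain at least one keyword.
        if any(word in line.lower() for word in KEYWORDS):
            dst.write(line + "\n")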

Generating the samples

Being able to use Femke for free is a bit of an API hack, and that is beyond the scope of this wiki entry. As such, I will not disclose how here, but a Google search for "Will from afar downloader" will likely point you in the right direction.

I chose to output in the LJSpeech format, which is just a CSV file with three pipe-separated columns: the file name (without path or extension) and two columns that both contain the sentence (in LJSpeech proper these are the raw and normalized transcriptions; here they are identical). Additionally, I further split the sentences on full stops, cutting them up even more, while making sure no sentence ends up under 10 characters.
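
A rough sketch of the splitting and metadata writing (file names are hypothetical; the row format is LJSpeech's id|transcription|normalized transcription):

# make_metadata.py - split filtered lines on full stops, emit LJSpeech rows
def split_sentences(text):
    # Split on full stops and drop fragments under 10 characters.
    return [s.strip() for s in text.split(".") if len(s.strip()) >= 10]

with open("femke_lines.txt", encoding="utf-8") as src, \
        open("metadata.csv", "w", encoding="utf-8") as meta:
    counter = 0
    for line in src:
        for sentence in split_sentences(line):
            counter += 1
            # e.g. femke_0001|Mijn computer doet het weer|Mijn computer doet het weer
            meta.write(f"femke_{counter:04d}|{sentence}|{sentence}\n")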

I let the code run until I had generated 1500 Femke wavs. In hindsight, I should have put in more sanity checks, as I noticed some random non-Dutch text strings, like URLs, had ended up in metadata.csv. Oh well.
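
Had I added them, such a sanity check could have been as simple as rejecting any row whose text contains a URL or characters outside a basic Dutch alphabet. A rough sketch, again with hypothetical file names:

# clean_metadata.py - drop rows with URLs or odd characters
import re

URL_RE = re.compile(r"https?://|www\.", re.IGNORECASE)
TEXT_RE = re.compile(r"^[0-9A-Za-zÀ-ÿ ,;:!?'\-]+$")  # crude "looks like Dutch" check

with open("metadata.csv", encoding="utf-8") as src, \
        open("metadata_clean.csv", "w", encoding="utf-8") as dst:
    for row in src:
        parts = row.rstrip("\n").split("|")
        if len(parts) != 3:
            continue
        text = parts[1]
        if URL_RE.search(text) or not TEXT_RE.match(text):
            continue
        dst.write(row)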

Training

Setting up the training environment under WSL was a bit of a pain, but I eventually got it working; again, that's beyond the scope of this entry. Training is going smoothly on a GeForce RTX 3090, though I did end up changing both the batch size and the precision from 32 to 16, as 32 seemed to consume more than 24 GB of VRAM(?). I'm not sure how much this impacts the quality of the voice. After about 70 epochs, it starts to sound like coherent Dutch.

I used the following commands:

Preprocessing

This took a few minutes and used a lot of CPU resources.

python3 -m piper_train.preprocess \
  --language nl \
  --input-dir /mnt/i/voiceTraining/corpus/femke/ \
  --output-dir /mnt/i/voiceTraining/corpus/femke_out/ \
  --dataset-format ljspeech --single-speaker \
  --sample-rate 22050

Training

Training took about 2 minutes per epoch on my 3090; the model started to become coherent after around 70 epochs.

python3 -m piper_train \
    --dataset-dir /mnt/i/voiceTraining/corpus/femke_out/ \
    --accelerator 'gpu' \
    --devices 1 \
    --batch-size 16 \
    --validation-split 0.05 \
    --num-test-examples 5 \
    --max_epochs 10000 \
    --precision 16

Inference

This synthesizes the first five entries of the preprocessed dataset with the most recent checkpoint and writes the resulting wavs to outdir.

head -n5 /mnt/i/voiceTraining/corpus/femke_out/dataset.jsonl | \
  python3 -m piper_train.infer \
    --checkpoint /mnt/i/voiceTraining/corpus/femke_out/lightning_logs/version_8/checkpoints/*.ckpt \
    --sample-rate 22050 \
    --output-dir outdir

Version 2

For version 2, I'd like to curate the dataset a bit more, making sure that it contains no gibberish. Another thing I noticed is that Femke isn't that well versed in speaking English words; as such, for the next dataset I want to include more English words as well. I also want to see if growing the dataset from 1500 samples to double that, or more, has a positive impact. For comparison, the LJSpeech 1.1 dataset has about 13,100 entries.