{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "8d0bbac2"
},
"source": [
"# Finetuning FastPitch for a new speaker\n",
"\n",
"In this tutorial, we will finetune a single speaker FastPitch (with alignment) model on 5 mins of a new speaker's data. We will finetune the model parameters only on new speaker's text and speech pairs.\n",
"\n",
"We will download the training data, then generate and run a training command to finetune Fastpitch on 5 mins of data, and synthesize the audio from the trained checkpoint.\n",
"\n",
"A final section will describe approaches to improve audio quality past this notebook."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "nGw0CBaAtmQ6"
},
"source": [
"## License\n",
"\n",
"> Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.\n",
">\n",
"> Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"> you may not use this file except in compliance with the License.\n",
"> You may obtain a copy of the License at\n",
">\n",
"> http://www.apache.org/licenses/LICENSE-2.0\n",
">\n",
"> Unless required by applicable law or agreed to in writing, software\n",
"> distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"> WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"> See the License for the specific language governing permissions and\n",
"> limitations under the License."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "U7bOoIgLttRC"
},
"outputs": [],
"source": [
"\"\"\"\n",
"You can either run this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.\n",
"Instructions for setting up Colab are as follows:\n",
"1. Open a new Python 3 notebook.\n",
"2. Import this notebook from GitHub (File -> Upload Notebook -> \"GITHUB\" tab -> copy/paste GitHub URL)\n",
"3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select \"GPU\" for hardware accelerator)\n",
"4. Run this cell to set up dependencies.\n",
"\"\"\"\n",
"BRANCH = 'main'\n",
"# # If you're using Google Colab and not running locally, uncomment and run this cell.\n",
"# !apt-get install sox libsndfile1 ffmpeg\n",
"# !pip install wget unidecode\n",
"# !python -m pip install git+https://github.com/NeMo/NeMo.git@$BRANCH#egg=nemo_toolkit[tts]"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "2502cf61"
},
"source": [
"## Downloading Data\n",
"___"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "81fa2c02"
},
"source": [
"Download and untar the data.\n",
"\n",
"The data contains a 5 minute subset of audio from speaker 6097 from the HiFiTTS dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "VIFgqxLOpxha"
},
"outputs": [],
"source": [
"!wget https://nemo-public.s3.us-east-2.amazonaws.com/6097_5_mins.tar.gz # Contains 10MB of data\n",
"!tar -xzf 6097_5_mins.tar.gz"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "gSQqq0fBqy8K"
},
"source": [
"Looking at manifest.json, we see a standard NeMo json that contains the filepath, text, and duration. Please note that manifest.json only contains the relative path.\n",
"\n",
"```\n",
"{\"audio_filepath\": \"audio/presentpictureofnsw_02_mann_0532.wav\", \"text\": \"not to stop more than ten minutes by the way\", \"duration\": 2.6, \"text_no_preprocessing\": \"not to stop more than ten minutes by the way,\", \"text_normalized\": \"not to stop more than ten minutes by the way,\"}\n",
"```\n",
"\n",
"Let's take 2 samples from the dataset and split it off into a validation set. Then, split all other samples into the training set."
]
},
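{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an optional sanity check (this cell is an illustrative sketch, not part of the original recipe), we can load the manifest in Python and confirm that the subset really is about 5 minutes long:\n",
"\n",
"```python\n",
"import json\n",
"\n",
"# Read the NeMo-style manifest: one JSON record per line\n",
"with open('./6097_5_mins/manifest.json') as f:\n",
"    records = [json.loads(line) for line in f]\n",
"\n",
"total_minutes = sum(r['duration'] for r in records) / 60\n",
"print(f'{len(records)} utterances, {total_minutes:.1f} minutes of audio')\n",
"print(records[0]['audio_filepath'], '|', records[0]['text'])\n",
"```"
]
},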
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "B8gVfp5SsuDd"
},
"outputs": [],
"source": [
"!cat ./6097_5_mins/manifest.json | tail -n 2 > ./6097_manifest_dev_ns_all_local.json\n",
"!cat ./6097_5_mins/manifest.json | head -n -2 > ./6097_manifest_train_dur_5_mins_local.json\n",
"!ln -s ./6097_5_mins/audio audio"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ef75d1d5"
},
"source": [
"## Finetuning FastPitch\n",
"___\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "lhhg2wBNtW0r"
},
"source": [
"Let's first download the pretrained checkpoint that we want to finetune from. NeMo will save checkpoints to ~/.cache, so let's move that to our current directory"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "LggELooctXCT"
},
"outputs": [],
"source": [
"import os\n",
"\n",
"import torch\n",
"import IPython.display as ipd\n",
"from matplotlib.pyplot import imshow\n",
"from matplotlib import pyplot as plt\n",
"\n",
"from nemo.collections.tts.models import FastPitchModel\n",
"FastPitchModel.from_pretrained(\"tts_en_fastpitch\")\n",
"\n",
"from pathlib import Path\n",
"nemo_files = [p for p in Path(\"/root/.cache/torch/NeMo/\").glob(\"**/tts_en_fastpitch_align.nemo\")]\n",
"print(f\"Copying {nemo_files[0]} to ./\")\n",
"Path(\"./tts_en_fastpitch_align.nemo\").write_bytes(nemo_files[0].read_bytes())"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6c8b13b8"
},
"source": [
"To finetune the FastPitch model on the above created filelists, we use `examples/tts/fastpitch2_finetune.py` script to train the models with the `fastpitch_align.yaml` configuration.\n",
"\n",
"Let's grab those files."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "3zg2H-32dNBU"
},
"outputs": [],
"source": [
"!wget https://raw.githubusercontent.com/nvidia/NeMo/$BRANCH/examples/tts/fastpitch2_finetune.py\n",
"!mkdir conf && cd conf && wget https://raw.githubusercontent.com/nvidia/NeMo/$BRANCH/examples/tts/conf/fastpitch_align.yaml && cd .."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "12b5511c"
},
"source": [
"We can now train our model with the following command:\n",
"\n",
"NOTE: This will take about 50 minutes on colab's K80 GPUs.\n",
"\n",
"`python fastpitch2_finetune.py --config-name=fastpitch_align.yaml train_dataset=./6097_manifest_train_dur_5_mins_local.json validation_datasets=./6097_manifest_dev_ns_all_local.json +init_from_nemo_model=./tts_en_fastpitch_align.nemo +trainer.max_steps=1000 ~trainer.max_epochs trainer.check_val_every_n_epoch=25 prior_folder=./Priors6097 model.train_ds.dataloader_params.batch_size=24 model.validation_ds.dataloader_params.batch_size=24 exp_manager.exp_dir=./ljspeech_to_6097_no_mixing_5_mins model.n_speakers=1 model.pitch_avg=121.9 model.pitch_std=23.1 model.pitch_fmin=30 model.pitch_fmax=512 model.optim.lr=2e-4 ~model.optim.sched model.optim.name=adam`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "reY1LV4lwWoq"
},
"outputs": [],
"source": [
"!python fastpitch2_finetune.py --config-name=fastpitch_align.yaml train_dataset=./6097_manifest_train_dur_5_mins_local.json validation_datasets=./6097_manifest_dev_ns_all_local.json +init_from_nemo_model=./tts_en_fastpitch_align.nemo +trainer.max_steps=1000 ~trainer.max_epochs trainer.check_val_every_n_epoch=25 prior_folder=./Priors6097 model.train_ds.dataloader_params.batch_size=24 model.validation_ds.dataloader_params.batch_size=24 exp_manager.exp_dir=./ljspeech_to_6097_no_mixing_5_mins model.n_speakers=1 model.pitch_avg=121.9 model.pitch_std=23.1 model.pitch_fmin=30 model.pitch_fmax=512 model.optim.lr=2e-4 ~model.optim.sched model.optim.name=adam"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "j2svKvd1eMhf"
},
"source": [
"Let's take a closer look at the training command:\n",
"\n",
"* `python fastpitch2_finetune.py --config-name=fastpitch_align.yaml`\n",
" * --config-name tells the script what config to use.\n",
"\n",
"* `train_dataset=./6097_manifest_train_dur_5_mins_local.json validation_datasets=./6097_manifest_dev_ns_all_local.json`\n",
" * We tell the model what manifest files we can to train and eval on.\n",
"\n",
"* `+init_from_nemo_model=./tts_en_fastpitch_align.nemo`\n",
" * We tell the script what checkpoint to finetune from.\n",
"\n",
"* `+trainer.max_steps=1000 ~trainer.max_epochs trainer.check_val_every_n_epoch=25`\n",
" * For this experiment, we need to tell the script to train for 1000 training steps/iterations. We need to remove max_epochs using `~trainer.max_epochs`.\n",
"\n",
"* `prior_folder=./Priors6097 model.train_ds.dataloader_params.batch_size=24 model.validation_ds.dataloader_params.batch_size=24`\n",
" * Some dataset parameters. The dataset does some online processing and stores the processing steps to the `prior_folder`.\n",
"\n",
"* `exp_manager.exp_dir=./ljspeech_to_6097_no_mixing_5_mins`\n",
" * Where we want to save our log files, tensorboard file, checkpoints, and more\n",
"\n",
"* `model.n_speakers=1`\n",
" * The number of speakers in the data. There is only 1 for now, but we will revisit this parameter later in the notebook\n",
"\n",
"* `model.pitch_avg=121.9 model.pitch_std=23.1 model.pitch_fmin=30 model.pitch_fmax=512`\n",
" * For the new speaker, we need to define new pitch hyperparameters for better audio quality.\n",
" * These parameters work for speaker 6097 from the HiFiTTS dataset\n",
" * For speaker 92, we suggest `model.pitch_avg=214.5 model.pitch_std=30.9 model.pitch_fmin=80 model.pitch_fmax=512`\n",
" * fmin and fmax are hyperparameters to librosa's pyin function. We recommend tweaking these per speaker.\n",
" * After fmin and fmax are defined, pitch mean and std can be easily extracted\n",
"\n",
"* `model.optim.lr=2e-4 ~model.optim.sched model.optim.name=adam`\n",
" * For fine-tuning, we lower the learning rate\n",
" * We use a fixed learning rate of 2e-4\n",
" * We switch from the lamb optimizer to the adam optimizer"
]
},
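{
"cell_type": "markdown",
"metadata": {},
"source": [
"The pitch statistics used above (`pitch_avg`, `pitch_std`) can be estimated directly from the new speaker's data once `fmin` and `fmax` are chosen. The cell below is a rough, illustrative sketch (not part of the original recipe) that runs librosa's `pyin` over the training manifest; the numbers it prints may differ slightly from the values used in the command above.\n",
"\n",
"```python\n",
"import json\n",
"\n",
"import librosa\n",
"import numpy as np\n",
"\n",
"f0_list = []\n",
"with open('./6097_manifest_train_dur_5_mins_local.json') as f:\n",
"    for line in f:\n",
"        record = json.loads(line)\n",
"        audio, sr = librosa.load(record['audio_filepath'], sr=22050)\n",
"        # pyin returns NaN for unvoiced frames; keep only the voiced ones\n",
"        f0, _, _ = librosa.pyin(audio, fmin=30, fmax=512, sr=sr)\n",
"        f0_list.append(f0[~np.isnan(f0)])\n",
"\n",
"f0 = np.concatenate(f0_list)\n",
"print(f'pitch_avg={f0.mean():.1f}, pitch_std={f0.std():.1f}')\n",
"```"
]
},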
{
"cell_type": "markdown",
"metadata": {
"id": "c3bdf1ed"
},
"source": [
"## Synthesize Samples from Finetuned Checkpoints\n",
"\n",
"---\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "f2b46325"
},
"source": [
"Once we have finetuned our FastPitch model, we can synthesize the audio samples for given text using the following inference steps. We use a HiFiGAN vocoder trained on LJSpeech.\n",
"\n",
"We define some helper functions as well."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "886c91dc"
},
"outputs": [],
"source": [
"from nemo.collections.tts.models import HifiGanModel\n",
"from nemo.collections.tts.models import FastPitchModel\n",
"\n",
"vocoder = HifiGanModel.from_pretrained(\"tts_hifigan\")\n",
"vocoder.eval().cuda()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "0a4c986f"
},
"outputs": [],
"source": [
"def infer(spec_gen_model, vocoder_model, str_input, speaker = None):\n",
" \"\"\"\n",
" Synthesizes spectrogram and audio from a text string given a spectrogram synthesis and vocoder model.\n",
" \n",
" Arguments:\n",
" spec_gen_model -- Instance of FastPitch model\n",
" vocoder_model -- Instance of a vocoder model (HiFiGAN in our case)\n",
" str_input -- Text input for the synthesis\n",
" speaker -- Speaker number (in the case of a multi-speaker model -- in the mixing case)\n",
" \n",
" Returns:\n",
" spectrogram, waveform of the synthesized audio.\n",
" \"\"\"\n",
" parser_model = spec_gen_model\n",
" with torch.no_grad():\n",
" parsed = parser_model.parse(str_input)\n",
" if speaker is not None:\n",
" speaker = torch.tensor([speaker]).long().cuda()\n",
" spectrogram = spec_gen_model.generate_spectrogram(tokens=parsed, speaker = speaker)\n",
" audio = vocoder_model.convert_spectrogram_to_audio(spec=spectrogram)\n",
" \n",
" if spectrogram is not None:\n",
" if isinstance(spectrogram, torch.Tensor):\n",
" spectrogram = spectrogram.to('cpu').numpy()\n",
" if len(spectrogram.shape) == 3:\n",
" spectrogram = spectrogram[0]\n",
" if isinstance(audio, torch.Tensor):\n",
" audio = audio.to('cpu').numpy()\n",
" return spectrogram, audio\n",
"\n",
"def get_best_ckpt(experiment_base_dir, new_speaker_id, duration_mins, mixing_enabled, original_speaker_id):\n",
" \"\"\"\n",
" Gives the model checkpoint paths of an experiment we ran. \n",
" \n",
" Arguments:\n",
" experiment_base_dir -- Base experiment directory (specified on top of this notebook as exp_base_dir)\n",
" new_speaker_id -- Speaker id of new HiFiTTS speaker we finetuned FastPitch on\n",
" duration_mins -- total minutes of the new speaker data\n",
" mixing_enabled -- True or False depending on whether we want to mix the original speaker data or not\n",
" original_speaker_id -- speaker id of the original HiFiTTS speaker\n",
" \n",
" Returns:\n",
" List of all checkpoint paths sorted by validation error, Last checkpoint path\n",
" \"\"\"\n",
" if not mixing_enabled:\n",
" exp_dir = \"{}/{}_to_{}_no_mixing_{}_mins\".format(experiment_base_dir, original_speaker_id, new_speaker_id, duration_mins)\n",
" else:\n",
" exp_dir = \"{}/{}_to_{}_mixing_{}_mins\".format(experiment_base_dir, original_speaker_id, new_speaker_id, duration_mins)\n",
" \n",
" ckpt_candidates = []\n",
" last_ckpt = None\n",
" for root, dirs, files in os.walk(exp_dir):\n",
" for file in files:\n",
" if file.endswith(\".ckpt\"):\n",
" val_error = float(file.split(\"v_loss=\")[1].split(\"-epoch\")[0])\n",
" if \"last\" in file:\n",
" last_ckpt = os.path.join(root, file)\n",
" ckpt_candidates.append( (val_error, os.path.join(root, file)))\n",
" ckpt_candidates.sort()\n",
" \n",
" return ckpt_candidates, last_ckpt"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "0153bd5a"
},
"source": [
"Specify the speaker id, duration mins and mixing variable to find the relevant checkpoint from the exp_base_dir and compare the synthesized audio with validation samples of the new speaker."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "8901f88b"
},
"outputs": [],
"source": [
"new_speaker_id = 6097\n",
"duration_mins = 5\n",
"mixing = False\n",
"original_speaker_id = \"ljspeech\"\n",
"\n",
"_ ,last_ckpt = get_best_ckpt(\"./\", new_speaker_id, duration_mins, mixing, original_speaker_id)\n",
"print(last_ckpt)\n",
"\n",
"spec_model = FastPitchModel.load_from_checkpoint(last_ckpt)\n",
"spec_model.eval().cuda()\n",
"_speaker=None\n",
"if mixing:\n",
" _speaker = 1\n",
"\n",
"num_val = 2\n",
"\n",
"manifest_path = os.path.join(\"./\", \"{}_manifest_dev_ns_all_local.json\".format(new_speaker_id))\n",
"val_records = []\n",
"with open(manifest_path, \"r\") as f:\n",
" for i, line in enumerate(f):\n",
" val_records.append( json.loads(line) )\n",
" if len(val_records) >= num_val:\n",
" break\n",
" \n",
"for val_record in val_records:\n",
" print (\"Real validation audio\")\n",
" ipd.display(ipd.Audio(val_record['audio_filepath'], rate=22050))\n",
" print (\"SYNTHESIZED FOR -- Speaker: {} | Dataset size: {} mins | Mixing:{} | Text: {}\".format(new_speaker_id, duration_mins, mixing, val_record['text']))\n",
" spec, audio = infer(spec_model, vocoder, val_record['text'], speaker = _speaker)\n",
" ipd.display(ipd.Audio(audio, rate=22050))\n",
" %matplotlib inline\n",
" #if spec is not None:\n",
" imshow(spec, origin=\"lower\", aspect = \"auto\")\n",
" plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ge2s7s9-w3py"
},
"source": [
"## Improving Speech Quality\n",
"___\n",
"\n",
"We see that from fine-tuning FastPitch, we were able to generate audio in a male voice but the audio quality is not as good as we expect. We recommend two steps to improve audio quality:\n",
"\n",
"* Finetuning HiFiGAN\n",
"* Adding more data\n",
"\n",
"Both of these steps are outside the scope of the notebook due to the limited compute available on colab.\n",
"\n",
"### Finetuning HiFiGAN\n",
"From the synthesized samples, there might be audible audio crackling. To fix this, we need to finetune HiFiGAN on the new speaker's data. HiFiGAN shows improvement using synthesized mel spectrograms, so the first step is to generate mel spectrograms with our finetuned FastPitch model.\n",
"\n",
"```python\n",
"# Get records from the training manifest\n",
"manifest_path = \"./6097_manifest_train_dur_5_mins_local.json\"\n",
"records = []\n",
"with open(manifest_path, \"r\") as f:\n",
" for i, line in enumerate(f):\n",
" records.append(json.loads(line))\n",
"\n",
"# Generate a spectrogram for each item\n",
"for i, r in enumerate(records):\n",
" with torch.no_grad():\n",
" parsed = parser_model.parse(r['text'])\n",
" spectrogram = spec_gen_model.generate_spectrogram(tokens=parsed)\n",
" if isinstance(spectrogram, torch.Tensor):\n",
" spectrogram = spectrogram.to('cpu').numpy()\n",
" if len(spectrogram.shape) == 3:\n",
" spectrogram = spectrogram[0]\n",
" np.save(f\"mel_{i}\", spectrogram)\n",
" r[\"mel_filepath\"] = f\"mel_{i}.npy\"\n",
"\n",
"# Save to a new json\n",
"with open(\"hifigan_train_ft.json\", \"w\") as f:\n",
" for r in records:\n",
" f.write(json.dumps(r) + '\\n')\n",
"\n",
"# Please do the same for the validation json. Code is omitted.\n",
"```\n",
"\n",
"We can then finetune hifigan similarly to fastpitch using NeMo's [hifigan_finetune.py](https://github.com/NVIDIA/NeMo/blob/main/examples/tts/hifigan_finetune.py) and [hifigan.yaml](https://github.com/NVIDIA/NeMo/blob/main/examples/tts/conf/hifigan/hifigan.yaml):\n",
"\n",
"`python examples/tts/hifigan_finetune.py --config_name=hifigan.yaml model.train_ds.dataloader_params.batch_size=32 model.max_steps=1000 ~model.sched model.optim.lr=0.0001 train_dataset=./hifigan_train_ft.json validation_datasets=./hifigan_val_ft.json exp_manager.exp_dir=hifigan_ft +init_from_nemo_model=tts_hifigan.nemo trainer.check_val_every_n_epoch=10`\n",
"\n",
"### Improving TTS by Adding More Data\n",
"We can add more data in two ways. they can be combined for the best effect:\n",
"\n",
"* Add more training data from the new speaker\n",
"\n",
"The entire notebook can be repeated from the top after a new .json is defined for the additional data. Modify your finetuning commands to point to the new json. Be sure to increase the number of steps as more data is added to both the fastpitch and hifigan finetuning. We recommend 1000 steps per minute of audio for fastpitch and 500 steps per minute of audio for hifigan.\n",
"\n",
"* Mix new speaker data with old speaker data\n",
"\n",
"We recommend to train fastpitch using both old speaker data (LJSpeech in this notebook) and the new speaker data. In this case, please modify the .json when finetuning fastpitch to include speaker information:\n",
"\n",
"`\n",
"{\"audio_filepath\": \"new_speaker.wav\", \"text\": \"sample\", \"duration\": 2.6, \"speaker\": 1}\n",
"{\"audio_filepath\": \"old_speaker.wav\", \"text\": \"LJSpeech sample\", \"duration\": 2.6, \"speaker\": 0}\n",
"`\n",
"5 hours of data from the old speaker should be sufficient. Since we should have less data from the new speaker, we need to ensure that the model sees a similar amount of new data and old data. For each sample from the old speaker, please add a sample from the new speaker in the .json. The samples from the new speaker will be repeated.\n",
"\n",
"Modify the fastpitch training command to point to the new training and validation .jsons, and update `model.n_speakers=1` to `model.n_speakers=2`. Ensure the pitch statistics correspond to the new speaker.\n",
"\n",
"For HiFiGAN finetuning, the training should be done on the new speaker data."
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 5
}