NeMo/tutorials/speaker_tasks/ASR_with_SpeakerDiarization.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Automatic Speech Recognition with Speaker Diarization"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"\"\"\"\n",
"You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.\n",
"\n",
"Instructions for setting up Colab are as follows:\n",
"1. Open a new Python 3 notebook.\n",
"2. Import this notebook from GitHub (File -> Upload Notebook -> \"GITHUB\" tab -> copy/paste GitHub URL)\n",
"3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select \"GPU\" for hardware accelerator)\n",
"4. Run this cell to set up dependencies.\n",
"\"\"\"\n",
"# If you're using Google Colab and not running locally, run this cell.\n",
"\n",
"## Install dependencies\n",
"!pip install wget\n",
"!apt-get install sox libsndfile1 ffmpeg\n",
"!pip install unidecode\n",
"\n",
"# ## Install NeMo\n",
"BRANCH = 'main'\n",
"!python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[asr]\n",
"\n",
"## Install TorchAudio\n",
"!pip install torchaudio -f https://download.pytorch.org/whl/torch_stable.html"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Introduction"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Speaker diarization lets us figure out \"who spoke when\" in the transcription. Without speaker diarization, we cannot distinguish the speakers in the transcript generated from automatic speech recognition (ASR). Nowadays, ASR combined with speaker diarization has shown immense use in many tasks, ranging from analyzing meeting transcription to media indexing. \n",
"\n",
"In this tutorial, we demonstrate how we can get ASR transcriptions combined with speaker labels. Since we don't include detailed process of getting ASR result or diarization result, please refer to the following links for more in-depth description.\n",
"\n",
"If you need detailed understanding of transcribing words with ASR, refer to this [ASR Tutorial](https://github.com/NVIDIA/NeMo/blob/stable/tutorials/asr/ASR_with_NeMo.ipynb) tutorial.\n",
"\n",
"\n",
"For detailed parameter setting and execution of speaker diarization, refer to this [Diarization Inference](https://github.com/NVIDIA/NeMo/blob/stable/tutorials/speaker_tasks/Speaker_Diarization_Inference.ipynb) tutorial.\n",
"\n",
"\n",
"An example script that runs ASR and speaker diarization together can be found at [ASR with Diarization](https://github.com/NVIDIA/NeMo/blob/main/examples/speaker_tasks/diarization/asr_with_diarization.py).\n",
"\n",
"### Speaker diarization in ASR pipeline\n",
"\n",
"Speaker diarization result in ASR pipeline should align well with ASR output. Thus, we use ASR output to create Voice Activity Detection (VAD) timestamps to obtain segments we want to diarize. The segments we obtain from the VAD timestamps are further segmented into sub-segments in the speaker diarization step. Finally, after obtaining the speaker labels from speaker diarization, we match the decoded words with speaker labels to generate a transcript with speaker labels.\n",
"\n",
" ASR → VAD timestamps and decoded words → speaker diarization → speaker label matching"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Import libraries\n",
"\n",
"Let's first import nemo asr and other libraries for visualization purposes."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import nemo.collections.asr as nemo_asr\n",
"import numpy as np\n",
"from IPython.display import Audio, display\n",
"import librosa\n",
"import os\n",
"import wget\n",
"import matplotlib.pyplot as plt\n",
"\n",
"import nemo\n",
"from nemo.collections.asr.parts.utils.diarization_utils import ASR_DIAR_OFFLINE\n",
"\n",
"import glob\n",
"\n",
"import pprint\n",
"pp = pprint.PrettyPrinter(indent=4)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We demonstrate this tutorial using a merged AN4 audioclip. The merged audioclip contains the speech of two speakers (male and female) reading dates in different formats. Run the following script to download the audioclip and play it."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ROOT = os.getcwd()\n",
"data_dir = os.path.join(ROOT,'data')\n",
"os.makedirs(data_dir, exist_ok=True)\n",
"\n",
"an4_audio_url = \"https://nemo-public.s3.us-east-2.amazonaws.com/an4_diarize_test.wav\"\n",
"if not os.path.exists(os.path.join(data_dir,'an4_diarize_test.wav')):\n",
" AUDIO_FILENAME = wget.download(an4_audio_url, data_dir)\n",
"else:\n",
" AUDIO_FILENAME = os.path.join(data_dir,'an4_diarize_test.wav')\n",
"\n",
"audio_file_list = glob.glob(f\"{data_dir}/*.wav\")\n",
"print(\"Input audio file list: \\n\", audio_file_list)\n",
"\n",
"signal, sample_rate = librosa.load(AUDIO_FILENAME, sr=None)\n",
"display(Audio(signal,rate=sample_rate))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"`display_waveform()` and `get_color()` functions are defined for displaying the waveform with diarization results."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def display_waveform(signal,text='Audio',overlay_color=[]):\n",
" fig,ax = plt.subplots(1,1)\n",
" fig.set_figwidth(20)\n",
" fig.set_figheight(2)\n",
" plt.scatter(np.arange(len(signal)),signal,s=1,marker='o',c='k')\n",
" if len(overlay_color):\n",
" plt.scatter(np.arange(len(signal)),signal,s=1,marker='o',c=overlay_color)\n",
" fig.suptitle(text, fontsize=16)\n",
" plt.xlabel('time (secs)', fontsize=18)\n",
" plt.ylabel('signal strength', fontsize=14);\n",
" plt.axis([0,len(signal),-0.5,+0.5])\n",
" time_axis,_ = plt.xticks();\n",
" plt.xticks(time_axis[:-1],time_axis[:-1]/sample_rate);\n",
" \n",
"COLORS=\"b g c m y\".split()\n",
"\n",
"def get_color(signal,speech_labels,sample_rate=16000):\n",
" c=np.array(['k']*len(signal))\n",
" for time_stamp in speech_labels:\n",
" start,end,label=time_stamp.split()\n",
" start,end = int(float(start)*16000),int(float(end)*16000),\n",
" if label == \"speech\":\n",
" code = 'red'\n",
" else:\n",
" code = COLORS[int(label.split('_')[-1])]\n",
" c[start:end]=code\n",
" \n",
" return c "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Using the above function, we can display the waveform of the example audio clip. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"display_waveform(signal)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Parameter setting for ASR and diarization\n",
"First, we need to setup the following parameters for ASR and diarization. We start our demonstration by first transcribing the audio recording using our pretrained ASR model `QuartzNet15x5Base-En` and use the CTC output probabilities to get timestamps for the spoken words. We then use these timestamps to get speaker label information using speaker diarizer model. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from omegaconf import OmegaConf\n",
"import shutil\n",
"CONFIG_URL = \"https://raw.githubusercontent.com/NVIDIA/NeMo/modify_speaker_input/examples/speaker_tasks/diarization/conf/offline_diarization_with_asr.yaml\"\n",
"\n",
"if not os.path.exists(os.path.join(data_dir,'offline_diarization_with_asr.yaml')):\n",
" CONFIG = wget.download(CONFIG_URL, data_dir)\n",
"else:\n",
" CONFIG = os.path.join(data_dir,'offline_diarization_with_asr.yaml')\n",
" \n",
"cfg = OmegaConf.load(CONFIG)\n",
"print(OmegaConf.to_yaml(cfg))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Speaker Diarization scripts commonly expects following arguments:\n",
"1. manifest_filepath : Path to manifest file containing json lines of format: {\"audio_filepath\": \"/path/to/audio_file\", \"offset\": 0, \"duration\": null, \"label\": \"infer\", \"text\": \"-\", \"num_speakers\": null, \"rttm_filepath\": \"/path/to/rttm/file\", \"uem_filepath\"=\"/path/to/uem/filepath\"}\n",
"2. out_dir : directory where outputs and intermediate files are stored. \n",
"3. oracle_vad: If this is true then we extract speech activity labels from rttm files, if False then either \n",
"4. vad.model_path or external_manifestpath containing speech activity labels has to be passed. \n",
"\n",
"Mandatory fields are audio_filepath, offset, duration, label and text. For the rest if you would like to evaluate with known number of speakers pass the value else `null`. If you would like to score the system with known rttms then that should be passed as well, else `null`. uem file is used to score only part of your audio for evaluation purposes, hence pass if you would like to evaluate on it else `null`.\n",
"\n",
"\n",
"**Note:** we expect audio and corresponding RTTM have **same base name** and the name should be **unique**. \n",
"\n",
"For example: if audio file name is **test_an4**.wav, if provided we expect corresponding rttm file name to be **test_an4**.rttm (note the matching **test_an4** base name)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Lets create manifest with the an4 audio and rttm available. If you have more than one files you may also use the script `NeMo/scripts/speaker_tasks/rttm_to_manifest.py` to generate manifest file from list of audio files and optionally rttm files "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create a manifest for input with below format. \n",
"# {\"audio_filepath\": \"/path/to/audio_file\", \"offset\": 0, \"duration\": null, \"label\": \"infer\", \"text\": \"-\", \n",
"# \"num_speakers\": null, \"rttm_filepath\": \"/path/to/rttm/file\", \"uem_filepath\"=\"/path/to/uem/filepath\"}\n",
"import json\n",
"meta = {\n",
" 'audio_filepath': AUDIO_FILENAME, \n",
" 'offset': 0, \n",
" 'duration':None, \n",
" 'label': 'infer', \n",
" 'text': '-', \n",
" 'num_speakers': 2, \n",
" 'rttm_filepath': None, \n",
" 'uem_filepath' : None\n",
"}\n",
"with open(os.path.join(data_dir,'input_manifest.json'),'w') as fp:\n",
" json.dump(meta,fp)\n",
" fp.write('\\n')\n",
"\n",
"\n",
"cfg.diarizer.manifest_filepath = os.path.join(data_dir,'input_manifest.json')\n",
"!cat {cfg.diarizer.manifest_filepath}"
]
},
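{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you have several recordings, you can write one JSON line per audio file. The cell below is a minimal sketch (not needed for this single-file tutorial): it assumes all audio files live in `data_dir`, writes a hypothetical `multi_file_manifest.json`, and leaves `rttm_filepath`/`uem_filepath` as `null` unless you have reference files to score against."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional sketch: build a manifest with one JSON line per audio file.\n",
"# Assumes the audio files are in data_dir; reference RTTM/UEM paths are left as None here.\n",
"import glob\n",
"import json\n",
"import os\n",
"\n",
"multi_manifest_path = os.path.join(data_dir, 'multi_file_manifest.json')\n",
"with open(multi_manifest_path, 'w') as fp:\n",
"    for audio_path in sorted(glob.glob(f\"{data_dir}/*.wav\")):\n",
"        entry = {\n",
"            'audio_filepath': audio_path,\n",
"            'offset': 0,\n",
"            'duration': None,\n",
"            'label': 'infer',\n",
"            'text': '-',\n",
"            'num_speakers': None,   # set to an integer if known\n",
"            'rttm_filepath': None,  # set to a matching RTTM path to enable scoring\n",
"            'uem_filepath': None,\n",
"        }\n",
"        fp.write(json.dumps(entry) + '\\n')"
]
},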
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Set the parameters required for diarization, here we get voice activity labels from ASR, which is set through parameter `cfg.diarizer.asr.parameters.asr_based_vad`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pretrained_speaker_model='ecapa_tdnn'\n",
"cfg.diarizer.manifest_filepath = cfg.diarizer.manifest_filepath\n",
"cfg.diarizer.out_dir = data_dir #Directory to store intermediate files and prediction outputs\n",
"cfg.diarizer.speaker_embeddings.model_path = pretrained_speaker_model\n",
"cfg.diarizer.speaker_embeddings.parameters.window_length_in_sec = 1.5\n",
"cfg.diarizer.speaker_embeddings.parameters.shift_length_in_sec = 0.75\n",
"cfg.diarizer.clustering.parameters.oracle_num_speakers=True\n",
"\n",
"# USE VAD generated from ASR timestamps\n",
"cfg.diarizer.asr.model_path = 'QuartzNet15x5Base-En'\n",
"cfg.diarizer.oracle_vad = False # ----> ORACLE VAD \n",
"cfg.diarizer.asr.parameters.asr_based_vad = True\n",
"cfg.diarizer.asr.parameters.threshold=300"
]
},
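{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this tutorial we derive speech activity from ASR word timestamps, but the same config also supports a dedicated VAD model or oracle VAD. The commented-out sketch below shows how those alternatives would be set; it is only illustrative, so keep it commented out to reproduce the ASR-based VAD results in the rest of this notebook. Here `vad_marblenet` is assumed to be an available pretrained NeMo VAD model, and oracle VAD assumes the manifest provides `rttm_filepath` entries."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional alternatives to ASR-based VAD (illustrative sketch; keep commented out for this tutorial).\n",
"\n",
"# (a) Use a pretrained NeMo VAD model instead of ASR-based VAD:\n",
"# cfg.diarizer.asr.parameters.asr_based_vad = False\n",
"# cfg.diarizer.vad.model_path = 'vad_marblenet'\n",
"\n",
"# (b) Use oracle VAD extracted from reference RTTM files (requires rttm_filepath in the manifest):\n",
"# cfg.diarizer.oracle_vad = True"
]
},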
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's create an instance from ASR_DIAR_OFFLINE class. We pass the ``params`` variable to setup the parameters for both ASR and diarization. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from nemo.collections.asr.parts.utils.speaker_utils import audio_rttm_map\n",
"asr_diar_offline = ASR_DIAR_OFFLINE(**cfg.diarizer.asr.parameters)\n",
"asr_diar_offline.root_path = cfg.diarizer.out_dir\n",
"\n",
"AUDIO_RTTM_MAP = audio_rttm_map(cfg.diarizer.manifest_filepath)\n",
"asr_diar_offline.AUDIO_RTTM_MAP = AUDIO_RTTM_MAP\n",
"asr_model = asr_diar_offline.set_asr_model(cfg.diarizer.asr.model_path)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Run ASR and get word timestamps\n",
"Before we run speaker diarization, we should run ASR and get the ASR output to generate decoded words and timestamps for those words. The following two variables are obtained from `run_ASR()` function."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"- words List[str]: contains the sequence of words.\n",
"- word_ts List[int]: contains frame level index of the start and the end of each word."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"word_list, word_ts_list = asr_diar_offline.run_ASR(asr_model)\n",
"\n",
"print(\"Decoded word output: \\n\", word_list[0])\n",
"print(\"Word-level timestamps \\n\", word_ts_list[0])"
]
},
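{
"cell_type": "markdown",
"metadata": {},
"source": [
"To see how the decoded words line up with their timestamps, you can zip the two lists for the first (and only) audio file, assuming each timestamp entry is a `[start, end]` pair as printed above. Keep in mind these are frame-level indices from the CTC output, not seconds."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Quick sanity check: pair each decoded word with its [start, end] frame indices\n",
"# for the first audio file in the manifest.\n",
"for word, (start, end) in zip(word_list[0], word_ts_list[0]):\n",
"    print(f\"{word:>12s}: frames {start} - {end}\")"
]
},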
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Run diarization with extracted word timestamps\n",
"\n",
"\n",
"We need to convert ASR based VAD output (*.rttm format) to VAD manifest (*.json format) file. The following function converts the rttm files into manifest file and returns the path for manifest file."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now that all the components for diarization is ready, let's run diarization by calling `run_diarization()` function."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"score = asr_diar_offline.run_diarization(cfg, word_ts_list)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"`run_diarization()` function creates `an4_diarize_test.rttm` file. Let's see what is written in this `rttm` file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def read_file(path_to_file):\n",
" with open(path_to_file) as f:\n",
" contents = f.read().splitlines()\n",
" return contents"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"predicted_speaker_label_rttm_path = f\"{data_dir}/pred_rttms/an4_diarize_test.rttm\"\n",
"pred_rttm = read_file(predicted_speaker_label_rttm_path)\n",
"\n",
"pp.pprint(pred_rttm)"
]
},
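{
"cell_type": "markdown",
"metadata": {},
"source": [
"Each RTTM line follows the standard layout `SPEAKER <file-id> <channel> <start> <duration> <NA> <NA> <speaker> <NA> <NA>`. The short sketch below parses the predicted lines into `(start, end, speaker)` tuples; the `rttm_to_labels()` helper used in the next cell returns comparable `\"start end speaker\"` strings."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Parse the predicted RTTM lines into (start_sec, end_sec, speaker) tuples.\n",
"segments = []\n",
"for line in pred_rttm:\n",
"    fields = line.split()\n",
"    start, dur, speaker = float(fields[3]), float(fields[4]), fields[7]\n",
"    segments.append((start, round(start + dur, 3), speaker))\n",
"pp.pprint(segments)"
]
},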
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from nemo.collections.asr.parts.utils.speaker_utils import rttm_to_labels\n",
"pred_labels = rttm_to_labels(predicted_speaker_label_rttm_path)\n",
"\n",
"color = get_color(signal, pred_labels)\n",
"display_waveform(signal,'Audio with Speaker Labels', color)\n",
"display(Audio(signal,rate=16000))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Check the transcription output\n",
"Now we've done all the processes for running ASR and diarization, let's match the diarization result with ASR result and get the final output. `write_json_and_transcript()` function matches diarization output `diar_labels` with `word_list` using the timestamp information `word_ts_list`. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"asr_output_dict = asr_diar_offline.write_json_and_transcript(word_list, word_ts_list)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"After running `write_json_and_transcript()` function, the transcription output will be located in `./pred_rttms` folder, which shows **start time to end time of the utterance, speaker ID, and words spoken** during the notified time."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"transcription_path_to_file = f\"{data_dir}/pred_rttms/an4_diarize_test.txt\"\n",
"transcript = read_file(transcription_path_to_file)\n",
"pp.pprint(transcript)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Another output is transcription output in JSON format, which is saved in `./pred_rttms/an4_diarize_test.json`. \n",
"\n",
"In the JSON format output, we include information such as **transcription, estimated number of speakers (variable named `speaker_count`), start and end time of each word and most importantly, speaker label for each word.**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"transcription_path_to_file = f\"{data_dir}/pred_rttms/an4_diarize_test.json\"\n",
"json_contents = read_file(transcription_path_to_file)\n",
"pp.pprint(json_contents)"
]
}
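,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The cell above prints the raw lines of the JSON output. If you prefer working with the parsed structure, the sketch below loads it with the `json` module; it assumes the file holds a single JSON object and falls back to line-by-line parsing in case it is written as JSON lines."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Parse the JSON output instead of printing raw lines.\n",
"# Assumes a single JSON object; falls back to JSON-lines parsing if needed.\n",
"import json\n",
"\n",
"with open(transcription_path_to_file) as f:\n",
"    raw = f.read()\n",
"\n",
"try:\n",
"    output = json.loads(raw)\n",
"except json.JSONDecodeError:\n",
"    output = [json.loads(line) for line in raw.splitlines() if line.strip()]\n",
"\n",
"# Show the top-level structure: keys of the object, or the number of records.\n",
"if isinstance(output, dict):\n",
"    pp.pprint(list(output.keys()))\n",
"else:\n",
"    print(f\"{len(output)} records parsed\")"
]
}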
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.2"
}
},
"nbformat": 4,
"nbformat_minor": 4
}