Docs, Contributing, Readme draft (#977)

* Docs, Contributing, Readme draft

Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>

* minor readme update

Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>

* update

Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>

* docs non-existing link

Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>

* update link

Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>

* add readthedocs.yaml

Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
This commit is contained in:
Oleksii Kuchaiev 2020-08-03 17:18:44 -07:00 committed by GitHub
parent edbf3cd4ba
commit d1abb897c0
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
24 changed files with 1897 additions and 14 deletions

1
.gitignore vendored

@ -85,6 +85,7 @@ instance/
# Sphinx documentation
docs/_build/
docs/sources/build
# PyBuilder
target/

31
.readthedocs.yml Normal file

@ -0,0 +1,31 @@
# =============================================================================
# Copyright (c) 2020 NVIDIA. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# =============================================================================
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
# Required field.
version: 2
# Build documentation in the docs/ directory with Sphinx.
sphinx:
configuration: docs/source/conf.py
# Set the version of Python and requirements required to build your docs
python:
version: 3.7
install:
- requirements: requirements/requirements_docs.txt

57
CONTRIBUTING.md Normal file

@ -0,0 +1,57 @@
# Contributions are welcome!
We do all of NeMo's development in the open. Contributions from the NeMo community are welcome.
# Pull Requests (PR) Guidelines
1) Make sure your PR does one thing. Have a clear answer to "What does this PR do?".
2) Read the General Principles and style guide below
3) Make sure unit tests pass on your machine
4) Make sure you sign your commits, e.g. use ``git commit -s`` when committing
5) Make sure all unit tests finish successfully before sending the PR: run ``pytest``, or ``pytest --cpu`` if your dev box does not have a GPU, from NeMo's root folder (see the example below)
6) Send your PR and request a review
Send your PR to the `master` branch
Whom should you ask for review:
1. For changes to NeMo's core: @okuchaiev, @blisc, @titu1994, @tkornuta-nvidia, or @ericharper
1. For changes to NeMo's ASR collection: @okuchaiev, @titu1994, @redoctopus, @blisc, or @vsl9
1. For changes to NeMo's NLP collection: @ekmb, @yzhang123, @VahidooX, @vladgets, or @ericharper
1. For changes to NeMo's TTS collection: @blisc or @stasbel
Note that some people may self-assign to review your PR; in that case, please wait for them to add a review.
Your pull request must pass all checks and peer review before it can be merged.
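For example, a typical round of local checks before sending a PR might look like this (a minimal sketch; the commit message is a placeholder):

```bash
# From NeMo's root folder: check the code style and auto-fix issues
python setup.py style
python setup.py style --fix

# Run the unit tests (use --cpu if your machine has no GPU)
pytest
# pytest --cpu

# Sign your commit
git commit -s -m "Describe what this change does"
```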
# General principles
1. **User-oriented**: make it easy for end users, even at the cost of writing more code in the background
1. **Robust**: make it hard for users to make mistakes.
1. **Supports both training and inference**: if a module can only be used for training, write a companion module to be used during inference.
1. **Reusable**: for every piece of code, think about how it could be reused in the future and make that easy.
1. **Readable**: code should be easy to read.
1. **Legal**: if you copy even one line of code from the Internet, make sure its license is compatible with NeMo's license. Give credit and link back to the source.
1. **Sensible**: code should make sense. If you think a piece of code might be confusing, write comments.
## Python style
We use ``black`` as our style guide. To check whether your code will pass the style check, run ``python setup.py style`` from NeMo's repo folder; if it does not pass, run ``python setup.py style --fix``. A short example that follows the guidelines below is shown after this list.
1. Include docstrings for every class and method exposed to the user.
1. Use Python 3 type hints for every class and method exposed to the user.
1. Avoid wildcard imports: avoid ``from X import *`` unless ``__all__`` is defined in ``X.py``.
1. Minimize the use of ``**kwargs``.
1. Raising an error is preferred to ``assert``. Write ``if X: raise Error`` instead of ``assert X``.
1. Classes are preferred to standalone methods.
1. Methods should be atomic. A method shouldn't be longer than 75 lines, i.e. it should fit on the screen without scrolling.
1. If a method has arguments that don't fit into one line, each argument should be on its own line for readability.
1. Add ``__init__.py`` for every folder.
1. F-strings are preferred to other string formatting.
1. Loggers are preferred to ``print``. In NeMo, use the logger from ``nemo.utils``: ``from nemo.utils import logging``.
1. Private functions (functions starting with ``_``) shouldn't be called outside their host file.
1. If a comment spans multiple lines, use ``'''`` instead of ``#``.
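For illustration, a small class written in this style might look like the following sketch (the class and its behavior are hypothetical; only the ``nemo.utils.logging`` import comes from the guidelines above):

```python
from nemo.utils import logging


class DurationClipper:
    """Clips audio durations to a maximum value (hypothetical example)."""

    def __init__(self, max_duration: float = 16.7):
        if max_duration <= 0:
            raise ValueError(f"max_duration must be positive, got {max_duration}")
        self.max_duration = max_duration

    def clip(self, duration: float) -> float:
        """Returns ``duration`` clipped to ``max_duration``."""
        if duration > self.max_duration:
            logging.warning(f"Clipping duration {duration} to {self.max_duration}")
            return self.max_duration
        return duration
```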
# Collections
A collection is a logical grouping of related Neural Modules that share a domain area or semantics.
When contributing a module to a collection, please make sure it belongs to that category.
If you would like to start a new collection and contribute it back to the platform, you are very welcome to do so.

65
README.rst Normal file

@ -0,0 +1,65 @@
|status| |license| |lgtm_grade| |lgtm_alerts| |black|
.. |status| image:: http://www.repostatus.org/badges/latest/active.svg
:target: http://www.repostatus.org/#active
:alt: Project Status: Active The project has reached a stable, usable state and is being actively developed.
.. |license| image:: https://img.shields.io/badge/License-Apache%202.0-brightgreen.svg
:target: https://github.com/NVIDIA/NeMo/blob/master/LICENSE
:alt: NeMo core license and license for collections in this repo
.. |lgtm_grade| image:: https://img.shields.io/lgtm/grade/python/g/NVIDIA/NeMo.svg?logo=lgtm&logoWidth=18
:target: https://lgtm.com/projects/g/NVIDIA/NeMo/context:python
:alt: Language grade: Python
.. |lgtm_alerts| image:: https://img.shields.io/lgtm/alerts/g/NVIDIA/NeMo.svg?logo=lgtm&logoWidth=18
:target: https://lgtm.com/projects/g/NVIDIA/NeMo/alerts/
:alt: Total alerts
.. |black| image:: https://img.shields.io/badge/code%20style-black-000000.svg
:target: https://github.com/psf/black
:alt: Code style: black
**NVIDIA NeMo**
===============
Train State-of-the-Art AI Models
--------------------------------
**Introduction**
NeMo is a toolkit for creating `Conversational AI <https://developer.nvidia.com/conversational-ai#started>`_ applications.
The NeMo toolkit makes it possible for researchers to easily compose complex neural network architectures for conversational AI out of reusable components called Neural Modules.
**Neural Modules** are conceptual blocks of neural networks that take *typed* inputs and produce *typed* outputs. Such modules typically represent data layers, encoders, decoders, language models, loss functions, or methods of combining activations.
The toolkit comes with extendable collections of pre-built modules and ready-to-use models for automatic speech recognition (ASR), natural language processing (NLP), and text-to-speech synthesis (TTS).
Built for speed, NeMo can utilize NVIDIA's Tensor Cores and scale out training to multiple GPUs and multiple nodes.
`Documentation <https://docs.nvidia.com/deeplearning/nemo/developer_guide/en/candidate/>`_
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Requirements
------------
NeMo works with:
1) Python 3.6 or 3.7
2) PyTorch 1.6 or above
Installation
~~~~~~~~~~~~
``pip install nemo_toolkit[all]==version``
We recommend using NVIDIA's PyTorch container:
.. code-block:: bash
docker run --gpus all -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g \
-p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit \
stack=67108864 nvcr.io/nvidia/pytorch:20.06-py3
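Once inside the container (or any environment that meets the requirements above), a quick sanity check is to install the toolkit and import it; ``<version>`` is a placeholder here and the ``__version__`` attribute is assumed to be exposed by the package.
.. code-block:: bash
pip install nemo_toolkit[all]==<version>
python -c "import nemo; print(nemo.__version__)"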

0
docs/.nojekyll Normal file

216
docs/sources/Makefile Normal file

@ -0,0 +1,216 @@
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = build
# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
.PHONY: help
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " applehelp to make an Apple Help Book"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " xml to make Docutils-native XML files"
@echo " pseudoxml to make pseudoxml-XML files for display purposes"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
@echo " coverage to run coverage check of the documentation (if enabled)"
.PHONY: clean
clean:
rm -rf $(BUILDDIR)/*
.PHONY: html
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
.PHONY: dirhtml
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
.PHONY: singlehtml
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
.PHONY: pickle
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
.PHONY: json
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
.PHONY: htmlhelp
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
.PHONY: qthelp
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/OpenSeq2Seq.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/OpenSeq2Seq.qhc"
.PHONY: applehelp
applehelp:
$(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp
@echo
@echo "Build finished. The help book is in $(BUILDDIR)/applehelp."
@echo "N.B. You won't be able to view it unless you put it in" \
"~/Library/Documentation/Help or install it in your application" \
"bundle."
.PHONY: devhelp
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/OpenSeq2Seq"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/OpenSeq2Seq"
@echo "# devhelp"
.PHONY: epub
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
.PHONY: latex
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
.PHONY: latexpdf
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
.PHONY: latexpdfja
latexpdfja:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through platex and dvipdfmx..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
.PHONY: text
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
.PHONY: man
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
.PHONY: texinfo
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
.PHONY: info
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
.PHONY: gettext
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
.PHONY: changes
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
.PHONY: linkcheck
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
.PHONY: doctest
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
.PHONY: coverage
coverage:
$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage
@echo "Testing of coverage in the sources finished, look at the " \
"results in $(BUILDDIR)/coverage/python.txt."
.PHONY: xml
xml:
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
@echo
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
.PHONY: pseudoxml
pseudoxml:
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
@echo
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."


@ -0,0 +1,18 @@
NeMo Core API
=============
Classes and Interfaces
----------------------
.. autoclass:: nemo.core.ModelPT
:show-inheritance:
:members: setup_training_data, setup_optimization, setup_validation_data, setup_test_data, register_artifact
Neural Types
------------
.. automodule:: nemo.core.neural_types.neural_type
:members:
:undoc-members:
:show-inheritance:
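As a rough illustration of how the ``ModelPT`` interface above is meant to be used, a subclass typically implements the listed ``setup_*`` methods. The sketch below is hypothetical: the exact method signatures and the internal attribute names are assumptions for illustration, not the documented API.
.. code-block:: python
from nemo.core import ModelPT

class ToySpeechModel(ModelPT):
    """Hypothetical subclass sketch; real models also implement forward and training steps."""

    def setup_training_data(self, train_data_config):
        # Build a dataloader from the (assumed dict-like) config
        self._train_dl = self._build_dataloader(train_data_config)

    def setup_validation_data(self, val_data_config):
        self._validation_dl = self._build_dataloader(val_data_config)

    def setup_test_data(self, test_data_config):
        self._test_dl = self._build_dataloader(test_data_config)

    def _build_dataloader(self, config):
        # Placeholder: a real model would return a torch DataLoader built from `config`
        raise NotImplementedError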


@ -0,0 +1,33 @@
NeMo ASR collection API
=======================
Model Classes
-------------
.. autoclass:: nemo.collections.asr.models.EncDecCTCModel
:show-inheritance:
:members: setup_training_data, setup_optimization, setup_validation_data, setup_test_data, register_artifact
.. autoclass:: nemo.collections.asr.models.EncDecClassificationModel
:show-inheritance:
:members: setup_training_data, setup_optimization, setup_validation_data, setup_test_data, register_artifact
.. autoclass:: nemo.collections.asr.models.EncDecSpeakerLabelModel
:show-inheritance:
:members: setup_training_data, setup_optimization, setup_validation_data, setup_test_data, register_artifact
Modules
-------
.. autoclass:: nemo.collections.asr.modules.ConvASREncoder
:show-inheritance:
:members:
.. autoclass:: nemo.collections.asr.modules.ConvASRDecoder
:show-inheritance:
:members:
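For orientation, loading and running one of the model classes above might look like the following sketch; ``from_pretrained``, ``restore_from``, and ``transcribe`` are assumed to be available on these classes, and the model name and file paths are placeholders.
.. code-block:: python
from nemo.collections.asr.models import EncDecCTCModel

# Load weights either by pretrained name or from a local .nemo file (both helpers assumed)
asr_model = EncDecCTCModel.from_pretrained(model_name="<pretrained_model_name>")
# asr_model = EncDecCTCModel.restore_from("<path/to/model.nemo>")

# Transcribe a couple of audio files (method and argument names assumed)
print(asr_model.transcribe(paths2audio_files=["sample1.wav", "sample2.wav"]))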


@ -0,0 +1,938 @@
@inproceedings{panayotov2015librispeech,
title={Librispeech: an ASR corpus based on public domain audio books},
author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on},
pages={5206--5210},
year={2015},
organization={IEEE}
}
@article{luong17,
author = {Minh{-}Thang Luong and Eugene Brevdo and Rui Zhao},
title = {Neural Machine Translation (seq2seq) Tutorial},
journal = {https://github.com/tensorflow/nmt},
year = {2017},
}
@INPROCEEDINGS{LaurentSeqWiseBN,
author={C. {Laurent} and G. {Pereyra} and P. {Brakel} and Y. {Zhang} and Y. {Bengio}},
booktitle={2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
title={Batch normalized recurrent neural networks},
year={2016},
volume={},
number={},
pages={2657-2661},
keywords={feedforward neural nets;learning (artificial intelligence);recurrent neural nets;speech recognition;batch normalized recurrent neural networks;RNN;sequential data;long-term dependency learning;convergence rate improvement;intermediate representation normalization;feedforward neural networks;speech recognition task;language modeling;training criterion;Training;Recurrent neural networks;Convergence;Speech recognition;Computer architecture;Speech;batch normalization;RNN;LSTM;optimization},
doi={10.1109/ICASSP.2016.7472159},
ISSN={2379-190X},
month={March},}
@article{graves2005,
author = {Alex Graves and Jurgen Schmidhuber},
title = {Framewise phoneme classification with bidirectional LSTM and other neural network architectures},
journal = {Neural Networks, vol. 18},
pages={602-610},
year = {2005},
}
@inproceedings{graves2006,
title={Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks},
author={Graves, Alex and Fern{\'a}ndez, Santiago and Gomez, Faustino and Schmidhuber, J{\"u}rgen},
booktitle={Proceedings of the 23rd international conference on Machine learning},
pages={369--376},
year={2006},
organization={ACM}
}
@article{li2019jasper,
title={Jasper: An End-to-End Convolutional Neural Acoustic Model},
author={Li, Jason and Lavrukhin, Vitaly and Ginsburg, Boris and Leary, Ryan and Kuchaiev, Oleksii and Cohen, Jonathan M and Nguyen, Huyen and Gadde, Ravi Teja},
journal={arXiv preprint arXiv:1904.03288},
year={2019}
}
@misc{ardila2019common,
title={Common Voice: A Massively-Multilingual Speech Corpus},
author={Rosana Ardila and Megan Branson and Kelly Davis and Michael Henretty and Michael Kohler and Josh Meyer and Reuben Morais and Lindsay Saunders and Francis M. Tyers and Gregor Weber},
year={2019},
eprint={1912.06670},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@article{graves2012,
title={Sequence Transduction with Recurrent Neural Networks},
author={Graves, Alex},
journal={arXiv preprint arXiv:1211.3711},
year={2012}
}
@article{graves2013,
title={Generating sequences with recurrent neural networks},
author={Graves, Alex},
journal={arXiv preprint arXiv:1308.0850},
year={2013}
}
@article{sergeev2018horovod,
title={Horovod: fast and easy distributed deep learning in TensorFlow},
author={Sergeev, Alexander and Del Balso, Mike},
journal={arXiv preprint arXiv:1802.05799},
year={2018}
}
@misc{NVVolta,
title = {NVIDIA TESLA V100 GPU ARCHITECTURE},
howpublished = {\url{http://images.nvidia.com/content/volta-architecture/pdf/volta-architecture-whitepaper.pdf}},
note = {Accessed: 2018-10-09}
}
@article{NVTuring,
title = {NVIDIA TURING GPU ARCHITECTURE},
howpublished = {\url{https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/technologies/turing-architecture/NVIDIA-Turing-Architecture-Whitepaper.pdf}},
author = {NVIDIA},
year = {2018},
note = {Accessed: 2018-10-09}
}
@misc{Rygaard2015,
title = {Using Synthesized Speech to Improve Speech Recognition for Low-Resource Languages},
author = {Luise Valentin Rygaard},
howpublished = {\url{https://parasol.tamu.edu/dreu2015/Rygaard/report.pdf}},
year = {2015},
}
@misc{OpenSeq2Seq,
title = {OpenSeq2Seq: extensible toolkit for distributed and mixed precision training of sequence-to-sequence models},
author = {Kuchaiev, Oleksii and Ginsburg, Boris and Gitman, Igor and Lavrukhin,Vitaly and Case, Carl and Micikevicius, Paulius},
howpublished = {\url{https://arxiv.org/abs/1805.10387}},
year = {2018},
}
@misc{MPGuide,
title = {Training with Mixed Precision},
howpublished = {\url{http://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/}},
note = {Accessed: 2018-04-06},
}
@misc{Mozilla,
title = {Mozilla: A Journey to less than 10\% Word Error Rate},
howpublished = {\url{https://hacks.mozilla.org/2017/11/a-journey-to-10-word-error-rate/}},
note = {Accessed: 2018-04-06},
}
@article{Waibel1989,
title={A time-delay neural network architecture for isolated word recognition},
author={Waibel, Alexander, and Hanazawa, Toshiyki and Hinton,Geoffrey and Shirano, Kiyohiro and Lang, Kevin },
journal={IEEE Trans. on Acoustics, Speech and Signal Processing},
year={1989}
}
@article{Lang1990,
title={A time-delay neural network architecture for isolated word recognition},
author={Lang, Kevin and Waibel, Alexander, and Hinton,Geoffrey },
journal={Neural Networks},
year={1990}
}
@book{Bengio1996,
Author = {Bengio, Y.},
Publisher = {International Thomson Computer Press},
Title = {Neural Networks for Speech and Sequence Recognition},
Year = {1996}
}
@article{Bengio1992,
title={Global optimization of a neural network-hidden Markov model hybrid},
author={Bengio, Y., and De Mori, R., and Flammia, G., and Kompe, R. },
journal={IEEE Transactions on Neural Networks, 3(2), 252--259},
year={1992}
}
@article{Bourlard1994,
title={Connectionist speech recognition: a hybrid approach},
author={Bourlard, H. A. and Morgan, N.},
journal={volume 247 Springer },
year={1994}
}
@article{srivastava14a,
author = {Nitish Srivastava, and Geoffrey Hinton, and Alex Krizhevsky, and Ilya Sutskever, and Ruslan Salakhutdinov},
title = {Dropout: A Simple Way to Prevent Neural Networks from Overfitting},
journal = {Journal of Machine Learning Research},
year = {2014},
volume = {15},
pages = {1929-1958},
url = {http://jmlr.org/papers/v15/srivastava14a.html}
}
@article{Hinton2012,
title={Deep Neural Networks for Acoustic Modeling in Speech Recognition},
author={ Hinton,Geoffrey and Deng, Li and Yu, Dong and Dahl,George and Mohamed,Abdel-rahman and Jaitly, Navdeep and Senior, Andrew and Vanhoucke, Vincent and Nguyen, Patrick and Kingsbury, Brian and Sainath, Tara},
journal={IEEE Signal Processing Magazine},
year={2012}
}
@article{Graves2014,
title={Towards End-to-End Speech Recognition with Recurrent Neural Networks},
author={Graves, Alex and Jaitly, Navdeep},
journal={International Conference on Machine Learning},
year={2014}
}
@article{Chorowski2014,
title={End-to-end Continuous Speech Recognition using Attention-based Recurrent NN: First Results},
author={ Chorowski, Jan, and Bahdanau, Dzmitry , and Cho, Kyunghyun , and Bengio, Yoshua },
journal={Neural Information Processing Systems: Workshop Deep Learning and Representation Learning Workshop },
year={2014}
}
@article{Sak2014,
title={Long short-term memory recurrent neural network architectures for large scale acoustic modeling},
author={Sak, Hasim and Senior, Andrew and Beaufays, Francoise },
journal={Interspeech 2014},
year={2014}
}
@article{Ko2015,
title={Audio Augmentation for Speech Recognition},
author={Tom, Ko and Vijayaditya, Peddinti and Daniel, Povey
and Sanjeev, Khudanpur },
journal={Interspeech 2015},
year={2015}
}
@article{Tjandra2017,
title={Listening while Speaking: Speech Chain by Deep Learning},
author={Andros, Tjandra and Sakriani, Sakti and Satoshi, Nakamura },
journal={ASRU 2017},
year={2017}
}
@article{Tjandra2018,
title={Machine Speech Chain with One-shot Speaker Adaptation},
author={Andros, Tjandra and Sakriani, Sakti and Satoshi, Nakamura },
journal={Interspeech 2018},
year={2018}
}
@article{bahdanau2014neural,
title={Neural machine translation by jointly learning to align and translate},
author={Bahdanau, Dzmitry and Cho, Kyunghyun and Bengio, Yoshua},
journal={arXiv preprint arXiv:1409.0473},
year={2014}
}
@article{cho2014learning,
title={Learning phrase representations using RNN encoder-decoder for statistical machine translation},
author={Cho, Kyunghyun and Van Merri{\"e}nboer, Bart and Gulcehre, Caglar and Bahdanau, Dzmitry and Bougares, Fethi and Schwenk, Holger and Bengio, Yoshua},
journal={arXiv preprint arXiv:1406.1078},
year={2014}
}
@article{rush2015neural,
title={A neural attention model for abstractive sentence summarization},
author={Rush, Alexander M and Chopra, Sumit and Weston, Jason},
journal={arXiv preprint arXiv:1509.00685},
year={2015}
}
@article{micikevicius2017mixed,
title={Mixed precision training},
author={Micikevicius, Paulius and Narang, Sharan and Alben, Jonah and Diamos, Gregory and Elsen, Erich and Garcia, David and Ginsburg, Boris and Houston, Michael and Kuchaev, Oleksii and Venkatesh, Ganesh and others},
journal={arXiv preprint arXiv:1710.03740},
year={2017}
}
@ARTICLE{Britz:2017,
author = {{Britz}, Denny and {Goldie}, Anna and {Luong}, Thang and {Le}, Quoc},
title = "{Massive Exploration of Neural Machine Translation Architectures}",
journal = {ArXiv e-prints arXiv:1703.03906},
archivePrefix = "arXiv",
eprinttype = {arxiv},
eprint = {1703.03906},
primaryClass = "cs.CL",
keywords = {Computer Science - Computation and Language},
year = 2017,
month = mar,
}
@inproceedings{vaswani2017attention,
title={Attention is all you need},
author={Vaswani, Ashish and Shazeer, Noam and Parmar, Niki and Uszkoreit, Jakob and Jones, Llion and Gomez, Aidan N and Kaiser, {\L}ukasz and Polosukhin, Illia},
booktitle={Advances in Neural Information Processing Systems},
pages={6000--6010},
year={2017}
}
@inproceedings{abadi2016tensorflow,
title={TensorFlow: A System for Large-Scale Machine Learning.},
author={Abadi, Mart{\'\i}n and Barham, Paul and Chen, Jianmin and Chen, Zhifeng and Davis, Andy and Dean, Jeffrey and Devin, Matthieu and Ghemawat, Sanjay and Irving, Geoffrey and Isard, Michael and others},
booktitle={OSDI},
volume={16},
pages={265--283},
year={2016}
}
@article{tensor2tensor,
author = {Ashish Vaswani and Samy Bengio and Eugene Brevdo and Francois Chollet and Aidan N. Gomez and Stephan Gouws and Llion Jones and \L{}ukasz Kaiser and Nal Kalchbrenner and Niki Parmar and Ryan Sepassi and
Noam Shazeer and Jakob Uszkoreit},
title = {Tensor2Tensor for Neural Machine Translation},
journal = {CoRR},
volume = {abs/1803.07416},
year = {2018},
url = {http://arxiv.org/abs/1803.07416},
}
@article{gehring2017convs2s,
author = {Gehring, Jonas, and Auli, Michael and Grangier, David and Yarats, Denis and Dauphin, Yann N},
title = "{Convolutional Sequence to Sequence Learning}",
journal = {ArXiv e-prints arXiv:1705.03122},
archivePrefix = "arXiv",
eprinttype = {arxiv},
eprint = {1705.03122},
primaryClass = "cs.CL",
keywords = {Computer Science - Computation and Language},
year = 2017,
month = May,
}
@inproceedings{chan2015,
title={Listen, attend and spell},
author={Chan, William and Jaitly, Navdeep and Le, Quoc V and Vinyals, Oriol},
booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on},
pages={5206--5210},
year={2016},
organization={IEEE}
}
@inproceedings{xu2015show,
title={Show, attend and tell: Neural image caption generation with visual attention},
author={Xu, Kelvin and Ba, Jimmy and Kiros, Ryan and Cho, Kyunghyun and Courville, Aaron and Salakhudinov, Ruslan and Zemel, Rich and Bengio, Yoshua},
booktitle={International Conference on Machine Learning},
pages={2048--2057},
year={2015}
}
@incollection{Sutskever2014,
title = {Sequence to Sequence Learning with Neural Networks},
author = {Sutskever, Ilya and Vinyals, Oriol and Le, Quoc V},
booktitle = {Advances in Neural Information Processing Systems 27},
editor = {Z. Ghahramani and M. Welling and C. Cortes and N. D. Lawrence and K. Q. Weinberger},
pages = {3104--3112},
year = {2014},
publisher = {Curran Associates, Inc.},
url = {http://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf}
}
@article{DeepSpeech2014,
title = {Deep Speech: Scaling up end-to-end speech recognition},
author = {Awni Y. Hannun and Carl Case and Jared Casper and Bryan Catanzaro and Greg Diamos and Erich Elsen and Ryan Prenger and Sanjeev Satheesh and Shubho Sengupta and Adam Coates and Andrew Y. Ng},
journal = {CoRR},
volume = {abs/1412.5567},
year = {2014},
url = {http://arxiv.org/abs/1412.5567},
archivePrefix = {arXiv},
eprint = {1412.5567},
timestamp = {Mon, 13 Aug 2018 16:48:07 +0200},
biburl = {https://dblp.org/rec/bib/journals/corr/HannunCCCDEPSSCN14},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
@inproceedings{DeepSpeech2,
author = {Amodei, Dario and Ananthanarayanan, Sundaram and Anubhai, Rishita and Bai, Jingliang and Battenberg, Eric and Case, Carl and Casper, Jared and Catanzaro, Bryan and Cheng, Qiang and Chen, Guoliang and Chen, Jie and Chen, Jingdong and Chen, Zhijie and Chrzanowski, Mike and Coates, Adam and Diamos, Greg and Ding, Ke and Du, Niandong and Elsen, Erich and Engel, Jesse and Fang, Weiwei and Fan, Linxi and Fougner, Christopher and Gao, Liang and Gong, Caixia and Hannun, Awni and Han, Tony and Johannes, Lappi Vaino and Jiang, Bing and Ju, Cai and Jun, Billy and LeGresley, Patrick and Lin, Libby and Liu, Junjie and Liu, Yang and Li, Weigao and Li, Xiangang and Ma, Dongpeng and Narang, Sharan and Ng, Andrew and Ozair, Sherjil and Peng, Yiping and Prenger, Ryan and Qian, Sheng and Quan, Zongfeng and Raiman, Jonathan and Rao, Vinay and Satheesh, Sanjeev and Seetapun, David and Sengupta, Shubho and Srinet, Kavya and Sriram, Anuroop and Tang, Haiyuan and Tang, Liliang and Wang, Chong and Wang, Jidong and Wang, Kaifu and Wang, Yi and Wang, Zhijian and Wang, Zhiqian and Wu, Shuang and Wei, Likai and Xiao, Bo and Xie, Wen and Xie, Yan and Yogatama, Dani and Yuan, Bin and Zhan, Jun and Zhu, Zhenyao},
title = {Deep Speech 2: End-to-end Speech Recognition in English and Mandarin},
booktitle = {Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48},
series = {ICML'16},
year = {2016},
location = {New York, NY, USA},
pages = {173--182},
numpages = {10},
url = {http://dl.acm.org/citation.cfm?id=3045390.3045410},
acmid = {3045410},
publisher = {JMLR.org},
}
@inproceedings{prabhavalkar2017comparison,
title={A comparison of sequence-to-sequence models for speech recognition},
author={Prabhavalkar, Rohit and Rao, Kanishka and Sainath, Tara N and Li, Bo and Johnson, Leif and Jaitly, Navdeep},
booktitle={Proc. Interspeech},
pages={939--943},
year={2017}
}
@article{chiu2017state,
title={State-of-the-art speech recognition with sequence-to-sequence models},
author={Chiu, Chung-Cheng and Sainath, Tara N and Wu, Yonghui and Prabhavalkar, Rohit and Nguyen, Patrick and Chen, Zhifeng and Kannan, Anjuli and Weiss, Ron J and Rao, Kanishka and Gonina, Katya and others},
journal={arXiv preprint arXiv:1712.01769},
year={2017}
}
@misc{NVMixed,
title = {{NVIDIA's Mixed-Precision Training - TensorFlow example}},
howpublished = {\url{https://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/#example_tensorflow}},
author={NVIDIA},
note = {Accessed: 2018-10-09},
year={2018}
}
@article{gehring2017,
title={Convolutional sequence to sequence learning},
author={Gehring, Jonas and Auli, Michael and Grangier, David and Yarats, Denis and Dauphin, Yann N},
journal={arXiv preprint arXiv:1705.03122},
year={2017}
}
@article{collobert2016,
title={Wav2letter: an end-to-end convnet-based speech recognition system},
author={Collobert, Ronan and Puhrsch, Christian and Synnaeve, Gabriel},
journal={arXiv preprint arXiv:1609.03193},
year={2016}
}
@inproceedings{Zhang2016,
author={Ying Zhang and Mohammad Pezeshki and Philémon Brakel and Saizheng Zhang and César Laurent and Yoshua Bengio and Aaron Courville},
title={Towards End-to-End Speech Recognition with Deep Convolutional Neural Networks},
year=2016,
booktitle={Interspeech 2016},
doi={10.21437/Interspeech.2016-1446},
url={http://dx.doi.org/10.21437/Interspeech.2016-1446},
pages={410--414}
}
@inproceedings{Zhang2017,
title={Very deep convolutional networks for end-to-end speech recognition},
author={Zhang, Yu, and Chan, William, and Jaitly, Navdeep},
booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on},
year={2017},
organization={IEEE}
}
@article{Wang2017,
title={Tacotron: Towards End-to-End Speech Synthesis},
author={ Wang, Yuxuan, and Skerry-Ryan, RJ, and Stanton, Daisy and Wu, Yonghui and Weiss, Ron, and Jaitly, Navdeep and Yang, Zongheng and Xiao, Ying and Chen,Zhifeng and Bengio, Samy and Le, Quoc and Agiomyrgiannakis, Yannis and Clark,Rob and Saurous, Rif A.},
journal={arXiv preprint arXiv:1703.10135},
year={2017}
}
@inproceedings{shen2018natural,
title={Natural TTS synthesis by conditioning wavenet on mel spectrogram predictions},
author={Shen, Jonathan and Pang, Ruoming and Weiss, Ron J and Schuster, Mike and Jaitly, Navdeep and Yang, Zongheng and Chen, Zhifeng and Zhang, Yu and Wang, Yuxuan and Skerrv-Ryan, Rj and others},
booktitle={2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={4779--4783},
year={2018},
organization={IEEE}
}
@article{griffin1984signal,
title={Signal estimation from modified short-time Fourier transform},
author={Griffin, Daniel and Lim, Jae},
journal={IEEE Transactions on Acoustics, Speech, and Signal Processing},
volume={32},
number={2},
pages={236--243},
year={1984},
publisher={IEEE}
}
@misc{ito2017lj,
title={The LJ speech dataset},
author={Ito, Keith and others},
year={2017}
}
@misc{mailabs,
title = {{The M-AILABS Speech Dataset}},
howpublished = {\url{http://www.m-ailabs.bayern/en/the-mailabs-speech-dataset/}},
author={M-AILABS},
note = {Accessed: 2018-10-09},
year={2018}
}
@article{merity2016pointer,
title={Pointer sentinel mixture models},
author={Merity, Stephen and Xiong, Caiming and Bradbury, James and Socher, Richard},
journal={arXiv preprint arXiv:1609.07843},
year={2016}
}
@inproceedings{socher2013recursive,
title={Recursive deep models for semantic compositionality over a sentiment treebank},
author={Socher, Richard and Perelygin, Alex and Wu, Jean and Chuang, Jason and Manning, Christopher D and Ng, Andrew and Potts, Christopher},
booktitle={Proceedings of the 2013 conference on empirical methods in natural language processing},
pages={1631--1642},
year={2013}
}
@InProceedings{maas-EtAl:2011:ACL-HLT2011,
author = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher},
title = {Learning Word Vectors for Sentiment Analysis},
booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies},
month = {June},
year = {2011},
address = {Portland, Oregon, USA},
publisher = {Association for Computational Linguistics},
pages = {142--150},
url = {http://www.aclweb.org/anthology/P11-1015}
}
@inproceedings{Povey2018SemiOrthogonalLM,
title={Semi-Orthogonal Low-Rank Matrix Factorization for Deep Neural Networks},
author={Daniel Povey and Gaofeng Cheng and Yiming Wang and Ke Li and Hainan Xu and Mahsa Yarmohammadi and Sanjeev Khudanpur},
booktitle={Interspeech},
year={2018}
}
@article{CAPIO2017,
author = {Kyu J. Han and Akshay Chandrashekaran and Jungsuk Kim and Ian R. Lane},
title = {The {CAPIO} 2017 Conversational Speech Recognition System},
journal = {CoRR},
volume = {abs/1801.00059},
year = {2018},
url = {http://arxiv.org/abs/1801.00059},
archivePrefix = {arXiv},
eprint = {1801.00059},
timestamp = {Mon, 13 Aug 2018 16:49:10 +0200},
biburl = {https://dblp.org/rec/bib/journals/corr/abs-1801-00059},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
@article{WaveNet,
author = {A{\"{a}}ron van den Oord and Sander Dieleman and Heiga Zen and Karen Simonyan and Oriol Vinyals and Alex Graves and Nal Kalchbrenner and Andrew W. Senior and Koray Kavukcuoglu},
title = {WaveNet: {A} Generative Model for Raw Audio},
journal = {CoRR},
volume = {abs/1609.03499},
year = {2016},
url = {http://arxiv.org/abs/1609.03499},
archivePrefix = {arXiv},
eprint = {1609.03499},
timestamp = {Mon, 13 Aug 2018 16:49:15 +0200},
biburl = {https://dblp.org/rec/bib/journals/corr/OordDZSVGKSK16},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
@article{FacebookGERENGBackTranslation,
author = {Rico Sennrich and Barry Haddow and Alexandra Birch},
title = {Improving Neural Machine Translation Models with Monolingual Data},
journal = {CoRR},
volume = {abs/1511.06709},
year = {2015},
url = {http://arxiv.org/abs/1511.06709},
archivePrefix = {arXiv},
eprint = {1511.06709},
timestamp = {Mon, 13 Aug 2018 16:47:05 +0200},
biburl = {https://dblp.org/rec/bib/journals/corr/SennrichHB15a},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
@article{GlobalStyleTokens,
author = {Yuxuan Wang and Daisy Stanton and Yu Zhang and R. J. Skerry{-}Ryan and Eric Battenberg and Joel Shor and Ying Xiao and Fei Ren and Ye Jia and Rif A. Saurous},
title = {Style Tokens: Unsupervised Style Modeling, Control and Transfer in End-to-End Speech Synthesis},
journal = {CoRR},
volume = {abs/1803.09017},
year = {2018},
url = {http://arxiv.org/abs/1803.09017},
archivePrefix = {arXiv},
eprint = {1803.09017},
timestamp = {Mon, 13 Aug 2018 16:46:53 +0200},
biburl = {https://dblp.org/rec/bib/journals/corr/abs-1803-09017},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
@article{IoffeS15BatchNorm,
author = {Sergey Ioffe and Christian Szegedy},
title = {Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift},
journal = {CoRR},
volume = {abs/1502.03167},
year = {2015},
url = {http://arxiv.org/abs/1502.03167},
archivePrefix = {arXiv},
eprint = {1502.03167},
timestamp = {Mon, 13 Aug 2018 16:47:06 +0200},
biburl = {https://dblp.org/rec/bib/journals/corr/IoffeS15},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
@article{kingma,
author = {Diederik P. Kingma and
Jimmy Ba},
title = {Adam: {A} Method for Stochastic Optimization},
journal = {CoRR},
volume = {abs/1412.6980},
year = {2014},
url = {http://arxiv.org/abs/1412.6980},
archivePrefix = {arXiv},
eprint = {1412.6980},
timestamp = {Mon, 13 Aug 2018 01:00:00 +0200},
biburl = {https://dblp.org/rec/bib/journals/corr/KingmaB14},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
@incollection{Salimans2016WeightNorm,
title = {Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks},
author = {Salimans, Tim and Kingma, Durk P},
booktitle = {Advances in Neural Information Processing Systems 29},
editor = {D. D. Lee and M. Sugiyama and U. V. Luxburg and I. Guyon and R. Garnett},
pages = {901--909},
year = {2016},
publisher = {Curran Associates, Inc.},
url = {http://papers.nips.cc/paper/6114-weight-normalization-a-simple-reparameterization-to-accelerate-training-of-deep-neural-networks.pdf}
}
@article{wu2016google,
title={Google's neural machine translation system: Bridging the gap between human and machine translation},
author={Wu, Yonghui and Schuster, Mike and Chen, Zhifeng and Le, Quoc V and Norouzi, Mohammad and Macherey, Zolfgang and Krikun, Maxim and Cao, Yuan and Gao, Qin and Macherey, Klaus and others},
journal={arXiv preprint arXiv:1609.08144},
year={2016}
}
@inproceedings{opennmt,
author = {Guillaume Klein and Yoon Kim and Yuntian Deng and Jean Senellart and Alexander M. Rush},
title = {OpenNMT: Open-Source Toolkit for Neural Machine Translation},
booktitle = {Proc. ACL},
year = {2017},
url = {https://doi.org/10.18653/v1/P17-4012},
doi = {10.18653/v1/P17-4012}
}
@article{paszke2017automatic,
title={Automatic differentiation in PyTorch},
author={Paszke, Adam and Gross, Sam and Chintala, Soumith and Chanan, Gregory and Yang, Edward and DeVito, Zachary and Lin, Zeming and Desmaison, Alban and Antiga, Luca and Lerer, Adam},
year={2017}
}
@article{yu2014introduction,
title={An introduction to computational networks and the computational network toolkit},
author={Yu, Dong and Eversole, Adam and Seltzer, Mike and Yao, Kaisheng and Huang, Zhiheng and Guenter, Brian and Kuchaiev, Oleksii and Zhang, Yu and Seide, Frank and Wang, Huaming and others},
journal={Microsoft Technical Report MSR-TR-2014--112},
year={2014}
}
@article{nvidia2017v100,
title={V100 GPU architecture. The world's most advanced data center GPU. Version WP-08608-001\_v1. 1},
author={NVIDIA, Tesla},
journal={NVIDIA. Aug},
pages={108},
year={2017}
}
@article{post2018call,
title={A call for clarity in reporting bleu scores},
author={Post, Matt},
journal={arXiv preprint arXiv:1804.08771},
year={2018}
}
@article{Ba2016LayerNorm,
author = {Jimmy Lei Ba and Jamie Ryan Kiros and Geoffrey E Hinton},
title = {Layer normalization},
journal = {CoRR},
volume = {abs/1607.06450},
year = {2016},
url = {http://arxiv.org/abs/1607.06450},
archivePrefix = {arXiv},
}
@inproceedings{Dauphin2017GLU,
author = {Dauphin, Yann N. and Fan, Angela and Auli, Michael and Grangier, David},
title = {Language Modeling with Gated Convolutional Networks},
booktitle = {Proceedings of the 34th International Conference on Machine Learning - Volume 70},
series = {ICML'17},
year = {2017},
location = {Sydney, NSW, Australia},
pages = {933--941},
numpages = {9},
url = {http://dl.acm.org/citation.cfm?id=3305381.3305478},
acmid = {3305478},
publisher = {JMLR.org},
}
@incollection{Oord2016PixelCNN,
title = {Conditional Image Generation with PixelCNN Decoders},
author = {van den Oord, Aaron and Kalchbrenner, Nal and Espeholt, Lasse and kavukcuoglu, koray and Vinyals, Oriol and Graves, Alex},
booktitle = {Advances in Neural Information Processing Systems 29},
editor = {D. D. Lee and M. Sugiyama and U. V. Luxburg and I. Guyon and R. Garnett},
pages = {4790--4798},
year = {2016},
publisher = {Curran Associates, Inc.},
url = {http://papers.nips.cc/paper/6527-conditional-image-generation-with-pixelcnn-decoders.pdf}
}
@article{he2015,
title={Deep residual learning for image recognition},
author={K. He, and X. Zhang, and S. Ren, and J. Sun},
journal={arXiv preprint arXiv:1512.03385},
year={2015}
}
@article{huang2016,
title={Densely Connected Convolutional Networks},
author={Gao Huang, and Zhuang Liu, and Laurens van der Maaten, and Kilian Q. Weinberger},
journal={arXiv preprint arXiv:1608.06993},
year={2016}
}
@inproceedings{heafield2011kenlm,
title={KenLM: Faster and smaller language model queries},
author={Heafield, Kenneth},
booktitle={Proceedings of the sixth workshop on statistical machine translation},
pages={187--197},
year={2011},
organization={Association for Computational Linguistics}
}
@article{dai2018transformer,
title={Transformer-XL: Language Modeling with Longer-Term Dependency},
author={Dai, Zihang and Yang, Zhilin and Yang, Yiming and Cohen, William W and Carbonell, Jaime and Le, Quoc V and Salakhutdinov, Ruslan},
year={2018},
journal = {CoRR},
volume = {abs/1901.02860},
url = {http://arxiv.org/abs/1901.02860},
archivePrefix = {arXiv},
eprint = {1901.02860},
timestamp = {Fri, 01 Feb 2019 13:39:59 +0100},
biburl = {https://dblp.org/rec/bib/journals/corr/abs-1901-02860},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
@inproceedings{Saon+2016,
author={George Saon and Tom Sercu and Steven Rennie and Hong-Kwang J. Kuo},
title={The IBM 2016 English Conversational Telephone Speech Recognition System},
year=2016,
booktitle={Interspeech 2016},
doi={10.21437/Interspeech.2016-1460},
url={http://dx.doi.org/10.21437/Interspeech.2016-1460},
pages={7--11}
}
@INPROCEEDINGS{Sercu-2016,
author={T. {Sercu} and C. {Puhrsch} and B. {Kingsbury} and Y. {LeCun}},
booktitle={2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
title={Very deep multilingual convolutional neural networks for LVCSR},
year={2016},
volume={},
number={},
pages={4955-4959},
keywords={natural language processing;neural nets;speech recognition;very deep multilingual convolutional neural networks;LVCSR;CNN;large vocabulary continuous speech recognition systems;word error rate;Training;Context;Hidden Markov models;Neural networks;Computer architecture;Kernel;Training data;Convolutional Networks;Multilingual;Acoustic Modeling;Speech Recognition;Neural Networks},
doi={10.1109/ICASSP.2016.7472620},
ISSN={2379-190X},
month={March},}
@inproceedings{Sercu+2016,
author={Tom Sercu and Vaibhava Goel},
title={Advances in Very Deep Convolutional Neural Networks for LVCSR},
year=2016,
booktitle={Interspeech 2016},
doi={10.21437/Interspeech.2016-1033},
url={http://dx.doi.org/10.21437/Interspeech.2016-1033},
pages={3429--3433}
}
@INPROCEEDINGS{Xiong-2018,
author={W. {Xiong} and L. {Wu} and F. {Alleva} and J. {Droppo} and X. {Huang} and A. {Stolcke}},
booktitle={2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
title={The Microsoft 2017 Conversational Speech Recognition System},
year={2018},
volume={},
number={},
pages={5934-5938},
keywords={convolution;feedforward neural nets;natural language processing;speaker recognition;speech processing;language model rescoring step;senone level;switchboard domains;character-based LSTM language models;NIST 2000 switchboard test set;frame level;word-level voting;acoustic model posteriors;dialog session aware LSTM language models;CNN-BLSTM acoustic model;Microsoft 2017 conversational speech recognition system;Acoustics;Error analysis;Training;Speech recognition;Switches;Computational modeling;Context modeling;Conversational speech recognition;CNN;LACE;BLSTM;LSTM-LM;system combination;human parity},
doi={10.1109/ICASSP.2018.8461870},
ISSN={2379-190X},
month={April},}
@inproceedings{zeyer2018improved,
author={Albert Zeyer and Kazuki Irie and Ralf Schlüter and Hermann Ney},
title={Improved Training of End-to-end Attention Models for Speech Recognition},
year=2018,
booktitle={Proc. Interspeech 2018},
pages={7--11},
doi={10.21437/Interspeech.2018-1616},
url={http://dx.doi.org/10.21437/Interspeech.2018-1616}
}
@article{Wav2LetterV2,
author = {Vitaliy Liptchinsky and
Gabriel Synnaeve and
Ronan Collobert},
title = {Letter-Based Speech Recognition with Gated ConvNets},
journal = {CoRR},
volume = {abs/1712.09444},
year = {2017},
url = {http://arxiv.org/abs/1712.09444},
archivePrefix = {arXiv},
eprint = {1712.09444},
timestamp = {Mon, 13 Aug 2018 16:46:33 +0200},
biburl = {https://dblp.org/rec/bib/journals/corr/abs-1712-09444},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
@article{zeghidour2018,
author = {Neil Zeghidour and
Qiantong Xu and
Vitaliy Liptchinsky and
Nicolas Usunier and
Gabriel Synnaeve and
Ronan Collobert},
title = {Fully Convolutional Speech Recognition},
journal = {CoRR},
volume = {abs/1812.06864},
year = {2018},
url = {http://arxiv.org/abs/1812.06864},
archivePrefix = {arXiv},
eprint = {1812.06864},
timestamp = {Tue, 01 Jan 2019 15:01:25 +0100},
biburl = {https://dblp.org/rec/bib/journals/corr/abs-1812-06864},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
@inproceedings{Hadian2018,
author={Hossein Hadian and Hossein Sameti and Daniel Povey and Sanjeev Khudanpur},
title={End-to-end Speech Recognition Using Lattice-free MMI},
year=2018,
booktitle={Proc. Interspeech 2018},
pages={12--16},
doi={10.21437/Interspeech.2018-1423},
url={http://dx.doi.org/10.21437/Interspeech.2018-1423}
}
@inproceedings{Tang2018,
author={Jian Tang and Yan Song and Lirong Dai and Ian McLoughlin},
title={Acoustic Modeling with Densely Connected Residual Network for Multichannel Speech Recognition},
year=2018,
booktitle={Proc. Interspeech 2018},
pages={1783--1787},
doi={10.21437/Interspeech.2018-1089},
url={http://dx.doi.org/10.21437/Interspeech.2018-1089}
}
@article{Kurata2017LanguageMW,
title={Language modeling with highway LSTM},
author={Gakuto Kurata and Bhuvana Ramabhadran and George Saon and Abhinav Sethy},
journal={2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)},
year={2017},
pages={244-251}
}
@inproceedings{Saon2017,
author={George Saon and Gakuto Kurata and Tom Sercu and Kartik Audhkhasi and Samuel Thomas and Dimitrios Dimitriadis and Xiaodong Cui and Bhuvana Ramabhadran and Michael Picheny and Lynn-Li Lim and Bergul Roomi and Phil Hall},
title={English Conversational Telephone Speech Recognition by Humans and Machines},
year=2017,
booktitle={Proc. Interspeech 2017},
pages={132--136},
doi={10.21437/Interspeech.2017-405},
url={http://dx.doi.org/10.21437/Interspeech.2017-405}
}
@inproceedings{Povey+2016,
author={Daniel Povey and Vijayaditya Peddinti and Daniel Galvez and Pegah Ghahremani and Vimal Manohar and Xingyu Na and Yiming Wang and Sanjeev Khudanpur},
title={Purely Sequence-Trained Neural Networks for ASR Based on Lattice-Free MMI},
year=2016,
booktitle={Interspeech 2016},
doi={10.21437/Interspeech.2016-595},
url={http://dx.doi.org/10.21437/Interspeech.2016-595},
pages={2751--2755}
}
@article{Yang2018,
author = {Xuerui Yang and
Jiwei Li and
Xi Zhou},
title = {A novel pyramidal-FSMN architecture with lattice-free {MMI} for speech
recognition},
journal = {CoRR},
volume = {abs/1810.11352},
year = {2018},
url = {http://arxiv.org/abs/1810.11352},
archivePrefix = {arXiv},
eprint = {1810.11352},
timestamp = {Wed, 31 Oct 2018 14:24:29 +0100},
biburl = {https://dblp.org/rec/bib/journals/corr/abs-1810-11352},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
@article{liptchinsky2017based,
title={Letter-Based Speech Recognition with Gated ConvNets},
author={Liptchinsky, Vitaliy and Synnaeve, Gabriel and Collobert, Ronan},
journal={arXiv preprint arXiv:1712.09444},
year={2017}
}
@inproceedings{Weng2018,
author={Chao Weng and Jia Cui and Guangsen Wang and Jun Wang and Chengzhu Yu and Dan Su and Dong Yu},
title={Improving Attention Based Sequence-to-Sequence Models for End-to-End English Conversational Speech Recognition},
year=2018,
booktitle={Proc. Interspeech 2018},
pages={761--765},
doi={10.21437/Interspeech.2018-1030},
url={http://dx.doi.org/10.21437/Interspeech.2018-1030}
}
@INPROCEEDINGS{Battenberg2017,
author={E. {Battenberg} and J. {Chen} and R. {Child} and A. {Coates} and Y. G. Y. {Li} and H. {Liu} and S. {Satheesh} and A. {Sriram} and Z. {Zhu}},
booktitle={2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)},
title={Exploring neural transducers for end-to-end speech recognition},
year={2017},
volume={},
number={},
pages={206-213},
keywords={recurrent neural nets;speech recognition;Hub500 benchmark;CTC models;speech recognition pipeline;RNN-Transducer models;language model;Seq2Seq models;end-to-end speech recognition;neural transducers;Decoding;Hidden Markov models;Transducers;Task analysis;Speech;Mathematical model;Neural networks},
doi={10.1109/ASRU.2017.8268937},
ISSN={},
month={Dec},
}
@inproceedings{
loshchilov2018,
title={Decoupled Weight Decay Regularization},
author={Ilya Loshchilov and Frank Hutter},
booktitle={International Conference on Learning Representations},
year={2019},
url={https://openreview.net/forum?id=Bkg6RiCqY7},
}
@article{zhang2017ndadam,
author = {Zijun Zhang and Lin Ma and Zongpeng Li and Chuan Wu},
title = {Normalized Direction-preserving Adam},
journal = {arXiv e-prints arXiv:1709.04546},
year = {2017},
}
@article{park2019,
author = {{Park}, Daniel S. and {Chan}, William and {Zhang}, Yu and
{Chiu}, Chung-Cheng and {Zoph}, Barret and {Cubuk}, Ekin D. and
{Le}, Quoc V.},
title = "{SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition}",
journal = {arXiv e-prints},
year = "2019",
eid = {arXiv:1904.08779},
eprint = {1904.08779},
}
@article{novograd2019,
author = {{Ginsburg}, Boris and {Castonguay}, Patrice and {Hrinchuk}, Oleksii and
{Kuchaiev}, Oleksii and {Lavrukhin}, Vitaly and {Leary}, Ryan and
{Li}, Jason and {Nguyen}, Huyen and {Cohen}, Jonathan M.},
title = "{Stochastic Gradient Methods with Layer-wise Adaptive Moments for Training of Deep Networks}",
journal = {arXiv e-prints},
year = "2019",
eid = {arXiv:1905.11286},
eprint = {1905.11286},
}
@article{kriman2019quartznet,
title={Quartznet: {Deep} automatic speech recognition with 1d time-channel separable convolutions},
author={Kriman, Samuel and Beliaev, Stanislav and Ginsburg, Boris and Huang, Jocelyn and Kuchaiev, Oleksii and Lavrukhin, Vitaly and Leary, Ryan and Li, Jason and Zhang, Yang},
journal={arXiv preprint arXiv:1910.10261},
year={2019}
}
@misc{itu1988g711,
title={{ITU-T} {G.711} - {Pulse} code modulation ({PCM}) of voice frequencies},
author={ITU-T Geneva Switzerland},
year={1988},
}

Binary file not shown (new image added, 53 KiB).


@ -0,0 +1,154 @@
Datasets
========
You can get started with the following datasets.
.. _LibriSpeech_dataset:
LibriSpeech
-----------
Run these scripts to download the LibriSpeech data and convert it into the format expected by `nemo_asr`.
You should have at least 250GB of free space.
.. code-block:: bash
# install sox
sudo apt-get install sox
mkdir data
python get_librispeech_data.py --data_root=data --data_set=ALL
After this, your `data` folder should contain wav files and `.json` manifests for the NeMo ASR data layer.
Each line of a manifest is one training example: `audio_filepath` contains the path to the wav file, `duration` is its duration in seconds, and `text` is its transcript:
.. code-block:: json
{"audio_filepath": "<absolute_path_to>/1355-39947-0000.wav", "duration": 11.3, "text": "psychotherapy and the community both the physician and the patient find their place in the community the life interests of which are superior to the interests of the individual"}
{"audio_filepath": "<absolute_path_to>/1355-39947-0001.wav", "duration": 15.905, "text": "it is an unavoidable question how far from the higher point of view of the social mind the psychotherapeutic efforts should be encouraged or suppressed are there any conditions which suggest suspicion of or direct opposition to such curative work"}
Fisher English Training Speech
------------------------------
Run these scripts to convert the Fisher English Training Speech data into a format expected by the `nemo_asr` collection.
In brief, the following scripts convert the .sph files to .wav, slice those files into smaller audio samples, match the smaller slices with their corresponding transcripts, and split the resulting audio segments into train, validation, and test sets (with one manifest each).
.. note::
You will need at least 106GB of space to run the .wav conversion, and an additional 105GB for the slicing and matching.
You will need to have sph2pipe installed in order to run the .wav conversion.
**Instructions**
These scripts assume that you already have the Fisher dataset from the Linguistic Data Consortium, with a directory structure that looks something like this:
.. code-block:: bash
FisherEnglishTrainingSpeech/
├── LDC2004S13-Part1
│   ├── fe_03_p1_transcripts
│   ├── fisher_eng_tr_sp_d1
│   ├── fisher_eng_tr_sp_d2
│   ├── fisher_eng_tr_sp_d3
│   └── ...
└── LDC2005S13-Part2
├── fe_03_p2_transcripts
├── fe_03_p2_sph1
├── fe_03_p2_sph2
├── fe_03_p2_sph3
└── ...
The transcripts that will be used are located in `fe_03_p<1,2>_transcripts/data/trans`, and the audio files (.sph) are located in the remaining directories in an `audio` subdirectory.
First, convert the audio files from .sph to .wav by running:
.. code-block:: bash
cd <nemo_root>/scripts
python fisher_audio_to_wav.py \
--data_root=<fisher_root> --dest_root=<conversion_target_dir>
This will place the unsliced .wav files in `<conversion_target_dir>/LDC200[4,5]S13-Part[1,2]/audio-wav/`.
It will take several minutes to run.
Next, process the transcripts and slice the audio data:
.. code-block:: bash
python process_fisher_data.py \
--audio_root=<conversion_target_dir> --transcript_root=<fisher_root> \
--dest_root=<processing_target_dir> \
--remove_noises
This script will split the full dataset into train, validation, and test sets, and place the audio slices in the corresponding folders in the destination directory.
One manifest will be written out per set, which includes each slice's transcript, duration, and path.
This will likely take around 20 minutes to run.
Once finished, you may delete the 10-minute-long .wav files if you wish.
2000 HUB5 English Evaluation Speech
-----------------------------------
Run the following script to convert the HUB5 data into a format expected by the `nemo_asr` collection.
Similar to the Fisher dataset processing scripts, this script converts the .sph files to .wav, slices the audio files and transcripts into utterances, and combines them into segments of a minimum length (10 seconds by default).
The resulting segments are all written out to an audio directory, and the corresponding transcripts are written to a manifest JSON file.
.. note::
You will need 5GB of free space to run this script.
You will also need to have sph2pipe installed.
This script assumes you already have the 2000 HUB5 dataset from the Linguistic Data Consortium.
Run the following to process the 2000 HUB5 English Evaluation Speech samples:
.. code-block:: bash
python process_hub5_data.py \
--data_root=<path_to_HUB5_data> \
--dest_root=<target_dir>
You may optionally include `--min_slice_duration=<num_seconds>` if you would like to change the minimum audio segment duration.
AN4 Dataset
-----------
This is a small dataset recorded and distributed by Carnegie Mellon University, and consists of recordings of people spelling out addresses, names, etc.
Information about this dataset can be found on the `official CMU site <http://www.speech.cs.cmu.edu/databases/an4/>`_.
Please download and extract the dataset (which is labeled "NIST's Sphere audio (.sph) format (64M)" on the site linked above): http://www.speech.cs.cmu.edu/databases/an4/an4_sphere.tar.gz.
Running the following script will convert the .sph files to .wav using sox, and build one training and one test manifest.
.. code-block:: bash
python process_an4_data.py --data_root=<path_to_extracted_data>
Once this script finishes, you should have a `train_manifest.json` and `test_manifest.json` in the `<data_root>/an4/` directory.
Aishell1
--------
Run these scripts to download the Aishell1 data and convert it into the format expected by `nemo_asr`.
.. code-block:: bash
# install sox
sudo apt-get install sox
mkdir data
python get_aishell_data.py --data_root=data
After this, your `data` folder should contain a `data_aishell` folder, which contains the wav and transcript folders, the related `.json` files, and `vocab.txt`.
Aishell2
--------
Run the following script to process the AIShell-2 dataset and generate files in the format supported by `nemo_asr`. Set the AIShell-2 data folder with `--audio_folder` and the destination for the processed files with `--dest_folder`.
.. code-block:: bash
python process_aishell2_data.py --audio_folder=<data directory> --dest_folder=<destination directory>
Then, you should have `train.json`, `dev.json`, `test.json`, and `vocab.txt` in `dest_folder`.

View file

@ -0,0 +1,10 @@
Automatic Speech Recognition (ASR)
==================================
.. toctree::
:maxdepth: 8
datasets
models
api

Binary image file not shown (new file, 25 KiB).

Binary image file not shown (new file, 273 KiB).

View file

@ -0,0 +1,41 @@
Models
======
Currently, NeMo's ASR collection supports the following models:
.. _Jasper_model:
Jasper
------
Jasper ("Just Another SPeech Recognizer") :cite:`asr-models-li2019jasper` is a deep time delay neural network (TDNN) comprising of blocks of 1D-convolutional layers.
Jasper family of models are denoted as Jasper_[BxR] where B is the number of blocks, and R - the number of convolutional sub-blocks within a block. Each sub-block contains a 1-D convolution, batch normalization, ReLU, and dropout:
.. image:: jasper_vertical.png
:align: center
:alt: jasper model
QuartzNet
---------
QuartzNet :cite:`asr-models-kriman2019quartznet` is a version of the Jasper :cite:`asr-models-li2019jasper` model with separable convolutions and larger filters. It can achieve performance
similar to Jasper's but with an order of magnitude fewer parameters.
Similar to Jasper, the QuartzNet family of models is denoted as QuartzNet_[BxR], where B is the number of blocks and R is the number of convolutional sub-blocks within a block. Each sub-block contains a 1-D *separable* convolution, batch normalization, ReLU, and dropout:
.. image:: quartz_vertical.png
:align: center
:alt: quartznet model
Jasper and QuartzNet models can be instantiated using the :class:`EncDecCTCModel<nemo.collections.asr.models.EncDecCTCModel>` class.
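As a rough sketch (the YAML file name and the ``model`` sub-key below are assumptions made for the example, not part of the documented API), such a model can be created from a configuration object and, optionally, a PyTorch Lightning trainer:
.. code-block:: python

    from omegaconf import OmegaConf
    from nemo.collections.asr.models import EncDecCTCModel

    # Load a model configuration from YAML (the file name is illustrative).
    cfg = OmegaConf.load("quartznet_15x5.yaml")

    # ModelPT-based models take a DictConfig and an optional PyTorch Lightning Trainer.
    asr_model = EncDecCTCModel(cfg=cfg.model)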
References
----------
.. bibliography:: asr_all.bib
:style: plain
:labelprefix: ASR-MODELS
:keyprefix: asr-models-

Binary image file not shown (new file, 257 KiB).

239
docs/sources/source/conf.py Normal file
View file

@ -0,0 +1,239 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
#
# nemo documentation build configuration file, created by
# sphinx-quickstart on Sat Nov 17 15:55:54 2018.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import os
import sys
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath("../../.."))
sys.path.insert(0, os.path.abspath(os.path.join("../../..", "nemo")))
autodoc_mock_imports = [
'torch',
'torch.nn',
'torch.utils',
'torch.optim',
'torch.utils.data',
'torch.utils.data.sampler',
'torchvision',
'torchvision.models',
'torchtext',
'torch_stft',
'h5py',
'kaldi_io',
'transformers',
'transformers.tokenization_bert',
'apex',
'ruamel',
'frozendict',
'inflect',
'unidecode',
'librosa',
'soundfile',
'sentencepiece',
'youtokentome',
'megatron-lm',
'numpy',
'dateutil',
'wget',
'scipy',
'pandas',
'matplotlib',
'sklearn',
'braceexpand',
'webdataset',
'tqdm',
'numba',
]
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
"sphinx.ext.autodoc",
"sphinx.ext.todo",
"sphinx.ext.coverage",
"sphinx.ext.mathjax",
"sphinx.ext.ifconfig",
"sphinx.ext.viewcode",
"sphinx.ext.napoleon",
"sphinx.ext.githubpages",
"sphinxcontrib.bibtex",
]
# Set default flags for all classes.
autodoc_default_flags = [
'members',
'undoc-members',
'show-inheritance',
]
locale_dirs = ['locale/'] # path is example but recommended.
gettext_compact = False # optional.
# Add any paths that contain templates here, relative to this directory.
templates_path = ["_templates"]
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = ".rst"
# The master toctree document.
master_doc = "index"
# General information about the project.
project = "nemo"
copyright = "2018-2020, NVIDIA"
author = "NVIDIA"
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
from package_info import __version__
# The short X.Y version.
# version = "0.10.0"
version = __version__
# The full version, including alpha/beta/rc tags.
# release = "0.9.0"
release = __version__
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = []
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = "sphinx"
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = True
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
# html_theme = 'alabaster'
html_theme = "sphinx_rtd_theme"
html_theme_options = {
"canonical_url": "",
"analytics_id": "",
"logo_only": False,
"display_version": True,
"prev_next_buttons_location": "bottom",
"style_external_links": False,
"vcs_pageview_mode": "",
# Toc options
"collapse_navigation": True,
"sticky_navigation": True,
"navigation_depth": 4,
"includehidden": True,
"titles_only": False,
}
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = []
# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
#
# This is required for the alabaster theme
# refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars
html_sidebars = {"**": ["relations.html", "searchbox.html",]} # needs 'show_related': True theme option to display
html_theme_options = {
"canonical_url": "",
"analytics_id": "",
"style_external_links": False,
# Toc options
"collapse_navigation": True,
"sticky_navigation": True,
"navigation_depth": 4,
"includehidden": True,
"titles_only": False,
}
# -- Options for HTMLHelp output ------------------------------------------
# Output file base name for HTML help builder.
htmlhelp_basename = "nemodoc"
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [(master_doc, "nemo.tex", "nemo Documentation", "AI App Design team", "manual",)]
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [(master_doc, "nemo", "nemo Documentation", [author], 1)]
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, "nemo", "nemo Documentation", author, "nemo", "One line description of project.", "Miscellaneous",)
]
autoclass_content = 'both'

View file

@ -0,0 +1,28 @@
Core Concepts
=============
Neural Module
~~~~~~~~~~~~~
Neural Modules are building blocks for Models.
They accept (typed) inputs and return (typed) outputs. *All Neural Modules inherit from ``torch.nn.Module`` and are, therefore, compatible with the PyTorch ecosystem.* There are three types of Neural Modules (a minimal sketch follows this list):
* Regular modules
* Dataset/IterableDataset
* Losses
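For illustration only, a "regular" module is just an ordinary PyTorch module; NeMo's actual base classes add typed inputs and outputs on top of this, and the class and layer choices below are made up for the example:
.. code-block:: python

    import torch

    class ToyEncoder(torch.nn.Module):
        # A plain torch.nn.Module; NeMo neural modules build on this foundation.
        def __init__(self, in_features: int, out_features: int):
            super().__init__()
            self.linear = torch.nn.Linear(in_features, out_features)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return torch.relu(self.linear(x))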
Model
~~~~~
A NeMo Model is an entity that contains all of the information necessary to invoke training or fine-tuning.
It is based on PyTorch Lightning's LightningModule and, as such, contains information on the following (a minimal configuration sketch follows this list):
* Neural Network architecture, including necessary pre- and post- processing
* How data is handled for training/validation/testing
* Optimization, learning rate schedules, scaling, etc.
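For illustration only, such a configuration can be assembled with OmegaConf; the field names inside ``train_ds``/``validation_ds`` (e.g. ``manifest_filepath``) and the values below are placeholders and will differ between models:
.. code-block:: python

    from omegaconf import OmegaConf

    # Minimal sketch of a model config; real NeMo configs carry many more fields.
    cfg = OmegaConf.create(
        {
            "train_ds": {"manifest_filepath": "train_manifest.json", "batch_size": 32},
            "validation_ds": {"manifest_filepath": "test_manifest.json", "batch_size": 32},
            "optim": {"optimizer": "adam", "lr": 0.001},
        }
    )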
Neural Types
~~~~~~~~~~~~
Neural Types perform semantic checks on the inputs and outputs of modules and models. They contain information about the following (a short example follows this list):
* Semantics of what is stored in the tensors, for example: logits, logprobs, audio signal, embeddings, etc.
* Axis layout, semantics, and (optionally) dimensionality. For example: [Batch, Time, Channel]
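As a short, illustrative sketch (the element type names are examples and are assumed to be importable from ``nemo.core.neural_types``), such types could be declared as follows:
.. code-block:: python

    from nemo.core.neural_types import AudioSignal, LogprobsType, NeuralType

    # [Batch, Time] tensor carrying raw audio samples.
    audio_type = NeuralType(axes=('B', 'T'), elements_type=AudioSignal())

    # [Batch, Time, Dimension] tensor carrying log-probabilities.
    logprobs_type = NeuralType(axes=('B', 'T', 'D'), elements_type=LogprobsType())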

View file

@ -0,0 +1,42 @@
NVIDIA NeMo Developer Guide
===========================
.. toctree::
:hidden:
:maxdepth: 8
Introduction <self>
core
asr/intro
nlp/intro
tts/intro
api-docs/nemo
NeMo is a library for easily training, building, and manipulating AI models.
NeMo's current focus is providing a great experience for Conversational AI.
NeMo models can be trained on multiple GPUs and multiple nodes, with or without mixed precision.
Many models in NeMo come with high-quality pre-trained checkpoints.
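Because NeMo models are based on PyTorch Lightning's LightningModule, multi-GPU and mixed precision training is typically driven through the Lightning ``Trainer``. The snippet below is only a sketch with placeholder values; ``model`` stands for any instantiated NeMo model:
.. code-block:: python

    import pytorch_lightning as pl

    # Illustrative only: 4 GPUs on one node with 16-bit (mixed) precision.
    trainer = pl.Trainer(gpus=4, num_nodes=1, precision=16, max_epochs=100)
    trainer.fit(model)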
Requirements
------------
NeMo's main requirements are:
1) Python 3.6 or 3.7
2) PyTorch 1.6 or above
Installation
~~~~~~~~~~~~
``pip install nemo_toolkit[all]==version``
We recommend using NVIDIA's PyTorch container:
.. code-block:: bash
docker run --gpus all -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g \
-p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit \
stack=67108864 nvcr.io/nvidia/pytorch:20.06-py3

View file

@ -0,0 +1,2 @@
Natural Language Processing (NLP)
=================================

View file

@ -0,0 +1,2 @@
Speech Synthesis
================

3
docs/sources/update_docs.sh Executable file
View file

@ -0,0 +1,3 @@
rm -rf build
make clean
make html

View file

@ -46,13 +46,15 @@ class ModelPT(LightningModule, Model):
def __init__(self, cfg: DictConfig, trainer: Trainer = None):
"""
Base class from which all NeMo models should inherit
Args:
cfg (DictConfig): configuration object.
The cfg object should have (optionally) the following sub-configs:
* train_ds - to instantiate training dataset
* validation_ds - to instantiate validation dataset
* test_ds - to instantiate testing dataset
* optim - to instantiate optimizer with learning rate scheduler
* train_ds - to instantiate training dataset
* validation_ds - to instantiate validation dataset
* test_ds - to instantiate testing dataset
* optim - to instantiate optimizer with learning rate scheduler
trainer (Optional): Pytorch Lightning Trainer instance
"""
@ -198,6 +200,7 @@ class ModelPT(LightningModule, Model):
def setup_training_data(self, train_data_config: Union[DictConfig, Dict]):
"""
Setups data loader to be used in training
Args:
train_data_layer_config: training data layer parameters.
Returns:
@ -210,6 +213,7 @@ class ModelPT(LightningModule, Model):
"""
(Optionally) Setups data loader to be used in validation
Args:
val_data_layer_config: validation data layer parameters.
Returns:
@ -219,6 +223,7 @@ class ModelPT(LightningModule, Model):
def setup_test_data(self, test_data_config: Union[DictConfig, Dict]):
"""
(Optionally) Setups data loader to be used in test
Args:
test_data_layer_config: test data layer parameters.
Returns:
@ -231,16 +236,14 @@ class ModelPT(LightningModule, Model):
Prepares an optimizer from a string name and its optional config parameters.
Args:
optim_config: a dictionary containing the following keys.
- "lr": mandatory key for learning rate. Will raise ValueError
if not provided.
optim_config: A dictionary containing the following keys:
- "optimizer": string name pointing to one of the available
optimizers in the registry. If not provided, defaults to "adam".
- "opt_args": Optional list of strings, in the format "arg_name=arg_value".
The list of "arg_value" will be parsed and a dictionary of optimizer
kwargs will be built and supplied to instantiate the optimizer.
* "lr": mandatory key for learning rate. Will raise ValueError if not provided.
* "optimizer": string name pointing to one of the available optimizers in the registry. \
If not provided, defaults to "adam".
* "opt_args": Optional list of strings, in the format "arg_name=arg_value". \
The list of "arg_value" will be parsed and a dictionary of optimizer kwargs \
will be built and supplied to instantiate the optimizer.
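A minimal, illustrative example (the values below are placeholders, not defaults):
optim_config = {"lr": 0.001, "optimizer": "adam", "opt_args": ["weight_decay=0.001"]}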
"""
# If config was not explicitly passed to us
if optim_config is None:

View file

@ -16,7 +16,7 @@ from typing import Optional, Tuple
from nemo.core.neural_types.axes import AxisKind, AxisType
from nemo.core.neural_types.comparison import NeuralTypeComparisonResult
from nemo.core.neural_types.elements import *
from nemo.core.neural_types.elements import ElementType, VoidType
__all__ = [
'NeuralType',