|status| |license| |lgtm_grade| |lgtm_alerts| |black|

.. |status| image:: http://www.repostatus.org/badges/latest/active.svg
  :target: http://www.repostatus.org/#active
  :alt: Project Status: Active  The project has reached a stable, usable state and is being actively developed.


.. |license| image:: https://img.shields.io/badge/License-Apache%202.0-brightgreen.svg
  :target: https://github.com/NVIDIA/NeMo/blob/master/LICENSE
  :alt: NeMo core license and license for collections in this repo

.. |lgtm_grade| image:: https://img.shields.io/lgtm/grade/python/g/NVIDIA/NeMo.svg?logo=lgtm&logoWidth=18
  :target: https://lgtm.com/projects/g/NVIDIA/NeMo/context:python
  :alt: Language grade: Python

.. |lgtm_alerts| image:: https://img.shields.io/lgtm/alerts/g/NVIDIA/NeMo.svg?logo=lgtm&logoWidth=18
  :target: https://lgtm.com/projects/g/NVIDIA/NeMo/alerts/
  :alt: Total alerts

.. |black| image:: https://img.shields.io/badge/code%20style-black-000000.svg
  :target: https://github.com/psf/black
  :alt: Code style: black

**NVIDIA NeMo**
===============
Train State-of-the-Art AI Models
--------------------------------

**Introduction**

NeMo is a toolkit for creating `Conversational AI <https://developer.nvidia.com/conversational-ai#started>`_ applications.

The NeMo toolkit lets researchers easily compose complex neural network architectures for conversational AI from reusable components called Neural Modules.
**Neural Modules** are conceptual blocks of neural networks that take *typed* inputs and produce *typed* outputs. Such modules typically represent data layers, encoders, decoders, language models, loss functions, or methods of combining activations.
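To make the typed-composition idea concrete, here is a minimal plain-Python sketch. It is **not** the actual NeMo API; the class and type names (``AudioSignal``, ``LogProbs``, ``DataLayer``, ``AcousticModel``, ``connect``) are hypothetical stand-ins that only illustrate how typed inputs and outputs let incompatible modules be rejected before any data flows:

.. code-block:: python

    from dataclasses import dataclass

    # Hypothetical stand-ins for NeMo's neural types (illustrative only).
    @dataclass
    class AudioSignal:
        samples: list

    @dataclass
    class LogProbs:
        values: list

    class DataLayer:
        """Produces typed audio data."""
        output_type = AudioSignal

        def run(self):
            return AudioSignal(samples=[1, 2, 3])

    class AcousticModel:
        """Consumes AudioSignal, produces LogProbs."""
        input_type = AudioSignal
        output_type = LogProbs

        def run(self, x):
            return LogProbs(values=[2 * s for s in x.samples])

    def connect(producer, consumer):
        # The typed interface lets incompatible wiring fail early,
        # before any tensors are computed.
        if producer.output_type is not consumer.input_type:
            raise TypeError("module output/input types do not match")

    data_layer = DataLayer()
    model = AcousticModel()
    connect(data_layer, model)   # OK: AudioSignal feeds an AudioSignal input
    out = model.run(data_layer.run())

Wiring ``model`` into itself, by contrast, would raise a ``TypeError`` at graph-construction time, because ``LogProbs`` is not ``AudioSignal``.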


The toolkit comes with extendable collections of pre-built modules and ready-to-use models for automatic speech recognition (ASR), natural language processing (NLP), and text-to-speech synthesis (TTS).
Built for speed, NeMo can utilize NVIDIA's Tensor Cores and scale out training to multiple GPUs and multiple nodes.


`Documentation <https://docs.nvidia.com/deeplearning/nemo/developer_guide/en/candidate/>`_
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


Requirements
------------

NeMo works with:

1) Python 3.6 or 3.7
2) PyTorch 1.6 or above
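The two requirements above can be checked programmatically. The helper names below (``python_ok``, ``pytorch_ok``) are a convenience sketch and not part of NeMo itself:

.. code-block:: python

    import sys

    def python_ok(version_info=None):
        """True if the interpreter is Python 3.6 or 3.7."""
        major, minor = (version_info or sys.version_info)[:2]
        return (major, minor) in ((3, 6), (3, 7))

    def pytorch_ok(version_string):
        """True if a PyTorch version string is 1.6 or above.

        Compares only the numeric "major.minor" prefix, so local-build
        suffixes such as "1.6.0+cu101" are handled.
        """
        major, minor = (int(p) for p in version_string.split(".")[:2])
        return (major, minor) >= (1, 6)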

Installation
~~~~~~~~~~~~
.. code-block:: bash

    pip install nemo_toolkit[all]==version

We recommend using NVIDIA's PyTorch container:

.. code-block:: bash

    docker run --gpus all -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g \
        -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 \
        --ulimit stack=67108864 nvcr.io/nvidia/pytorch:20.06-py3