|status| |documentation| |license| |lgtm_grade| |lgtm_alerts| |black|

.. |status| image:: http://www.repostatus.org/badges/latest/active.svg
  :target: http://www.repostatus.org/#active
  :alt: Project Status: Active - The project has reached a stable, usable state and is being actively developed.

.. |documentation| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=main
  :alt: Documentation
  :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/

.. |license| image:: https://img.shields.io/badge/License-Apache%202.0-brightgreen.svg
  :target: https://github.com/NVIDIA/NeMo/blob/master/LICENSE
  :alt: NeMo core license and license for collections in this repo

.. |lgtm_grade| image:: https://img.shields.io/lgtm/grade/python/g/NVIDIA/NeMo.svg?logo=lgtm&logoWidth=18
  :target: https://lgtm.com/projects/g/NVIDIA/NeMo/context:python
  :alt: Language grade: Python

.. |lgtm_alerts| image:: https://img.shields.io/lgtm/alerts/g/NVIDIA/NeMo.svg?logo=lgtm&logoWidth=18
  :target: https://lgtm.com/projects/g/NVIDIA/NeMo/alerts/
  :alt: Total alerts

.. |black| image:: https://img.shields.io/badge/code%20style-black-000000.svg
  :target: https://github.com/psf/black
  :alt: Code style: black
.. _main-readme:

**NVIDIA NeMo**
===============

Introduction
------------

NVIDIA NeMo is a conversational AI toolkit built for researchers working on automatic speech recognition (ASR), natural language processing (NLP), and text-to-speech synthesis (TTS).
The primary objective of NeMo is to help researchers from industry and academia reuse prior work (code and pretrained models) and make it easier to create new `conversational AI models <https://developer.nvidia.com/conversational-ai#started>`_.

`Introductory video. <https://www.youtube.com/embed/wBgpMf_KQVw>`_

Key Features
------------
* Speech processing

  * `Automatic Speech Recognition (ASR) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/intro.html>`_

    * Supported models: Jasper, QuartzNet, CitriNet, Conformer-CTC, Conformer-Transducer, ContextNet, ...
    * Supports CTC and Transducer/RNNT losses/decoders
    * Beam Search decoding

  * `Language Modelling for ASR <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html>`_: N-gram LM in fusion with Beam Search decoding, Neural Rescoring with Transformer
  * `Speech Classification and Speech Command Recognition <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speech_classification/intro.html>`_: MatchboxNet (Command Recognition)
  * `Voice Activity Detection (VAD) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/speech_classification/models.html#marblenet-vad>`_: MarbleNet
  * `Speaker Recognition <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_recognition/intro.html>`_: SpeakerNet, ECAPA_TDNN
  * `Speaker Diarization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_diarization/intro.html>`_: SpeakerNet, ECAPA_TDNN
  * `Pretrained models in different languages <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_asr>`_: English, Spanish, German, Russian, Chinese, French, Italian, Polish, ...
  * `NGC collection of pre-trained speech processing models <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_asr>`_ (see the loading sketch after this feature list)

* Natural Language Processing

  * `Compatible with Hugging Face Transformers and NVIDIA Megatron <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/megatron_finetuning.html>`_
  * `Neural Machine Translation (NMT) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/machine_translation.html>`_
  * `Punctuation and Capitalization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/punctuation_and_capitalization.html>`_
  * `Token classification (named entity recognition) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/token_classification.html>`_
  * `Text classification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/text_classification.html>`_
  * `Joint Intent and Slot Classification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/joint_intent_slot.html>`_
  * `BERT pre-training <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/bert_pretraining.html>`_
  * `Question answering <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/question_answering.html>`_
  * `GLUE benchmark <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/glue_benchmark.html>`_
  * `Information retrieval <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/information_retrieval.html>`_
  * `Entity Linking <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/entity_linking.html>`_
  * `Dialogue State Tracking <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/sgd_qa.html>`_
  * `Neural Duplex Text Normalization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/text_normalization.html>`_
  * `NGC collection of pre-trained NLP models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_nlp>`_

* `Speech synthesis (TTS) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tts/intro.html#>`_

  * Spectrogram generation: Tacotron2, GlowTTS, FastSpeech2, FastPitch
  * Vocoders: WaveGlow, SqueezeWave, UniGlow, MelGAN, HiFiGAN
  * End-to-end speech generation: FastPitch_HifiGan_E2E, FastSpeech2_HifiGan_E2E
  * `NGC collection of pre-trained TTS models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_tts>`_

* `Tools <https://github.com/NVIDIA/NeMo/tree/main/tools>`_

  * `Text Processing (text normalization and inverse text normalization) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/text_processing_deployment.html>`_
  * `CTC-Segmentation tool <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/ctc_segmentation.html>`_
  * `Speech Data Explorer <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/speech_data_explorer.html>`_: a dash-based tool for interactive exploration of ASR/TTS datasets
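Most of the pretrained checkpoints referenced above can be pulled directly from NGC through the corresponding model classes. Below is a minimal loading sketch, assuming NeMo is installed with the ASR collection; ``QuartzNet15x5Base-En`` is one of the published checkpoint names, and ``sample.wav`` is a placeholder path.

.. code-block:: python

    # Minimal sketch: download a pretrained ASR model from NGC and transcribe an audio file.
    import nemo.collections.asr as nemo_asr

    # See which checkpoints this model class can fetch from NGC.
    print(nemo_asr.models.EncDecCTCModel.list_available_models())

    asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name="QuartzNet15x5Base-En")
    print(asr_model.transcribe(paths2audio_files=["sample.wav"]))

NLP and TTS models expose the same ``list_available_models()`` / ``from_pretrained()`` interface through their respective collections.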
Built for speed, NeMo can utilize NVIDIA's Tensor Cores and scale out training to multiple GPUs and multiple nodes.
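The snippet below is an illustrative sketch (not one of the repository's example scripts) of what multi-GPU, mixed-precision fine-tuning of a pretrained model looks like; the exact ``Trainer`` flags depend on the installed PyTorch Lightning version, and the manifest path is a placeholder you would replace with your own data.

.. code-block:: python

    # Illustrative only: fine-tune a pretrained ASR model on 4 GPUs with 16-bit precision.
    import pytorch_lightning as pl
    from omegaconf import OmegaConf
    import nemo.collections.asr as nemo_asr

    asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name="QuartzNet15x5Base-En")

    # Placeholder manifest: each line is a JSON record with audio_filepath, duration and text.
    asr_model.setup_training_data(OmegaConf.create({
        "manifest_filepath": "train_manifest.json",
        "sample_rate": 16000,
        "labels": asr_model.decoder.vocabulary,
        "batch_size": 32,
        "shuffle": True,
    }))

    # Tensor Core friendly settings: fp16 precision, DDP across 4 GPUs on a single node.
    trainer = pl.Trainer(gpus=4, num_nodes=1, precision=16, accelerator="ddp", max_epochs=5)
    trainer.fit(asr_model)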

Requirements
------------

1) Python 3.6, 3.7 or 3.8
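As a quick, purely illustrative check that the interpreter in your environment satisfies this requirement before installing:

.. code-block:: python

    # Illustrative check against the supported Python versions listed above.
    import sys

    assert sys.version_info[:2] in {(3, 6), (3, 7), (3, 8)}, (
        f"NeMo expects Python 3.6-3.8, found {sys.version.split()[0]}"
    )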
Signed-off-by: Micha Livne <mlivne@nvidia.com> Co-authored-by: Micha Livne <mlivne@nvidia.com> * remove unused imports Signed-off-by: ericharper <complex451@gmail.com> * remove unused import Signed-off-by: ericharper <complex451@gmail.com> * remove unused import Signed-off-by: ericharper <complex451@gmail.com> * remove unused import Signed-off-by: ericharper <complex451@gmail.com> * check fused_fp16 and fused_bf16 are not both True Signed-off-by: ericharper <complex451@gmail.com> * update predict script for model parallel .nemo Signed-off-by: ericharper <complex451@gmail.com> * typo Signed-off-by: ericharper <complex451@gmail.com> * add script to convert .ckpt to .nemo Signed-off-by: ericharper <complex451@gmail.com> * in progress Signed-off-by: ericharper <complex451@gmail.com> * update Signed-off-by: ericharper <complex451@gmail.com> * convert mp checkpoints to nemo Signed-off-by: ericharper <complex451@gmail.com> * update help Signed-off-by: ericharper <complex451@gmail.com> * add safeguard for model parallel save_to Signed-off-by: ericharper <complex451@gmail.com> * adjust NLPModel save_to to be safer for model parallel Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com> Co-authored-by: Oleksii Kuchaiev <okuchaiev@users.noreply.github.com> Co-authored-by: Micha Livne <michalivne@users.noreply.github.com> Co-authored-by: Micha Livne <mlivne@nvidia.com> Co-authored-by: Oleksii Kuchaiev <okuchaiev@nvidia.com> * [BigNLP] Update GPT evaluation to work with tensor model parallel (#2959) * in progress Signed-off-by: ericharper <complex451@gmail.com> * update args Signed-off-by: ericharper <complex451@gmail.com> * add request dataset Signed-off-by: ericharper <complex451@gmail.com> * tokenize request Signed-off-by: ericharper <complex451@gmail.com> * in progress Signed-off-by: ericharper <complex451@gmail.com> * able to run Signed-off-by: ericharper <complex451@gmail.com> * reduce logits Signed-off-by: ericharper <complex451@gmail.com> * capture response Signed-off-by: ericharper <complex451@gmail.com> * squeeze and unsqueeze Signed-off-by: ericharper <complex451@gmail.com> * handle non model parallel case Signed-off-by: ericharper <complex451@gmail.com> * clean imports Signed-off-by: ericharper <complex451@gmail.com> * add file Signed-off-by: ericharper <complex451@gmail.com> * convert logits to log_probs Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com> * rename logits to log_probs Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com> Co-authored-by: Oleksii Kuchaiev <okuchaiev@nvidia.com> * style Signed-off-by: ericharper <complex451@gmail.com> * fix copyright headers Signed-off-by: ericharper <complex451@gmail.com> * fix copyright headers Signed-off-by: ericharper <complex451@gmail.com> * remove old TimingCallback Signed-off-by: ericharper <complex451@gmail.com> * style Signed-off-by: ericharper <complex451@gmail.com> * update jenkins to use latest apex and sandeep's fork Signed-off-by: ericharper <complex451@gmail.com> * update jenkins Signed-off-by: ericharper <complex451@gmail.com> * update jenkins Signed-off-by: ericharper <complex451@gmail.com> * update jenkins Signed-off-by: ericharper <complex451@gmail.com> * update jenkins Signed-off-by: ericharper <complex451@gmail.com> * try 2109 container Signed-off-by: ericharper <complex451@gmail.com> * try cuda container Signed-off-by: ericharper <complex451@gmail.com> * use internal container Signed-off-by: ericharper <complex451@gmail.com> * update checkpoint tests Signed-off-by: ericharper <complex451@gmail.com> * fix scheduler 
args Signed-off-by: ericharper <complex451@gmail.com> * update eval Signed-off-by: ericharper <complex451@gmail.com> * style Signed-off-by: ericharper <complex451@gmail.com> * update jenkins to use ptl 1.5 rc Signed-off-by: ericharper <complex451@gmail.com> * add import guard to jenkins Signed-off-by: ericharper <complex451@gmail.com> * add import guard to jenkins Signed-off-by: ericharper <complex451@gmail.com> * remove deterministic Signed-off-by: ericharper <complex451@gmail.com> * install numba .53 Signed-off-by: ericharper <complex451@gmail.com> * allow for more variance Signed-off-by: ericharper <complex451@gmail.com> * update trainer config dataclass Signed-off-by: ericharper <complex451@gmail.com> * test_get_optimizer on gpu Signed-off-by: ericharper <complex451@gmail.com> * revert comment Signed-off-by: ericharper <complex451@gmail.com> * change trainer config default to 32 Signed-off-by: ericharper <complex451@gmail.com> * [BigNLP] Remove fused kernel code instead use Apex (#2984) * remove fused_kernels Signed-off-by: ericharper <complex451@gmail.com> * remove fused_kernels Signed-off-by: ericharper <complex451@gmail.com> * remove fused layer norm and fused softmax and use apex instead Signed-off-by: ericharper <complex451@gmail.com> * update imports Signed-off-by: ericharper <complex451@gmail.com> * remove comment Signed-off-by: ericharper <complex451@gmail.com> * use apex enums Signed-off-by: ericharper <complex451@gmail.com> * use apex enums Signed-off-by: ericharper <complex451@gmail.com> * add tab Signed-off-by: ericharper <complex451@gmail.com> * Timer with sliding window (#3002) Co-authored-by: Micha Livne <michalivne@users.noreply.github.com> * revert tab Signed-off-by: ericharper <complex451@gmail.com> * check for rank zero Signed-off-by: ericharper <complex451@gmail.com> * check for rank zero Signed-off-by: ericharper <complex451@gmail.com> * try explicit log dir Signed-off-by: ericharper <complex451@gmail.com> * add + Signed-off-by: ericharper <complex451@gmail.com> * don't rm Signed-off-by: ericharper <complex451@gmail.com> * make dir if it doesn't exist Signed-off-by: ericharper <complex451@gmail.com> * create mp nemo file in temp directory Signed-off-by: ericharper <complex451@gmail.com> * simplify mp save_to Signed-off-by: ericharper <complex451@gmail.com> * handle mp 1 case Signed-off-by: ericharper <complex451@gmail.com> * style fix Signed-off-by: ericharper <complex451@gmail.com> * remove files Signed-off-by: ericharper <complex451@gmail.com> * fix consumed_samples when resuming Signed-off-by: ericharper <complex451@gmail.com> * fix reinstall.sh Signed-off-by: ericharper <complex451@gmail.com> * update req Signed-off-by: ericharper <complex451@gmail.com> * add more detailed log for dataloaders Signed-off-by: ericharper <complex451@gmail.com> * check if cuda is available before using fused_adam Signed-off-by: ericharper <complex451@gmail.com> * revert comment Signed-off-by: ericharper <complex451@gmail.com> * update eval script to use model.freeze Signed-off-by: ericharper <complex451@gmail.com> * log train loss averaged over gradient accumulation steps Signed-off-by: ericharper <complex451@gmail.com> * check copyright earlier Signed-off-by: ericharper <complex451@gmail.com> * todo Signed-off-by: ericharper <complex451@gmail.com> * override SaveRestoreConnector in NLPModel init Signed-off-by: ericharper <complex451@gmail.com> * move to scripts Signed-off-by: ericharper <complex451@gmail.com> * remove star import Signed-off-by: ericharper <complex451@gmail.com> * 
remove comments Signed-off-by: ericharper <complex451@gmail.com> * remove unused dataset Signed-off-by: ericharper <complex451@gmail.com> * removed barrier Signed-off-by: ericharper <complex451@gmail.com> * check cfg Signed-off-by: ericharper <complex451@gmail.com> * remove logging Signed-off-by: ericharper <complex451@gmail.com> * freeze, unfreeze Signed-off-by: ericharper <complex451@gmail.com> * return None Signed-off-by: ericharper <complex451@gmail.com> * remove unused imports Signed-off-by: ericharper <complex451@gmail.com> * add TODO Signed-off-by: ericharper <complex451@gmail.com> * typecheck Signed-off-by: ericharper <complex451@gmail.com> * typo Signed-off-by: ericharper <complex451@gmail.com> * todo Signed-off-by: ericharper <complex451@gmail.com> * add common native plugin Signed-off-by: ericharper <complex451@gmail.com> * restore with trainer Signed-off-by: ericharper <complex451@gmail.com> * style Signed-off-by: ericharper <complex451@gmail.com> * deprecate megatron-lm bert Signed-off-by: ericharper <complex451@gmail.com> * deprecate megatron-lm bert Signed-off-by: ericharper <complex451@gmail.com> * compile helpers ont he fly Signed-off-by: ericharper <complex451@gmail.com> * remove amp_level Signed-off-by: ericharper <complex451@gmail.com> * remove amp_level from configs Signed-off-by: ericharper <complex451@gmail.com> * add missing import Signed-off-by: ericharper <complex451@gmail.com> * typo Signed-off-by: ericharper <complex451@gmail.com> * remove amp_level Signed-off-by: ericharper <complex451@gmail.com> * use fast huggingface tokenizers by default Signed-off-by: ericharper <complex451@gmail.com> * deal with huggingface tokenizer positional args Signed-off-by: ericharper <complex451@gmail.com> * deal with huggingface tokenizer positional args Signed-off-by: ericharper <complex451@gmail.com> * deal with huggingface tokenizer positional args Signed-off-by: ericharper <complex451@gmail.com> * revert use_fast default to False Signed-off-by: ericharper <complex451@gmail.com> * return super training_epoch_end Signed-off-by: ericharper <complex451@gmail.com> * remove optimizer_idx arg from training_step Signed-off-by: ericharper <complex451@gmail.com> * remove unused arg from on_train_epoch_end Signed-off-by: ericharper <complex451@gmail.com> * add restore_from_path to nemo config Signed-off-by: ericharper <complex451@gmail.com> * add comment Signed-off-by: ericharper <complex451@gmail.com> * revert Signed-off-by: ericharper <complex451@gmail.com> * override connector if not subclassing NLPSaveRestoreConnector for model parallel save Signed-off-by: ericharper <complex451@gmail.com> * update test optimizer Signed-off-by: ericharper <complex451@gmail.com> * clean up Signed-off-by: ericharper <complex451@gmail.com> * clean up Signed-off-by: ericharper <complex451@gmail.com> * clean up Signed-off-by: ericharper <complex451@gmail.com> * clean up Signed-off-by: ericharper <complex451@gmail.com> * make data_prefix mandatory in config Signed-off-by: ericharper <complex451@gmail.com> * update installation instructions on readme Signed-off-by: ericharper <complex451@gmail.com> * update dockerfile Signed-off-by: ericharper <complex451@gmail.com> * add todo Signed-off-by: ericharper <complex451@gmail.com> * raise error if trying to use always_save_nemo with model parallel model Signed-off-by: ericharper <complex451@gmail.com> * remove comment Signed-off-by: ericharper <complex451@gmail.com> Co-authored-by: Sandeep Subramanian <sandeep.subramanian.1@umontreal.ca> Co-authored-by: Sangkug 
Lym <slym@nvidia.com> Co-authored-by: Oleksii Kuchaiev <okuchaiev@users.noreply.github.com> Co-authored-by: Micha Livne <michalivne@users.noreply.github.com> Co-authored-by: Micha Livne <mlivne@nvidia.com> Co-authored-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
2021-10-21 05:06:37 +02:00
2) PyTorch 1.10.0 or above
3) NVIDIA GPU for training
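
A quick way to confirm the installed PyTorch version and that a CUDA-capable GPU is visible (an optional sanity check, not an additional requirement):

.. code-block:: bash

    python -c "import torch; print(torch.__version__, torch.cuda.is_available())"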
Documentation
-------------
.. |main| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=main
:alt: Documentation Status
:scale: 100%
:target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/
.. |stable| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=stable
:alt: Documentation Status
:scale: 100%
:target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/
+---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
| Version | Status | Description |
+=========+=============+==========================================================================================================================================+
| Latest | |main| | `Documentation of the latest (i.e. main) branch. <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/>`_ |
+---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
| Stable | |stable| | `Documentation of the stable (i.e. most recent release) branch. <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/>`_ |
+---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
Tutorials
---------
A great way to start with NeMo is by checking `one of our tutorials <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/starthere/tutorials.html>`_.
Getting help with NeMo
----------------------
FAQ can be found on NeMo's `Discussions board <https://github.com/NVIDIA/NeMo/discussions>`_. You are welcome to ask questions or start discussions there.
Installation
------------
Pip
~~~
Use this installation mode if you want the latest released version.

.. code-block:: bash

    apt-get update && apt-get install -y libsndfile1 ffmpeg
    pip install Cython
    pip install nemo_toolkit['all']
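
If you only need a single domain, a collection-specific extra (assuming the extra you want, e.g. ``asr``, is published for your release) can be installed instead of ``all``:

.. code-block:: bash

    pip install nemo_toolkit['asr']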
Pip from source
~~~~~~~~~~~~~~~
Use this installation mode if you want a version from a particular GitHub branch (e.g. main).

.. code-block:: bash

    apt-get update && apt-get install -y libsndfile1 ffmpeg
    pip install Cython
    python -m pip install git+https://github.com/NVIDIA/NeMo.git@{BRANCH}#egg=nemo_toolkit[all]
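
For example, with ``{BRANCH}`` replaced by ``main``, the command becomes:

.. code-block:: bash

    python -m pip install git+https://github.com/NVIDIA/NeMo.git@main#egg=nemo_toolkit[all]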
From source
~~~~~~~~~~~
Use this installation mode if you are contributing to NeMo.

.. code-block:: bash

    apt-get update && apt-get install -y libsndfile1 ffmpeg
    git clone https://github.com/NVIDIA/NeMo
    cd NeMo
    ./reinstall.sh
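
After ``./reinstall.sh`` completes, a minimal sanity check that the package is importable (assuming the install succeeded and that ``nemo`` exposes ``__version__``):

.. code-block:: bash

    python -c "import nemo; print(nemo.__version__)"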
RNNT
~~~~
Note that RNNT requires numba to be installed from conda.

.. code-block:: bash

    conda remove numba
    pip uninstall numba
    conda install -c numba numba
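
To verify that the conda-installed numba can see the GPU (an optional check via numba's CUDA support):

.. code-block:: bash

    python -c "from numba import cuda; print(cuda.is_available())"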
Signed-off-by: ericharper <complex451@gmail.com> * add fused kernels Signed-off-by: ericharper <complex451@gmail.com> * add fused kernels Signed-off-by: ericharper <complex451@gmail.com> * update process batch Signed-off-by: ericharper <complex451@gmail.com> * remove erroneous import Signed-off-by: ericharper <complex451@gmail.com> * remove erroneous import Signed-off-by: ericharper <complex451@gmail.com> * remove erroneous import Signed-off-by: ericharper <complex451@gmail.com> * add megatron clip_grad Signed-off-by: ericharper <complex451@gmail.com> * trying to resolve circular import error Signed-off-by: ericharper <complex451@gmail.com> * rename file Signed-off-by: ericharper <complex451@gmail.com> * remove non-gpt models and datasets from __init__ files Signed-off-by: ericharper <complex451@gmail.com> * set device in constructorfor gpu init Signed-off-by: ericharper <complex451@gmail.com> * set device in constructorfor gpu init Signed-off-by: ericharper <complex451@gmail.com> * set_device in constructor Signed-off-by: ericharper <complex451@gmail.com> * clean config Signed-off-by: ericharper <complex451@gmail.com> * update MegatronDataset Signed-off-by: ericharper <complex451@gmail.com> * clean up MegatronModule Signed-off-by: ericharper <complex451@gmail.com> * clean up MegatronModule Signed-off-by: ericharper <complex451@gmail.com> * rename fp16 and bf16 flags to fused_softmax_input_in_fp16/bf16 Signed-off-by: ericharper <complex451@gmail.com> * rename to fused_fp16 Signed-off-by: ericharper <complex451@gmail.com> * add fused_fp16 arg to LayerNorm calls Signed-off-by: ericharper <complex451@gmail.com> * fix arg name Signed-off-by: ericharper <complex451@gmail.com> * fix arg name Signed-off-by: ericharper <complex451@gmail.com> * fix import Signed-off-by: ericharper <complex451@gmail.com> * update arg Signed-off-by: ericharper <complex451@gmail.com> * skip warmup default to True Signed-off-by: ericharper <complex451@gmail.com> * skip warmup default to True Signed-off-by: ericharper <complex451@gmail.com> * Adding complete method to MegatronGPTModel (#2935) Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com> * make ffn_hidden_size mandatory Signed-off-by: ericharper <complex451@gmail.com> * Manually migrating timing of step into branch (#2937) * 1. Manually migrating timing of step into branch. Signed-off-by: Micha Livne <mlivne@nvidia.com> * 1. Updated file name and content. Signed-off-by: Micha Livne <mlivne@nvidia.com> * 1. Updated to latest code. 
Signed-off-by: Micha Livne <mlivne@nvidia.com> Co-authored-by: Micha Livne <mlivne@nvidia.com> * remove unused imports Signed-off-by: ericharper <complex451@gmail.com> * remove unused import Signed-off-by: ericharper <complex451@gmail.com> * remove unused import Signed-off-by: ericharper <complex451@gmail.com> * remove unused import Signed-off-by: ericharper <complex451@gmail.com> * check fused_fp16 and fused_bf16 are not both True Signed-off-by: ericharper <complex451@gmail.com> * update predict script for model parallel .nemo Signed-off-by: ericharper <complex451@gmail.com> * typo Signed-off-by: ericharper <complex451@gmail.com> * add script to convert .ckpt to .nemo Signed-off-by: ericharper <complex451@gmail.com> * in progress Signed-off-by: ericharper <complex451@gmail.com> * update Signed-off-by: ericharper <complex451@gmail.com> * convert mp checkpoints to nemo Signed-off-by: ericharper <complex451@gmail.com> * update help Signed-off-by: ericharper <complex451@gmail.com> * add safeguard for model parallel save_to Signed-off-by: ericharper <complex451@gmail.com> * adjust NLPModel save_to to be safer for model parallel Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com> Co-authored-by: Oleksii Kuchaiev <okuchaiev@users.noreply.github.com> Co-authored-by: Micha Livne <michalivne@users.noreply.github.com> Co-authored-by: Micha Livne <mlivne@nvidia.com> Co-authored-by: Oleksii Kuchaiev <okuchaiev@nvidia.com> * [BigNLP] Update GPT evaluation to work with tensor model parallel (#2959) * in progress Signed-off-by: ericharper <complex451@gmail.com> * update args Signed-off-by: ericharper <complex451@gmail.com> * add request dataset Signed-off-by: ericharper <complex451@gmail.com> * tokenize request Signed-off-by: ericharper <complex451@gmail.com> * in progress Signed-off-by: ericharper <complex451@gmail.com> * able to run Signed-off-by: ericharper <complex451@gmail.com> * reduce logits Signed-off-by: ericharper <complex451@gmail.com> * capture response Signed-off-by: ericharper <complex451@gmail.com> * squeeze and unsqueeze Signed-off-by: ericharper <complex451@gmail.com> * handle non model parallel case Signed-off-by: ericharper <complex451@gmail.com> * clean imports Signed-off-by: ericharper <complex451@gmail.com> * add file Signed-off-by: ericharper <complex451@gmail.com> * convert logits to log_probs Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com> * rename logits to log_probs Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com> Co-authored-by: Oleksii Kuchaiev <okuchaiev@nvidia.com> * style Signed-off-by: ericharper <complex451@gmail.com> * fix copyright headers Signed-off-by: ericharper <complex451@gmail.com> * fix copyright headers Signed-off-by: ericharper <complex451@gmail.com> * remove old TimingCallback Signed-off-by: ericharper <complex451@gmail.com> * style Signed-off-by: ericharper <complex451@gmail.com> * update jenkins to use latest apex and sandeep's fork Signed-off-by: ericharper <complex451@gmail.com> * update jenkins Signed-off-by: ericharper <complex451@gmail.com> * update jenkins Signed-off-by: ericharper <complex451@gmail.com> * update jenkins Signed-off-by: ericharper <complex451@gmail.com> * update jenkins Signed-off-by: ericharper <complex451@gmail.com> * try 2109 container Signed-off-by: ericharper <complex451@gmail.com> * try cuda container Signed-off-by: ericharper <complex451@gmail.com> * use internal container Signed-off-by: ericharper <complex451@gmail.com> * update checkpoint tests Signed-off-by: ericharper <complex451@gmail.com> * fix scheduler 
args Signed-off-by: ericharper <complex451@gmail.com> * update eval Signed-off-by: ericharper <complex451@gmail.com> * style Signed-off-by: ericharper <complex451@gmail.com> * update jenkins to use ptl 1.5 rc Signed-off-by: ericharper <complex451@gmail.com> * add import guard to jenkins Signed-off-by: ericharper <complex451@gmail.com> * add import guard to jenkins Signed-off-by: ericharper <complex451@gmail.com> * remove deterministic Signed-off-by: ericharper <complex451@gmail.com> * install numba .53 Signed-off-by: ericharper <complex451@gmail.com> * allow for more variance Signed-off-by: ericharper <complex451@gmail.com> * update trainer config dataclass Signed-off-by: ericharper <complex451@gmail.com> * test_get_optimizer on gpu Signed-off-by: ericharper <complex451@gmail.com> * revert comment Signed-off-by: ericharper <complex451@gmail.com> * change trainer config default to 32 Signed-off-by: ericharper <complex451@gmail.com> * [BigNLP] Remove fused kernel code instead use Apex (#2984) * remove fused_kernels Signed-off-by: ericharper <complex451@gmail.com> * remove fused_kernels Signed-off-by: ericharper <complex451@gmail.com> * remove fused layer norm and fused softmax and use apex instead Signed-off-by: ericharper <complex451@gmail.com> * update imports Signed-off-by: ericharper <complex451@gmail.com> * remove comment Signed-off-by: ericharper <complex451@gmail.com> * use apex enums Signed-off-by: ericharper <complex451@gmail.com> * use apex enums Signed-off-by: ericharper <complex451@gmail.com> * add tab Signed-off-by: ericharper <complex451@gmail.com> * Timer with sliding window (#3002) Co-authored-by: Micha Livne <michalivne@users.noreply.github.com> * revert tab Signed-off-by: ericharper <complex451@gmail.com> * check for rank zero Signed-off-by: ericharper <complex451@gmail.com> * check for rank zero Signed-off-by: ericharper <complex451@gmail.com> * try explicit log dir Signed-off-by: ericharper <complex451@gmail.com> * add + Signed-off-by: ericharper <complex451@gmail.com> * don't rm Signed-off-by: ericharper <complex451@gmail.com> * make dir if it doesn't exist Signed-off-by: ericharper <complex451@gmail.com> * create mp nemo file in temp directory Signed-off-by: ericharper <complex451@gmail.com> * simplify mp save_to Signed-off-by: ericharper <complex451@gmail.com> * handle mp 1 case Signed-off-by: ericharper <complex451@gmail.com> * style fix Signed-off-by: ericharper <complex451@gmail.com> * remove files Signed-off-by: ericharper <complex451@gmail.com> * fix consumed_samples when resuming Signed-off-by: ericharper <complex451@gmail.com> * fix reinstall.sh Signed-off-by: ericharper <complex451@gmail.com> * update req Signed-off-by: ericharper <complex451@gmail.com> * add more detailed log for dataloaders Signed-off-by: ericharper <complex451@gmail.com> * check if cuda is available before using fused_adam Signed-off-by: ericharper <complex451@gmail.com> * revert comment Signed-off-by: ericharper <complex451@gmail.com> * update eval script to use model.freeze Signed-off-by: ericharper <complex451@gmail.com> * log train loss averaged over gradient accumulation steps Signed-off-by: ericharper <complex451@gmail.com> * check copyright earlier Signed-off-by: ericharper <complex451@gmail.com> * todo Signed-off-by: ericharper <complex451@gmail.com> * override SaveRestoreConnector in NLPModel init Signed-off-by: ericharper <complex451@gmail.com> * move to scripts Signed-off-by: ericharper <complex451@gmail.com> * remove star import Signed-off-by: ericharper <complex451@gmail.com> * 
remove comments Signed-off-by: ericharper <complex451@gmail.com> * remove unused dataset Signed-off-by: ericharper <complex451@gmail.com> * removed barrier Signed-off-by: ericharper <complex451@gmail.com> * check cfg Signed-off-by: ericharper <complex451@gmail.com> * remove logging Signed-off-by: ericharper <complex451@gmail.com> * freeze, unfreeze Signed-off-by: ericharper <complex451@gmail.com> * return None Signed-off-by: ericharper <complex451@gmail.com> * remove unused imports Signed-off-by: ericharper <complex451@gmail.com> * add TODO Signed-off-by: ericharper <complex451@gmail.com> * typecheck Signed-off-by: ericharper <complex451@gmail.com> * typo Signed-off-by: ericharper <complex451@gmail.com> * todo Signed-off-by: ericharper <complex451@gmail.com> * add common native plugin Signed-off-by: ericharper <complex451@gmail.com> * restore with trainer Signed-off-by: ericharper <complex451@gmail.com> * style Signed-off-by: ericharper <complex451@gmail.com> * deprecate megatron-lm bert Signed-off-by: ericharper <complex451@gmail.com> * deprecate megatron-lm bert Signed-off-by: ericharper <complex451@gmail.com> * compile helpers ont he fly Signed-off-by: ericharper <complex451@gmail.com> * remove amp_level Signed-off-by: ericharper <complex451@gmail.com> * remove amp_level from configs Signed-off-by: ericharper <complex451@gmail.com> * add missing import Signed-off-by: ericharper <complex451@gmail.com> * typo Signed-off-by: ericharper <complex451@gmail.com> * remove amp_level Signed-off-by: ericharper <complex451@gmail.com> * use fast huggingface tokenizers by default Signed-off-by: ericharper <complex451@gmail.com> * deal with huggingface tokenizer positional args Signed-off-by: ericharper <complex451@gmail.com> * deal with huggingface tokenizer positional args Signed-off-by: ericharper <complex451@gmail.com> * deal with huggingface tokenizer positional args Signed-off-by: ericharper <complex451@gmail.com> * revert use_fast default to False Signed-off-by: ericharper <complex451@gmail.com> * return super training_epoch_end Signed-off-by: ericharper <complex451@gmail.com> * remove optimizer_idx arg from training_step Signed-off-by: ericharper <complex451@gmail.com> * remove unused arg from on_train_epoch_end Signed-off-by: ericharper <complex451@gmail.com> * add restore_from_path to nemo config Signed-off-by: ericharper <complex451@gmail.com> * add comment Signed-off-by: ericharper <complex451@gmail.com> * revert Signed-off-by: ericharper <complex451@gmail.com> * override connector if not subclassing NLPSaveRestoreConnector for model parallel save Signed-off-by: ericharper <complex451@gmail.com> * update test optimizer Signed-off-by: ericharper <complex451@gmail.com> * clean up Signed-off-by: ericharper <complex451@gmail.com> * clean up Signed-off-by: ericharper <complex451@gmail.com> * clean up Signed-off-by: ericharper <complex451@gmail.com> * clean up Signed-off-by: ericharper <complex451@gmail.com> * make data_prefix mandatory in config Signed-off-by: ericharper <complex451@gmail.com> * update installation instructions on readme Signed-off-by: ericharper <complex451@gmail.com> * update dockerfile Signed-off-by: ericharper <complex451@gmail.com> * add todo Signed-off-by: ericharper <complex451@gmail.com> * raise error if trying to use always_save_nemo with model parallel model Signed-off-by: ericharper <complex451@gmail.com> * remove comment Signed-off-by: ericharper <complex451@gmail.com> Co-authored-by: Sandeep Subramanian <sandeep.subramanian.1@umontreal.ca> Co-authored-by: Sangkug 
Lym <slym@nvidia.com> Co-authored-by: Oleksii Kuchaiev <okuchaiev@users.noreply.github.com> Co-authored-by: Micha Livne <michalivne@users.noreply.github.com> Co-authored-by: Micha Livne <mlivne@nvidia.com> Co-authored-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
2021-10-21 05:06:37 +02:00
Megatron GPT
~~~~~~~~~~~~
Megatron GPT training requires NVIDIA Apex to be installed.

.. code-block:: bash

    git clone https://github.com/NVIDIA/apex
    cd apex
    pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
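
After the build finishes, it can be useful to confirm that Apex and its fused CUDA extensions load correctly before launching training. The snippet below is only a minimal sanity check, assuming a standard Apex install; ``FusedLayerNorm`` is used here as a representative fused module, and extension module names may differ between Apex versions.

.. code-block:: bash

    # Minimal sanity check: instantiating a fused module forces the CUDA extension to load
    python -c "import torch; from apex.normalization import FusedLayerNorm; FusedLayerNorm(8); print('Apex fused kernels OK')"
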
Docker containers:
~~~~~~~~~~~~~~~~~~
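
Container images and tags change from release to release, so the command below is only a sketch of how a CUDA-enabled PyTorch container from NGC might be launched with this repository mounted inside it; the ``21.10-py3`` tag and the mount path are placeholders, not a pinned recommendation.

.. code-block:: bash

    # Example only: launch an NGC PyTorch container with GPU access and the repo mounted
    docker run --gpus all -it --rm --shm-size=8g \
        -v $(pwd):/workspace/NeMo \
        nvcr.io/nvidia/pytorch:21.10-py3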
2021-10-21 05:06:37 +02:00
If you choose to work with the main branch, we recommend using NVIDIA's PyTorch container version 21.10-py3 and then installing NeMo from GitHub.
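
Once inside the container, installing NeMo from the GitHub ``main`` branch is typically a single ``pip`` command along the following lines (a minimal sketch; the ``[all]`` extra pulls in every collection and can be swapped for a smaller one such as ``asr`` or ``nlp`` depending on your needs):

.. code-block:: bash

    # install NeMo from the main branch with all collections
    python -m pip install "git+https://github.com/NVIDIA/NeMo.git@main#egg=nemo_toolkit[all]"
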
Note: NVIDIA's PyTorch container 21.10-py3 has not yet been released publicly. Please use a container with the nightly version of PyTorch installed if you are
unable to access the NVIDIA PyTorch 21.10 container.

.. code-block:: bash
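
    # launch the container with GPU access, mount the local NeMo clone,
    # and expose the ports commonly used for JupyterLab (8888) and TensorBoard (6006)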
    docker run --gpus all -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g \
    -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit \
ericharper <complex451@gmail.com> * log consumed samples Signed-off-by: ericharper <complex451@gmail.com> * fix mp autoresume Signed-off-by: ericharper <complex451@gmail.com> * add NLPSaveRestoreConnector Signed-off-by: ericharper <complex451@gmail.com> * Megatron GPT training with NeMo tokenizers (#2818) * Update files from megatron repo Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca> * Remove non NLP data related files from megatron Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca> * Merge megatron and nemo tokenizers Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca> * Remove get_tokenizer() calls from gpt model Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca> * Update tokenizer yaml config Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca> * add todo Signed-off-by: ericharper <complex451@gmail.com> * update config Signed-off-by: ericharper <complex451@gmail.com> * make init_method_std configurable Signed-off-by: ericharper <complex451@gmail.com> * make gpu init work by setting random seed earlier Signed-off-by: ericharper <complex451@gmail.com> * fix gpu init after removing debug print in mpu Signed-off-by: ericharper <complex451@gmail.com> * add fused_adam Signed-off-by: ericharper <complex451@gmail.com> * check ds is not none before logging len Signed-off-by: ericharper <complex451@gmail.com> * set fp16 arg to true and fix enum conflict Signed-off-by: ericharper <complex451@gmail.com> * make fp16 arg configurable Signed-off-by: ericharper <complex451@gmail.com> * add grad clip from megatron Signed-off-by: ericharper <complex451@gmail.com> * Linear warmup with cosine annealing and constant holding (#2846) * Testing cosine schedule Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca> * Style fixes Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca> * Fixes Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca> * More fixes Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca> * update config for constant steps in schedule Signed-off-by: ericharper <complex451@gmail.com> * temporarily import enum from megatron Signed-off-by: ericharper <complex451@gmail.com> * add grad clip for fp32 Signed-off-by: ericharper <complex451@gmail.com> * update check for _del_model_without_trainer Signed-off-by: ericharper <complex451@gmail.com> * updating restore for model parallel Signed-off-by: ericharper <complex451@gmail.com> * add predict script Signed-off-by: ericharper <complex451@gmail.com> * update test iters Signed-off-by: ericharper <complex451@gmail.com> * add barrier Signed-off-by: ericharper <complex451@gmail.com> * return if clip_val is 0 or None Signed-off-by: ericharper <complex451@gmail.com> * when using amp clip grads after they are unscaled Signed-off-by: ericharper <complex451@gmail.com> * make native amp scaler hyperparams configurable Signed-off-by: ericharper <complex451@gmail.com> * (1) nvfuser, (2) amp-casting decoration (#2894) * (1) nvfuser, (2) amp-casting decoration Signed-off-by: Sangkug Lym <slym@nvidia.com> * support bf16 Signed-off-by: Sangkug Lym <slym@nvidia.com> * update package info Signed-off-by: ericharper <complex451@gmail.com> * add set device to constructor Signed-off-by: ericharper <complex451@gmail.com> * set_device in constructor Signed-off-by: ericharper <complex451@gmail.com> * [BigNLP] Remove megatron-lm dependency. 
(#2910) * remove args Signed-off-by: ericharper <complex451@gmail.com> * remove args Signed-off-by: ericharper <complex451@gmail.com> * remove args Signed-off-by: ericharper <complex451@gmail.com> * remove args Signed-off-by: ericharper <complex451@gmail.com> * remove args in progress Signed-off-by: ericharper <complex451@gmail.com> * remove args in progress Signed-off-by: ericharper <complex451@gmail.com> * remove args in progress Signed-off-by: ericharper <complex451@gmail.com> * remove args in progress Signed-off-by: ericharper <complex451@gmail.com> * add load_fused_kernels Signed-off-by: ericharper <complex451@gmail.com> * add load_fused_kernels Signed-off-by: ericharper <complex451@gmail.com> * update megatron_init Signed-off-by: ericharper <complex451@gmail.com> * add fused kernels Signed-off-by: ericharper <complex451@gmail.com> * add fused kernels Signed-off-by: ericharper <complex451@gmail.com> * update process batch Signed-off-by: ericharper <complex451@gmail.com> * remove erroneous import Signed-off-by: ericharper <complex451@gmail.com> * remove erroneous import Signed-off-by: ericharper <complex451@gmail.com> * remove erroneous import Signed-off-by: ericharper <complex451@gmail.com> * add megatron clip_grad Signed-off-by: ericharper <complex451@gmail.com> * trying to resolve circular import error Signed-off-by: ericharper <complex451@gmail.com> * rename file Signed-off-by: ericharper <complex451@gmail.com> * remove non-gpt models and datasets from __init__ files Signed-off-by: ericharper <complex451@gmail.com> * set device in constructorfor gpu init Signed-off-by: ericharper <complex451@gmail.com> * set device in constructorfor gpu init Signed-off-by: ericharper <complex451@gmail.com> * set_device in constructor Signed-off-by: ericharper <complex451@gmail.com> * clean config Signed-off-by: ericharper <complex451@gmail.com> * update MegatronDataset Signed-off-by: ericharper <complex451@gmail.com> * clean up MegatronModule Signed-off-by: ericharper <complex451@gmail.com> * clean up MegatronModule Signed-off-by: ericharper <complex451@gmail.com> * rename fp16 and bf16 flags to fused_softmax_input_in_fp16/bf16 Signed-off-by: ericharper <complex451@gmail.com> * rename to fused_fp16 Signed-off-by: ericharper <complex451@gmail.com> * add fused_fp16 arg to LayerNorm calls Signed-off-by: ericharper <complex451@gmail.com> * fix arg name Signed-off-by: ericharper <complex451@gmail.com> * fix arg name Signed-off-by: ericharper <complex451@gmail.com> * fix import Signed-off-by: ericharper <complex451@gmail.com> * update arg Signed-off-by: ericharper <complex451@gmail.com> * skip warmup default to True Signed-off-by: ericharper <complex451@gmail.com> * skip warmup default to True Signed-off-by: ericharper <complex451@gmail.com> * Adding complete method to MegatronGPTModel (#2935) Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com> * make ffn_hidden_size mandatory Signed-off-by: ericharper <complex451@gmail.com> * Manually migrating timing of step into branch (#2937) * 1. Manually migrating timing of step into branch. Signed-off-by: Micha Livne <mlivne@nvidia.com> * 1. Updated file name and content. Signed-off-by: Micha Livne <mlivne@nvidia.com> * 1. Updated to latest code. 
Signed-off-by: Micha Livne <mlivne@nvidia.com> Co-authored-by: Micha Livne <mlivne@nvidia.com> * remove unused imports Signed-off-by: ericharper <complex451@gmail.com> * remove unused import Signed-off-by: ericharper <complex451@gmail.com> * remove unused import Signed-off-by: ericharper <complex451@gmail.com> * remove unused import Signed-off-by: ericharper <complex451@gmail.com> * check fused_fp16 and fused_bf16 are not both True Signed-off-by: ericharper <complex451@gmail.com> * update predict script for model parallel .nemo Signed-off-by: ericharper <complex451@gmail.com> * typo Signed-off-by: ericharper <complex451@gmail.com> * typo Signed-off-by: ericharper <complex451@gmail.com> Co-authored-by: Oleksii Kuchaiev <okuchaiev@users.noreply.github.com> Co-authored-by: Micha Livne <michalivne@users.noreply.github.com> Co-authored-by: Micha Livne <mlivne@nvidia.com> * NVfuser (#2943) * activation checkpoint recompute Signed-off-by: Sangkug Lym <slym@nvidia.com> * selective nvfuser setup * Megatron gpt bfloat support (#2926) * Save/restore fix Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca> * Another merge Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca> * Bf16 args in init Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca> * Set precision Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca> * Remove debug stuff Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca> * add bf16 casting decorator Signed-off-by: Sangkug Lym <slym@nvidia.com> * Bfloat layernorm propagation Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca> * activation checkpoint recompute Signed-off-by: Sangkug Lym <slym@nvidia.com> * selective nvfuser setup * More arg removal Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca> * Remove BERTDataset Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca> * update to latest apex and patch transformer autocast Signed-off-by: ericharper <complex451@gmail.com> Co-authored-by: Sangkug Lym <slym@nvidia.com> Co-authored-by: ericharper <complex451@gmail.com> * don't set jit for bf16 Signed-off-by: ericharper <complex451@gmail.com> * replace apex.mpu Signed-off-by: ericharper <complex451@gmail.com> * fix grad clip Signed-off-by: ericharper <complex451@gmail.com> * NVFuser fixes (#2951) * Fuser fixes Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca> * Remove dummy handler Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca> * Remove PTL plugin based logic for fusion Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca> * remove duplicated file Signed-off-by: ericharper <complex451@gmail.com> * typo (#2960) Signed-off-by: ericharper <complex451@gmail.com> * [BigNLP] Script to convert GPT checkpoint to .nemo (#2958) * remove args Signed-off-by: ericharper <complex451@gmail.com> * remove args Signed-off-by: ericharper <complex451@gmail.com> * remove args Signed-off-by: ericharper <complex451@gmail.com> * remove args Signed-off-by: ericharper <complex451@gmail.com> * remove args in progress Signed-off-by: ericharper <complex451@gmail.com> * remove args in progress Signed-off-by: ericharper <complex451@gmail.com> * remove args in progress Signed-off-by: ericharper <complex451@gmail.com> * remove args in progress Signed-off-by: ericharper <complex451@gmail.com> * add load_fused_kernels Signed-off-by: ericharper <complex451@gmail.com> * add load_fused_kernels Signed-off-by: ericharper <complex451@gmail.com> * update megatron_init 
Signed-off-by: ericharper <complex451@gmail.com> * add fused kernels Signed-off-by: ericharper <complex451@gmail.com> * add fused kernels Signed-off-by: ericharper <complex451@gmail.com> * update process batch Signed-off-by: ericharper <complex451@gmail.com> * remove erroneous import Signed-off-by: ericharper <complex451@gmail.com> * remove erroneous import Signed-off-by: ericharper <complex451@gmail.com> * remove erroneous import Signed-off-by: ericharper <complex451@gmail.com> * add megatron clip_grad Signed-off-by: ericharper <complex451@gmail.com> * trying to resolve circular import error Signed-off-by: ericharper <complex451@gmail.com> * rename file Signed-off-by: ericharper <complex451@gmail.com> * remove non-gpt models and datasets from __init__ files Signed-off-by: ericharper <complex451@gmail.com> * set device in constructorfor gpu init Signed-off-by: ericharper <complex451@gmail.com> * set device in constructorfor gpu init Signed-off-by: ericharper <complex451@gmail.com> * set_device in constructor Signed-off-by: ericharper <complex451@gmail.com> * clean config Signed-off-by: ericharper <complex451@gmail.com> * update MegatronDataset Signed-off-by: ericharper <complex451@gmail.com> * clean up MegatronModule Signed-off-by: ericharper <complex451@gmail.com> * clean up MegatronModule Signed-off-by: ericharper <complex451@gmail.com> * rename fp16 and bf16 flags to fused_softmax_input_in_fp16/bf16 Signed-off-by: ericharper <complex451@gmail.com> * rename to fused_fp16 Signed-off-by: ericharper <complex451@gmail.com> * add fused_fp16 arg to LayerNorm calls Signed-off-by: ericharper <complex451@gmail.com> * fix arg name Signed-off-by: ericharper <complex451@gmail.com> * fix arg name Signed-off-by: ericharper <complex451@gmail.com> * fix import Signed-off-by: ericharper <complex451@gmail.com> * update arg Signed-off-by: ericharper <complex451@gmail.com> * skip warmup default to True Signed-off-by: ericharper <complex451@gmail.com> * skip warmup default to True Signed-off-by: ericharper <complex451@gmail.com> * Adding complete method to MegatronGPTModel (#2935) Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com> * make ffn_hidden_size mandatory Signed-off-by: ericharper <complex451@gmail.com> * Manually migrating timing of step into branch (#2937) * 1. Manually migrating timing of step into branch. Signed-off-by: Micha Livne <mlivne@nvidia.com> * 1. Updated file name and content. Signed-off-by: Micha Livne <mlivne@nvidia.com> * 1. Updated to latest code. 
Signed-off-by: Micha Livne <mlivne@nvidia.com> Co-authored-by: Micha Livne <mlivne@nvidia.com> * remove unused imports Signed-off-by: ericharper <complex451@gmail.com> * remove unused import Signed-off-by: ericharper <complex451@gmail.com> * remove unused import Signed-off-by: ericharper <complex451@gmail.com> * remove unused import Signed-off-by: ericharper <complex451@gmail.com> * check fused_fp16 and fused_bf16 are not both True Signed-off-by: ericharper <complex451@gmail.com> * update predict script for model parallel .nemo Signed-off-by: ericharper <complex451@gmail.com> * typo Signed-off-by: ericharper <complex451@gmail.com> * add script to convert .ckpt to .nemo Signed-off-by: ericharper <complex451@gmail.com> * in progress Signed-off-by: ericharper <complex451@gmail.com> * update Signed-off-by: ericharper <complex451@gmail.com> * convert mp checkpoints to nemo Signed-off-by: ericharper <complex451@gmail.com> * update help Signed-off-by: ericharper <complex451@gmail.com> * add safeguard for model parallel save_to Signed-off-by: ericharper <complex451@gmail.com> * adjust NLPModel save_to to be safer for model parallel Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com> Co-authored-by: Oleksii Kuchaiev <okuchaiev@users.noreply.github.com> Co-authored-by: Micha Livne <michalivne@users.noreply.github.com> Co-authored-by: Micha Livne <mlivne@nvidia.com> Co-authored-by: Oleksii Kuchaiev <okuchaiev@nvidia.com> * [BigNLP] Update GPT evaluation to work with tensor model parallel (#2959) * in progress Signed-off-by: ericharper <complex451@gmail.com> * update args Signed-off-by: ericharper <complex451@gmail.com> * add request dataset Signed-off-by: ericharper <complex451@gmail.com> * tokenize request Signed-off-by: ericharper <complex451@gmail.com> * in progress Signed-off-by: ericharper <complex451@gmail.com> * able to run Signed-off-by: ericharper <complex451@gmail.com> * reduce logits Signed-off-by: ericharper <complex451@gmail.com> * capture response Signed-off-by: ericharper <complex451@gmail.com> * squeeze and unsqueeze Signed-off-by: ericharper <complex451@gmail.com> * handle non model parallel case Signed-off-by: ericharper <complex451@gmail.com> * clean imports Signed-off-by: ericharper <complex451@gmail.com> * add file Signed-off-by: ericharper <complex451@gmail.com> * convert logits to log_probs Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com> * rename logits to log_probs Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com> Co-authored-by: Oleksii Kuchaiev <okuchaiev@nvidia.com> * style Signed-off-by: ericharper <complex451@gmail.com> * fix copyright headers Signed-off-by: ericharper <complex451@gmail.com> * fix copyright headers Signed-off-by: ericharper <complex451@gmail.com> * remove old TimingCallback Signed-off-by: ericharper <complex451@gmail.com> * style Signed-off-by: ericharper <complex451@gmail.com> * update jenkins to use latest apex and sandeep's fork Signed-off-by: ericharper <complex451@gmail.com> * update jenkins Signed-off-by: ericharper <complex451@gmail.com> * update jenkins Signed-off-by: ericharper <complex451@gmail.com> * update jenkins Signed-off-by: ericharper <complex451@gmail.com> * update jenkins Signed-off-by: ericharper <complex451@gmail.com> * try 2109 container Signed-off-by: ericharper <complex451@gmail.com> * try cuda container Signed-off-by: ericharper <complex451@gmail.com> * use internal container Signed-off-by: ericharper <complex451@gmail.com> * update checkpoint tests Signed-off-by: ericharper <complex451@gmail.com> * fix scheduler 
args Signed-off-by: ericharper <complex451@gmail.com> * update eval Signed-off-by: ericharper <complex451@gmail.com> * style Signed-off-by: ericharper <complex451@gmail.com> * update jenkins to use ptl 1.5 rc Signed-off-by: ericharper <complex451@gmail.com> * add import guard to jenkins Signed-off-by: ericharper <complex451@gmail.com> * add import guard to jenkins Signed-off-by: ericharper <complex451@gmail.com> * remove deterministic Signed-off-by: ericharper <complex451@gmail.com> * install numba .53 Signed-off-by: ericharper <complex451@gmail.com> * allow for more variance Signed-off-by: ericharper <complex451@gmail.com> * update trainer config dataclass Signed-off-by: ericharper <complex451@gmail.com> * test_get_optimizer on gpu Signed-off-by: ericharper <complex451@gmail.com> * revert comment Signed-off-by: ericharper <complex451@gmail.com> * change trainer config default to 32 Signed-off-by: ericharper <complex451@gmail.com> * [BigNLP] Remove fused kernel code instead use Apex (#2984) * remove fused_kernels Signed-off-by: ericharper <complex451@gmail.com> * remove fused_kernels Signed-off-by: ericharper <complex451@gmail.com> * remove fused layer norm and fused softmax and use apex instead Signed-off-by: ericharper <complex451@gmail.com> * update imports Signed-off-by: ericharper <complex451@gmail.com> * remove comment Signed-off-by: ericharper <complex451@gmail.com> * use apex enums Signed-off-by: ericharper <complex451@gmail.com> * use apex enums Signed-off-by: ericharper <complex451@gmail.com> * add tab Signed-off-by: ericharper <complex451@gmail.com> * Timer with sliding window (#3002) Co-authored-by: Micha Livne <michalivne@users.noreply.github.com> * revert tab Signed-off-by: ericharper <complex451@gmail.com> * check for rank zero Signed-off-by: ericharper <complex451@gmail.com> * check for rank zero Signed-off-by: ericharper <complex451@gmail.com> * try explicit log dir Signed-off-by: ericharper <complex451@gmail.com> * add + Signed-off-by: ericharper <complex451@gmail.com> * don't rm Signed-off-by: ericharper <complex451@gmail.com> * make dir if it doesn't exist Signed-off-by: ericharper <complex451@gmail.com> * create mp nemo file in temp directory Signed-off-by: ericharper <complex451@gmail.com> * simplify mp save_to Signed-off-by: ericharper <complex451@gmail.com> * handle mp 1 case Signed-off-by: ericharper <complex451@gmail.com> * style fix Signed-off-by: ericharper <complex451@gmail.com> * remove files Signed-off-by: ericharper <complex451@gmail.com> * fix consumed_samples when resuming Signed-off-by: ericharper <complex451@gmail.com> * fix reinstall.sh Signed-off-by: ericharper <complex451@gmail.com> * update req Signed-off-by: ericharper <complex451@gmail.com> * add more detailed log for dataloaders Signed-off-by: ericharper <complex451@gmail.com> * check if cuda is available before using fused_adam Signed-off-by: ericharper <complex451@gmail.com> * revert comment Signed-off-by: ericharper <complex451@gmail.com> * update eval script to use model.freeze Signed-off-by: ericharper <complex451@gmail.com> * log train loss averaged over gradient accumulation steps Signed-off-by: ericharper <complex451@gmail.com> * check copyright earlier Signed-off-by: ericharper <complex451@gmail.com> * todo Signed-off-by: ericharper <complex451@gmail.com> * override SaveRestoreConnector in NLPModel init Signed-off-by: ericharper <complex451@gmail.com> * move to scripts Signed-off-by: ericharper <complex451@gmail.com> * remove star import Signed-off-by: ericharper <complex451@gmail.com> * 
remove comments Signed-off-by: ericharper <complex451@gmail.com> * remove unused dataset Signed-off-by: ericharper <complex451@gmail.com> * removed barrier Signed-off-by: ericharper <complex451@gmail.com> * check cfg Signed-off-by: ericharper <complex451@gmail.com> * remove logging Signed-off-by: ericharper <complex451@gmail.com> * freeze, unfreeze Signed-off-by: ericharper <complex451@gmail.com> * return None Signed-off-by: ericharper <complex451@gmail.com> * remove unused imports Signed-off-by: ericharper <complex451@gmail.com> * add TODO Signed-off-by: ericharper <complex451@gmail.com> * typecheck Signed-off-by: ericharper <complex451@gmail.com> * typo Signed-off-by: ericharper <complex451@gmail.com> * todo Signed-off-by: ericharper <complex451@gmail.com> * add common native plugin Signed-off-by: ericharper <complex451@gmail.com> * restore with trainer Signed-off-by: ericharper <complex451@gmail.com> * style Signed-off-by: ericharper <complex451@gmail.com> * deprecate megatron-lm bert Signed-off-by: ericharper <complex451@gmail.com> * deprecate megatron-lm bert Signed-off-by: ericharper <complex451@gmail.com> * compile helpers ont he fly Signed-off-by: ericharper <complex451@gmail.com> * remove amp_level Signed-off-by: ericharper <complex451@gmail.com> * remove amp_level from configs Signed-off-by: ericharper <complex451@gmail.com> * add missing import Signed-off-by: ericharper <complex451@gmail.com> * typo Signed-off-by: ericharper <complex451@gmail.com> * remove amp_level Signed-off-by: ericharper <complex451@gmail.com> * use fast huggingface tokenizers by default Signed-off-by: ericharper <complex451@gmail.com> * deal with huggingface tokenizer positional args Signed-off-by: ericharper <complex451@gmail.com> * deal with huggingface tokenizer positional args Signed-off-by: ericharper <complex451@gmail.com> * deal with huggingface tokenizer positional args Signed-off-by: ericharper <complex451@gmail.com> * revert use_fast default to False Signed-off-by: ericharper <complex451@gmail.com> * return super training_epoch_end Signed-off-by: ericharper <complex451@gmail.com> * remove optimizer_idx arg from training_step Signed-off-by: ericharper <complex451@gmail.com> * remove unused arg from on_train_epoch_end Signed-off-by: ericharper <complex451@gmail.com> * add restore_from_path to nemo config Signed-off-by: ericharper <complex451@gmail.com> * add comment Signed-off-by: ericharper <complex451@gmail.com> * revert Signed-off-by: ericharper <complex451@gmail.com> * override connector if not subclassing NLPSaveRestoreConnector for model parallel save Signed-off-by: ericharper <complex451@gmail.com> * update test optimizer Signed-off-by: ericharper <complex451@gmail.com> * clean up Signed-off-by: ericharper <complex451@gmail.com> * clean up Signed-off-by: ericharper <complex451@gmail.com> * clean up Signed-off-by: ericharper <complex451@gmail.com> * clean up Signed-off-by: ericharper <complex451@gmail.com> * make data_prefix mandatory in config Signed-off-by: ericharper <complex451@gmail.com> * update installation instructions on readme Signed-off-by: ericharper <complex451@gmail.com> * update dockerfile Signed-off-by: ericharper <complex451@gmail.com> * add todo Signed-off-by: ericharper <complex451@gmail.com> * raise error if trying to use always_save_nemo with model parallel model Signed-off-by: ericharper <complex451@gmail.com> * remove comment Signed-off-by: ericharper <complex451@gmail.com> Co-authored-by: Sandeep Subramanian <sandeep.subramanian.1@umontreal.ca> Co-authored-by: Sangkug 
Lym <slym@nvidia.com> Co-authored-by: Oleksii Kuchaiev <okuchaiev@users.noreply.github.com> Co-authored-by: Micha Livne <michalivne@users.noreply.github.com> Co-authored-by: Micha Livne <mlivne@nvidia.com> Co-authored-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
2021-10-21 05:06:37 +02:00
stack=67108864 --device=/dev/snd nvcr.io/nvidia/pytorch:21.10-py3
Examples
--------
Many examples can be found in the `"Examples" <https://github.com/NVIDIA/NeMo/tree/stable/examples>`_ folder.
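A typical example script is a Hydra-configured Python entry point that accepts config overrides on the command line. The sketch below is purely illustrative: the script path, manifest paths, and override keys are assumptions rather than a specific supported command, so check the configs shipped with each example for the exact options.

.. code-block:: bash

    # Hypothetical launch of an example training script with Hydra-style overrides.
    # The script path and config keys are placeholders; the real ones vary per example.
    python examples/asr/speech_to_text_ctc.py \
        model.train_ds.manifest_filepath=/path/to/train_manifest.json \
        model.validation_ds.manifest_filepath=/path/to/val_manifest.json \
        trainer.gpus=1 \
        trainer.max_epochs=50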
Contributing
------------
We welcome community contributions! Please refer to `CONTRIBUTING.md <https://github.com/NVIDIA/NeMo/blob/stable/CONTRIBUTING.md>`_ for the process.
Publications
------------
We provide an ever-growing list of publications that utilize the NeMo framework. Please refer to `PUBLICATIONS.md <https://github.com/NVIDIA/NeMo/blob/main/PUBLICATIONS.md>`_. We welcome the addition of your own articles to this list!
Citation
--------
.. code-block:: bibtex

  @article{kuchaiev2019nemo,
    title={Nemo: a toolkit for building ai applications using neural modules},
    author={Kuchaiev, Oleksii and Li, Jason and Nguyen, Huyen and Hrinchuk, Oleksii and Leary, Ryan and Ginsburg, Boris and Kriman, Samuel and Beliaev, Stanislav and Lavrukhin, Vitaly and Cook, Jack and others},
    journal={arXiv preprint arXiv:1909.09577},
    year={2019}
  }
License
-------
NeMo is released under the `Apache 2.0 license <https://github.com/NVIDIA/NeMo/blob/stable/LICENSE>`_.