Satpal Singh Rathore
b12ac8ae85
Typo correction in README.rst ( #3103 )
...
Signed-off-by: Satpal Singh Rathore <satpalsinghrathore001@gmail.com>
2021-11-09 21:23:38 -08:00
Nithin Rao
dc9ed88f78
Modify speaker input ( #3100 )
...
* initial_commit
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* init diarizer
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* vad+speaker
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* vad update
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* speaker done
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* initial working version
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* compare outputs
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* added uem support
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* pyannote improvements
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* updated config and script name
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* style fix
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* update Jenkins file
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* jenkins fix
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* jenkins fix
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* update file path in jenkins
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* update file path in jenkins
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* update file path in jenkins
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* jenkins quote fix
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* update offline speaker diarization notebook
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* initial working asr_with_diarization
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* almost done, revisit scoring part
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* fixed eval in offline diarization with asr
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* update write2manifest to consider only up to max audio duration
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* asr with diarization notebook
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* Fixed ASR_with_diarization tutorial.ipynb and diarization_utils and edited config yaml file
Signed-off-by: Taejin Park <tango4j@gmail.com>
* Fixed VAD parameters in Speaker_Diarization_Inference.ipynb
Signed-off-by: Taejin Park <tango4j@gmail.com>
* Added Jenkins test, doc strings and updated README
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* update jenkins test
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* Doc info in offline_diarization_with_asr
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* Review comments
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* update outdir paths
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
Co-authored-by: Taejin Park <tango4j@gmail.com>
2021-11-06 10:55:32 -04:00
Eric Harper
1c2c268db1
fix readme ( #3070 )
...
Signed-off-by: ericharper <complex451@gmail.com>
2021-10-27 16:23:24 -06:00
Somshubra Majumdar
f8d8d069e5
Add PUBLICATIONS.md ( #3051 )
...
* Add PUBLICATIONS.md
Signed-off-by: smajumdar <titu1994@gmail.com>
* Add NLP
Signed-off-by: smajumdar <titu1994@gmail.com>
* Update PUBLICATIONS.md
* Update PUBLICATIONS.md
* Fix links
Signed-off-by: smajumdar <titu1994@gmail.com>
Co-authored-by: Eric Harper <complex451@gmail.com>
2021-10-27 09:48:16 -07:00
Eric Harper
32fa5cfaf3
[BigNLP] Merge Megatron GPT to main ( #2975 )
...
* fix gpu init after removing debug print in mpu
Signed-off-by: ericharper <complex451@gmail.com>
* add fused_adam
Signed-off-by: ericharper <complex451@gmail.com>
* check ds is not none before logging len
Signed-off-by: ericharper <complex451@gmail.com>
* set fp16 arg to true and fix enum conflict
Signed-off-by: ericharper <complex451@gmail.com>
* make fp16 arg configurable
Signed-off-by: ericharper <complex451@gmail.com>
* add grad clip from megatron
Signed-off-by: ericharper <complex451@gmail.com>
* Linear warmup with cosine annealing and constant holding (#2846 )
* Testing cosine schedule
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* Style fixes
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* Fixes
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* More fixes
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* update config for constant steps in schedule
Signed-off-by: ericharper <complex451@gmail.com>
* temporarily import enum from megatron
Signed-off-by: ericharper <complex451@gmail.com>
* add grad clip for fp32
Signed-off-by: ericharper <complex451@gmail.com>
* update check for _del_model_without_trainer
Signed-off-by: ericharper <complex451@gmail.com>
* updating restore for model parallel
Signed-off-by: ericharper <complex451@gmail.com>
* add predict script
Signed-off-by: ericharper <complex451@gmail.com>
* update test iters
Signed-off-by: ericharper <complex451@gmail.com>
* add barrier
Signed-off-by: ericharper <complex451@gmail.com>
* return if clip_val is 0 or None
Signed-off-by: ericharper <complex451@gmail.com>
* when using amp clip grads after they are unscaled
Signed-off-by: ericharper <complex451@gmail.com>
* make native amp scaler hyperparams configurable
Signed-off-by: ericharper <complex451@gmail.com>
* (1) nvfuser, (2) amp-casting decoration (#2894 )
* (1) nvfuser, (2) amp-casting decoration
Signed-off-by: Sangkug Lym <slym@nvidia.com>
* support bf16
Signed-off-by: Sangkug Lym <slym@nvidia.com>
* update package info
Signed-off-by: ericharper <complex451@gmail.com>
* add set device to constructor
Signed-off-by: ericharper <complex451@gmail.com>
* set_device in constructor
Signed-off-by: ericharper <complex451@gmail.com>
* [BigNLP] Remove megatron-lm dependency. (#2910 )
* remove args
Signed-off-by: ericharper <complex451@gmail.com>
* remove args
Signed-off-by: ericharper <complex451@gmail.com>
* remove args
Signed-off-by: ericharper <complex451@gmail.com>
* remove args
Signed-off-by: ericharper <complex451@gmail.com>
* remove args in progress
Signed-off-by: ericharper <complex451@gmail.com>
* remove args in progress
Signed-off-by: ericharper <complex451@gmail.com>
* remove args in progress
Signed-off-by: ericharper <complex451@gmail.com>
* remove args in progress
Signed-off-by: ericharper <complex451@gmail.com>
* add load_fused_kernels
Signed-off-by: ericharper <complex451@gmail.com>
* add load_fused_kernels
Signed-off-by: ericharper <complex451@gmail.com>
* update megatron_init
Signed-off-by: ericharper <complex451@gmail.com>
* add fused kernels
Signed-off-by: ericharper <complex451@gmail.com>
* add fused kernels
Signed-off-by: ericharper <complex451@gmail.com>
* update process batch
Signed-off-by: ericharper <complex451@gmail.com>
* remove erroneous import
Signed-off-by: ericharper <complex451@gmail.com>
* remove erroneous import
Signed-off-by: ericharper <complex451@gmail.com>
* remove erroneous import
Signed-off-by: ericharper <complex451@gmail.com>
* add megatron clip_grad
Signed-off-by: ericharper <complex451@gmail.com>
* trying to resolve circular import error
Signed-off-by: ericharper <complex451@gmail.com>
* rename file
Signed-off-by: ericharper <complex451@gmail.com>
* remove non-gpt models and datasets from __init__ files
Signed-off-by: ericharper <complex451@gmail.com>
* set device in constructor for gpu init
Signed-off-by: ericharper <complex451@gmail.com>
* set device in constructor for gpu init
Signed-off-by: ericharper <complex451@gmail.com>
* set_device in constructor
Signed-off-by: ericharper <complex451@gmail.com>
* clean config
Signed-off-by: ericharper <complex451@gmail.com>
* update MegatronDataset
Signed-off-by: ericharper <complex451@gmail.com>
* clean up MegatronModule
Signed-off-by: ericharper <complex451@gmail.com>
* clean up MegatronModule
Signed-off-by: ericharper <complex451@gmail.com>
* rename fp16 and bf16 flags to fused_softmax_input_in_fp16/bf16
Signed-off-by: ericharper <complex451@gmail.com>
* rename to fused_fp16
Signed-off-by: ericharper <complex451@gmail.com>
* add fused_fp16 arg to LayerNorm calls
Signed-off-by: ericharper <complex451@gmail.com>
* fix arg name
Signed-off-by: ericharper <complex451@gmail.com>
* fix arg name
Signed-off-by: ericharper <complex451@gmail.com>
* fix import
Signed-off-by: ericharper <complex451@gmail.com>
* update arg
Signed-off-by: ericharper <complex451@gmail.com>
* skip warmup default to True
Signed-off-by: ericharper <complex451@gmail.com>
* skip warmup default to True
Signed-off-by: ericharper <complex451@gmail.com>
* Adding complete method to MegatronGPTModel (#2935 )
Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
* make ffn_hidden_size mandatory
Signed-off-by: ericharper <complex451@gmail.com>
* Manually migrating timing of step into branch (#2937 )
* 1. Manually migrating timing of step into branch.
Signed-off-by: Micha Livne <mlivne@nvidia.com>
* 1. Updated file name and content.
Signed-off-by: Micha Livne <mlivne@nvidia.com>
* 1. Updated to latest code.
Signed-off-by: Micha Livne <mlivne@nvidia.com>
Co-authored-by: Micha Livne <mlivne@nvidia.com>
* remove unused imports
Signed-off-by: ericharper <complex451@gmail.com>
* remove unused import
Signed-off-by: ericharper <complex451@gmail.com>
* remove unused import
Signed-off-by: ericharper <complex451@gmail.com>
* remove unused import
Signed-off-by: ericharper <complex451@gmail.com>
* check fused_fp16 and fused_bf16 are not both True
Signed-off-by: ericharper <complex451@gmail.com>
* update predict script for model parallel .nemo
Signed-off-by: ericharper <complex451@gmail.com>
* typo
Signed-off-by: ericharper <complex451@gmail.com>
* typo
Signed-off-by: ericharper <complex451@gmail.com>
Co-authored-by: Oleksii Kuchaiev <okuchaiev@users.noreply.github.com>
Co-authored-by: Micha Livne <michalivne@users.noreply.github.com>
Co-authored-by: Micha Livne <mlivne@nvidia.com>
* NVfuser (#2943 )
* activation checkpoint recompute
Signed-off-by: Sangkug Lym <slym@nvidia.com>
* selective nvfuser setup
* Megatron gpt bfloat support (#2926 )
* Save/restore fix
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* Another merge
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* Bf16 args in init
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* Set precision
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* Remove debug stuff
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* add bf16 casting decorator
Signed-off-by: Sangkug Lym <slym@nvidia.com>
* Bfloat layernorm propagation
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* activation checkpoint recompute
Signed-off-by: Sangkug Lym <slym@nvidia.com>
* selective nvfuser setup
* More arg removal
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* Remove BERTDataset
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* update to latest apex and patch transformer autocast
Signed-off-by: ericharper <complex451@gmail.com>
Co-authored-by: Sangkug Lym <slym@nvidia.com>
Co-authored-by: ericharper <complex451@gmail.com>
* don't set jit for bf16
Signed-off-by: ericharper <complex451@gmail.com>
* replace apex.mpu
Signed-off-by: ericharper <complex451@gmail.com>
* fix grad clip
Signed-off-by: ericharper <complex451@gmail.com>
* NVFuser fixes (#2951 )
* Fuser fixes
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* Remove dummy handler
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* Remove PTL plugin based logic for fusion
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* remove duplicated file
Signed-off-by: ericharper <complex451@gmail.com>
* typo (#2960 )
Signed-off-by: ericharper <complex451@gmail.com>
* [BigNLP] Script to convert GPT checkpoint to .nemo (#2958 )
* remove args
Signed-off-by: ericharper <complex451@gmail.com>
* remove args
Signed-off-by: ericharper <complex451@gmail.com>
* remove args
Signed-off-by: ericharper <complex451@gmail.com>
* remove args
Signed-off-by: ericharper <complex451@gmail.com>
* remove args in progress
Signed-off-by: ericharper <complex451@gmail.com>
* remove args in progress
Signed-off-by: ericharper <complex451@gmail.com>
* remove args in progress
Signed-off-by: ericharper <complex451@gmail.com>
* remove args in progress
Signed-off-by: ericharper <complex451@gmail.com>
* add load_fused_kernels
Signed-off-by: ericharper <complex451@gmail.com>
* add load_fused_kernels
Signed-off-by: ericharper <complex451@gmail.com>
* update megatron_init
Signed-off-by: ericharper <complex451@gmail.com>
* add fused kernels
Signed-off-by: ericharper <complex451@gmail.com>
* add fused kernels
Signed-off-by: ericharper <complex451@gmail.com>
* update process batch
Signed-off-by: ericharper <complex451@gmail.com>
* remove erroneous import
Signed-off-by: ericharper <complex451@gmail.com>
* remove erroneous import
Signed-off-by: ericharper <complex451@gmail.com>
* remove erroneous import
Signed-off-by: ericharper <complex451@gmail.com>
* add megatron clip_grad
Signed-off-by: ericharper <complex451@gmail.com>
* trying to resolve circular import error
Signed-off-by: ericharper <complex451@gmail.com>
* rename file
Signed-off-by: ericharper <complex451@gmail.com>
* remove non-gpt models and datasets from __init__ files
Signed-off-by: ericharper <complex451@gmail.com>
* set device in constructor for gpu init
Signed-off-by: ericharper <complex451@gmail.com>
* set device in constructor for gpu init
Signed-off-by: ericharper <complex451@gmail.com>
* set_device in constructor
Signed-off-by: ericharper <complex451@gmail.com>
* clean config
Signed-off-by: ericharper <complex451@gmail.com>
* update MegatronDataset
Signed-off-by: ericharper <complex451@gmail.com>
* clean up MegatronModule
Signed-off-by: ericharper <complex451@gmail.com>
* clean up MegatronModule
Signed-off-by: ericharper <complex451@gmail.com>
* rename fp16 and bf16 flags to fused_softmax_input_in_fp16/bf16
Signed-off-by: ericharper <complex451@gmail.com>
* rename to fused_fp16
Signed-off-by: ericharper <complex451@gmail.com>
* add fused_fp16 arg to LayerNorm calls
Signed-off-by: ericharper <complex451@gmail.com>
* fix arg name
Signed-off-by: ericharper <complex451@gmail.com>
* fix arg name
Signed-off-by: ericharper <complex451@gmail.com>
* fix import
Signed-off-by: ericharper <complex451@gmail.com>
* update arg
Signed-off-by: ericharper <complex451@gmail.com>
* skip warmup default to True
Signed-off-by: ericharper <complex451@gmail.com>
* skip warmup default to True
Signed-off-by: ericharper <complex451@gmail.com>
* Adding complete method to MegatronGPTModel (#2935 )
Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
* make ffn_hidden_size mandatory
Signed-off-by: ericharper <complex451@gmail.com>
* Manually migrating timing of step into branch (#2937 )
* 1. Manually migrating timing of step into branch.
Signed-off-by: Micha Livne <mlivne@nvidia.com>
* 1. Updated file name and content.
Signed-off-by: Micha Livne <mlivne@nvidia.com>
* 1. Updated to latest code.
Signed-off-by: Micha Livne <mlivne@nvidia.com>
Co-authored-by: Micha Livne <mlivne@nvidia.com>
* remove unused imports
Signed-off-by: ericharper <complex451@gmail.com>
* remove unused import
Signed-off-by: ericharper <complex451@gmail.com>
* remove unused import
Signed-off-by: ericharper <complex451@gmail.com>
* remove unused import
Signed-off-by: ericharper <complex451@gmail.com>
* check fused_fp16 and fused_bf16 are not both True
Signed-off-by: ericharper <complex451@gmail.com>
* update predict script for model parallel .nemo
Signed-off-by: ericharper <complex451@gmail.com>
* typo
Signed-off-by: ericharper <complex451@gmail.com>
* add script to convert .ckpt to .nemo
Signed-off-by: ericharper <complex451@gmail.com>
* in progress
Signed-off-by: ericharper <complex451@gmail.com>
* update
Signed-off-by: ericharper <complex451@gmail.com>
* convert mp checkpoints to nemo
Signed-off-by: ericharper <complex451@gmail.com>
* update help
Signed-off-by: ericharper <complex451@gmail.com>
* add safeguard for model parallel save_to
Signed-off-by: ericharper <complex451@gmail.com>
* adjust NLPModel save_to to be safer for model parallel
Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
Co-authored-by: Oleksii Kuchaiev <okuchaiev@users.noreply.github.com>
Co-authored-by: Micha Livne <michalivne@users.noreply.github.com>
Co-authored-by: Micha Livne <mlivne@nvidia.com>
Co-authored-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
* [BigNLP] Update GPT evaluation to work with tensor model parallel (#2959 )
* in progress
Signed-off-by: ericharper <complex451@gmail.com>
* update args
Signed-off-by: ericharper <complex451@gmail.com>
* add request dataset
Signed-off-by: ericharper <complex451@gmail.com>
* tokenize request
Signed-off-by: ericharper <complex451@gmail.com>
* in progress
Signed-off-by: ericharper <complex451@gmail.com>
* able to run
Signed-off-by: ericharper <complex451@gmail.com>
* reduce logits
Signed-off-by: ericharper <complex451@gmail.com>
* capture response
Signed-off-by: ericharper <complex451@gmail.com>
* squeeze and unsqueeze
Signed-off-by: ericharper <complex451@gmail.com>
* handle non model parallel case
Signed-off-by: ericharper <complex451@gmail.com>
* clean imports
Signed-off-by: ericharper <complex451@gmail.com>
* add file
Signed-off-by: ericharper <complex451@gmail.com>
* convert logits to log_probs
Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
* rename logits to log_probs
Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
Co-authored-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
* add megatron gpt pretraining
Signed-off-by: ericharper <complex451@gmail.com>
* add megatron gpt pretraining
Signed-off-by: ericharper <complex451@gmail.com>
* add megatron gpt pretraining
Signed-off-by: ericharper <complex451@gmail.com>
* updating to work with latest megatron
Signed-off-by: ericharper <complex451@gmail.com>
* updating to work with latest megatron
Signed-off-by: ericharper <complex451@gmail.com>
* update _del_model
Signed-off-by: ericharper <complex451@gmail.com>
* adding gpt model
Signed-off-by: ericharper <complex451@gmail.com>
* adding gpt model
Signed-off-by: ericharper <complex451@gmail.com>
* adding gpt model
Signed-off-by: ericharper <complex451@gmail.com>
* instantiate GPTmodel
Signed-off-by: ericharper <complex451@gmail.com>
* adding build dataset
Signed-off-by: ericharper <complex451@gmail.com>
* build megatron dataset in .setup
Signed-off-by: ericharper <complex451@gmail.com>
* setup dataloader
Signed-off-by: ericharper <complex451@gmail.com>
* add vocab_file and merge_file to megatron init
Signed-off-by: ericharper <complex451@gmail.com>
* add forward
Signed-off-by: ericharper <complex451@gmail.com>
* add train loss
Signed-off-by: ericharper <complex451@gmail.com>
* add optimizer
Signed-off-by: ericharper <complex451@gmail.com>
* add exp_manager
Signed-off-by: ericharper <complex451@gmail.com>
* multi-gpu is working
Signed-off-by: ericharper <complex451@gmail.com>
* adding val loop
Signed-off-by: ericharper <complex451@gmail.com>
* style
Signed-off-by: ericharper <complex451@gmail.com>
* adding val loop
Signed-off-by: ericharper <complex451@gmail.com>
* fix ranks
Signed-off-by: ericharper <complex451@gmail.com>
* fix model parallel checkpoint saving
Signed-off-by: ericharper <complex451@gmail.com>
* fix _del_model
Signed-off-by: ericharper <complex451@gmail.com>
* added megatron batch sampler
Signed-off-by: ericharper <complex451@gmail.com>
* try to fix num steps
Signed-off-by: ericharper <complex451@gmail.com>
* add wandb to config
Signed-off-by: ericharper <complex451@gmail.com>
* log lr
Signed-off-by: ericharper <complex451@gmail.com>
* add warmup ratio to config
Signed-off-by: ericharper <complex451@gmail.com>
* update configs
Signed-off-by: ericharper <complex451@gmail.com>
* update configs
Signed-off-by: ericharper <complex451@gmail.com>
* add cpu init to args
Signed-off-by: ericharper <complex451@gmail.com>
* update config
Signed-off-by: ericharper <complex451@gmail.com>
* update config
Signed-off-by: ericharper <complex451@gmail.com>
* Initial megatron dataset port
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* Fix merge conflicts
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* License fixes and megatron model porting
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* Style fixes
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* More fixes to import from nemo rather than megatron
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* Fix circular imports
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* Style fixes
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* Revert config file
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* Restructure further to avoid circular imports
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* add Makefile
Signed-off-by: ericharper <complex451@gmail.com>
* Add megatron modules
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* add license
Signed-off-by: ericharper <complex451@gmail.com>
* Port from latest megatron
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* update cfg
Signed-off-by: ericharper <complex451@gmail.com>
* update config
Signed-off-by: ericharper <complex451@gmail.com>
* add _del_model_without_trainer
Signed-off-by: ericharper <complex451@gmail.com>
* add data preprocessing script
Signed-off-by: ericharper <complex451@gmail.com>
* update config
Signed-off-by: ericharper <complex451@gmail.com>
* use apex mpu
Signed-off-by: ericharper <complex451@gmail.com>
* replace print_rank_0 with nemo utils logging
Signed-off-by: ericharper <complex451@gmail.com>
* use apex mpu
Signed-off-by: ericharper <complex451@gmail.com>
* use apex mpu
Signed-off-by: ericharper <complex451@gmail.com>
* add use_cpu_initialization
Signed-off-by: ericharper <complex451@gmail.com>
* fixing autoresume in progress
Signed-off-by: ericharper <complex451@gmail.com>
* properly removing last checkpoint
Signed-off-by: ericharper <complex451@gmail.com>
* log consumed samples
Signed-off-by: ericharper <complex451@gmail.com>
* fix mp autoresume
Signed-off-by: ericharper <complex451@gmail.com>
* add NLPSaveRestoreConnector
Signed-off-by: ericharper <complex451@gmail.com>
* Megatron GPT training with NeMo tokenizers (#2818 )
* Update files from megatron repo
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* Remove non NLP data related files from megatron
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* Merge megatron and nemo tokenizers
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* Remove get_tokenizer() calls from gpt model
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* Update tokenizer yaml config
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* add todo
Signed-off-by: ericharper <complex451@gmail.com>
* update config
Signed-off-by: ericharper <complex451@gmail.com>
* make init_method_std configurable
Signed-off-by: ericharper <complex451@gmail.com>
* make gpu init work by setting random seed earlier
Signed-off-by: ericharper <complex451@gmail.com>
* fix gpu init after removing debug print in mpu
Signed-off-by: ericharper <complex451@gmail.com>
* add fused_adam
Signed-off-by: ericharper <complex451@gmail.com>
* check ds is not none before logging len
Signed-off-by: ericharper <complex451@gmail.com>
* set fp16 arg to true and fix enum conflict
Signed-off-by: ericharper <complex451@gmail.com>
* make fp16 arg configurable
Signed-off-by: ericharper <complex451@gmail.com>
* add grad clip from megatron
Signed-off-by: ericharper <complex451@gmail.com>
* Linear warmup with cosine annealing and constant holding (#2846 )
* Testing cosine schedule
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* Style fixes
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* Fixes
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* More fixes
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* update config for constant steps in schedule
Signed-off-by: ericharper <complex451@gmail.com>
* temporarily import enum from megatron
Signed-off-by: ericharper <complex451@gmail.com>
* add grad clip for fp32
Signed-off-by: ericharper <complex451@gmail.com>
* update check for _del_model_without_trainer
Signed-off-by: ericharper <complex451@gmail.com>
* updating restore for model parallel
Signed-off-by: ericharper <complex451@gmail.com>
* add predict script
Signed-off-by: ericharper <complex451@gmail.com>
* update test iters
Signed-off-by: ericharper <complex451@gmail.com>
* add barrier
Signed-off-by: ericharper <complex451@gmail.com>
* return if clip_val is 0 or None
Signed-off-by: ericharper <complex451@gmail.com>
* when using amp clip grads after they are unscaled
Signed-off-by: ericharper <complex451@gmail.com>
* make native amp scaler hyperparams configurable
Signed-off-by: ericharper <complex451@gmail.com>
* (1) nvfuser, (2) amp-casting decoration (#2894 )
* (1) nvfuser, (2) amp-casting decoration
Signed-off-by: Sangkug Lym <slym@nvidia.com>
* support bf16
Signed-off-by: Sangkug Lym <slym@nvidia.com>
* update package info
Signed-off-by: ericharper <complex451@gmail.com>
* add set device to constructor
Signed-off-by: ericharper <complex451@gmail.com>
* set_device in constructor
Signed-off-by: ericharper <complex451@gmail.com>
* [BigNLP] Remove megatron-lm dependency. (#2910 )
* remove args
Signed-off-by: ericharper <complex451@gmail.com>
* remove args
Signed-off-by: ericharper <complex451@gmail.com>
* remove args
Signed-off-by: ericharper <complex451@gmail.com>
* remove args
Signed-off-by: ericharper <complex451@gmail.com>
* remove args in progress
Signed-off-by: ericharper <complex451@gmail.com>
* remove args in progress
Signed-off-by: ericharper <complex451@gmail.com>
* remove args in progress
Signed-off-by: ericharper <complex451@gmail.com>
* remove args in progress
Signed-off-by: ericharper <complex451@gmail.com>
* add load_fused_kernels
Signed-off-by: ericharper <complex451@gmail.com>
* add load_fused_kernels
Signed-off-by: ericharper <complex451@gmail.com>
* update megatron_init
Signed-off-by: ericharper <complex451@gmail.com>
* add fused kernels
Signed-off-by: ericharper <complex451@gmail.com>
* add fused kernels
Signed-off-by: ericharper <complex451@gmail.com>
* update process batch
Signed-off-by: ericharper <complex451@gmail.com>
* remove erroneous import
Signed-off-by: ericharper <complex451@gmail.com>
* remove erroneous import
Signed-off-by: ericharper <complex451@gmail.com>
* remove erroneous import
Signed-off-by: ericharper <complex451@gmail.com>
* add megatron clip_grad
Signed-off-by: ericharper <complex451@gmail.com>
* trying to resolve circular import error
Signed-off-by: ericharper <complex451@gmail.com>
* rename file
Signed-off-by: ericharper <complex451@gmail.com>
* remove non-gpt models and datasets from __init__ files
Signed-off-by: ericharper <complex451@gmail.com>
* set device in constructorfor gpu init
Signed-off-by: ericharper <complex451@gmail.com>
* set device in constructorfor gpu init
Signed-off-by: ericharper <complex451@gmail.com>
* set_device in constructor
Signed-off-by: ericharper <complex451@gmail.com>
* clean config
Signed-off-by: ericharper <complex451@gmail.com>
* update MegatronDataset
Signed-off-by: ericharper <complex451@gmail.com>
* clean up MegatronModule
Signed-off-by: ericharper <complex451@gmail.com>
* clean up MegatronModule
Signed-off-by: ericharper <complex451@gmail.com>
* rename fp16 and bf16 flags to fused_softmax_input_in_fp16/bf16
Signed-off-by: ericharper <complex451@gmail.com>
* rename to fused_fp16
Signed-off-by: ericharper <complex451@gmail.com>
* add fused_fp16 arg to LayerNorm calls
Signed-off-by: ericharper <complex451@gmail.com>
* fix arg name
Signed-off-by: ericharper <complex451@gmail.com>
* fix arg name
Signed-off-by: ericharper <complex451@gmail.com>
* fix import
Signed-off-by: ericharper <complex451@gmail.com>
* update arg
Signed-off-by: ericharper <complex451@gmail.com>
* skip warmup default to True
Signed-off-by: ericharper <complex451@gmail.com>
* skip warmup default to True
Signed-off-by: ericharper <complex451@gmail.com>
* Adding complete method to MegatronGPTModel (#2935 )
Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
* make ffn_hidden_size mandatory
Signed-off-by: ericharper <complex451@gmail.com>
* Manually migrating timing of step into branch (#2937 )
* 1. Manually migrating timing of step into branch.
Signed-off-by: Micha Livne <mlivne@nvidia.com>
* 1. Updated file name and content.
Signed-off-by: Micha Livne <mlivne@nvidia.com>
* 1. Updated to latest code.
Signed-off-by: Micha Livne <mlivne@nvidia.com>
Co-authored-by: Micha Livne <mlivne@nvidia.com>
* remove unused imports
Signed-off-by: ericharper <complex451@gmail.com>
* remove unused import
Signed-off-by: ericharper <complex451@gmail.com>
* remove unused import
Signed-off-by: ericharper <complex451@gmail.com>
* remove unused import
Signed-off-by: ericharper <complex451@gmail.com>
* check fused_fp16 and fused_bf16 are not both True
Signed-off-by: ericharper <complex451@gmail.com>
* update predict script for model parallel .nemo
Signed-off-by: ericharper <complex451@gmail.com>
* typo
Signed-off-by: ericharper <complex451@gmail.com>
* typo
Signed-off-by: ericharper <complex451@gmail.com>
Co-authored-by: Oleksii Kuchaiev <okuchaiev@users.noreply.github.com>
Co-authored-by: Micha Livne <michalivne@users.noreply.github.com>
Co-authored-by: Micha Livne <mlivne@nvidia.com>
* NVfuser (#2943 )
* activation checkpoint recompute
Signed-off-by: Sangkug Lym <slym@nvidia.com>
* selective nvfuser setup
* Megatron gpt bfloat support (#2926 )
* Save/restore fix
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* Another merge
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* Bf16 args in init
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* Set precision
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* Remove debug stuff
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* add bf16 casting decorator
Signed-off-by: Sangkug Lym <slym@nvidia.com>
* Bfloat layernorm propagation
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* activation checkpoint recompute
Signed-off-by: Sangkug Lym <slym@nvidia.com>
* selective nvfuser setup
* More arg removal
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* Remove BERTDataset
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* update to latest apex and patch transformer autocast
Signed-off-by: ericharper <complex451@gmail.com>
Co-authored-by: Sangkug Lym <slym@nvidia.com>
Co-authored-by: ericharper <complex451@gmail.com>
* don't set jit for bf16
Signed-off-by: ericharper <complex451@gmail.com>
* replace apex.mpu
Signed-off-by: ericharper <complex451@gmail.com>
* fix grad clip
Signed-off-by: ericharper <complex451@gmail.com>
* NVFuser fixes (#2951 )
* Fuser fixes
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* Remove dummy handler
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* Remove PTL plugin based logic for fusion
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* remove duplicated file
Signed-off-by: ericharper <complex451@gmail.com>
* typo (#2960 )
Signed-off-by: ericharper <complex451@gmail.com>
* [BigNLP] Script to convert GPT checkpoint to .nemo (#2958 )
* remove args
Signed-off-by: ericharper <complex451@gmail.com>
* remove args
Signed-off-by: ericharper <complex451@gmail.com>
* remove args
Signed-off-by: ericharper <complex451@gmail.com>
* remove args
Signed-off-by: ericharper <complex451@gmail.com>
* remove args in progress
Signed-off-by: ericharper <complex451@gmail.com>
* remove args in progress
Signed-off-by: ericharper <complex451@gmail.com>
* remove args in progress
Signed-off-by: ericharper <complex451@gmail.com>
* remove args in progress
Signed-off-by: ericharper <complex451@gmail.com>
* add load_fused_kernels
Signed-off-by: ericharper <complex451@gmail.com>
* add load_fused_kernels
Signed-off-by: ericharper <complex451@gmail.com>
* update megatron_init
Signed-off-by: ericharper <complex451@gmail.com>
* add fused kernels
Signed-off-by: ericharper <complex451@gmail.com>
* add fused kernels
Signed-off-by: ericharper <complex451@gmail.com>
* update process batch
Signed-off-by: ericharper <complex451@gmail.com>
* remove erroneous import
Signed-off-by: ericharper <complex451@gmail.com>
* remove erroneous import
Signed-off-by: ericharper <complex451@gmail.com>
* remove erroneous import
Signed-off-by: ericharper <complex451@gmail.com>
* add megatron clip_grad
Signed-off-by: ericharper <complex451@gmail.com>
* trying to resolve circular import error
Signed-off-by: ericharper <complex451@gmail.com>
* rename file
Signed-off-by: ericharper <complex451@gmail.com>
* remove non-gpt models and datasets from __init__ files
Signed-off-by: ericharper <complex451@gmail.com>
* set device in constructor for gpu init
Signed-off-by: ericharper <complex451@gmail.com>
* set device in constructor for gpu init
Signed-off-by: ericharper <complex451@gmail.com>
* set_device in constructor
Signed-off-by: ericharper <complex451@gmail.com>
* clean config
Signed-off-by: ericharper <complex451@gmail.com>
* update MegatronDataset
Signed-off-by: ericharper <complex451@gmail.com>
* clean up MegatronModule
Signed-off-by: ericharper <complex451@gmail.com>
* clean up MegatronModule
Signed-off-by: ericharper <complex451@gmail.com>
* rename fp16 and bf16 flags to fused_softmax_input_in_fp16/bf16
Signed-off-by: ericharper <complex451@gmail.com>
* rename to fused_fp16
Signed-off-by: ericharper <complex451@gmail.com>
* add fused_fp16 arg to LayerNorm calls
Signed-off-by: ericharper <complex451@gmail.com>
* fix arg name
Signed-off-by: ericharper <complex451@gmail.com>
* fix arg name
Signed-off-by: ericharper <complex451@gmail.com>
* fix import
Signed-off-by: ericharper <complex451@gmail.com>
* update arg
Signed-off-by: ericharper <complex451@gmail.com>
* skip warmup default to True
Signed-off-by: ericharper <complex451@gmail.com>
* skip warmup default to True
Signed-off-by: ericharper <complex451@gmail.com>
* Adding complete method to MegatronGPTModel (#2935 )
Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
* make ffn_hidden_size mandatory
Signed-off-by: ericharper <complex451@gmail.com>
* Manually migrating timing of step into branch (#2937 )
* 1. Manually migrating timing of step into branch.
Signed-off-by: Micha Livne <mlivne@nvidia.com>
* 1. Updated file name and content.
Signed-off-by: Micha Livne <mlivne@nvidia.com>
* 1. Updated to latest code.
Signed-off-by: Micha Livne <mlivne@nvidia.com>
Co-authored-by: Micha Livne <mlivne@nvidia.com>
* remove unused imports
Signed-off-by: ericharper <complex451@gmail.com>
* remove unused import
Signed-off-by: ericharper <complex451@gmail.com>
* remove unused import
Signed-off-by: ericharper <complex451@gmail.com>
* remove unused import
Signed-off-by: ericharper <complex451@gmail.com>
* check fused_fp16 and fused_bf16 are not both True
Signed-off-by: ericharper <complex451@gmail.com>
* update predict script for model parallel .nemo
Signed-off-by: ericharper <complex451@gmail.com>
* typo
Signed-off-by: ericharper <complex451@gmail.com>
* add script to convert .ckpt to .nemo
Signed-off-by: ericharper <complex451@gmail.com>
* in progress
Signed-off-by: ericharper <complex451@gmail.com>
* update
Signed-off-by: ericharper <complex451@gmail.com>
* convert mp checkpoints to nemo
Signed-off-by: ericharper <complex451@gmail.com>
* update help
Signed-off-by: ericharper <complex451@gmail.com>
* add safeguard for model parallel save_to
Signed-off-by: ericharper <complex451@gmail.com>
* adjust NLPModel save_to to be safer for model parallel
Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
Co-authored-by: Oleksii Kuchaiev <okuchaiev@users.noreply.github.com>
Co-authored-by: Micha Livne <michalivne@users.noreply.github.com>
Co-authored-by: Micha Livne <mlivne@nvidia.com>
Co-authored-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
* [BigNLP] Update GPT evaluation to work with tensor model parallel (#2959 )
* in progress
Signed-off-by: ericharper <complex451@gmail.com>
* update args
Signed-off-by: ericharper <complex451@gmail.com>
* add request dataset
Signed-off-by: ericharper <complex451@gmail.com>
* tokenize request
Signed-off-by: ericharper <complex451@gmail.com>
* in progress
Signed-off-by: ericharper <complex451@gmail.com>
* able to run
Signed-off-by: ericharper <complex451@gmail.com>
* reduce logits
Signed-off-by: ericharper <complex451@gmail.com>
* capture response
Signed-off-by: ericharper <complex451@gmail.com>
* squeeze and unsqueeze
Signed-off-by: ericharper <complex451@gmail.com>
* handle non model parallel case
Signed-off-by: ericharper <complex451@gmail.com>
* clean imports
Signed-off-by: ericharper <complex451@gmail.com>
* add file
Signed-off-by: ericharper <complex451@gmail.com>
* convert logits to log_probs
Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
* rename logits to log_probs
Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
Co-authored-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
* style
Signed-off-by: ericharper <complex451@gmail.com>
* fix copyright headers
Signed-off-by: ericharper <complex451@gmail.com>
* fix copyright headers
Signed-off-by: ericharper <complex451@gmail.com>
* remove old TimingCallback
Signed-off-by: ericharper <complex451@gmail.com>
* style
Signed-off-by: ericharper <complex451@gmail.com>
* update jenkins to use latest apex and sandeep's fork
Signed-off-by: ericharper <complex451@gmail.com>
* update jenkins
Signed-off-by: ericharper <complex451@gmail.com>
* update jenkins
Signed-off-by: ericharper <complex451@gmail.com>
* update jenkins
Signed-off-by: ericharper <complex451@gmail.com>
* update jenkins
Signed-off-by: ericharper <complex451@gmail.com>
* try 2109 container
Signed-off-by: ericharper <complex451@gmail.com>
* try cuda container
Signed-off-by: ericharper <complex451@gmail.com>
* use internal container
Signed-off-by: ericharper <complex451@gmail.com>
* update checkpoint tests
Signed-off-by: ericharper <complex451@gmail.com>
* fix scheduler args
Signed-off-by: ericharper <complex451@gmail.com>
* update eval
Signed-off-by: ericharper <complex451@gmail.com>
* style
Signed-off-by: ericharper <complex451@gmail.com>
* update jenkins to use ptl 1.5 rc
Signed-off-by: ericharper <complex451@gmail.com>
* add import guard to jenkins
Signed-off-by: ericharper <complex451@gmail.com>
* add import guard to jenkins
Signed-off-by: ericharper <complex451@gmail.com>
* remove deterministic
Signed-off-by: ericharper <complex451@gmail.com>
* install numba .53
Signed-off-by: ericharper <complex451@gmail.com>
* allow for more variance
Signed-off-by: ericharper <complex451@gmail.com>
* update trainer config dataclass
Signed-off-by: ericharper <complex451@gmail.com>
* test_get_optimizer on gpu
Signed-off-by: ericharper <complex451@gmail.com>
* revert comment
Signed-off-by: ericharper <complex451@gmail.com>
* change trainer config default to 32
Signed-off-by: ericharper <complex451@gmail.com>
* [BigNLP] Remove fused kernel code instead use Apex (#2984 )
* remove fused_kernels
Signed-off-by: ericharper <complex451@gmail.com>
* remove fused_kernels
Signed-off-by: ericharper <complex451@gmail.com>
* remove fused layer norm and fused softmax and use apex instead
Signed-off-by: ericharper <complex451@gmail.com>
* update imports
Signed-off-by: ericharper <complex451@gmail.com>
* remove comment
Signed-off-by: ericharper <complex451@gmail.com>
* use apex enums
Signed-off-by: ericharper <complex451@gmail.com>
* use apex enums
Signed-off-by: ericharper <complex451@gmail.com>
* add tab
Signed-off-by: ericharper <complex451@gmail.com>
* Timer with sliding window (#3002 )
Co-authored-by: Micha Livne <michalivne@users.noreply.github.com>
* revert tab
Signed-off-by: ericharper <complex451@gmail.com>
* check for rank zero
Signed-off-by: ericharper <complex451@gmail.com>
* check for rank zero
Signed-off-by: ericharper <complex451@gmail.com>
* try explicit log dir
Signed-off-by: ericharper <complex451@gmail.com>
* add +
Signed-off-by: ericharper <complex451@gmail.com>
* don't rm
Signed-off-by: ericharper <complex451@gmail.com>
* make dir if it doesn't exist
Signed-off-by: ericharper <complex451@gmail.com>
* create mp nemo file in temp directory
Signed-off-by: ericharper <complex451@gmail.com>
* simplify mp save_to
Signed-off-by: ericharper <complex451@gmail.com>
* handle mp 1 case
Signed-off-by: ericharper <complex451@gmail.com>
* style fix
Signed-off-by: ericharper <complex451@gmail.com>
* remove files
Signed-off-by: ericharper <complex451@gmail.com>
* fix consumed_samples when resuming
Signed-off-by: ericharper <complex451@gmail.com>
* fix reinstall.sh
Signed-off-by: ericharper <complex451@gmail.com>
* update req
Signed-off-by: ericharper <complex451@gmail.com>
* add more detailed log for dataloaders
Signed-off-by: ericharper <complex451@gmail.com>
* check if cuda is available before using fused_adam
Signed-off-by: ericharper <complex451@gmail.com>
* revert comment
Signed-off-by: ericharper <complex451@gmail.com>
* update eval script to use model.freeze
Signed-off-by: ericharper <complex451@gmail.com>
* log train loss averaged over gradient accumulation steps
Signed-off-by: ericharper <complex451@gmail.com>
* check copyright earlier
Signed-off-by: ericharper <complex451@gmail.com>
* todo
Signed-off-by: ericharper <complex451@gmail.com>
* override SaveRestoreConnector in NLPModel init
Signed-off-by: ericharper <complex451@gmail.com>
* move to scripts
Signed-off-by: ericharper <complex451@gmail.com>
* remove star import
Signed-off-by: ericharper <complex451@gmail.com>
* remove comments
Signed-off-by: ericharper <complex451@gmail.com>
* remove unused dataset
Signed-off-by: ericharper <complex451@gmail.com>
* removed barrier
Signed-off-by: ericharper <complex451@gmail.com>
* check cfg
Signed-off-by: ericharper <complex451@gmail.com>
* remove logging
Signed-off-by: ericharper <complex451@gmail.com>
* freeze, unfreeze
Signed-off-by: ericharper <complex451@gmail.com>
* return None
Signed-off-by: ericharper <complex451@gmail.com>
* remove unused imports
Signed-off-by: ericharper <complex451@gmail.com>
* add TODO
Signed-off-by: ericharper <complex451@gmail.com>
* typecheck
Signed-off-by: ericharper <complex451@gmail.com>
* typo
Signed-off-by: ericharper <complex451@gmail.com>
* todo
Signed-off-by: ericharper <complex451@gmail.com>
* add common native plugin
Signed-off-by: ericharper <complex451@gmail.com>
* restore with trainer
Signed-off-by: ericharper <complex451@gmail.com>
* style
Signed-off-by: ericharper <complex451@gmail.com>
* deprecate megatron-lm bert
Signed-off-by: ericharper <complex451@gmail.com>
* deprecate megatron-lm bert
Signed-off-by: ericharper <complex451@gmail.com>
* compile helpers on the fly
Signed-off-by: ericharper <complex451@gmail.com>
* remove amp_level
Signed-off-by: ericharper <complex451@gmail.com>
* remove amp_level from configs
Signed-off-by: ericharper <complex451@gmail.com>
* add missing import
Signed-off-by: ericharper <complex451@gmail.com>
* typo
Signed-off-by: ericharper <complex451@gmail.com>
* remove amp_level
Signed-off-by: ericharper <complex451@gmail.com>
* use fast huggingface tokenizers by default
Signed-off-by: ericharper <complex451@gmail.com>
* deal with huggingface tokenizer positional args
Signed-off-by: ericharper <complex451@gmail.com>
* deal with huggingface tokenizer positional args
Signed-off-by: ericharper <complex451@gmail.com>
* deal with huggingface tokenizer positional args
Signed-off-by: ericharper <complex451@gmail.com>
* revert use_fast default to False
Signed-off-by: ericharper <complex451@gmail.com>
* return super training_epoch_end
Signed-off-by: ericharper <complex451@gmail.com>
* remove optimizer_idx arg from training_step
Signed-off-by: ericharper <complex451@gmail.com>
* remove unused arg from on_train_epoch_end
Signed-off-by: ericharper <complex451@gmail.com>
* add restore_from_path to nemo config
Signed-off-by: ericharper <complex451@gmail.com>
* add comment
Signed-off-by: ericharper <complex451@gmail.com>
* revert
Signed-off-by: ericharper <complex451@gmail.com>
* override connector if not subclassing NLPSaveRestoreConnector for model parallel save
Signed-off-by: ericharper <complex451@gmail.com>
* update test optimizer
Signed-off-by: ericharper <complex451@gmail.com>
* clean up
Signed-off-by: ericharper <complex451@gmail.com>
* clean up
Signed-off-by: ericharper <complex451@gmail.com>
* clean up
Signed-off-by: ericharper <complex451@gmail.com>
* clean up
Signed-off-by: ericharper <complex451@gmail.com>
* make data_prefix mandatory in config
Signed-off-by: ericharper <complex451@gmail.com>
* update installation instructions on readme
Signed-off-by: ericharper <complex451@gmail.com>
* update dockerfile
Signed-off-by: ericharper <complex451@gmail.com>
* add todo
Signed-off-by: ericharper <complex451@gmail.com>
* raise error if trying to use always_save_nemo with model parallel model
Signed-off-by: ericharper <complex451@gmail.com>
* remove comment
Signed-off-by: ericharper <complex451@gmail.com>
Co-authored-by: Sandeep Subramanian <sandeep.subramanian.1@umontreal.ca>
Co-authored-by: Sangkug Lym <slym@nvidia.com>
Co-authored-by: Oleksii Kuchaiev <okuchaiev@users.noreply.github.com>
Co-authored-by: Micha Livne <michalivne@users.noreply.github.com>
Co-authored-by: Micha Livne <mlivne@nvidia.com>
Co-authored-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
2021-10-20 21:06:37 -06:00
Jason
be7114e2d9
Update README.rst ( #2973 )
...
Signed-off-by: Jason <jasoli@nvidia.com>
2021-10-08 11:33:11 -06:00
Eric Harper
91fd9ea970
Merge final doc and bug fixes from r1.4.0 to main ( #2952 )
...
* update branch for jenkinsfile and dockerfile
Signed-off-by: ericharper <complex451@gmail.com>
* Typos (#2884 )
* segmentation tutorial fix
Signed-off-by: ekmb <ebakhturina@nvidia.com>
* data fixes
Signed-off-by: ekmb <ebakhturina@nvidia.com>
* Minor Fixes (#2922 )
* typo
Signed-off-by: Jason <jasoli@nvidia.com>
* remove notebook from docs
Signed-off-by: Jason <jasoli@nvidia.com>
* Adding Conformer-Transducer docs. (#2920 )
* added Conformer-Transducer docs.
Signed-off-by: Vahid <vnoroozi@nvidia.com>
* Added contextnet.
Signed-off-by: Vahid <vnoroozi@nvidia.com>
* fixed the title.
Signed-off-by: Vahid <vnoroozi@nvidia.com>
* Fix numba spec augment for cases where batch size > MAX_THREAD_BUFFER (#2924 )
* Fix numba spec augment for cases where batch size > MAX_THREAD_BUFFER
Signed-off-by: smajumdar <titu1994@gmail.com>
* Revert print in test
Signed-off-by: smajumdar <titu1994@gmail.com>
* Update readme for r1.4.0 (#2927 )
* Updated readme for r1.4.0.
Signed-off-by: Vahid <vnoroozi@nvidia.com>
* Updated readme for r1.4.0.
Signed-off-by: Vahid <vnoroozi@nvidia.com>
* Updated readme for r1.4.0.
Signed-off-by: Vahid <vnoroozi@nvidia.com>
* Updated readme for r1.4.0.
Signed-off-by: Vahid <vnoroozi@nvidia.com>
* Updated readme for r1.4.0.
Signed-off-by: Vahid <vnoroozi@nvidia.com>
* Updated readme for r1.4.0.
Signed-off-by: Vahid <vnoroozi@nvidia.com>
* Updated readme for r1.4.0.
Signed-off-by: Vahid <vnoroozi@nvidia.com>
* New NMT Models (#2925 )
* New pretrained models
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* Update NMT docs
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
Co-authored-by: Eric Harper <complex451@gmail.com>
* update branch
Signed-off-by: ericharper <complex451@gmail.com>
* revert
Signed-off-by: ericharper <complex451@gmail.com>
Co-authored-by: Evelina <10428420+ekmb@users.noreply.github.com>
Co-authored-by: Jason <jasoli@nvidia.com>
Co-authored-by: Vahid Noroozi <VahidooX@users.noreply.github.com>
Co-authored-by: Somshubra Majumdar <titu1994@gmail.com>
Co-authored-by: Sandeep Subramanian <sandeep.subramanian.1@umontreal.ca>
2021-10-06 08:21:54 -06:00
Eric Harper
58bc1d2c6c
Merge r1.4 bugfixes to main ( #2918 )
...
* update package info
Signed-off-by: ericharper <complex451@gmail.com>
* update branch for jenkinsfile and dockerfile
Signed-off-by: ericharper <complex451@gmail.com>
* Adding conformer-transducer models. (#2717 )
* added the models.
Signed-off-by: Vahid <vnoroozi@nvidia.com>
* added contextnet models.
Signed-off-by: Vahid <vnoroozi@nvidia.com>
* added german and chinese models.
Signed-off-by: Vahid <vnoroozi@nvidia.com>
* fix the abs_pos of conformer. (#2863 )
Signed-off-by: Vahid <vnoroozi@nvidia.com>
* update to match sde (#2867 )
Signed-off-by: ekmb <ebakhturina@nvidia.com>
* updated german ngc model (#2871 )
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* Lower bound PTL to safe version (#2876 )
Signed-off-by: smajumdar <titu1994@gmail.com>
* Update notebooks with onnxruntime (#2880 )
Signed-off-by: smajumdar <titu1994@gmail.com>
* Upperbound PTL (#2881 )
Signed-off-by: smajumdar <titu1994@gmail.com>
* minor typo and broken link fixes (#2883 )
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* Remove numbers from TTS tutorial names (#2882 )
* Remove numbers from TTS tutorial names
Signed-off-by: Jocelyn Huang <jocelynh@nvidia.com>
* Update documentation links
Signed-off-by: Jocelyn Huang <jocelynh@nvidia.com>
* Typos (#2884 )
* segmentation tutorial fix
Signed-off-by: ekmb <ebakhturina@nvidia.com>
* data fixes
Signed-off-by: ekmb <ebakhturina@nvidia.com>
* updated the messages in eval_beamsearch_ngram.py. (#2889 )
Signed-off-by: Vahid <vnoroozi@nvidia.com>
* style (#2890 )
Signed-off-by: Jason <jasoli@nvidia.com>
* Fix broken link (#2891 )
* fix broken link
Signed-off-by: fayejf <fayejf07@gmail.com>
* more
Signed-off-by: fayejf <fayejf07@gmail.com>
* Update sclite eval for new transcription method (#2893 )
* Update sclite to use updated inference
Signed-off-by: smajumdar <titu1994@gmail.com>
* Remove WER
Signed-off-by: smajumdar <titu1994@gmail.com>
* Update sclite script to use new inference methods
Signed-off-by: smajumdar <titu1994@gmail.com>
* Remove hub 5
Signed-off-by: smajumdar <titu1994@gmail.com>
* Fix TransformerDecoder export - r1.4 (#2875 )
* export fix
Signed-off-by: Abhinav Khattar <aklife97@gmail.com>
* embedding pos
Signed-off-by: Abhinav Khattar <aklife97@gmail.com>
* remove bool param
Signed-off-by: Abhinav Khattar <aklife97@gmail.com>
* changes
Signed-off-by: Abhinav Khattar <aklife97@gmail.com>
Co-authored-by: Sandeep Subramanian <sandeep.subramanian.1@umontreal.ca>
* Update Finetuning notebook (#2906 )
* update notebook
Signed-off-by: Jason <jasoli@nvidia.com>
* rename
Signed-off-by: Jason <jasoli@nvidia.com>
* rename
Signed-off-by: Jason <jasoli@nvidia.com>
* revert branch to main
Signed-off-by: ericharper <complex451@gmail.com>
Co-authored-by: Vahid Noroozi <VahidooX@users.noreply.github.com>
Co-authored-by: Evelina <10428420+ekmb@users.noreply.github.com>
Co-authored-by: Yang Zhang <yzhang123@users.noreply.github.com>
Co-authored-by: Somshubra Majumdar <titu1994@gmail.com>
Co-authored-by: Nithin Rao <nithinrao.koluguri@gmail.com>
Co-authored-by: Jocelyn <jocelynh@nvidia.com>
Co-authored-by: Jason <jasoli@nvidia.com>
Co-authored-by: fayejf <36722593+fayejf@users.noreply.github.com>
Co-authored-by: Abhinav Khattar <aklife97@gmail.com>
Co-authored-by: Sandeep Subramanian <sandeep.subramanian.1@umontreal.ca>
2021-09-28 20:13:55 -06:00
Evelina
8cf9aad8ec
update readme with the tools sections ( #2895 )
...
Signed-off-by: ekmb <ebakhturina@nvidia.com>
2021-09-24 21:44:14 -07:00
Nithin Rao
0aa5b4526a
Move speaker folders ( #2777 )
...
* initial push
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
change folder
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
readme
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
Create README.md
initial diar readme
scp_manifest
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
rebase and move folders
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
updated scp to manifest script
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
small_fix
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
Update README.md
add recognition readme
tutorial update
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
add diarization README
Updated README.md 001
Updated README.md and committing for saving purpose
Update README.md
conf changes
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
Update README.md 002
Added examples for input and output.
Added diarization_utils.py and asr_with_diarization.py
Signed-off-by: Taejin Park <tango4j@gmail.com>
slight changes diarization
oracle null and style --fix
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
Reflected LGTM comments.
Signed-off-by: Taejin Park <tango4j@gmail.com>
reflected changes
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
remove duplicate seeds
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
Reflected PR review and removed unused variables
Signed-off-by: Taejin Park <tango4j@gmail.com>
Update README.md 003
Added a few titles and revised the descriptions.
Signed-off-by: Taejin Park <tango4j@gmail.com>
scripts and tutorial link fixes
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
LGTM fixes
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
Added more docstrings and reused get_DER
Signed-off-by: Taejin Park <tango4j@gmail.com>
style fix
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* update ecapa config
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
2021-09-08 20:58:08 -07:00
Eric Harper
2ff89fdf56
Merge 1.3 bugfixes into main ( #2715 )
...
* update jenkins branch
Signed-off-by: ericharper <complex451@gmail.com>
* update notebooks branch
Signed-off-by: ericharper <complex451@gmail.com>
* update package info
Signed-off-by: ericharper <complex451@gmail.com>
* update readme
Signed-off-by: ericharper <complex451@gmail.com>
* update nemo version for Dockerfile
Signed-off-by: ericharper <complex451@gmail.com>
* update notebook branch
Signed-off-by: ericharper <complex451@gmail.com>
* Update colab links to Transducer notebooks (#2654 )
Signed-off-by: smajumdar <titu1994@gmail.com>
* Fix nmt grpc server, concatdataset for raw text files (#2656 )
* Fix nmt grpc server and concatdataset for raw text files
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* Check if lang direction is provided correctly
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* Style fixes
Signed-off-by: MaximumEntropy <sandeep.subramanian.1@umontreal.ca>
* add missing init (#2662 )
Signed-off-by: ekmb <ebakhturina@nvidia.com>
* fix qa inference for single example (#2668 )
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* Fix max symbol per step updating for RNNT (#2672 )
* Fix max symbol per step updating for RNNT
Signed-off-by: smajumdar <titu1994@gmail.com>
* Fix notebooks
Signed-off-by: smajumdar <titu1994@gmail.com>
* Replaced unfold() with split_view() (#2671 )
* Replaced unfold() with split_view()
Signed-off-by: Boris Fomitchev <bfomitchev@nvidia.com>
* fixed typo
Signed-off-by: Boris Fomitchev <bfomitchev@nvidia.com>
Co-authored-by: Somshubra Majumdar <titu1994@gmail.com>
* Correct voice app demo (#2682 )
Signed-off-by: smajumdar <titu1994@gmail.com>
* Import guard (#2692 )
* add asr and pynini import guard
Signed-off-by: ekmb <ebakhturina@nvidia.com>
* remove asrmodel type
Signed-off-by: ekmb <ebakhturina@nvidia.com>
* remove asrmodel type
Signed-off-by: ekmb <ebakhturina@nvidia.com>
* fixing branch (#2695 )
Signed-off-by: Ghasem Pasandi <gpasandi@nvidia.com>
Co-authored-by: Ghasem Pasandi <gpasandi@nvidia.com>
* fix for emojis (#2675 )
* fix for emojis
Signed-off-by: ekmb <ebakhturina@nvidia.com>
* remove redundant line
Signed-off-by: ekmb <ebakhturina@nvidia.com>
* raise error
Signed-off-by: ekmb <ebakhturina@nvidia.com>
* use app_state
Signed-off-by: ekmb <ebakhturina@nvidia.com>
Co-authored-by: Eric Harper <complex451@gmail.com>
* Fix issues with ASR notebooks (#2698 )
Signed-off-by: smajumdar <titu1994@gmail.com>
* Allow non divisible split_size (#2699 )
* bugfix
Signed-off-by: Jason <jasoli@nvidia.com>
* bugfix
Signed-off-by: Jason <jasoli@nvidia.com>
* TN fix for corner cases (#2689 )
* serial added, weights to common defaults, decimal bug fix
Signed-off-by: ekmb <ebakhturina@nvidia.com>
* one failing
Signed-off-by: ekmb <ebakhturina@nvidia.com>
* all tests pass
Signed-off-by: ekmb <ebakhturina@nvidia.com>
* remove redundant file
Signed-off-by: ekmb <ebakhturina@nvidia.com>
* fix telephone, add test cases
Signed-off-by: ekmb <ebakhturina@nvidia.com>
* money fix
Signed-off-by: ekmb <ebakhturina@nvidia.com>
* clean format
Signed-off-by: ekmb <ebakhturina@nvidia.com>
* fix edge case of greedy decoding for greedy_batch mode (#2701 )
Signed-off-by: smajumdar <titu1994@gmail.com>
* Remove time macro (#2703 )
Signed-off-by: smajumdar <titu1994@gmail.com>
* Minor FastPitch Fixes (#2697 )
* fixes
Signed-off-by: Jason <jasoli@nvidia.com>
* update CI
Signed-off-by: Jason <jasoli@nvidia.com>
* refix
Signed-off-by: Jason <jasoli@nvidia.com>
* Fix ddp error. (#2678 )
To avoid "MisconfigurationException: Selected distributed backend ddp is not compatible with an interactive environment." error.
Co-authored-by: ekmb <ebakhturina@nvidia.com>
* update jenkins
Signed-off-by: ericharper <complex451@gmail.com>
* update notebooks
Signed-off-by: ericharper <complex451@gmail.com>
* add split_view back
Signed-off-by: ericharper <complex451@gmail.com>
Co-authored-by: Somshubra Majumdar <titu1994@gmail.com>
Co-authored-by: Sandeep Subramanian <sandeep.subramanian.1@umontreal.ca>
Co-authored-by: Evelina <10428420+ekmb@users.noreply.github.com>
Co-authored-by: Yang Zhang <yzhang123@users.noreply.github.com>
Co-authored-by: Boris Fomitchev <borisfom@users.noreply.github.com>
Co-authored-by: Ghasem <35242805+pasandi20@users.noreply.github.com>
Co-authored-by: Ghasem Pasandi <gpasandi@nvidia.com>
Co-authored-by: Jason <jasoli@nvidia.com>
Co-authored-by: khcs <khcs@users.noreply.github.com>
Co-authored-by: ekmb <ebakhturina@nvidia.com>
2021-08-24 16:21:59 -06:00
Tuan Manh Lai
4abe5d5f6d
Add back tagger data augmentation + Fixes for analyze_errors.py ( #2637 )
...
* Fixes to analyze_errors.py
Signed-off-by: Tuan Lai <tuanl@nvidia.com>
* Add back tagger data augmentation
Signed-off-by: Tuan Lai <tuanl@nvidia.com>
* Add duplex neural tn to README
Signed-off-by: Tuan Lai <tuanl@nvidia.com>
* Fixed typos
Signed-off-by: Tuan Lai <tuanl@nvidia.com>
2021-08-11 10:16:49 -07:00
Jason
846b150082
Update TTS Docs to recommend fastpitch and hifigan ( #2498 )
...
* update docs
Signed-off-by: Jason <jasoli@nvidia.com>
* update
Signed-off-by: Jason <jasoli@nvidia.com>
2021-07-19 16:30:35 -07:00
vadam5
e3f6867dd2
Entity linking documentation ( #2357 )
...
* Update tutorials.rst
Signed-off-by: Virginia Adams <vadams@nvidia.com>
* Update tutorials.rst
Signed-off-by: Virginia Adams <vadams@nvidia.com>
* Update models.rst
Signed-off-by: Virginia Adams <vadams@nvidia.com>
* Add files via upload
Signed-off-by: Virginia Adams <vadams@nvidia.com>
* Create entity_linking.rst
Signed-off-by: Virginia Adams <vadams@nvidia.com>
* Update README.rst
Signed-off-by: Virginia Adams <vadams@nvidia.com>
* Update entity_linking.rst
Signed-off-by: Virginia Adams <vadams@nvidia.com>
* Update nlp_all.bib
Signed-off-by: Virginia Adams <vadams@nvidia.com>
* Update entity_linking.rst
Signed-off-by: Virginia Adams <vadams@nvidia.com>
* Update entity_linking.rst
Signed-off-by: Virginia Adams <vadams@nvidia.com>
* fixed base typos and doc link
Signed-off-by: Virginia Adams <vadams@nvidia.com>
Co-authored-by: Sandeep Subramanian <sandeep.subramanian.1@umontreal.ca>
Co-authored-by: Oleksii Kuchaiev <okuchaiev@users.noreply.github.com>
2021-07-19 16:10:19 -07:00
Yang Zhang
159952d71f
add sgdqa to readme ( #2492 )
...
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
Co-authored-by: Eric Harper <complex451@gmail.com>
2021-07-15 23:14:32 -06:00
Eric Harper
765920cd68
Update README.rst
2021-07-15 18:17:19 -06:00
Eric Harper
c5dbf4508a
Merge r1.1 bugfixes to main. Update dep versions. ( #2437 )
...
* Update notebook branch and Jenkinsfile for 1.1.0 testing (#2378 )
* update branch
Signed-off-by: ericharper <complex451@gmail.com>
* update jenkinsfile
Signed-off-by: ericharper <complex451@gmail.com>
* [BUGFIX] NMT Multi-node was incorrectly computing num_replicas (#2380 )
* fix property when not using model parallel
Signed-off-by: ericharper <complex451@gmail.com>
* fix property when not using model parallel
Signed-off-by: ericharper <complex451@gmail.com>
* add debug statement
Signed-off-by: ericharper <complex451@gmail.com>
* add debug statement
Signed-off-by: ericharper <complex451@gmail.com>
* instantiate with NLPDDPPlugin with num_nodes from trainer config
Signed-off-by: ericharper <complex451@gmail.com>
* Update ASR scripts for tokenizer building and tarred dataset building (#2381 )
* Update ASR scripts for tokenizer building and tarred dataset building
Signed-off-by: smajumdar <titu1994@gmail.com>
* Update container
Signed-off-by: smajumdar <titu1994@gmail.com>
* Add STT Zh Citrinet 1024 Gamma 0.25 model
Signed-off-by: smajumdar <titu1994@gmail.com>
* Update notebook (#2391 )
Signed-off-by: smajumdar <titu1994@gmail.com>
* ASR Notebooks fix for 1.1.0 (#2395 )
* nb fix for spring clean
Signed-off-by: fayejf <fayejf07@gmail.com>
* remove outdated instruction
Signed-off-by: fayejf <fayejf07@gmail.com>
* Mean normalization (#2397 )
* norm embeddings
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* move to utils
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* Bugfix adaptive spec augment time masking (#2398 )
* bugfix adaptive spec augment
Signed-off-by: smajumdar <titu1994@gmail.com>
* Revert freq mask guard
Signed-off-by: smajumdar <titu1994@gmail.com>
* Revert freq mask guard
Signed-off-by: smajumdar <titu1994@gmail.com>
* Remove static time width clamping
Signed-off-by: smajumdar <titu1994@gmail.com>
* Correct typos and issues with notebooks (#2402 )
* Fix Primer notebook
Signed-off-by: smajumdar <titu1994@gmail.com>
* Typo
Signed-off-by: smajumdar <titu1994@gmail.com>
* remove accelerator=DDP in tutorial notebooks to avoid errors. (#2403 )
Signed-off-by: Hoo Chang Shin <hshin@nvidia.com>
Co-authored-by: Hoo Chang Shin <hshin@nvidia.com>
* [BUGFIX] Megatron in NMT was setting vocab_file to None (#2417 )
* make vocab_file configurable for megatron in nmt
Signed-off-by: ericharper <complex451@gmail.com>
* update docs
Signed-off-by: ericharper <complex451@gmail.com>
* Link updates in docs and notebooks and typo fix (#2416 )
* typo fix for notebooks
Signed-off-by: fayejf <fayejf07@gmail.com>
* tiny typo fix in docs
Signed-off-by: fayejf <fayejf07@gmail.com>
* docs branch->stable
Signed-off-by: fayejf <fayejf07@gmail.com>
* more docs branch -> stable
Signed-off-by: fayejf <fayejf07@gmail.com>
* tutorial links branch -> stable
Signed-off-by: fayejf <fayejf07@gmail.com>
* small fix
Signed-off-by: fayejf <fayejf07@gmail.com>
* add renamed 06
Signed-off-by: fayejf <fayejf07@gmail.com>
* more fixes
Signed-off-by: fayejf <fayejf07@gmail.com>
* Update onnx (#2420 )
Signed-off-by: smajumdar <titu1994@gmail.com>
* Correct version of onnxruntime (#2422 )
Signed-off-by: smajumdar <titu1994@gmail.com>
* update deployment instructions (#2430 )
Signed-off-by: ericharper <complex451@gmail.com>
* Bumping version to 1.1.0
Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
* update jenkinsfile
Signed-off-by: ericharper <complex451@gmail.com>
* add upper bounds
Signed-off-by: ericharper <complex451@gmail.com>
* update readme
Signed-off-by: ericharper <complex451@gmail.com>
* update requirements
Signed-off-by: ericharper <complex451@gmail.com>
* update jenkinsfile
Signed-off-by: ericharper <complex451@gmail.com>
* update version
Signed-off-by: ericharper <complex451@gmail.com>
Co-authored-by: Somshubra Majumdar <titu1994@gmail.com>
Co-authored-by: fayejf <36722593+fayejf@users.noreply.github.com>
Co-authored-by: Nithin Rao <nithinrao.koluguri@gmail.com>
Co-authored-by: khcs <khcs@users.noreply.github.com>
Co-authored-by: Hoo Chang Shin <hshin@nvidia.com>
Co-authored-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
2021-07-02 14:22:44 -07:00
Somshubra Majumdar
3e94696e21
Update container version to 21.05 ( #2309 )
...
* Update container version
Signed-off-by: smajumdar <titu1994@gmail.com>
* Temporarily change export format of waveglow
Signed-off-by: smajumdar <titu1994@gmail.com>
* Add conda update for numba
Signed-off-by: smajumdar <titu1994@gmail.com>
* Update numba compat via global flag for strictness level `--relax_numba_compat`, remove pytorchlightning.metrics, refactor out numba utils to core, update tests
Signed-off-by: smajumdar <titu1994@gmail.com>
* Correct order of numba minimum version, remove wrong flag from test
Signed-off-by: smajumdar <titu1994@gmail.com>
* Double test of cuda numba
Signed-off-by: smajumdar <titu1994@gmail.com>
* Enable RNNT tests
Signed-off-by: smajumdar <titu1994@gmail.com>
2021-06-14 17:39:45 -06:00
Oleksii Kuchaiev
d41307c641
Merge branch 'r1.0.2' into main
...
Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
2021-06-10 18:47:16 -07:00
Oleksii Kuchaiev
245bd49efb
update version number
...
Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
2021-06-10 16:54:10 -07:00
Oleksii Kuchaiev
d8b69d7fb6
update README ( #2332 )
...
Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
2021-06-09 20:47:58 -07:00
Oleksii Kuchaiev
5839aee402
Merge branch 'r1.0.1' into main
...
Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
2021-06-08 22:44:37 -07:00
Oleksii Kuchaiev
2763c67a0d
update readmes
...
Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
2021-06-08 22:36:06 -07:00
Oleksii Kuchaiev
de15462857
fix docs table
...
Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
2021-06-03 15:55:40 -07:00
Oleksii Kuchaiev
00375818c3
update readme
...
Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
2021-06-03 15:42:45 -07:00
Oleksii Kuchaiev
94d8afe279
update readme
...
Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
2021-06-03 14:32:38 -07:00
Oleksii Kuchaiev
5d1f005551
Update readmes
...
Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
2021-06-03 14:31:51 -07:00
Somshubra Majumdar
81df4358cd
Correct branch version for v1.0.0 ( #2157 )
...
* Correct branch version
Signed-off-by: smajumdar <titu1994@gmail.com>
* Correct Jenkinsfile
Signed-off-by: smajumdar <titu1994@gmail.com>
* Update rst files
Signed-off-by: smajumdar <titu1994@gmail.com>
2021-05-05 14:58:13 -07:00
Vahid Noroozi
1f02d56b3a
fixing the ASR LM docs. ( #2102 )
...
* fixed the filename.
Signed-off-by: Vahid <vnoroozi@nvidia.com>
2021-04-23 14:19:56 -07:00
Oleksii Kuchaiev
aa10332f7b
fixing some typos in readme
...
Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
2021-04-21 16:25:17 -07:00
Somshubra Majumdar
400632a229
Update Dockerfile to latest nemo container ( #2061 )
...
Signed-off-by: smajumdar <titu1994@gmail.com>
2021-04-14 16:06:55 -07:00
Oleksii Kuchaiev
f63f632e2e
Merge branch 'r1.0.0rc1' into main
...
Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
2021-04-06 22:33:40 -07:00
Oleksii Kuchaiev
9fc8042669
update readme
...
Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
2021-04-06 22:30:50 -07:00
Yang Zhang
360eb0422f
Text denormalization ( #1797 )
...
* move do_training flag to config
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* adding text denorm
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* add google header
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* delete unused code
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* fix lgtm
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* adding unittests
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* add pynini dependency
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* fix missing import
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* add header
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* fix pytests
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* change jenkins
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* add text denorm container
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* add export files
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* try to fix jenkins
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* fix jenkins
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* try fix import
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* add test
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* rename tools to nemo_tools
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* rename tools to nemo_tools
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* fix bug
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* adding missing file
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* lgtm
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* add missing header
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* fix pytests
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* try to clean all workspaces
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* move back tools
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* try something
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* try something
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* add package info
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* test jenkins
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* adding setup
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* adding pytests
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* adding requirements
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* add cpu tests
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* try fix
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* fix
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* fix pytests for nlp
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* fix tests
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* jenkins docker test
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* jenkins docker user test
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* jenkins docker less root test
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* delete SH from ci
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* delete
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* fix new nemo_tools path in ci
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* rm output content after ci
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* delete inflect
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* change new weights
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* fix jenkinsfile
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* style fix
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* fix tests
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* fix weight
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* jenkins
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* delete requirement
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* adding docstring
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* fix jenkins
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* add nemo_tools readme
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* address PR review
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
* update nemo_tools readme
Signed-off-by: Yang Zhang <yangzhang@nvidia.com>
Co-authored-by: Oleksii Kuchaiev <okuchaiev@users.noreply.github.com>
2021-03-31 13:31:19 -07:00
Oleksii Kuchaiev
480bbc39b8
Merge branch 'r1.0.0rc1' into main
2021-03-25 16:48:11 -07:00
Oleksii Kuchaiev
9bc52efbf7
fix statuses on readme
...
Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
2021-03-25 16:44:08 -07:00
Oleksii Kuchaiev
cf1b6787e9
Merge branch 'r1.0.0rc1' into main
...
Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
2021-03-16 16:11:30 -07:00
Oleksii Kuchaiev
ff421068b9
minor readme fix
...
Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
2021-03-16 16:09:01 -07:00
Oleksii Kuchaiev
305b694213
Getting Started notebook and docs changes ( #1918 )
...
Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
2021-03-16 16:02:48 -07:00
Advanced AI Technologies
88fffdde38
Typo Github URL -> Colab URL ( #1856 )
2021-03-12 16:19:08 -08:00
Oleksii Kuchaiev
5fd5b595f7
Merge branch 'r1.0.0rc1' into main
...
Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
2021-03-12 11:13:41 -08:00
Nithin Rao
b809dd9367
Renamed pretrained names ( #1882 )
...
* Renamed pretrained names
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
* Updated pretrained description format and updated README tutorials table with Speaker Diarization tutorials
Signed-off-by: nithinraok <nithinrao.koluguri@gmail.com>
Co-authored-by: Oleksii Kuchaiev <okuchaiev@users.noreply.github.com>
Co-authored-by: Jason <jasoli@nvidia.com>
2021-03-11 08:59:24 -08:00
Oleksii Kuchaiev
5d80627ca6
Merge branch 'r1.0.0rc1' into main
2021-03-10 15:40:38 -08:00
Oleksii Kuchaiev
7995395e87
Minor core docs updates ( #1876 )
...
* minor core docs updates
Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
* change all tutorial links to r1.0.0rc1
Signed-off-by: ericharper <complex451@gmail.com>
* fix replace
Signed-off-by: ericharper <complex451@gmail.com>
* small changes
Signed-off-by: ericharper <complex451@gmail.com>
Co-authored-by: ericharper <complex451@gmail.com>
2021-03-10 11:12:22 -07:00
Eric Harper
52c20a17d6
Update README.rst
...
README is pointing to a container that hasn't been released yet.
2021-03-08 10:13:30 -07:00
Somshubra Majumdar
9156510cc9
Update notebooks to RC1 ( #1782 )
...
* update model primer tutorial
Signed-off-by: smajumdar <titu1994@gmail.com>
* Update all notebooks to RC1
Signed-off-by: smajumdar <titu1994@gmail.com>
* Update all notebooks to RC1 + README.rst
Signed-off-by: smajumdar <titu1994@gmail.com>
* Update docker instructions
Signed-off-by: smajumdar <titu1994@gmail.com>
2021-02-23 09:06:41 -08:00
Evelina
eff003635f
segmentation tutorial dir fix ( #1765 )
...
* fix dir
Signed-off-by: ekmb <ebakhturina@nvidia.com>
* dir fix for colab
Signed-off-by: ekmb <ebakhturina@nvidia.com>
2021-02-18 12:55:42 -08:00
Somshubra Majumdar
562087498d
Correct docs for ASR (RTD) ( #1755 )
...
* Correct docs for ASR
Signed-off-by: smajumdar <titu1994@gmail.com>
* Pin webdataset
Signed-off-by: smajumdar <titu1994@gmail.com>
2021-02-12 16:20:32 -08:00
Oleksii Kuchaiev
6b946a5a8f
Tacotron notebook link fix
...
Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
2021-02-11 21:14:34 -08:00
Oleksii Kuchaiev
7944c5ba2c
README improvements
...
Signed-off-by: Oleksii Kuchaiev <okuchaiev@nvidia.com>
2021-02-11 17:04:30 -08:00