fixed two typos (#3157)

Signed-off-by: Alexandra Antonova <aleksandraa@nvidia.com>

Co-authored-by: Alexandra Antonova <aleksandraa@nvidia.com>
bene-ges 2021-11-10 18:41:39 +03:00 committed by GitHub
parent cfcf694e30
commit 9ab22a40bd
2 changed files with 2 additions and 2 deletions


@@ -1188,7 +1188,7 @@
"\n",
"In general, once the model is trained and saved to a PyTorch Lightning checkpoint, or to a .nemo tarfile, it will no longer contain the training configuration - no configuration information for the Trainer or Experiment Manager.\n",
"\n",
"**These config files have good defaults pre-set to run an experiment with NeMo, so it is adviced to base your own training configuration on these configs.**\n",
"**These config files have good defaults pre-set to run an experiment with NeMo, so it is advised to base your own training configuration on these configs.**\n",
"\n",
"\n",
"Let's take a deeper look at some of the examples inside each domain.\n",


@@ -1927,7 +1927,7 @@
"\n",
"Note, for NeMo Models; the `configure_optimizers` is implemented as a trivial call to `setup_optimization()` followed by returning the generated optimizer and scheduler! So we can override the `configure_optimizer` method and manage the optimizer creation manually!\n",
"\n",
"NeMo's goal is to provide usable defaults for the general case and simply back off to either PyTorch Lightning or PyTorch nn.Module itself in cases which the additional flexibility becomes necessary!"
"NeMo's goal is to provide usable defaults for the general case and simply back off to either PyTorch Lightning or PyTorch nn.Module itself in cases when the additional flexibility becomes necessary!"
]
},
{
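
The notebook text corrected in this hunk describes overriding `configure_optimizers`, which NeMo models implement as a thin wrapper around `setup_optimization()`. A minimal sketch of such a manual override is shown below, assuming a plain PyTorch Lightning module; it is not part of this commit, and the class name, layer sizes, and hyperparameters are illustrative.

```python
# Illustrative sketch: overriding configure_optimizers to manage optimizer and
# scheduler creation manually instead of relying on generated defaults.
import torch
import pytorch_lightning as pl


class MyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(16, 2)

    def forward(self, x):
        return self.layer(x)

    def configure_optimizers(self):
        # Manual optimizer/scheduler creation; values here are placeholders.
        optimizer = torch.optim.AdamW(self.parameters(), lr=1e-3)
        scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)
        return [optimizer], [scheduler]
```

Returning the optimizer and scheduler as two lists follows the standard PyTorch Lightning convention for `configure_optimizers`.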