fixed two typos (#3157)
Signed-off-by: Alexandra Antonova <aleksandraa@nvidia.com>
Co-authored-by: Alexandra Antonova <aleksandraa@nvidia.com>
parent cfcf694e30
commit 9ab22a40bd
@@ -1188,7 +1188,7 @@
     "\n",
     "In general, once the model is trained and saved to a PyTorch Lightning checkpoint, or to a .nemo tarfile, it will no longer contain the training configuration - no configuration information for the Trainer or Experiment Manager.\n",
     "\n",
-    "**These config files have good defaults pre-set to run an experiment with NeMo, so it is adviced to base your own training configuration on these configs.**\n",
+    "**These config files have good defaults pre-set to run an experiment with NeMo, so it is advised to base your own training configuration on these configs.**\n",
     "\n",
     "\n",
     "Let's take a deeper look at some of the examples inside each domain.\n",
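(For context on the corrected line: the notebook's point is that NeMo's example config files are a sensible base for your own training configuration, since a saved checkpoint or .nemo tarfile no longer carries the Trainer or Experiment Manager settings. A minimal sketch of that workflow, assuming OmegaConf and a hypothetical example-config path and keys:)

    from omegaconf import OmegaConf

    # Hypothetical path: one of NeMo's example configs, used as a base
    # (the exact path and keys depend on the domain and NeMo version).
    cfg = OmegaConf.load("examples/asr/conf/config.yaml")

    # Override a few fields for your own experiment; the trainer / exp_manager
    # sections live in these configs but are not stored inside a .nemo tarfile.
    cfg.trainer.max_epochs = 50
    cfg.exp_manager.name = "my_experiment"

    print(OmegaConf.to_yaml(cfg))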
@@ -1927,7 +1927,7 @@
     "\n",
     "Note, for NeMo Models; the `configure_optimizers` is implemented as a trivial call to `setup_optimization()` followed by returning the generated optimizer and scheduler! So we can override the `configure_optimizer` method and manage the optimizer creation manually!\n",
     "\n",
-    "NeMo's goal is to provide usable defaults for the general case and simply back off to either PyTorch Lightning or PyTorch nn.Module itself in cases which the additional flexibility becomes necessary!"
+    "NeMo's goal is to provide usable defaults for the general case and simply back off to either PyTorch Lightning or PyTorch nn.Module itself in cases when the additional flexibility becomes necessary!"
    ]
   },
   {
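(For reference, the hunk above touches the notebook's note that NeMo models implement `configure_optimizers` as a call to `setup_optimization()` that returns the generated optimizer and scheduler, so the method can be overridden to manage optimizer creation manually. A minimal sketch of such an override, using a hypothetical LightningModule subclass and standard PyTorch optimizers rather than an actual NeMo model class:)

    import torch
    import pytorch_lightning as pl

    class MyModel(pl.LightningModule):  # in practice this would subclass a NeMo model
        def __init__(self):
            super().__init__()
            self.layer = torch.nn.Linear(16, 2)

        def forward(self, x):
            return self.layer(x)

        def configure_optimizers(self):
            # Manual optimizer/scheduler creation, bypassing setup_optimization()
            optimizer = torch.optim.AdamW(self.parameters(), lr=1e-4, weight_decay=0.01)
            scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=1000)
            return [optimizer], [scheduler]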