This allows the PlaybookExecutor to receive more information about
what happened inside the TaskQueueManager and strategy, so it can determine
things such as whether the play iteration should stop (sketched below).
Fixes #15523
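A minimal sketch of the idea, with illustrative (not actual) constant names: run() returns a bitmask describing what happened, and the executor inspects it:

```python
# Illustrative bitflags; the real names/values in TaskQueueManager may differ.
RUN_OK = 0
RUN_FAILED_HOSTS = 2
RUN_UNREACHABLE_HOSTS = 4
RUN_FAILED_BREAK_PLAY = 8

def run(failed_hosts, unreachable_hosts, break_play=False):
    """Return a bitmask summarizing the play run."""
    result = RUN_OK
    if failed_hosts:
        result |= RUN_FAILED_HOSTS
    if unreachable_hosts:
        result |= RUN_UNREACHABLE_HOSTS
    if break_play:
        result |= RUN_FAILED_BREAK_PLAY
    return result

# The executor can then decide whether to stop iterating over plays:
result = run(failed_hosts=['web1'], unreachable_hosts=[], break_play=True)
if result & RUN_FAILED_BREAK_PLAY:
    print('stopping play iteration')
```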
The nxos cli provider would not properly handle ssh key files passed
from the playbook task. The ssh_keyfile argument is now properly
passed to the ssh authentication method.
This fix addresses the bug reported in #3862.
Also updates the documentation on variable precedence, as it was incorrect
about the order of play vars/vars_prompt/vars_files in relation to set_fact
and registered variables.
Fixes #14702
Fixes #14826
Since we now use the PlayIterator to carry forward failures from previous
play executions, in the event that some hosts which had previously failed
are not in the current inventory, we now create a stub state instead of
raising an error.
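A simplified sketch of the behavior, with illustrative names rather than the actual PlayIterator internals:

```python
class HostState(object):
    """Stand-in for the iterator's per-host state."""
    def __init__(self, blocks):
        self.blocks = blocks

host_states = {}

def get_host_state(host_name):
    # A previously failed host that is missing from the current
    # inventory gets an empty stub state instead of raising an error.
    if host_name not in host_states:
        host_states[host_name] = HostState(blocks=[])
    return host_states[host_name]

print(get_host_state('decommissioned-web1').blocks)  # []
```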
An exception was raised when trying to use ssh-agent for authentication to
ios devices. This fix enables ssh-agent and the use of password-protected
ssh keys. There is one additional fix to capture authentication
exceptions cleanly.
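A rough sketch of what such a connect call looks like with paramiko (the host and credential values are placeholders; the actual provider code may differ):

```python
import paramiko

def connect(host, username, password=None, ssh_keyfile=None):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        client.connect(
            host,
            username=username,
            password=password,        # paramiko also uses this to decrypt
                                      # a password-protected private key
            key_filename=ssh_keyfile,
            allow_agent=True,         # enable ssh-agent authentication
        )
    except paramiko.AuthenticationException as exc:
        # capture authentication failures nicely instead of letting a
        # raw traceback bubble up
        raise RuntimeError('authentication failed: %s' % exc)
    return client
```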
* Port urls.py to python3
Fixes for python3 (largely normalizing byte vs. text strings)
* Rework what we do with attributes that aren't set already.
* Comments
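For illustration, the kind of normalization such a port involves; the helpers below are simplified stand-ins for ansible's own conversion utilities:

```python
def to_text(obj, encoding='utf-8'):
    """bytes -> text; text passes through unchanged."""
    if isinstance(obj, bytes):
        return obj.decode(encoding)
    return obj

def to_bytes(obj, encoding='utf-8'):
    """text -> bytes; bytes pass through unchanged."""
    if isinstance(obj, str):
        return obj.encode(encoding)
    return obj

# On python3 an HTTP response body arrives as bytes, while most callers
# of urls.py expect text:
body = to_text(b'{"status": "ok"}')
```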
Has already been transferred as a tempfile.
This fixes the error in https://github.com/ansible/ansible/issues/16125,
but there may be higher-level issues that should be fixed as well (other
modules might be able to cause status fields like failed and changed to
return a censored string instead of a bool), so leaving #16125 open for
now.
If someone runs:
ansible all -m file state=present
the error message is "Missing target hosts", which is misleading: the
target hosts are there; the real problem is the missing '-a'.
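For reference, the corrected form of the command above passes the module
arguments through '-a':
ansible all -m file -a "state=present"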
* In the VariableManager, we were not properly tracking whether a file
had already been loaded, so we continuously appended data to the end
of the host and group vars lists, duplicating large sets of data
multiple times (see the sketch below).
* In the inventory, we were needlessly merging the host/group vars with the
vars local to the host, as the VariableManager already handles that.
This duplicates the data and makes combining the vars in
VariableManager take even longer.
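A simplified sketch of the tracking fix; the names are illustrative, not the actual VariableManager attributes:

```python
_loaded_var_files = set()
host_vars = []

def add_vars_file(path, data):
    # remember which files were already loaded so repeated calls don't
    # append the same data to the list again
    if path in _loaded_var_files:
        return
    _loaded_var_files.add(path)
    host_vars.append(data)

add_vars_file('group_vars/all.yml', {'x': 1})
add_vars_file('group_vars/all.yml', {'x': 1})  # ignored the second time
print(len(host_vars))  # 1
```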
The output of 'ansible-galaxy info' was formatting the
'galaxy_info' key with one char per line.
Previously, when building the output string, for items in
role_info that had a dict for a value, the label for
its key ('galaxy_info', for example) was being added to
the text list in addition to being appended. Only
the append is needed.
Also added a unit test in test/units/cli/test_galaxy.py,
but skip it on py3 until galaxy is py3 compatible.
Fixes #15177
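The underlying python gotcha, reduced to a toy example: '+=' with a string on a list extends the list with individual characters, which is what produced the one-char-per-line output:

```python
text = []
text += 'galaxy_info'        # BUG: adds 11 one-character items
print(text)                  # ['g', 'a', 'l', 'a', 'x', 'y', '_', 'i', 'n', 'f', 'o']

text = []
text.append('galaxy_info')   # FIX: adds a single item
print(text)                  # ['galaxy_info']
```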
Ansible excessively checks the file system for the potential presence of
`group_vars` and `host_vars` files.
For large numbers of groups this leads to combinatorial performance
issues.
This commit generates a set of group_vars and host_vars filenames using
`os.listdir()` in every possible location and then checks against those
sets before stat()ing the file system.
Also included in this commit is caching of the base directory lookup
for the inventory.
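A minimal sketch of the approach (names are illustrative):

```python
import os

_dir_cache = {}

def var_file_exists(directory, name):
    """Answer existence checks from one cached os.listdir() per
    directory instead of stat()ing every candidate path."""
    if directory not in _dir_cache:
        try:
            _dir_cache[directory] = set(os.listdir(directory))
        except OSError:
            _dir_cache[directory] = set()
    return name in _dir_cache[directory]
```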
This makes it possible to use anything other than a list (e.g., a
tuple, or dict.keys() in py3k) for argument_spec choices. It also
improves the error messages if you don't use a list type.
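An illustrative version of such a check, not the actual AnsibleModule code:

```python
def check_choices(param, value, choices):
    # accept any non-string iterable, not just a list
    if isinstance(choices, (str, bytes)) or not hasattr(choices, '__iter__'):
        raise TypeError('internal error: choices for argument %s must be an '
                        'iterable of values, got %r' % (param, choices))
    choices = list(choices)  # works for tuples, sets, dict.keys(), ...
    if value not in choices:
        raise ValueError('value of %s must be one of: %s, got: %s'
                         % (param, ', '.join(map(str, choices)), value))

check_choices('state', 'present', {'present': 1, 'absent': 2}.keys())
```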
Child blocks (whether nested or via includes) don't get a copy of the
dependency chain, so the above method should be used to ensure the block
looks at its parent's dep chain.
Fixes #15996
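A sketch of what such a helper does, with simplified names:

```python
class Block(object):
    def __init__(self, parent=None, dep_chain=None):
        self._parent = parent
        self._dep_chain = dep_chain

    def get_dep_chain(self):
        # child blocks don't copy the chain, so walk up to the parent
        if self._dep_chain is None and self._parent is not None:
            return self._parent.get_dep_chain()
        return self._dep_chain

root = Block(dep_chain=['role_a', 'role_b'])
child = Block(parent=root)
print(child.get_dep_chain())  # ['role_a', 'role_b']
```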
* Re-add the service action plugin; it was removed because it created unexpected fact gathering, and there are no split service plugins that would make this useful (yet)
Revert "removed action plugin as service facts and separate modules don't work yet and this forces gathering facts"
This reverts commit 7368030651.
* now only does minimal fact gathering
This class can be used by F5 modules for raising exceptions.
This should be used to handle known errors and raise them so
that they can be printed in the fail_json method.
The built-in Exception class should not be used for this, because it
hides tracebacks that are necessary to have when debugging
problems with the module.
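A minimal sketch of the pattern (class and function names are illustrative):

```python
class F5ModuleError(Exception):
    """Raised for known, expected error conditions in F5 modules."""

def fail_json(**kwargs):
    # stand-in for AnsibleModule.fail_json
    raise SystemExit(kwargs['msg'])

def run():
    raise F5ModuleError('pool member already exists')

try:
    run()
except F5ModuleError as exc:
    # known error: report it cleanly
    fail_json(msg=str(exc))
# any other exception propagates with its traceback intact, which is
# what you want when debugging the module
```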
* Catch DistributionNotFound when pycrypto is absent
On Solaris 11, module `pkg_resources` throws `DistributionNotFound` on import if `cryptography` is installed but `pycrypto` is not. This change causes that situation to be handled gracefully.
I'm not using Paramiko or Vault, so my understanding is that I don't
need `pycrypto`. I could install `pycrypto` to make the error go away, but:
- The latest released version of `pycrypto` doesn't build cleanly on Solaris (https://github.com/dlitz/pycrypto/issues/184).
- Solaris includes an old version of GMP that triggers warnings every time Ansible runs (https://github.com/ansible/ansible/issues/6941). I notice that I can silence these warnings with `system_warnings` in `ansible.cfg`, but not installing `pycrypto` seems like a safer solution.
* Ignore only `pkg_resources.DistributionNotFound`, not other exceptions.
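A sketch of the guarded import this describes (the exact import site in ansible may differ):

```python
try:
    from pkg_resources import DistributionNotFound
except ImportError:
    DistributionNotFound = None

try:
    import paramiko
except ImportError:
    paramiko = None
except Exception as exc:
    if DistributionNotFound is not None and isinstance(exc, DistributionNotFound):
        # cryptography installed but pycrypto missing (seen on Solaris 11)
        paramiko = None
    else:
        raise
```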
With some earlier changes, continuing to forward failed hosts on
to the iterator with each TQM run() call was causing plays with
max_fail_pct set to fail, as old failures from previous plays
were being counted against the percentage calculation.
Also changed the linear strategy's calculation to use the internal
failed list, rather than the iterator, as this now represents the
hosts failed during the current run only.
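Illustrative arithmetic for the check, using only hosts that failed during the current run:

```python
def past_max_fail_percentage(failed_this_run, num_hosts, max_fail_pct):
    if num_hosts == 0:
        return False
    percent_failed = (len(failed_this_run) / float(num_hosts)) * 100.0
    return percent_failed > max_fail_pct

# 2 of 4 hosts failed in this run -> 50% > 30% -> stop the play
print(past_max_fail_percentage({'web1', 'web2'}, 4, 30))  # True
```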
This change makes it so we know when it is safe to get rid of the module
(when we stop supporting python2.4) and makes it easier for us to find,
and later update, code that uses the functions in there.
If needed, we'll create a pycompat26 and pycompat27 as well. These
files are for functions that are needed on that python version to write
portable code. So python-2.4 compatible modules may need code in
pycompat24, python26+ modules may need code in pycompat26, etc. If
a function is needed in multiple python versions, we should implement it
in an internal common file and use import to put it in the namespace for
each pycompatXY module.
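For example, pycompat24 can carry a helper along these lines (a simplified sketch):

```python
import sys

def get_exception():
    """Return the currently handled exception.

    Needed for portability: python 2.4/2.5 only accept 'except Foo, e',
    while python 3 only accepts 'except Foo as e'. Calling this inside a
    bare 'except:' block works on every version."""
    return sys.exc_info()[1]
```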
As noted in the comment, the TQM may be used for more than one play. As such,
after creating the new PlayIterator object it is necessary to mark any failed
hosts from previous calls to run() as failed in the iterator, so they are
properly skipped during any future calls to run().
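A simplified sketch of that step (a stub class stands in for the real iterator):

```python
class PlayIteratorStub(object):
    def __init__(self):
        self._failed = set()

    def mark_host_failed(self, host):
        self._failed.add(host)

# failures recorded by previous run() calls on the same TQM
previously_failed = {'web1', 'db1'}

iterator = PlayIteratorStub()
for host_name in previously_failed:
    # carry the old failures into the fresh iterator so these hosts are
    # skipped during this and future runs
    iterator.mark_host_failed(host_name)
```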