After post_validate() is called on an object, there should be no
need to continue looking up parent attributes. This patch adds a
new flag (_finalized) which is set to True at the end of post_validate,
and getattr will not look beyond the object's own attributes from that point on.
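A minimal sketch of the idea, using a simplified stand-in for the real playbook base class and its attribute plumbing:
```
class FieldAttributeBase(object):
    def __init__(self, parent=None):
        self._parent = parent
        self._finalized = False
        self._attributes = {}

    def post_validate(self, templar=None):
        # ... template and validate every field attribute here ...
        self._finalized = True

    def __getattr__(self, name):
        attrs = self.__dict__.get('_attributes', {})
        if name in attrs:
            return attrs[name]
        # only consult the parent while this object has not been finalized
        parent = self.__dict__.get('_parent')
        if not self.__dict__.get('_finalized', False) and parent is not None:
            return getattr(parent, name)
        raise AttributeError(name)

play = FieldAttributeBase()
task = FieldAttributeBase(parent=play)
play._attributes['become'] = True
print(task.become)                            # True: falls back to the parent
task.post_validate()
print(getattr(task, 'become', 'not found'))   # 'not found': no parent lookup any more
```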
* Allow making the jsonfile cache files pretty (indented and sorted)
Since the json cache files are condensed, it is not very practical to look for something in them. Having indented/sorted cache files makes debugging and playbook/inventory development a lot easier.
I made it configurable in case people object to the performance hit, but honestly, anyone concerned about that should probably be looking at other cache plugins instead.
* Removed the config option and documentation changes
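For reference, a quick illustration of the difference using the standard json module (the cached data here is made up):
```
import json

facts = {'ansible_distribution': 'Ubuntu', 'ansible_os_family': 'Debian'}

# condensed, roughly how the cache files were written before
print(json.dumps(facts, separators=(',', ':')))

# indented and sorted, which is much easier to scan while debugging
print(json.dumps(facts, indent=4, sort_keys=True))
```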
* Query lookup plugin
* Add license and docstrings
* Add python3-ish imports
* Change query plugin type from lookup to filter
* Switch from dq to jsonpath_rw
* Add integration test for query filter
* Rename query filter to json_query
* Add jsonpath-rw
* Rename query filter to json_query
* Switch query implementation from jsonpath-rw to jmespath
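The resulting json_query filter is backed by the jmespath library; a quick illustration of the kind of expression it evaluates (the sample data is made up):
```
import jmespath

servers = [{'name': 'web01', 'cluster': 'east'},
           {'name': 'web02', 'cluster': 'west'}]

# In a playbook this would be: {{ servers | json_query("[?cluster=='east'].name") }}
print(jmespath.search("[?cluster=='east'].name", servers))   # ['web01']
```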
Run setfacl/chown/chmod on each temp dir and file.
This fixes temp file permissions handling on platforms such as FreeBSD
which always return success when using find -exec. This is done by
eliminating the use of find when setting up temp files and directories.
Additionally, tests that now pass on FreeBSD have been enabled for CI.
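A rough illustration of the per-path approach, with made-up helper and path names rather than the actual action-plugin code:
```
from shlex import quote

def build_chmod_cmd(mode, paths):
    # one explicit command over known paths; its exit status actually reflects
    # failures, unlike `find <dir> -exec chmod ... {} \;` on e.g. FreeBSD
    return 'chmod %s %s' % (mode, ' '.join(quote(p) for p in paths))

paths = ['/tmp/ansible-tmp-1/', '/tmp/ansible-tmp-1/module.py']
print(build_chmod_cmd('u+x', paths))
```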
Due to the way we load plugins, Python can run into internal import issues when
the debug strategy is loaded after the linear strategy. To work around this,
we're changing the import line for the linear strategy to avoid the problem.
Related to #16825
uri:
  follow_redirects: no
will lead YAML to set follow_redirects=False. This is problematic when
the module parameter is not a boolean value but a string. For instance:
follow_redirects = dict(required=False, default='safe', choices=['all', 'safe', 'none', 'yes', 'no']),
Our parameter validation code ends up getting follow_redirects="False"
instead of "no". The 100% fix is for the user to quote their strings in
playbooks like:
uri:
  follow_redirects: "no"
But we can fix quite a few common cases by trying to switch "False" back
into the string that it was specified as. We only do this if there is
only one correct choices value that could have been specified. In the
follow_redirects example, a value of "True" only maps back to "yes" and
a value of "False" only maps back to "no" so we can do this. If choices
also contained "on" and "off" then we couldn't map back safely and would
need to force the module author to change the module to handle this
case.
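A sketch of that mapping-back idea, using simplified boolean tables rather than the actual AnsibleModule internals:
```
BOOLEANS_TRUE = ('yes', 'on', '1', 'true')
BOOLEANS_FALSE = ('no', 'off', '0', 'false')

def recover_choice(value, choices):
    if value is True:
        overlap = [c for c in choices if c in BOOLEANS_TRUE]
    elif value is False:
        overlap = [c for c in choices if c in BOOLEANS_FALSE]
    else:
        return value
    # unambiguous: exactly one choice could have been written as this boolean
    if len(overlap) == 1:
        return overlap[0]
    return value

choices = ['all', 'safe', 'none', 'yes', 'no']
print(recover_choice(False, choices))                      # 'no'
print(recover_choice(True, choices))                       # 'yes'
print(recover_choice(False, ['yes', 'no', 'on', 'off']))   # False: ambiguous, left alone
```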
Fixes parts of the following PRs:
* https://github.com/ansible/ansible-modules-core/pull/4220
* https://github.com/ansible/ansible-modules-extras/pull/2593
* These can still race when multiple ansible processes are created at
the same time.
* Reverse the order of expanduser and expandvars in unfrackpath() so that
tildes in environment variables are handled (illustrated below).
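A standalone illustration of why the order matters (this is not the unfrackpath() code itself):
```
import os
os.environ['MYDIR'] = '~/projects'

path = '$MYDIR/ansible'
print(os.path.expanduser(os.path.expandvars(path)))   # /home/<user>/projects/ansible
print(os.path.expandvars(os.path.expanduser(path)))   # ~/projects/ansible  (tilde survives)
```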
When a task result has an empty results list, the
list should be ignored when determining the results
of `_check_key`. Here the empty list is treated the
same as a non-existent list.
This fixes a bug that manifests itself with squashed
items - namely the task result contains the correct
value for the key, but an empty results list. The
empty results list was treated as zero failures
when deciding which handler to call - so the task
shows as a success in the output, but is deemed to
have failed when deciding whether to continue.
This also demonstrates a mismatch between task
result processing and play iteration.
A test is also added for this case, but it would not
have caught the bug - because the bug is really in
the display, and not the success/failure of the
task (visually the test is more accurate).
Fixes ansible/ansible-modules-core#4214
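A simplified sketch of the intended `_check_key` behaviour (not the actual TaskResult code):
```
def _check_key(task_result, key):
    results = task_result.get('results')
    if results:   # only a non-empty per-item results list is inspected
        return any(item.get(key, False) for item in results)
    # empty or missing results list: fall back to the top-level key
    return task_result.get(key, False)

squashed = {'failed': True, 'results': []}   # e.g. squashed loop items
print(_check_key(squashed, 'failed'))        # True, instead of the old False
```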
This feature changes the scalar value of `serial:` to a list, which
allows users to specify a list of values, so batches can be ramped
up (commonly called "canary" setups):
- hosts: all
  serial: [1, 5, 10, "100%"]
  tasks:
    ...
* Revert "There can be only one localhost"
This reverts commit 5f1bbb4fcd.
This broke several usages of localhost, see #16882, #16898 and #16886
* ensure there is only 1 localhost
fixes #16886, #16882 and #16898
- make sure localhost exists before returning it
- optimized host caching
- ensure we always return a host object
The module level function defs for gcdns_connect() and
gce_connect() provide a default arg for 'provider' that
references into the libcloud module. If the libcloud
modules were not installed, the gce/gcdns python modules
would throw ImportError.
Let the provider arg default to None and if not provided,
set it to the default libcloud.compute.types.Provider.*
value if the modules are installed.
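A sketch of the deferred-default pattern described above; the function body and failure message are illustrative, not the exact module_utils code:
```
# A def line like
#
#   def gcdns_connect(module, provider=Provider.GOOGLE):
#
# needs the libcloud Provider class at import time, so merely importing the
# utility module failed when libcloud was not installed. Defaulting to None
# and resolving the provider inside the function avoids that:
try:
    from libcloud.dns.types import Provider
    HAS_LIBCLOUD = True
except ImportError:
    HAS_LIBCLOUD = False

def gcdns_connect(module, provider=None):
    if not HAS_LIBCLOUD:
        module.fail_json(msg='libcloud with Google Cloud DNS support is required')
    # the default is resolved only when the function is called
    provider = provider or Provider.GOOGLE
    # ... build and return the libcloud DNS driver connection ...
```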
The lack of a comma caused the statement to always raise a
`TypeError`, because Python interpreted `value (list, tuple, dict)` as a call to
value with the arguments list, tuple, and dict.
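What the missing comma does, in miniature (the isinstance context is assumed for illustration):
```
value = 42
isinstance(value, (list, tuple, dict))     # intended check: False
# isinstance(value (list, tuple, dict))    # missing comma: evaluates value(list, tuple, dict)
#                                          # and raises TypeError: 'int' object is not callable
```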
This is a refactoring of the existing GCE utility module to support other projects on Google Cloud Platform.
The previous gce.py module was hard-coded specifically for GCE, and attempting to use it with other projects in GCP failed.
See https://github.com/ansible/ansible/pull/15918#issuecomment-220165913 for more detail.
This has also been an issue for others in the past, although they've handled it by simply
duplicating some of the logic of gce.py in their own modules.
- The existing gce.py module was renamed to gcp.py, and modified to remove any
imports or other code that refers to libcloud.compute or GCE (the GCE_* params were
retained for compatibility). I also renamed the gce_connect function to gcp_connect,
and modified the function signature to make supplying a provider, driver, and agent
information mandatory.
- A new gce.py module was created to handle connectivity to GCE. It imports the
appropriate libcloud.compute providers and drivers, and then passes them on
to gcp_connect in gcp.py (see the sketch after this list). The constants and function signatures are the same
as the old gce.py, so compatibility with existing modules is retained.
- A new gcdns.py module was created to support PR ansible/ansible-modules-extras#2252
for two new Google Cloud DNS modules, and to demonstrate support for a non-GCE
Google Cloud service. It follows the same basic structure as the new gce.py module,
but imports from libcloud.dns instead.
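A sketch of the layering described above; the gcp_connect() signature is assumed from this description rather than copied from the actual module_utils code:
```
# new gce.py: owns the libcloud.compute imports and the GCE user-agent details,
# then hands the shared connection work to gcp.py
try:
    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver
    HAS_LIBCLOUD = True
except ImportError:
    HAS_LIBCLOUD = False

from ansible.module_utils.gcp import gcp_connect

USER_AGENT_PRODUCT = 'Ansible-gce'
USER_AGENT_VERSION = 'v1'

def gce_connect(module, provider=None):
    if not HAS_LIBCLOUD:
        module.fail_json(msg='libcloud with GCE support is required for this module')
    provider = provider or Provider.GCE
    return gcp_connect(module, provider, get_driver,
                       USER_AGENT_PRODUCT, USER_AGENT_VERSION)
```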
I'm not sure why that would be desirable -- we really want __version__
to come from the controller whereas importing will come from the client
node. If it turns out there was a reason to do that, please be sure to
use an exception handler that catches all exceptions instead of only
catching ImportError:
```
try:
    from ansible.release import __version__, __author__
except:
    __version__ = [...]
```
Fixes #16523