Due to the way we load plugins, Python can run into import issues when
the debug strategy is loaded after the linear strategy. To work around this,
we're changing the import line for the linear strategy to avoid the problem.
Related to #16825
uri:
  follow_redirects: no
will lead YAML to set follow_redirects=False. This is problematic when
the module parameter is not a boolean value but a string. For instance:
follow_redirects = dict(required=False, default='safe', choices=['all', 'safe', 'none', 'yes', 'no']),
Our parameter validation code ends up getting follow_redirects="False"
instead of "no". The 100% fix is for the user to quote their strings in
playbooks like:
uri:
  follow_redirects: "no"
But we can fix quite a few common cases by trying to switch "False" back
into the string that it was specified as. We only do this if there is
only one correct choices value that could have been specified. In the
follow_redirects example, a value of "True" only maps back to "yes" and
a value of "False" only maps back to "no" so we can do this. If choices
also contained "on" and "off" then we couldn't map back safely and would
need to force the module author to change the module to handle this
case.
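A minimal sketch of that mapping, assuming boolean alias tuples similar to
Ansible's (the helper below is illustrative, not the actual parameter-validation code):
```
BOOLEANS_TRUE = ('yes', 'on', '1', 'true', 1)
BOOLEANS_FALSE = ('no', 'off', '0', 'false', 0)

def map_bool_back_to_choice(value, choices):
    """If YAML turned a bare yes/no into a bool, map it back to the single
    matching string in choices -- but only when the mapping is unambiguous."""
    if not isinstance(value, bool):
        return value
    aliases = BOOLEANS_TRUE if value else BOOLEANS_FALSE
    overlap = [choice for choice in choices if choice in aliases]
    if len(overlap) == 1:
        return overlap[0]
    # Ambiguous (e.g. both 'no' and 'off' are valid choices); leave as-is and
    # require the module author to handle it.
    return value

# map_bool_back_to_choice(False, ['all', 'safe', 'none', 'yes', 'no']) -> 'no'
```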
Fixes parts of the following PRs:
* https://github.com/ansible/ansible-modules-core/pull/4220
* https://github.com/ansible/ansible-modules-extras/pull/2593
* These can still race when multiple ansible processes are created at
the same time.
* Reverse order of expanduser and expandvars in unfrackpath(), so that
tildes in environment variables will be handled.
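A small illustration of why the order matters (the environment variable here is hypothetical):
```
import os
import os.path

os.environ['PROJECT_ROOT'] = '~/work'   # hypothetical variable whose value starts with a tilde
path = '$PROJECT_ROOT/ansible'

old_order = os.path.expandvars(os.path.expanduser(path))  # '~/work/ansible' -- tilde survives
new_order = os.path.expanduser(os.path.expandvars(path))  # '/home/<user>/work/ansible'
```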
When a task result has an empty results list, the
list should be ignored when determining the results
of `_check_key`. Here the empty list is treated the
same as a non-existent list.
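Roughly, the intended behaviour looks like this (a sketch of the idea, not the
actual `_check_key` implementation):
```
def check_key(task_result, key):
    results = task_result.get('results')
    if not results:
        # A missing results list and an empty one are treated the same:
        # fall back to the top-level value for the key.
        return task_result.get(key, False)
    return any(item.get(key, False) for item in results if isinstance(item, dict))
```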
This fixes a bug that manifests itself with squashed
items - namely the task result contains the correct
value for the key, but an empty results list. The
empty results list was treated as zero failures
when deciding which handler to call - so the task
shows as a success in the output, but is deemed to
have failed when deciding whether to continue.
This also demonstrates a mismatch between task
result processing and play iteration.
A test is also added for this case, but it would not
have caught the bug, because the bug is really in
the display and not in the success/failure of the
task (visually, though, the test is more accurate).
Fixes ansible/ansible-modules-core#4214
This feature changes `serial:` to accept a list in addition to a single
scalar value, which allows users to specify a list of batch sizes so batches
can be ramped up (commonly called "canary" setups):
- hosts: all
  serial: [1, 5, 10, "100%"]
  tasks:
    ...
* Revert "There can be only one localhost"
This reverts commit 5f1bbb4fcd.
this broke several usages of localhost, see #16882, #16898 and #16886
* ensure there is only 1 localhost
fixes #16886, #16882 and #16898
- make sure localhost exists before returning it
- optimized host caching
- ensure we always return a host object
The module level function defs for gcdns_connect() and
gce_connect() provide a default arg for 'provider' that
references into the libcloud module. If the libcloud
modules were not installed, the gce/gcdns python modules
would throw ImportError.
Let the provider arg default to None and if not provided,
set it to the default libcloud.compute.types.Provider.*
value if the modules are installed.
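A minimal sketch of the pattern, with illustrative names rather than the exact
module_utils code:
```
try:
    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver
    HAS_LIBCLOUD = True
except ImportError:
    HAS_LIBCLOUD = False

def gce_connect(module, provider=None):
    # Defaulting to None keeps the def line free of libcloud references;
    # the real default is only resolved once we know libcloud is installed.
    if not HAS_LIBCLOUD:
        module.fail_json(msg='libcloud with GCE support is required for this module')
    if provider is None:
        provider = Provider.GCE
    driver = get_driver(provider)
    # ... build and return the connection using `driver` ...
```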
The lack of a comma caused the statement to always raise a `TypeError`,
because Python interpreted `value (list, tuple, dict)` as a call to value
with the arguments list, tuple, and dict.
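For illustration, here is the class of mistake being described (the isinstance()
context is assumed, not quoted from the actual diff):
```
value = 42

# Missing comma: Python parses `value (list, tuple, dict)` as the call
# value(list, tuple, dict), which raises TypeError because an int is not callable.
isinstance(value (list, tuple, dict))

# With the comma, this is the intended check against a tuple of types.
isinstance(value, (list, tuple, dict))
```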
This is a refactoring of the existing GCE utility module to support other projects on Google Cloud Platform.
The previous gce.py module was hard-coded specifically for GCE, and attempting to use it with other projects in GCP failed.
See https://github.com/ansible/ansible/pull/15918#issuecomment-220165913 for more detail.
This has also been an issue for others in the past, although they've handled it by simply
duplicating some of the logic of gce.py in their own modules.
- The existing gce.py module was renamed to gcp.py, and modified to remove any
imports or other code that refers to libcloud.compute or GCE (the GCE_* params were
retained for compatibility). I also renamed the gce_connect function to gcp_connect,
and modified the function signature to make supplying a provider, driver, and agent
information mandatory.
- A new gce.py module was created to handle connectivity to GCE. It imports the
appropriate libcloud.compute providers and drivers, and then passes them on
to gcp_connect in gcp.py. The constants and function signatures are the same
as the old gce.py, so compatibility with existing modules is retained.
- A new gcdns.py module was created to support PR ansible/ansible-modules-extras#2252
for two new Google Cloud DNS modules, and to demonstrate support for a non-GCE
Google Cloud service. It follows the same basic structure as the new gce.py module,
but imports from libcloud.dns instead.
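A rough sketch of the shape of the new gcdns.py helper described above (constant
names and the gcp_connect signature are paraphrased from this description, not
copied from the file):
```
# module_utils/gcdns.py (sketch)
try:
    from libcloud.dns.types import Provider
    from libcloud.dns.providers import get_driver
    HAS_LIBCLOUD = True
except ImportError:
    HAS_LIBCLOUD = False

from ansible.module_utils.gcp import gcp_connect

USER_AGENT_PRODUCT = 'Ansible-gcdns'
USER_AGENT_VERSION = 'v1'

def gcdns_connect(module, provider=None):
    """Connect to Google Cloud DNS by delegating to the shared gcp_connect()."""
    if not HAS_LIBCLOUD:
        module.fail_json(msg='libcloud with Google Cloud DNS support is required')
    provider = provider or Provider.GOOGLE
    return gcp_connect(module, provider, get_driver,
                       USER_AGENT_PRODUCT, USER_AGENT_VERSION)
```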
I'm not sure why that would be desirable -- we really want __version__
to come from the controller whereas importing will come from the client
node. If it turns out there was a reason to do that, please be sure to
use an exception handler that catches all exceptions instead of only
catching ImportError:
```
try:
from ansible.release import __version__, __author__
except:
__version__ = [...]
```
Fixes #16523
* switch cwd to basedir of task
This restores the pre-2.0 behaviour and allows 'local type' plugins
and actions to have a more predictable relative path.
fixes #14489
* removed FIXME since prev commit 'fixes' this
* fix tests, now they need a loader (thanks jimi!)
* add check_mode option for tasks
includes example testcases for the template module
* extend check_mode option
* replace always_run, see also proposal rename_always_run
* rename always_run where used and add deprecation warning
* add some documentation
* have check_mode overwrite always_run
* use unique template name to prevent conflicts
test_check_mode was right before, but failed due to using the same filename as other roles
* still mention always_run in the docs
* set deprecation of always_run to version 2.4
* fix rst style
* expand documentation on per-task check mode
Now systemd will run even if the service module is invoked with parameters that it does not support;
these will be removed before invoking systemd and a warning will be issued.
This facility will work for any new service modules.
A simple import of cryptography can throw several types of errors. For example,
if `setuptools` is less than cryptography's minimum requirement of 11.3, then
this import of cryptography will throw a VersionConflict here. An earlier case
threw a DistributionNotFound exception.
An optional dependency should not stop ansible. If the error is something other
than an ImportError, log a warning, so that errors can be fixed in ansible or
elsewhere.
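A sketch of the defensive import pattern being described (the warning helper is
Ansible's Display; the rest is illustrative):
```
from ansible.utils.display import Display
display = Display()

try:
    import cryptography
    HAS_CRYPTOGRAPHY = True
except ImportError:
    HAS_CRYPTOGRAPHY = False
except Exception as e:
    # setuptools/pkg_resources can raise VersionConflict or DistributionNotFound
    # while cryptography is being imported; don't let that stop ansible.
    HAS_CRYPTOGRAPHY = False
    display.warning('Optional dependency cryptography raised an exception on '
                    'import: %s' % e)
```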
This bug was introduced in 3ced6d3, where getting vars from a role
did not follow the dep chain. This was originally hidden by the fact
that we got vars twice (from the block and from the roles directly).
Fixes #16729
* fixed lookup search path
added ansible_search_path var that contains the proper list, in order
removed the roledir var, which was only used by first_found; the rest used role_path
added a needle function for lookups that mirrors the action plugin one, so now
both types of plugins use the same pathing.
* added missing os import
* renamed as per feedback
* fixed missing rename in first_found
* also fixed first_found
* fixed import to match new error class
* fixed getattr ref
* moved tests from filters to actual jinja2 tests
also removed some unused declarations and imports
* split tests into their own docs
removed isnan since jinja2's existing 'number' test already covers the same
added missing docs for several tests
* updated as per feedback
2e003adb added the ability for tasks using any_errors_fatal to fail
when there were unreachable hosts. However that patch used the running
unreachable hosts data rather than the results from the current task,
which causes failures when any run_once or BYPASS_HOST_LOOP task is hit
after an unreachable host causes a failure. This patch corrects that by
using the current set of results to determine if any hosts were
unreachable during the last task only.
Fixes ansible/ansible-modules-core#4160
This adds an action plugin that will allow config and template modules
to be merged into a single module. Once completed this will supersede
the net_template action plugin.
* add load_config() for loading a set of configuration commands
* add load_candidate() function for loading a candidate config
* updates shared module to provide NetworkModule instead of get_module
* fixes Cli transport implementation for 2.2 refactor
* updates ios documentation fragments with new options
* diff functions now split out for easier troubleshooting
* added dumps() function to serialize config objects to strings
* difference() can now expand all blocks instead of just singular blocks
includes changes from PR ansible/ansible#16636 and refactors for the
NetworkModule changes
new features
* ios now supports transport=restcon with additional arguments
* ModuleStub refactored into common network shared module
* import temporary get_module() function (to be removed prior to 2.2 final)
This is a temporary change to keep the get_module() function until all
of the network module refactoring is completed to avoid breaking them
in devel. The get_module() function should not be used and will be
removed before 2.2 final.
* Update IOS with new NetworkModule
* Remove redundant EOS code
* `authorize` can get rolled into NetCli
* Fix up IOS to where EOS is.
* Update IOSXR for NetworkModule
* collections is unnecessary
Since Ansiballz, we no longer need to import basic directly into
a new-style module. Some modules, like the Networking modules, may
import basic in their own module_utils files and the module will import
that specialized module_util file rather than basic.
* Don't treat parsing problems as async task timeout
If there is a problem reading/writing the status file that manifests as
not being able to parse the data, that doesn't mean the task timed out,
it means there was likely a temporary problem. Move on and
keep polling for success. The only things that should cause the async
status to not be parseable are bugs in the async_runner.
* Add comment explaining not bailing out of loop
* Return different error when result is unparseable
* Remove extraneous else
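A hedged sketch of the polling behaviour from the first bullet (function name and
exception handling are illustrative, not the actual async code):
```
import json

def read_async_status(path):
    """Return the parsed status dict, or None if the file is not (yet) parseable."""
    try:
        with open(path) as f:
            return json.loads(f.read())
    except (IOError, OSError, ValueError):
        # A missing or half-written status file is not a timeout; report it as
        # "still running" so the caller keeps polling. Only a file that never
        # becomes parseable is surfaced as a distinct parse error later.
        return None
```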
* Instead of rebuilding the handler list all over the place, we now
compile the handlers at the point the play is post-validated so that
the view of the play in the PlayIterator contains the definitive list
* Assign the dep_chain to the handlers as they're compiling, just as we
do for regular tasks
* Clean up the logic used to find a given handler, which is greatly
simplified by the above changes
Fixes #15418
The `boto3_conn` function requires a module argument, and calls
`module.fail_json` if the connection doesn't receive enough arguments.
In non-module settings like inventory scripts, there is no module to be
passed.
The `boto3_inventory_conn` function takes the same arguments except for
`module`, and both call `_boto3_conn`, which doesn't require a module to be
passed.
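The resulting split looks roughly like this (signatures, error handling, and the
`_boto3_conn` body are paraphrased from the description above, not copied verbatim):
```
def _boto3_conn(conn_type, resource, region, endpoint, **params):
    """Shared helper from the text above: builds the connection, no module needed."""
    import boto3
    session = boto3.session.Session(**params)
    factory = session.client if conn_type == 'client' else session.resource
    return factory(resource, region_name=region, endpoint_url=endpoint)

def boto3_conn(module, conn_type=None, resource=None, region=None, endpoint=None, **params):
    try:
        return _boto3_conn(conn_type, resource, region, endpoint, **params)
    except ValueError as e:
        # An AnsibleModule is available here, so report the error through it.
        module.fail_json(msg=str(e))

def boto3_inventory_conn(conn_type=None, resource=None, region=None, endpoint=None, **params):
    # No AnsibleModule in inventory scripts, so errors simply propagate.
    return _boto3_conn(conn_type, resource, region, endpoint, **params)
```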
* fixes lots of bugs in the get_config function so it performs correctly
* refactors load_config into load_candidate
* adds load_config function to convert commands to NetworkConfig
The Command object can now store the response from executing the command,
so it can be retrieved later by command name. This change updates the
Command instance with the response before returning.
This adds a new method that will return the output from a specified
command that has already been executed by the CommandRunner. The new
method, get_command, takes a single argument: the full name
of the command to retrieve.
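A minimal sketch of the accessor (attribute names and the error raised are
assumptions based on the description, not the exact CommandRunner code):
```
class CommandRunner(object):
    # ... existing runner state; self.commands holds the Command objects,
    # each of which now stores its response (see above) ...

    def get_command(self, command):
        """Return the stored response for the command with the given full name."""
        for cmd in self.commands:
            if cmd.command == command:
                return cmd.response
        raise ValueError('command not executed by this runner: %s' % command)
```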
6eefc11c converted task.loop_control into an object, but while the other
callers were updated to use .loop_var instead of .get('loop_var'), this
site was overlooked.
This can be reproduced by including with loop_control a file that does
set_fact; a simple regression test along these lines is included.
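In other words, the overlooked call site needed the same one-line change the other
callers already got (reconstructed from the description, not quoted from the diff):
```
# Before: dict-style access, which breaks once loop_control is an object
loop_var = task.loop_control.get('loop_var')

# After: attribute access on the loop_control object
loop_var = task.loop_control.loop_var
```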
This fix prevents a broken pipe exception from occurring when password-less
SSH is configured and the sshpass process exits and closes the pipe before
the password is written to the pipe.
We want to update host vars for all hosts (even those that might
have failed), and in the case of a refresh_inventory, the code has
a stale restrictions list at this point anyway.