Following some earlier changes, continuing to forward failed hosts on
to the iterator with each TQM run() call was causing plays with
max_fail_percentage set to fail, because failures from previous plays
were being counted against the current play's percentage calculation.
Also changed the linear strategy's calculation to use the internal
failed list rather than the iterator, as that list now represents only
the hosts that failed during the current run.
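A minimal self-contained sketch of the corrected check (function and
variable names here are illustrative, not the real strategy code):

    # Count only hosts that failed during the current run() call.
    def past_failure_threshold(failed_this_run, batch_size, max_fail_pct):
        """Return True when the play should abort."""
        return (len(failed_this_run) * 100.0) / batch_size > max_fail_pct

    # Example: 1 current failure out of 10 hosts stays under a 30%
    # threshold, even if other hosts failed in earlier plays.
    assert not past_failure_threshold({'web1'}, 10, 30)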
As noted in the comment, the TQM may be used for more than one play. As such,
after creating the new PlayIterator object it is necessary to mark any failed
hosts from previous calls to run() as failed in the iterator, so they are
properly skipped during any future calls to run().
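Roughly, the marking step looks like this (a simplified fragment of the
idea, not the verbatim TQM code):

    # Seed the fresh PlayIterator with failures from earlier run() calls
    # so those hosts are skipped for the rest of this play.
    for host_name in self._failed_hosts:
        host = self._inventory.get_host(host_name)
        iterator.mark_host_failed(host)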
By default the `Shell` class disables ssh agents. The `junos_netconf`
module uses this class, but doesn't re-enable agents.
Here it's explicitly enabled again, so an ssh agent can be used to
connect to and configure Junos devices.
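For reference, this is roughly what an agent-enabled connection looks
like at the paramiko level (host name and credentials are placeholders):

    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    # allow_agent=True lets paramiko offer keys held by a running ssh-agent
    client.connect('junos.example.com', username='admin', allow_agent=True)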
Since pkgin is now the default package manager, it has moved to
another location on NetBSD:

    netbsd# type pkgin
    pkgin is a tracked alias for /usr/pkg/bin/pkgin
    netbsd# uname -a
    NetBSD netbsd.example.org 6.1.4 NetBSD 6.1.4 (GENERIC) amd64
But since the package manager is also used outside of NetBSD, we
have to keep the /opt/local path too.
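A simplified sketch of a lookup that honors both locations (the probing
logic here is illustrative; the module itself resolves the binary
through its own helpers):

    import os

    # pkgin lives in /usr/pkg/bin on NetBSD, but other pkgsrc platforms
    # (e.g. SmartOS) still ship it under /opt/local/bin.
    CANDIDATES = ['/opt/local/bin/pkgin', '/usr/pkg/bin/pkgin']

    def find_pkgin():
        for path in CANDIDATES:
            if os.path.exists(path):
                return path
        return None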
This change is needed to support include statements inside Jinja2
template files, as in '{% include ['another.j2'] %}'. I need this
capability because the OpenSwitch `switch` role has to handle multiple
*.j2 files, and supporting the include statement inside a Jinja2 file
is essential; otherwise I would need to combine multiple template
files into a single file, which easily causes conflicts between
developers working on different parts of the template, ports, and
interfaces.
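A self-contained illustration of the behavior this enables (file names
are hypothetical):

    from jinja2 import Environment, FileSystemLoader

    # A loader with a searchpath is what makes '{% include %}' resolve;
    # templates rendered from a bare string have nowhere to look.
    env = Environment(loader=FileSystemLoader('templates/'))
    # main.j2 contains: {% include ['another.j2'] %}
    print(env.get_template('main.j2').render(ports=[1, 2, 3]))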
Prior to this patch, the retry/until logic would fail any task that
succeeded if it took all of the allotted retries to succeed. This patch
reworks the retry/until logic to make it simpler and clearer.
Fixes #15697
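A simplified model of the corrected behavior (not the executor code
itself): success on the final attempt is still success.

    def run_with_retries(task, retries, until):
        # Assumes at least one retry is allowed.
        for _ in range(retries):
            result = task()
            if until(result):
                return result            # success, even on the last attempt
        result['failed'] = True          # fail only once retries are exhausted
        return result

    # Example: a task that succeeds exactly on the 3rd of 3 attempts passes.
    calls = {'n': 0}
    def flaky():
        calls['n'] += 1
        return {'rc': 0 if calls['n'] == 3 else 1}
    assert 'failed' not in run_with_retries(flaky, 3, lambda r: r['rc'] == 0)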
When using run_once, there is only one dict of facts, so passing it
to the VariableManager results in the fact cache containing the same
dictionary reference for all hosts in inventory. This patch fixes that
by making sure we pass a copy of the facts dict to VariableManager.
Fixes #14279
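A minimal illustration of the aliasing bug and the fix (hypothetical
host and fact names):

    facts = {'os_family': 'Debian'}
    hosts = ['web1', 'web2']

    shared = {h: facts for h in hosts}          # bug: every host aliases one dict
    copied = {h: facts.copy() for h in hosts}   # fix: a separate dict per host

    facts['os_family'] = 'RedHat'
    assert shared['web2']['os_family'] == 'RedHat'   # change leaks to all hosts
    assert copied['web2']['os_family'] == 'Debian'   # isolated per host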
Previously the changed code was necessary; however, it is now
problematic, as we've started using the is_failed() method in other
places in the code. Additional changes at the strategy layer should
make this safe to remove now.
Fixes #15625
In VariableManager, we fetch the params specifically in the next step,
so including them in the prior step is unnecessary and could lead to things
being overridden in an improper order.
In Block, we should not be getting the params for the role as they are
included earlier via the VariableManager.
Fixes #14411
Fixes #15745
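A small generic illustration of why fetching the params in two places
is risky (not the real merge code): with layered updates, whichever
source is applied last wins.

    merged = {}
    merged.update({'port': 80})      # earlier, lower-precedence source
    merged.update({'port': 8080})    # role params, applied at the right step
    assert merged['port'] == 8080    # a second, earlier fetch could reorder this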
Applies conditional forwarding to all tasks/roles within the included playbook.
The existing line only applied the forwarded conditionals to the main task blocks, missing pre_tasks, post_tasks, and roles.
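A self-contained model of the broader application (class and section
names simplified):

    class Block:
        def __init__(self):
            self.when = []

    sections = {'pre_tasks': [Block()], 'roles': [Block()],
                'tasks': [Block()], 'post_tasks': [Block()]}
    forwarded = ["inventory_hostname == 'web1'"]

    # Forward the include's conditionals to every section, not just tasks.
    for blocks in sections.values():
        for block in blocks:
            block.when = forwarded + block.when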
Typo: made a selection mistake when I copied over the one-line change.
* Update GCE module to use JSON credentials
* Ensure minimum libcloud version when using JSON credentials for GCE
* Relax language around libcloud requirements
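A hedged sketch of the version gate (the exact minimum shown is an
assumption; the module's documentation is authoritative):

    from distutils.version import LooseVersion
    import libcloud

    MIN_JSON_CREDS = '0.17.0'   # assumed minimum for JSON key support
    if LooseVersion(libcloud.__version__) < LooseVersion(MIN_JSON_CREDS):
        raise SystemExit('libcloud >= %s is required for JSON credentials'
                         % MIN_JSON_CREDS)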
In the free strategy, we mark a host as blocked when it has work to do
(the PlayIterator returns a task) to prevent multiple tasks from being sent
to the host. However, we check for role duplicates after setting the blocked
flag, and that flag was not being cleared when the task was skipped,
leading to an infinite loop. This patch corrects that by clearing the
blocked flag when the task is skipped.
Fixes #15681
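A self-contained model of the scheduling bug and the fix:

    hosts_blocked = {'web1': False}

    def schedule(host, task, is_duplicate_role):
        hosts_blocked[host] = True       # host now has work in flight
        if is_duplicate_role:
            hosts_blocked[host] = False  # the fix: unblock on skip
            return False
        return True

    schedule('web1', 'some_role_task', is_duplicate_role=True)
    assert not hosts_blocked['web1']     # without the fix, web1 stays blocked forever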
In `lib/ansible/executor/play_iterator.py`, ansible sets a host's
`_gathered_facts` property to `True` without checking to see if there
are any tasks to be executed. In the event that the entire play is
skipped, `_gathered_facts` will be `True` even though the `setup`
module was never run.
This patch modifies the logic to only set `_gathered_facts` to `True`
when there are tasks to execute.
Closes #15744.
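A simplified model of the guarded flag (not the PlayIterator code
itself):

    def first_task(tasks):
        """Return the next runnable task, or None if the play is skipped."""
        return tasks[0] if tasks else None

    host_state = {'_gathered_facts': False}
    task = first_task([])                    # entire play skipped
    if task is not None:                     # the fix: guard the flag
        host_state['_gathered_facts'] = True
    assert host_state['_gathered_facts'] is False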
Issue #15633 observes that a 'meta: refresh_inventory' task causes the playbook
to exit. An inventory refresh flushes all caches and rebuilds all host
objects, assigning new UUIDs to each. These new host UUIDs currently fail to
match those on host objects stored for restrictions in the inventory, causing
the playbook to exit for having no hosts to run further tasks against.
This changeset attempts to address this issue by storing host restrictions
by name, and comparing inventory host names against these names when applying
restrictions in get_hosts.
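A self-contained model of the name-based restriction (simplified
classes):

    class Host:
        def __init__(self, name):
            self.name = name     # stable across refreshes, unlike the UUID

    restriction = {'web1'}       # stored by name, not by host object

    def get_hosts(all_hosts):
        return [h for h in all_hosts if h.name in restriction]

    refreshed = [Host('web1'), Host('web2')]   # new objects after a refresh
    assert [h.name for h in get_hosts(refreshed)] == ['web1']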