This addresses two issues with the nxos shared module. The first issue is
argument precedence checking. The module should prefer explicit arguments
over arguments passed via the provider. This is now fixed to honor that
precedence. The second issue is collecting output from nxapi and returning
the response. Prior to this change the entire JSON structure was returned.
Now just the output is returned, to align it better with CLI-based output.
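A minimal sketch of the precedence rule being fixed here, assuming a hypothetical resolve_argument() helper; the real module_utils code differs in detail:

    def resolve_argument(name, module_params, provider):
        # An explicitly supplied task argument always wins; the provider
        # dict is only consulted as a fallback.
        value = module_params.get(name)
        if value is None:
            value = (provider or {}).get(name)
        return value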
The eos shared module should prefer to use explicit task arguments over
arguments provided through the provider. This fixes a problem where
that was not the case.
So far, when a 'diff' dict is returned with module results, it is
checked for 'before' and 'after' texts, which are processed in
_get_diff() by Python's difflib. This generates the changes to display
when CLI users specify --diff.
However, some modules will generate changes that cannot easily be
expressed in a conventional diff. One example is the output of the
synchronize module, which presents changed files in a common log format
as in `rsync --itemize-changes`.
Add a check for a diff['prepared'] key, which can contain prepared diff text
from modules.
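For illustration, a module that wants to take advantage of this could return something like the following (a hedged sketch; the rsync-style lines shown are placeholders):

    from ansible.module_utils.basic import AnsibleModule

    module = AnsibleModule(argument_spec=dict())
    # A 'prepared' diff is displayed as-is under --diff, bypassing difflib.
    module.exit_json(
        changed=True,
        diff={'prepared': ">f+++++++++ files/app.conf\n"
                          "cd+++++++++ files/conf.d/"},
    )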
* In 2.0.0.x become was reversed for synchronize. It was happening on
the local machine instead of the remote machine. This restores the
ansible-1.9.x behaviour of doing become on the remote machine.
However, there are aspects of this that are hacky (no hackier than
ansible-1.9 but not using 2.0 features). The big problem is that it
does not understand any become method except sudo. I'm willing to use
a partial fix now because we don't want people to get used to the
reversed semantics in their playbooks.
* synchronize copying to the wrong host when inventory_hostname is
localhost
* Fix problem with unicode arguments (first seen as a bug on synchronize)
Fixes #14041, fixes #13825
Role definitions typically require params to be different from those
which are specified as FieldAttributes on the playbook classes used
for roles, however a certain subset should be allowed (typically those
used for connection stuff).
Fixes #14095
The dep chain for roles created during the compile step had bugs, in
which the dep chain was overwritten and the original tasks in the role
were not assigned a dep chain. This led to problems in determining
whether roles had already run when in a "diamond" structure, and in
some cases roles were not correctly getting variables from parents.
Fixes #14046
by moving to en-bloc unicode conversion acting on the script's stdout
Both python-json and simplejson always return unicode strings when using
their loads() method on unicode strings. This is true at least since
2009. This makes checking each substring unnecessary, because we do not
need to recursively check the strings contained in the inventory dict
one by one later.
This commit makes parsing of large dynamic inventory at least 2 seconds
faster.
cf: https://github.com/towolf/ansible-large-inventory-testcase
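A quick illustration of the behaviour this relies on (Python 2, where these inventory scripts are parsed; under Python 3 json.loads already returns str, which is text):

    import json

    # Decoding the script's stdout once and then parsing it yields
    # unicode strings throughout the resulting structure.
    data = json.loads(u'{"all": {"hosts": ["web1", "web2"]}}')
    assert all(isinstance(h, unicode) for h in data["all"]["hosts"])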
This prevents a bug where the existing cache outside of the class
is not cleared when creating a new Inventory object. This only really
affects people using the API directly right now, but we wanted to fix it
to prevent weird errors from popping up.
Instead of bombing out of the strategy, we now properly mark hosts failed
so that the play iterator can handle block rescue/always properly.
Fixes #14024
When using a playbook-level include, we now catch any errors raised during
the conditional evaluation step and set a flag to indicate we need to pass
those conditionals on to the included play (most likely because they contain
inventory variables for evaluation).
Fixes #14003
This causes problems when fetching parent attributes, as the include
was being skipped because the parent block would fetch the attribute
from the parent play first.
Fixes #13872
this was taken out in an effort to default to the user's shell, but that creates issues as the shell is not known ahead of time,
and it's painful to set executable and shell_type for all servers; it should only be needed for those that restrict the user
to specific shells and when /bin/sh is not available. raw and command may still bypass this by explicitly passing None.
fixes #13882
still conditional
The provider argument accepts the set of device common arguments as a
dict object. Individual connection arguments can still be included and
take priority over the provider argument. This update includes additions
to the nxos doc fragment
New argument `provider` added to the ios shared module that provides
the ability to pass all of the common ios arguments as a dict. This commit
includes some minor bugfixes and refactoring of names. It also includes
updates to the ios documentation fragment for the new argument.
Adds a new argument `provider` to the eos shared module and updates the
eos doc fragment. This commit includes some additional minor fixes and
code refactors for naming conventions. The `provider` argument allows the
shared module arguments to be passed as a dict object instead of having
to pass each argument individually.
This commit adds a new argument `provider` to the iosxr shared module that
allows common connection parameters to be passed as a dict object. The
constraints on the args still apply. This commit also updates the iosxr
doc fragment.
Adds new argument `provider` to the openswitch shared module. The provider
argument can pass all openswitch connection arguments as a dict object. This
update includes adding the provider argument to the openswitch doc fragment
This commit adds a new argument `provider` to the junos shared module. The
argument allows the set of common connection args to be passed to the
junos shared module. This commit also updates the junos doc fragment
This commit adds an argument for providing a path to the private key
file. This will allow paramiko to use the key file as opposed to only
username / password combinations for CLI connections.
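For reference, this is roughly how a key file ends up being used with paramiko (a hedged sketch rather than the module's actual code; the host, user, and key path are placeholders):

    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    # key_filename lets paramiko authenticate with a private key instead
    # of (or in addition to) a username/password pair.
    client.connect('switch01.example.com', username='admin',
                   key_filename='/home/admin/.ssh/id_rsa')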
Letting it pass would just cause an error later on (no such file found),
so it's better to catch it here and know that we have users dealing with
non-utf8 pathnames than to have to track it down later.
Note that the fix for display normalizing to unicode is correct but the
fix for pathnames is probably not. Changing pathnames to unicode type
means that we will handle utf8 pathnames fine but pathnames can be any
sequence of bytes that do not contain null. We do not handle sequences
of bytes that are not valid utf8 here. To do that we need to revamp the
handling of basedir and paths to transform to bytes instead of unicode.
Didn't want to do that in 2.0.x as it will potentially introduce other
bugs as we find all the places that we combine basedir with other path
elements. Since no one has raised that as an issue thus far, it's not
something we need to handle yet. But it's something to keep in mind for
the future.
To test utf8 handling, create a utf8 directory and run a playbook from
within there.
To test non-utf8 handling (currently doesn't work as stated above), create
a directory with non-utf8 chars and run a playbook from there. In bash,
create that directory like this: mkdir $'\377'
Fixes #13937
* Don't re-use the existing connection if the remote_addr field of
the play context has changed
* When overriding variables in PlayContext (from task/variables),
don't set the same attribute based on a different variable name
if we had already previously set it from another variable name
Fixes #13880
* Relocate the assignment of the host address to the remote_addr field
in the play context, which was only done when the connection was created
(it's now done after the post_validate() is called on the play context)
* Make the assignment of the play context to the connection an else, since
it's not required if the connection is not reused
This is because we pass arguments to non-newstyle modules via an
external file. If we pipeline, then the interpreter thinks it has to
run the arguments as the script instead of what is piped in via stdin.
keeps backwards compatibility by not removing the previously non-grammar-matching states
and introduces new ones so users can decide which one they want
(or keep both and stay inconsistent, to annoy those that care)
If this isn't updated, the _connection is reused and thus has an outdated _play_context.
This results in an outdated `success_key` and `prompt`, causing issues if sudo is run in a loop.
Refer to issue #13763 for more debugging details.
The nxapi module has been superseded by the nxos shared module and is no longer needed. This commit removes (deletes) nxapi from module_utils. All custom modules that have used nxapi should be using nxos instead.
This commit adds a new shared module that parses network device configuration
files. It is used to build modules that work with the various supported
network device operating systems
This commit adds a new shared module for working with network devices running
the Juniper Junos operating system. The commit includes a new document
fragment junos to be used when building modules. The junos shared module
currently only supports CLI
This commit adds a new shared module openswitch for building modules that
work with OpenSwitch. This shared module supports connectivity to
OpenSwitch devices over SSH, CLI or REST. It also adds an openswitch
documentation fragment for use in modules
This commit refactors the nxapi into a new shared module nxos that supports
connectivity over both ssh (cli) and nxapi. It supersedes the nxapi shared
module and removes it from module_utils. This commit also adds a
documentation fragment supporting the nxos shared module.
This commit adds a new shared module for working with Cisco IOS XR devices over
CLI (SSH). It also provides a documentation fragment for the common arguments
provided by the iosxr module.
This update refactors the ios shared module to use the new shell shared
library instead of issh and cli. It also adds the ios documentation
fragment to be used when building ios based modules.
This adds a shared module for communicating with Arista EOS devices over
SSH (cli) or JSON-RPC (eapi). This module replaces the eapi.py module
previously added to module_utils. This commit includes a documentation
fragment that describes the eos common arguments
Pushed it to use the existing prompt from display and moved the vars prompt code there too, for uniformity.
Changed vars_prompt to check extra vars vs the empty play.vars to restore 1.9 behaviour.
Simplified the code as it didn't need to check for syntax again (tqm is made None beforehand based on that).
fixes #13770
This is still a warning, as we don't want to repeat it multiple times nor have additional callbacks stop Ansible execution.
Hopefully we can avoid shipping w/o exceptions in the default/minimal callbacks...
Also added a feature that now allows 'preformatted' strings to be passed to warning.
This commit adds a new shared module shell that is used to build connections
to network devices that operate in a CLI environment. This commit supersedes
the issh.py and cli.py commits and removes them from module_utils.
and without hosts and vars
Without this patch, the simplified syntax is triggered when a group
is defined like this:
"platforms": {
"children": [
"cloudstack"
]
}
Which results in a group 'platforms' with 1 host 'platforms'.
more details in https://github.com/ansible/ansible/issues/13655
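A hedged sketch of the kind of check involved; the real inventory-script parser differs, but the key names come straight from the JSON structure above:

    GROUP_KEYS = ('hosts', 'children', 'vars')

    def looks_like_group(value):
        # A dict carrying any of the well-known group keys is a group
        # definition, even when it has no 'hosts' entry at all; only
        # other shapes should trigger the simplified host syntax.
        return isinstance(value, dict) and any(k in value for k in GROUP_KEYS)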
* Added additional methods to the iterator code to assess host failures
while also taking into account the block rescue/always states
* Fixed bugs in the free strategy, where results were not always being
processed after being collected
* Added some prettier printing to the state output from iterator
Fixes #13699
commit 24efa310b58c431b4d888a6315d1285da918f670
Author: James Cammarata <jimi@sngx.net>
Date: Tue Dec 29 11:23:52 2015 -0500
Adding an additional test for copy exclusion
Adds a negative test for the situation when an exclusion doesn't
exist in the target to be copied.
commit 643ba054877cf042177d65e6e2958178bdd2fe88
Merge: e6ee59f 66a8f7e
Author: James Cammarata <jimi@sngx.net>
Date: Tue Dec 29 10:59:18 2015 -0500
Merge branch 'speedup' of https://github.com/chrismeyersfsu/ansible into chrismeyersfsu-speedup
commit 66a8f7e873
Author: Chris Meyers <chris.meyers.fsu@gmail.com>
Date: Mon Dec 28 09:47:00 2015 -0500
better api and tests added
* _copy_results = deepcopy for better performance
* _copy_results_exclude to deepcopy but exclude certain fields. Pop
fields that do not need to be deep copied. Re-assign popped fields
after deep copy so we don't modify the original, to be copied, object.
* _copy_results_exclude unit tests
commit 93490960ff
Author: Chris Meyers <chris.meyers.fsu@gmail.com>
Date: Fri Dec 25 23:17:26 2015 -0600
remove unneeded deepcopy fields
* Fix to error if validate_cert is True and python doesn't support it.
* Only globally disable certificate checking if really needed. Use
bigip verify parameter if available instead.
* Remove public disable certificate function to make it less likely
people will attempt to reuse that
* now module errors clearly state msg=MODULE FAILURE
* module's stdout and stderr go into module_stdout and module_stderr keys
which only appear during parsing failure
* invocation module_args are deleted from results provided by action
plugin as errors can keep us from overwriting and then disclosing info that
was meant to be kept hidden due to no_log
* fixed invocation module_args set by basic.py as it was creating different
keys than the invocation in the action plugin base.
* results now merge
This plugin filters output for any task that is 'ok' or 'skipped'.
It works by subclassing the 'default' stdout callback plugin and
overriding certain functions. It will suppress display of the task
banner until there is a 'changed' or 'failed' result or an
unreachable host.
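A minimal sketch of that subclassing approach, assuming the 2.x callback plugin layout; the method bodies are illustrative, not the plugin's exact code:

    from ansible.plugins.callback.default import CallbackModule as DefaultCallback

    class CallbackModule(DefaultCallback):
        CALLBACK_VERSION = 2.0
        CALLBACK_TYPE = 'stdout'
        CALLBACK_NAME = 'actionable'

        def v2_playbook_on_task_start(self, task, is_conditional):
            # Remember the task but defer printing its banner.
            self._last_task = task
            self._banner_printed = False

        def _print_banner_once(self):
            if not self._banner_printed:
                super(CallbackModule, self).v2_playbook_on_task_start(self._last_task, False)
                self._banner_printed = True

        def v2_runner_on_ok(self, result):
            # Only show 'ok' results that actually changed something.
            if result._result.get('changed', False):
                self._print_banner_once()
                super(CallbackModule, self).v2_runner_on_ok(result)

        def v2_runner_on_skipped(self, result):
            pass  # skipped results are suppressed entirely

        def v2_runner_on_failed(self, result, ignore_errors=False):
            self._print_banner_once()
            super(CallbackModule, self).v2_runner_on_failed(result, ignore_errors)

        def v2_runner_on_unreachable(self, result):
            self._print_banner_once()
            super(CallbackModule, self).v2_runner_on_unreachable(result)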
* Changed parse_addresses to throw exceptions instead of passing None
* Switched callers to trap and pass through the original values.
* Added very verbose notice
* Look at deprecating this and possibly validate at plugin instead
fixes #13608
Because the fail_state is potentially non-zero in these block sections,
the prior logic led to included tasks not being inserted at all.
Related issue: #13605
This was added in 1.9 and 2.0 tried to copy it, but since it cannot
obey no_log restrictions I commented it out. I did not remove it, as
it is still very useful for module invocation debugging.
* Saving of the registered variable was occurring after the tests for
changed/failed_when.
* Each of the above fields and until were being post_validated too early,
so variables which were not defined at that time were causing task
failures.
Fixes #13591
Environments were not being templated individually, so a variable environment
value was causing the exception regarding dicts to be hit. Also, environments
as inherited were coming through with the tasks listed first, followed by the
parents, so they were being merged backwards. Reversing the list of environments
fixed this.
Also fixes a bug where we were passing an incorrect number of parameters to
_do_handler_run() when processing an include file in a handler task/block.
Fixes #13560
Otherwise, each relative include path is checked on its own, rather
than in relation to the (possibly relative) path of its parent, meaning
includes multiple levels deep may fail to find the correct (or any) file.
Fixes #13472
The current ssh shared module forces only password based authentication. This
change will allow the ssh module to use keys if a password is not provided.
We were logging the command to be executed many times, which made debug
logs very hard to read. Now we do it only once.
Also makes the logged ssh command line cut-and-paste-able (the lack of
which has confused a number of people by now; the problem being that we
pass the command as a single argument to execve(), so it doesn't need an
extra level of quoting as it does when you try to run it by hand).
Moved from the field attribute declaration and created a placeholder,
which is then resolved in the field attribute class.
This is to avoid unwanted persistence of the defaults across objects, which introduces
stealth bugs when multiple objects of the same kind are used in succession while
not overriding the default values.
OS X El Capitan moved the /etc/ssh_* files into /etc/ssh/. This fix
adds a distribution version check for Darwin to set the keydir
appropriately on El Capitan and later.
Tasks were overriding the command line with their defaults, not with the
explicit setting; removed the setting of defaults from task init and
pushed it down to the play context at the last possible moment.
fixes #13362
Ansible previously added hosts to the host list multiple times for commands
like `ansible -i 'localhost,' -c local -m ping 'localhost,localhost'
--list-hosts`.
8d5f36a fixed the obvious error, but still added the un-deduplicated list to a
cache, so all future invocations of get_hosts() would retrieve a
non-deduplicated list.
This caused problems down the line: For some reason, Ansible only ever
schedules "flush_handlers" tasks (instead of scheduling any actual tasks from
the playbook) for hosts that are contained in the host lists multiple times.
This probably happens because the host states are stored in a dictionary
indexed by the hostnames, so duplicate hostnames would cause the state to be
overwritten by subsequent invocations of … something.
* sudo was not working, now it supports full become
* the default checkout dir now works, not only when one is specified explicitly
* paths for checkout dir get expanded
* fixed limit options for playbook
* added verbose and debug info
This should fix issues with fish shell users, as && and || are
not valid syntax; fish uses actual 'and' and 'or' programs.
Also updated to allow for fish backticks and pushed quotes to the subshell;
fish seems to handle spaces without them.
Lastly, removed the encompassing subshell () for fish compatibility.
fixes #13199
This patch fixes a bug in module_utils/ios.py where the wrong shared
module arguments are being generated. This bug prevented the shared module
from operating correctly. This patch should be generally applied.
* Move self._tqm.load_callbacks() earlier to ensure that v2_on_playbook_start can fire
* Pass the playbook instance to v2_on_playbook_start
* Add a _file_name instance attribute to the playbook
At its most basic, this is nothing more than an array or hash lookup,
but when used in conjunction with map, it is very useful. For example,
while constructing an "ssh-keyscan …" command to update known_hosts on
all hosts in a group, one can get a list of IP addresses with:
groups['x']|map('extract', hostvars, 'ec2_ip_address')|list
This returns hostvars[a].ec2_ip_address, hostvars[b].ec2_ip_address, and
so on. You can even specify an array of keys for a recursive lookup, and
mix string and integer keys depending on what you're looking up:
['localhost']|map('extract', hostvars, ['vars','group_names',0])|first
== hostvars['localhost']['vars']['group_names'][0]
== 'ungrouped'
Includes documentation and tests.
The comment was taken literally from lib/plugins/strategy/linear.py and
makes no sense in free.py where we have no noop tasks.
Also update the debug messages.
This patch fixes an issue with the common args dict in the eapi shared
module. This patch is required for the eapi shared module to be properly
imported and should therefore be applied to all instances.
This commit changes the way modules create an instance of AnsibleModule to
now use a common function, eapi_module. This function will now automatically
append the common argument spec to the module argument_spec. Module
arguments can override the common module arguments.
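A hedged sketch of what such a helper can look like; the common argument names shown here are assumptions, not necessarily the exact spec in eapi.py:

    from ansible.module_utils.basic import AnsibleModule

    EAPI_COMMON_ARGS = dict(
        host=dict(required=True),
        username=dict(),
        password=dict(no_log=True),
        use_ssl=dict(type='bool', default=True),
    )

    def eapi_module(argument_spec=None, **kwargs):
        spec = dict(EAPI_COMMON_ARGS)
        # Module-specific arguments are merged last so they can override
        # the common argument definitions.
        spec.update(argument_spec or {})
        return AnsibleModule(argument_spec=spec, **kwargs)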
Pipelining is a *significant* performance benefit, because each task can
be completed with a single SSH connection (vs. one ssh connection at the
start to mkdir, plus one sftp and one ssh per task).
Pipelining is disabled by default in Ansible because it conflicts with
the use of sudo if 'Defaults requiretty' is set in /etc/sudoers (as it
is on Red Hat) and su (which always requires a tty).
We can (and already do) make sudo/su happy by using "ssh -t" to allocate
a tty, but then the python interpreter goes into interactive mode and is
unhappy with module source being written to its stdin, per the following
comment from connections/ssh.py:
# we can only use tty when we are not pipelining the modules.
# piping data into /usr/bin/python inside a tty automatically
# invokes the python interactive-mode but the modules are not
# compatible with the interactive-mode ("unexpected indent"
# mainly because of empty lines)
Instead of the (current) drastic solution of turning off pipelining when
we use a tty, we can instead use a tty but suppress the behaviour of the
Python interpreter to switch to interactive mode. The easiest way to do
this is to make its stdin *not* be a tty, e.g. with cat|python.
This works, but there's a problem: ssh will ignore -t if its input isn't
really a tty. So we could open a pseudo-tty and use that as ssh's stdin,
but if we then write Python source into it, it's all echoed back to us
(because we're a tty). So we have to use -tt to force tty allocation; in
that case, however, ssh puts the tty into "raw" mode (~ICANON), so there
is no good way for the process on the other end to detect EOF on stdin.
So if we do:
echo -e "print('hello world')\n"|ssh -tt someho.st "cat|python"
…it hangs forever, because cat keeps on reading input even after we've
closed our pipe into ssh's stdin. We can get around this by writing a
special __EOF__ marker after writing in_data, and doing this:
echo -e "print('hello world')\n__EOF__\n"|ssh -tt someho.st "sed -ne '/__EOF__/q' -e p|python"
This works fine, but in fact I use a clever python one-liner by mgedmin
to achieve the same effect without depending on sed (at the expense of a
much longer command line, alas; Python really isn't one-liner-friendly).
We also enable pipelining by default as a consequence.
Since all of the --ask-*-pass options end up triggering the same code
and are functionally equivalent, ignore them when it comes to checking
privilege escalation conflicts. This allows using -K when --become-method=su,
and so on.
The secret_key parameter especially can contain non-ascii characters and
will throw an error if such a string is passed as a byte str.
Potential fix for #13303
It is natural that an argument_spec with choices=BOOLEAN accepts
boolean literals (True, False), though the current implementation
allows only string or int.
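A hedged illustration of the desired behaviour; BOOLEANS stands in for whatever boolean-choices constant module_utils exposes, so the exact name is an assumption:

    from ansible.module_utils.basic import AnsibleModule, BOOLEANS

    # With this change, a real boolean passed for 'enabled' (e.g. True)
    # validates against the boolean choices just like 'yes'/'no'/1/0 do.
    module = AnsibleModule(
        argument_spec=dict(
            enabled=dict(choices=BOOLEANS),
        ),
    )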
* StandardError doesn't exist in python3
* because it is the root of the builtin exceptions, we can't catch it
separately from the builtin exceptions
* It doesn't tell us anything about the error being thrown as it's too
generic
This ssh shared module is used for building modules that require an
interactive shell environment, such as those needed for connecting
to network devices.
If we request escalation with a password, we start in expecting_prompt
state. If the escalation then succeeds without the password, i.e., the
become_success response arrives, we must explicitly move into the next
state (awaiting_escalation, which immediately goes into ready_to_send),
so that we no longer try to apply the timeout.
Otherwise, we would leak the success notification and eventually
timeout. But if the module response did arrive before the timeout
expired, the "process has already exited" test would do the right
thing by accident (which is why it didn't fail more often).
Fixes #13289
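A minimal sketch of the transition described above; the state names come from this commit text, while the surrounding object and attributes are hypothetical:

    def on_become_success(conn):
        # Escalation succeeded without the password being needed, so leave
        # expecting_prompt explicitly instead of waiting for the timeout.
        if conn.state == 'expecting_prompt':
            conn.state = 'awaiting_escalation'
        if conn.state == 'awaiting_escalation':
            # awaiting_escalation immediately falls through to ready_to_send.
            conn.state = 'ready_to_send'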
This was caused by accessing the cache using the passed in mod_type
rather than the suffix that we calculate with knowledge of whether this
is a module or non-module plugin.
Ensure that ansible-galaxy version can be a branch, a tag, or any tree-ish
supported by git including specific commit IDs. For git scm roles, adds an
explicit git checkout of the specified role_version prior to the git archive.
This means that we'll always archive from HEAD of whatever role_version is
checked out. role_version can be a branch, a tag, or any <tree-ish> supported
by git including specific commit IDs. These changes also ensure
ansible-galaxy works for scm clones when specified version differs from
repository default branch.
Previously, we were filtering the task list on tags for each host
that was including the file, based on the idea that the variables
had to include the host information. However, the top level task
filtering is play-context only, which should also apply to the
included tasks. Tags cannot and should not be based on hostvars.
Looks like there are two pattern caches that need to be cleared for this to work; added the second one.
Added integration tests for add_host to prevent future regressions.
Looks like someone forgot to create an instance of undefined here; we were returning the undefined type object, which broke all the undefined checks.
Added an integration test around add_host that will catch this (separate PR to follow)
This callback plugin will generate json objects to be sent to the
logentries service for auditing/debugging purposes.
To use:
Add this to your ansible.cfg file in the defaults block
[defaults]
callback_plugins = ./callback_plugins
callback_stdout = logentries
callback_whitelist = logentries
Copy the callback plugin into the callback_plugins directory
Either set the environment variables
export LOGENTRIES_API=data.logentries.com
export LOGENTRIES_PORT=10000
export LOGENTRIES_ANSIBLE_TOKEN=dd21fc88-f00a-43ff-b977-e3a4233c53af
Or create a logentries.ini config file that sits next to the plugin with the following contents
[logentries]
api = data.logentries.com
port = 10000
tls_port = 20000
use_tls = no
token = dd21fc88-f00a-43ff-b977-e3a4233c53af
It was set to match the SSH connect timeout. Unfortunately, they would
race when ssh fails to connect, and the connect timeout usually failed.
This led to some misleading error messages.
Fixes #12916
Error reporting was broken for GCE modules- pprint didn't work with exceptions, so you'd always get "Unexpected response: {}" instead of the real error.
Code for a plugin is usually loaded by a PluginLoader(), and henceforth
available from self._module_cache, which prevents duplicate loading.
However there are situations (e.g. where one action plugin imports code
from another one) where the plugin module might be already imported (and
resident in sys.modules), but not present in the PluginLoader's
_module_cache, which causes imp.load_source() to effectively reload the
module, overwriting global class declarations and causing subtle latent
bugs.
Fixes #13110.
Fixes #12979.
* Always cache and return unique list objects, so that if the list
is changed later it does not impact the cached results
* Take additional parameters and the type of the pattern into account
when building the hash string
Also displays a warning now, because users should not be using that variable
name as it causes a collision with the internal variable of the same name.
This commit adds the shared module support for Cisco NXAPI. The shared
module builds on top of the urls shared module. The urls module provides
the http/s transport. This module only supports the JSON request message
format.