This implements solution #1 in the proposal #14860.
It only shows the diff if the task induced a change, which means that if the changed_when control overrides the task, no diff will be produced.
See #14860 for a rationale and the use-case.
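As a minimal illustration (assuming a playbook run with --diff, and purely hypothetical file names), the second task below produces no diff because changed_when overrides its result back to unchanged:
- copy:
    src: motd
    dest: /etc/motd            # shows a diff when the content changes
- copy:
    src: motd
    dest: /etc/motd
  changed_when: false          # result forced to "not changed", so no diff is produced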
porting @dominis 's ansible-shell tool from 1.9 and integrating it into ansible
added verbosity control
made more resilient to several errors
added highlight color to the configurable colors
more resilient on exception and interruptions
prompt coloring, goes red and changes to # when using become = true and root
become setting is now explicit and not a toggle
* fetch_url shouldn't both accept follow_redirects and support follow_redirects via module.params
* Default follow_redirects for open_url should be 'urllib2'
* Add redirect test for get_url
This commit adds the multiline flag to the regexp search and match test
plugin. It defaults to False (re.M not set) for backwards compatibility. To use
the multiline feature, add multiline=True to the test filter:
{{ config | search('^hostname', multiline=True) }}
main_q is not used anywhere in the codebase.
It is created in TaskQueueManager._initialize_processes, bundled with rslt_q
into TaskQueueManager._workers, later unwrapped in StrategyBase but not used.
This queue is closed in TaskQueueManager._cleanup_processes.
Historically, it is passed as an init parameter into WorkerProcess,
introduced in 62d7956, but this behavior is changed in 120b9a7.
Signed-off-by: 夏恺(Xia Kai) <xiaket@gmail.com>
Update the profile task callback plugin to include a fix for duplicate named tasks. Added additional features to adjust the number of tasks output and the sort order.
For example:
$ ansible web --list-hosts | head -n1
hosts (7):
ERROR! Unexpected Exception: [Errno 32] Broken pipe
Traceback (most recent call last):
  File "/home/lamby/git/private/lamby-ansible2/.venv/bin/ansible", line 114, in <module>
    display.display("to see the full traceback, use -vvv")
  File "/home/lamby/git/private/lamby-ansible2/.venv/local/lib/python2.7/site-packages/ansible/utils/display.py", line 133, in display
    sys.stdout.flush()
IOError: [Errno 32] Broken pipe
Such a pipe target will close up shop early when it's seen enough input,
causing ansible to print an ugly traceback.
Signed-off-by: Chris Lamb <chris@chris-lamb.co.uk>
This commit fixes two bugs in the openswitch shared module. The first
bug was a wrong argument type for the use_ssl argument. It was set
to int and should be bool. The second changes the default ports for http
(was 80, now 8091) and https (was 443, now 18091). This change aligns
the default port values with the OS
This commit changes the key the ops_template will search for in order
to backup the current configuration to local disk on the Ansible control
host. This change was made to make ops_template consistent with the
other network template modules.
Note that this will break if we deal with non-utf8 paths. Fixing this
way because converting everything to byte strings instead is a very
invasive task so it should be done as a specific feature to provide
support for non-utf8 paths at some point in the future (if needed).
This is the same fix we applied to v1.9 in PR #14565; however, it does not fix #14678 completely!
The dictionaries are not being merged as they are on v1.9.
The use of realpath means when following symlinks the actual path is
used when loading these files in the VariableManager, which may not
line up with the host or group name specified.
Fixes #14545
The find_mount_point function does not resolve the mount point of paths with a soft-link correctly and returns the wrong mount-point.
I have mounted an NFS filesystem on /nfs-mount. This directory contains a directory called "directory". I also created a soft-link to this last directory: /soft-link-to-directory -> /nfs-mount/directory. I created the following task to copy a file into /soft-link-to-directory:
- name: copy file to nfs-mount
  copy:
    src: "file"
    dest: "/soft-link-to-directory/file"
This throws an exception:
invalid selinux context: [Errno 95] Operation not supported
This is caused by the find_mount_point function returning '/' as the mount point for '/soft-link-to-directory/file'. This should have been /nfs-mount. Because find_mount_point returns the wrong mount point, the is_special_selinux_path function does not recognise that the file is on an NFS mount and tries to set the default SELinux context (system_u:object_r:default_t:s0), which fails. The context should have been: system_u:object_r:nfs_t:s0
Full Ansible output:
TASK [copy file to nfs-mount] **************************************************
fatal: [hostname]: FAILED! => {"changed": false, "checksum": "f34b60930a5d6d689cf49a4c16bd7f9806be608c", "cur_context": ["system_u", "object_r", "nfs_t", "s0"], "failed": true, "gid": 24170, "group": "foundation", "input_was": ["system_u", "object_r", "default_t", "s0"], "mode": "0644", "msg": "invalid selinux context: [Errno 95] Operation not supported", "new_context": ["system_u", "object_r", "default_t", "s0"], "owner": "root", "path": "/soft-link-to-directory/.ansible_tmpWCT6Z4file", "secontext": "system_u:object_r:nfs_t:s0", "size": 37, "state": "file", "uid": 0}
- now workers pass the queue to task_executor so it can send back events per item and on retry attempts
- updated result class to pass along events to strategy
- base strategy updated to forward new events to callback
- callbacks now remove 'items' on final result but process them directly when invoked per item
- new callback method to deal with retry attempt messages (also now obeys nolog)
- updated tests to match new signature of task_executor
fixes #14558, fixes #14072
* Fixes bug where the task was not marked as failed if the number of
retries were exceeded (#14461)
* Reorganizing logic to be a bit cleaner, and so retry messages are
shown before sleeping (which makes way more sense)
Fixes #14461, fixes #14580
Prior to 75b6f61, we strictly limited variables we re-injected. After that
patch however, we re-injected everything which causes problems under certain
circumstances. For now, we'll continue to filter out some properties of
PlayContext for re-injection.
Fixes #14352
This is related to #14559, but only the part for Ansible v2.0
This commit makes merging empty dicts, or equal dicts more efficient.
I noticed while debugging merge_hash that a lot of merges involved empty dictionaries, and sometimes also identical dictionaries.
will display on certain verbosity levels both playbook/file info
and non-empty options with which it's running.
avoid errors when not using CLI classes
The setup module calls /bin/lsblk once for each device appearing in the /etc/mtab file. However, the same device appears there multiple times when the system uses bind-mounts. As a result, /bin/lsblk is being called repeatedly to get the uuid of the same device.
On a system with many mounts, this leads to a TimeoutError in the get_mount_facts function of the setup module as described in #14551.
Fixes #14551
ansible_os_family on openSUSE Leap has the wrong value:
"ansible_os_family": "openSUSE Leap",
It should be:
"ansible_os_family": "Suse",
This change fixes that by adding the relevant key and ensuring that dict
lookups replace ' ' with '_' so the key does not contain a space.
This commit fixes a situation where connection errors would be caught
but no useful information displayed. The connection error is now caught
and emitted in a call to fail_json
- added a new stat function for action plugins; this avoids the very fragile checksum code that is shell dependent.
- ported copy module to it
- converted assemble to new stat function
- some corrections and ported template
- updated old checksum function to use new stat one under the hood
- documented revamped remote checksum method
When working around "bad systems that insist on not allowing
updates in an atomic manner", we should not run previous exception
management code that tries to perform atomic move in case of
exception since the dirty non atomic move has already been
performed.
* Fix the way task_include fields were created and copied
* Have blocks get_dep_chain() look at task_include's blocks for proper
dep chain inheritance
* Fix the way task_include fields are copied to prevent a recursive
degradation
Fixes #14460
This adds a new action plugin iosxr_template that allows the
iosxr_template module to pass network device configurations through the
template engine. It also allows configurations to be backed up.
* Make sure dep chains are checked recursively for nested blocks
* Fixing iterator is_failed() check to make sure we're not in a
rescue block before returning True
* Use is_failed() to test whether a host should be added to the TQM
failed_hosts list
* Use is_failed() when compiling the list of hosts left to iterate
over in both the linear and free strategies
Fixes #14222
- moved to base cli class to handle centrally and duplicate less code
- now avoids duplication and reiteration of signal handler by reassigning it
- left note on how to do non-graceful in case we add in future
as I won't remember everything I did here and don't want to 'relearn' it.
- adhoc now terminates gracefully
- avoid race condition on terminations by ignoring errors if
worker might have been reaped between checking if active and termination call
- ansible-playbook now properly exits on sigint/term
- adhoc and playbook now let exceptions that we should not normally capture propagate
and rely on the top-level finally to reap children
- handle systemexit breaks in workers
- added debug to see at which frame we exit
partial fix for #14346
* Raise an error if the action is using BYPASS_HOST_LOOP, to prevent
unexpected behavior from those actions
* Show a warning regarding tasks marked as run_once, as the free strategy
does not yet support that behavior
* Minor tweak to the linear strategy's run_once code to make sure we don't
raise an error if an action isn't found
* If the internal value is None, do not add the variable
* Make sure all aliases for a given variable name are set (if they're
not already set in the dictionary)
Fixes #14310
just 'cause people build bad systems that insist on not allowing
updates in an atomic manner and force us to do them in a very
unsafe way that has race conditions and can lead to many issues.
if using this option you should really be opening a bug report with
the system that only allows for this type of update.
and now i shower though i doubt i'll feel clean
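A sketch of how such an opt-in override might look in a task; the option name unsafe_writes below is an assumption, the point being that the non-atomic path must be requested explicitly per task:
- copy:
    src: app.conf
    dest: /etc/app/app.conf    # e.g. a bind-mounted file that cannot be replaced atomically
    unsafe_writes: yes         # assumed option name; falls back to an in-place, non-atomic write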
* Fixed a bug in PlayIterator when ITERATING_ALWAYS, where the block
was advanced but the incorrect data structure elements were cleared
* Cleaned up the logic of is_failed() in PlayIterator
* Fixed a bug in the free strategy which had not been updated to use
the base strategy _execute_meta() method
* Stopped strategies from using is_failed() to determine if tasks should
still be fetched for a host
Fixes #14040
The net_config local action handles templating for network configuration
file. It will also allow network device configurations to be backed up
to the control host
Note: this plugin was originally named net_config but has been refactored to
net_template
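A rough usage sketch (the src and backup parameter names are assumptions based on the description above):
- ios_template:
    src: router_config.j2      # rendered through the template engine before being pushed
    backup: yes                # copy the current device configuration back to the control host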
now deprecation message appears with variable name in all spots where this occurs
debug's var= option is excluded, as this is the only place where bare variables should actually
be accepted.
it was assumed it could only be a dict or string (it actually starts out as a list);
also a 2nd assumption was that bare vars would only appear in one of the dict keys.
removed deprecation warnings from here as they should be signaled in the bare conversion itself.
Adds new local action ops_config for handling openswitch configurations using
either dc or cli based configurations. Implements the common net_config
local action.
Note this refactors the ops_config plugin to ops_template
Adds a new local action ios_config for working with cisco ios configuration
files. Implements the common net_config local action
Note this plugin was refactored from ios_config to ios_template
Adds new local action for working with cisco nxos configurations. Implements
the net_config local action.
Note this action plugin was refactored from nxos_config to nxos_template
Adds a new local action for eos_config module to handle templating configs
and backing up running configurations. Implements the local action
net_config
Note this action was refactored from eos_config to eos_template
This fixes a minor bug in the nxos config module to ensure that both the
cli and nxapi transport return the running config as a string and not
a list object.
The module docs and vault changes solve issues where tracebacks can
happen. The galaxy changes are mostly refactoring to be more pythonic
with a small chance that a unicode traceback could have occurred there
without the changes. The change in __init__.py when we actually call
the pager makes things more robust but could hide places where we had
bytes coming in already so I didn't want to change that without auditing
where the text was coming from.
Fixes #14178
This addresses two issues with the nxos shared module. The first issue is
argument precedence checking. The module should prefer explicit arguments
over arguments passed via the provider. This is now fixed to honor that
precedence. The second issue is collecting output from nxapi and returning
the response. Prior to this change the entire json structure was returned.
Now just the output is returned to align it better with cli based output
The eos shared module should prefer to use explicit task arguments over
arguments provided through the provider. This fixes a problem where
that was not the case
So far, when a 'diff' dict is returned with module results, it is
checked for 'before' and 'after' texts, which are processed in
_get_diff() by python difflib. This generates the changes to display
when CLI users specify --diff.
However, some modules will generate changes that cannot easily be
expressed in a conventional diff. One example is the output of the
synchronize module, which presents changed files in a common log format
as in `rsync --itemize-changes`.
Add a check for a diff['prepared'] key, which can contain prepared diff text
from modules.
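Sketched in YAML for readability, a module result handled by _get_diff() can now carry pre-rendered text (the rsync-style lines are only an example):
diff:
  prepared: |
    >f+++++++++ new-file.txt
    .d..t...... some-dir/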
* In 2.0.0.x become was reversed for synchronize. It was happening on
the local machine instead of the remote machine. This restores the
ansible-1.9.x behaviour of doing become on the remote machine.
However, there are aspects of this that are hacky (no hackier than
ansible-1.9, but not using 2.0 features). The big problem is that it
does not understand any become method except sudo. I'm willing to use
a partial fix now because we don't want people to get used to the
reversed semantics in their playbooks.
* synchronize copying to the wrong host when inventory_hostname is
localhost
* Fix problem with unicode arguments (first seen as a bug on synchronize)
Fixes #14041, fixes #13825
Role definitions typically require params to be different from those
which are specified as FieldAttributes on the playbook classes used
for roles, however a certain subset should be allowed (typically those
used for connection stuff).
Fixes #14095
The dep chain for roles created during the compile step had bugs, in
which the dep chain was overwritten and the original tasks in the role
were not assigned a dep chain. This led to problems in determining
whether roles had already run when in a "diamond" structure, and in
some cases roles were not correctly getting variables from parents.
Fixes #14046
by moving to en-bloc unicode conversion acting on the script's stdout
Both python-json and simplejson always return unicode strings when using
their loads() method on unicode strings. This is true at least since
2009. This makes checking each substring unnecessary, because we do not
need to recursively check the strings contained in the inventory dict
later one-by-one
This commit makes parsing of large dynamic inventory at least 2 seconds
faster.
cf: https://github.com/towolf/ansible-large-inventory-testcase
This prevents a bug where the existing cache outside of the class
is not cleared when creating a new Inventory object. This only really
affects people using the API directly right now, but wanted to fix it
to prevent weird errors from popping up.
Instead of bombing out of the strategy, we now properly mark hosts failed
so that the play iterator can handle block rescue/always properly.
Fixes #14024
When using a playbook-level include, we now catch any errors raised during
the conditional evaluation step and set a flag to indicate we need to pass
those conditionals on to the included play (most likely because they contain
inventory variables for evaluation).
Fixes #14003
This causes problems when fetching parent attributes, as the include
was being skipped because the parent block would fetch the attribute
from the parent play first.
Fixes #13872
this was taken out in an effort to default to the user's shell, but that creates issues as this is not known ahead of time,
and it's painful to set executable and shell_type for all servers; it should only be needed for those that restrict the user
to specific shells and when /bin/sh is not available. raw and command may still bypass this by explicitly passing None.
fixes #13882
still conditional
The provider argument accepts the set of device common arguments as a
dict object. Individual connection arguments can still be included and
take priority over the provider argument. This update includes additions
to the nxos doc fragment
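A hedged sketch of the resulting usage (the host group, module choice and credential variables are illustrative, not from the commit):
- hosts: nxos-switches
  connection: local
  vars:
    nxos_provider:
      host: "{{ inventory_hostname }}"
      username: admin
      password: "{{ nxos_password }}"
      transport: cli
  tasks:
    - nxos_command:
        commands: show version
        provider: "{{ nxos_provider }}"   # common args passed as one dict
        username: readonly                # an explicit argument still wins over the provider value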
New argument `provider` added to the ios shared module that provides
the ability to pass all of the common ios arguments as a dict. This commit
includes some minor bugfixes and refactoring of names. It also includes
updates to the ios documentation fragment for the new argument
Adds a new argument `provider` to the eos shared module and updates the
eos doc fragment. This commit includes some additional minor fixes and
code refactors for naming conventions. The `provider` argument allows the
shared module arguments to be passed as a dict object instead of having
to pass each argument individually.
This commit adds a new argument `provider` to the iosxr shared module that
allows common connection parameters to be passed as a dict object. The
constraints on the args still apply. This commit also updates the iosxr
doc fragment.
Adds new argument `provider` to the openswitch shared module. The provider
argument can pass all openswitch connection arguments as a dict object. This
update includes adding the provider argument to the openswitch doc fragment
This commit adds a new argument `provider` to the junos shared module. The
argument allows the set of common connection args to be passed to the
junos shared module. This commit also updates the junos doc fragment
This commit provides an argument to provide a path to the private key
file. This will allow paramiko to use the key file as opposed to only
username / password combinations for CLI connections.
Letting it pass would just cause an error later on (no such file found)
so it's better to catch it here and know that we have users dealing with
non-utf8 pathnames than to have to track it down later.
Note that the fix for display normalizing to unicode is correct but the
fix for pathnames is probably not. Changing pathnames to unicode type
means that we will handle utf8 pathnames fine but pathnames can be any
sequence of bytes that do not contain null. We do not handle sequences
of bytes that are not valid utf8 here. To do that we need to revamp the
handling of basedir and paths to transform to bytes instead of unicode.
Didn't want to do that in 2.0.x as it will potentially introduce other
bugs as we find all the places that we combine basedir with other path
elements. Since no one has raised that as an issue thus far so it's not
something we need to handle yet. But it's something to keep in mind for
the future.
To test utf8 handling, create a utf8 directory and run a playbook from
within there.
To test non-utf8 handling (currently doesn't work as stated above), create
a directory with non-utf8 chars and run a playbook from there. In bash,
create that directory like this: mkdir $'\377'
Fixes #13937
* Don't re-use the existing connection if the remote_addr field of
the play context has changed
* When overriding variables in PlayContext (from task/variables),
don't set the same attribute based on a different variable name
if we had already previously set it from another variable name
Fixes #13880
* Relocate the assignment of the host address to the remote_addr field
in the play context, which was only done when the connection was created
(it's now done after the post_validate() is called on the play context)
* Make the assignment of the play context to the connection an else, since
it's not required if the connection is not reused
This is because we pass arguments to non-newstyle modules via an
external file. If we pipeline, then the interpreter thinks it has to
run the arguments as the script instead of what is piped in via stdin.
keeps backwards compat by not removing the previously non-grammar-matching states
and introduces new ones so the user can decide which one they want
(or keep both and still be inconsistent, to annoy those that care)
If this isn't updated, the _connection is reused, and thus has an outdated _play_context
This results in outdated `success_key` and `prompt` causing issues if sudo is run in a loop
Refer to the issue #13763 for more debugging and details
The nxapi module has been superseded by the nxos shared module and is no longer needed. This commit removes (deletes) nxapi from module_utils. All custom modules that have used nxapi should be using nxos instead.
This commit adds a new shared module that parses network device configuration
files. It is used to build modules that work with the various supported
network device operating systems
This commit adds a new shared module for working with network devices running
the Juniper Junos operating system. The commit includes a new document
fragment junos to be used when building modules. The junos shared module
currently only supports CLI
This commit adds a new shared module openswitch for building modules that
work with OpenSwitch. This shared module supports connectivity to
OpenSwitch devices over SSH, CLI or REST. It also adds an openswitch
documentation fragment for use in modules
This commit refactors the nxapi into a new shared module nxos that supports
connectivity over both ssh (cli) and nxapi. It supersedes the nxapi shared
module and removes it from module_utils. This commit also adds a
documentation fragment supporting the nxos shared module
This commit adds a new shared module for working with Cisco IOS XR devices over
CLI (SSH). It also provides a documentation fragment for the common arguments
provided by the iosxr module.
This update refactors the ios shared module to use the new shell shared
library instead of issh and cli. It also adds the ios documentation
fragment to be used when building ios based modules.
This adds a shared module for communicating with Arista EOS devices over
SSH (cli) or JSON-RPC (eapi). This modules replaces the eapi.py module
previously added to module_utils. This commit includes a documentation
fragment that describes the eos common arguments
pushed it to use the existing prompt from display and moved the vars prompt code there also for uniformity
changed vars_prompt to check extra vars vs the empty play.vars to restore 1.9 behaviour
simplified the code as it didn't need to check for syntax again (tqm is made None prior based on that)
fixes #13770
Still a warning, as we don't want to repeat it multiple times nor have additional callbacks stop ansible execution.
hopefully we can avoid shipping w/o exceptions in the default/minimal callbacks...
Also added a feature that now allows 'preformatted' strings to be passed to warning
This commit adds a new shared module shell that is used to build connections
to network devices that operate in a CLI environment. This commit supersedes
the issh.py and cli.py commits and removes them from module_utils.
and without hosts and vars
Without this patch, the simplified syntax is triggered when a group
is defined like this:
"platforms": {
    "children": [
        "cloudstack"
    ]
}
Which results in a group 'platforms' with 1 host 'platforms'.
more details in https://github.com/ansible/ansible/issues/13655
Previously, the lookup plugin passes all its keyword arguments to
credstash's `getSecret`; while this works for passing the standard
parameters (version, region and table), this does not allow passing
a dictionary of key-value pairs as `getSecret`'s context parameter.
Instead, pop `version`, `region` and `table` from `kwargs`, supplying
the default value if they are not defined, and pass the rest of the `kwargs`
as the `context` parameter.
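A sketch of the resulting usage (the lookup term and keys are illustrative): region, table and version keep their standard meaning, and any extra keyword arguments are forwarded as the context dict:
- debug:
    msg: "{{ lookup('credstash', 'db-password',
             region='us-east-1', table='credential-store',
             environment='production', app='billing') }}"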
* Added additional methods to the iterator code to assess host failures
while also taking into account the block rescue/always states
* Fixed bugs in the free strategy, where results were not always being
processed after being collected
* Added some prettier printing to the state output from iterator
Fixes #13699
commit 24efa310b58c431b4d888a6315d1285da918f670
Author: James Cammarata <jimi@sngx.net>
Date: Tue Dec 29 11:23:52 2015 -0500
Adding an additional test for copy exclusion
Adds a negative test for the situation when an exclusion doesn't
exist in the target to be copied.
commit 643ba054877cf042177d65e6e2958178bdd2fe88
Merge: e6ee59f 66a8f7e
Author: James Cammarata <jimi@sngx.net>
Date: Tue Dec 29 10:59:18 2015 -0500
Merge branch 'speedup' of https://github.com/chrismeyersfsu/ansible into chrismeyersfsu-speedup
commit 66a8f7e873
Author: Chris Meyers <chris.meyers.fsu@gmail.com>
Date: Mon Dec 28 09:47:00 2015 -0500
better api and tests added
* _copy_results = deepcopy for better performance
* _copy_results_exclude to deepcopy but exclude certain fields. Pop
fields that do not need to be deep copied. Re-assign popped fields
after deep copy so we don't modify the original, to be copied, object.
* _copy_results_exclude unit tests
commit 93490960ff
Author: Chris Meyers <chris.meyers.fsu@gmail.com>
Date: Fri Dec 25 23:17:26 2015 -0600
remove unneeded deepcopy fields
* Fix to error if validate_cert is True and python doesn't support it.
* Only globally disable certificate checking if really needed. Use
bigip verify parameter if available instead.
* Remove public disable certificate function to make it less likely
people will attempt to reuse that
* now module errors clearly state msg=MODULE FAILURE
* module's stdout and stderr go into module_stdout and module_stderr keys
which only appear during parsing failure
* invocation module_args are deleted from results provided by action
plugin as errors can keep us from overwriting and then disclosing info that
was meant to be kept hidden due to no_log
* fixed invocation module_args set by basic.py, as it was creating different
keys than the invocation in the action plugin base.
* results now merge
This plugin filters output for any task that is 'ok' or 'skipped'.
It works by subclassing the 'default' stdout callback plugin and
overriding certain functions. It will suppress display of the task
banner until there is a 'changed' or 'failed' result or an
unreachable host.
* Changed parse_addresses to throw exceptions instead of passing None
* Switched callers to trap and pass through the original values.
* Added very verbose notice
* Look at deprecating this and possibly validate at plugin instead
fixes #13608
Because the fail_state is potentially non-zero in these block sections,
the prior logic led to included tasks not being inserted at all.
Related issue: #13605
This was added in 1.9 and 2.0 tried to copy, but since it cannot
obey no_log restrictions I commented it out. I did not remove as
it is still very useful for module invocation debugging.
* Saving of the registered variable was occurring after the tests for
changed/failed_when.
* Each of the above fields and until were being post_validated too early,
so variables which were not defined at that time were causing task
failures.
Fixes #13591
Environments were not being templated individually, so a variable environment
value was causing the exception regarding dicts to be hit. Also, environments
as inherited were coming through with the tasks listed first, followed by the
parents, so they were being merged backwards. Reversing the list of environments
fixed this.
Also fixes a bug where we were passing an incorrect number of parameters to
_do_handler_run() when processing an include file in a handler task/block.
Fixes #13560
Otherwise, each relative include path is checked on its own, rather
than in relation to the (possibly relative) path of its parent, meaning
includes multiple level deep may fail to find the correct (or any) file.
Fixes #13472
The current ssh shared module forces only password based authentication. This
change will allow the ssh module to use keys if a password is not provided.
We were logging the command to be executed many times, which made debug
logs very hard to read. Now we do it only once.
Also makes the logged ssh command line cut-and-paste-able (the lack of
which has confused a number of people by now; the problem being that we
pass the command as a single argument to execve(), so it doesn't need an
extra level of quoting as it does when you try to run it by hand).
moved from the field attribute declaration and created a placeholder
which then is resolved in the field attribute class.
this is to avoid unwanted persistence of the defaults across objects, which introduces
stealth bugs when multiple objects of the same kind are used in succession while
not overriding the default values.
OS X El Capitan moved the /etc/ssh_* files into /etc/ssh/. This fix
adds a distribution version check for Darwin to set the keydir
appropriately on El Capitan and later.
tasks were overriding the command line with their defaults, not with the
explicit setting; removed the setting of defaults from task init and
pushed it down to the play context at the last possible moment.
fixes #13362
Ansible previously added hosts to the host list multiple times for commands
like `ansible -i 'localhost,' -c local -m ping 'localhost,localhost'
--list-hosts`.
8d5f36a fixed the obvious error, but still added the un-deduplicated list to a
cache, so all future invocations of get_hosts() would retrieve a
non-deduplicated list.
This caused problems down the line: For some reason, Ansible only ever
schedules "flush_handlers" tasks (instead of scheduling any actual tasks from
the playbook) for hosts that are contained in the host lists multiple times.
This probably happens because the host states are stored in a dictionary
indexed by the hostnames, so duplicate hostname would cause the state to be
overwritten by subsequent invocations of … something.
* sudo was not working, now it supports full become
* now the default checkout dir works, not only when specifying one
* paths for checkout dir get expanded
* fixed limit options for playbook
* added verbose and debug info
This should fix issues with fish shell users as && and || are
not valid syntax, fish uses actual 'and' and 'or' programs.
Also updated to allow for fish backticks pushed quotes to subshell,
fish seems to handle spaces w/o them.
Lastly, removed encompassing subshell () for fish compatibility.
fixes #13199
This patch fixes a bug in module_utils/ios.py where the wrong shared
module arguments are being generated. This bug prevented the shared module
from operating correctly. This patch should be generally applied.
* Move self._tqm.load_callbacks() earlier to ensure that v2_on_playbook_start can fire
* Pass the playbook instance to v2_on_playbook_start
* Add a _file_name instance attribute to the playbook
At its most basic, this is nothing more than an array or hash lookup,
but when used in conjunction with map, it is very useful. For example,
while constructing an "ssh-keyscan …" command to update known_hosts on
all hosts in a group, one can get a list of IP addresses with:
groups['x']|map('extract', hostvars, 'ec2_ip_address')|list
This returns hostvars[a].ec2_ip_address, hostvars[b].ec2_ip_address, and
so on. You can even specify an array of keys for a recursive lookup, and
mix string and integer keys depending on what you're looking up:
['localhost']|map('extract', hostvars, ['vars','group_names',0])|first
== hostvars['localhost']['vars']['group_names'][0]
== 'ungrouped'
Includes documentation and tests.
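As a task, the known_hosts use case mentioned above might look roughly like this (assuming a group 'x' whose hosts expose ec2_ip_address in hostvars):
- name: collect host keys for group x (illustrative)
  command: >
    ssh-keyscan {{ groups['x'] | map('extract', hostvars, 'ec2_ip_address') | join(' ') }}
  register: keyscan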
The comment was taken literally from lib/plugins/strategy/linear.py and
makes no sense in free.py where we have no noop tasks.
Also update the debug messages.
This patch fixes an issue with the common args dict in the eapi shared
module. This patch is required for the eapi shared module to be properly
imported and should therefore be applied to all instances.
This commit changes the way modules create an instance of AnsibleModule to
now use a common function, eapi_module. This function will now automatically
append the common argument spec to the module argument_spec. Module
arguments can override common module arguments
Pipelining is a *significant* performance benefit, because each task can
be completed with a single SSH connection (vs. one ssh connection at the
start to mkdir, plus one sftp and one ssh per task).
Pipelining is disabled by default in Ansible because it conflicts with
the use of sudo if 'Defaults requiretty' is set in /etc/sudoers (as it
is on Red Hat) and su (which always requires a tty).
We can (and already do) make sudo/su happy by using "ssh -t" to allocate
a tty, but then the python interpreter goes into interactive mode and is
unhappy with module source being written to its stdin, per the following
comment from connections/ssh.py:
# we can only use tty when we are not pipelining the modules.
# piping data into /usr/bin/python inside a tty automatically
# invokes the python interactive-mode but the modules are not
# compatible with the interactive-mode ("unexpected indent"
# mainly because of empty lines)
Instead of the (current) drastic solution of turning off pipelining when
we use a tty, we can instead use a tty but suppress the behaviour of the
Python interpreter to switch to interactive mode. The easiest way to do
this is to make its stdin *not* be a tty, e.g. with cat|python.
This works, but there's a problem: ssh will ignore -t if its input isn't
really a tty. So we could open a pseudo-tty and use that as ssh's stdin,
but if we then write Python source into it, it's all echoed back to us
(because we're a tty). So we have to use -tt to force tty allocation; in
that case, however, ssh puts the tty into "raw" mode (~ICANON), so there
is no good way for the process on the other end to detect EOF on stdin.
So if we do:
echo -e "print('hello world')\n"|ssh -tt someho.st "cat|python"
…it hangs forever, because cat keeps on reading input even after we've
closed our pipe into ssh's stdin. We can get around this by writing a
special __EOF__ marker after writing in_data, and doing this:
echo -e "print('hello world')\n__EOF__\n"|ssh -tt someho.st "sed -ne '/__EOF__/q' -e p|python"
This works fine, but in fact I use a clever python one-liner by mgedmin
to achieve the same effect without depending on sed (at the expense of a
much longer command line, alas; Python really isn't one-liner-friendly).
We also enable pipelining by default as a consequence.
since all the --ask pass options end up triggering the same code
and are functionally equivalent, ignore them when it comes to checking
privilege escalation conflicts. This allows using -K when --become-method=su
and so on.
The secret_key parameter especially can contain non-ascii characters and
will throw an error if such a string is passed as a byte str.
Potential fix for #13303
It is natural that an argument_spec with choices=BOOLEAN accepts
boolean literal (True, False) though the current implementation
allows only string or int.
* StandardError doesn't exist in python3
* because it is the root of builtin exceptions, we can't catch it
separate from the builtin exceptions
* It doesn't tell us anything about the error being thrown as it's too
generic
This ssh shared module is used for building modules that require an
interactive shell environment such as those required for connecting
to network devices
If we request escalation with a password, we start in expecting_prompt
state. If the escalation then succeeds without the password, i.e., the
become_success response arrives, we must explicitly move into the next
state (awaiting_escalation, which immediately goes into ready_to_send),
so that we no longer try to apply the timeout.
Otherwise, we would leak the success notification and eventually
timeout. But if the module response did arrive before the timeout
expired, the "process has already exited" test would do the right
thing by accident (which is why it didn't fail more often).
Fixes #13289
This was caused by accessing the cache using the passed in mod_type
rather than the suffix that we calculate with knowledge of whether this
is a module or non-module plugin.
Ensure that ansible-galaxy version can be a branch, a tag, or any tree-ish
supported by git including specific commit IDs. For git scm roles, adds an
explicit git checkout of the specified role_version prior to the git archive.
This means that we'll always archive from HEAD of whatever role_version is
checked out. role_version can be a branch, a tag, or any <tree-ish> supported
by git including specific commit IDs. These changes also ensure
ansible-galaxy works for scm clones when specified version differs from
repository default branch.
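Illustratively, a requirements file entry (placeholder repository URL) can now pin any tree-ish:
- src: https://github.com/example/ansible-role-demo.git
  scm: git
  version: v1.2.0              # may equally be a branch name or a specific commit id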
Previously, we were filtering the task list on tags for each host
that was including the file, based on the idea that the variables
had to include the host information. However, the top level task
filtering is play-context only, which should also apply to the
included tasks. Tags cannot and should not be based on hostvars.
Looks like there are two pattern caches that need to be cleared for this to work- added the second one.
Added integration tests for add_host to prevent future regressions.
Looks like someone forgot to create an instance of undefined here- we were returning the undefined type object, which broke all the undefined checks.
Added an integration test around add_host that will catch this (separate PR to follow)
This callback plugin will generate json objects to be sent to the
logentries service for auditing/debugging purposes.
To use:
Add this to your ansible.cfg file in the defaults block
[defaults]
callback_plugins = ./callback_plugins
callback_stdout = logentries
callback_whitelist = logentries
Copy the callback plugin into the callback_plugins directory
Either set the environment variables
export LOGENTRIES_API=data.logentries.com
export LOGENTRIES_PORT=10000
export LOGENTRIES_ANSIBLE_TOKEN=dd21fc88-f00a-43ff-b977-e3a4233c53af
Or create a logentries.ini config file that sits next to the plugin with the following contents
[logentries]
api = data.logentries.com
port = 10000
tls_port = 20000
use_tls = no
token = dd21fc88-f00a-43ff-b977-e3a4233c53af
It was set to match the SSH connect timeout. Unfortunately, they would
race when ssh fails to connect, and the connect timeout usually failed.
This led to some misleading error messages.
Fixes #12916
Error reporting was broken for GCE modules- pprint didn't work with exceptions, so you'd always get "Unexpected response: {}" instead of the real error.
Code for a plugin is usually loaded by a PluginLoader(), and henceforth
available from self._module_cache, which prevents duplicate loading.
However there are situations (e.g. where one action plugin imports code
from another one) where the plugin module might be already imported (and
resident in sys.modules), but not present in the PluginLoader's
_module_cache, which causes imp.load_source() to effectively reload the
module, overwriting global class declarations and causing subtle latent
bugs.
Fixes #13110.
Fixes #12979.
* Always cache and return unique list objects, so that if the list
is changed later it does not impact the cached results
* Take additional parameters and the type of the pattern into account
when building the hash string
Also displays a warning now, because users should not be using that variable
name as it causes a collision with the internal variable of the same name.
This commit adds the shared module support for Cisco NXAPI. The shared
module builds on top of the urls shared module. The urls module provides
the http/s transport. This module only supports the JSON request message
format.
The code isn't sophisticated enough to understand lists and dicts yet.
This mirrors how 1.9.x handled non-string items so it's not a regression.
One portion of a fix for #12976
I PR'd a change to pywinrm to allow server certs to be ignored; but it's only on the SSL transport (which we were previously ignoring). For this to work more generally, we're also now pulling the named ansible_winrm_* args from the merged set of host/group vars, not just host_vars.
These were mostly saving exceptions but not using them. Getting rid of
those will help with eventually running modules via either python2.4 or
python3.x.
* remove requirement for host patterns, use the defaults
* require destination directory (None in cwd is not a good default)
* fixed usage messages
* updated default inventory to use , and not deprecated :
* Properly mark hosts with failures in includes as failed
* Don't send callbacks until we're sure we're done, and also fix how
we increment stats so failures don't show up as ok's
* Fix a bug in the include file logic where a failed include could lead
to an infinite loop in the task iteration logic
Fixes #12933
also remove condition to bypass setting user if user matches current user
this enables forcing user when set to the same user as current user and ignoring .ssh/config
while keeping .ssh/config with current user if nothing is specified.
prior to this commit, an attempt to use the `include:` directive would
fail in a `rescue:` or `always:` block if there were failures in the
main block task list.
Resolves #12876.
* Fix the task_vars parameter to not default to a mutable type (dict)
* Implement invocation in the base class's run() method; have each action
module call the run() method's implementation in the base class.
* Return values from the action plugins' run() method take the return
value from the base class run() method into account so that invocation
makes its way to the output.
Fixes #12869
Revert "Remove auto-added invocation return value as it is not used by v2 and could leak sensitive data."
This reverts commit 6ce6b20268.
Remove the note that invocation was removed as we've now restored it.
Revert "keyword not in ubuntu 14.04"
This reverts commit 5c01622457.
Revert "remove invocation keyword check"
This reverts commit 5177cb3f74.
ansible-playbook now works when run with a playbook
that includes a role that includes another role
specified using csv format
Updated one of the roles used in the tests to fix
broken tests - `make test_galaxy` now works
Fixes #11486. Also addresses the problem alluded to in #10620.
For some situations like Vagrant, the remote_addr may be a localhost addr, but ssh
is still desired. This corrects the assumption that any localhost remote_addr should
be using the local connection by checking the inventory_hostname value as well.
Fixes #12817
Simplifies logic and prevents us from accidentally post_validating
an include that would otherwise be skipped due to tags causing a
problem because of potentially missing variables.
Fixes #12793
When using 'local' connections, privilege escalation would fail if
ansible_ssh_user was in the current context to the same value as
become_user.
This commit ensures that for 'local' connections we reset remote_user to
the local username.
This fixes #12782.
I inadvertently introduced it in
ca826508d9 and didn't notice, because
there are no unit tests for playbook_executor.py. Sorry!
(The "from ansible.errors import *" was used *only* to get the 'os'
module, which makes go "what?")
If you convert the error string to bytes and embed it inside another
error string, you get
Prefix:
b'Embedded\nerror\nstring'
which is not what we want.
But we also don't want Unicode in error messages causing unexpected
UnicodeEncodeErrors when on Python 2.
So let's convert the error message into the native string type (bytes on
Python 2, unicode on Python 3).
* Don't throw away the full path of the module code being loaded,
as this can cause conflicts when files of the same name are being
instantiated
* Generalize the module loading code
Fixes #12738
* corrupt/invalid file causes tracebacks
* incorrect initialization of display/_display in BaseCacheModule class
* tweaking the way errors in get() on jsonfile caches work, to raise
a proper AnsibleError in that situation so the playbook/task is stopped
Fixes #12708
The first call to persisting facts would work due to the assignment of a
MutableMapping calling __setitem__ but subsequent module fact data would
not be propagated to the fact cache plugins because update() doesn't
invoke __setitem__. This changes the behavior a little bit and ensures
set() is called on cache plugins.
Also does some reorganization/cleanup on the magic vars/delegated
variable generation portions of VariableManager to make the above
possible.
Fixes #12633
better error reporting on fetching errors
use scm if it exists over src
unified functions in requirements
simplified logic
added verbose to tests
cleanup code refs, unused options and dead code
moved get_opt to base class
fixes #11920, fixes #12612, fixes #10454
This is because we pass the whole dd command string into the shell
that's running on the contained environment rather than running it
directly from python via subprocess without a shell.
corrected output from default callback
added new tests for no_log loops
updated makefile test to check for both positive and negative occurrences of no_log
The earlier code behaved exactly as though this default had been set,
but it was actually handled as a(n unnecessary) special case inside the
connection plugin, rather than set as an explicit default.
If the default is overridden either in ansible.cfg or the environment,
the new code will continue to work (in fact, it won't know or care,
since it just uses the value set in the PlayContext).
This is submitted as a separate commit for easier review to address
backwards-compatibility concerns.
Using set_host_overrides() in the connection plugin to access the ssh
argument variables from the inventory didn't see group_vars/host_vars
settings, as noted earlier. Instead, we can set the correct values in
the PlayContext, which has access to all command-line options, task
settings, and variables.
The only downside of doing so is that the source of the settings is no
longer available in ssh.py, and therefore can't be logged. But the code
is simpler, and it actually works.
This change was suggested by @jimi-c in response to the FIXME in the
earlier commit.
Now we have the following ways to set additional arguments:
1. [ssh_connection]ssh_args in ansible.cfg: global setting, prepended to
every command line for ssh/scp/sftp. Overrides default ControlPersist
settings.
2. ansible_ssh_common_args inventory variable. Appended to every command
line for ssh/scp/sftp. Used in addition to ssh_args, if set above, or
the default settings.
3. ansible_{sftp,scp,ssh}_extra_args inventory variables. Appended to
every command line for the relevant binary only. Used in addition to
#1 and #2, if set above, or the default settings.
4. Using the --ssh-common-args or --{sftp,scp,ssh}-extra-args command
line options (which are overridden by #2 and #3 above).
This preserves backwards compatibility (for ssh_args in ansible.cfg),
but also permits global settings (e.g. ProxyCommand via _common_args) or
ssh-specific options (e.g. -R via ssh_extra_args).
Fixes #12576
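For example, a group_vars file can now carry something like the following (the jump host is a placeholder):
# group_vars/dmz.yml: applies to every ssh, scp and sftp invocation for the group
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q jump@bastion.example.com"'
# ssh-only extra argument, appended on top of the common args
ansible_ssh_extra_args: '-o ServerAliveInterval=30'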
<crab> jimi|ansible: do you think it should be possible to add both
foo:22 and foo:23 to the inventory?
<jimi|ansible> no
…so we don't want an invitation to FIXME.
CLI already provides a pager() method that feeds $PAGER on stdin, so we
just feed that the plaintext from the vault file. We can also eliminate
the redundant and now-unused shell_pager_command method in VaultEditor.
(Reminder: cannot use six here, module_utils get shipped to remote
machines that may not have six installed -- besides, six doesn't support
Python 2.4.)
Since c8f2483d, ini.py expects to always be passed in a pre-created list
of groups, and can no longer deal sensibly with an empty list; this just
makes that expectation clear.
This fixes a corner case where ini files live in a subdir
of the main inventory directory.
Reproducing the original error:
mkdir -p inventory/ini
cat > inventory/ini/hosts << EOF
[www]
www1
EOF
$ ansible -i inventory/ all -m ping
ERROR! 'all'
(or without the [www] group, it would complain about 'ungrouped')
Fixes another failing test.
(I don't want to do a global search/replace for 'basestring' because I
want to have unit tests covering each occurrence. When I run out of
existing failing tests, I'll try to write new ones.)
* Remove extraneous imports
* Fix some error handling
* Enable pipelining
* Disable su since it doesn't work
* Add error message when installed docker is not recent enough to
support this plugin
* Move nested functions to class level
* Make transport a class attribute
* Make exec_command, put_file and fetch_file more robust
Removed deletion of salt param from lookup file by 'password' lookup_filter.
Old behaviour leads to constant 'changed' status when two tasks use the same lookup,
one with the 'encrypt' parameter, and the other without.
For example:
tasks:
  - name: Create user
    user:
      password: "{{ lookup('password', inventory_dir + '/creds/user/pass encrypt=sha512_crypt') }}"
      ...
  # Lookup file 'creds/user/pass' now contains the password with its salt
  - name: Create htpasswd
    htpasswd:
      password: "{{ lookup('password', inventory_dir + '/creds/user/pass') }}"
      ...
  # Salt gets deleted from lookup file 'creds/user/pass'
  # Next run of the "Create user" task will create it again and will have 'changed' status
* Disable su as it's not currently working 100% (and was disabled in v1).
* Move BUFSIZE out of the class to match other connection plugins
* _connect shouldn't return self.
This is also peripheral to what _build_command needs, can be improved
and tested independently, and so makes more sense in a separate method.
This commit doesn't change any functionality (and I've verified that it
works with the various combinations: control_path set in ansible.cfg,
ssh_args adding or not adding ControlMaster/ControlPersist, etc.).
SSH pipelining can be a significant performance improvement, but it will
not work if sudoers is configured to requiretty. With this change, one
could have pipelining enabled in ansible.cfg, but use sudo to turn off
requiretty in a separate play (or task) where pipelining is disabled:
- hosts: foo
  vars:
    ansible_pipelining: no
  tasks:
    - lineinfile: dest=/etc/sudoers line='Defaults requiretty' state=absent
      sudo_user: root
(Note that sudoers has a complicated syntax, so the above lineinfile
invocation may be too simplistic for production use; but the point is
that a separate play can do something to disable requiretty.)
Also get pipelining working for people who look to chroot as an example
for their own connection plugins
Note: In the latest v2 API, the action handles become but chroot doesn't
reliably handle become. Maybe we need to add a has_become attribute
so that the action can display an appropriate error.
* allow global no_log setting, no need to set at play or task level, but can be overridden by them
* allow turning off syslog only on task execution from target host (manage_syslog), overlaps with no_log functionality
* created log function for task modules to use, now we can remove all syslog references, will use systemd journal if present
* added debug flag to modules, so they can make it call new log function conditionally
* added debug logging in module's run_command
Due to the way we're now calculating delegate_to, if that value is based
on a loop variable ('item') we need to calculate all of the possible
delegated_to variables for that loop.
Fixes #12499
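A minimal sketch of the pattern this covers (the group name and command are illustrative):
- name: run a step on each host produced by the loop
  command: /usr/local/bin/refresh-cache
  delegate_to: "{{ item }}"                 # the delegated-to host depends on the loop variable
  with_items: "{{ groups['webservers'] }}"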
There doesn't appear to be anything that actually uses tmp_path in the
connection plugins so we don't need to pass that in to exec_command.
That change also means that we don't need to pass tmp_path around in
many places in the action plugins any more. there may be more cleanup
that can be done there as well (the action plugin's public run() method
takes tmp as a keyword arg but that may not be necessary).
As a side effect of this patch, some potential problems with chmod and
the patch, assemble, copy, and template modules have been fixed (those
modules called _remote_chmod() with the wrong order for their
parameters. Removing the tmp parameter fixed them.)
The process is already gone, so there's not going to be any new data
showing up on its stderr; we only want to make sure that we haven't
missed something that was already written. So polling once is enough.
This change is motivated by an ssh oddity: when ControlPersist is
enabled, the first (i.e. master) connection goes into the background; we
see EOF on its stdout and the process exits, but we never see EOF on its
stderr. So if we ran a command like this:
ANSIBLE_SSH_PIPELINING=1 ansible -T 30 -vvv somehost -u someuser -m command -a whoami
We would first do select([stdout,stderr], timeout) and read the command
module output, then select([stdout,stderr], timeout) again and read EOF
on stdout, then select([stderr], timeout) AGAIN (though the process has
exited), and select() would wait for the full timeout before returning
rfd=[], and then we would exit. The use of a very short timeout in the
code masked the underlying problem (that we don't see EOF on stderr).
It's always preferable to call select() with a long timeout so that the
process doesn't use any CPU until one of the events it's interested in
happens (and then select will return independent of elapsed time).
(A long timeout value means "if nothing happens, sleep for up to <x>";
omitting the timeout value means "if nothing happens, sleep forever";
specifying a zero timeout means "don't sleep at all", i.e. poll for
events and return immediately.)
This commit uses a long timeout, but explicitly detects the condition
where we've seen EOF on stdout and the process has exited, but we have
not seen EOF on stderr. If and only if that happens, it reruns select()
with a short timeout (in practice it could just exit at that point, but
I chose to be extra cautious). As a result, we end up calling select()
far less often, and use less CPU while waiting, but don't sleep for a
long time waiting for something that will never happen.
Note that we don't omit the timeout to select() altogether because if
we're waiting for an escalation prompt, we DO want to give up with an
error after some time. We also don't set exceptfds, because we're not
actually acting on any notifications of exceptional conditions.
On Python 2, shlex.split() raises if you pass it a unicode object with
non-ASCII characters in it. The Ansible codebase copes by explicitly
converting the string using to_bytes() before passing it to
shlex.split().
On Python 3, shlex.split() raises ('bytes' object has no attribute 'read')
if you pass a bytes object. Oops.
This commit introduces a new wrapper function, shlex_split, that
transparently performs the to_bytes/to_unicode conversions only on
Python 2.
Currently I've only converted one call site (the one that was causing a
unit test to fail on Python 3). If this approach is deemed suitable,
I'll convert them all.
Without this, we could execute «ssh -q ...» and call select(), which
would timeout after the default 10s, and only then send initial data.
(This is a relic of the earlier change where we always ran ssh with
-vvv, so the situation where it would sit quietly never happened in
practice; but this would have been the right thing to do even then.)
Make the code compatible with Pythons 2.4 through 3.5 by using
sys.exc_info()[1] instead.
This is necessary but not sufficient for Python 3 compatibility.
The event loop (even after it was brought into one place in _run in the
previous commit) was hard to follow. The states and transitions weren't
clear or documented, and the privilege escalation code was non-blocking
while the rest was blocking.
Now we have a state machine with four states: awaiting_prompt,
awaiting_escalation, ready_to_send (initial data), and awaiting_exit.
The actions in each state and the transitions between them are clearly
documented.
The check_incorrect_password() method no longer checks for empty strings
(since they will always match), and check_become_success() uses equality
rather than a substring match to avoid thinking an echoed command is an
indication of successful escalation. Also adds a check_missing_password
connection method to detect the error from sudo -n/doas -n.
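A compressed illustration of that state machine (a sketch only; the helper
names on conn stand in for the connection's prompt/escalation checks, and the
real loop also reads from the ssh process and handles timeouts):
```
def advance(state, chunk, conn):
    # chunk is the latest output read from ssh; returns the next state
    if state == 'awaiting_prompt':
        if conn.check_password_prompt(chunk):
            conn.send_become_password()          # stand-in helper
            return 'awaiting_escalation'
    elif state == 'awaiting_escalation':
        if conn.check_become_success(chunk):     # equality, not substring
            return 'ready_to_send'
        if conn.check_incorrect_password(chunk) or conn.check_missing_password(chunk):
            raise Exception('privilege escalation failed')
    elif state == 'ready_to_send':
        conn.send_initial_data()                 # stand-in helper
        return 'awaiting_exit'
    return state                                 # awaiting_exit: just drain output
```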
The main exec_command/put_file/fetch_file methods now _build_command and
call _run to handle input from/output to the ssh process. The purpose is
to bring connection handling together in one place so that the locking
doesn't have to be split across functions.
Note that this doesn't change the privilege escalation and connection IO
code at all—just puts it all into one function.
Most of the changes are just moving code from one place to another (e.g.
from _connect to _build_command, from _exec_command and _communicate to
_run), but there are some other notable changes:
1. We test for the existence of sshpass the first time we need to use
password authentication, and remember the result.
2. We set _persistent in _build_command if we're using ControlPersist,
for later use in close(). (The detection could be smarter.)
3. Some apparently inadvertent inconsistencies between put_file and
fetch_file (e.g. argument quoting, sftp -b use) have been removed.
Also reorders functions into a logical sequence, removes unused imports
and functions, etc.
Aside: the high-level EXEC/PUT/FETCH description should really be logged
from ConnectionBase, while individual subclasses log transport-specific
details.
* Make LookupBase an abc with required methods (run()) marked as an
abstractmethod
* Mark methods that don't use self as @staticmethod
* Document how to implement the run method of a lookup plugin.
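Roughly, the shape of that change (a sketch, not the exact class body; it uses
the standalone six package):
```
from abc import ABCMeta, abstractmethod

from six import with_metaclass


class LookupBase(with_metaclass(ABCMeta, object)):

    @abstractmethod
    def run(self, terms, variables=None, **kwargs):
        """Look up 'terms' and return a list of results."""
        pass

    @staticmethod
    def _flatten(terms):
        # doesn't touch self, hence @staticmethod
        ret = []
        for term in terms:
            ret.extend(term if isinstance(term, (list, tuple)) else [term])
        return ret
```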
Follow up to 8769f03c, which allows the undefined var error to be raised
if we're getting vars with a full context (play/host/task) and the host
has already gathered facts. In this way, vars_files containing variables
that fail to be templated are not silently ignored.
This fixes a failing unit test.
In actual use (which is still quite far off), I'm not sure if bytes ->
unicode conversion should be done here (in which case the code will fail
with an AttributeError: 'bytes' object has no attribute 'readlines'), or
inside self._connection.exec_command() (in which case my change is
correct).
Now, instead of relying on hostvars on the executor side, we compile
the vars for the delegated to host in a special internal variable and
have the PlayContext object look for things there when applying task/
var overrides, which is much cleaner and takes advantage of the code
already dealing with all of the magic variable variations.
Fixes #12127, #12079
* Clearing interpreter settings from variables, so those set for the
original host aren't incorrectly applied to the delegated to host
* Fixed incorrect string for remote user in delegated hosts hostvars
* Properly looking for multiple possibilities in the delegated-to host's
hostvars (ansible_ssh_host vs. ansible_host)
Use six.moves.range instead (aliased to xrange on Python 2, aliased to
range on Python 3).
Also I couldn't resist replacing the elaborate chr/ord/randrange dance
with the simpler random.choice(string.ascii_lowercase) that was already
used elsewhere in the Ansible codebase.
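The replacement pattern, roughly (a sketch):
```
import random
import string

from six.moves import range   # xrange on Python 2, range on Python 3

def random_lowercase(length=8):
    # replaces the chr/ord/randrange dance
    return ''.join(random.choice(string.ascii_lowercase) for _ in range(length))
```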
The earlier distinction was never used; .ipv6_address was always a copy
of .ipv4_address, and the latter was always used to set the remote_addr
field in the PlayContext.
Also uses the canonical ansible_host/ansible_port names when setting the
address and port from variables.
The earlier-recommended "pat1:pat2:pat3[x:y]" notation doesn't work well
with IPv6 addresses, so we recommend ',' as a separator instead. We know
that commas can't occur within a pattern, so we can just split on it.
We still have to accept the "foo:bar" notation because it's so commonly
used, but we issue a deprecation warning for it.
Fixes #12296, closes #12404, #12329
* Add exception handling when running PowerShell modules to provide exception message and stack trace.
* Enable strict mode for all PowerShell modules and internal commands.
* Update common PowerShell code to fix strict mode errors.
* Fix an issue with Set-Attr where it would not replace an existing property if already set.
* Add tests for exception handling using modified win_ping modules.
Hi @amenonsen - thanks for hunting down the unicode bug and expanding test_addresses. The code looks good, merging! -- Be systematic about parsing and validating hostnames and addresses
These used to go in vars_cache, so merging them in after that as they
are "live" variables and the user would most likely want to see these
above anything else.
Labels must start with an alphanumeric character, may contain
alphanumeric characters or hyphens, but must not end with a hyphen.
We enforce those rules, but allow underscores wherever hyphens are
accepted, and allow alphanumeric ranges anywhere.
We relax the definition of "alphanumeric" to include Unicode characters
even though such inventory hostnames cannot be used in practice unless
an ansible_ssh_host is set for each of them.
We still don't enforce length restrictions—the fact that we have to
accept ranges makes it more complex, and it doesn't seem especially
worthwhile.
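One way to express those rules as a pattern (illustrative only; it ignores
ranges and is not the exact expression used):
```
import re

# starts and ends with an alphanumeric, hyphens/underscores allowed inside
LABEL = re.compile(r'^[^\W_](?:[\w-]*[^\W_])?$', re.UNICODE)

assert LABEL.match(u'web-01')
assert not LABEL.match(u'web-')   # must not end with a hyphen
```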
This adds a parse_address(pattern) utility function that returns
(host,port), and uses it wherever we accept IPv4 and IPv6
addresses and hostnames (or host patterns): the inventory parser and
the add_host action plugin.
It also introduces a more extensive set of unit tests that supersedes
the old add_host unit tests (which didn't actually test add_host, but
only the parsing function).
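A rough illustration of the host/port splitting behaviour (the function name
and regexes are made up; the real parse_address also validates labels and
understands ranges):
```
import re

def split_host_port(spec):
    m = re.match(r'^\[([^\]]+)\](?::(\d+))?$', spec)    # [host] or [host]:port
    if not m:
        m = re.match(r'^([^:]+)(?::(\d+))?$', spec)     # host or host:port
    if not m:
        return spec, None                               # e.g. a bare IPv6 address
    host, port = m.group(1), m.group(2)
    return host, int(port) if port else None

print(split_host_port('[2001:db8::1]:2222'))    # ('2001:db8::1', 2222)
print(split_host_port('web01.example.com:22'))  # ('web01.example.com', 22)
print(split_host_port('fe80::1'))               # ('fe80::1', None)
```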
There was code to support set literals (on Python 2.7 and newer), but it
was buggy: SAFE_NODES.union() doesn't modify SAFE_NODES in place,
instead it returns a new set object that is then silently discarded.
I added a unit test and fixed the code. I also changed the version
check to use sys.version_info instead of a string comparison, for
consistency with the subsequent Python 3.4 version check that I added in
the previous commit.
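The essence of the bug and the fix (illustrative):
```
import ast
import sys

SAFE_NODES = set([ast.Expression, ast.Num, ast.Str])

if sys.version_info[:2] >= (2, 7):
    # broken: union() returns a new set, and the result was silently discarded
    #   SAFE_NODES.union(set([ast.Set]))
    # fixed: keep the returned set (or use update())
    SAFE_NODES = SAFE_NODES.union(set([ast.Set]))
```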
Two things changed in Python 3.4:
- 'basestring' is no longer defined, so use six.string_types
- True/False are now special AST node types (NameConstant) rather than
just names
(Good thing we had tests, or I wouldn't have noticed the 2nd thing!)
I found only one place where safe_eval() is called inside the ansible
codebase: in lib/template/__init__.py. The call to safe_eval(result,
...) is protected by result.startswith('...'), which means result cannot
possibly be a byte string on Python 3 (or startswith() would raise), so
six.string_types (which excludes byte strings on Python 3) is fine here.
PyYAML has a SafeRepresenter in lib/... that defines
def represent_unicode(self, data):
return self.represent_scalar(u'tag:yaml.org,2002:str', data)
and a different SafeRepresenter in lib3/... that defines
def represent_str(self, data):
return self.represent_scalar('tag:yaml.org,2002:str', data)
so the right thing to do on Python 3 is to use represent_str.
(AnsibleUnicode is a subclass of six.text_type, i.e. 'str' on Python 3.)
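So the registration ends up looking roughly like this (a sketch; the commented
line shows where Ansible's own dumper and text type would plug in):
```
import sys

from yaml.representer import SafeRepresenter

if sys.version_info[0] >= 3:
    represent_text = SafeRepresenter.represent_str       # lib3/: takes str
else:
    represent_text = SafeRepresenter.represent_unicode   # lib/: takes unicode

# e.g. AnsibleDumper.add_representer(AnsibleUnicode, represent_text)
```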
needed for winrm, disabled closing connections in ssh to avoid issues with that persistence; need to normalize all this in the future
This reverts commit 23a22397bf.
Required some rewiring in inventory code to make sure we're using
the DataLoader class for some data file operations, which makes mocking
them much easier.
Also identified two corner cases not currently handled by the code, related
to inventory variable sources and which one "wins". Also noticed we weren't
properly merging variables from multiple group/host_var file locations
(inventory directory vs. playbook directory locations) so fixed as well.
This was commented out earlier because of the lack of interprocess
locking and prepare_writeable_dir in v2.
The locking was not needed: it could only protect against other siblings
of this process (since they were all locking a temporary file that was
opened in the parent), and those would be running as the same user and
with the same umask. Also, os.makedirs() tolerates intermediate paths
being created by other processes. For any other kind of error, both
locking and non-locking code paths would fail in the same way.
So all we really need to do is make sure we have write permissions.
(We also move the cp_dir handling code to where we actually set the
ControlPath ourselves; if the user has set it via ssh_*args already,
we don't need to bother.)
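In other words, something along these lines (an illustrative sketch, not the
exact code):
```
import errno
import os

def ensure_control_path_dir(cp_dir):
    try:
        os.makedirs(cp_dir, 0o700)
    except OSError as e:
        # another process creating it in the meantime is fine
        if e.errno != errno.EEXIST:
            raise
    if not os.access(cp_dir, os.W_OK):
        raise RuntimeError("Cannot write to ControlPath directory %r" % cp_dir)
```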
commit 9921bb9d20
Author: Abhijit Menon-Sen <ams@2ndQuadrant.com>
Date: Mon Aug 10 20:19:44 2015 +0530
Document --ssh-extra-args command-line option
commit 8b25595e7b
Author: Abhijit Menon-Sen <ams@2ndQuadrant.com>
Date: Thu Aug 13 13:24:57 2015 +0530
Don't disable GSSAPI/Pubkey authentication when using --ask-pass
This commit is based on a bug report and PR by kolbyjack (#6846) which
was subsequently closed and rebased as #11690. The original problem was:
«The password on the delegated host is different from the one I
provided on the command line, so it had to use the pubkey, and the
main host doesn't have a pubkey on it yet, so it had to use the
password.»
(This commit is revised and included here because #11690 would conflict
with the changes in #11908 otherwise.)
Closes #11690
commit 119d032389
Author: Abhijit Menon-Sen <ams@2ndQuadrant.com>
Date: Thu Aug 13 11:16:42 2015 +0530
Be more explicit about why SSH arguments are added
This adds vvvvv log messages that spell out in detail where each SSH
command-line argument is obtained from.
Unfortunately, we can't be sure if, say, self._play_context.remote_user
is obtained from ANSIBLE_REMOTE_USER in the environment, remote_user in
ansible.cfg, -u on the command line, or an ansible_ssh_user setting in
the inventory or on a task or play. In some cases, e.g. timeout, we
can't even be sure if it was set by the user or just a default.
Nevertheless, on the theory that at five v's you can use all the hints
available, I've mentioned the possible sources in the log messages.
Note that this caveat applies only to the arguments that ssh.py adds by
itself. In the case of ssh_args and ssh_extra_args, we know where they
are from, and say so, though we can't say WHERE in the inventory they
may be set (e.g. in host_vars or group_vars etc.).
commit b605c285ba
Author: Abhijit Menon-Sen <ams@2ndQuadrant.com>
Date: Tue Aug 11 15:19:43 2015 +0530
Add a FAQ entry about ansible_ssh_extra_args
commit 49f8edd035
Author: Abhijit Menon-Sen <ams@2ndQuadrant.com>
Date: Mon Aug 10 20:48:50 2015 +0530
Allow ansible_ssh_args to be set as an inventory variable
Before this change, ssh_args could be set only in the [ssh_connection]
section of ansible.cfg, and was applied to all hosts. Now it's possible
to set ansible_ssh_args as an inventory variable (directly, or through
group_vars or host_vars) to selectively override the global setting.
Note that the default ControlPath settings are applied only if ssh_args
is not set, and this is true of ansible_ssh_args as well. So if you want
to override ssh_args but continue to set ControlPath, you'll need to
repeat the appropriate options when setting ansible_ssh_args.
(If you only need to add options to the default ssh_args, you may be
able to use the ansible_ssh_extra_args inventory variable instead.)
commit 37c1a5b679
Author: Abhijit Menon-Sen <ams@2ndQuadrant.com>
Date: Mon Aug 10 19:42:30 2015 +0530
Allow overriding ansible_ssh_extra_args on the command-line
This patch makes it possible to do:
ansible somehost -m setup \
--ssh-extra-args '-o ProxyCommand="ssh -W %h:%p -q user@bouncer.example.com"'
This overrides the inventory setting, if any, of ansible_ssh_extra_args.
Based on a patch originally by @Richard2ndQuadrant.
commit b023ace8a8
Author: Abhijit Menon-Sen <ams@2ndQuadrant.com>
Date: Mon Aug 10 19:06:19 2015 +0530
Add an ansible_ssh_extra_args inventory variable
This can be used to configure a per-host or per-group ProxyCommand to
connect to hosts through a jumphost, e.g.:
inventory:
[gatewayed]
foo ansible_ssh_host=192.0.2.1
group_vars/gatewayed.yml:
ansible_ssh_extra_args: '-o ProxyCommand="ssh -W %h:%p -q bounceuser@gateway.example.com"'
Note that this variable is used in addition to any ssh_args configured
in the [ssh_connection] section of ansible.cfg (so you don't need to
repeat the ControlPath settings in ansible_ssh_extra_args).
* When iterating over a child state, a failure should be propagated
up so parent blocks don't continue iterating
* Make sure a child state exists before trying to search it
Fixes #12210
I don't think six.iteritems is available here, but I also don't expect
there to be enough platforms to ever make the speed difference between
.items() and .iteritems() noticeable.
Replace .iteritems() with six.iteritems() everywhere except in
module_utils (because there's no 'six' on the remote host). And except
in lib/ansible/galaxy/data/metadata_template.j2, because I'm not sure
six is available there.
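For reference, the pattern being rolled out (a trivial example using the
standalone six package):
```
from six import iteritems

d = {'a': 1, 'b': 2}
for key, value in iteritems(d):   # d.iteritems() on Python 2, d.items() on Python 3
    print(key, value)
```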
The lock file is (a temporary file) opened in the parent process, whose
open fd is inherited by the workers after fork, and passed down through
the PlayContext. Connection grows lock/unlock methods which can be used
by individual connection plugins.
Right now, we don't do any locking, but we still scan known_hosts files
twice per connection. That's completely unnecessary, and the proposed
solutions to the locking problem wouldn't need known_hosts scanning
anyway, so this code can go away.
In v1, a trailing newline was kept if the parameter was passed as key=value. If
the parameter was passed as a yaml dict, the trailing newline was
discarded. Since key=value and yaml dict were unified in v2 we have to
make a choice as to which behaviour we want. Decided that keeping trailing
newlines by default made the most sense.
Fixes #12200, #12199
dbd755e0 previously assigned the value to self._templar.environment.searchpath,
which is incorrect - it needs to be assigned to the environment.loader.searchpath
value instead.
Fixes #11931
This information was earlier shown only with ANSIBLE_DEBUG, but it's
extremely useful in a user context, especially with module invocations
with deeply nested args like the ec2_vpc/ec2 modules.
Closes #11680
The contributor's name on line 10 (originally line 7) includes a character
that the default Python encoding (ASCII) raises an error on when interpreting
the file.
Specifying the utf-8 encoding, as is done in other modules, resolves
the error.
The error being raised is
SyntaxError: Non-ASCII character '\xc3' in file /.../lib/ansible/module_utils/f5.py
on line 7, but no encoding declared; see http://www.python.org/peps/pep-0263.html
for details
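i.e. the standard PEP 263 declaration near the top of the file:
```
# -*- coding: utf-8 -*-
```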
Rewrite function `get_fqdn`. It returns fqdn for all kinds of urls now.
`add_git_host_key` determines whether a url is ssh and whether its host
key should be added.
FieldAttributes will now by default not be post_validated unless a flag
is set on them in the class, as a large number of fields are really there
simply to be inherited by Task/PlayContext and shouldn't be templated too
early.
The other bug in #12084 (unrelated to the base issue) is also fixed here, where
the roles field is loaded before vars/vars_files, meaning there are no vars
yet loaded in the play when the templating occurs.
Fixes #12084
You cannot call bytes(obj) to get a simple representation of obj on
Python 3! E.g. bytes(42) returns a byte string with 42 NUL characters
instead of b'42'.
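For illustration, on Python 3 (the second line shows one portable way to get
the intended result):
```
print(bytes(42))                # b'\x00\x00...': 42 NUL bytes
print(str(42).encode('utf-8'))  # b'42'
```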
Python has had automatic int-to-long promotion for a long long time now.
Even Python 2.4 does that automatically.
Python 3 drops support for the L suffix altogether.
This is based on some code from (closed) PR #7872, but reworked based on
suggestions by @abadger and the other core team members.
Closes #7872 by @darkk (hash_merge/hash_replace filters)
Closes #11153 by @telbizov (merged_dicts lookup plugin)
Now we issue a "Reading … from stdin" prompt if our input isatty(), as
gpg does. We also suppress the "x successful" confirmation message at
the end if we're part of a pipeline.
(The latter requires that we not close sys.stdout in VaultEditor, and
for symmetry we do the same for sys.stdin, though it doesn't matter in
that case.)
This allows the following invocations:
# Interactive use, like gpg
ansible-vault encrypt --output x
# Non-interactive, for scripting
echo plaintext|ansible-vault encrypt --output x
# Separate input and output files
ansible-vault encrypt input.yml --output output.yml
# Existing usage (in-place encryption) unchanged
ansible-vault encrypt inout.yml
…and the analogous cases for ansible-vault decrypt as well.
In all cases, the input and output files can be '-' to read from stdin
or write to stdout. This permits sensitive data to be encrypted and
decrypted without ever hitting disk.
Now that VaultLib always decides to use AES256 to encrypt, we don't need
this broken code any more. We need to be able to decrypt this format for
a while longer, but encryption support can be safely dropped.
Now we don't have to recreate VaultEditor objects for each file, and so
on. It also paves the way towards specifying separate input and output
files later.
It's unused and unnecessary; VaultLib can decide for itself what cipher
to use when encrypting. There's no need (and no provision) for the user
to override the cipher via options, so there's no need for code to see
if that has been done either.
This commit deprecates the earlier groupname[x-y] syntax in favour of
the inclusive groupname[x:y] syntax. It also makes the subscripting
code simpler and adds explanatory comments.
One problem addressed by the cleanup is that _enumeration_info used to
be called twice, and its results discarded the first time because of the
convoluted control flow.
The possibilities are complicated enough that I didn't want to make
changes without having a complete description of what it actually
accepts/matches. Note that this text documents current behaviour, not
necessarily the behaviour we want. Some of this is undocumented and may
not be intended.
The --new-vault-password-file option works the same as
--vault-password-file but applies only to rekeying (when
--vault-password-file sets the old password). Also update the manpage
to document these options more fully.
`if method in dir(self):` is very inefficient:
- it must construct a list object listing all the object attributes & methods
- it must then perform a O(N) linear scan of that list
Replace it with the idiomatic `if hasattr(self, method):`, which is an
O(1) expected time hash lookup.
Should fix #11981.
Apart from ansible-vault create, every vault subcommand is happy to deal
with multiple filenames, so we can check that there's at least one, and
make create check separately that there aren't any extra.
* Fixes hostvar serialization issue (#12005)
* Fixes regression in include_vars from within a role (#9498), where
we had the precedence order for vars_cache (include_vars, set_fact)
incorrectly before role vars.
* Fixes another bug in which vars loaded from files in the format of
a list instead of dictionary would cause a failure.
Fixes #9498, #12005
Now we accept IPv6 addresses _with port numbers_ only in the standard
[xxx]:NN notation (though bare IPv6 addresses may be given, as before,
and non-IPv6 addresses may also be placed in square brackets), and any
other host identifiers (IPv4/hostname/host pattern) as before, with an
optional :NN suffix.
The new code parses INI-format inventory files in a single pass using a
well-documented state machine that reports precise errors and eliminates
the duplications and inconsistencies and outright errors in the earlier
three-phase parsing code (e.g. three ways to skip comments). It is also
much easier now to follow what decisions are being taken on the basis of
the parsed data. The comments point out various potential improvements,
particularly in the area of consistent IPv6 handling.
On the ornate marble tombstone of the old code, the following
inscription is one last baffling memento from a bygone age:
- def _before_comment(self, msg):
- ''' what's the part of a string before a comment? '''
- msg = msg.replace("\#","**NOT_A_COMMENT**")
- msg = msg.split("#")[0]
- msg = msg.replace("**NOT_A_COMMENT**","#")
- return msg
This change is similar to https://github.com/ansible/ansible/pull/10465.
It extends the logic there to also support none types. Right now if you have
a '!!null' in yaml, and that var gets passed around, it will get converted to
a string.
eg. defaults/main.yml
```
ENABLE_AWESOME_FEATURE: !!null # Yaml Null
OTHER_CONFIG:
  secret1: "so_secret"
  secret2: "even_more_secret"
CONFIG:
  hostname: "some_hostname"
  features:
    awesome_feature: "{{ ENABLE_AWESOME_FEATURE }}"
  secrets: "{{ OTHER_CONFIG }}"
```
If you output `CONFIG` to json or yaml, the feature flag would get represented in the output
as a string instead of as a null, but secrets would get represented as a dictionary. This is
a mis-match in behaviour where some "types" are retained and others are not. This change
should fix the issue.
I also updated the template test to test for this and made the changes to v2.
Added a changelog entry specifically for the change from empty string to null as the default.
Made the null representation configurable.
It still defaults to the python NoneType but can be overridden to be an empty string by updating
the DEFAULT_NULL_REPRESENTATION config.
first off, we add an oddly slow basic test of 10k item inventory
Before:
```
Ran 229 tests in 13.214s
OK
real 0m13.403s
user 0m12.106s
sys 0m1.155s
```
After:
```
Ran 230 tests in 21.328s
OK
real 0m21.516s
user 0m20.099s
sys 0m1.275s
```
since that seems like a bit too long for the test to add to the runtime, let's profile:
`python -m cProfile -s time ./bin/ansible all -i test/units/inventory_test_data/huge_range --list-hosts`
Before:
```
1272607 function calls (1259689 primitive calls) in 8.497 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
10000 4.393 0.000 4.396 0.000 __init__.py:395(_get_host)
20000 2.695 0.000 2.697 0.000 __init__.py:341(__append_host_to_results)
40369 0.113 0.000 0.113 0.000 {posix.lstat}
50006 0.102 0.000 0.153 0.000 __init__.py:1490(combine_vars)
40008 0.089 0.000 0.202 0.000 __init__.py:1546(_load_vars_from_path)
20195 0.088 0.000 0.088 0.000 {posix.stat}
10011 0.087 0.000 0.087 0.000 {posix.getcwd}
```
The top two lines are promising optimization targets
- populate Inventory's host cache more in _get_host, as we are looping
over all the groups anyway.
- eliminate the duplicate check of whether we've already included a host
in the construction around __append_host_to_results: the presence of a
host in the results list implies the presence of its name in the
hostnames set, so we only need to perform the less expensive of the two
checks (see the sketch after the profiles below)
After:
```
1252610 function calls (1239692 primitive calls) in 1.320 seconds
Ordered by: internal time
ncalls tottime percall cumtime percall filename:lineno(function)
40369 0.105 0.000 0.105 0.000 {posix.lstat}
50006 0.094 0.000 0.141 0.000 __init__.py:1490(combine_vars)
40008 0.081 0.000 0.184 0.000 __init__.py:1546(_load_vars_from_path)
10011 0.080 0.000 0.080 0.000 {posix.getcwd}
20195 0.074 0.000 0.074 0.000 {posix.stat}
10002 0.069 0.000 0.261 0.000 __init__.py:1517(load_vars)
```
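A sketch of that second change (illustrative, not the exact code): the cheap
set lookup is now the only check performed before appending.
```
def append_host(results, hostnames, host):
    # presence of host in results implies host.name in hostnames,
    # so the set membership test alone is enough
    if host.name not in hostnames:
        hostnames.add(host.name)
        results.append(host)   # no separate O(n) 'host in results' scan
```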