Compare commits


372 commits

Author SHA1 Message Date
Johannes Truschnigg b377301195 Fix sizes reported for devices with phys. bs != 512b (#15521)
The `setup` module reports incorrectly computed disk and partition size facts on (as far as I can tell) all Linux kernels for block devices that report a physical block size other than 512b.

This happens because `facts.py` incorrectly assumes that sysfs reports a device's block count in units of that device's physical sector size. The kernel, however, always reports and exports sector counts in units of 512b sectors, even if the device's hardware interface cannot address individual blocks this small. The result is inflated capacity figures for things like some SSD models and 4Kn HDDs that report a hardware sector size greater than 512b.
2016-12-02 12:30:40 -05:00
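A minimal sketch of the corrected computation (the helper name is illustrative, not Ansible's actual facts.py code): sysfs's `size` attribute is always in 512-byte units, so capacity must never be scaled by the hardware sector size.

```python
SECTOR_SIZE = 512  # sysfs 'size' is always in 512-byte sectors, regardless of hardware

def device_capacity_bytes(sectors, physical_block_size):
    # The buggy code multiplied by physical_block_size (e.g. 4096 on 4Kn
    # drives), inflating the reported capacity 8x; the kernel's unit is
    # fixed at 512 bytes, so physical_block_size is irrelevant here.
    return sectors * SECTOR_SIZE

# a 4Kn disk exporting 7814037168 sectors is ~4 TB, not ~32 TB
```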
Toshio Kuratomi 819c51cd80 Add apt_key fix to changelog 2016-10-24 15:41:11 -07:00
Toshio Kuratomi 345f940fc4 Update core submodule ref to pick up apt_key fix 2016-10-24 15:39:04 -07:00
Jim Ladd b2f117fafc Increase local version for unofficial rpms (#17044) 2016-08-11 17:36:26 -07:00
Matt Clay 763b73389a Remove coveralls from .travis.yml. 2016-06-30 20:48:19 -07:00
Toshio Kuratomi 5ed3f9404f Fix a unicode problem when parsing playbooks (#16507)
Fixes #16373
2016-06-30 12:09:20 -04:00
James Laska da937586fb Allow specifying an alternative RPMDIST 2016-05-04 20:10:20 -04:00
Toshio Kuratomi a3a5c484df One more Makefile change 2016-05-04 13:27:07 -07:00
Toshio Kuratomi 0dfb3ea0d5 One more fix 2016-05-04 11:37:54 -07:00
Toshio Kuratomi 7ffc44522f Push another fix to the Makefile for building 1.9 rpm as ansible1.9 2016-05-04 11:13:51 -07:00
Toshio Kuratomi 8394c02781 Fix a bug in the spec file change 2016-05-04 10:21:55 -07:00
Toshio Kuratomi 71524dba13 On stable-1.9 branch, build ansible1.9 package instead of ansible. 2016-05-04 10:19:16 -07:00
Toshio Kuratomi 10a38a7652 Update extras submodule ref to pick up lxc_container fix 2016-04-20 15:01:45 -07:00
Toshio Kuratomi 03d0e8b6b5 Add lxc_container fix to changelog 2016-04-20 15:01:38 -07:00
Brian Coca b31e29f124 fixed boto import as per #11755 2016-04-18 10:39:27 -04:00
James Cammarata 7644312b20 New release v1.9.6-1 2016-04-15 14:51:48 -04:00
James Cammarata e7c4ea4d1c New release v1.9.6-0.1.rc1 2016-04-06 15:20:52 -04:00
James Cammarata 3509e9cdd4 Don't fall back to using the role's name from the spec unless 'role' is missing
Fixes #15104
2016-04-04 14:02:05 -04:00
Toshio Kuratomi 331f6ba52e Add lxc_container fix to CHANGELOG 2016-04-02 01:41:53 -07:00
Toshio Kuratomi d46d49d43c Update extras submodule ref for lxc_container fix 2016-04-02 01:40:09 -07:00
Toshio Kuratomi 02ec04616a Add fix for inventory vars loading (15093) to the Changelog 2016-03-23 13:59:26 -07:00
Toshio Kuratomi 507b9032f0 Merge pull request #15130 from ansible/fix-inventory-vars-with-limit
Limit should not affect the calculation of host variables as the variables may be referenced by another host that is not limited.
2016-03-23 13:55:48 -07:00
Toshio Kuratomi f25e4eea67 Limit should not affect the calculation of host variables as the variables may be referenced by another host that is not limited.
Fixes #13556
Fixes #13557
Fixes #12174
2016-03-23 13:40:09 -07:00
Toshio Kuratomi 81c481739d Revert "Added workaround for inventory directories"
This reverts commit e856ac2320.

That commit was intended to fix --limit not honoring the playbook
directory as a source of inventory variable information.  However, the
commit changes the inventory basedir to where it thinks the playbook
basedir is, which breaks finding inventory variables inside of inventory
directories (#15093).  Reverting this and looking for where limit might be
affecting the playbook basedir rather than the inventory basedir.
2016-03-23 12:36:38 -07:00
James Cammarata cdfbc4243a New release v1.9.5-1 2016-03-21 18:55:51 -04:00
Toshio Kuratomi 050c1b46b8 Change url so that we don't test https in the tests for file perms 2016-03-20 08:50:55 -07:00
Toshio Kuratomi a420b4952f Update submodule ref 2016-03-20 08:07:27 -07:00
Toshio Kuratomi 653f165028 Add integration test for #11821 2016-03-20 08:01:48 -07:00
Toshio Kuratomi 764f44fedb Document the issue with modules being created world-readable on the client in certain circumstances 2016-03-16 11:34:43 -07:00
James Cammarata ee2c442486 New release v1.9.5-0.1.rc1 2016-03-10 15:06:20 -05:00
nitzmahone 94e3ab5445 added winrm and user module backports to changelog 2016-03-07 20:33:10 -08:00
nitzmahone 797b2e8b3b update core submodule ref for user module fixes 2016-03-07 20:21:18 -08:00
Toshio Kuratomi 8785f9fc3c Merge pull request #14652 from alexandrem/fix_role_vars_precedence_interpolation
Fix bug where extra vars highest precedence is violated when used ins…
2016-03-07 13:44:29 -08:00
Brian Coca 46ae226607 avoid running assemble on check mode
fixes #14175
2016-03-07 16:01:38 -05:00
Toshio Kuratomi 26eb9a8bb9 Update core submodule for pip fix 2016-03-07 11:42:47 -08:00
Toshio Kuratomi ae36d35595 Add pip fix to changelog 2016-03-07 11:42:04 -08:00
Brian Coca c3b874755f avoid extra info being passed into mode
only permission info is valid
2016-03-04 14:48:57 -05:00
Toshio Kuratomi 8494d3867c Merge pull request #13556 from xytis/inventory_dir
fix loading host/group vars when inventory is directory and using --limit
2016-03-04 11:03:18 -08:00
Toshio Kuratomi b11f2a0267 Merge pull request #13802 from TimJones/tj-fix-issue-13800
Correctly parse dependency YAML dict
2016-03-04 10:58:04 -08:00
Brian Coca 402e375698 Merge pull request #14619 from dagwieers/patch-10
Check for closing sequence before templating
2016-03-04 13:32:17 -05:00
Brian Coca d2bd6604b0 Merge pull request #14713 from chouseknecht/galaxy1.9_paging
Fix bug 14715: Galaxy CLI paging error
2016-02-29 22:43:47 -05:00
chouseknecht 41d6531fe7 Fix bug 14715: Galaxy CLI paging error 2016-02-29 21:17:50 -05:00
Alexandre Mclean 0358d473ba Fix bug where extra vars highest precedence is violated when used inside an interpolation within another variable
Extra vars lose their precedence when they overwrite a variable inside another variable interpolation structure.

Fixes #10896
2016-02-24 22:54:23 -05:00
Brian Coca ca98e74251 Merge pull request #14622 from dagwieers/remove-v2-references
Remove references to v2 codebase
2016-02-23 09:36:24 -05:00
Dag Wieers bfff091a9e Remove references to v2 codebase 2016-02-23 14:15:32 +01:00
Dag Wieers edf3164bc7 Check for closing sequence for templating (Ansible v1.9)
This fixes #14573 for Ansible v1.9.
2016-02-23 11:42:48 +01:00
James Cammarata a05df837aa Merge pull request #14565 from dagwieers/fix-role_params-merge_hash
Template role_params to avoid merging dict and unicode
2016-02-22 11:11:32 -05:00
Dag Wieers ccbc849b20 Merge branch 'stable-1.9' of github.com:ansible/ansible into fix-role_params-merge_hash
Implement new fix from @jimi-c
2016-02-22 17:08:42 +01:00
James Cammarata f36896b2bb Merge pull request #14562 from dagwieers/combine_vars_backport
Backport combine_vars() logic from Ansible v2.0
2016-02-22 10:39:09 -05:00
Brian Coca 1e0cf69b1c Merge pull request #14559 from dagwieers/merge_hash
Improve efficiency of merge_hash (Ansible v1.9)
2016-02-20 12:39:23 -05:00
Dag Wieers fb442206ca Template role_params to avoid merging dict and unicode
This fixes #12915

Since combine_vars() is being run directly on role_params, we have to keep merge_hash() from complaining about merging a dict with a string (a jinja template).
2016-02-19 01:57:23 +01:00
Dag Wieers aeaddc5559 Backport combine_vars() logic from Ansible v2.0
While debugging I noticed that _validate_both_dicts() was evaluated twice through combine_vars() when merging hashes. In v2.0 the logic was improved to avoid the duplicate _validate_both_dicts() call in this case.

I also backported the 'update' behaviour as it looks more pythonic.
2016-02-18 16:48:15 +01:00
Dag Wieers a935d61489 Improve efficiency of merge_hash
This commit improves 2 things:

- It makes merging empty dicts, or equal dicts faster
- It makes merging dicts faster (backported from v2.0)

I noticed while debugging merge_hash that a lot of merges involved empty dictionaries, and sometimes identical dictionaries.
2016-02-18 16:03:54 +01:00
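The two optimizations above can be sketched as early-exit fast paths in a recursive merge. This is a hedged reconstruction of the idea, not the exact 1.9 code:

```python
def merge_hash(a, b):
    # fast paths (the backported optimization): an empty or identical
    # left-hand dict means the result is just a copy of b
    if a == {} or a == b:
        return b.copy()
    result = a.copy()
    for key, value in b.items():
        # recurse only when both sides hold a dict; otherwise b wins
        if key in result and isinstance(result[key], dict) and isinstance(value, dict):
            result[key] = merge_hash(result[key], value)
        else:
            result[key] = value
    return result
```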
Brian Coca 92f387f681 backport fix for #12062 csvfile plugin strings 2016-02-12 17:30:55 -05:00
Brian Coca 4a043d5e82 switched threading to multiprocessing
both really work the same for the Lock, but this hopefully will
avoid confusing people into thinking we are threaded or thread safe.
Also did pyflakes cleanup and made a note of why the checksums import exists
2016-02-11 22:19:41 -05:00
Brian Coca d99955596e Merge pull request #13992 from electrofelix/accelerate-race
Fix race in accelerate connection plugin
2016-02-11 22:00:38 -05:00
Brian Coca f48fef67bf Merge pull request #14253 from dagwieers/allow-key-auth-when-ask-pass
Allow key authentication when using `--ask-pass` (just like Ansible v2)
2016-02-11 01:45:13 -05:00
Brian Coca 051f4e5d3e Merge pull request #13697 from mvdbeek/stable-1.9
Set executable to None, fixes issue #13696
2016-02-11 01:43:10 -05:00
Toshio Kuratomi f1db951a74 Merge pull request #14402 from ansible/no-log-diff-before-fix
Fix hiding of original value of files in diff output with no_log
2016-02-09 21:00:45 -08:00
Toshio Kuratomi 504c0e6201 Fix hiding of original value of files in diff output with no_log 2016-02-09 17:43:01 -08:00
Toshio Kuratomi 0cf0efa280 Add apt locale fix to changelog 2016-02-07 14:29:54 -08:00
Toshio Kuratomi 95b1f8b49b Update submodule refs to pick up apt locale fix 2016-02-07 14:29:29 -08:00
Toshio Kuratomi 28df3a0793 Add locale fixes to changelog 2016-02-07 13:11:46 -08:00
Toshio Kuratomi cbcfa2df5e Update submodule ref for git locale fix 2016-02-07 13:10:36 -08:00
Toshio Kuratomi 43fdc6aee3 Allow setting run_command environment overrides for the life of an AnsibleModule 2016-02-07 13:07:28 -08:00
Toshio Kuratomi f1033f2194 rework run_command's env setting to not change os.environ for the rest of the module.
New param to run_command to modify the environment for just this invocation.
Documentation and comment adjustments.
2016-02-07 13:05:16 -08:00
Toshio Kuratomi 3d7efff30c Update submodule refs 2016-02-05 10:33:56 -08:00
Toshio Kuratomi dcce51853a Add changelog entry for no_log change 2016-02-05 09:50:26 -08:00
Toshio Kuratomi 42064446c4 Merge pull request #14339 from ansible/diff-no_log-fix
Fix --diff to respect no_log task parameter.
2016-02-05 09:39:04 -08:00
Toshio Kuratomi 0bcbcb20b0 Fix --diff to respect no_log task parameter. 2016-02-05 08:59:50 -08:00
James Cammarata f0c1058b7b Merge pull request #14336 from dagwieers/fix-eval-json-booleans-1.9
Defined JSON booleans in global context for python eval()
2016-02-05 11:36:00 -05:00
Dag Wieers b6e6c52b12 Defined JSON booleans in global context for python eval()
We define 'false', 'true' and 'null' as variables so that python eval() recognizes them as False, True and None.

This is a backport of a fix from 2.0.0.2 which also affects 1.9.4 (See issue #14291 and PR #14293)

This fixes #14291 for 1.9.4.
2016-02-05 17:24:48 +01:00
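The eval() trick described above can be shown with a small sketch (the helper name is illustrative): the JSON literal names are supplied as variables so eval() maps them to Python values instead of raising NameError.

```python
# Map JSON literal names to Python values so eval() of module output like
# "{'changed': false, 'rc': null}" resolves them instead of raising NameError.
JSON_LITERALS = {"true": True, "false": False, "null": None}

def parse_module_output(data):
    # restricted globals plus the literal names as locals
    return eval(data, {"__builtins__": {}}, JSON_LITERALS)
```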
Toshio Kuratomi ab0904d051 Merge pull request #14305 from dagwieers/patch-7
Double import sys, removed one
2016-02-04 07:02:19 -08:00
Toshio Kuratomi 5db11fae6e Merge pull request #14306 from dagwieers/patch-8
Double import tempfile, remove one
2016-02-04 07:01:19 -08:00
Dag Wieers 63b49a0025 Double import tempfile, remove one
tempfile was imported twice.
2016-02-04 14:16:18 +01:00
Dag Wieers 982bd28b34 Double import sys, removed one
sys was imported twice.
2016-02-04 14:11:29 +01:00
Dag Wieers 009164227e Allow key authentication when using --ask-pass (similar to Ansible v2)
This closes #14250.

It should not have any ill-effects for existing use-cases as we would only allow additional authentication methods on top of password authentication. And since the user can authenticate in other ways already, it also has no security impact.
2016-02-02 10:05:08 +01:00
Toshio Kuratomi 8e2c5337f5 Merge pull request #13808 from chouseknecht/chouse
Added --ignore-certs option to ansible-galaxy init, install and info …
2016-01-20 10:00:07 -08:00
chouseknecht bb993f3aea Fix typo. 2016-01-20 12:55:35 -05:00
chouseknecht 007d05c4a1 Added note to 1.9.5 changelog regarding --ignore-certs, c option being added to allow work-around when behind a proxy server. 2016-01-20 12:50:58 -05:00
Darragh Bailey 1200a70879 Fix race in daemon initialize using delegate_to
Ensure only one thread can start up an accelerate daemon on a target
host where multiple hosts may be specified in the play, gather facts is
disabled and the first task delegates to the same target host.

This will slow the initial connection down to a single thread setting
up a connection at a time; however, the overall impact should be
negligible.
2016-01-19 12:44:23 +00:00
Darragh Bailey 3d4dc206a1 Prevent race in key setup for accelerate daemon
Ensure that initial setup in creating the key directory for ansible
accelerate mode keys, and generation/storage of the key for a
particular host are completed in a thread safe manner.

Creating directories/files and then assigning permissions and contents
to them means that paths may exist, and satisfy python's
os.path.exists() check for other threads, before they are usable.

Use a combination of locking around operations with use of unique named
files and an OS file system move to ensure that the conditions of
checking whether a file or directory exists, where it is potentially
created by another thread, will only succeed when the file has both the
correct contents and permissions.

Fixes #13850
2016-01-19 11:59:43 +00:00
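The pattern described above — unique temp file plus an atomic rename under a lock — can be sketched as follows. This is an illustrative reconstruction, not the accelerate plugin's actual code:

```python
import os
import tempfile
import threading

_key_lock = threading.Lock()

def write_key_atomically(key_dir, key_file, key_bytes):
    # Write to a uniquely named temp file, set permissions and contents,
    # then rename into place. Other threads checking os.path.exists() only
    # ever observe a complete, correctly-permissioned file.
    with _key_lock:
        if os.path.exists(key_file):
            return
        os.makedirs(key_dir, mode=0o700, exist_ok=True)
        fd, tmp = tempfile.mkstemp(dir=key_dir)
        try:
            os.write(fd, key_bytes)
        finally:
            os.close(fd)
        os.chmod(tmp, 0o600)
        os.rename(tmp, key_file)  # atomic on POSIX within one filesystem
```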
chouseknecht f2a32566d5 Added --ignore-certs option to ansible-galaxy init, install and info commands. 2016-01-11 18:15:24 -05:00
Tim Jones 1254b4391b Correctly parse dependency YAML dict 2016-01-11 16:50:38 +01:00
Marius van den Beek cfd509d32e Set executable to None, fixes issue #13696 2015-12-30 14:32:10 +01:00
Brian Coca b24daecd6e minor fix to become docs 2015-12-28 10:26:22 -05:00
Stephen Medina 7a91b05e84 clarify idempotence explanation
Small typo; wasn't sure what to replace it with.
2015-12-28 10:25:09 -05:00
Brian Coca 6e9f622856 updated release cycle to 4 months instead of 2 2015-12-27 14:17:20 -05:00
Branko Majic 9a856b04ea Adding documentation for the 'dig' lookup (#13126). 2015-12-21 13:49:56 -05:00
Brian Coca ad47725713 allow for packaging to be in release tarball 2015-12-17 12:40:52 -05:00
Vytis Valentinavičius e856ac2320 Added workaround for inventory directories 2015-12-15 15:50:58 +02:00
nitzmahone 54d4225e23 backport ansible_winrm_* kwarg support
fixes #13508
2015-12-14 17:08:56 -08:00
Brian Coca 64148a84ca Merge pull request #13454 from qduxiaoliang/issue
quit plays with an error if there were failed tasks and handler execu…
2015-12-13 00:18:30 -05:00
Toshio Kuratomi 13a6f03082 Update core submodule ref 2015-12-10 08:09:51 -08:00
Leon Xie 0da8c8bdd5 quit plays with an error if there were failed tasks and handler execution is forced 2015-12-07 16:48:11 +08:00
Brian Coca ab18fd2171 added , back to inventory splitting 2015-12-05 01:07:12 -05:00
Toshio Kuratomi 5eeb4ef2b6 Update submodule refs 2015-12-04 10:02:27 -08:00
Toshio Kuratomi 85528e76b4 Note the fix to literal_eval in the changelog 2015-11-30 12:41:42 -08:00
Toshio Kuratomi f59bd76972 Call the function :-)
Fixes #13330

Conflicts:
	lib/ansible/module_utils/basic.py
2015-11-30 12:40:15 -08:00
Brian Coca 309996504f actually writes info now
- also minor updates to error messages to be more informative
2015-11-22 08:39:02 -08:00
Brian Coca 2d914d4b1e Merge pull request #13171 from kwaaioak/stable-1.9-10914
backport fix for ignoring errors produced by non posix file systems
2015-11-14 10:14:08 -08:00
Brian Coca 08395047af fixed success to also not include skipped, backport from 2.0 2015-11-14 14:41:06 -08:00
Brian Coca 09dde923d2 added missing : 2015-11-14 09:16:06 -08:00
Brian Coca 8698cf29a8 hack to prevent template/copy errors on vagrant synced folders that incorrectly report errno 26
fixes #9526
2015-11-14 09:15:41 -08:00
James Cammarata ce77d2fa76 Merge pull request #13149 from unprofession-al/backport_fix_jsonfile_fact_caching_connection
Backport of the jsonfile cache fix for filepath substitution
2015-11-13 08:45:17 -05:00
Daniel Menet 6e04cae21c substitute tilde and env vars before storing C.CACHE_PLUGIN_CONNECTION as instance attribute 2015-11-13 09:56:01 +01:00
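The jsonfile cache fix above amounts to expanding `~` and environment variables once, before the path is stored. A minimal sketch (helper name is illustrative):

```python
import os

def resolve_cache_connection(raw):
    # expand '~' and $VARS up front so the path stored on the cache
    # instance is already fully substituted
    return os.path.expandvars(os.path.expanduser(raw))
```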
Toshio Kuratomi c1d9649c13 Note plugin loader fix in changelog 2015-11-03 07:41:18 -08:00
Toshio Kuratomi baa16b6fdf list => tuple 2015-11-03 07:41:02 -08:00
Toshio Kuratomi d8a6659d73 Merge pull request #13015 from j0057/fix-powershell-shebang-not-found
Make sure potential_names is not dependent on hashing order
2015-11-03 07:36:56 -08:00
Joost Molenaar 7c674fc19c Make sure potential_names is not dependent on hashing order 2015-11-03 15:17:35 +01:00
Toshio Kuratomi 62be954577 Second part of the script not honoring complex-args fix
I could have sworn I already committed this but it's not there so
recreating it.
2015-10-27 22:11:04 -07:00
Toshio Kuratomi 8b5588f98a Note the script and raw yaml dict fix 2015-10-27 12:24:14 -07:00
Toshio Kuratomi f8cac24cb2 Use complex_args as well as k=v args in script and raw 2015-10-27 10:16:57 -07:00
Toshio Kuratomi e8cc63aba5 Add fix for ini_file module and empty string 2015-10-26 13:08:34 -07:00
Brian Coca 8b644af1d8 Merge pull request #11638 from bcoca/fix_delegate_to_badtype
capture error when someone puts a list or some other complex type in
2015-10-26 13:23:58 -04:00
Toshio Kuratomi 4472889632 Use to_bytes instead of encode() to avoid traceback 2015-10-21 07:58:54 -07:00
Toshio Kuratomi bfe743c38a Fix leftover debugging statement 2015-10-19 10:22:02 -07:00
Toshio Kuratomi aa35154bc5 Fix uri module not handling all binary files
Fixes #2088
2015-10-19 10:17:29 -07:00
Toshio Kuratomi e3d7c470f9 Fix crypttab bug 2015-10-14 07:50:44 -07:00
Toshio Kuratomi fffdf5fb46 docker module fix 2015-10-14 07:34:49 -07:00
James Cammarata 5af1cda7c9 Version bump for release 1.9.4-1 2015-10-09 15:45:58 -04:00
Toshio Kuratomi 027dd6e67b Note the second fix to yum module (state=latest and name is a wildcard) 2015-10-08 09:59:09 -07:00
Toshio Kuratomi d388bd97cc Update submodule refs 2015-10-08 09:58:27 -07:00
James Cammarata e40f0fd66a Version bump for release candidate 1.9.4 rc3 2015-10-02 10:42:33 -04:00
James Cammarata a07ca04a75 Merge pull request #12571 from w1r0x/feat-git-ansible-pull-options
Fixes #12309
2015-10-01 10:46:45 -04:00
w1r0x bf2d1832aa Fixes #12309 2015-09-30 15:23:48 +03:00
James Cammarata af0fa26308 Merge pull request #12566 from cchurch/winrm_put_empty_file_19
Enable winrm put_file to upload an empty file.
2015-09-30 08:08:57 -04:00
Chris Church e62ca77aeb Enable winrm put_file to upload an empty file. 2015-09-29 19:52:49 -04:00
Toshio Kuratomi 7fbaf3aa4a Fixes #12488 2015-09-23 14:13:46 -07:00
Toshio Kuratomi a0fd450e64 Switch ansible-galaxy and the hipchat callback plugin to use open_url 2015-09-17 04:54:29 -07:00
James Cammarata 5d67420df0 Merge pull request #12398 from donckers/stable-1.9
Stable 1.9 - Backport fix for #6653
2015-09-16 16:43:52 -04:00
Daniel Donckers 57389d55b1 Removing unnecessary import 2015-09-16 10:31:54 -06:00
Daniel Donckers ed5ac932a5 Backport fix for properly use local variables from templates including other templates to 1.9
Fixes #6653
2015-09-15 07:54:55 -06:00
James Cammarata d2d3162a8b Also updating submodules for stable-1.9 branch 2015-09-11 18:15:20 -04:00
James Cammarata 72bf509729 Adding one more item to the CHANGELOG (vars_prompt private default fix) 2015-09-11 18:13:58 -04:00
James Cammarata 35686027d5 Revert "Fix order of loading of modules."
This reverts commit c0f416b510.
2015-09-11 18:12:54 -04:00
James Cammarata 08ac7224be Version bump for 1.9.4-rc2 2015-09-11 18:12:10 -04:00
Toshio Kuratomi c0f416b510 Fix order of loading of modules.
Allows ANSIBLE_LIBRARY to overload core modules even if the module in
ANSIBLE_LIBRARY doesn't have a .py extension.

Equivalent of devel commit: 4b895f04e3
2015-09-09 15:03:49 -07:00
Brian Coca d3dca5606c corrected private default when vars_prompt is dict 2015-09-09 15:38:32 -04:00
James Cammarata e5d6d1be44 Version bump for new release candidate 1.9.4-rc1 2015-09-04 17:07:35 -04:00
Toshio Kuratomi bf2a996547 Add changelog for yum bugfix 2015-09-04 13:16:13 -07:00
Toshio Kuratomi c9ddd7a3d8 Update core module ref to pick up fix for yum
Fixes #2013
2015-09-04 08:53:54 -07:00
James Cammarata d444ab507e Version bump for release 1.9.3-1 2015-09-03 18:26:11 -04:00
Nick Irvine 066b7079ef Clean non-printable chars from stdout instead of dropping the whole thing 2015-09-03 12:22:45 -04:00
Toshio Kuratomi f80494e434 Avoid a traceback when turning an exception into a message 2015-09-02 21:57:11 -07:00
Brian Coca 4956a33130 removed obsolete v2 tree 2015-09-01 07:13:05 -04:00
James Cammarata f006a743b6 Version bump for release candidate 1.9.3 rc3 2015-08-24 10:48:49 -04:00
James Cammarata 176b5bc502 Version bump for 1.9.3-rc2 2015-08-20 16:35:03 -04:00
Toshio Kuratomi 73383ff649 Add yum change to the CHANGELOG 2015-08-20 13:24:57 -07:00
Toshio Kuratomi 446b8cede2 Update core modules ref 2015-08-20 13:16:16 -07:00
Toshio Kuratomi 836a6e7e66 Update core module ref to pick up latest docker fix 2015-08-20 09:57:47 -07:00
Toshio Kuratomi a4a95eb7ac Add CVE-2015-6240 id for the zone/chroot security bug. 2015-08-18 09:47:43 -07:00
James Cammarata 0034a101cb Fix tagging on explict meta tasks
Fixes #11025
2015-08-14 19:29:43 -04:00
Brian Coca 152096c85c Merge pull request #11946 from nitzmahone/no_log_censor
prevent local logging of module args under -vv when no_log specified
2015-08-12 15:32:58 -04:00
nitzmahone c7fc812c6b prevent local logging of module args under -vv when no_log specified 2015-08-12 11:15:39 -07:00
James Cammarata bf5353767e Version bump for 1.9.3-rc1 2015-08-11 13:28:41 -04:00
Brian Coca 5d9b0f16a6 Merge pull request #11890 from abarnas/stable-1.9
Fix powershell splatting leaving 'ExecutionPolicy Unrestricted' intact
2015-08-07 10:41:56 -04:00
Brian Coca 2b909b4eb5 Merge pull request #11888 from zfil/fix-patch-plugin
patch runner action plugin in ansible 1.9.x is broken with remote source patch file
2015-08-07 10:22:05 -04:00
Ard-Jan Barnas 4f0adfb865 Fix powershell splatting leaving 'ExecutionPolicy Unrestricted' intact
Changed powershell.py to fix powershell splatting. To make sure
the ExecutionPolicy stays working, added 'ExecutionPolicy Unrestricted'
to _common-args.

This restores support for: myscript.ps1 @{'Key'='Value';'Another'='Value'}
2015-08-07 09:50:04 -04:00
Philippe Jandot 74bf670414 fix remote source patch file 2015-08-07 15:12:23 +02:00
Toshio Kuratomi 6e43065142 Changelog entry for docker fix 2015-08-06 11:55:42 -07:00
Toshio Kuratomi d8e9d78c2f Update core modules to pull in docker fix 2015-08-06 09:54:01 -07:00
Toshio Kuratomi 773a3becf5 Update submodule refs for apt_repository fix 2015-07-31 07:35:12 -07:00
Toshio Kuratomi 6c21f3c0fd Update submodules 2015-07-30 18:10:24 -07:00
Toshio Kuratomi 6dfa93befd Add docker module fix 2015-07-29 12:39:33 -07:00
Toshio Kuratomi 2714080a78 Update changelog with list of modules we made tls fixes for 2015-07-28 11:36:42 -07:00
Brian Coca f893d2e0e6 Merge pull request #11522 from DazWorrall/patch-2
Add complex_args to logging callback data
2015-07-28 12:36:24 -04:00
Brian Coca 1026e06e31 applied fix from #11034 by @ubergeek42 2015-07-25 11:38:08 -04:00
Toshio Kuratomi 2afb7f717d Guard the PROTOCOL setting so that we work on older pythons 2015-07-24 15:08:38 -07:00
Toshio Kuratomi f074c6ad35 Update extras submodule ref 2015-07-23 07:33:39 -07:00
James Cammarata 6c3e8f214a Port of d412bc7 to stable-1.9 2015-07-22 16:19:18 -04:00
Toshio Kuratomi 742c6a1ffb Update module pointers to get the latest in the certificate fixes 2015-07-22 07:22:17 -07:00
Toshio Kuratomi cbc7301a76 Start list of extras modules with certificate checking added 2015-07-22 07:20:47 -07:00
Toshio Kuratomi 29d5271c1f Note certificate fix for ec2_ami_search 2015-07-21 13:51:42 -07:00
Toshio Kuratomi 604fbbb4a5 Pull in ec2_ami_search ssl fix 2015-07-21 13:51:17 -07:00
Toshio Kuratomi 01d2663687 update submodule refs for stable branch 2015-07-21 12:49:34 -07:00
Toshio Kuratomi 122c53ba38 Detect the old python-json library
Fixes #11654
2015-07-20 12:38:57 -07:00
Toshio Kuratomi c86e55dd02 Changelog for SNI and tls fixes 2015-07-20 09:52:15 -07:00
Toshio Kuratomi 990350a0fd Have openssl autonegotiate tls protocol on python < 2.7.9
This allows usage of tls-1.1 and tls-1.2 if the underlying openssl
library supports it.  Unfortunately it also allows sslv2 and sslv3 if
the server is only configured to support those.  In this day and age,
that's probably something that the server administrator should fix
anyhow.
2015-07-20 09:47:04 -07:00
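The protocol guard and autonegotiation described above can be sketched in one line. This mirrors the 1.9-era logic under today's `ssl` module, as an assumption, not the exact original code:

```python
import ssl

# Prefer an explicit TLS protocol constant when this Python's ssl module
# provides one; otherwise fall back to SSLv23, which autonegotiates the best
# protocol both sides support (TLS 1.1/1.2 with a modern OpenSSL) -- at the
# cost of also permitting SSLv2/v3 against badly configured servers.
PROTOCOL = getattr(ssl, "PROTOCOL_TLSv1", ssl.PROTOCOL_SSLv23)
```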
Toshio Kuratomi a4691991ad Add support for SNI and TLS-1.1 and TLS-1.2 to the fetch_url() helper
Fixes #1716
Fixes #1695

Conflicts:
	test/integration/roles/test_uri/tasks/main.yml
2015-07-20 09:40:30 -07:00
Brian Coca 731dbeb712 capture error when someone puts a list or some other complex type in
delegate_to
2015-07-17 23:02:29 -04:00
Toshio Kuratomi 5d1fb380fc Add entry for yum incompatibility 2015-07-09 09:55:33 -07:00
Darren Worrall 66b92df568 Add complex_args to logging callback data
Callback plugins aren't given any complex module arguments on task invocation; this fixes that.
2015-07-08 08:06:24 +01:00
Brian Coca def94da14a reversed cache check condition to actually work
fixes #11505
2015-07-07 08:56:00 -04:00
Toshio Kuratomi 8b3875f286 Test unquote works as expected and fix two bugs:
* escaped end quote
* a single quote character
2015-07-06 13:17:28 -07:00
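The two unquote bugs above — a lone quote character and an escaped end quote — can be sketched like this. A hedged reconstruction of the fixed behaviour, not Ansible's exact parser:

```python
def unquote(data):
    # Strip surrounding quotes only when they match, the string is longer
    # than one character (a lone quote stays as-is), and the closing quote
    # is not escaped with a backslash.
    if len(data) > 1 and data[0] == data[-1] and data[0] in ('"', "'"):
        if data[-2] != "\\":
            return data[1:-1]
    return data
```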
Brian Coca ec4ce7821e removed quotes that actually break detection 2015-07-06 15:44:16 -04:00
Brian Coca 01d9af26e0 pbrun not forced to use local daemon anymore 2015-07-05 15:50:16 -04:00
verm666 15ad02102e facts: add aliases to ansible_all_ipv4_addresses on OpenBSD 2015-07-03 13:58:32 -04:00
Brian Coca 5e78c5c672 updated submodule refs 2015-07-03 13:37:20 -04:00
Brian Coca e2c4cc70d6 added tests for sequence with a count of 1
now checks for the stride being 0 so it does not skip counts of 1, but still skips counts of 0
fixes #11422
2015-07-03 13:34:03 -04:00
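The stride fix above can be sketched as a small sequence generator: start == end is a count of one and must yield a single value rather than being skipped. This is an illustrative sketch, not the lookup plugin's actual code:

```python
def generate_sequence(start, end):
    # start == end means a count of one: emit the single value instead of
    # treating a zero stride as "nothing to do"
    if start == end:
        return [start]
    stride = 1 if end > start else -1  # descending sequences also supported
    return list(range(start, end + stride, stride))
```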
James Cammarata 676c686994 Updating CHANGELOG 2015-07-02 15:08:17 -04:00
James Cammarata 051d04c8d7 Fix bug related to keyczar messing up encodings
Also increases default AES key size to 256 for accelerated keys.
2015-07-02 15:05:28 -04:00
Toshio Kuratomi c15a6cc634 Convert whole string to unicode to fix UnicodeError
Fixes #11472
2015-07-02 10:23:57 -07:00
Toshio Kuratomi 62a1efa0c6 Fix traceback in on_unreachable
Fixes #10960
2015-06-29 08:33:45 -07:00
Toshio Kuratomi bd7e7c59a3 Pull in docs fix so that ansible-doc -l functions 2015-06-29 05:55:52 -07:00
Toshio Kuratomi db5ea10ed2 Merge pull request #11391 from emonty/backport/openstack
Backport openstack module_utils from devel
2015-06-26 08:34:26 -07:00
Toshio Kuratomi f974ff6972 Merge pull request #11390 from j2sol/pin-to-stable
Pin modules to the matching stable-1.9 branch
2015-06-25 11:39:15 -07:00
Monty Taylor 59cc4ead3d Backport openstack module_utils from devel
There were no modules in 1.9 that used these functions, and there are
people who would like to use devel modules with 1.9 ansible.
2015-06-25 14:28:28 -04:00
Jesse Keating 3e5add9ea1 Pin modules to the matching stable-1.9 branch
This is to keep module changes on devel from breaking when ran on
stable-1.9 Ansible.
2015-06-25 11:24:06 -07:00
Toshio Kuratomi aeb194d4ff Vendorize match_hostname code so that ansible can push it out to clients along with the code that uses it. 2015-06-25 08:20:15 -07:00
James Cammarata 705ab6c1e2 Version bump for 1.9.2-1 release 2015-06-24 13:35:44 -04:00
Toshio Kuratomi 8fe06a3bbd Add changelog for the chroot/jail/zone fixes 2015-06-24 09:02:50 -07:00
Toshio Kuratomi 30ff9270ac Use BUFSIZE when putting file as well as fetching file. 2015-06-24 08:54:06 -07:00
Toshio Kuratomi 784fb8ff8e Fix exec_command to not use a shell 2015-06-24 08:54:00 -07:00
Toshio Kuratomi 480ad7413a Fix fetch_file() method 2015-06-24 08:53:54 -07:00
Toshio Kuratomi 8564dbaf95 Fix problem with chroot connection plugin and symlinks from within the chroot.
Manually apply changes from 952166f48e
because git won't cherry-pick successfully
2015-06-24 08:52:46 -07:00
Toshio Kuratomi 0845151c9c Better error messages when the file to be transferred does not exist. 2015-06-24 08:51:55 -07:00
Toshio Kuratomi 0056f025c3 Bump the BUFSIZE to 64k for better performance 2015-06-24 08:51:45 -07:00
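The BUFSIZE changes above (used on both the put_file and fetch_file paths) boil down to streaming files through a fixed 64k buffer instead of tiny reads. An illustrative sketch:

```python
BUFSIZE = 65536  # 64k: far fewer syscalls per file than a small buffer

def copy_in_chunks(src, dst):
    # stream src to dst one buffer at a time, never loading the whole file
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            chunk = fin.read(BUFSIZE)
            if not chunk:
                break
            fout.write(chunk)
```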
Toshio Kuratomi f0302f2e2d Fix problem with jail and zone connection plugins and symlinks from within the jail/zone. 2015-06-24 08:51:38 -07:00
Toshio Kuratomi a431098fc5 Fix for symlink problem in jail and zone
Manually pull in changes from ca2f2c4ebd
as git won't cherry-pick those files successfully.
2015-06-24 08:49:19 -07:00
Toshio Kuratomi 24fd4719cc Fix forwarding the user-given params from fetch_url() to open_url() 2015-06-23 15:21:14 -07:00
Toshio Kuratomi 9bf06bfe84 Update the submodule refs for stable-1.9 2015-06-19 09:17:42 -07:00
Kirk Strauser 65fc0161a8 Don't panic if AIX's uname doesn't support -W
The current code expects "uname -W" on AIX to always succeed. The AIX 5
instance I have doesn't support the -W flag, and fact gathering always
crashes on it.

This skips some WPAR handling code if "uname -W" doesn't work.
2015-06-16 19:10:28 -04:00
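The defensive pattern above — tolerate a uname without -W and skip WPAR handling on any failure — can be sketched like this (helper name is illustrative):

```python
import subprocess

def wpar_id():
    # Any failure (missing binary, or a uname that rejects -W, as seen on
    # AIX 5) means "no WPAR info": return None instead of crashing facts.
    try:
        proc = subprocess.run(["uname", "-W"], capture_output=True, text=True)
    except OSError:
        return None
    if proc.returncode != 0:
        return None
    return proc.stdout.strip()
```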
Toshio Kuratomi e3e4f7af6b Test is a bit different in v1 as the lookup generates a failure in v2 but a warning in v1 2015-06-15 16:52:36 -07:00
Toshio Kuratomi 968070729a Add test that url lookup checks tls certificates 2015-06-15 16:46:29 -07:00
Toshio Kuratomi 2829d0f6ca Add etcd and url lookup plugin security fixes to changelog 2015-06-15 16:14:24 -07:00
Toshio Kuratomi 73c7e98260 Backport security fixes to etcd and url lookup plugins 2015-06-15 16:13:05 -07:00
Toshio Kuratomi 08c1ddd24e Split the fetch_url() function into fetch_url and open_url().
open_url() is suitable for use outside of a module environment.  Will
let us use open_url to do SSL cert verification in other, non-module
code.
2015-06-15 12:30:15 -07:00
Toshio Kuratomi 4a5a8ed963 Add dnf to the list of module that we squash loop items for 2015-06-11 08:58:12 -07:00
Jon Hawkesworth 0868679f13 Get-FileChecksum always returns a string now,
and the test_win_copy integration tests that depend on the checksum
have been updated in this change too.
2015-06-10 20:48:15 -04:00
Brian Coca 74afd24387 added test for notify in case it is defined as an empty keyword 2015-06-10 18:52:50 -04:00
Brian Coca b70caac618 updated submodule refs 2015-06-03 14:38:51 -04:00
Brian Coca 7661f86c05 backported fix of missing import 2015-06-03 14:36:30 -04:00
Brian Coca 54103d23fc updated banners as per marketing's request 2015-06-03 11:22:35 -04:00
Brian Coca 7c428d1f51 added google AdWords tag 2015-06-03 11:22:34 -04:00
Brian Coca 8a4deba013 Merge pull request #11139 from wenottingham/galaxy-fix
Handle when role_dependencies is None.
2015-06-03 09:48:24 -04:00
Bill Nottingham 2a003cba88 Handle when role_dependencies is None. 2015-06-02 22:31:53 -04:00
Toshio Kuratomi 43385e2683 Add CVE number 2015-06-01 08:54:12 -07:00
Brian Coca 35ca1dd073 Merge pull request #11110 from myniva/t/11109-fix-keyerror
Fix KeyError which occurs when a non-existent entry is removed.
2015-06-01 11:34:20 -04:00
Brian Coca bc44e85f58 Merge pull request #11112 from jkleckner/apply-10563
Apply #10563 to stable-1.9
2015-06-01 09:33:03 -04:00
Basil Brunner 2945a462f6 Fix KeyError which occurs when a non-existent entry is removed. Fixes #11109 2015-06-01 11:02:54 +02:00
Matt Martz 873d01dfe2 egg_info is now written directly to lib 2015-05-30 08:39:31 -07:00
Toshio Kuratomi 2c9337157b Update stable submodule refs 2015-05-29 18:58:41 -07:00
Toshio Kuratomi 1f972c697f Add dnf fix to CHANGELOG 2015-05-29 13:49:11 -07:00
Toshio Kuratomi 867a119425 Update core module ref to pull in yum fix 2015-05-29 13:47:54 -07:00
Toshio Kuratomi 4716f3f268 Add yum module fixes 2015-05-29 13:47:12 -07:00
Toshio Kuratomi 8a4a342996 Oops, accidentally updated the submodule refs to devel instead of the
stable-1.9 branch
2015-05-29 13:27:17 -07:00
Monty Taylor 870afac287 Add defaults and a link to os-client-config docs
Conflicts:
	lib/ansible/utils/module_docs_fragments/openstack.py
2015-05-29 13:22:13 -07:00
Monty Taylor 7a76d4e03b Remove unneeded required_one_of for openstack
We're being too strict - there is a third possibility, which is that a
user will have defined the OS_* environment variables and expect them to
pass through.

Conflicts:
	lib/ansible/module_utils/openstack.py
	lib/ansible/utils/module_docs_fragments/openstack.py
	v2/ansible/module_utils/openstack.py
2015-05-29 13:18:14 -07:00
Toshio Kuratomi 8286253c81 Test on fields that exist 2015-05-28 17:02:01 -07:00
Toshio Kuratomi b5b5e7afba Add uri to modules that have been fixed to check server certificates 2015-05-28 15:40:01 -07:00
Toshio Kuratomi 9caedb1f63 Add test that validate_certs=no works 2015-05-28 15:38:44 -07:00
Toshio Kuratomi b5e25a57af Changelog entry for get_url fixes 2015-05-28 13:38:20 -07:00
Toshio Kuratomi be7c59c7bb Make fetch_url check the server's certificate on https connections 2015-05-28 13:28:05 -07:00
Simon Dick c0265f80fb Allow the use of HTTP on custom ports in the fetch_url function 2015-05-28 13:27:59 -07:00
Brian Coca 54fb04afc4 made sequence more flexible, can handle descending and negative sequences and is skipped if start==end 2015-05-22 12:48:29 -07:00
Toshio Kuratomi f18a128f12 Add entry for sequence fix 2015-05-22 12:27:00 -07:00
Brian Coca 2816b8679b fixed corner case when counting backwards, added test cases for count=0 and backwards counts 2015-05-22 12:14:17 -07:00
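The two sequence commits above describe the intended semantics: descending and negative ranges pick a stride direction automatically, and start == end yields nothing. A hedged sketch of those rules (a hypothetical helper, not the with_sequence lookup plugin itself):

```python
def sequence(start, end, stride=None):
    # Hypothetical helper mirroring the commit messages above: descending
    # and negative ranges choose a stride automatically, and the sequence
    # is skipped entirely when start == end.
    if start == end:
        return []
    if stride is None:
        stride = 1 if end > start else -1
    out, i = [], start
    while (stride > 0 and i <= end) or (stride < 0 and i >= end):
        out.append(i)
        i += stride
    return out
```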
James Cammarata 196d9e2893 Version update for release candidate 1.9.2-0.2.rc2 2015-05-22 14:01:16 -05:00
James Cammarata 8fab8aedc0 Submodule update for stable-1.9 2015-05-22 13:30:45 -05:00
James Cammarata 982bad7886 Version bump for release candidate 1.9.2-0.1.rc1 2015-05-15 21:02:17 -05:00
Brian Coca 127a669a23 made special treatment of certain filesystem for selinux configurable 2015-05-15 18:12:06 -04:00
Brian Coca 3a7cb413d1 Merge pull request #10985 from jmhodges/correct_unbound_error
correct unbound error in ec2.py's RDS code path
2015-05-11 23:01:55 -04:00
Jeff Hodges c2c56eefa8 correct unbound error variable in rds code path
Fixes #10910
2015-05-11 15:54:32 -07:00
Brian Coca 4eff3d5dc1 now properly inherit data from ansible.cfg for sudo/su ask pass
fixes #10891
2015-05-04 16:57:46 -04:00
Toshio Kuratomi e79535d42f Update core module ref to pull in docker module fixes for 1.9 2015-05-01 08:00:54 -07:00
James Cammarata b47d1d7e69 Version bump for release 1.9.1-1 2015-04-27 16:16:27 -05:00
James Cammarata 99af3a8dc1 version update for release candidate 1.9.1-0.4.rc4 2015-04-23 10:31:57 -05:00
Steve Gargan 286d9be512 avoid path issues by determining the path of ansible-pull and using its path to run ansible and ansible-playbook 2015-04-23 10:37:17 -04:00
Peter Oliver 3a5a6685a0 Consistently use "OracleLinux" in OS detection.
Previously, a mixture of "OracleLinux" and "Oracle Linux" was used,
causing the `ansible_os_family` fact not to be set to `RedHat`.

Fixes #10742.
2015-04-20 18:54:37 -04:00
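The consistency fix above boils down to a string-keyed lookup: `ansible_os_family` is derived from the distribution name, so an inconsistent spelling silently falls through. An illustrative sketch (this table is a small invented subset, not the real fact-gathering map):

```python
# Illustrative subset of an OS-family lookup; the real map in facts
# gathering is much larger.
OS_FAMILY = {
    'RedHat': 'RedHat',
    'Fedora': 'RedHat',
    'CentOS': 'RedHat',
    'OracleLinux': 'RedHat',
}


def os_family(distribution):
    # An inconsistent key such as 'Oracle Linux' (with a space) misses the
    # lookup and falls through unchanged, which is the bug fixed above.
    return OS_FAMILY.get(distribution, distribution)
```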
James Laska 09a6c0c906 Fix traceback with using GCE on EL6 with python-crypto2.6
This fix resolves an issue on EL6 systems where there may be multiple versions
of pycrypto installed.  EPEL provides both `python-crypto` and
`python-crypto2.6`.  These packages are co-installable.  However, modules
importing the `Crypto` library must specify which version to use, otherwise the
default will be used.

This change follows the same pattern established in `bin/ansible` for
specifying python library requirements.
2015-04-20 15:51:07 -04:00
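The commit above pins which `Crypto` distribution gets imported via the `__requires__` / pkg_resources mechanism. A hedged sketch of that pattern (the distribution name here is illustrative, not the actual line from `bin/ansible`):

```python
# __requires__ must be set before pkg_resources is first imported; resolving
# it places the matching distribution version on sys.path ahead of any
# default. 'setuptools' is only an illustrative requirement for this sketch.
__requires__ = ['setuptools']

try:
    import pkg_resources
    HAVE_PKG_RESOURCES = True
except Exception:  # pkg_resources may be absent on very new Pythons
    HAVE_PKG_RESOURCES = False
```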
James Cammarata 9a07855151 Version bump for release candidate 1.9.1-0.3.rc3 2015-04-17 14:47:21 -05:00
James Cammarata 763f44a52b Fix tag handling on meta:flush_handlers tasks
Fixes #10758
2015-04-17 13:01:39 -05:00
Brian Coca 8f4c97fdbe adjusted for the possibility of lsblk not existing for fact gathering 2015-04-17 11:16:09 -04:00
Chris Church a7a218349a Add -ExecutionPolicy Unrestricted back, was removed by #9602. 2015-04-16 08:47:45 -04:00
Chris Church 1cbc45700e Only try kerberos auth when username contains @ and pass realm to pywinrm. Alternative to #10644, fixes #10577. 2015-04-16 08:47:45 -04:00
Chris Church baa6426c57 Remove winrm connection cache (only useful when running against one host). Also fixes #10391. 2015-04-16 08:47:45 -04:00
James Cammarata e16e2b171c Version bump for 1.9.1-0.2.rc2 release candidate 2015-04-15 10:31:12 -05:00
Brian Coca ea9db2a0cc bad hack to maybe fix some corner cases with pbrun custom prompts 2015-04-15 11:18:02 -04:00
Brian Coca 2e5bad3385 fixed indent when looking at delegate_to vars 2015-04-14 19:00:15 -04:00
Brian Coca b1b78a4fd6 fixed another typo 2015-04-13 10:58:36 -04:00
Brian Coca f8b5e0814c typo fix 2015-04-13 10:49:31 -04:00
Brian Coca e609670fee fix for when calling bootinfo throws permission errors (AIX)
fixes https://github.com/ansible/ansible-modules-core/issues/1108
2015-04-13 10:26:08 -04:00
Jesse Rusak d13646dcc5 Fix --force-handlers, and allow it in plays and ansible.cfg
The --force-handlers command line argument was not correctly running
handlers on hosts which had tasks that later failed. This corrects that,
and also allows you to specify force_handlers in ansible.cfg or in a
play.
2015-04-13 10:20:11 -04:00
Toshio Kuratomi efa93d4239 Reverse the error messages from jsonfile get and set 2015-04-09 10:41:29 -07:00
Kimmo Koskinen 9bd2e3b752 Use codecs module while reading & writing json cache file 2015-04-09 10:41:23 -07:00
James Laska 7923a1a2c5 Improve generation of debian changelog 2015-04-09 09:49:06 -04:00
Brian Coca 8d703c459e Merge pull request #10639 from detiber/module_utils_facts_1_9
Fix indentation
2015-04-08 03:19:36 -04:00
Jason DeTiberus 626b2fc7ef Fix indentation 2015-04-07 23:00:54 -04:00
James Cammarata b186f7b85e Version bump for 1.9.1-0.1.rc1 2015-04-06 13:38:33 -05:00
Brian Coca b855456844 updated submodule refs 2015-04-02 16:00:22 -04:00
Brian Coca 277658835a capture IOErrors on backup_local (happens on non posix filesystems)
fixes #10591
2015-04-02 15:58:21 -04:00
Brian Coca 97a4483c7c removed folding sudo/su to become logic from constants as it is already present downstream in playbook/play/tasks 2015-04-02 15:57:26 -04:00
Brian Coca b965d12f1e now ansible ignores template errors on passwords
they could be caused by random character combinations, fixes #10468
2015-04-02 15:53:58 -04:00
Brian Coca 84b8a80aa7 converted error on play var initialization into warning with more information 2015-04-02 15:53:16 -04:00
Brian Coca 1d4b96479f dont break everything when one of the vars in inject does not template correctly, wait till its used 2015-04-02 15:53:16 -04:00
Brian Coca 81e4a74c89 added note that custom connection plugins need update with this version 2015-03-31 11:14:02 -04:00
Brian Coca 6ab57081ec readded sudo/su vars to allow role/includes to work with passed sudo/su 2015-03-29 10:17:41 -04:00
Brian Coca b4662c3eda Merge pull request #10556 from kristous/patch-1
Update README.md
2015-03-27 08:22:05 -04:00
kristous d5dded43da Update README.md
I think since ansible and the ansible-modules have been split, --recursive should be added
2015-03-27 06:20:13 +01:00
Brian Coca c0afe27e2f updated submodule refs 2015-03-26 18:02:59 -04:00
deimosfr 290c74d4f4 fix consul inventory issue (missing method param) 2015-03-26 18:02:06 -04:00
Toshio Kuratomi a00056723f Make run_command() work when we get byte str with non-ascii characters (instead of unicode type like we were expecting)
Fix and test.

Fixes #10536
2015-03-26 07:52:05 -07:00
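A minimal sketch of the byte-string coercion this kind of fix needs (a hypothetical helper, not the actual run_command() code):

```python
def to_text(value, encoding='utf-8'):
    # Coerce byte strings (which may carry non-ASCII bytes) to text instead
    # of assuming the caller already passed a unicode value; undecodable
    # bytes are replaced rather than raising UnicodeDecodeError.
    if isinstance(value, bytes):
        return value.decode(encoding, 'replace')
    return value
```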
Toshio Kuratomi d6afb5d80e Fix assert to work with unicode values 2015-03-26 07:52:05 -07:00
James Cammarata 29809bb83d Version bump for 1.9.0.1-1 2015-03-25 18:44:56 -05:00
James Cammarata 717ffe2bea Version bump for 1.9.0-2 2015-03-25 17:07:21 -05:00
Brian Coca a0c7381a37 updated ref to extras 2015-03-25 18:05:09 -04:00
James Cammarata d76f8deed5 Submodule updates for 1.9 2015-03-25 14:57:16 -05:00
James Cammarata b268970564 Updating submodule pointers for 1.9 2015-03-25 14:46:54 -05:00
James Cammarata 03c6492726 cleaning up CHANGELOG with incorrect entries 2015-03-25 14:43:02 -05:00
James Cammarata 4ca9fc7e6f Version bump for 1.9.0 final release 2015-03-25 14:37:22 -05:00
James Cammarata 3798a6623a tweaking the CHANGELOG 2015-03-25 14:18:15 -05:00
Toshio Kuratomi dc8b7bc8d2 And all of core module changes added 2015-03-25 14:18:10 -05:00
Brian Coca 0a334a160d updated to latest ref 2015-03-24 15:16:09 -04:00
Brian Coca cf3313be0c makes raw module have quiet ssh so as to avoid extra output when not required 2015-03-24 15:00:51 -04:00
James Cammarata 5d28d46b16 VERSION bump and submodule update for 1.9.0-0.2.rc2 2015-03-20 14:52:50 -05:00
Toshio Kuratomi 30af3166af Update core modules for asg tag fix 2015-03-20 11:39:12 -07:00
Eri Bastos 22b10a8f6e Patch for bug #10485 - ansible_distribution fact populates as 'RedHat' on Oracle Linux systems 2015-03-20 14:07:24 -04:00
Brian Coca bb58cbcd91 now use combine vars to preserve existing cached host vars 2015-03-20 11:34:53 -04:00
Brian Coca 608496dbd3 removed debug play from tests 2015-03-20 11:24:35 -04:00
Brian Coca e3e97f6e06 now correctly applies add_host passed variables last to override existing vars. 2015-03-20 11:23:50 -04:00
Brian Coca 6dca95b309 now add_host loads hostvars 2015-03-20 10:34:51 -04:00
Toshio Kuratomi fd47c4d687 Pull ec2_asg fixes by updating core modules 2015-03-19 22:47:14 -07:00
Toshio Kuratomi 0551f36d6b Have selinux allow docker<=>nginx communication 2015-03-19 13:53:35 -07:00
Toshio Kuratomi cfda56908a Okay, let's see if these pauses are enough to get this passing 2015-03-19 13:53:26 -07:00
Toshio Kuratomi 4009c053f7 Fix the removal of busybox image 2015-03-19 13:53:10 -07:00
Toshio Kuratomi c571a0e6ea Some debugging for why docker tests are failing in jenkins 2015-03-19 13:53:02 -07:00
Toshio Kuratomi 9082905a62 Add more tests for private docker registries 2015-03-19 13:52:54 -07:00
Toshio Kuratomi f4878a2bec Remove debug statements 2015-03-19 13:52:41 -07:00
Toshio Kuratomi 67a559ed2b Add tests using a docker private registry 2015-03-19 13:52:33 -07:00
Toshio Kuratomi cfc90dff4b And ran into a different problem with centos6. Sigh. 2015-03-19 13:52:19 -07:00
Toshio Kuratomi e02fbcdb0f Attempt to enable docker tests for rhel/centos6 as well 2015-03-19 13:52:09 -07:00
Toshio Kuratomi 2fbfe5cdb2 Would help if I added these files in the right directory 2015-03-19 13:52:01 -07:00
Toshio Kuratomi dc6a1f42af Ugh, looks like very few distros have the proper packages to run the docker module.
break up the tests so that we can maybe run this on at least one
platform
2015-03-19 13:51:43 -07:00
Toshio Kuratomi af90817622 Initial test of the docker module 2015-03-19 13:51:29 -07:00
Toshio Kuratomi 6a803f6582 Pull another fix in from core modules 2015-03-19 12:49:42 -07:00
Brian Coca 2459eb6dbc updated module ref for core 2015-03-19 15:20:28 -04:00
Brian Coca b34b50d8d1 updated core to latests stable 1.9 2015-03-19 14:47:47 -04:00
Brian Coca da62887233 ignore PE methods that are not sudo for checksums until we get them working universally 2015-03-19 14:46:40 -04:00
Brian Coca 824fc036e7 removed bare variable detection as this confuses people and forced us to allow for bare expressions 2015-03-19 09:28:04 -04:00
Toshio Kuratomi 7d2915442e Update the extras module pointer 2015-03-18 20:25:06 -07:00
Toshio Kuratomi ebc8193c48 Update core module pointer 2015-03-18 19:55:55 -07:00
Toshio Kuratomi dbe0c4f771 Update docker module 2015-03-18 18:29:18 -07:00
Brian Coca a4f2407328 added missing become method inventory override 2015-03-17 19:20:21 -04:00
Toshio Kuratomi e819263820 Update core modules pointer 2015-03-17 11:02:53 -07:00
Brian Coca 2a229bdb6c fixed issue with su in plays 2015-03-16 19:39:16 -04:00
Brian Coca 7fba952a9e slight changes to allow for checksum and other commands to work correctly with quoting 2015-03-16 19:10:10 -04:00
Steve Gargan 103dc01817 log errors and explicitly exit rather than raising exceptions 2015-03-16 15:12:40 -07:00
Steve Gargan 5488bc395e fix for issue #10422. outputs informative error message when AWS credentials are not available 2015-03-16 15:12:32 -07:00
James Laska 0898920eb0 Enable assert_raises_regexp on py26 2015-03-16 12:43:05 -07:00
James Laska fd4f541ded Add tox and travis-ci support
Add tox integration to run unittests in supported python releases.
Travis-CI is used for test execution.

Additionally, the unittest TestQuotePgIdentifier was updated to support
using assert_raises_regexp on python-2.6.

Sample travis-ci output available at
https://travis-ci.org/ansible/ansible/builds/54189977
2015-03-16 12:16:02 -07:00
Toshio Kuratomi dc434dd74e Update core pointer to make use of DOCKER_TLS_VERIFY env var:
https://github.com/ansible/ansible-modules-core/issues/946
2015-03-16 11:45:27 -07:00
Toshio Kuratomi 29407752c5 Update core modules pointer 2015-03-16 11:35:51 -07:00
Brian Coca 2823c8c987 fixed raw return check for privilege escalation 2015-03-16 14:00:50 -04:00
Toshio Kuratomi 28892cca14 Update core module pointer 2015-03-13 13:54:08 -07:00
Hartmut Goebel 41c892baf4 Fix detection of docker as virtualization_type.
Match not only `/docker/`, but also `docker-` followed by a hex id.

Example (shortened):
```
$ cat /proc/1/cgroup
8:blkio:/system.slice/docker-de73f4d207861cf8757b69213ee67bb234b897a18bea7385964b6ed2d515da94.scope
7:net_cls:/
```
2015-03-13 11:43:25 -07:00
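The detection described above matches both cgroup layouts in `/proc/1/cgroup`. An illustrative pattern (not the exact one in `facts.py`):

```python
import re

# Match both the old '/docker/<id>' cgroup layout and the systemd
# 'docker-<hexid>.scope' form shown in the commit message above.
DOCKER_CGROUP = re.compile(r'/docker(/|-[0-9a-f]+\.scope)')


def looks_like_docker(cgroup_line):
    return bool(DOCKER_CGROUP.search(cgroup_line))
```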
jhermann 7ba49ed430 added test requirements for pip 2015-03-12 14:00:39 -07:00
Toshio Kuratomi 1cf533f8df Comma is also dependent on position within the hash 2015-03-12 13:21:46 -07:00
Brian Coca 7768ab2b5c fixed and reintroduced synchronize test, fakerunner object needed become_method to be its default 'sudo' 2015-03-12 14:48:21 -04:00
Toshio Kuratomi 6aaf77ac76 Hash randomization makes one of the heuristic_log_sanitize checks not work.
Nothing we can do, when it sanitizes ssh_urls it's simply overzealous.
2015-03-12 11:40:19 -07:00
James Cammarata 8d847efa37 Fix issue with unarchive disabling pipelining mode
Was using persist_files=True when specifying the create parameter,
which breaks pipelining. Switched to use delete_remote_tmp=False instead,
which is the proper way to preserve the remote tmp dir when running
other modules from the action plugin.
2015-03-12 10:24:38 -05:00
Toshio Kuratomi 0e2a21f1fa Update core pointer to pick up docker fix 2015-03-12 08:21:39 -07:00
Brian Coca 5f7cc8f0c1 changed from hash_merge to combine vars which resets default to
overwrite and not merge hashing
corrected merge vs combined in all pertinent sections
fixed typo in combined_vars
removed redundant inventory call, moved groups to proper priority
readded inventory vars to runner's vars
correctly added inventory this time
2015-03-12 11:03:35 -04:00
Brian Coca 4db4fcd5a6 fixed missed conversion of su to become 2015-03-12 10:02:03 -04:00
Toshio Kuratomi 4941755851 Test case for #10426 2015-03-11 20:59:02 -07:00
Shirou WAKAYAMA 1c09660c44 set 'nonstring' arg to passthru. 2015-03-11 20:42:40 -07:00
Shirou WAKAYAMA a388cd69c0 use to_unicode() in _jinja2_vars if type is str. 2015-03-11 20:42:31 -07:00
Jürgen Hermann c33d2fa283 Generic package_dir mapping in setup.py (closes #10437) 2015-03-11 19:17:02 -07:00
Toshio Kuratomi 8f05824dda Update for another 3c2 fix 2015-03-11 19:09:07 -07:00
Toshio Kuratomi 7ea7279080 Update core modules to pull in fixes 2015-03-11 18:43:02 -07:00
Brian Coca c997228896 fixed missed su to become conversion 2015-03-11 19:24:03 -04:00
Brian Coca 839d2a2a79 fixes password error detection for ssh connection plugin
removes synchronize test that does not work with current sudo setup
Fixes #10434
2015-03-11 19:09:34 -04:00
Brian Coca 4459686099 fix tag test that broke with new tag info displayed in list tasks 2015-03-11 16:29:31 -04:00
James Cammarata 435371d3fc Fix deb packaging version in the changelog 2015-03-11 14:50:45 -05:00
Brian Coca a2ff6cd5d0 removed uneeded reference to su_user 2015-03-11 11:58:49 -05:00
Brian Coca 44d9c02ba5 fixed traceback when x_user implicitly sets the become method
Fixes #10430

Also removed redundant resolution of sudo/su for backwards compatibility which
confused the conflict detection code.
2015-03-11 11:58:32 -05:00
Jeff Widman 4478ed60b4 Typo: lead --> led 2015-03-11 11:53:36 -05:00
Brian Coca 63469fceaa fix issue with ask pass signature 2015-03-11 10:29:18 -04:00
Brian Coca f0bdf0145a fixed bad paren in connection plugin 2015-03-11 09:31:24 -04:00
Brian Coca bce4bb2ce2 preliminary privilege escalation unification + pbrun
- become constants inherit existing sudo/su ones
- become command line options, marked sudo/su as deprecated and moved sudo/su passwords to runas group
- changed method signatures as privilege escalation is collapsed to become
- added tests for su and become, disabled su for lack of support in local.py
- updated playbook,play and task objects to become
- added become to runner
- added whoami test for become/sudo/su
- added home override dir for plugins
- removed useless method from ask pass
- forced become pass to always be string also uses to_bytes
- fixed fakerunner for tests
- corrected reference in synchronize action plugin
- added pfexec (needs testing)
- removed unused sudo/su in runner init
- removed deprecated info
- updated pe tests to allow to run under sudo and not need root
- normalized become options into a function to avoid duplication and inconsistencies
- pushed supported list to connection class property
- updated all connection plugins to latest 'become' pe

- includes fixes from feedback (including typos)
- added draft docs
- stub of become_exe, leaving for future v2 fixes
2015-03-10 17:42:52 -05:00
James Cammarata f4329c8977 Submodule update for stable-1.9 branch 2015-03-10 17:27:58 -05:00
James Cammarata 6d5a5883fe Setting up new release candidate versioning 2015-03-10 17:16:33 -05:00
404 changed files with 3626 additions and 35110 deletions

.coveragerc Normal file

@@ -0,0 +1,4 @@
[report]
omit =
*/python?.?/*
*/site-packages/nose/*

.gitignore vendored

@@ -42,6 +42,7 @@ deb-build
credentials.yml
# test output
.coverage
.tox
results.xml
coverage.xml
/test/units/cover-html

.gitmodules vendored

@@ -1,16 +1,8 @@
[submodule "lib/ansible/modules/core"]
path = lib/ansible/modules/core
url = https://github.com/ansible/ansible-modules-core.git
branch = devel
branch = stable-1.9
[submodule "lib/ansible/modules/extras"]
path = lib/ansible/modules/extras
url = https://github.com/ansible/ansible-modules-extras.git
branch = devel
[submodule "v2/ansible/modules/core"]
path = v2/ansible/modules/core
url = https://github.com/ansible/ansible-modules-core.git
branch = devel
[submodule "v2/ansible/modules/extras"]
path = v2/ansible/modules/extras
url = https://github.com/ansible/ansible-modules-extras.git
branch = devel
branch = stable-1.9

.travis.yml Normal file

@@ -0,0 +1,9 @@
sudo: false
language: python
env:
- TOXENV=py26
- TOXENV=py27
install:
- pip install tox
script:
- tox


@@ -1,35 +1,252 @@
Ansible Changes By Release
==========================
## 2.0 "TBD" - ACTIVE DEVELOPMENT
in progress, details pending
Major Changes:
* Add a clone parameter to git module that allows you to get information about a remote repo even if it doesn't exist locally.
* Safety changes: several modules have force parameters that defaulted to true.
New Modules:
Other Notable Changes:
## 1.9.7 "Dancing in the Street" - TBD
* Fix for lxc_container backport which was broken because it tried to use a feature from ansible-2.x
* Fix for apt_key not deleting keys when given a long key_id.
## 1.9.6 "Dancing in the Street" - Apr 15, 2016
* Fix a regression in the loading of inventory variables where they were not
found when placed inside of an inventory directory.
* Fix lxc_container having predictable temp file names. Addresses CVE-2016-3096
## 1.9.5 "Dancing In the Street" - Mar 21, 2016
* Compatibility fix with docker 1.8.
* Fix a bug with the crypttab module omitting certain characters from the name of the device
* Fix bug with uri module not handling all binary files
* Fix bug with ini_file not removing options set to an empty string
* Fix bug with script and raw modules not honoring parameters passed via yaml dict syntax
* Fix bug with plugin loading finding the wrong modules because the suffix checking was not ordered
* Fix bug in the literal_eval module code used when we need python-2.4 compat
* Added --ignore-certs, -c option to ansible-galaxy. Allows ansible-galaxy to work behind a proxy
when the proxy fails to forward server certificates.
* Fixed bug where tasks marked no_log were showing hidden values in output if
ansible's --diff option was used.
* Fix bug with non-english locales in git and apt modules
* Compatibility fix for using state=absent with the pip ansible module and pip-6.1.0+
* Backported support for ansible_winrm_server_cert_validation flag to disable cert validation on Python 2.7.9+ (and support for other passthru args to pywinrm transport).
* Backported various updates to user module (prevent accidental OS X group membership removals, various checkmode fixes).
## 1.9.4 "Dancing In the Street" - Oct 10, 2015
* Fixes a bug where yum state=latest would error if there were no updates to install.
* Fixes a bug where yum state=latest did not work with wildcard package names.
* Fixes a bug in lineinfile relating to escape sequences.
* Fixes a bug where vars_prompt was not keeping passwords private by default.
* Fix ansible-galaxy and the hipchat callback plugin to check that the host it
is contacting matches its TLS Certificate.
## 1.9.3 "Dancing In the Street" - Sep 3, 2015
* Fixes a bug related to keyczar messing up encodings internally, resulting in decrypted
messages coming out as empty strings.
* AES Keys generated for use in accelerated mode are now 256-bit by default instead of 128.
* Fix url fetching for SNI with python-2.7.9 or greater. SNI does not work
with python < 2.7.9. The best workaround is probably to use the command
module with curl or wget.
* Fix url fetching to allow tls-1.1 and tls-1.2 if the system's openssl library
supports those protocols
* Fix ec2_ami_search module to check TLS Certificates
* Fix the following extras modules to check TLS Certificates:
* campfire
* layman
* librarto_annotate
* twilio
* typetalk
* Fix docker module's parsing of docker-py version for dev checkouts
* Fix docker module to work with docker server api 1.19
* Change yum module's state=latest feature to update all packages specified in
a single transaction. This is the same type of fix as was made for yum's
state=installed in 1.9.2 and both solves the same problems and with the same caveats.
* Fixed a bug where stdout from a module might be blank when there were non-printable
ASCII characters contained within it
## 1.9.2 "Dancing In the Street" - Jun 26, 2015
* Security fixes to check that hostnames match certificates with https urls (CVE-2015-3908)
- get_url and uri modules
- url and etcd lookup plugins
* Security fixes to the zone (Solaris containers), jail (bsd containers),
and chroot connection plugins. These plugins can be used to connect to
their respective container types in lieu of the standard ssh connection.
Prior to this fix being applied these connection plugins didn't properly
handle symlinks within the containers which could lead to files intended to
be written to or read from the container being written to or read from the
host system instead. (CVE-2015-6240)
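The symlink problem described in that entry comes down to resolving paths before use and refusing any that escape the container root. A hedged sketch of such a check (hypothetical helper, not the actual connection-plugin code):

```python
import os


def safe_container_path(root, relpath):
    # Resolve symlinks first, then refuse paths that escape the container
    # root; a sketch of the kind of check the CVE-2015-6240 fix calls for.
    resolved = os.path.realpath(os.path.join(root, relpath.lstrip('/')))
    if resolved != root and not resolved.startswith(root + os.sep):
        raise ValueError('path escapes container root: %s' % resolved)
    return resolved
```

Without the check, a symlink (or `..` components) inside the container can redirect a read or write to the host filesystem.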
* Fixed a bug in the service module where init scripts were being incorrectly used instead of upstart/systemd.
* Fixed a bug where sudo/su settings were not inherited from ansible.cfg correctly.
* Fixed a bug in the rds module where a traceback may occur due to an unbound variable.
* Fixed a bug on certain remote file systems where the SELinux context was not being properly set.
* Re-enabled several windows modules which had been partially merged (via action plugins):
- win_copy.ps1
- win_copy.py
- win_file.ps1
- win_file.py
- win_template.py
* Fix bug using with_sequence and a count that is zero. Also allows counting backwards instead of forwards
* Fix get_url module bug preventing use of custom ports with https urls
* Fix bug disabling repositories in the yum module.
* Fix giving yum module a url to install a package from on RHEL/CENTOS5
* Fix bug in dnf module preventing it from working when yum-utils was not already installed
* Change yum module to install all packages specified in a single transaction.
This fixes problems with dependencies between packages specified by filename
or URL. However, if you are installing packages which install or modify repository
information (for instance, epel-release) then you may need to make a separate
task to install the package that modifies the repo otherwise the correct
repository information may not be available for other packages you are trying to install.
## 1.9.1 "Dancing In the Street" - Apr 27, 2015
* Fixed a bug related to Kerberos auth when using winrm with a domain account.
* Fixing several bugs in the s3 module.
* Fixed a bug with upstart service detection in the service module.
* Fixed several bugs with the user module when used on OSX.
* Fixed unicode handling in some module situations (assert and shell/command execution).
* Fixed a bug in redhat_subscription when using the activationkey parameter.
* Fixed a traceback in the gce module on EL6 distros when multiple pycrypto installations are available.
* Added support for PostgreSQL 9.4 in rds_param_group
* Several other minor fixes.
## 1.9 "Dancing In the Street" - Mar 25, 2015
Major changes:
* Added kerberos support to winrm connection plugin.
* Tags overhaul: added 'all', 'always', 'untagged' and 'tagged' special tags and normalized
tag resolution. Added tag information to --list-tasks and new --list-tags option.
* Privilege Escalation generalization, new 'Become' system and variables now will
handle existing and new methods. Sudo and su have been kept for backwards compatibility.
New methods pbrun and pfexec in 'alpha' state, planned adding 'runas' for winrm connection plugin.
Existing custom connection plugins will need to be updated.
* Improved ssh connection error reporting, now you get back the specific message from ssh.
* Added facility to document task module return values for registered vars, both for
ansible-doc and the docsite. Documented copy, stats and acl modules, the rest must be
updated individually (we will start doing so incrementally).
* Optimize the plugin loader to cache available plugins much more efficiently.
For some use cases this can lead to dramatic improvements in startup time.
* Overhaul of the checksum system, now supports more systems and more cases more reliably and uniformly.
* Fix skipped tasks to not display their parameters if no_log is specified.
* Many fixes to unicode support, standardized functions to make it easier to add to input/output boundaries.
* Added travis integration to github for basic tests, this should speed up ticket triage and merging.
* environment: directive now can also be applied to play and is inherited by tasks, which can still override it.
* expanded facts and OS/distribution support for existing facts and improved performance with pypy.
* new 'wantlist' option to lookups allows for selecting a list typed variable vs a comma-delimited string as the return.
* the shared module code for file backups now uses a timestamp resolution of seconds (previously minutes).
* allow for empty inventories, this is now a warning and not an error (for those using localhost and cloud modules).
* sped up YAML parsing in ansible by up to 25% by switching to CParser loader.
New Modules:
* crypttab: manages linux encrypted block devices
* gce_img: for utilizing GCE image resources
* gluster_volume: manage glusterfs volumes
* haproxy: for the load balancer of same name
* known_hosts: manages the ssh known_hosts file
* lxc_container: manage lxc containers
* patch: allows for patching files on target systems
* pkg5: installing and uninstalling packages on Solaris
* pkg5_publisher: manages Solaris pkg5 repository configuration
* postgresql_ext: manage postgresql extensions
* snmp_facts: gather facts via snmp
* svc: manages daemontools based services
* uptimerobot: manage monitoring with this service
New Filters:
* ternary: allows for trueval/falseval assignment depending on a conditional
* cartesian: returns the cartesian product of 2 lists
* to_uuid: given a string it will return an ansible domain specific UUID
* checksum: uses the ansible internal checksum to return a hash from a string
* hash: get a hash from a string (md5, sha1, etc)
* password_hash: get a hash from a string that can be used as a password in the user module (and others)
* A whole set of ip/network manipulation filters: ipaddr, ipwrap, ipv4, ipv6, ipsubnet, nthhost, hwaddr, macaddr
Other Notable Changes:
* New lookup plugins:
* dig: does dns resolution and returns IPs.
* url: allows pulling data from a url.
* New callback plugins:
* syslog_json: allows logging play output to a syslog network server using json format
* Many new enhancements to the amazon web service modules:
* ec2 now applies all specified security groups when creating a new instance. Previously it was only applying one
* ec2_vol gained the ability to specify the EBS volume type
* ec2_vol can now detach volumes by specifying instance=None
* Fix ec2_group to purge specific grants rather than whole rules
* Added tenancy support for the ec2 module
* rds module has gained the ability to manage tags and set charset and public accessibility
* ec2_snapshot module gained the capability to remove snapshots
* Add alias support for route53
* Add private_zones support to route53
* ec2_asg: Add wait_for_instances parameter that waits until an instance is ready before ending the ansible task
* Many new docker improvements:
* restart_policy parameters to configure when the container automatically restarts
* If the docker client or server doesn't support an option, the task will now fail instead of silently ignoring the option
* Add insecure_registry parameter for connecting to registries via http
* New parameter to set a container's domainname
* Undeprecated docker_image module until there's replacement functionality
* Allow setting the container's pid namespace
* Add a pull parameter that chooses when ansible will look for more recent images in the registry
* docker module states have been greatly enhanced. The reworked and new states are:
* present now creates but does not start containers
* restarted always restarts a container
* reloaded restarts a container if ansible detects that the configuration is different from what is specified
* reloaded accounts for exposed ports, env vars, and volumes
* Can now connect to the docker server using TLS
* Several source control modules had force parameters that defaulted to true.
These have been changed to default to false so as not to accidentally lose
work. Playbooks that depended on the former behaviour simply need to add
force=True to the task that needs it. Affected modules:
* bzr: When local modifications exist in a checkout, the bzr module used to
default to removing the modifications on any operation. Now the module
will not remove the modifications unless force=yes is specified.
Operations that depend on a clean working tree may fail unless force=yes is
added.
* git: When local modifications exist in a checkout, the git module will now
fail unless force is explictly specified. Specifying force will allow the
module to revert and overwrite local modifications to make git actions
fail unless force is explictly specified. Specifying force=yes will allow
the module to revert and overwrite local modifications to make git actions
succeed.
* hg: When local modifications exist in a checkout, the hg module used to
default to removing the modifications on any operation. Now the module
will not remove the modifications unless force=yes is specified.
* subversion: When updating a checkout with local modifications, you now need
to add force so the module will revert the modifications before updating.
* Optimize the plugin loader to cache available plugins much more efficiently.
For some use cases this can lead to dramatic improvements in startup time.
* Fix skipped tasks to not display their parameters if no_log is specified.
to add force=yes so the module will revert the modifications before updating.
* New inventory scripts:
* vbox: virtualbox
* consul: use consul as an inventory source
* gce gained the ip_forward parameter to forward ip packets
* disk_auto_delete parameter to gce that will remove the boot disk after an instance is destroyed
* gce can now spawn instances with no external ip
* gce_pd gained the ability to choose a disk type
* gce_net gained target_tags parameter for creating firewall rules
* rax module has new parameters for making use of a boot volume
* Add scheduler_hints to the nova_compute module for optional parameters
* vsphere_guest now supports deploying guests from a template
* Many fixes for hardlink and softlink handling in file-related modules
* Implement user, group, mode, and selinux parameters for the unarchive module
* authorized_keys can now use url as a key source
* authorized_keys has a new exclusive parameter that determines whether keys that weren't specified in the task should be removed
* The selinux module now sets the current running state to permissive if state='disabled'
* Can now set accounts to expire via the user module
* Overhaul of the service module to make code simpler and behave better for systems running several popular init systems
* yum module now has a parameter to refresh its cache of package metadata
* apt module gained a build_dep parameter to install a package's build dependencies
* Add parameters to the postgres modules to specify a unix socket to connect to the db
* The mount module now supports bind mounts
* Add a clone parameter to git module that allows you to get information about a remote repo even if it doesn't exist locally.
* Add a refspec argument to the git module that allows pulling commits that aren't part of a branch
* Many documentation additions and fixes.
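Two of the git-module changes above can be sketched as playbook tasks. This is a hedged sketch in the 1.9-era key=value argument style: the repo URL, destination path, and pull-request ref are placeholders.

```yaml
# Opt back into the pre-1.9 behaviour: discard local modifications on update.
- name: check out a repo, overwriting local changes
  git: repo=https://example.com/project.git dest=/srv/project force=yes

# Use the new refspec argument to fetch a commit that is not part of a branch
# (here, a hypothetical pull-request head ref).
- name: fetch a pull request ref
  git: repo=https://example.com/project.git dest=/srv/project refspec=+refs/pull/123/head version=FETCH_HEAD
```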
## 1.8.4 "You Really Got Me" - Feb 19, 2015


@ -1,10 +1,9 @@
prune v2
prune docsite
prune ticket_stubs
prune packaging
prune test
prune hacking
include README.md packaging/rpm/ansible.spec COPYING
include README.md COPYING
include examples/hosts
include examples/ansible.cfg
include lib/ansible/module_utils/powershell.ps1


@ -34,7 +34,8 @@ PYTHON=python
SITELIB = $(shell $(PYTHON) -c "from distutils.sysconfig import get_python_lib; print get_python_lib()")
# VERSION file provides one place to update the software version
VERSION := $(shell cat VERSION)
VERSION := $(shell cat VERSION | cut -f1 -d' ')
RELEASE := $(shell cat VERSION | cut -f2 -d' ')
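The two cut calls above split a two-field VERSION file into the software version and the package release. A minimal sketch of the same split, assuming the stable-1.9 VERSION file reads `1.9.6 1`:

```python
# Mirror of the Makefile's `cut -f1 -d' '` / `cut -f2 -d' '` on VERSION.
version_file = "1.9.6 1"   # assumed contents: "<version> <release>"
version, release = version_file.split(' ', 1)
print(version)   # 1.9.6
print(release)   # 1
```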
# Get the branch information from git
ifneq ($(shell which git),)
@ -52,15 +53,16 @@ DEBUILD_BIN ?= debuild
DEBUILD_OPTS = --source-option="-I"
DPUT_BIN ?= dput
DPUT_OPTS ?=
DEB_DATE := $(shell date +"%a, %d %b %Y %T %z")
ifeq ($(OFFICIAL),yes)
DEB_RELEASE = 1ppa
DEB_RELEASE = $(RELEASE)ppa
# Sign OFFICIAL builds using 'DEBSIGN_KEYID'
# DEBSIGN_KEYID is required when signing
ifneq ($(DEBSIGN_KEYID),)
DEBUILD_OPTS += -k$(DEBSIGN_KEYID)
endif
else
DEB_RELEASE = 0.git$(DATE)
DEB_RELEASE = 100.git$(DATE)
# Do not sign unofficial builds
DEBUILD_OPTS += -uc -us
DPUT_OPTS += -u
@ -73,12 +75,12 @@ DEB_DIST ?= unstable
# RPM build parameters
RPMSPECDIR= packaging/rpm
RPMSPEC = $(RPMSPECDIR)/ansible.spec
RPMDIST = $(shell rpm --eval '%{?dist}')
RPMRELEASE = 1
RPMDIST ?= $(shell rpm --eval '%{?dist}')
RPMRELEASE = $(RELEASE)
ifneq ($(OFFICIAL),yes)
RPMRELEASE = 0.git$(DATE)
RPMRELEASE = 100.git$(DATE)
endif
RPMNVR = "$(NAME)-$(VERSION)-$(RPMRELEASE)$(RPMDIST)"
RPMNVR = $(NAME)1.9-$(VERSION)-$(RPMRELEASE)$(RPMDIST)
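With those pieces, the new RPMNVR for an unofficial stable-1.9 build expands as sketched below (the date and dist tag are assumed values). Bumping unofficial builds from 0.git to 100.git makes locally built rpms version-sort above the official release, per the "Increase local version for unofficial rpms" commit.

```python
# Assumed inputs: VERSION "1.9.6 1", unofficial build on 2016-05-04, dist .el7.
name, version = "ansible", "1.9.6"
rpmrelease = "100.git20160504"   # unofficial: 100.git<DATE>; official: <RELEASE>
rpmdist = ".el7"
rpmnvr = "%s1.9-%s-%s%s" % (name, version, rpmrelease, rpmdist)
print(rpmnvr)  # ansible1.9-1.9.6-100.git20160504.el7
```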
# MOCK build parameters
MOCK_BIN ?= mock
@ -93,13 +95,7 @@ NOSETESTS3 ?= nosetests-3.3
all: clean python
tests:
PYTHONPATH=./lib $(NOSETESTS) -d -w test/units -v # Could do: --with-coverage --cover-package=ansible
newtests:
PYTHONPATH=./v2:./lib $(NOSETESTS) -d -w v2/test -v --with-coverage --cover-package=ansible --cover-branches
newtests-py3:
PYTHONPATH=./v2:./lib $(NOSETESTS3) -d -w v2/test -v --with-coverage --cover-package=ansible --cover-branches
PYTHONPATH=./lib $(NOSETESTS) -d -w test/units -v --with-coverage --cover-package=ansible --cover-branches
authors:
sh hacking/authors.sh
@ -175,7 +171,7 @@ mock-srpm: /etc/mock/$(MOCK_CFG).cfg rpmcommon
@echo "#############################################"
mock-rpm: /etc/mock/$(MOCK_CFG).cfg mock-srpm
$(MOCK_BIN) -r $(MOCK_CFG) --resultdir rpm-build/ --rebuild rpm-build/$(NAME)-*.src.rpm
$(MOCK_BIN) -r $(MOCK_CFG) --resultdir rpm-build/ --rebuild rpm-build/$(RPMNVR).src.rpm
@echo "#############################################"
@echo "Ansible RPM is built:"
@echo rpm-build/*.noarch.rpm
@ -202,7 +198,7 @@ rpm: rpmcommon
--define "_srcrpmdir %{_topdir}" \
--define "_specdir $(RPMSPECDIR)" \
--define "_sourcedir %{_topdir}" \
--define "_rpmfilename %%{NAME}-%%{VERSION}-%%{RELEASE}.%%{ARCH}.rpm" \
--define "_rpmfilename $(RPMNVR).%%{ARCH}.rpm" \
--define "__python `which $(PYTHON)`" \
-ba rpm-build/$(NAME).spec
@rm -f rpm-build/$(NAME).spec
@ -216,7 +212,7 @@ debian: sdist
mkdir -p deb-build/$${DIST} ; \
tar -C deb-build/$${DIST} -xvf dist/$(NAME)-$(VERSION).tar.gz ; \
cp -a packaging/debian deb-build/$${DIST}/$(NAME)-$(VERSION)/ ; \
sed -ie "s#^$(NAME) (\([^)]*\)) \([^;]*\);#ansible (\1-$(DEB_RELEASE)~$${DIST}) $${DIST};#" deb-build/$${DIST}/$(NAME)-$(VERSION)/debian/changelog ; \
sed -ie "s|%VERSION%|$(VERSION)|g;s|%RELEASE%|$(DEB_RELEASE)|;s|%DIST%|$${DIST}|g;s|%DATE%|$(DEB_DATE)|g" deb-build/$${DIST}/$(NAME)-$(VERSION)/debian/changelog ; \
done
deb: debian


@ -1,4 +1,6 @@
[![PyPI version](https://badge.fury.io/py/ansible.png)](http://badge.fury.io/py/ansible) [![PyPI downloads](https://pypip.in/d/ansible/badge.png)](https://pypi.python.org/pypi/ansible)
[![PyPI version](https://badge.fury.io/py/ansible.png)](http://badge.fury.io/py/ansible)
[![PyPI downloads](https://pypip.in/d/ansible/badge.png)](https://pypi.python.org/pypi/ansible)
[![Build Status](https://travis-ci.org/ansible/ansible.svg?branch=tox_and_travis)](https://travis-ci.org/ansible/ansible)
Ansible


@ -4,12 +4,23 @@ Ansible Releases at a Glance
Active Development
++++++++++++++++++
1.9 "Dancing In the Street" - in progress
2.0 "TBD" - in progress
Released
++++++++
1.8.1 "You Really Got Me" -- 11-26-2014
1.9.6 "Dancing In the Streets" 04-15-2016
1.9.5 "Dancing In the Streets" 03-21-2016
1.9.4 "Dancing In the Streets" 10-09-2015
1.9.3 "Dancing In the Streets" 09-03-2015
1.9.2 "Dancing In the Streets" 06-24-2015
1.9.1 "Dancing In the Streets" 04-27-2015
1.9.0 "Dancing In the Streets" 03-25-2015
1.8.4 "You Really Got Me" ---- 02-19-2015
1.8.3 "You Really Got Me" ---- 02-17-2015
1.8.2 "You Really Got Me" ---- 12-04-2014
1.8.1 "You Really Got Me" ---- 11-26-2014
1.8 "You Really Got Me" ---- 11-25-2014
1.7.2 "Summer Nights" -------- 09-24-2014
1.7.1 "Summer Nights" -------- 08-14-2014
1.7 "Summer Nights" -------- 08-06-2014


@ -1 +1 @@
1.9
1.9.6 1


@ -58,12 +58,12 @@ class Cli(object):
''' create an options parser for bin/ansible '''
parser = utils.base_parser(
constants=C,
runas_opts=True,
subset_opts=True,
constants=C,
runas_opts=True,
subset_opts=True,
async_opts=True,
output_opts=True,
connect_opts=True,
output_opts=True,
connect_opts=True,
check_opts=True,
diff_opts=False,
usage='%prog <host-pattern> [options]'
@ -82,12 +82,8 @@ class Cli(object):
parser.print_help()
sys.exit(1)
# su and sudo command line arguments need to be mutually exclusive
if (options.su or options.su_user or options.ask_su_pass) and \
(options.sudo or options.sudo_user or options.ask_sudo_pass):
parser.error("Sudo arguments ('--sudo', '--sudo-user', and '--ask-sudo-pass') "
"and su arguments ('-su', '--su-user', and '--ask-su-pass') are "
"mutually exclusive")
# privilege escalation command line arguments need to be mutually exclusive
utils.check_mutually_exclusive_privilege(options, parser)
if (options.ask_vault_pass and options.vault_password_file):
parser.error("--ask-vault-pass and --vault-password-file are mutually exclusive")
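The inline su/sudo conflict test above was folded into a shared helper. A minimal sketch of what utils.check_mutually_exclusive_privilege does, with the parser.error callback generalized to any error callable (the body is an assumption, simplified from the real helper):

```python
from types import SimpleNamespace

def check_mutually_exclusive_privilege(options, error):
    # Simplified sketch: su-style and sudo-style escalation flags conflict.
    if (getattr(options, 'su', False) or getattr(options, 'su_user', None)) and \
       (getattr(options, 'sudo', False) or getattr(options, 'sudo_user', None)):
        error("sudo arguments and su arguments are mutually exclusive")

# Conflicting flags trigger exactly one error; a single method triggers none.
errors = []
check_mutually_exclusive_privilege(
    SimpleNamespace(su=True, su_user=None, sudo=True, sudo_user=None),
    errors.append)
print(len(errors))  # 1
```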
@ -101,20 +97,20 @@ class Cli(object):
pattern = args[0]
sshpass = None
sudopass = None
su_pass = None
vault_pass = None
sshpass = becomepass = vault_pass = become_method = None
options.ask_pass = options.ask_pass or C.DEFAULT_ASK_PASS
# Never ask for an SSH password when we run with local connection
if options.connection == "local":
options.ask_pass = False
options.ask_sudo_pass = options.ask_sudo_pass or C.DEFAULT_ASK_SUDO_PASS
options.ask_su_pass = options.ask_su_pass or C.DEFAULT_ASK_SU_PASS
else:
options.ask_pass = options.ask_pass or C.DEFAULT_ASK_PASS
options.ask_vault_pass = options.ask_vault_pass or C.DEFAULT_ASK_VAULT_PASS
(sshpass, sudopass, su_pass, vault_pass) = utils.ask_passwords(ask_pass=options.ask_pass, ask_sudo_pass=options.ask_sudo_pass, ask_su_pass=options.ask_su_pass, ask_vault_pass=options.ask_vault_pass)
# become
utils.normalize_become_options(options)
prompt_method = utils.choose_pass_prompt(options)
(sshpass, becomepass, vault_pass) = utils.ask_passwords(ask_pass=options.ask_pass, become_ask_pass=options.become_ask_pass, ask_vault_pass=options.ask_vault_pass, become_method=prompt_method)
# read vault_pass from a file
if not options.ask_vault_pass and options.vault_password_file:
@ -126,6 +122,7 @@ class Cli(object):
if options.subset:
inventory_manager.subset(options.subset)
hosts = inventory_manager.list_hosts(pattern)
if len(hosts) == 0:
callbacks.display("No hosts matched", stderr=True)
sys.exit(0)
@ -135,16 +132,10 @@ class Cli(object):
callbacks.display(' %s' % host)
sys.exit(0)
if ((options.module_name == 'command' or options.module_name == 'shell')
and not options.module_args):
if options.module_name in ['command','shell'] and not options.module_args:
callbacks.display("No argument passed to %s module" % options.module_name, color='red', stderr=True)
sys.exit(1)
if options.su_user or options.ask_su_pass:
options.su = True
options.sudo_user = options.sudo_user or C.DEFAULT_SUDO_USER
options.su_user = options.su_user or C.DEFAULT_SU_USER
if options.tree:
utils.prepare_writeable_dir(options.tree)
@ -160,17 +151,15 @@ class Cli(object):
forks=options.forks,
pattern=pattern,
callbacks=self.callbacks,
sudo=options.sudo,
sudo_pass=sudopass,
sudo_user=options.sudo_user,
transport=options.connection,
subset=options.subset,
check=options.check,
diff=options.check,
su=options.su,
su_pass=su_pass,
su_user=options.su_user,
vault_pass=vault_pass,
become=options.become,
become_method=options.become_method,
become_pass=becomepass,
become_user=options.become_user,
extra_vars=extra_vars,
)


@ -31,7 +31,6 @@ import sys
import tarfile
import tempfile
import urllib
import urllib2
import yaml
from collections import defaultdict
@ -42,6 +41,7 @@ from optparse import OptionParser
import ansible.constants as C
import ansible.utils
from ansible.errors import AnsibleError
from ansible.module_utils.urls import open_url
default_meta_template = """---
galaxy_info:
@ -209,6 +209,8 @@ def build_option_parser(action):
parser.add_option(
'-s', '--server', dest='api_server', default="galaxy.ansible.com",
help='The API server destination')
parser.add_option('-c', '--ignore-certs', action='store_true', dest='ignore_certs', default=False,
help='Ignore SSL certificate validation errors.')
if action in ("init","install"):
parser.add_option(
@ -245,15 +247,18 @@ def exit_without_ignore(options, rc=1):
# Galaxy API functions
#-------------------------------------------------------------------------------------
def api_get_config(api_server):
def api_get_config(api_server, ignore_certs=False):
"""
Fetches the Galaxy API current version to ensure
the API server is up and reachable.
"""
validate_certs = True
if ignore_certs:
validate_certs = False
try:
url = 'https://%s/api/' % api_server
data = json.load(urllib2.urlopen(url))
data = json.load(open_url(url, validate_certs=validate_certs))
if not data.get("current_version",None):
return None
else:
@ -261,11 +266,15 @@ def api_get_config(api_server):
except:
return None
def api_lookup_role_by_name(api_server, role_name, notify=True):
def api_lookup_role_by_name(api_server, role_name, parser, notify=True, ignore_certs=False):
"""
Uses the Galaxy API to do a lookup on the role owner/name.
"""
validate_certs = True
if ignore_certs:
validate_certs = False
role_name = urllib.quote(role_name)
try:
@ -281,7 +290,7 @@ def api_lookup_role_by_name(api_server, role_name, notify=True):
url = 'https://%s/api/v1/roles/?owner__username=%s&name=%s' % (api_server,user_name,role_name)
try:
data = json.load(urllib2.urlopen(url))
data = json.load(open_url(url, validate_certs=validate_certs))
if len(data["results"]) == 0:
return None
else:
@ -289,49 +298,56 @@ def api_lookup_role_by_name(api_server, role_name, notify=True):
except:
return None
def api_fetch_role_related(api_server, related, role_id):
def api_fetch_role_related(api_server, related, role_id, ignore_certs=False):
"""
Uses the Galaxy API to fetch the list of related items for
the given role. The url comes from the 'related' field of
the role.
"""
validate_certs = True
if ignore_certs:
validate_certs = False
try:
url = 'https://%s/api/v1/roles/%d/%s/?page_size=50' % (api_server, int(role_id), related)
data = json.load(urllib2.urlopen(url))
data = json.load(open_url(url, validate_certs=validate_certs))
results = data['results']
done = (data.get('next', None) == None)
done = (data.get('next_link', None) == None)
while not done:
url = 'https://%s%s' % (api_server, data['next'])
url = 'https://%s%s' % (api_server, data['next_link'])
print url
data = json.load(urllib2.urlopen(url))
data = json.load(open_url(url))
results += data['results']
done = (data.get('next', None) == None)
done = (data.get('next_link', None) == None)
return results
except:
return None
def api_get_list(api_server, what):
def api_get_list(api_server, what, ignore_certs=False):
"""
Uses the Galaxy API to fetch the list of items specified.
"""
validate_certs = True
if ignore_certs:
validate_certs = False
try:
url = 'https://%s/api/v1/%s/?page_size' % (api_server, what)
data = json.load(urllib2.urlopen(url))
data = json.load(open_url(url, validate_certs=validate_certs))
if "results" in data:
results = data['results']
else:
results = data
done = True
if "next" in data:
done = (data.get('next', None) == None)
if "next_link" in data:
done = (data.get('next_link', None) == None)
while not done:
url = 'https://%s%s' % (api_server, data['next'])
url = 'https://%s%s' % (api_server, data['next_link'])
print url
data = json.load(urllib2.urlopen(url))
data = json.load(open_url(url))
results += data['results']
done = (data.get('next', None) == None)
done = (data.get('next_link', None) == None)
return results
except:
print "- failed to download the %s list" % what
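The Galaxy API helpers above all share the same pagination shape: accumulate 'results', then follow 'next_link' until it is absent. A sketch under the assumption that fetch stands in for open_url() plus json.load():

```python
def collect_results(fetch, first_url):
    # Accumulate paginated 'results', following 'next_link' until exhausted.
    data = fetch(first_url)
    results = list(data.get('results', []))
    while data.get('next_link'):
        data = fetch(data['next_link'])
        results += data.get('results', [])
    return results

# Usage with an in-memory stub in place of real HTTP:
pages = {
    '/api/v1/platforms/?page_size=50': {'results': [1, 2], 'next_link': '/p2'},
    '/p2': {'results': [3], 'next_link': None},
}
print(collect_results(pages.get, '/api/v1/platforms/?page_size=50'))  # [1, 2, 3]
```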
@ -423,18 +439,16 @@ def get_galaxy_install_info(role_name, options):
Returns the YAML data contained in 'meta/.galaxy_install_info',
if it exists.
"""
info_data = None
try:
info_path = os.path.join(get_role_path(role_name, options), 'meta/.galaxy_install_info')
if os.path.isfile(info_path):
f = open(info_path, 'r')
info_data = yaml.safe_load(f)
f.close()
return info_data
else:
return None
except:
return None
pass
return info_data
def write_galaxy_install_info(role_name, role_version, options):
"""
@ -451,6 +465,7 @@ def write_galaxy_install_info(role_name, role_version, options):
info_path = os.path.join(get_role_path(role_name, options), 'meta/.galaxy_install_info')
f = open(info_path, 'w+')
info_data = yaml.safe_dump(info, f)
f.write(info_data)
f.close()
except:
return False
@ -475,6 +490,11 @@ def fetch_role(role_name, target, role_data, options):
Downloads the archived role from github to a temp location, extracts
it, and then copies the extracted role to the role library path.
"""
ignore_certs = get_opt(options, "ignore_certs")
validate_certs = True
if ignore_certs:
validate_certs = False
# first grab the file and save it to a temp location
if '://' in role_name:
@ -484,7 +504,7 @@ def fetch_role(role_name, target, role_data, options):
print "- downloading role from %s" % archive_url
try:
url_file = urllib2.urlopen(archive_url)
url_file = open_url(archive_url, validate_certs=validate_certs)
temp_file = tempfile.NamedTemporaryFile(delete=False)
data = url_file.read()
while data:
@ -495,7 +515,7 @@ def fetch_role(role_name, target, role_data, options):
except Exception, e:
# TODO: better urllib2 error handling for error
# messages that are more exact
print "- error: failed to download the file."
print "- error: failed to download the file: %s" % str(e)
return False
def install_role(role_name, role_version, role_filename, options):
@ -568,7 +588,7 @@ def install_role(role_name, role_version, role_filename, options):
# write out the install info file for later use
write_galaxy_install_info(role_name, role_version, options)
except OSError, e:
print "- error: you do not have permission to modify files in %s" % role_path
print "- error: you do not have permission to modify files in %s: %s" % (role_path, str(e))
return False
# return the parsed yaml metadata
@ -585,13 +605,14 @@ def execute_init(args, options, parser):
of a role that complies with the galaxy metadata format.
"""
init_path = get_opt(options, 'init_path', './')
api_server = get_opt(options, "api_server", "galaxy.ansible.com")
force = get_opt(options, 'force', False)
offline = get_opt(options, 'offline', False)
init_path = get_opt(options, 'init_path', './')
api_server = get_opt(options, "api_server", "galaxy.ansible.com")
force = get_opt(options, 'force', False)
offline = get_opt(options, 'offline', False)
ignore_certs = get_opt(options, 'ignore_certs', False)
if not offline:
api_config = api_get_config(api_server)
api_config = api_get_config(api_server, ignore_certs)
if not api_config:
print "- the API server (%s) is not responding, please try again later." % api_server
sys.exit(1)
@ -613,7 +634,7 @@ def execute_init(args, options, parser):
sys.exit(1)
except Exception, e:
parser.print_help()
print "- no role name specified for init"
print "- could not init specified role name: %s" % str(e)
sys.exit(1)
ROLE_DIRS = ('defaults','files','handlers','meta','tasks','templates','vars')
@ -641,10 +662,10 @@ def execute_init(args, options, parser):
# dependencies section
platforms = []
if not offline:
platforms = api_get_list(api_server, "platforms") or []
platforms = api_get_list(api_server, "platforms", ignore_certs) or []
categories = []
if not offline:
categories = api_get_list(api_server, "categories") or []
categories = api_get_list(api_server, "categories", ignore_certs) or []
# group the list of platforms from the api based
# on their names, with the release field being
@ -688,9 +709,10 @@ def execute_info(args, options, parser):
print "- you must specify a user/role name"
sys.exit(1)
api_server = get_opt(options, "api_server", "galaxy.ansible.com")
api_config = api_get_config(api_server)
roles_path = get_opt(options, "roles_path")
api_server = get_opt(options, "api_server", "galaxy.ansible.com")
api_config = api_get_config(api_server)
roles_path = get_opt(options, "roles_path")
ignore_certs = get_opt(options, "ignore_certs", False)
for role in args:
@ -703,7 +725,7 @@ def execute_info(args, options, parser):
del install_info['version']
role_info.update(install_info)
remote_data = api_lookup_role_by_name(api_server, role, False)
remote_data = api_lookup_role_by_name(api_server, role, parser, False, ignore_certs)
if remote_data:
role_info.update(remote_data)
@ -756,11 +778,11 @@ def execute_install(args, options, parser):
print "- please specify a user/role name, or a roles file, but not both"
sys.exit(1)
api_server = get_opt(options, "api_server", "galaxy.ansible.com")
no_deps = get_opt(options, "no_deps", False)
roles_path = get_opt(options, "roles_path")
api_server = get_opt(options, "api_server", "galaxy.ansible.com")
no_deps = get_opt(options, "no_deps", False)
roles_path = get_opt(options, "roles_path")
ignore_certs = get_opt(options, "ignore_certs")
roles_done = []
if role_file:
f = open(role_file, 'r')
if role_file.endswith('.yaml') or role_file.endswith('.yml'):
@ -799,18 +821,18 @@ def execute_install(args, options, parser):
tmp_file = fetch_role(role_src, None, None, options)
else:
# installing from galaxy
api_config = api_get_config(api_server)
api_config = api_get_config(api_server, ignore_certs)
if not api_config:
print "- the API server (%s) is not responding, please try again later." % api_server
sys.exit(1)
role_data = api_lookup_role_by_name(api_server, role_src)
role_data = api_lookup_role_by_name(api_server, role_src, parser, True, ignore_certs)
if not role_data:
print "- sorry, %s was not found on %s." % (role_src, api_server)
exit_without_ignore(options)
continue
role_versions = api_fetch_role_related(api_server, 'versions', role_data['id'])
role_versions = api_fetch_role_related(api_server, 'versions', role_data['id'], ignore_certs)
if "version" not in role or role['version'] == '':
# convert the version names to LooseVersion objects
# and sort them to get the latest version. If there
@ -842,9 +864,12 @@ def execute_install(args, options, parser):
if not no_deps and installed:
if not role_data:
role_data = get_role_metadata(role.get("name"), options)
role_dependencies = role_data['dependencies']
role_dependencies = role_data.get('dependencies',[])
else:
role_dependencies = role_data['summary_fields']['dependencies'] # api_fetch_role_related(api_server, 'dependencies', role_data['id'])
role_dependencies = role_data['summary_fields'].get('dependencies',[])
# api_fetch_role_related(api_server, 'dependencies', role_data['id'])
if not role_dependencies:
role_dependencies = []
for dep in role_dependencies:
if isinstance(dep, basestring):
dep = ansible.utils.role_spec_parse(dep)
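The dependency handling above was hardened to tolerate roles without a dependencies field. A compact sketch of the new lookup (the function name is illustrative):

```python
# dict.get with a default list means a role with no 'dependencies' field
# no longer raises KeyError; a None value also collapses to an empty list.
def role_dependencies(role_data, from_api):
    if from_api:
        deps = role_data['summary_fields'].get('dependencies', [])
    else:
        deps = role_data.get('dependencies', [])
    return deps or []

print(role_dependencies({'summary_fields': {}}, True))            # []
print(role_dependencies({'dependencies': ['user.role']}, False))  # ['user.role']
```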


@ -97,7 +97,8 @@ def main(args):
help="one-step-at-a-time: confirm each task before running")
parser.add_option('--start-at-task', dest='start_at',
help="start the playbook at the task matching this name")
parser.add_option('--force-handlers', dest='force_handlers', action='store_true',
parser.add_option('--force-handlers', dest='force_handlers',
default=C.DEFAULT_FORCE_HANDLERS, action='store_true',
help="run handlers even if a task fails")
parser.add_option('--flush-cache', dest='flush_cache', action='store_true',
help="clear the fact cache")
@ -108,35 +109,33 @@ def main(args):
parser.print_help(file=sys.stderr)
return 1
# su and sudo command line arguments need to be mutually exclusive
if (options.su or options.su_user or options.ask_su_pass) and \
(options.sudo or options.sudo_user or options.ask_sudo_pass):
parser.error("Sudo arguments ('--sudo', '--sudo-user', and '--ask-sudo-pass') "
"and su arguments ('-su', '--su-user', and '--ask-su-pass') are "
"mutually exclusive")
# privilege escalation command line arguments need to be mutually exclusive
utils.check_mutually_exclusive_privilege(options, parser)
if (options.ask_vault_pass and options.vault_password_file):
parser.error("--ask-vault-pass and --vault-password-file are mutually exclusive")
sshpass = None
sudopass = None
su_pass = None
becomepass = None
vault_pass = None
options.ask_vault_pass = options.ask_vault_pass or C.DEFAULT_ASK_VAULT_PASS
if options.listhosts or options.syntax or options.listtasks or options.listtags:
(_, _, _, vault_pass) = utils.ask_passwords(ask_vault_pass=options.ask_vault_pass)
(_, _, vault_pass) = utils.ask_passwords(ask_vault_pass=options.ask_vault_pass)
else:
options.ask_pass = options.ask_pass or C.DEFAULT_ASK_PASS
# Never ask for an SSH password when we run with local connection
if options.connection == "local":
options.ask_pass = False
options.ask_sudo_pass = options.ask_sudo_pass or C.DEFAULT_ASK_SUDO_PASS
options.ask_su_pass = options.ask_su_pass or C.DEFAULT_ASK_SU_PASS
(sshpass, sudopass, su_pass, vault_pass) = utils.ask_passwords(ask_pass=options.ask_pass, ask_sudo_pass=options.ask_sudo_pass, ask_su_pass=options.ask_su_pass, ask_vault_pass=options.ask_vault_pass)
options.sudo_user = options.sudo_user or C.DEFAULT_SUDO_USER
options.su_user = options.su_user or C.DEFAULT_SU_USER
# set pe options
utils.normalize_become_options(options)
prompt_method = utils.choose_pass_prompt(options)
(sshpass, becomepass, vault_pass) = utils.ask_passwords(ask_pass=options.ask_pass,
become_ask_pass=options.become_ask_pass,
ask_vault_pass=options.ask_vault_pass,
become_method=prompt_method)
# read vault_pass from a file
if not options.ask_vault_pass and options.vault_password_file:
@ -197,20 +196,18 @@ def main(args):
stats=stats,
timeout=options.timeout,
transport=options.connection,
sudo=options.sudo,
sudo_user=options.sudo_user,
sudo_pass=sudopass,
become=options.become,
become_method=options.become_method,
become_user=options.become_user,
become_pass=becomepass,
extra_vars=extra_vars,
private_key_file=options.private_key_file,
only_tags=only_tags,
skip_tags=skip_tags,
check=options.check,
diff=options.diff,
su=options.su,
su_pass=su_pass,
su_user=options.su_user,
vault_password=vault_pass,
force_handlers=options.force_handlers
force_handlers=options.force_handlers,
)
if options.flush_cache:
@ -313,7 +310,7 @@ def main(args):
return 3
except errors.AnsibleError, e:
display("ERROR: %s" % e, color='red')
display(u"ERROR: %s" % utils.unicode.to_unicode(e, nonstring='simplerepr'), color='red')
return 1
return 0
@ -326,7 +323,7 @@ if __name__ == "__main__":
try:
sys.exit(main(sys.argv[1:]))
except errors.AnsibleError, e:
display("ERROR: %s" % e, color='red', stderr=True)
display(u"ERROR: %s" % utils.unicode.to_unicode(e, nonstring='simplerepr'), color='red', stderr=True)
sys.exit(1)
except KeyboardInterrupt, ke:
display("ERROR: interrupted", color='red', stderr=True)


@ -140,6 +140,11 @@ def main(args):
help='adds the hostkey for the repo url if not already added')
parser.add_option('--key-file', dest='key_file',
help="Pass '-i <key_file>' to the SSH arguments used by git.")
parser.add_option('--git-force', dest='gitforce', default=False, action='store_true',
help='modified files in the working git repository will be discarded')
parser.add_option('--track-submodules', dest='tracksubmodules', default=False, action='store_true',
help='submodules will track the latest commit on their master branch (or other branch specified in .gitmodules).'
' This is equivalent to specifying the --remote flag to git submodule update')
options, args = parser.parse_args(args)
hostname = socket.getfqdn()
@ -182,13 +187,22 @@ def main(args):
if options.key_file:
repo_opts += ' key_file=%s' % options.key_file
if options.gitforce:
repo_opts += ' force=yes'
if options.tracksubmodules:
repo_opts += ' track_submodules=yes'
path = utils.plugins.module_finder.find_plugin(options.module_name)
if path is None:
sys.stderr.write("module '%s' not found.\n" % options.module_name)
return 1
cmd = 'ansible localhost -i "%s" %s -m %s -a "%s"' % (
inv_opts, base_opts, options.module_name, repo_opts
bin_path = os.path.dirname(os.path.abspath(__file__))
cmd = '%s/ansible localhost -i "%s" %s -m %s -a "%s"' % (
bin_path, inv_opts, base_opts, options.module_name, repo_opts
)
for ev in options.extra_vars:
cmd += ' -e "%s"' % ev
@ -221,7 +235,7 @@ def main(args):
print >>sys.stderr, "Could not find a playbook to run."
return 1
cmd = 'ansible-playbook %s %s' % (base_opts, playbook)
cmd = '%s/ansible-playbook %s %s' % (bin_path, base_opts, playbook)
if options.vault_password_file:
cmd += " --vault-password-file=%s" % options.vault_password_file
if options.inventory:


@ -2,12 +2,12 @@
.\" Title: ansible-doc
.\" Author: :doctype:manpage
.\" Generator: DocBook XSL Stylesheets v1.78.1 <http://docbook.sf.net/>
.\" Date: 12/09/2014
.\" Date: 03/10/2015
.\" Manual: System administration commands
.\" Source: Ansible 1.9
.\" Source: Ansible 1.9.0
.\" Language: English
.\"
.TH "ANSIBLE\-DOC" "1" "12/09/2014" "Ansible 1\&.9" "System administration commands"
.TH "ANSIBLE\-DOC" "1" "03/10/2015" "Ansible 1\&.9\&.0" "System administration commands"
.\" -----------------------------------------------------------------
.\" * Define some portability stuff
.\" -----------------------------------------------------------------
@ -64,3 +64,9 @@ Ansible is released under the terms of the GPLv3 License\&.
\fBansible\-playbook\fR(1), \fBansible\fR(1), \fBansible\-pull\fR(1)
.sp
Extensive documentation is available in the documentation site: http://docs\&.ansible\&.com\&. IRC and mailing list info can be found in file CONTRIBUTING\&.md, available in: https://github\&.com/ansible/ansible
.SH "AUTHOR"
.PP
\fB:doctype:manpage\fR
.RS 4
Author.
.RE

View file

@ -2,12 +2,12 @@
.\" Title: ansible-galaxy
.\" Author: [see the "AUTHOR" section]
.\" Generator: DocBook XSL Stylesheets v1.78.1 <http://docbook.sf.net/>
.\" Date: 12/09/2014
.\" Date: 03/10/2015
.\" Manual: System administration commands
.\" Source: Ansible 1.9
.\" Source: Ansible 1.9.0
.\" Language: English
.\"
.TH "ANSIBLE\-GALAXY" "1" "12/09/2014" "Ansible 1\&.9" "System administration commands"
.TH "ANSIBLE\-GALAXY" "1" "03/10/2015" "Ansible 1\&.9\&.0" "System administration commands"
.\" -----------------------------------------------------------------
.\" * Define some portability stuff
.\" -----------------------------------------------------------------


@ -2,12 +2,12 @@
.\" Title: ansible-playbook
.\" Author: :doctype:manpage
.\" Generator: DocBook XSL Stylesheets v1.78.1 <http://docbook.sf.net/>
.\" Date: 12/09/2014
.\" Date: 03/10/2015
.\" Manual: System administration commands
.\" Source: Ansible 1.9
.\" Source: Ansible 1.9.0
.\" Language: English
.\"
.TH "ANSIBLE\-PLAYBOOK" "1" "12/09/2014" "Ansible 1\&.9" "System administration commands"
.TH "ANSIBLE\-PLAYBOOK" "1" "03/10/2015" "Ansible 1\&.9\&.0" "System administration commands"
.\" -----------------------------------------------------------------
.\" * Define some portability stuff
.\" -----------------------------------------------------------------
@ -66,7 +66,7 @@ search path to load modules from\&. The default is
.PP
\fB\-e\fR \fIVARS\fR, \fB\-\-extra\-vars=\fR\fIVARS\fR
.RS 4
Extra variables to inject into a playbook, in key=value key=value format or as quoted JSON (hashes and arrays)\&.
Extra variables to inject into a playbook, in key=value key=value format or as quoted JSON (hashes and arrays)\&. To load variables from a file, specify the file preceded by @ (e\&.g\&. @vars\&.yml)\&.
.RE
.PP
\fB\-f\fR \fINUM\fR, \fB\-\-forks=\fR\fINUM\fR
@ -181,3 +181,9 @@ Ansible is released under the terms of the GPLv3 License\&.
\fBansible\fR(1), \fBansible\-pull\fR(1), \fBansible\-doc\fR(1)
.sp
Extensive documentation is available in the documentation site: http://docs\&.ansible\&.com\&. IRC and mailing list info can be found in file CONTRIBUTING\&.md, available in: https://github\&.com/ansible/ansible
.SH "AUTHOR"
.PP
\fB:doctype:manpage\fR
.RS 4
Author.
.RE


@ -2,12 +2,12 @@
.\" Title: ansible
.\" Author: :doctype:manpage
.\" Generator: DocBook XSL Stylesheets v1.78.1 <http://docbook.sf.net/>
.\" Date: 12/09/2014
.\" Date: 03/10/2015
.\" Manual: System administration commands
.\" Source: Ansible 1.9
.\" Source: Ansible 1.9.0
.\" Language: English
.\"
.TH "ANSIBLE" "1" "12/09/2014" "Ansible 1\&.9" "System administration commands"
.TH "ANSIBLE" "1" "03/10/2015" "Ansible 1\&.9\&.0" "System administration commands"
.\" -----------------------------------------------------------------
.\" * Define some portability stuff
.\" -----------------------------------------------------------------
@ -104,3 +104,9 @@ Ansible is released under the terms of the GPLv3 License\&.
\fBansible\fR(1), \fBansible\-playbook\fR(1), \fBansible\-doc\fR(1)
.sp
Extensive documentation is available in the documentation site: http://docs\&.ansible\&.com\&. IRC and mailing list info can be found in file CONTRIBUTING\&.md, available in: https://github\&.com/ansible/ansible
.SH "AUTHOR"
.PP
\fB:doctype:manpage\fR
.RS 4
Author.
.RE

View file

@ -2,12 +2,12 @@
.\" Title: ansible-vault
.\" Author: [see the "AUTHOR" section]
.\" Generator: DocBook XSL Stylesheets v1.78.1 <http://docbook.sf.net/>
.\" Date: 12/09/2014
.\" Date: 03/10/2015
.\" Manual: System administration commands
.\" Source: Ansible 1.9
.\" Source: Ansible 1.9.0
.\" Language: English
.\"
.TH "ANSIBLE\-VAULT" "1" "12/09/2014" "Ansible 1\&.9" "System administration commands"
.TH "ANSIBLE\-VAULT" "1" "03/10/2015" "Ansible 1\&.9\&.0" "System administration commands"
.\" -----------------------------------------------------------------
.\" * Define some portability stuff
.\" -----------------------------------------------------------------

View file

@ -2,12 +2,12 @@
.\" Title: ansible
.\" Author: :doctype:manpage
.\" Generator: DocBook XSL Stylesheets v1.78.1 <http://docbook.sf.net/>
.\" Date: 12/09/2014
.\" Date: 03/10/2015
.\" Manual: System administration commands
.\" Source: Ansible 1.9
.\" Source: Ansible 1.9.0
.\" Language: English
.\"
.TH "ANSIBLE" "1" "12/09/2014" "Ansible 1\&.9" "System administration commands"
.TH "ANSIBLE" "1" "03/10/2015" "Ansible 1\&.9\&.0" "System administration commands"
.\" -----------------------------------------------------------------
.\" * Define some portability stuff
.\" -----------------------------------------------------------------
@ -89,19 +89,14 @@ The
to pass to the module\&.
.RE
.PP
\fB\-k\fR, \fB\-\-ask\-pass\fR
\fB\-k\fR, \fB\-\-ask\-pass\fR
.RS 4
Prompt for the SSH password instead of assuming key\-based authentication with ssh\-agent\&.
.RE
.PP
\fB--ask-su-pass\fR
.RS 4
Prompt for the su password instead of assuming key\-based authentication with ssh\-agent\&.
.RE
.PP
\fB\-K\fR, \fB\-\-ask\-sudo\-pass\fR
.RS 4
Prompt for the password to use with \-\-sudo, if any\&.
Prompt for the password to use with \-\-sudo, if any
.RE
.PP
\fB\-o\fR, \fB\-\-one\-line\fR
@ -111,12 +106,7 @@ Try to output everything on one line\&.
.PP
\fB\-s\fR, \fB\-\-sudo\fR
.RS 4
Run the command as the user given by \-u and sudo to root.
.RE
.PP
\fB\-S\fR, \fB\-\-su\fR
.RS 4
Run operations with su\&.
Run the command as the user given by \-u and sudo to root\&.
.RE
.PP
\fB\-t\fR \fIDIRECTORY\fR, \fB\-\-tree=\fR\fIDIRECTORY\fR
@ -221,3 +211,9 @@ Ansible is released under the terms of the GPLv3 License\&.
\fBansible\-playbook\fR(1), \fBansible\-pull\fR(1), \fBansible\-doc\fR(1)
.sp
Extensive documentation is available in the documentation site: http://docs\&.ansible\&.com\&. IRC and mailing list info can be found in file CONTRIBUTING\&.md, available in: https://github\&.com/ansible/ansible
.SH "AUTHOR"
.PP
\fB:doctype:manpage\fR
.RS 4
Author.
.RE

View file

@ -113,6 +113,24 @@
}
</style>
<!-- Google Code for Remarketing Tag -->
<script type="text/javascript">
/* <![CDATA[ */
var google_conversion_id = 972577926;
var google_custom_params = window.google_tag_params;
var google_remarketing_only = true;
/* ]]> */
</script>
<script type="text/javascript" src="//www.googleadservices.com/pagead/conversion.js">
</script>
<noscript>
<div style="display:inline;">
<img height="1" width="1" style="border-style:none;" alt=""
src="//googleads.g.doubleclick.net/pagead/viewthroughconversion/972577926/?value=0&amp;guid=ON&amp;script=0"/>
</div>
</noscript>
<!-- End of Google Code for Remarketing Tag -->
</head>
<body class="wy-body-for-nav">
@ -180,10 +198,10 @@
<!-- AnsibleFest and free eBook preview stuff -->
<center>
<a href="http://www.ansible.com/tower?utm_source=docs">
<img src="http://www.ansible.com/hs-fs/hub/330046/file-2031636235-png/DL_Folder/festlondon-docs.png">
<img src="http://www.ansible.com/hubfs/Docs_Ads/TowerDocs.png">
</a>
<a href="http://www.ansible.com/ansible-book">
<img src="http://www.ansible.com/hs-fs/hub/330046/file-2031636250-png/DL_Folder/Ebook-docs.png">
<a href="https://www.eventbrite.com/e/ansiblefest-nyc-2015-tickets-16058031003">
<img src="http://www.ansible.com/hubfs/Docs_Ads/Untitled_design_1.png">
</a>
<br/>&nbsp;<br/>
<br/>&nbsp;<br/>

146
docsite/rst/become.rst Normal file
View file

@ -0,0 +1,146 @@
Become (Privilege Escalation)
+++++++++++++++++++++++++++++
Ansible can use existing privilege escalation systems to allow a user to execute tasks as another.
.. contents:: Topics
Become
``````
Before 1.9, Ansible mostly allowed the use of `sudo` and a limited use of `su` to allow a login/remote user to become a different user
and execute tasks or create resources with the second user's permissions. As of 1.9 `become` supersedes the old sudo/su, while still
being backwards compatible. This new system also makes it easier to add other privilege escalation tools like `pbrun` (Powerbroker),
`pfexec` and others.
New directives
--------------
become
equivalent to adding `sudo:` or `su:` to a play or task, set to 'true'/'yes' to activate privilege escalation
become_user
equivalent to adding sudo_user: or su_user: to a play or task
become_method
at the play or task level; overrides the default method set in ansible.cfg
New ansible_ variables
----------------------
Each allows you to set an option per group and/or host
ansible_become
equivalent to ansible_sudo or ansible_su; allows you to force privilege escalation
ansible_become_method
allows you to set the privilege escalation method
ansible_become_user
equivalent to ansible_sudo_user or ansible_su_user; allows you to set the user you become through privilege escalation
ansible_become_pass
equivalent to ansible_sudo_pass or ansible_su_pass; allows you to set the privilege escalation password
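For example (a sketch; the group, host, and user names here are illustrative), these variables can be
set per host or group in an inventory file::

    [webservers]
    web1.example.com ansible_become=true ansible_become_user=deploy
    web2.example.com ansible_become=true ansible_become_method=su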
New command line options
------------------------
--ask-become-pass
ask for privilege escalation password
-b, --become
run operations with become (no password implied)
--become-method=BECOME_METHOD
privilege escalation method to use (default=sudo),
valid choices: [ sudo | su | pbrun | pfexec ]
--become-user=BECOME_USER
run operations as this user (default=root)
sudo and su still work!
-----------------------
Old playbooks will not need to be changed. Even though they are deprecated, the sudo and su directives will continue to work, though it
is recommended to move to become since they may be retired at some point. You cannot mix directives on the same object, however; ansible
will complain if you try to.
Become will default to using the old sudo/su configs and variables if they exist, but will override them if you specify any of the
new ones.
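As a minimal sketch, a play using the new directives might look like the
following (the host group and service name are illustrative)::

    - hosts: webservers
      become: yes
      become_user: root
      become_method: sudo
      tasks:
        - name: restart a service as root via become
          service: name=nginx state=restarted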
Limitations
-----------
Although privilege escalation is mostly intuitive, there are a few limitations
on how it works. Users should be aware of these to avoid surprises.
Becoming an Unprivileged User
=============================
Ansible has a limitation with regard to becoming an
unprivileged user that can be a security risk if users are not aware of it.
Ansible modules are executed on the remote machine by first substituting the
parameters into the module file, then copying the file to the remote machine,
and finally executing it there. If the module file is executed without using
become, if the become user is root, or if the connection to the remote
machine is made as root, then the module file is created with permissions that
allow reading only by the connecting user and root.
If the become user is an unprivileged user, then Ansible has no choice but
to make the module file world readable as there's no other way for the user
Ansible connects as to save the file so that the user that we're becoming can
read it.
If any of the parameters passed to the module are sensitive in nature, then
anyone who can read the module file can see those pieces of data for the duration
of the Ansible module execution. Once the module is done executing, Ansible
will delete the temporary file. If you trust the client machines then there's
no problem here. If you do not trust the client machines then this is
a potential danger.
Ways to resolve this include:
* Use :ref:`pipelining`. When pipelining is enabled, Ansible doesn't save the
module to a temporary file on the client. Instead it pipes the module to
the remote python interpreter's stdin. Pipelining does not work for
non-python modules.
* Don't perform an action on the remote machine by becoming an unprivileged
user. Temporary files are protected by UNIX file permissions when you
become root or do not use become.
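For instance, pipelining can be enabled with a snippet like the following in
ansible.cfg (a sketch; see the configuration documentation for details)::

    [ssh_connection]
    pipelining = True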
Connection Plugin Support
=========================
Privilege escalation methods must also be supported by the connection plugin
used. Most connection plugins will warn if they do not support become. Some
will just ignore it as they always run as root (jail, chroot, etc).
Only one method may be enabled per host
=======================================
Methods cannot be chained. You cannot use ``sudo /bin/su -`` to become a user,
you need to have privileges to run the command as that user in sudo or be able
to su directly to it (the same for pbrun, pfexec or other supported methods).
Can't limit escalation to certain commands
==========================================
Privilege escalation permissions have to be general. Ansible does not always
use a specific command to do something but runs modules (code) from
a temporary file whose name changes every time. If you have '/sbin/service'
or '/bin/chmod' as the allowed commands, this will fail with Ansible, as those
paths won't match the temporary file that Ansible creates to run the
module.
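For example, a restricted sudoers entry like the following (the user and command
paths are illustrative) would not work with Ansible, because the module runs
from a randomly named temporary file rather than one of the whitelisted paths::

    deploy ALL = (root) NOPASSWD: /sbin/service, /bin/chmod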
.. seealso::
`Mailing List <http://groups.google.com/group/ansible-project>`_
Questions? Help? Ideas? Stop by the list on Google Groups
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel

View file

@ -112,7 +112,7 @@ For example, using double vs single quotes in the above example would
evaluate the variable on the box you were on.
So far we've been demoing simple command execution, but most Ansible modules usually do not work like
simple scripts. They make the remote system look like you state, and run the commands necessary to
simple scripts. They make the remote system look like a state, and run the commands necessary to
get it there. This is commonly referred to as 'idempotence', and is a core design goal of Ansible.
However, we also recognize that running arbitrary commands is equally important, so Ansible easily supports both.

View file

@ -252,6 +252,20 @@ This options forces color mode even when running without a TTY::
force_color = 1
.. _force_handlers:
force_handlers
==============
.. versionadded:: 1.9.1
This option causes notified handlers to run on a host even if a failure occurs on that host::
force_handlers = True
The default is False, meaning that handlers will not run if a failure has occurred on a host.
This can also be set per play or on the command line. See :ref:`handlers_and_failure` for more details.
.. _forks:
forks

View file

@ -24,7 +24,7 @@ For information about writing your own dynamic inventory source, see :doc:`devel
Example: The Cobbler External Inventory Script
``````````````````````````````````````````````
It is expected that many Ansible users with a reasonable amount of physical hardware may also be `Cobbler <http://cobbler.github.com>`_ users. (note: Cobbler was originally written by Michael DeHaan and is now lead by James Cammarata, who also works for Ansible, Inc).
It is expected that many Ansible users with a reasonable amount of physical hardware may also be `Cobbler <http://cobbler.github.com>`_ users. (note: Cobbler was originally written by Michael DeHaan and is now led by James Cammarata, who also works for Ansible, Inc).
While primarily used to kickoff OS installations and manage DHCP and DNS, Cobbler has a generic
layer that allows it to represent data for multiple configuration management systems (even at the same time), and has

View file

@ -27,12 +27,11 @@ What Version To Pick?
`````````````````````
Because it runs so easily from source and does not require any installation of software on remote
machines, many users will actually track the development version.
machines, many users will actually track the development version.
Ansible's release cycles are usually about two months long. Due to this
short release cycle, minor bugs will generally be fixed in the next release versus maintaining
backports on the stable branch. Major bugs will still have maintenance releases when needed, though
these are infrequent.
Ansible's release cycles are usually about four months long. Due to this short release cycle,
minor bugs will generally be fixed in the next release versus maintaining backports on the stable branch.
Major bugs will still have maintenance releases when needed, though these are infrequent.
If you are wishing to run the latest released version of Ansible and you are running Red Hat Enterprise Linux (TM), CentOS, Fedora, Debian, or Ubuntu, we recommend using the OS package manager.

View file

@ -29,6 +29,26 @@ write a task that looks like this::
Note that the above system only governs the failure of the particular task, so if an undefined
variable is used, it will still raise an error that users will need to address.
.. _handlers_and_failure:
Handlers and Failure
````````````````````
.. versionadded:: 1.9.1
When a task fails on a host, handlers which were previously notified
will *not* be run on that host. This can lead to cases where an unrelated failure
can leave a host in an unexpected state. For example, a task could update
a configuration file and notify a handler to restart some service. If a
task later on in the same play fails, the service will not be restarted despite
the configuration change.
You can change this behavior with the ``--force-handlers`` command-line option,
or by including ``force_handlers: True`` in a play, or ``force_handlers = True``
in ansible.cfg. When handlers are forced, they will run when notified even
if a task fails on that host. (Note that certain errors could still prevent
the handler from running, such as a host becoming unreachable.)
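As an illustrative sketch (the task, template, and service names are
hypothetical), the scenario above could be written as::

    - hosts: webservers
      force_handlers: True
      tasks:
        - name: update the configuration file
          template: src=app.conf.j2 dest=/etc/app.conf
          notify: restart app
        - name: a later task that may fail
          command: /usr/bin/might-fail
      handlers:
        - name: restart app
          service: name=app state=restarted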
.. _controlling_what_defines_failure:
Controlling What Defines Failure

View file

@ -140,6 +140,112 @@ default empty string return value if the key is not in the csv file
.. note:: The default delimiter is TAB, *not* comma.
.. _dns_lookup:
The DNS Lookup (dig)
````````````````````
.. versionadded:: 1.9.0
.. warning:: This lookup depends on the `dnspython <http://www.dnspython.org/>`_
library.
The ``dig`` lookup runs queries against DNS servers to retrieve DNS records for
a specific name (*FQDN* - fully qualified domain name). It is possible to lookup any DNS record in this manner.
There are a couple of different syntaxes that can be used to specify what record
should be retrieved, and for which name. It is also possible to explicitly
specify the DNS server(s) to use for lookups.
In its simplest form, the ``dig`` lookup plugin can be used to retrieve an IPv4
address (DNS ``A`` record) associated with an *FQDN*:
.. note:: If you need to obtain the ``AAAA`` record (IPv6 address), you must
specify the record type explicitly. Syntax for specifying the record
type is described below.
.. note:: The trailing dot in most of the examples listed is purely optional,
but is specified for the sake of completeness/correctness.
::
- debug: msg="The IPv4 address for example.com. is {{ lookup('dig', 'example.com.')}}"
In addition to the (default) ``A`` record, it is also possible to specify a different
record type that should be queried. This can be done by either passing in an
additional parameter of the form ``qtype=TYPE`` to the ``dig`` lookup, or by
appending ``/TYPE`` to the *FQDN* being queried. For example::
- debug: msg="The TXT record for gmail.com. is {{ lookup('dig', 'gmail.com.', 'qtype=TXT') }}"
- debug: msg="The TXT record for gmail.com. is {{ lookup('dig', 'gmail.com./TXT') }}"
If multiple values are associated with the requested record, the results will be
returned as a comma-separated list. In such cases you may want to pass option
``wantlist=True`` to the plugin, which will result in the record values being
returned as a list over which you can iterate later on::
- debug: msg="One of the MX records for gmail.com. is {{ item }}"
with_items: "{{ lookup('dig', 'gmail.com./MX', wantlist=True) }}"
In the case of reverse DNS lookups (``PTR`` records), you can also use a convenience
syntax of format ``IP_ADDRESS/PTR``. The following three lines would produce the
same output::
- debug: msg="Reverse DNS for 8.8.8.8 is {{ lookup('dig', '8.8.8.8/PTR') }}"
- debug: msg="Reverse DNS for 8.8.8.8 is {{ lookup('dig', '8.8.8.8.in-addr.arpa./PTR') }}"
- debug: msg="Reverse DNS for 8.8.8.8 is {{ lookup('dig', '8.8.8.8.in-addr.arpa.', 'qtype=PTR') }}"
By default, the lookup will rely on system-wide configured DNS servers for
performing the query. It is also possible to explicitly specify DNS servers to
query using the ``@DNS_SERVER_1,DNS_SERVER_2,...,DNS_SERVER_N`` notation. This
needs to be passed in as an additional parameter to the lookup. For example::
- debug: msg="Querying 8.8.8.8 for IPv4 address for example.com. produces {{ lookup('dig', 'example.com', '@8.8.8.8') }}"
In some cases the DNS records may hold a more complex data structure, or it may
be useful to obtain the results in the form of a dictionary for future
processing. The ``dig`` lookup supports parsing of a number of such records,
with the result being returned as a dictionary. This way it is possible to
easily access such nested data. This return format can be requested by
passing in the ``flat=0`` option to the lookup. For example::
- debug: msg="XMPP service for gmail.com. is available at {{ item.target }} on port {{ item.port }}"
with_items: "{{ lookup('dig', '_xmpp-server._tcp.gmail.com./SRV', 'flat=0', wantlist=True) }}"
Take note that due to the way Ansible lookups work, you must pass the
``wantlist=True`` argument to the lookup, otherwise Ansible will report errors.
Currently the dictionary results are supported for the following records:
.. note:: *ALL* is not a record per se; the listed fields are available
for any record results you retrieve in the form of a dictionary.
========== =============================================================================
Record Fields
---------- -----------------------------------------------------------------------------
*ALL* owner, ttl, type
A address
AAAA address
CNAME target
DNAME target
DLV algorithm, digest_type, key_tag, digest
DNSKEY flags, algorithm, protocol, key
DS algorithm, digest_type, key_tag, digest
HINFO cpu, os
LOC latitude, longitude, altitude, size, horizontal_precision, vertical_precision
MX preference, exchange
NAPTR order, preference, flags, service, regexp, replacement
NS target
NSEC3PARAM algorithm, flags, iterations, salt
PTR target
RP mbox, txt
SOA mname, rname, serial, refresh, retry, expire, minimum
SPF strings
SRV priority, weight, port, target
SSHFP algorithm, fp_type, fingerprint
TLSA usage, selector, mtype, cert
TXT strings
========== =============================================================================
.. _more_lookups:
More Lookups

View file

@ -159,6 +159,12 @@ fact_caching = memory
#retry_files_enabled = False
#retry_files_save_path = ~/.ansible-retry
[privilege_escalation]
#become=True
#become_method=sudo
#become_user=root
#become_ask_pass=False
[paramiko_connection]
# uncomment this line to cause the paramiko connection plugin to not record new host
@ -217,3 +223,8 @@ accelerate_daemon_timeout = 30
# is "no".
#accelerate_multi_key = yes
[selinux]
# file systems that require special treatment when dealing with security context
# the default behaviour that copies the existing context or uses the user default
# needs to be changed to use the file system dependent context.
#special_context_filesystems=nfs,vboxsf,fuse

View file

@ -41,11 +41,10 @@ expr "$MANPATH" : "${PREFIX_MANPATH}.*" > /dev/null || export MANPATH="$PREFIX_M
# Do the work in a function so we don't repeat ourselves later
gen_egg_info()
{
python setup.py egg_info
if [ -e "$PREFIX_PYTHONPATH/ansible.egg-info" ] ; then
rm -r "$PREFIX_PYTHONPATH/ansible.egg-info"
fi
mv "ansible.egg-info" "$PREFIX_PYTHONPATH"
python setup.py egg_info
}
if [ "$ANSIBLE_HOME" != "$PWD" ] ; then

View file

@ -14,5 +14,5 @@
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
__version__ = '1.9'
__author__ = 'Michael DeHaan'
__version__ = '1.9.6'
__author__ = 'Ansible, Inc.'

View file

@ -18,6 +18,7 @@
import os
import time
import errno
import codecs
try:
import simplejson as json
@ -36,7 +37,7 @@ class CacheModule(BaseCacheModule):
self._timeout = float(C.CACHE_PLUGIN_TIMEOUT)
self._cache = {}
self._cache_dir = C.CACHE_PLUGIN_CONNECTION # expects a dir path
self._cache_dir = os.path.expandvars(os.path.expanduser(C.CACHE_PLUGIN_CONNECTION)) # expects a dir path
if not self._cache_dir:
utils.exit("error, fact_caching_connection is not set, cannot use fact cache")
@ -57,9 +58,9 @@ class CacheModule(BaseCacheModule):
cachefile = "%s/%s" % (self._cache_dir, key)
try:
f = open( cachefile, 'r')
f = codecs.open(cachefile, 'r', encoding='utf-8')
except (OSError,IOError), e:
utils.warning("error while trying to write to %s : %s" % (cachefile, str(e)))
utils.warning("error while trying to read %s : %s" % (cachefile, str(e)))
else:
value = json.load(f)
self._cache[key] = value
@ -73,9 +74,9 @@ class CacheModule(BaseCacheModule):
cachefile = "%s/%s" % (self._cache_dir, key)
try:
f = open(cachefile, 'w')
f = codecs.open(cachefile, 'w', encoding='utf-8')
except (OSError,IOError), e:
utils.warning("error while trying to read %s : %s" % (cachefile, str(e)))
utils.warning("error while trying to write to %s : %s" % (cachefile, str(e)))
else:
f.write(utils.jsonify(value))
finally:

View file

@ -19,8 +19,8 @@ import collections
import os
import sys
import time
import threading
from itertools import chain
from multiprocessing import Lock
from ansible import constants as C
from ansible.cache.base import BaseCacheModule
@ -51,7 +51,7 @@ class ProxyClientPool(object):
self._num_connections = 0
self._available_connections = collections.deque(maxlen=self.max_connections)
self._locked_connections = set()
self._lock = threading.Lock()
self._lock = Lock()
def _check_safe(self):
if self.pid != os.getpid():

View file

@ -392,6 +392,7 @@ class CliRunnerCallbacks(DefaultRunnerCallbacks):
def on_unreachable(self, host, res):
if type(res) == dict:
res = res.get('msg','')
res = to_bytes(res)
display("%s | FAILED => %s" % (host, res), stderr=True, color='red', runner=self.runner)
if self.options.tree:
utils.write_tree_file(

View file

@ -86,9 +86,6 @@ def shell_expand_path(path):
path = os.path.expanduser(os.path.expandvars(path))
return path
def get_plugin_paths(path):
return ':'.join([os.path.join(x, path) for x in [os.path.expanduser('~/.ansible/plugins/'), '/usr/share/ansible_plugins/']])
p = load_config_file()
active_user = pwd.getpwuid(os.geteuid())[0]
@ -115,7 +112,6 @@ DEFAULT_POLL_INTERVAL = get_config(p, DEFAULTS, 'poll_interval', 'ANSIBLE
DEFAULT_REMOTE_USER = get_config(p, DEFAULTS, 'remote_user', 'ANSIBLE_REMOTE_USER', active_user)
DEFAULT_ASK_PASS = get_config(p, DEFAULTS, 'ask_pass', 'ANSIBLE_ASK_PASS', False, boolean=True)
DEFAULT_PRIVATE_KEY_FILE = shell_expand_path(get_config(p, DEFAULTS, 'private_key_file', 'ANSIBLE_PRIVATE_KEY_FILE', None))
DEFAULT_SUDO_USER = get_config(p, DEFAULTS, 'sudo_user', 'ANSIBLE_SUDO_USER', 'root')
DEFAULT_ASK_SUDO_PASS = get_config(p, DEFAULTS, 'ask_sudo_pass', 'ANSIBLE_ASK_SUDO_PASS', False, boolean=True)
DEFAULT_REMOTE_PORT = get_config(p, DEFAULTS, 'remote_port', 'ANSIBLE_REMOTE_PORT', None, integer=True)
DEFAULT_ASK_VAULT_PASS = get_config(p, DEFAULTS, 'ask_vault_pass', 'ANSIBLE_ASK_VAULT_PASS', False, boolean=True)
@ -126,6 +122,7 @@ DEFAULT_MANAGED_STR = get_config(p, DEFAULTS, 'ansible_managed', None,
DEFAULT_SYSLOG_FACILITY = get_config(p, DEFAULTS, 'syslog_facility', 'ANSIBLE_SYSLOG_FACILITY', 'LOG_USER')
DEFAULT_KEEP_REMOTE_FILES = get_config(p, DEFAULTS, 'keep_remote_files', 'ANSIBLE_KEEP_REMOTE_FILES', False, boolean=True)
DEFAULT_SUDO = get_config(p, DEFAULTS, 'sudo', 'ANSIBLE_SUDO', False, boolean=True)
DEFAULT_SUDO_USER = get_config(p, DEFAULTS, 'sudo_user', 'ANSIBLE_SUDO_USER', 'root')
DEFAULT_SUDO_EXE = get_config(p, DEFAULTS, 'sudo_exe', 'ANSIBLE_SUDO_EXE', 'sudo')
DEFAULT_SUDO_FLAGS = get_config(p, DEFAULTS, 'sudo_flags', 'ANSIBLE_SUDO_FLAGS', '-H')
DEFAULT_HASH_BEHAVIOUR = get_config(p, DEFAULTS, 'hash_behaviour', 'ANSIBLE_HASH_BEHAVIOUR', 'replace')
@ -137,15 +134,31 @@ DEFAULT_SU_FLAGS = get_config(p, DEFAULTS, 'su_flags', 'ANSIBLE_SU_FLAG
DEFAULT_SU_USER = get_config(p, DEFAULTS, 'su_user', 'ANSIBLE_SU_USER', 'root')
DEFAULT_ASK_SU_PASS = get_config(p, DEFAULTS, 'ask_su_pass', 'ANSIBLE_ASK_SU_PASS', False, boolean=True)
DEFAULT_GATHERING = get_config(p, DEFAULTS, 'gathering', 'ANSIBLE_GATHERING', 'implicit').lower()
DEFAULT_LOG_PATH = shell_expand_path(get_config(p, DEFAULTS, 'log_path', 'ANSIBLE_LOG_PATH', ''))
DEFAULT_ACTION_PLUGIN_PATH = get_config(p, DEFAULTS, 'action_plugins', 'ANSIBLE_ACTION_PLUGINS', get_plugin_paths('action_plugins'))
DEFAULT_CACHE_PLUGIN_PATH = get_config(p, DEFAULTS, 'cache_plugins', 'ANSIBLE_CACHE_PLUGINS', get_plugin_paths('cache_plugins'))
DEFAULT_CALLBACK_PLUGIN_PATH = get_config(p, DEFAULTS, 'callback_plugins', 'ANSIBLE_CALLBACK_PLUGINS', get_plugin_paths('callback_plugins'))
DEFAULT_CONNECTION_PLUGIN_PATH = get_config(p, DEFAULTS, 'connection_plugins', 'ANSIBLE_CONNECTION_PLUGINS', get_plugin_paths('connection_plugins'))
DEFAULT_LOOKUP_PLUGIN_PATH = get_config(p, DEFAULTS, 'lookup_plugins', 'ANSIBLE_LOOKUP_PLUGINS', get_plugin_paths('lookup_plugins'))
DEFAULT_VARS_PLUGIN_PATH = get_config(p, DEFAULTS, 'vars_plugins', 'ANSIBLE_VARS_PLUGINS', get_plugin_paths('vars_plugins'))
DEFAULT_FILTER_PLUGIN_PATH = get_config(p, DEFAULTS, 'filter_plugins', 'ANSIBLE_FILTER_PLUGINS', get_plugin_paths('filter_plugins'))
DEFAULT_LOG_PATH = shell_expand_path(get_config(p, DEFAULTS, 'log_path', 'ANSIBLE_LOG_PATH', ''))
# selinux
DEFAULT_SELINUX_SPECIAL_FS = get_config(p, 'selinux', 'special_context_filesystems', None, 'fuse, nfs, vboxsf', islist=True)
#TODO: get rid of ternary chain mess
BECOME_METHODS = ['sudo','su','pbrun','pfexec','runas']
BECOME_ERROR_STRINGS = {'sudo': 'Sorry, try again.', 'su': 'Authentication failure', 'pbrun': '', 'pfexec': '', 'runas': ''}
DEFAULT_BECOME = get_config(p, 'privilege_escalation', 'become', 'ANSIBLE_BECOME',False, boolean=True)
DEFAULT_BECOME_METHOD = get_config(p, 'privilege_escalation', 'become_method', 'ANSIBLE_BECOME_METHOD','sudo' if DEFAULT_SUDO else 'su' if DEFAULT_SU else 'sudo' ).lower()
DEFAULT_BECOME_USER = get_config(p, 'privilege_escalation', 'become_user', 'ANSIBLE_BECOME_USER',default=None)
DEFAULT_BECOME_ASK_PASS = get_config(p, 'privilege_escalation', 'become_ask_pass', 'ANSIBLE_BECOME_ASK_PASS', False, boolean=True)
# need to rethink implementing these 2
DEFAULT_BECOME_EXE = None
#DEFAULT_BECOME_EXE = get_config(p, DEFAULTS, 'become_exe', 'ANSIBLE_BECOME_EXE','sudo' if DEFAULT_SUDO else 'su' if DEFAULT_SU else 'sudo')
#DEFAULT_BECOME_FLAGS = get_config(p, DEFAULTS, 'become_flags', 'ANSIBLE_BECOME_FLAGS',DEFAULT_SUDO_FLAGS if DEFAULT_SUDO else DEFAULT_SU_FLAGS if DEFAULT_SU else '-H')
DEFAULT_ACTION_PLUGIN_PATH = get_config(p, DEFAULTS, 'action_plugins', 'ANSIBLE_ACTION_PLUGINS', '~/.ansible/plugins/action_plugins:/usr/share/ansible_plugins/action_plugins')
DEFAULT_CACHE_PLUGIN_PATH = get_config(p, DEFAULTS, 'cache_plugins', 'ANSIBLE_CACHE_PLUGINS', '~/.ansible/plugins/cache_plugins:/usr/share/ansible_plugins/cache_plugins')
DEFAULT_CALLBACK_PLUGIN_PATH = get_config(p, DEFAULTS, 'callback_plugins', 'ANSIBLE_CALLBACK_PLUGINS', '~/.ansible/plugins/callback_plugins:/usr/share/ansible_plugins/callback_plugins')
DEFAULT_CONNECTION_PLUGIN_PATH = get_config(p, DEFAULTS, 'connection_plugins', 'ANSIBLE_CONNECTION_PLUGINS', '~/.ansible/plugins/connection_plugins:/usr/share/ansible_plugins/connection_plugins')
DEFAULT_LOOKUP_PLUGIN_PATH = get_config(p, DEFAULTS, 'lookup_plugins', 'ANSIBLE_LOOKUP_PLUGINS', '~/.ansible/plugins/lookup_plugins:/usr/share/ansible_plugins/lookup_plugins')
DEFAULT_VARS_PLUGIN_PATH = get_config(p, DEFAULTS, 'vars_plugins', 'ANSIBLE_VARS_PLUGINS', '~/.ansible/plugins/vars_plugins:/usr/share/ansible_plugins/vars_plugins')
DEFAULT_FILTER_PLUGIN_PATH = get_config(p, DEFAULTS, 'filter_plugins', 'ANSIBLE_FILTER_PLUGINS', '~/.ansible/plugins/filter_plugins:/usr/share/ansible_plugins/filter_plugins')
CACHE_PLUGIN = get_config(p, DEFAULTS, 'fact_caching', 'ANSIBLE_CACHE_PLUGIN', 'memory')
CACHE_PLUGIN_CONNECTION = get_config(p, DEFAULTS, 'fact_caching_connection', 'ANSIBLE_CACHE_PLUGIN_CONNECTION', None)
@ -163,6 +176,8 @@ DEPRECATION_WARNINGS = get_config(p, DEFAULTS, 'deprecation_warnings',
DEFAULT_CALLABLE_WHITELIST = get_config(p, DEFAULTS, 'callable_whitelist', 'ANSIBLE_CALLABLE_WHITELIST', [], islist=True)
COMMAND_WARNINGS = get_config(p, DEFAULTS, 'command_warnings', 'ANSIBLE_COMMAND_WARNINGS', False, boolean=True)
DEFAULT_LOAD_CALLBACK_PLUGINS = get_config(p, DEFAULTS, 'bin_ansible_callbacks', 'ANSIBLE_LOAD_CALLBACK_PLUGINS', False, boolean=True)
DEFAULT_FORCE_HANDLERS = get_config(p, DEFAULTS, 'force_handlers', 'ANSIBLE_FORCE_HANDLERS', False, boolean=True)
RETRY_FILES_ENABLED = get_config(p, DEFAULTS, 'retry_files_enabled', 'ANSIBLE_RETRY_FILES_ENABLED', True, boolean=True)
RETRY_FILES_SAVE_PATH = get_config(p, DEFAULTS, 'retry_files_save_path', 'ANSIBLE_RETRY_FILES_SAVE_PATH', '~/')
@ -172,7 +187,7 @@ ANSIBLE_SSH_ARGS = get_config(p, 'ssh_connection', 'ssh_args', 'AN
ANSIBLE_SSH_CONTROL_PATH = get_config(p, 'ssh_connection', 'control_path', 'ANSIBLE_SSH_CONTROL_PATH', "%(directory)s/ansible-ssh-%%h-%%p-%%r")
ANSIBLE_SSH_PIPELINING = get_config(p, 'ssh_connection', 'pipelining', 'ANSIBLE_SSH_PIPELINING', False, boolean=True)
PARAMIKO_RECORD_HOST_KEYS = get_config(p, 'paramiko_connection', 'record_host_keys', 'ANSIBLE_PARAMIKO_RECORD_HOST_KEYS', True, boolean=True)
# obsolete -- will be formally removed in 1.6
# obsolete -- will be formally removed
ZEROMQ_PORT = get_config(p, 'fireball_connection', 'zeromq_port', 'ANSIBLE_ZEROMQ_PORT', 5099, integer=True)
ACCELERATE_PORT = get_config(p, 'accelerate', 'accelerate_port', 'ACCELERATE_PORT', 5099, integer=True)
ACCELERATE_TIMEOUT = get_config(p, 'accelerate', 'accelerate_timeout', 'ACCELERATE_TIMEOUT', 30, integer=True)
@ -188,6 +203,7 @@ PARAMIKO_PTY = get_config(p, 'paramiko_connection', 'pty', 'AN
DEFAULT_PASSWORD_CHARS = ascii_letters + digits + ".,:-_"
# non-configurable things
DEFAULT_BECOME_PASS = None
DEFAULT_SUDO_PASS = None
DEFAULT_REMOTE_PASS = None
DEFAULT_SUBSET = None

View file

@ -36,7 +36,7 @@ class Inventory(object):
Host inventory for ansible.
"""
__slots__ = [ 'host_list', 'groups', '_restriction', '_also_restriction', '_subset',
__slots__ = [ 'host_list', 'groups', '_restriction', '_also_restriction', '_subset',
'parser', '_vars_per_host', '_vars_per_group', '_hosts_cache', '_groups_list',
'_pattern_cache', '_vault_password', '_vars_plugins', '_playbook_basedir']
@ -53,7 +53,7 @@ class Inventory(object):
self._vars_per_host = {}
self._vars_per_group = {}
self._hosts_cache = {}
self._groups_list = {}
self._groups_list = {}
self._pattern_cache = {}
# to be set by calling set_playbook_basedir by playbook code
@ -174,15 +174,16 @@ class Inventory(object):
return results
def get_hosts(self, pattern="all"):
"""
"""
find all host names matching a pattern string, taking into account any inventory restrictions or
applied subsets.
"""
# process patterns
if isinstance(pattern, list):
pattern = ';'.join(pattern)
patterns = pattern.replace(";",":").split(":")
patterns = pattern
else:
patterns = pattern.replace(";",":").replace(",",":").split(":")
hosts = self._get_hosts(patterns)
# exclude hosts not in a subset, if defined
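The change above lets `get_hosts` accept an already-split list of patterns, while string patterns now also treat `,` as a separator in addition to `;` and `:`. A standalone sketch of the string-splitting behavior (simplified; the helper name is ours, not Ansible's):

```python
def split_host_pattern(pattern):
    # Lists are taken as-is; strings are normalized so ';' and ','
    # both behave like ':' and then split into individual patterns.
    if isinstance(pattern, list):
        return pattern
    return pattern.replace(";", ":").replace(",", ":").split(":")

# All three separators mix freely in one pattern string:
assert split_host_pattern("web;db,cache:lb") == ["web", "db", "cache", "lb"]
# A pre-split list passes through untouched:
assert split_host_pattern(["web", "db"]) == ["web", "db"]
```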
@@ -590,8 +591,13 @@ class Inventory(object):
for group in self.groups:
group.vars = utils.combine_vars(group.vars, self.get_group_vars(group, new_pb_basedir=True))
# get host vars from host_vars/ files
### HACK: in 2.0 subset isn't a problem. Never port this to 2.x
### Fixes: https://github.com/ansible/ansible/issues/13557
old_subset = self._subset
self._subset = None
for host in self.get_hosts():
host.vars = utils.combine_vars(host.vars, self.get_host_vars(host, new_pb_basedir=True))
self._subset = old_subset
# invalidate cache
self._vars_per_host = {}
self._vars_per_group = {}


@@ -32,6 +32,8 @@ REPLACER_ARGS = "\"<<INCLUDE_ANSIBLE_MODULE_ARGS>>\""
REPLACER_COMPLEX = "\"<<INCLUDE_ANSIBLE_MODULE_COMPLEX_ARGS>>\""
REPLACER_WINDOWS = "# POWERSHELL_COMMON"
REPLACER_VERSION = "\"<<ANSIBLE_VERSION>>\""
REPLACER_SELINUX = "<<SELINUX_SPECIAL_FILESYSTEMS>>"
class ModuleReplacer(object):
@@ -40,14 +42,14 @@ class ModuleReplacer(object):
transfer. Rather than doing classical python imports, this allows for more
efficient transfer in a no-bootstrapping scenario by not moving extra files
over the wire, and also takes care of embedding arguments in the transferred
modules.
modules.
This version is done in such a way that local imports can still be
used in the module code, so IDEs don't have to be aware of what is going on.
Example:
from ansible.module_utils.basic import *
from ansible.module_utils.basic import *
... will result in the insertion of basic.py into the module
@@ -93,7 +95,7 @@ class ModuleReplacer(object):
module_style = 'new'
elif 'WANT_JSON' in module_data:
module_style = 'non_native_want_json'
output = StringIO()
lines = module_data.split('\n')
snippet_names = []
@@ -166,6 +168,7 @@ class ModuleReplacer(object):
# these strings should be part of the 'basic' snippet which is required to be included
module_data = module_data.replace(REPLACER_VERSION, repr(__version__))
module_data = module_data.replace(REPLACER_SELINUX, ','.join(C.DEFAULT_SELINUX_SPECIAL_FS))
module_data = module_data.replace(REPLACER_ARGS, encoded_args)
module_data = module_data.replace(REPLACER_COMPLEX, encoded_complex)


@@ -38,6 +38,8 @@ BOOLEANS_TRUE = ['yes', 'on', '1', 'true', 1]
BOOLEANS_FALSE = ['no', 'off', '0', 'false', 0]
BOOLEANS = BOOLEANS_TRUE + BOOLEANS_FALSE
SELINUX_SPECIAL_FS="<<SELINUX_SPECIAL_FILESYSTEMS>>"
# ansible modules can be written in any language. To simplify
# development of Python modules, the functions available here
# can be inserted in any module source automatically by including
@@ -64,18 +66,24 @@ import grp
import pwd
import platform
import errno
import tempfile
try:
import json
# Detect the python-json library which is incompatible
# Look for simplejson if that's the case
try:
if not isinstance(json.loads, types.FunctionType) or not isinstance(json.dumps, types.FunctionType):
raise ImportError
except AttributeError:
raise ImportError
except ImportError:
try:
import simplejson as json
except ImportError:
sys.stderr.write('Error: ansible requires a json module, none found!')
print('{"msg": "Error: ansible requires the stdlib json or simplejson module, neither was found!", "failed": true}')
sys.exit(1)
except SyntaxError:
sys.stderr.write('SyntaxError: probably due to json and python being for different versions')
print('{"msg": "SyntaxError: probably due to installed simplejson being for a different python version", "failed": true}')
sys.exit(1)
HAVE_SELINUX=False
@@ -142,7 +150,7 @@ except ImportError:
elif isinstance(node, List):
return list(map(_convert, node.nodes))
elif isinstance(node, Dict):
return dict((_convert(k), _convert(v)) for k, v in node.items)
return dict((_convert(k), _convert(v)) for k, v in node.items())
elif isinstance(node, Name):
if node.name in _safe_names:
return _safe_names[node.name]
@@ -347,7 +355,10 @@ class AnsibleModule(object):
self.check_mode = False
self.no_log = no_log
self.cleanup_files = []
# May be used to set modifications to the environment for any
# run_command invocation
self.run_command_environ_update = {}
self.aliases = {}
if add_file_common_args:
@@ -355,8 +366,8 @@ class AnsibleModule(object):
if k not in self.argument_spec:
self.argument_spec[k] = v
# check the locale as set by the current environment, and
# reset to LANG=C if it's an invalid/unavailable locale
# check the locale as set by the current environment, and reset to
# a known valid (LANG=C) if it's an invalid/unavailable locale
self._check_locale()
(self.params, self.args) = self._load_params()
@@ -528,10 +539,10 @@ class AnsibleModule(object):
path = os.path.dirname(path)
return path
def is_nfs_path(self, path):
def is_special_selinux_path(self, path):
"""
Returns a tuple containing (True, selinux_context) if the given path
is on a NFS mount point, otherwise the return will be (False, None).
Returns a tuple containing (True, selinux_context) if the given path is on a
NFS or other 'special' fs mount point, otherwise the return will be (False, None).
"""
try:
f = open('/proc/mounts', 'r')
@@ -542,9 +553,13 @@ class AnsibleModule(object):
path_mount_point = self.find_mount_point(path)
for line in mount_data:
(device, mount_point, fstype, options, rest) = line.split(' ', 4)
if path_mount_point == mount_point and 'nfs' in fstype:
nfs_context = self.selinux_context(path_mount_point)
return (True, nfs_context)
if path_mount_point == mount_point:
for fs in SELINUX_SPECIAL_FS.split(','):
if fs in fstype:
special_context = self.selinux_context(path_mount_point)
return (True, special_context)
return (False, None)
def set_default_selinux_context(self, path, changed):
@@ -562,9 +577,9 @@ class AnsibleModule(object):
# Iterate over the current context instead of the
# argument context, which may have selevel.
(is_nfs, nfs_context) = self.is_nfs_path(path)
if is_nfs:
new_context = nfs_context
(is_special_se, sp_context) = self.is_special_selinux_path(path)
if is_special_se:
new_context = sp_context
else:
for i in range(len(cur_context)):
if len(context) > i:
@@ -648,6 +663,10 @@ class AnsibleModule(object):
msg="mode must be in octal or symbolic form",
details=str(e))
if mode != stat.S_IMODE(mode):
# prevent mode from having extra info or being an invalid long number
self.fail_json(path=path, msg="Invalid mode supplied, only permission info is allowed", details=mode)
prev_mode = stat.S_IMODE(path_stat.st_mode)
if prev_mode != mode:
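The new check above uses `stat.S_IMODE` to reject mode values that carry anything beyond permission bits. A quick standalone illustration:

```python
import stat

# S_IMODE masks off file-type bits and keeps only the permission
# portion (rwx bits plus setuid/setgid/sticky).
full_mode = 0o100644            # S_IFREG | 0o644, as seen in st_mode
assert stat.S_IMODE(full_mode) == 0o644

# A plain permission value passes the module's sanity check:
mode = 0o644
assert mode == stat.S_IMODE(mode)

# A value with stray high bits fails it, triggering fail_json above:
assert full_mode != stat.S_IMODE(full_mode)
```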
@@ -854,7 +873,7 @@ class AnsibleModule(object):
# setting the locale to '' uses the default locale
# as it would be returned by locale.getdefaultlocale()
locale.setlocale(locale.LC_ALL, '')
except locale.Error, e:
except locale.Error:
# fallback to the 'C' locale, which may cause unicode
# issues but is preferable to simply failing because
# of an unknown locale
@@ -1303,7 +1322,7 @@ class AnsibleModule(object):
try:
shutil.copy2(fn, backupdest)
except shutil.Error, e:
except (shutil.Error, IOError), e:
self.fail_json(msg='Could not make backup of %s to %s: %s' % (fn, backupdest, e))
return backupdest
@@ -1352,8 +1371,9 @@ class AnsibleModule(object):
# Optimistically try a rename, solves some corner cases and can avoid useless work, throws exception if not atomic.
os.rename(src, dest)
except (IOError,OSError), e:
# only try workarounds for errno 18 (cross device), 1 (not permitted) and 13 (permission denied)
if e.errno != errno.EPERM and e.errno != errno.EXDEV and e.errno != errno.EACCES:
# only try workarounds for errno 18 (cross device), 1 (not permitted), 13 (permission denied)
# and 26 (text file busy) which happens on vagrant synced folders
if e.errno not in [errno.EPERM, errno.EXDEV, errno.EACCES, errno.ETXTBSY]:
self.fail_json(msg='Could not replace file: %s to %s: %s' % (src, dest, e))
dest_dir = os.path.dirname(dest)
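The extended errno list above now also tolerates `ETXTBSY`, which shows up on Vagrant synced folders. The overall try-rename-then-copy pattern can be sketched in isolation (modern Python syntax; `replace_file` is our illustrative name, not the module's API):

```python
import errno
import os
import shutil

# Errnos for which a copy-based fallback is attempted instead of
# failing outright (cross-device, not permitted, denied, text busy).
WORKAROUND_ERRNOS = [errno.EPERM, errno.EXDEV, errno.EACCES, errno.ETXTBSY]

def replace_file(src, dest):
    try:
        # Optimistic atomic rename; raises e.g. when src and dest live
        # on different filesystems.
        os.rename(src, dest)
    except (IOError, OSError) as e:
        if e.errno not in WORKAROUND_ERRNOS:
            raise
        # Fallback: copy contents and metadata, then drop the source.
        shutil.copy2(src, dest)
        os.remove(src)
```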
@@ -1399,25 +1419,29 @@ class AnsibleModule(object):
# rename might not preserve context
self.set_context_if_different(dest, context, False)
def run_command(self, args, check_rc=False, close_fds=True, executable=None, data=None, binary_data=False, path_prefix=None, cwd=None, use_unsafe_shell=False, prompt_regex=None):
def run_command(self, args, check_rc=False, close_fds=True, executable=None, data=None, binary_data=False, path_prefix=None, cwd=None, use_unsafe_shell=False, prompt_regex=None, environ_update=None):
'''
Execute a command, returns rc, stdout, and stderr.
args is the command to run
If args is a list, the command will be run with shell=False.
If args is a string and use_unsafe_shell=False it will split args to a list and run with shell=False
If args is a string and use_unsafe_shell=True it run with shell=True.
Other arguments:
- check_rc (boolean) Whether to call fail_json in case of
non zero RC. Default is False.
- close_fds (boolean) See documentation for subprocess.Popen().
Default is True.
- executable (string) See documentation for subprocess.Popen().
Default is None.
- prompt_regex (string) A regex string (not a compiled regex) which
can be used to detect prompts in the stdout
which would otherwise cause the execution
to hang (especially if no input data is
specified)
:arg args: is the command to run
* If args is a list, the command will be run with shell=False.
* If args is a string and use_unsafe_shell=False it will split args to a list and run with shell=False
* If args is a string and use_unsafe_shell=True it runs with shell=True.
:kw check_rc: Whether to call fail_json in case of non zero RC.
Default False
:kw close_fds: See documentation for subprocess.Popen(). Default True
:kw executable: See documentation for subprocess.Popen(). Default None
:kw data: If given, information to write to the stdin of the command
:kw binary_data: If False, append a newline to the data. Default False
:kw path_prefix: If given, additional path to find the command in.
This adds to the PATH environment variable so helper commands in
the same directory can also be found
:kw cwd: If given, working directory to run the command inside
:kw use_unsafe_shell: See `args` parameter. Default False
:kw prompt_regex: Regex string (not a compiled regex) which can be
used to detect prompts in the stdout which would otherwise cause
the execution to hang (especially if no input data is specified)
:kw environ_update: dictionary to *update* os.environ with
'''
shell = False
@@ -1448,16 +1472,30 @@ class AnsibleModule(object):
msg = None
st_in = None
# Set a temporart env path if a prefix is passed
env=os.environ
# Manipulate the environ we'll send to the new process
old_env_vals = {}
# We can set this from both an attribute and per call
for key, val in self.run_command_environ_update.items():
old_env_vals[key] = os.environ.get(key, None)
os.environ[key] = val
if environ_update:
for key, val in environ_update.items():
old_env_vals[key] = os.environ.get(key, None)
os.environ[key] = val
if path_prefix:
env['PATH']="%s:%s" % (path_prefix, env['PATH'])
old_env_vals['PATH'] = os.environ['PATH']
os.environ['PATH'] = "%s:%s" % (path_prefix, os.environ['PATH'])
# create a printable version of the command for use
# in reporting later, which strips out things like
# passwords from the args list
if isinstance(args, basestring):
to_clean_args = shlex.split(args.encode('utf-8'))
if isinstance(args, unicode):
b_args = args.encode('utf-8')
else:
b_args = args
to_clean_args = shlex.split(b_args)
del b_args
else:
to_clean_args = args
@@ -1487,11 +1525,10 @@ class AnsibleModule(object):
close_fds=close_fds,
stdin=st_in,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE
stderr=subprocess.PIPE,
env=os.environ,
)
if path_prefix:
kwargs['env'] = env
if cwd and os.path.isdir(cwd):
kwargs['cwd'] = cwd
@@ -1560,6 +1597,13 @@ class AnsibleModule(object):
except:
self.fail_json(rc=257, msg=traceback.format_exc(), cmd=clean_args)
# Restore env settings
for key, val in old_env_vals.items():
if val is None:
del os.environ[key]
else:
os.environ[key] = val
if rc != 0 and check_rc:
msg = heuristic_log_sanitize(stderr.rstrip())
self.fail_json(cmd=clean_args, rc=rc, stdout=stdout, stderr=stderr, msg=msg)
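The save/update/restore dance around `os.environ` shown in these hunks can be summarized as a small helper (a simplified sketch; `with_temp_environ` is our name, not `run_command`'s actual interface):

```python
import os

def with_temp_environ(environ_update, func):
    """Apply environ_update to os.environ, call func, then restore.

    Keys that did not exist before are deleted on restore; keys that
    did exist get their old values back (mirrors run_command's logic).
    """
    old_vals = {}
    for key, val in environ_update.items():
        old_vals[key] = os.environ.get(key, None)
        os.environ[key] = val
    try:
        return func()
    finally:
        for key, val in old_vals.items():
            if val is None:
                del os.environ[key]
            else:
                os.environ[key] = val

# Usage: the variable is visible only inside the call.
result = with_temp_environ({'LC_ALL': 'C'}, lambda: os.environ['LC_ALL'])
assert result == 'C'
```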


@@ -26,6 +26,11 @@
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
try:
import boto.ec2
except:
pass # error should already be covered by import boto
try:
from distutils.version import LooseVersion
HAS_LOOSE_VERSION = True


@@ -16,6 +16,7 @@
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
import os
import sys
import stat
import array
import errno
@@ -43,9 +44,17 @@ except ImportError:
try:
import json
# Detect python-json which is incompatible and fall back to simplejson in
# that case
try:
json.loads
json.dumps
except AttributeError:
raise ImportError
except ImportError:
import simplejson as json
# --------------------------------------------------------------
# timeout function to make sure some fact gathering
# steps do not exceed a time limit
@@ -87,7 +96,8 @@ class Facts(object):
_I386RE = re.compile(r'i([3456]86|86pc)')
# For the most part, we assume that platform.dist() will tell the truth.
# This is the fallback to handle unknowns or exceptions
OSDIST_LIST = ( ('/etc/redhat-release', 'RedHat'),
OSDIST_LIST = ( ('/etc/oracle-release', 'OracleLinux'),
('/etc/redhat-release', 'RedHat'),
('/etc/vmware-release', 'VMwareESX'),
('/etc/openwrt_release', 'OpenWrt'),
('/etc/system-release', 'OtherLinux'),
@@ -171,9 +181,12 @@ class Facts(object):
if self.facts['system'] == 'Linux':
self.get_distribution_facts()
elif self.facts['system'] == 'AIX':
rc, out, err = module.run_command("/usr/sbin/bootinfo -p")
data = out.split('\n')
self.facts['architecture'] = data[0]
try:
rc, out, err = module.run_command("/usr/sbin/bootinfo -p")
data = out.split('\n')
self.facts['architecture'] = data[0]
except:
self.facts['architecture'] = 'Not Available'
elif self.facts['system'] == 'OpenBSD':
self.facts['architecture'] = platform.uname()[5]
@@ -287,6 +300,13 @@ class Facts(object):
# Once we determine the value is one of these distros
# we trust the values are always correct
break
elif name == 'OracleLinux':
data = get_file_content(path)
if 'Oracle Linux' in data:
self.facts['distribution'] = name
else:
self.facts['distribution'] = data.split()[0]
break
elif name == 'RedHat':
data = get_file_content(path)
if 'Red Hat' in data:
@@ -872,13 +892,14 @@ class LinuxHardware(Hardware):
size_available = statvfs_result.f_bsize * (statvfs_result.f_bavail)
except OSError, e:
continue
lsblkPath = module.get_bin_path("lsblk")
rc, out, err = module.run_command("%s -ln --output UUID %s" % (lsblkPath, fields[0]), use_unsafe_shell=True)
if rc == 0:
uuid = out.strip()
else:
uuid = 'NA'
uuid = 'NA'
lsblkPath = module.get_bin_path("lsblk")
if lsblkPath:
rc, out, err = module.run_command("%s -ln --output UUID %s" % (lsblkPath, fields[0]), use_unsafe_shell=True)
if rc == 0:
uuid = out.strip()
self.facts['mounts'].append(
{'mount': fields[1],
@@ -948,7 +969,7 @@ class LinuxHardware(Hardware):
part['sectorsize'] = get_file_content(part_sysdir + "/queue/physical_block_size")
if not part['sectorsize']:
part['sectorsize'] = get_file_content(part_sysdir + "/queue/hw_sector_size",512)
part['size'] = module.pretty_bytes((float(part['sectors']) * float(part['sectorsize'])))
part['size'] = module.pretty_bytes((float(part['sectors']) * 512))
d['partitions'][partname] = part
d['rotational'] = get_file_content(sysdir + "/queue/rotational")
@@ -965,7 +986,7 @@ class LinuxHardware(Hardware):
d['sectorsize'] = get_file_content(sysdir + "/queue/physical_block_size")
if not d['sectorsize']:
d['sectorsize'] = get_file_content(sysdir + "/queue/hw_sector_size",512)
d['size'] = module.pretty_bytes(float(d['sectors']) * float(d['sectorsize']))
d['size'] = module.pretty_bytes(float(d['sectors']) * 512)
d['host'] = ""
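This is the fix from the head commit: the kernel exports `/sys/block/<dev>/size` (and per-partition `sectors`) in fixed 512-byte units even when `physical_block_size` is 4096, so multiplying by the physical block size inflated 4Kn device capacities eight-fold. The corrected arithmetic, with a hypothetical sector count:

```python
KERNEL_SECTOR_SIZE = 512  # unit of sysfs sector counts, always

def device_size_bytes(sectors):
    # Correct computation: sector counts are 512-byte units regardless
    # of the device's hardware/physical block size.
    return sectors * KERNEL_SECTOR_SIZE

# Hypothetical 4Kn drive reporting 7814037168 sectors (~4 TB):
sectors = 7814037168
assert device_size_bytes(sectors) == 4000787030016
# The old code multiplied by physical_block_size (4096), giving 8x:
assert sectors * 4096 == device_size_bytes(sectors) * 8
```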
@@ -1963,7 +1984,7 @@ class GenericBsdIfconfigNetwork(Network):
return interface['v4'], interface['v6']
def get_interfaces_info(self, ifconfig_path):
def get_interfaces_info(self, ifconfig_path, ifconfig_options='-a'):
interfaces = {}
current_if = {}
ips = dict(
@@ -1973,7 +1994,7 @@ class GenericBsdIfconfigNetwork(Network):
# FreeBSD, DragonflyBSD, NetBSD, OpenBSD and OS X all implicitly add '-a'
# when running the command 'ifconfig'.
# Solaris must explicitly run the command 'ifconfig -a'.
rc, out, err = module.run_command([ifconfig_path, '-a'])
rc, out, err = module.run_command([ifconfig_path, ifconfig_options])
for line in out.split('\n'):
@@ -2143,14 +2164,14 @@ class AIXNetwork(GenericBsdIfconfigNetwork, Network):
platform = 'AIX'
# AIX 'ifconfig -a' does not have three words in the interface line
def get_interfaces_info(self, ifconfig_path):
def get_interfaces_info(self, ifconfig_path, ifconfig_options):
interfaces = {}
current_if = {}
ips = dict(
all_ipv4_addresses = [],
all_ipv6_addresses = [],
)
rc, out, err = module.run_command([ifconfig_path, '-a'])
rc, out, err = module.run_command([ifconfig_path, ifconfig_options])
for line in out.split('\n'):
@@ -2184,7 +2205,7 @@ class AIXNetwork(GenericBsdIfconfigNetwork, Network):
rc, out, err = module.run_command([uname_path, '-W'])
# don't bother with wpars, it does not work
# zero means not in wpar
if out.split()[0] == '0':
if not rc and out.split()[0] == '0':
if current_if['macaddress'] == 'unknown' and re.match('^en', current_if['device']):
entstat_path = module.get_bin_path('entstat')
if entstat_path:
@@ -2230,6 +2251,10 @@ class OpenBSDNetwork(GenericBsdIfconfigNetwork, Network):
"""
platform = 'OpenBSD'
# OpenBSD 'ifconfig -a' does not have information about aliases
def get_interfaces_info(self, ifconfig_path, ifconfig_options='-aA'):
return super(OpenBSDNetwork, self).get_interfaces_info(ifconfig_path, ifconfig_options)
# Return macaddress instead of lladdr
def parse_lladdr_line(self, words, current_if, ips):
current_if['macaddress'] = words[1]
@@ -2383,7 +2408,7 @@ class LinuxVirtual(Virtual):
if os.path.exists('/proc/1/cgroup'):
for line in get_file_lines('/proc/1/cgroup'):
if re.search('/docker/', line):
if re.search(r'/docker(/|-[0-9a-f]+\.scope)', line):
self.facts['virtualization_type'] = 'docker'
self.facts['virtualization_role'] = 'guest'
return
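The widened regex above matches both the classic `/docker/<id>` cgroup layout and the systemd-driver `docker-<id>.scope` form. A quick check (the sample cgroup lines are illustrative):

```python
import re

DOCKER_RE = re.compile(r'/docker(/|-[0-9a-f]+\.scope)')

# Classic cgroup path used by older Docker setups:
classic = '1:cpu:/docker/0123456789abcdef'
# systemd cgroup driver path (docker-<id>.scope):
systemd = '1:cpu:/system.slice/docker-0123456789abcdef.scope'
# A non-container line should not match:
host = '1:cpu:/'

assert DOCKER_RE.search(classic)
assert DOCKER_RE.search(systemd)
assert not DOCKER_RE.search(host)
```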
@@ -2439,7 +2464,7 @@ class LinuxVirtual(Virtual):
self.facts['virtualization_role'] = 'guest'
return
if sys_vendor == 'oVirt':
if sys_vendor == 'oVirt':
self.facts['virtualization_type'] = 'kvm'
self.facts['virtualization_role'] = 'guest'
return


@@ -73,16 +73,19 @@ def openstack_find_nova_addresses(addresses, ext_tag, key_name=None):
def openstack_full_argument_spec(**kwargs):
spec = dict(
cloud=dict(default=None),
auth_plugin=dict(default=None),
auth_type=dict(default=None),
auth=dict(default=None),
auth_token=dict(default=None),
region_name=dict(default=None),
availability_zone=dict(default=None),
state=dict(default='present', choices=['absent', 'present']),
verify=dict(default=True, aliases=['validate_certs']),
cacert=dict(default=None),
cert=dict(default=None),
key=dict(default=None),
wait=dict(default=True, type='bool'),
timeout=dict(default=180, type='int'),
api_timeout=dict(default=None, type='int'),
endpoint_type=dict(
default='publicURL', choices=['publicURL', 'internalURL']
default='public', choices=['public', 'internal', 'admin']
)
)
spec.update(kwargs)
@@ -90,15 +93,7 @@ def openstack_full_argument_spec(**kwargs):
def openstack_module_kwargs(**kwargs):
ret = dict(
required_one_of=[
['cloud', 'auth'],
],
mutually_exclusive=[
['auth', 'auth_token'],
['auth_plugin', 'auth_token'],
],
)
ret = {}
for key in ('mutually_exclusive', 'required_together', 'required_one_of'):
if key in kwargs:
if key in ret:


@@ -151,7 +151,7 @@ Function Get-FileChecksum($path)
{
$sp = new-object -TypeName System.Security.Cryptography.SHA1CryptoServiceProvider;
$fp = [System.IO.File]::Open($path, [System.IO.Filemode]::Open, [System.IO.FileAccess]::Read);
[System.BitConverter]::ToString($sp.ComputeHash($fp)).Replace("-", "").ToLower();
$hash = [System.BitConverter]::ToString($sp.ComputeHash($fp)).Replace("-", "").ToLower();
$fp.Dispose();
}
ElseIf (Test-Path -PathType Container $path)


@@ -5,6 +5,7 @@
# to the complete work.
#
# Copyright (c), Michael DeHaan <michael.dehaan@gmail.com>, 2012-2013
# Copyright (c), Toshio Kuratomi <tkuratomi@ansible.com>, 2015
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification,
@@ -25,12 +26,60 @@
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
try:
import urllib
HAS_URLLIB = True
except:
HAS_URLLIB = False
#
# The match_hostname function and supporting code is under the terms and
# conditions of the Python Software Foundation License. They were taken from
# the Python3 standard library and adapted for use in Python2. See comments in the
# source for which code precisely is under this License. PSF License text
# follows:
#
# PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2
# --------------------------------------------
#
# 1. This LICENSE AGREEMENT is between the Python Software Foundation
# ("PSF"), and the Individual or Organization ("Licensee") accessing and
# otherwise using this software ("Python") in source or binary form and
# its associated documentation.
#
# 2. Subject to the terms and conditions of this License Agreement, PSF hereby
# grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce,
# analyze, test, perform and/or display publicly, prepare derivative works,
# distribute, and otherwise use Python alone or in any derivative version,
# provided, however, that PSF's License Agreement and PSF's notice of copyright,
# i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
# 2011, 2012, 2013, 2014 Python Software Foundation; All Rights Reserved" are
# retained in Python alone or in any derivative version prepared by Licensee.
#
# 3. In the event Licensee prepares a derivative work that is based on
# or incorporates Python or any part thereof, and wants to make
# the derivative work available to others as provided herein, then
# Licensee hereby agrees to include in any such work a brief summary of
# the changes made to Python.
#
# 4. PSF is making Python available to Licensee on an "AS IS"
# basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
# IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND
# DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
# FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT
# INFRINGE ANY THIRD PARTY RIGHTS.
#
# 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
# FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
# A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON,
# OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
#
# 6. This License Agreement will automatically terminate upon a material
# breach of its terms and conditions.
#
# 7. Nothing in this License Agreement shall be deemed to create any
# relationship of agency, partnership, or joint venture between PSF and
# Licensee. This License Agreement does not grant permission to use PSF
# trademarks or trade name in a trademark sense to endorse or promote
# products or services of Licensee, or any third party.
#
# 8. By copying, installing or otherwise using Python, Licensee
# agrees to be bound by the terms and conditions of this License
# Agreement.
try:
import urllib2
@@ -46,14 +95,174 @@ except:
try:
import ssl
HAS_SSL=True
HAS_SSL = True
except:
HAS_SSL=False
HAS_SSL = False
try:
# SNI Handling needs python2.7.9's SSLContext
from ssl import create_default_context, SSLContext
HAS_SSLCONTEXT = True
except ImportError:
HAS_SSLCONTEXT = False
# Select a protocol that includes all secure tls protocols
# Exclude insecure ssl protocols if possible
if HAS_SSL:
# If we can't find extra tls methods, ssl.PROTOCOL_TLSv1 is sufficient
PROTOCOL = ssl.PROTOCOL_TLSv1
if not HAS_SSLCONTEXT and HAS_SSL:
try:
import ctypes, ctypes.util
except ImportError:
# python 2.4 (likely rhel5 which doesn't have tls1.1 support in its openssl)
pass
else:
libssl_name = ctypes.util.find_library('ssl')
libssl = ctypes.CDLL(libssl_name)
for method in ('TLSv1_1_method', 'TLSv1_2_method'):
try:
libssl[method]
# Found something - we'll let openssl autonegotiate and hope
# the server has disabled sslv2 and 3. best we can do.
PROTOCOL = ssl.PROTOCOL_SSLv23
break
except AttributeError:
pass
del libssl
HAS_MATCH_HOSTNAME = True
try:
from ssl import match_hostname, CertificateError
except ImportError:
try:
from backports.ssl_match_hostname import match_hostname, CertificateError
except ImportError:
HAS_MATCH_HOSTNAME = False
if not HAS_MATCH_HOSTNAME:
###
### The following block of code is under the terms and conditions of the
### Python Software Foundation License
###
"""The match_hostname() function from Python 3.4, essential when using SSL."""
import re
class CertificateError(ValueError):
pass
def _dnsname_match(dn, hostname, max_wildcards=1):
"""Matching according to RFC 6125, section 6.4.3
http://tools.ietf.org/html/rfc6125#section-6.4.3
"""
pats = []
if not dn:
return False
# Ported from python3-syntax:
# leftmost, *remainder = dn.split(r'.')
parts = dn.split(r'.')
leftmost = parts[0]
remainder = parts[1:]
wildcards = leftmost.count('*')
if wildcards > max_wildcards:
# Issue #17980: avoid denials of service by refusing more
# than one wildcard per fragment. A survey of established
# policy among SSL implementations showed it to be a
# reasonable choice.
raise CertificateError(
"too many wildcards in certificate DNS name: " + repr(dn))
# speed up common case w/o wildcards
if not wildcards:
return dn.lower() == hostname.lower()
# RFC 6125, section 6.4.3, subitem 1.
# The client SHOULD NOT attempt to match a presented identifier in which
# the wildcard character comprises a label other than the left-most label.
if leftmost == '*':
# When '*' is a fragment by itself, it matches a non-empty dotless
# fragment.
pats.append('[^.]+')
elif leftmost.startswith('xn--') or hostname.startswith('xn--'):
# RFC 6125, section 6.4.3, subitem 3.
# The client SHOULD NOT attempt to match a presented identifier
# where the wildcard character is embedded within an A-label or
# U-label of an internationalized domain name.
pats.append(re.escape(leftmost))
else:
# Otherwise, '*' matches any dotless string, e.g. www*
pats.append(re.escape(leftmost).replace(r'\*', '[^.]*'))
# add the remaining fragments, ignore any wildcards
for frag in remainder:
pats.append(re.escape(frag))
pat = re.compile(r'\A' + r'\.'.join(pats) + r'\Z', re.IGNORECASE)
return pat.match(hostname)
def match_hostname(cert, hostname):
"""Verify that *cert* (in decoded format as returned by
SSLSocket.getpeercert()) matches the *hostname*. RFC 2818 and RFC 6125
rules are followed, but IP addresses are not accepted for *hostname*.
CertificateError is raised on failure. On success, the function
returns nothing.
"""
if not cert:
raise ValueError("empty or no certificate")
dnsnames = []
san = cert.get('subjectAltName', ())
for key, value in san:
if key == 'DNS':
if _dnsname_match(value, hostname):
return
dnsnames.append(value)
if not dnsnames:
# The subject is only checked when there is no dNSName entry
# in subjectAltName
for sub in cert.get('subject', ()):
for key, value in sub:
# XXX according to RFC 2818, the most specific Common Name
# must be used.
if key == 'commonName':
if _dnsname_match(value, hostname):
return
dnsnames.append(value)
if len(dnsnames) > 1:
raise CertificateError("hostname %r "
"doesn't match either of %s"
% (hostname, ', '.join(map(repr, dnsnames))))
elif len(dnsnames) == 1:
raise CertificateError("hostname %r "
"doesn't match %r"
% (hostname, dnsnames[0]))
else:
raise CertificateError("no appropriate commonName or "
"subjectAltName fields were found")
###
### End of Python Software Foundation Licensed code
###
HAS_MATCH_HOSTNAME = True
import httplib
import os
import re
import sys
import socket
import platform
import tempfile
@@ -80,7 +289,35 @@ zKPZsZ2miVGclicJHzm5q080b1p/sZtuKIEZk6vZqEg=
-----END CERTIFICATE-----
"""
#
# Exceptions
#
class ConnectionError(Exception):
"""Failed to connect to the server"""
pass
class ProxyError(ConnectionError):
"""Failure to connect because of a proxy"""
pass
class SSLValidationError(ConnectionError):
"""Failure to connect due to SSL validation failing"""
pass
class NoSSLError(SSLValidationError):
"""Needed to connect to an HTTPS url but no ssl library available to verify the certificate"""
pass
class CustomHTTPSConnection(httplib.HTTPSConnection):
def __init__(self, *args, **kwargs):
httplib.HTTPSConnection.__init__(self, *args, **kwargs)
if HAS_SSLCONTEXT:
self.context = create_default_context()
if self.cert_file:
self.context.load_cert_chain(self.cert_file, self.key_file)
def connect(self):
"Connect to a host on a given (SSL) port."
@@ -91,7 +328,10 @@ class CustomHTTPSConnection(httplib.HTTPSConnection):
if self._tunnel_host:
self.sock = sock
self._tunnel()
self.sock = ssl.wrap_socket(sock, keyfile=self.key_file, certfile=self.cert_file, ssl_version=ssl.PROTOCOL_TLSv1)
if HAS_SSLCONTEXT:
self.sock = self.context.wrap_socket(sock, server_hostname=self.host)
else:
self.sock = ssl.wrap_socket(sock, keyfile=self.key_file, certfile=self.cert_file, ssl_version=PROTOCOL)
class CustomHTTPSHandler(urllib2.HTTPSHandler):
@@ -144,7 +384,7 @@ def generic_urlparse(parts):
username, password = auth.split(':', 1)
generic_parts['username'] = username
generic_parts['password'] = password
generic_parts['hostname'] = hostnme
generic_parts['hostname'] = hostname
generic_parts['port'] = port
except:
generic_parts['username'] = None
@@ -180,8 +420,7 @@ class SSLValidationHandler(urllib2.BaseHandler):
'''
CONNECT_COMMAND = "CONNECT %s:%s HTTP/1.0\r\nConnection: close\r\n"
def __init__(self, module, hostname, port):
self.module = module
def __init__(self, hostname, port):
self.hostname = hostname
self.port = port
@@ -191,23 +430,22 @@ class SSLValidationHandler(urllib2.BaseHandler):
ca_certs = []
paths_checked = []
platform = get_platform()
distribution = get_distribution()
system = platform.system()
# build a list of paths to check for .crt/.pem files
# based on the platform type
paths_checked.append('/etc/ssl/certs')
if platform == 'Linux':
if system == 'Linux':
paths_checked.append('/etc/pki/ca-trust/extracted/pem')
paths_checked.append('/etc/pki/tls/certs')
paths_checked.append('/usr/share/ca-certificates/cacert.org')
elif platform == 'FreeBSD':
elif system == 'FreeBSD':
paths_checked.append('/usr/local/share/certs')
elif platform == 'OpenBSD':
elif system == 'OpenBSD':
paths_checked.append('/etc/ssl')
elif platform == 'NetBSD':
elif system == 'NetBSD':
ca_certs.append('/etc/openssl/certs')
elif platform == 'SunOS':
elif system == 'SunOS':
paths_checked.append('/opt/local/etc/openssl/certs')
# fall back to a user-deployed cert in a standard
@@ -217,7 +455,7 @@ class SSLValidationHandler(urllib2.BaseHandler):
tmp_fd, tmp_path = tempfile.mkstemp()
# Write the dummy ca cert if we are running on Mac OS X
if platform == 'Darwin':
if system == 'Darwin':
os.write(tmp_fd, DUMMY_CA_CERT)
# Default Homebrew path for OpenSSL certs
paths_checked.append('/usr/local/etc/openssl')
@@ -250,7 +488,7 @@ class SSLValidationHandler(urllib2.BaseHandler):
if int(resp_code) not in valid_codes:
raise Exception
except:
self.module.fail_json(msg='Connection to proxy failed')
raise ProxyError('Connection to proxy failed')
def detect_no_proxy(self, url):
'''
@@ -268,9 +506,17 @@ class SSLValidationHandler(urllib2.BaseHandler):
return False
return True
def _make_context(self, tmp_ca_cert_path):
context = create_default_context()
context.load_verify_locations(tmp_ca_cert_path)
return context
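`_make_context` stays this small because `ssl.create_default_context()` already enables strict verification; the helper only has to point it at the temporary CA bundle. A quick illustration of those defaults:

```python
import ssl

# create_default_context() turns on certificate verification and hostname
# checking out of the box; _make_context merely adds the temp CA bundle
# via load_verify_locations().
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```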
def http_request(self, req):
tmp_ca_cert_path, paths_checked = self.get_ca_certs()
https_proxy = os.environ.get('https_proxy')
context = None
if HAS_SSLCONTEXT:
context = self._make_context(tmp_ca_cert_path)
# Detect if 'no_proxy' environment variable is set and if our URL is included
use_proxy = self.detect_no_proxy(req.get_full_url())
@@ -292,25 +538,40 @@ class SSLValidationHandler(urllib2.BaseHandler):
s.sendall('\r\n')
connect_result = s.recv(4096)
self.validate_proxy_response(connect_result)
ssl_s = ssl.wrap_socket(s, ca_certs=tmp_ca_cert_path, cert_reqs=ssl.CERT_REQUIRED)
if context:
ssl_s = context.wrap_socket(s, server_hostname=proxy_parts.get('hostname'))
else:
ssl_s = ssl.wrap_socket(s, ca_certs=tmp_ca_cert_path, cert_reqs=ssl.CERT_REQUIRED, ssl_version=PROTOCOL)
match_hostname(ssl_s.getpeercert(), self.hostname)
else:
self.module.fail_json(msg='Unsupported proxy scheme: %s. Currently ansible only supports HTTP proxies.' % proxy_parts.get('scheme'))
raise ProxyError('Unsupported proxy scheme: %s. Currently ansible only supports HTTP proxies.' % proxy_parts.get('scheme'))
else:
s.connect((self.hostname, self.port))
ssl_s = ssl.wrap_socket(s, ca_certs=tmp_ca_cert_path, cert_reqs=ssl.CERT_REQUIRED)
if context:
ssl_s = context.wrap_socket(s, server_hostname=self.hostname)
else:
ssl_s = ssl.wrap_socket(s, ca_certs=tmp_ca_cert_path, cert_reqs=ssl.CERT_REQUIRED, ssl_version=PROTOCOL)
match_hostname(ssl_s.getpeercert(), self.hostname)
# close the ssl connection
#ssl_s.unwrap()
s.close()
except (ssl.SSLError, socket.error), e:
# fail if we tried all of the certs but none worked
if 'connection refused' in str(e).lower():
self.module.fail_json(msg='Failed to connect to %s:%s.' % (self.hostname, self.port))
raise ConnectionError('Failed to connect to %s:%s.' % (self.hostname, self.port))
else:
self.module.fail_json(
msg='Failed to validate the SSL certificate for %s:%s. ' % (self.hostname, self.port) + \
'Use validate_certs=no or make sure your managed systems have a valid CA certificate installed. ' + \
'Paths checked for this platform: %s' % ", ".join(paths_checked)
raise SSLValidationError('Failed to validate the SSL certificate for %s:%s.'
' Make sure your managed systems have a valid CA'
' certificate installed. If the website serving the url'
' uses SNI you need python >= 2.7.9 on your managed'
' machine. You can use validate_certs=False if you do'
' not need to confirm the server\'s identity but this is'
' unsafe and not recommended.'
' Paths checked for this platform: %s' % (self.hostname, self.port, ", ".join(paths_checked))
)
except CertificateError:
raise SSLValidationError("SSL Certificate does not belong to %s. Make sure the url has a certificate that belongs to it or use validate_certs=False (insecure)" % self.hostname)
try:
# cleanup the temp file created, don't worry
# if it fails for some reason
@@ -322,73 +583,42 @@ class SSLValidationHandler(urllib2.BaseHandler):
https_request = http_request
def url_argument_spec():
'''
Creates an argument spec that can be used with any module
that will be requesting content via urllib/urllib2
'''
return dict(
url = dict(),
force = dict(default='no', aliases=['thirsty'], type='bool'),
http_agent = dict(default='ansible-httpget'),
use_proxy = dict(default='yes', type='bool'),
validate_certs = dict(default='yes', type='bool'),
url_username = dict(required=False),
url_password = dict(required=False),
)
def fetch_url(module, url, data=None, headers=None, method=None,
use_proxy=True, force=False, last_mod_time=None, timeout=10):
# Rewrite of fetch_url to not require the module environment
def open_url(url, data=None, headers=None, method=None, use_proxy=True,
force=False, last_mod_time=None, timeout=10, validate_certs=True,
url_username=None, url_password=None, http_agent=None):
'''
Fetches a file from an HTTP/FTP server using urllib2
'''
if not HAS_URLLIB:
module.fail_json(msg='urllib is not installed')
if not HAS_URLLIB2:
module.fail_json(msg='urllib2 is not installed')
elif not HAS_URLPARSE:
module.fail_json(msg='urlparse is not installed')
r = None
handlers = []
info = dict(url=url)
distribution = get_distribution()
# Get validate_certs from the module params
validate_certs = module.params.get('validate_certs', True)
# FIXME: change the following to use the generic_urlparse function
# to remove the indexed references for 'parsed'
parsed = urlparse.urlparse(url)
if parsed[0] == 'https':
if not HAS_SSL and validate_certs:
if distribution == 'Redhat':
module.fail_json(msg='SSL validation is not available in your version of python. You can use validate_certs=no, however this is unsafe and not recommended. You can also install python-ssl from EPEL')
else:
module.fail_json(msg='SSL validation is not available in your version of python. You can use validate_certs=no, however this is unsafe and not recommended')
if parsed[0] == 'https' and validate_certs:
if not HAS_SSL:
raise NoSSLError('SSL validation is not available in your version of python. You can use validate_certs=False, however this is unsafe and not recommended')
elif validate_certs:
# do the cert validation
netloc = parsed[1]
if '@' in netloc:
netloc = netloc.split('@', 1)[1]
if ':' in netloc:
hostname, port = netloc.split(':', 1)
else:
hostname = netloc
port = 443
# create the SSL validation handler and
# add it to the list of handlers
ssl_handler = SSLValidationHandler(module, hostname, port)
handlers.append(ssl_handler)
# do the cert validation
netloc = parsed[1]
if '@' in netloc:
netloc = netloc.split('@', 1)[1]
if ':' in netloc:
hostname, port = netloc.split(':', 1)
port = int(port)
else:
hostname = netloc
port = 443
# create the SSL validation handler and
# add it to the list of handlers
ssl_handler = SSLValidationHandler(hostname, port)
handlers.append(ssl_handler)
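The netloc handling above can be sketched on its own (py3 names; `split_host_port` is a hypothetical helper, not part of the module): drop any credentials, then split an explicit port off the host, defaulting to 443.

```python
from urllib.parse import urlsplit

def split_host_port(url, default_port=443):
    # Mirror the validation-handler setup: strip user:pass@, split host:port.
    netloc = urlsplit(url).netloc
    if '@' in netloc:
        netloc = netloc.split('@', 1)[1]
    if ':' in netloc:
        host, port = netloc.split(':', 1)
        return host, int(port)
    return netloc, default_port

print(split_host_port('https://user:pw@example.com:8443/p'))  # ('example.com', 8443)
print(split_host_port('https://example.com/p'))               # ('example.com', 443)
```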
if parsed[0] != 'ftp':
username = module.params.get('url_username', '')
username = url_username
if username:
password = module.params.get('url_password', '')
password = url_password
netloc = parsed[1]
elif '@' in parsed[1]:
credentials, netloc = parsed[1].split('@', 1)
@@ -432,14 +662,14 @@ def fetch_url(module, url, data=None, headers=None, method=None,
if method:
if method.upper() not in ('OPTIONS','GET','HEAD','POST','PUT','DELETE','TRACE','CONNECT'):
module.fail_json(msg='invalid HTTP request method; %s' % method.upper())
raise ConnectionError('invalid HTTP request method; %s' % method.upper())
request = RequestWithMethod(url, method.upper(), data)
else:
request = urllib2.Request(url, data)
# add the custom agent header, to help prevent issues
# with sites that block the default urllib agent string
request.add_header('User-agent', module.params.get('http_agent'))
request.add_header('User-agent', http_agent)
# if we're ok with getting a 304, set the timestamp in the
# header, otherwise make sure we don't get a cached copy
@@ -452,20 +682,81 @@ def fetch_url(module, url, data=None, headers=None, method=None,
# user defined headers now, which may override things we've set above
if headers:
if not isinstance(headers, dict):
module.fail_json("headers provided to fetch_url() must be a dict")
raise ValueError("headers provided to fetch_url() must be a dict")
for header in headers:
request.add_header(header, headers[header])
urlopen_args = [request, None]
if sys.version_info >= (2,6,0):
# urlopen in python prior to 2.6.0 did not
# have a timeout parameter
urlopen_args.append(timeout)
if HAS_SSLCONTEXT and not validate_certs:
# In 2.7.9, the default context validates certificates
context = SSLContext(ssl.PROTOCOL_SSLv23)
context.options |= ssl.OP_NO_SSLv2
context.options |= ssl.OP_NO_SSLv3
context.verify_mode = ssl.CERT_NONE
context.check_hostname = False
urlopen_args += (None, None, None, context)
r = urllib2.urlopen(*urlopen_args)
return r
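On an SSLContext-capable python, the `validate_certs=False` path above builds a permissive context roughly equivalent to this sketch (same flags; note the ssl module has since deprecated `PROTOCOL_SSLv23` in favor of `PROTOCOL_TLS`):

```python
import ssl

# Permissive client context: SSLv2/SSLv3 disabled, but certificate and
# hostname verification switched off, matching validate_certs=False.
ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
ctx.options |= ssl.OP_NO_SSLv2
ctx.options |= ssl.OP_NO_SSLv3
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
print(ctx.verify_mode == ssl.CERT_NONE)  # True
```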
#
# Module-related functions
#
def url_argument_spec():
'''
Creates an argument spec that can be used with any module
that will be requesting content via urllib/urllib2
'''
return dict(
url = dict(),
force = dict(default='no', aliases=['thirsty'], type='bool'),
http_agent = dict(default='ansible-httpget'),
use_proxy = dict(default='yes', type='bool'),
validate_certs = dict(default='yes', type='bool'),
url_username = dict(required=False),
url_password = dict(required=False),
)
def fetch_url(module, url, data=None, headers=None, method=None,
use_proxy=True, force=False, last_mod_time=None, timeout=10):
'''
Fetches a file from an HTTP/FTP server using urllib2. Requires the module environment
'''
if not HAS_URLLIB2:
module.fail_json(msg='urllib2 is not installed')
elif not HAS_URLPARSE:
module.fail_json(msg='urlparse is not installed')
# Get validate_certs from the module params
validate_certs = module.params.get('validate_certs', True)
username = module.params.get('url_username', '')
password = module.params.get('url_password', '')
http_agent = module.params.get('http_agent', None)
r = None
info = dict(url=url)
try:
if sys.version_info < (2,6,0):
# urlopen in python prior to 2.6.0 did not
# have a timeout parameter
r = urllib2.urlopen(request, None)
else:
r = urllib2.urlopen(request, None, timeout)
r = open_url(url, data=data, headers=headers, method=method,
use_proxy=use_proxy, force=force, last_mod_time=last_mod_time, timeout=timeout,
validate_certs=validate_certs, url_username=username,
url_password=password, http_agent=http_agent)
info.update(r.info())
info['url'] = r.geturl() # The URL goes in too, because of redirects.
info.update(dict(msg="OK (%s bytes)" % r.headers.get('Content-Length', 'unknown'), status=200))
except NoSSLError, e:
distribution = get_distribution()
if distribution.lower() == 'redhat':
module.fail_json(msg='%s. You can also install python-ssl from EPEL' % str(e))
except (ConnectionError, ValueError), e:
module.fail_json(msg=str(e))
except urllib2.HTTPError, e:
info.update(dict(msg=str(e), status=e.code))
except urllib2.URLError, e:
@@ -477,4 +768,3 @@ def fetch_url(module, url, data=None, headers=None, method=None,
info.update(dict(msg="An unknown error occurred: %s" % str(e), status=-1))
return r, info

@@ -1 +1 @@
Subproject commit bd997b1066e1e98a66cf98643c78adf8e080e4b4
Subproject commit 746d51d1ff7a7bb3c2c71a2d8239cba93b6dea96

@@ -1 +1 @@
Subproject commit e60b2167f5ebfd642fe04cb22805203764959f7c
Subproject commit 2c073442b02ddbf9f094378cd5147c595fe4b46f


@@ -60,15 +60,12 @@ class PlayBook(object):
timeout = C.DEFAULT_TIMEOUT,
remote_user = C.DEFAULT_REMOTE_USER,
remote_pass = C.DEFAULT_REMOTE_PASS,
sudo_pass = C.DEFAULT_SUDO_PASS,
remote_port = None,
transport = C.DEFAULT_TRANSPORT,
private_key_file = C.DEFAULT_PRIVATE_KEY_FILE,
callbacks = None,
runner_callbacks = None,
stats = None,
sudo = False,
sudo_user = C.DEFAULT_SUDO_USER,
extra_vars = None,
only_tags = None,
skip_tags = None,
@@ -77,11 +74,13 @@ class PlayBook(object):
check = False,
diff = False,
any_errors_fatal = False,
su = False,
su_user = False,
su_pass = False,
vault_password = False,
force_handlers = False,
# privilege escalation
become = C.DEFAULT_BECOME,
become_method = C.DEFAULT_BECOME_METHOD,
become_user = C.DEFAULT_BECOME_USER,
become_pass = None,
):
"""
@@ -92,13 +91,11 @@ class PlayBook(object):
timeout: connection timeout
remote_user: run as this user if not specified in a particular play
remote_pass: use this remote password (for all plays) vs using SSH keys
sudo_pass: if sudo==True, and a password is required, this is the sudo password
remote_port: default remote port to use if not specified with the host or play
transport: how to connect to hosts that don't specify a transport (local, paramiko, etc)
callbacks output callbacks for the playbook
runner_callbacks: more callbacks, this time for the runner API
stats: holds aggregate data about events occurring to each host
sudo: if not specified per play, requests all plays use sudo mode
inventory: can be specified instead of host_list to use a pre-existing inventory object
check: don't change anything, just try to detect some potential changes
any_errors_fatal: terminate the entire execution immediately when one of the hosts has failed
@@ -139,21 +136,20 @@ class PlayBook(object):
self.callbacks = callbacks
self.runner_callbacks = runner_callbacks
self.stats = stats
self.sudo = sudo
self.sudo_pass = sudo_pass
self.sudo_user = sudo_user
self.extra_vars = extra_vars
self.global_vars = {}
self.private_key_file = private_key_file
self.only_tags = only_tags
self.skip_tags = skip_tags
self.any_errors_fatal = any_errors_fatal
self.su = su
self.su_user = su_user
self.su_pass = su_pass
self.vault_password = vault_password
self.force_handlers = force_handlers
self.become = become
self.become_method = become_method
self.become_user = become_user
self.become_pass = become_pass
self.callbacks.playbook = self
self.runner_callbacks.playbook = self
@@ -379,17 +375,17 @@ class PlayBook(object):
# *****************************************************
def _trim_unavailable_hosts(self, hostlist=[]):
def _trim_unavailable_hosts(self, hostlist=[], keep_failed=False):
''' returns a list of hosts that haven't failed and aren't dark '''
return [ h for h in hostlist if (h not in self.stats.failures) and (h not in self.stats.dark)]
return [ h for h in hostlist if (keep_failed or h not in self.stats.failures) and (h not in self.stats.dark)]
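The effect of the new `keep_failed` flag is easiest to see in a standalone sketch (hypothetical names; `failures` and `dark` stand in for the stats sets):

```python
def trim_unavailable_hosts(hostlist, failures, dark, keep_failed=False):
    # dark (unreachable) hosts are always dropped; failed hosts survive
    # only when keep_failed is set, which forced handlers rely on.
    return [h for h in hostlist
            if (keep_failed or h not in failures) and h not in dark]

hosts = ['web1', 'web2', 'db1']
print(trim_unavailable_hosts(hosts, failures={'web2'}, dark={'db1'}))
# ['web1']
print(trim_unavailable_hosts(hosts, failures={'web2'}, dark={'db1'}, keep_failed=True))
# ['web1', 'web2']
```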
# *****************************************************
def _run_task_internal(self, task):
def _run_task_internal(self, task, include_failed=False):
''' run a particular module step in a playbook '''
hosts = self._trim_unavailable_hosts(self.inventory.list_hosts(task.play._play_hosts))
hosts = self._trim_unavailable_hosts(self.inventory.list_hosts(task.play._play_hosts), keep_failed=include_failed)
self.inventory.restrict_to(hosts)
runner = ansible.runner.Runner(
@@ -416,10 +412,7 @@ class PlayBook(object):
basedir=task.play.basedir,
conditional=task.when,
callbacks=self.runner_callbacks,
sudo=task.sudo,
sudo_user=task.sudo_user,
transport=task.transport,
sudo_pass=task.sudo_pass,
is_playbook=True,
check=self.check,
diff=self.diff,
@@ -429,13 +422,14 @@ class PlayBook(object):
accelerate_port=task.play.accelerate_port,
accelerate_ipv6=task.play.accelerate_ipv6,
error_on_undefined_vars=C.DEFAULT_UNDEFINED_VAR_BEHAVIOR,
su=task.su,
su_user=task.su_user,
su_pass=task.su_pass,
vault_pass = self.vault_password,
run_hosts=hosts,
no_log=task.no_log,
run_once=task.run_once,
become=task.become,
become_method=task.become_method,
become_user=task.become_user,
become_pass=task.become_pass,
)
runner.module_vars.update({'play_hosts': hosts})
@@ -499,7 +493,8 @@ class PlayBook(object):
task.ignore_errors = utils.check_conditional(cond, play.basedir, task.module_vars, fail_on_undefined=C.DEFAULT_UNDEFINED_VAR_BEHAVIOR)
# load up an appropriate ansible runner to run the task in parallel
results = self._run_task_internal(task)
include_failed = is_handler and play.force_handlers
results = self._run_task_internal(task, include_failed=include_failed)
# if no hosts are matched, carry on
hosts_remaining = True
@@ -551,7 +546,7 @@ class PlayBook(object):
_register_play_vars(host, result)
# flag which notify handlers need to be run
if len(task.notify) > 0:
if task.notify and len(task.notify) > 0:
for host, results in results.get('contacted',{}).iteritems():
if results.get('changed', False):
for handler_name in task.notify:
@@ -616,12 +611,10 @@ class PlayBook(object):
setup_cache=self.SETUP_CACHE,
vars_cache=self.VARS_CACHE,
callbacks=self.runner_callbacks,
sudo=play.sudo,
sudo_user=play.sudo_user,
sudo_pass=self.sudo_pass,
su=play.su,
su_user=play.su_user,
su_pass=self.su_pass,
become=play.become,
become_method=play.become_method,
become_user=play.become_user,
become_pass=self.become_pass,
vault_pass=self.vault_password,
transport=play.transport,
is_playbook=True,
@@ -819,7 +812,7 @@ class PlayBook(object):
# if no hosts remain, drop out
if not host_list:
if self.force_handlers:
if play.force_handlers:
task_errors = True
break
else:
@@ -829,10 +822,15 @@ class PlayBook(object):
# lift restrictions after each play finishes
self.inventory.lift_also_restriction()
if task_errors and not self.force_handlers:
if task_errors and not play.force_handlers:
# if there were failed tasks and handler execution
# is not forced, quit the play with an error
return False
elif task_errors:
# if there were failed tasks and handler execution is forced,
# execute all handlers and quit the play with an error
self.run_handlers(play)
return False
else:
# no errors, go ahead and execute all handlers
if not self.run_handlers(play):
@@ -864,7 +862,7 @@ class PlayBook(object):
play.max_fail_pct = 0
if (hosts_count - len(host_list)) > int((play.max_fail_pct)/100.0 * hosts_count):
host_list = None
if not host_list and not self.force_handlers:
if not host_list and not play.force_handlers:
self.callbacks.on_no_hosts_remaining()
return False


@@ -22,6 +22,7 @@ from ansible import utils
from ansible import errors
from ansible.playbook.task import Task
from ansible.module_utils.splitter import split_args, unquote
from ansible.utils.unicode import to_bytes
import ansible.constants as C
import pipes
import shlex
@@ -32,24 +33,26 @@ import uuid
class Play(object):
__slots__ = [
'hosts', 'name', 'vars', 'vars_file_vars', 'role_vars', 'default_vars', 'vars_prompt', 'vars_files',
'handlers', 'remote_user', 'remote_port', 'included_roles', 'accelerate',
'accelerate_port', 'accelerate_ipv6', 'sudo', 'sudo_user', 'transport', 'playbook',
'tags', 'gather_facts', 'serial', '_ds', '_handlers', '_tasks',
'basedir', 'any_errors_fatal', 'roles', 'max_fail_pct', '_play_hosts', 'su', 'su_user',
'vault_password', 'no_log', 'environment',
_pb_common = [
'accelerate', 'accelerate_ipv6', 'accelerate_port', 'any_errors_fatal', 'become',
'become_method', 'become_user', 'environment', 'force_handlers', 'gather_facts',
'handlers', 'hosts', 'name', 'no_log', 'remote_user', 'roles', 'serial', 'su',
'su_user', 'sudo', 'sudo_user', 'tags', 'vars', 'vars_files', 'vars_prompt',
'vault_password',
]
__slots__ = _pb_common + [
'_ds', '_handlers', '_play_hosts', '_tasks', 'any_errors_fatal', 'basedir',
'default_vars', 'included_roles', 'max_fail_pct', 'playbook', 'remote_port',
'role_vars', 'transport', 'vars_file_vars',
]
# to catch typos and so forth -- these are userland names
# and don't line up 1:1 with how they are stored
VALID_KEYS = frozenset((
'hosts', 'name', 'vars', 'vars_prompt', 'vars_files',
'tasks', 'handlers', 'remote_user', 'user', 'port', 'include', 'accelerate', 'accelerate_port', 'accelerate_ipv6',
'sudo', 'sudo_user', 'connection', 'tags', 'gather_facts', 'serial',
'any_errors_fatal', 'roles', 'role_names', 'pre_tasks', 'post_tasks', 'max_fail_percentage',
'su', 'su_user', 'vault_password', 'no_log', 'environment',
))
VALID_KEYS = frozenset(_pb_common + [
'connection', 'include', 'max_fail_percentage', 'port', 'post_tasks',
'pre_tasks', 'role_names', 'tasks', 'user',
])
# *************************************************
@@ -58,7 +61,7 @@ class Play(object):
for x in ds.keys():
if not x in Play.VALID_KEYS:
raise errors.AnsibleError("%s is not a legal parameter at this level in an Ansible Playbook" % x)
raise errors.AnsibleError("%s is not a legal parameter of an Ansible Play" % x)
# allow all playbook keys to be set by --extra-vars
self.vars = ds.get('vars', {})
@@ -115,10 +118,14 @@ class Play(object):
_tasks = ds.pop('tasks', [])
_handlers = ds.pop('handlers', [])
temp_vars = utils.merge_hash(self.vars, self.vars_file_vars)
temp_vars = utils.merge_hash(temp_vars, self.playbook.extra_vars)
temp_vars = utils.combine_vars(self.vars, self.vars_file_vars)
temp_vars = utils.combine_vars(temp_vars, self.playbook.extra_vars)
try:
ds = template(basedir, ds, temp_vars)
except errors.AnsibleError, e:
utils.warning("non-fatal error while trying to template play variables: %s" % (str(e)))
ds = template(basedir, ds, temp_vars)
ds['tasks'] = _tasks
ds['handlers'] = _handlers
@@ -140,8 +147,6 @@ class Play(object):
self._handlers = ds.get('handlers', [])
self.remote_user = ds.get('remote_user', ds.get('user', self.playbook.remote_user))
self.remote_port = ds.get('port', self.playbook.remote_port)
self.sudo = ds.get('sudo', self.playbook.sudo)
self.sudo_user = ds.get('sudo_user', self.playbook.sudo_user)
self.transport = ds.get('connection', self.playbook.transport)
self.remote_port = self.remote_port
self.any_errors_fatal = utils.boolean(ds.get('any_errors_fatal', 'false'))
@@ -149,22 +154,42 @@ class Play(object):
self.accelerate_port = ds.get('accelerate_port', None)
self.accelerate_ipv6 = ds.get('accelerate_ipv6', False)
self.max_fail_pct = int(ds.get('max_fail_percentage', 100))
self.su = ds.get('su', self.playbook.su)
self.su_user = ds.get('su_user', self.playbook.su_user)
self.no_log = utils.boolean(ds.get('no_log', 'false'))
self.force_handlers = utils.boolean(ds.get('force_handlers', self.playbook.force_handlers))
# Fail out if user specifies conflicting privilege escalations
if (ds.get('become') or ds.get('become_user')) and (ds.get('sudo') or ds.get('sudo_user')):
raise errors.AnsibleError('become params ("become", "become_user") and sudo params ("sudo", "sudo_user") cannot be used together')
if (ds.get('become') or ds.get('become_user')) and (ds.get('su') or ds.get('su_user')):
raise errors.AnsibleError('become params ("become", "become_user") and su params ("su", "su_user") cannot be used together')
if (ds.get('sudo') or ds.get('sudo_user')) and (ds.get('su') or ds.get('su_user')):
raise errors.AnsibleError('sudo params ("sudo", "sudo_user") and su params ("su", "su_user") cannot be used together')
# become settings are inherited and updated normally
self.become = ds.get('become', self.playbook.become)
self.become_method = ds.get('become_method', self.playbook.become_method)
self.become_user = ds.get('become_user', self.playbook.become_user)
# Make sure current play settings are reflected in become fields
if 'sudo' in ds:
self.become=ds['sudo']
self.become_method='sudo'
if 'sudo_user' in ds:
self.become_user=ds['sudo_user']
elif 'su' in ds:
self.become=True
self.become=ds['su']
self.become_method='su'
if 'su_user' in ds:
self.become_user=ds['su_user']
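The sudo/su-to-become mapping above can be sketched as a pure function (hypothetical name; the defaults stand in for the inherited playbook-level become settings):

```python
def map_legacy_escalation(ds, become=False, become_method='sudo', become_user='root'):
    # Fold the legacy sudo/su keys into the become triple, mirroring the
    # play-level logic: an explicit sudo/su flag wins, and the matching
    # *_user key overrides the inherited become_user.
    if 'sudo' in ds:
        become, become_method = ds['sudo'], 'sudo'
        if 'sudo_user' in ds:
            become_user = ds['sudo_user']
    elif 'su' in ds:
        become, become_method = ds['su'], 'su'
        if 'su_user' in ds:
            become_user = ds['su_user']
    return become, become_method, become_user

print(map_legacy_escalation({'sudo': True, 'sudo_user': 'deploy'}))
# (True, 'sudo', 'deploy')
print(map_legacy_escalation({'su': True}))
# (True, 'su', 'root')
```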
# gather_facts is not a simple boolean, as None means that a 'smart'
# fact gathering mode will be used, so we need to be careful here as
# calling utils.boolean(None) returns False
self.gather_facts = ds.get('gather_facts', None)
if self.gather_facts:
if self.gather_facts is not None:
self.gather_facts = utils.boolean(self.gather_facts)
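The `is not None` guard matters because gather_facts is tri-state: `None` means "smart" gathering, so a plain truthiness test would conflate an explicit `False` with the unset default. A sketch with a simplified boolean parser (`utils.boolean` itself accepts more spellings):

```python
def normalize_gather_facts(value):
    # None means "smart" gathering and must pass through untouched;
    # only explicit values get coerced to bool.
    if value is None:
        return None
    return str(value).lower() in ('yes', 'true', '1')

print(normalize_gather_facts(None))   # None
print(normalize_gather_facts('no'))   # False
print(normalize_gather_facts(True))   # True
```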
# Fail out if user specifies a sudo param with a su param in a given play
if (ds.get('sudo') or ds.get('sudo_user')) and (ds.get('su') or ds.get('su_user')):
raise errors.AnsibleError('sudo params ("sudo", "sudo_user") and su params '
'("su", "su_user") cannot be used together')
load_vars['role_names'] = ds.get('role_names', [])
self._tasks = self._load_tasks(self._ds.get('tasks', []), load_vars)
@@ -173,9 +198,6 @@ class Play(object):
# apply any missing tags to role tasks
self._late_merge_role_tags()
if self.sudo_user != 'root':
self.sudo = True
# place holder for the discovered hosts to be used in this play
self._play_hosts = None
@@ -192,7 +214,8 @@ class Play(object):
role_vars = {}
if type(orig_path) == dict:
# what, not a path?
role_name = orig_path.get('role', None)
parsed_role = utils.role_yaml_parse(orig_path)
role_name = parsed_role.get('role', parsed_role.get('name'))
if role_name is None:
raise errors.AnsibleError("expected a role name in dictionary: %s" % orig_path)
role_vars = orig_path
@@ -429,7 +452,7 @@ class Play(object):
for (role, role_path, role_vars, role_params, default_vars) in roles:
# special vars must be extracted from the dict to the included tasks
special_keys = [ "sudo", "sudo_user", "when", "with_items" ]
special_keys = [ "sudo", "sudo_user", "when", "with_items", "su", "su_user", "become", "become_user" ]
special_vars = {}
for k in special_keys:
if k in role_vars:
@@ -531,7 +554,7 @@ class Play(object):
# *************************************************
def _load_tasks(self, tasks, vars=None, role_params=None, default_vars=None, sudo_vars=None,
def _load_tasks(self, tasks, vars=None, role_params=None, default_vars=None, become_vars=None,
additional_conditions=None, original_file=None, role_name=None):
''' handle task and handler include statements '''
@@ -547,8 +570,8 @@ class Play(object):
role_params = {}
if default_vars is None:
default_vars = {}
if sudo_vars is None:
sudo_vars = {}
if become_vars is None:
become_vars = {}
old_conditions = list(additional_conditions)
@@ -560,26 +583,28 @@ class Play(object):
if not isinstance(x, dict):
raise errors.AnsibleError("expecting dict; got: %s, error in %s" % (x, original_file))
# evaluate sudo vars for current and child tasks
included_sudo_vars = {}
for k in ["sudo", "sudo_user"]:
# evaluate privilege escalation vars for current and child tasks
included_become_vars = {}
for k in ["become", "become_user", "become_method", "become_exe", "sudo", "su", "sudo_user", "su_user"]:
if k in x:
included_sudo_vars[k] = x[k]
elif k in sudo_vars:
included_sudo_vars[k] = sudo_vars[k]
x[k] = sudo_vars[k]
if 'meta' in x:
if x['meta'] == 'flush_handlers':
results.append(Task(self, x))
continue
included_become_vars[k] = x[k]
elif k in become_vars:
included_become_vars[k] = become_vars[k]
x[k] = become_vars[k]
task_vars = vars.copy()
if original_file:
task_vars['_original_file'] = original_file
if 'meta' in x:
if x['meta'] == 'flush_handlers':
if role_name and 'role_name' not in x:
x['role_name'] = role_name
results.append(Task(self, x, module_vars=task_vars, role_name=role_name, no_tags=False))
continue
if 'include' in x:
tokens = split_args(str(x['include']))
tokens = split_args(to_bytes(x['include'], nonstring='simplerepr'))
included_additional_conditions = list(additional_conditions)
include_vars = {}
for k in x:
@@ -596,7 +621,7 @@ class Play(object):
included_additional_conditions.append(x[k])
elif type(x[k]) is list:
included_additional_conditions.extend(x[k])
elif k in ("include", "vars", "role_params", "default_vars", "sudo", "sudo_user", "role_name", "no_log"):
elif k in ("include", "vars", "role_params", "default_vars", "sudo", "sudo_user", "role_name", "no_log", "become", "become_user", "su", "su_user"):
continue
else:
include_vars[k] = x[k]
@@ -632,9 +657,9 @@ class Play(object):
dirname = os.path.dirname(original_file)
# temp vars are used here to avoid trampling on the existing vars structures
temp_vars = utils.merge_hash(self.vars, self.vars_file_vars)
temp_vars = utils.merge_hash(temp_vars, mv)
temp_vars = utils.merge_hash(temp_vars, self.playbook.extra_vars)
temp_vars = utils.combine_vars(self.vars, self.vars_file_vars)
temp_vars = utils.combine_vars(temp_vars, mv)
temp_vars = utils.combine_vars(temp_vars, self.playbook.extra_vars)
include_file = template(dirname, tokens[0], temp_vars)
include_filename = utils.path_dwim(dirname, include_file)
@@ -643,7 +668,7 @@ class Play(object):
for y in data:
if isinstance(y, dict) and 'include' in y:
y['role_name'] = new_role
loaded = self._load_tasks(data, mv, role_params, default_vars, included_sudo_vars, list(included_additional_conditions), original_file=include_filename, role_name=new_role)
loaded = self._load_tasks(data, mv, role_params, default_vars, included_become_vars, list(included_additional_conditions), original_file=include_filename, role_name=new_role)
results += loaded
elif type(x) == dict:
task = Task(
@@ -727,7 +752,7 @@ class Play(object):
prompt_msg = "%s: " % prompt
if vname not in self.playbook.extra_vars:
vars[vname] = self.playbook.callbacks.on_vars_prompt(
varname=vname, private=False, prompt=prompt_msg, default=None
varname=vname, private=True, prompt=prompt_msg, default=None
)
else:


@@ -24,28 +24,26 @@ import sys
class Task(object):
__slots__ = [
'name', 'meta', 'action', 'when', 'async_seconds', 'async_poll_interval',
'notify', 'module_name', 'module_args', 'module_vars', 'play_vars', 'play_file_vars', 'role_vars', 'role_params', 'default_vars',
'play', 'notified_by', 'tags', 'register', 'role_name',
'delegate_to', 'first_available_file', 'ignore_errors',
'local_action', 'transport', 'sudo', 'remote_user', 'sudo_user', 'sudo_pass',
'items_lookup_plugin', 'items_lookup_terms', 'environment', 'args',
'any_errors_fatal', 'changed_when', 'failed_when', 'always_run', 'delay', 'retries', 'until',
'su', 'su_user', 'su_pass', 'no_log', 'run_once',
_t_common = [
'action', 'always_run', 'any_errors_fatal', 'args', 'become', 'become_method', 'become_pass',
'become_user', 'changed_when', 'delay', 'delegate_to', 'environment', 'failed_when',
'first_available_file', 'ignore_errors', 'local_action', 'meta', 'name', 'no_log',
'notify', 'register', 'remote_user', 'retries', 'run_once', 'su', 'su_pass', 'su_user',
'sudo', 'sudo_pass', 'sudo_user', 'tags', 'transport', 'until', 'when',
]
# to prevent typos and such
VALID_KEYS = frozenset((
'name', 'meta', 'action', 'when', 'async', 'poll', 'notify',
'first_available_file', 'include', 'tags', 'register', 'ignore_errors',
'delegate_to', 'local_action', 'transport', 'remote_user', 'sudo', 'sudo_user',
'sudo_pass', 'when', 'connection', 'environment', 'args',
'any_errors_fatal', 'changed_when', 'failed_when', 'always_run', 'delay', 'retries', 'until',
'su', 'su_user', 'su_pass', 'no_log', 'run_once',
))
__slots__ = [
'async_poll_interval', 'async_seconds', 'default_vars', 'first_available_file',
'items_lookup_plugin', 'items_lookup_terms', 'module_args', 'module_name', 'module_vars',
'notified_by', 'play', 'play_file_vars', 'play_vars', 'role_name', 'role_params', 'role_vars',
] + _t_common
def __init__(self, play, ds, module_vars=None, play_vars=None, play_file_vars=None, role_vars=None, role_params=None, default_vars=None, additional_conditions=None, role_name=None):
# to prevent typos and such
VALID_KEYS = frozenset([
'async', 'connection', 'include', 'poll',
] + _t_common)
def __init__(self, play, ds, module_vars=None, play_vars=None, play_file_vars=None, role_vars=None, role_params=None, default_vars=None, additional_conditions=None, role_name=None, no_tags=True):
''' constructor loads from a task or handler datastructure '''
# meta directives are used to tell things like ansible/playbook to run
@@ -53,7 +51,12 @@ class Task(object):
# normally.
if 'meta' in ds:
self.meta = ds['meta']
self.tags = []
if no_tags:
self.tags = []
else:
self.tags = self._load_tags(ds, module_vars)
self.module_vars = module_vars
self.role_name = role_name
return
else:
self.meta = None
@@ -86,11 +89,6 @@ class Task(object):
elif x.startswith("with_"):
if isinstance(ds[x], basestring):
param = ds[x].strip()
# Only a variable, no logic
if (param.startswith('{{') and
param.find('}}') == len(ds[x]) - 2 and
param.find('|') == -1):
utils.warning("It is unnecessary to use '{{' in loops, leave variables in loop expressions bare.")
plugin_name = x.replace("with_","")
if plugin_name in utils.plugins.lookup_loader:
@@ -129,16 +127,13 @@ class Task(object):
# load various attributes
self.name = ds.get('name', None)
self.tags = [ 'untagged' ]
self.register = ds.get('register', None)
self.sudo = utils.boolean(ds.get('sudo', play.sudo))
self.su = utils.boolean(ds.get('su', play.su))
self.environment = ds.get('environment', play.environment)
self.role_name = role_name
self.no_log = utils.boolean(ds.get('no_log', "false")) or self.play.no_log
self.run_once = utils.boolean(ds.get('run_once', 'false'))
# Code to allow do until feature in a Task
if 'until' in ds:
if not ds.get('register'):
raise errors.AnsibleError("register keyword is mandatory when using do until feature")
@@ -160,24 +155,51 @@ class Task(object):
else:
self.remote_user = ds.get('remote_user', play.playbook.remote_user)
self.sudo_user = None
self.sudo_pass = None
self.su_user = None
self.su_pass = None
# Fail out if user specifies privilege escalation params in conflict
if (ds.get('become') or ds.get('become_user') or ds.get('become_pass')) and (ds.get('sudo') or ds.get('sudo_user') or ds.get('sudo_pass')):
raise errors.AnsibleError('incompatible parameters ("become", "become_user", "become_pass") and sudo params ("sudo", "sudo_user", "sudo_pass") in task: %s' % self.name)
if self.sudo:
self.sudo_user = ds.get('sudo_user', play.sudo_user)
self.sudo_pass = ds.get('sudo_pass', play.playbook.sudo_pass)
elif self.su:
self.su_user = ds.get('su_user', play.su_user)
self.su_pass = ds.get('su_pass', play.playbook.su_pass)
if (ds.get('become') or ds.get('become_user') or ds.get('become_pass')) and (ds.get('su') or ds.get('su_user') or ds.get('su_pass')):
raise errors.AnsibleError('incompatible parameters ("become", "become_user", "become_pass") and su params ("su", "su_user", "su_pass") in task: %s' % self.name)
# Fail out if user specifies a sudo param with a su param in a given play
if (ds.get('sudo') or ds.get('sudo_user') or ds.get('sudo_pass')) and \
(ds.get('su') or ds.get('su_user') or ds.get('su_pass')):
raise errors.AnsibleError('sudo params ("sudo", "sudo_user", "sudo_pass") '
'and su params ("su", "su_user", "su_pass") '
'cannot be used together')
if (ds.get('sudo') or ds.get('sudo_user') or ds.get('sudo_pass')) and (ds.get('su') or ds.get('su_user') or ds.get('su_pass')):
raise errors.AnsibleError('incompatible parameters ("su", "su_user", "su_pass") and sudo params "sudo", "sudo_user", "sudo_pass" in task: %s' % self.name)
self.become = utils.boolean(ds.get('become', play.become))
self.become_method = ds.get('become_method', play.become_method)
self.become_user = ds.get('become_user', play.become_user)
self.become_pass = ds.get('become_pass', play.playbook.become_pass)
# set only if passed in current task data
if 'sudo' in ds or 'sudo_user' in ds:
self.become_method = 'sudo'
if 'sudo' in ds:
self.become = ds['sudo']
del ds['sudo']
else:
self.become = True
if 'sudo_user' in ds:
self.become_user = ds['sudo_user']
del ds['sudo_user']
if 'sudo_pass' in ds:
self.become_pass = ds['sudo_pass']
del ds['sudo_pass']
elif 'su' in ds or 'su_user' in ds:
self.become_method = 'su'
if 'su' in ds:
self.become = ds['su']
del ds['su']
else:
self.become = True
if 'su_user' in ds:
self.become_user = ds['su_user']
del ds['su_user']
if 'su_pass' in ds:
self.become_pass = ds['su_pass']
del ds['su_pass']
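The block above folds the legacy `sudo*`/`su*` task keys into the unified `become` fields. A minimal standalone sketch of that mapping (a hypothetical `map_legacy_become` helper, not part of the actual Task class; sudo keys take precedence over su keys, a bare `sudo_user`/`su_user` implies `become=True`, and the legacy keys are stripped from the task data):

```python
def map_legacy_become(ds):
    """Fold legacy sudo/su task keys into (become, method, user, password)."""
    become, method, user, password = False, None, None, None
    for legacy in ('sudo', 'su'):
        if legacy in ds or legacy + '_user' in ds:
            method = legacy
            # a bare sudo_user/su_user still turns escalation on
            become = ds.pop(legacy, True)
            user = ds.pop(legacy + '_user', None)
            password = ds.pop(legacy + '_pass', None)
            break
    return become, method, user, password

ds = {'sudo': True, 'sudo_user': 'postgres', 'name': 'restart db'}
print(map_legacy_become(ds))  # (True, 'sudo', 'postgres', None)
```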
# Both are defined
if ('action' in ds) and ('local_action' in ds):
@ -240,7 +262,7 @@ class Task(object):
self.items_lookup_plugin = ds.get('items_lookup_plugin', None)
self.items_lookup_terms = ds.get('items_lookup_terms', None)
self.ignore_errors = ds.get('ignore_errors', False)
self.any_errors_fatal = ds.get('any_errors_fatal', play.any_errors_fatal)
@ -271,13 +293,6 @@ class Task(object):
if len(tokens) > 1:
self.module_args = " ".join(tokens[1:])
import_tags = self.module_vars.get('tags',[])
if type(import_tags) in [int,float]:
import_tags = str(import_tags)
elif type(import_tags) in [str,unicode]:
# allow the user to list comma delimited tags
import_tags = import_tags.split(",")
# handle mutually incompatible options
incompatibles = [ x for x in [ self.first_available_file, self.items_lookup_plugin ] if x is not None ]
if len(incompatibles) > 1:
@ -305,22 +320,37 @@ class Task(object):
self.module_vars['failed_when'] = self.failed_when
self.module_vars['always_run'] = self.always_run
# tags allow certain parts of a playbook to be run without running the whole playbook
apply_tags = ds.get('tags', None)
if apply_tags is not None:
if type(apply_tags) in [ str, unicode ]:
self.tags.append(apply_tags)
elif type(apply_tags) in [ int, float ]:
self.tags.append(str(apply_tags))
elif type(apply_tags) == list:
self.tags.extend(apply_tags)
self.tags.extend(import_tags)
if len(self.tags) > 1:
self.tags.remove('untagged')
self.tags = self._load_tags(ds, self.module_vars)
if additional_conditions:
new_conditions = additional_conditions[:]
if self.when:
new_conditions.append(self.when)
self.when = new_conditions
def _load_tags(self, ds, module_vars):
tags = ['untagged']
import_tags = module_vars.get('tags',[])
if type(import_tags) in [int,float]:
import_tags = str(import_tags)
elif type(import_tags) in [str,unicode]:
# allow the user to list comma delimited tags
import_tags = import_tags.split(",")
# tags allow certain parts of a playbook to be run without running the whole playbook
apply_tags = ds.get('tags', None)
if apply_tags is not None:
if type(apply_tags) in [ str, unicode ]:
tags.append(apply_tags)
elif type(apply_tags) in [ int, float ]:
tags.append(str(apply_tags))
elif type(apply_tags) == list:
tags.extend(apply_tags)
tags.extend(import_tags)
if len(tags) > 1:
tags.remove('untagged')
return tags
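The `_load_tags` helper above can be exercised on its own. A simplified Python 3 re-implementation (a single `str` type replacing `str`/`unicode`); note that the original converts a numeric imported tag to a bare string, which the later `extend` would split into characters, so this sketch wraps it in a list instead:

```python
def load_tags(ds, module_vars):
    tags = ['untagged']
    import_tags = module_vars.get('tags', [])
    if isinstance(import_tags, (int, float)):
        import_tags = [str(import_tags)]
    elif isinstance(import_tags, str):
        # allow the user to list comma-delimited tags
        import_tags = import_tags.split(',')
    apply_tags = ds.get('tags')
    if isinstance(apply_tags, (int, float)):
        tags.append(str(apply_tags))
    elif isinstance(apply_tags, str):
        tags.append(apply_tags)
    elif isinstance(apply_tags, list):
        tags.extend(apply_tags)
    tags.extend(import_tags)
    if len(tags) > 1:
        # 'untagged' applies only when no real tag does
        tags.remove('untagged')
    return tags

print(load_tags({'tags': ['a', 'b']}, {'tags': 'c,d'}))  # ['a', 'b', 'c', 'd']
```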


@ -49,6 +49,7 @@ from ansible.module_common import ModuleReplacer
from ansible.module_utils.splitter import split_args, unquote
from ansible.cache import FactCache
from ansible.utils import update_hash
from ansible.utils.unicode import to_bytes
module_replacer = ModuleReplacer(strip_comments=False)
@ -123,7 +124,6 @@ class Runner(object):
remote_pass=C.DEFAULT_REMOTE_PASS, # ex: 'password123' or None if using key
remote_port=None, # if SSH on different ports
private_key_file=C.DEFAULT_PRIVATE_KEY_FILE, # if not using keys/passwords
sudo_pass=C.DEFAULT_SUDO_PASS, # ex: 'password123' or None
background=0, # async poll every X seconds, else 0 for non-async
basedir=None, # directory of playbook, if applicable
setup_cache=None, # used to share fact data w/ other tasks
@ -131,8 +131,6 @@ class Runner(object):
transport=C.DEFAULT_TRANSPORT, # 'ssh', 'paramiko', 'local'
conditional='True', # run only if this fact expression evals to true
callbacks=None, # used for output
sudo=False, # whether to run sudo or not
sudo_user=C.DEFAULT_SUDO_USER, # ex: 'root'
module_vars=None, # a playbooks internals thing
play_vars=None, #
play_file_vars=None, #
@ -151,14 +149,15 @@ class Runner(object):
accelerate=False, # use accelerated connection
accelerate_ipv6=False, # accelerated connection w/ IPv6
accelerate_port=None, # port to use with accelerated connection
su=False, # Are we running our command via su?
su_user=None, # User to su to when running command, ex: 'root'
su_pass=C.DEFAULT_SU_PASS,
vault_pass=None,
run_hosts=None, # an optional list of pre-calculated hosts to run on
no_log=False, # option to enable/disable logging for a given task
run_once=False, # option to enable/disable host bypass loop for a given task
sudo_exe=C.DEFAULT_SUDO_EXE, # ex: /usr/local/bin/sudo
become=False, # whether to run privilege escalation or not
become_method=C.DEFAULT_BECOME_METHOD,
become_user=C.DEFAULT_BECOME_USER, # ex: 'root'
become_pass=C.DEFAULT_BECOME_PASS, # ex: 'password123' or None
become_exe=C.DEFAULT_BECOME_EXE, # ex: /usr/local/bin/sudo
):
# used to lock multiprocess inputs and outputs at various levels
@ -201,10 +200,12 @@ class Runner(object):
self.remote_port = remote_port
self.private_key_file = private_key_file
self.background = background
self.sudo = sudo
self.sudo_user_var = sudo_user
self.sudo_user = None
self.sudo_pass = sudo_pass
self.become = become
self.become_method = become_method
self.become_user_var = become_user
self.become_user = None
self.become_pass = become_pass
self.become_exe = become_exe
self.is_playbook = is_playbook
self.environment = environment
self.complex_args = complex_args
@ -213,15 +214,10 @@ class Runner(object):
self.accelerate_port = accelerate_port
self.accelerate_ipv6 = accelerate_ipv6
self.callbacks.runner = self
self.su = su
self.su_user_var = su_user
self.su_user = None
self.su_pass = su_pass
self.omit_token = '__omit_place_holder__%s' % sha1(os.urandom(64)).hexdigest()
self.vault_pass = vault_pass
self.no_log = no_log
self.run_once = run_once
self.sudo_exe = sudo_exe
if self.transport == 'smart':
# If the transport is 'smart', check to see if certain conditions
@ -235,9 +231,12 @@ class Runner(object):
self.transport = "paramiko"
else:
# see if SSH can support ControlPersist if not use paramiko
cmd = subprocess.Popen(['ssh','-o','ControlPersist'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = cmd.communicate()
if "Bad configuration option" in err:
try:
cmd = subprocess.Popen(['ssh','-o','ControlPersist'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = cmd.communicate()
if "Bad configuration option" in err:
self.transport = "paramiko"
except OSError:
self.transport = "paramiko"
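The smart-transport probe above shells out to `ssh` to see whether it understands `ControlPersist`. The same check as a standalone sketch (falling back to paramiko when the option is rejected or the binary is missing entirely):

```python
import subprocess

def pick_transport(default='ssh'):
    """Return 'paramiko' when the local ssh cannot do ControlPersist."""
    try:
        cmd = subprocess.Popen(['ssh', '-o', 'ControlPersist'],
                               stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = cmd.communicate()
        if b'Bad configuration option' in err:
            # ssh is too old to support ControlPersist
            return 'paramiko'
    except OSError:
        # no ssh binary at all
        return 'paramiko'
    return default

print(pick_transport())
```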
# save the original transport, in case it gets
@ -369,7 +368,7 @@ class Runner(object):
delegate['pass'] = this_info.get('ansible_ssh_pass', password)
delegate['private_key_file'] = this_info.get('ansible_ssh_private_key_file', self.private_key_file)
delegate['transport'] = this_info.get('ansible_connection', self.transport)
delegate['sudo_pass'] = this_info.get('ansible_sudo_pass', self.sudo_pass)
delegate['become_pass'] = this_info.get('ansible_become_pass', this_info.get('ansible_ssh_pass', self.become_pass))
# Last chance to get private_key_file from global variables.
# this is useful if delegated host is not defined in the inventory
@ -399,15 +398,18 @@ class Runner(object):
if inject['hostvars'][host].get('ansible_ssh_user'):
# user for delegate host in inventory
thisuser = inject['hostvars'][host].get('ansible_ssh_user')
else:
# look up the variables for the host directly from inventory
host_vars = self.inventory.get_variables(host, vault_password=self.vault_pass)
if 'ansible_ssh_user' in host_vars:
thisuser = host_vars['ansible_ssh_user']
else:
# look up the variables for the host directly from inventory
host_vars = self.inventory.get_variables(host, vault_password=self.vault_pass)
if 'ansible_ssh_user' in host_vars:
thisuser = host_vars['ansible_ssh_user']
except errors.AnsibleError, e:
# the hostname was not found in the inventory, so
# we just ignore this and try the next method
pass
except TypeError, e:
# Someone is trying to pass a list or some other 'non string' as a host.
raise errors.AnsibleError("Invalid type for delegate_to: %s" % str(e))
if thisuser is None and self.remote_user:
# user defined by play/runner
@ -481,13 +483,13 @@ class Runner(object):
or not conn.has_pipelining
or not C.ANSIBLE_SSH_PIPELINING
or C.DEFAULT_KEEP_REMOTE_FILES
or self.su):
or self.become_method == 'su'):
self._transfer_str(conn, tmp, module_name, module_data)
environment_string = self._compute_environment_string(conn, inject)
if "tmp" in tmp and ((self.sudo and self.sudo_user != 'root') or (self.su and self.su_user != 'root')):
# deal with possible umask issues once sudo'ed to other user
if "tmp" in tmp and (self.become and self.become_user != 'root'):
# deal with possible umask issues once you become another user
self._remote_chmod(conn, 'a+r', remote_module_path, tmp)
cmd = ""
@ -514,8 +516,8 @@ class Runner(object):
else:
argsfile = self._transfer_str(conn, tmp, 'arguments', args)
if (self.sudo and self.sudo_user != 'root') or (self.su and self.su_user != 'root'):
# deal with possible umask issues once sudo'ed to other user
if self.become and self.become_user != 'root':
# deal with possible umask issues once you become another user
self._remote_chmod(conn, 'a+r', argsfile, tmp)
if async_jid is None:
@ -524,7 +526,7 @@ class Runner(object):
cmd = " ".join([str(x) for x in [remote_module_path, async_jid, async_limit, async_module, argsfile]])
else:
if async_jid is None:
if conn.has_pipelining and C.ANSIBLE_SSH_PIPELINING and not C.DEFAULT_KEEP_REMOTE_FILES and not self.su:
if conn.has_pipelining and C.ANSIBLE_SSH_PIPELINING and not C.DEFAULT_KEEP_REMOTE_FILES and not self.become_method == 'su':
in_data = module_data
else:
cmd = "%s" % (remote_module_path)
@ -536,7 +538,7 @@ class Runner(object):
rm_tmp = None
if "tmp" in tmp and not C.DEFAULT_KEEP_REMOTE_FILES and not persist_files and delete_remote_tmp:
if not self.sudo or self.su or self.sudo_user == 'root' or self.su_user == 'root':
if not self.become or self.become_user == 'root':
# not sudoing or sudoing to root, so can cleanup files in the same step
rm_tmp = tmp
@ -546,17 +548,14 @@ class Runner(object):
sudoable = True
if module_name == "accelerate":
# always run the accelerate module as the user
# specified in the play, not the sudo_user
# specified in the play, not the become_user
sudoable = False
if self.su:
res = self._low_level_exec_command(conn, cmd, tmp, su=True, in_data=in_data)
else:
res = self._low_level_exec_command(conn, cmd, tmp, sudoable=sudoable, in_data=in_data)
res = self._low_level_exec_command(conn, cmd, tmp, become=self.become, sudoable=sudoable, in_data=in_data)
if "tmp" in tmp and not C.DEFAULT_KEEP_REMOTE_FILES and not persist_files and delete_remote_tmp:
if (self.sudo and self.sudo_user != 'root') or (self.su and self.su_user != 'root'):
# not sudoing to root, so maybe can't delete files as that other user
if self.become and self.become_user != 'root':
# not becoming root, so maybe can't delete files as that other user
# have to clean up temp files as original user in a second step
cmd2 = conn.shell.remove(tmp, recurse=True)
self._low_level_exec_command(conn, cmd2, tmp, sudoable=False)
@ -595,7 +594,7 @@ class Runner(object):
self.callbacks.on_unreachable(host, exec_rc.result)
return exec_rc
except errors.AnsibleError, ae:
msg = str(ae)
msg = to_bytes(ae)
self.callbacks.on_unreachable(host, msg)
return ReturnData(host=host, comm_ok=False, result=dict(failed=True, msg=msg))
except Exception:
@ -619,6 +618,7 @@ class Runner(object):
# since some of the variables we'll be replacing may be contained there too
module_vars_inject = utils.combine_vars(host_variables, combined_cache.get(host, {}))
module_vars_inject = utils.combine_vars(self.module_vars, module_vars_inject)
module_vars_inject = utils.combine_vars(module_vars_inject, self.extra_vars)
module_vars = template.template(self.basedir, self.module_vars, module_vars_inject)
# remove bad variables from the module vars, which may be in there due
@ -674,11 +674,11 @@ class Runner(object):
# Then we selectively merge some variable dictionaries down to a
# single dictionary, used to template the HostVars for this host
temp_vars = self.inventory.get_variables(host, vault_password=self.vault_pass)
temp_vars = utils.merge_hash(temp_vars, inject['combined_cache'])
temp_vars = utils.merge_hash(temp_vars, self.play_vars)
temp_vars = utils.merge_hash(temp_vars, self.play_file_vars)
temp_vars = utils.merge_hash(temp_vars, self.extra_vars)
temp_vars = utils.merge_hash(temp_vars, {'groups': inject['groups']})
temp_vars = utils.combine_vars(temp_vars, inject['combined_cache'] )
temp_vars = utils.combine_vars(temp_vars, {'groups': inject['groups']})
temp_vars = utils.combine_vars(temp_vars, self.play_vars)
temp_vars = utils.combine_vars(temp_vars, self.play_file_vars)
temp_vars = utils.combine_vars(temp_vars, self.extra_vars)
hostvars = HostVars(temp_vars, self.inventory, vault_password=self.vault_pass)
@ -748,7 +748,7 @@ class Runner(object):
if type(items) != list:
raise errors.AnsibleError("lookup plugins have to return a list: %r" % items)
if len(items) and utils.is_list_of_strings(items) and self.module_name in [ 'apt', 'yum', 'pkgng', 'zypper' ]:
if len(items) and utils.is_list_of_strings(items) and self.module_name in ( 'apt', 'yum', 'pkgng', 'zypper', 'dnf' ):
# hack for package modules (apt, yum, pkgng, zypper, dnf) so that with_items maps back into a single module call
use_these_items = []
for x in items:
@ -849,11 +849,9 @@ class Runner(object):
def _executor_internal_inner(self, host, module_name, module_args, inject, port, is_chained=False, complex_args=None):
''' decides how to invoke a module '''
# late processing of parameterized sudo_user (with_items,..)
if self.sudo_user_var is not None:
self.sudo_user = template.template(self.basedir, self.sudo_user_var, inject)
if self.su_user_var is not None:
self.su_user = template.template(self.basedir, self.su_user_var, inject)
# late processing of parameterized become_user (with_items,..)
if self.become_user_var is not None:
self.become_user = template.template(self.basedir, self.become_user_var, inject)
# module_name may be dynamic (but cannot contain {{ ansible_ssh_user }})
module_name = template.template(self.basedir, module_name, inject)
@ -893,18 +891,18 @@ class Runner(object):
actual_transport = inject.get('ansible_connection', self.transport)
actual_private_key_file = inject.get('ansible_ssh_private_key_file', self.private_key_file)
actual_private_key_file = template.template(self.basedir, actual_private_key_file, inject, fail_on_undefined=True)
self.sudo = utils.boolean(inject.get('ansible_sudo', self.sudo))
self.sudo_user = inject.get('ansible_sudo_user', self.sudo_user)
self.sudo_pass = inject.get('ansible_sudo_pass', self.sudo_pass)
self.su = inject.get('ansible_su', self.su)
self.su_pass = inject.get('ansible_su_pass', self.su_pass)
self.sudo_exe = inject.get('ansible_sudo_exe', self.sudo_exe)
# select default root user in case self.sudo requested
self.become = utils.boolean(inject.get('ansible_become', inject.get('ansible_sudo', inject.get('ansible_su', self.become))))
self.become_user = inject.get('ansible_become_user', inject.get('ansible_sudo_user', inject.get('ansible_su_user',self.become_user)))
self.become_pass = inject.get('ansible_become_pass', inject.get('ansible_sudo_pass', inject.get('ansible_su_pass', self.become_pass)))
self.become_exe = inject.get('ansible_become_exe', inject.get('ansible_sudo_exe', self.become_exe))
self.become_method = inject.get('ansible_become_method', self.become_method)
# select default root user in case self.become requested
# but no user specified; happens e.g. in host vars when
# just ansible_sudo=True is specified
if self.sudo and self.sudo_user is None:
self.sudo_user = 'root'
# just ansible_become=True is specified
if self.become and self.become_user is None:
self.become_user = 'root'
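The nested `inject.get(...)` chains above give the new `ansible_become_*` variables precedence over the legacy `ansible_sudo_*`/`ansible_su_*` ones. The lookup order, expressed as a small hypothetical helper:

```python
def resolve(inject, current, *keys):
    """First key present in inject wins; otherwise keep the current value."""
    for key in keys:
        if key in inject:
            return inject[key]
    return current

host_vars = {'ansible_sudo_user': 'admin'}
become_user = resolve(host_vars, None,
                      'ansible_become_user', 'ansible_sudo_user', 'ansible_su_user')
if become_user is None:
    # default to root when escalation is requested but no user given
    become_user = 'root'
print(become_user)  # admin
```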
if actual_private_key_file is not None:
actual_private_key_file = os.path.expanduser(actual_private_key_file)
@ -937,15 +935,19 @@ class Runner(object):
actual_user = delegate['user']
actual_pass = delegate['pass']
actual_private_key_file = delegate['private_key_file']
self.sudo_pass = delegate['sudo_pass']
self.become_pass = delegate.get('become_pass',delegate.get('sudo_pass'))
inject = delegate['inject']
# set resolved delegate_to into inject so modules can call _remote_checksum
inject['delegate_to'] = self.delegate_to
# user/pass may still contain variables at this stage
actual_user = template.template(self.basedir, actual_user, inject)
actual_pass = template.template(self.basedir, actual_pass, inject)
self.sudo_pass = template.template(self.basedir, self.sudo_pass, inject)
try:
actual_pass = template.template(self.basedir, actual_pass, inject)
self.become_pass = template.template(self.basedir, self.become_pass, inject)
except:
# ignore password template errors; they can be triggered by special characters in the password (#10468)
pass
# make actual_user available as __magic__ ansible_ssh_user variable
inject['ansible_ssh_user'] = actual_user
@ -1077,14 +1079,15 @@ class Runner(object):
if hasattr(sys.stdout, "isatty"):
if "stdout" in data and sys.stdout.isatty():
if not string_functions.isprintable(data['stdout']):
data['stdout'] = ''
data['stdout'] = ''.join(c for c in data['stdout'] if string_functions.isprintable(c))
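The change above stops blanking unprintable stdout wholesale and instead strips only the offending characters. With Python 3's built-in `str.isprintable` standing in for the `string_functions.isprintable` helper, the filter reduces to:

```python
def sanitize_stdout(text):
    """Drop unprintable characters instead of discarding all output."""
    return ''.join(c for c in text if c.isprintable())

print(sanitize_stdout('ok\x07\x00done'))  # okdone
```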
if 'item' in inject:
result.result['item'] = inject['item']
result.result['invocation'] = dict(
module_args=module_args,
module_name=module_name
module_name=module_name,
module_complex_args=complex_args,
)
changed_when = self.module_vars.get('changed_when')
@ -1118,7 +1121,7 @@ class Runner(object):
self.callbacks.on_failed(host, data, ignore_errors)
else:
if self.diff:
self.callbacks.on_file_diff(conn.host, result.diff)
self.callbacks.on_file_diff(host, result.diff)
self.callbacks.on_ok(host, data)
return result
@ -1134,7 +1137,7 @@ class Runner(object):
if "tmp" in tmp:
# tmp has already been created
return False
if not conn.has_pipelining or not C.ANSIBLE_SSH_PIPELINING or C.DEFAULT_KEEP_REMOTE_FILES or self.su:
if not conn.has_pipelining or not C.ANSIBLE_SSH_PIPELINING or C.DEFAULT_KEEP_REMOTE_FILES or self.become_method == 'su':
# tmp is necessary to store module source code
return True
if not conn.has_pipelining:
@ -1150,62 +1153,54 @@ class Runner(object):
# *****************************************************
def _low_level_exec_command(self, conn, cmd, tmp, sudoable=False,
executable=None, su=False, in_data=None):
executable=None, become=False, in_data=None):
''' execute a command string over SSH, return the output '''
# this can be skipped with powershell modules when there is no analog to a Windows command (like chmod)
if cmd:
if not cmd:
# this can happen with powershell modules when there is no analog to a Windows command (like chmod)
return dict(stdout='', stderr='')
if executable is None:
executable = C.DEFAULT_EXECUTABLE
if executable is None:
executable = C.DEFAULT_EXECUTABLE
become_user = self.become_user
sudo_user = self.sudo_user
su_user = self.su_user
# compare connection user to (su|sudo)_user and disable if the same
# assume connection type is local if no user attribute
this_user = getattr(conn, 'user', getpass.getuser())
if (not become and this_user == become_user):
sudoable = False
become = False
# compare connection user to (su|sudo)_user and disable if the same
# assume connection type is local if no user attribute
this_user = getattr(conn, 'user', getpass.getuser())
if (not su and this_user == sudo_user) or (su and this_user == su_user):
sudoable = False
su = False
if su:
rc, stdin, stdout, stderr = conn.exec_command(cmd,
tmp,
su=su,
su_user=su_user,
executable=executable,
in_data=in_data)
else:
rc, stdin, stdout, stderr = conn.exec_command(cmd,
tmp,
sudo_user,
become_user=become_user,
sudoable=sudoable,
executable=executable,
in_data=in_data)
if type(stdout) not in [ str, unicode ]:
out = ''.join(stdout.readlines())
else:
out = stdout
if type(stdout) not in [ str, unicode ]:
out = ''.join(stdout.readlines())
else:
out = stdout
if type(stderr) not in [ str, unicode ]:
err = ''.join(stderr.readlines())
else:
err = stderr
if type(stderr) not in [ str, unicode ]:
err = ''.join(stderr.readlines())
else:
err = stderr
if rc is not None:
return dict(rc=rc, stdout=out, stderr=err)
else:
return dict(stdout=out, stderr=err)
return dict(rc=None, stdout='', stderr='')
if rc is not None:
return dict(rc=rc, stdout=out, stderr=err)
else:
return dict(stdout=out, stderr=err)
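Connection plugins may hand `_low_level_exec_command` either plain strings or file-like streams for stdout/stderr; the normalization above boils down to this sketch:

```python
import io

def read_stream(stream):
    """Return stream content whether it is a string or a file-like object."""
    if isinstance(stream, str):
        return stream
    return ''.join(stream.readlines())

print(read_stream(io.StringIO('line1\nline2\n')))
```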
# *****************************************************
def _remote_chmod(self, conn, mode, path, tmp, sudoable=False, su=False):
def _remote_chmod(self, conn, mode, path, tmp, sudoable=False, become=False):
''' issue a remote chmod command '''
cmd = conn.shell.chmod(mode, path)
return self._low_level_exec_command(conn, cmd, tmp, sudoable=sudoable, su=su)
return self._low_level_exec_command(conn, cmd, tmp, sudoable=sudoable, become=become)
# *****************************************************
@ -1217,13 +1212,11 @@ class Runner(object):
split_path = path.split(os.path.sep, 1)
expand_path = split_path[0]
if expand_path == '~':
if self.sudo and self.sudo_user:
expand_path = '~%s' % self.sudo_user
elif self.su and self.su_user:
expand_path = '~%s' % self.su_user
if self.become and self.become_user:
expand_path = '~%s' % self.become_user
cmd = conn.shell.expand_user(expand_path)
data = self._low_level_exec_command(conn, cmd, tmp, sudoable=False, su=False)
data = self._low_level_exec_command(conn, cmd, tmp, sudoable=False, become=False)
initial_fragment = utils.last_non_blank_line(data['stdout'])
if not initial_fragment:
@ -1263,7 +1256,13 @@ class Runner(object):
python_interp = 'python'
cmd = conn.shell.checksum(path, python_interp)
data = self._low_level_exec_command(conn, cmd, tmp, sudoable=True)
#TODO: remove this horrible hack and find a way to get checksum to work with other privilege escalation methods
if self.become_method == 'sudo':
sudoable = True
else:
sudoable = False
data = self._low_level_exec_command(conn, cmd, tmp, sudoable=sudoable)
data2 = utils.last_non_blank_line(data['stdout'])
try:
if data2 == '':
@ -1287,11 +1286,11 @@ class Runner(object):
''' make and return a temporary path on a remote box '''
basefile = 'ansible-tmp-%s-%s' % (time.time(), random.randint(0, 2**48))
use_system_tmp = False
if (self.sudo and self.sudo_user != 'root') or (self.su and self.su_user != 'root'):
if self.become and self.become_user != 'root':
use_system_tmp = True
tmp_mode = None
if self.remote_user != 'root' or ((self.sudo and self.sudo_user != 'root') or (self.su and self.su_user != 'root')):
if self.remote_user != 'root' or (self.become and self.become_user != 'root'):
tmp_mode = 'a+rx'
cmd = conn.shell.mkdtemp(basefile, use_system_tmp, tmp_mode)
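The temp-dir policy above decides both location and mode: escalating to a non-root user forces the system tmp dir, and any non-root path needs world read/execute so the become user can reach the module file. Sketched as a hypothetical helper:

```python
def tmp_dir_policy(remote_user, become, become_user):
    """Decide where the remote tmp dir goes and what mode it needs."""
    use_system_tmp = bool(become and become_user != 'root')
    tmp_mode = None
    if remote_user != 'root' or use_system_tmp:
        # the other user must be able to read and traverse the directory
        tmp_mode = 'a+rx'
    return use_system_tmp, tmp_mode

print(tmp_dir_policy('deploy', True, 'postgres'))  # (True, 'a+rx')
```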


@ -20,7 +20,7 @@ import ansible
from ansible.callbacks import vv
from ansible.errors import AnsibleError as ae
from ansible.runner.return_data import ReturnData
from ansible.utils import parse_kv
from ansible.utils import parse_kv, combine_vars
from ansible.inventory.host import Host
from ansible.inventory.group import Group
@ -55,7 +55,7 @@ class ActionModule(object):
if ":" in new_name:
new_name, new_port = new_name.split(":")
args['ansible_ssh_port'] = new_port
# redefine inventory and get group "all"
inventory = self.runner.inventory
allgroup = inventory.get_group('all')
@ -69,13 +69,7 @@ class ActionModule(object):
inventory._hosts_cache[new_name] = new_host
allgroup.add_host(new_host)
# Add any variables to the new_host
for k in args.keys():
if not k in [ 'name', 'hostname', 'groupname', 'groups' ]:
new_host.set_variable(k, args[k])
groupnames = args.get('groupname', args.get('groups', args.get('group', '')))
# add it to the group if that was specified
if groupnames:
for group_name in groupnames.split(","):
@ -95,13 +89,22 @@ class ActionModule(object):
vv("added host to group via add_host module: %s" % group_name)
result['new_groups'] = groupnames.split(",")
# actually load host vars
new_host.vars = combine_vars(new_host.vars, inventory.get_host_variables(new_name, update_cached=True, vault_password=inventory._vault_password))
# Add any passed variables to the new_host
for k in args.keys():
if not k in [ 'name', 'hostname', 'groupname', 'groups' ]:
new_host.set_variable(k, args[k])
result['new_host'] = new_name
# clear pattern caching completely since it's unpredictable what
# patterns may have referenced the group
inventory.clear_pattern_cache()
return ReturnData(conn=conn, comm_ok=True, result=result)


@ -88,6 +88,8 @@ class ActionModule(object):
remote_src = utils.boolean(options.get('remote_src', 'yes'))
regexp = options.get('regexp', None)
if self.runner.noop_on_check(inject):
return ReturnData(conn=conn, comm_ok=True, result=dict(skipped=True))
if src is None or dest is None:
result = dict(failed=True, msg="src and dest are required")
@ -125,7 +127,7 @@ class ActionModule(object):
xfered = self.runner._transfer_str(conn, tmp, 'src', resultant)
# fix file permissions when the copy is done as a different user
if self.runner.sudo and self.runner.sudo_user != 'root' or self.runner.su and self.runner.su_user != 'root':
if self.runner.become and self.runner.become_user != 'root':
self.runner._remote_chmod(conn, 'a+r', xfered, tmp)
# run the copy module
@ -136,6 +138,9 @@ class ActionModule(object):
)
module_args_tmp = utils.merge_module_args(module_args, new_module_args)
if self.runner.no_log:
resultant = " [[ Diff output has been hidden because 'no_log: true' was specified for this result ]]"
if self.runner.noop_on_check(inject):
return ReturnData(conn=conn, comm_ok=True, result=dict(changed=True), diff=dict(before_header=dest, after_header=src, after=resultant))
else:


@ -234,7 +234,7 @@ class ActionModule(object):
self._remove_tempfile_if_content_defined(content, content_tempfile)
# fix file permissions when the copy is done as a different user
if (self.runner.sudo and self.runner.sudo_user != 'root' or self.runner.su and self.runner.su_user != 'root') and not raw:
if self.runner.become and self.runner.become_user != 'root' and not raw:
self.runner._remote_chmod(conn, 'a+r', tmp_src, tmp_path)
if raw:
@ -366,6 +366,11 @@ class ActionModule(object):
diff['after_header'] = source
diff['after'] = src.read()
if self.runner.no_log:
if 'before' in diff:
diff["before"] = ""
if 'after' in diff:
diff["after"] = " [[ Diff output has been hidden because 'no_log: true' was specified for this result ]]"
return diff
def _remove_tempfile_if_content_defined(self, content, content_tempfile):


@ -78,7 +78,7 @@ class ActionModule(object):
# use slurp if privilege escalation is in use and permissions are lacking
remote_data = None
if remote_checksum in ('1', '2') or self.runner.sudo:
if remote_checksum in ('1', '2') or self.runner.become:
slurpres = self.runner._execute_module(conn, tmp, 'slurp', 'src=%s' % source, inject=inject)
if slurpres.is_successful():
if slurpres.result['encoding'] == 'base64':


@ -53,7 +53,12 @@ class ActionModule(object):
module_name = 'command'
module_args += " #USE_SHELL"
vv("REMOTE_MODULE %s %s" % (module_name, module_args), host=conn.host)
if self.runner.no_log:
module_display_args = "(no_log enabled, args censored)"
else:
module_display_args = module_args
vv("REMOTE_MODULE %s %s" % (module_name, module_display_args), host=conn.host)
return self.runner._execute_module(conn, tmp, module_name, module_args, inject=inject, complex_args=complex_args)


@ -16,6 +16,7 @@
import os
from ansible import utils
import ansible.constants as C
from ansible.runner.return_data import ReturnData
class ActionModule(object):
@ -32,7 +33,7 @@ class ActionModule(object):
src = options.get('src', None)
dest = options.get('dest', None)
remote_src = utils.boolean(options.get('remote_src', 'yes'))
remote_src = utils.boolean(options.get('remote_src', 'no'))
if src is None:
result = dict(failed=True, msg="src is required")
@ -47,12 +48,13 @@ class ActionModule(object):
else:
src = utils.path_dwim(self.runner.basedir, src)
tmp_src = tmp + src
tmp_path = self.runner._make_tmp_path(conn)
tmp_src = tmp_path + 'patch_source'
conn.put_file(src, tmp_src)
if self.runner.sudo and self.runner.sudo_user != 'root' or self.runner.su and self.runner.su_user != 'root':
if self.runner.become and self.runner.become_user != 'root':
if not self.runner.noop_on_check(inject):
self.runner._remote_chmod(conn, 'a+r', tmp_src, tmp)
self.runner._remote_chmod(conn, 'a+r', tmp_src, tmp_path)
new_module_args = dict(
src=tmp_src,
@ -63,4 +65,8 @@ class ActionModule(object):
module_args = utils.merge_module_args(module_args, new_module_args)
return self.runner._execute_module(conn, tmp, 'patch', module_args, inject=inject, complex_args=complex_args)
data = self.runner._execute_module(conn, tmp, 'patch', module_args, inject=inject, complex_args=complex_args)
if not C.DEFAULT_KEEP_REMOTE_FILES:
self.runner._remove_tmp_path(conn, tmp_path)
return data


@ -34,7 +34,7 @@ class ActionModule(object):
# in --check mode, always skip this module execution
return ReturnData(conn=conn, comm_ok=True, result=dict(skipped=True))
executable = ''
executable = None
# From library/command, keep in sync
r = re.compile(r'(^|\s)(executable)=(?P<quote>[\'"])?(.*?)(?(quote)(?<!\\)(?P=quote))((?<!\\)\s|$)')
for m in r.finditer(module_args):
@ -42,13 +42,15 @@ class ActionModule(object):
if m.group(2) == "executable":
executable = v
module_args = r.sub("", module_args)
if complex_args and executable is None:
executable = complex_args.get('executable', None)
result = self.runner._low_level_exec_command(conn, module_args, tmp, sudoable=True, executable=executable,
su=self.runner.su)
become=self.runner.become)
# for some modules (script, raw), the sudo success key
# may leak into the stdout due to the way the sudo/su
# command is constructed, so we filter that out here
if result.get('stdout','').strip().startswith('SUDO-SUCCESS-'):
result['stdout'] = re.sub(r'^((\r)?\n)?SUDO-SUCCESS.*(\r)?\n', '', result['stdout'])
if result.get('stdout','').strip().startswith('BECOME-SUCCESS-'):
result['stdout'] = re.sub(r'^((\r)?\n)?BECOME-SUCCESS.*(\r)?\n', '', result['stdout'])
return ReturnData(conn=conn, result=result)
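The marker filter above guards against the become success string leaking into raw module output. Reproduced standalone:

```python
import re

def strip_become_marker(stdout):
    """Remove a leaked BECOME-SUCCESS marker line from captured stdout."""
    if stdout.strip().startswith('BECOME-SUCCESS-'):
        return re.sub(r'^((\r)?\n)?BECOME-SUCCESS.*(\r)?\n', '', stdout)
    return stdout

print(strip_become_marker('BECOME-SUCCESS-abc123\ntask output\n'))
```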


@ -52,14 +52,20 @@ class ActionModule(object):
elif m.group(2) == "removes":
removes = v
module_args = r.sub("", module_args)
if complex_args:
if creates is None:
creates = complex_args.get('creates', None)
if removes is None:
removes = complex_args.get('removes', None)
if creates:
# do not run the command if the line contains creates=filename
# and the filename already exists. This allows idempotence
# of command executions.
module_args_tmp = "path=%s" % creates
module_args_tmp = ""
complex_args_tmp = dict(path=creates)
module_return = self.runner._execute_module(conn, tmp, 'stat', module_args_tmp, inject=inject,
complex_args=complex_args, persist_files=True)
complex_args=complex_args_tmp, persist_files=True)
stat = module_return.result.get('stat', None)
if stat and stat.get('exists', False):
return ReturnData(
@@ -74,9 +80,10 @@ class ActionModule(object):
# do not run the command if the line contains removes=filename
# and the filename does not exist. This allows idempotence
# of command executions.
module_args_tmp = "path=%s" % removes
module_args_tmp = ""
complex_args_tmp = dict(path=removes)
module_return = self.runner._execute_module(conn, tmp, 'stat', module_args_tmp, inject=inject,
complex_args=complex_args, persist_files=True)
complex_args=complex_args_tmp, persist_files=True)
stat = module_return.result.get('stat', None)
if stat and not stat.get('exists', False):
return ReturnData(
@@ -113,13 +120,12 @@ class ActionModule(object):
sudoable = True
# set file permissions, more permissive when the copy is done as a different user
if ((self.runner.sudo and self.runner.sudo_user != 'root') or
(self.runner.su and self.runner.su_user != 'root')):
if self.runner.become and self.runner.become_user != 'root':
chmod_mode = 'a+rx'
sudoable = False
else:
chmod_mode = '+rx'
self.runner._remote_chmod(conn, chmod_mode, tmp_src, tmp, sudoable=sudoable, su=self.runner.su)
self.runner._remote_chmod(conn, chmod_mode, tmp_src, tmp, sudoable=sudoable, become=self.runner.become)
# add preparation steps to one ssh roundtrip executing the script
env_string = self.runner._compute_environment_string(conn, inject)
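The permission branch above collapses the old sudo/su checks into a single become check; the decision it encodes can be sketched as a small helper (name assumed):

```python
def remote_chmod_args(become, become_user):
    # When escalating to a non-root user, the uploaded script must be
    # world-readable/executable, and chmod has to run as the unprivileged
    # connection user who owns the temp file, so sudoable is False.
    if become and become_user != 'root':
        return 'a+rx', False  # (chmod mode, sudoable)
    return '+rx', True
```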


@@ -78,7 +78,7 @@ class ActionModule(object):
# Store original transport and sudo values.
self.original_transport = inject.get('ansible_connection', self.runner.transport)
self.original_sudo = self.runner.sudo
self.original_become = self.runner.become
self.transport_overridden = False
if inject.get('delegate_to') is None:
@@ -87,7 +87,7 @@ class ActionModule(object):
if self.original_transport != 'local':
inject['ansible_connection'] = 'local'
self.transport_overridden = True
self.runner.sudo = False
self.runner.become = False
def run(self, conn, tmp, module_name, module_args,
inject, complex_args=None, **kwargs):
@@ -143,7 +143,7 @@ class ActionModule(object):
# use a delegate host instead of localhost
use_delegate = True
# COMPARE DELEGATE, HOST AND TRANSPORT
# COMPARE DELEGATE, HOST AND TRANSPORT
process_args = False
if not dest_host is src_host and self.original_transport != 'local':
# interpret and inject remote host info into src or dest
@@ -160,7 +160,7 @@ class ActionModule(object):
if not use_delegate or not user:
user = inject.get('ansible_ssh_user',
self.runner.remote_user)
if use_delegate:
# FIXME
private_key = inject.get('ansible_ssh_private_key_file', self.runner.private_key_file)
@@ -172,7 +172,7 @@ class ActionModule(object):
if not private_key is None:
private_key = os.path.expanduser(private_key)
options['private_key'] = private_key
# use the mode to define src and dest's url
if options.get('mode', 'push') == 'pull':
# src is a remote path: <user>@<host>, dest is a local path
@@ -192,7 +192,7 @@ class ActionModule(object):
rsync_path = options.get('rsync_path', None)
# If no rsync_path is set, sudo was originally set, and dest is remote then add 'sudo rsync' argument.
if not rsync_path and self.transport_overridden and self.original_sudo and not dest_is_local:
if not rsync_path and self.transport_overridden and self.original_become and not dest_is_local and self.runner.become_method == 'sudo':
rsync_path = 'sudo rsync'
# make sure rsync path is quoted.
@@ -206,8 +206,8 @@ class ActionModule(object):
# run the module and store the result
result = self.runner._execute_module(conn, tmp, 'synchronize', module_args, complex_args=options, inject=inject)
# reset the sudo property
self.runner.sudo = self.original_sudo
# reset the sudo property
self.runner.become = self.original_become
return result


@@ -117,23 +117,26 @@ class ActionModule(object):
# template is different from the remote value
# if showing diffs, we need to get the remote value
dest_contents = ''
diff = {}
if self.runner.diff:
# using persist_files to keep the temp directory around to avoid needing to grab another
dest_result = self.runner._execute_module(conn, tmp, 'slurp', "path=%s" % dest, inject=inject, persist_files=True)
diff['before'] = ""
if 'content' in dest_result.result:
dest_contents = dest_result.result['content']
if dest_result.result['encoding'] == 'base64':
dest_contents = base64.b64decode(dest_contents)
else:
raise Exception("unknown encoding, failed: %s" % dest_result.result)
diff['before'] = dest_contents
diff['before_header'] = dest
diff['after_header'] = source
diff['after'] = resultant
xfered = self.runner._transfer_str(conn, tmp, 'source', resultant)
# fix file permissions when the copy is done as a different user
if self.runner.sudo and self.runner.sudo_user != 'root' or self.runner.su and self.runner.su_user != 'root':
if self.runner.become and self.runner.become_user != 'root':
self.runner._remote_chmod(conn, 'a+r', xfered, tmp)
# run the copy module
@@ -145,12 +148,15 @@ class ActionModule(object):
)
module_args_tmp = utils.merge_module_args(module_args, new_module_args)
if self.runner.no_log and self.runner.diff:
diff['before'] = ""
diff['after'] = " [[ Diff output has been hidden because 'no_log: true' was specified for this result ]]"
if self.runner.noop_on_check(inject):
return ReturnData(conn=conn, comm_ok=True, result=dict(changed=True), diff=dict(before_header=dest, after_header=source, before=dest_contents, after=resultant))
return ReturnData(conn=conn, comm_ok=True, result=dict(changed=True), diff=diff)
else:
res = self.runner._execute_module(conn, tmp, 'copy', module_args_tmp, inject=inject, complex_args=complex_args)
if res.result.get('changed', False):
res.diff = dict(before=dest_contents, after=resultant)
res.diff = diff
return res
else:
# when running the file module based on the template data, we do
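The template hunks above accumulate a single diff dict and redact it under no_log; the shape of that dict (helper name assumed) is roughly:

```python
def build_template_diff(dest, source, dest_contents, resultant, no_log=False):
    # Assemble the diff the callbacks render; keep the headers but blank
    # the content when 'no_log: true' is in effect, as the hunk above does.
    diff = {
        'before_header': dest,
        'before': dest_contents,
        'after_header': source,
        'after': resultant,
    }
    if no_log:
        diff['before'] = ''
        diff['after'] = " [[ Diff output has been hidden because 'no_log: true' was specified for this result ]]"
    return diff
```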


@@ -62,7 +62,7 @@ class ActionModule(object):
module_args_tmp = ""
complex_args_tmp = dict(path=creates, get_md5=False, get_checksum=False)
module_return = self.runner._execute_module(conn, tmp, 'stat', module_args_tmp, inject=inject,
complex_args=complex_args_tmp, persist_files=True)
complex_args=complex_args_tmp, delete_remote_tmp=False)
stat = module_return.result.get('stat', None)
if stat and stat.get('exists', False):
return ReturnData(
@@ -99,7 +99,7 @@ class ActionModule(object):
# handle check mode client side
# fix file permissions when the copy is done as a different user
if copy:
if self.runner.sudo and self.runner.sudo_user != 'root' or self.runner.su and self.runner.su_user != 'root':
if self.runner.become and self.runner.become_user != 'root':
if not self.runner.noop_on_check(inject):
self.runner._remote_chmod(conn, 'a+r', tmp_src, tmp)
# Build temporary module_args.


@@ -230,7 +230,7 @@ class ActionModule(object):
self._remove_tempfile_if_content_defined(content, content_tempfile)
# fix file permissions when the copy is done as a different user
if (self.runner.sudo and self.runner.sudo_user != 'root' or self.runner.su and self.runner.su_user != 'root') and not raw:
if self.runner.become and self.runner.become_user != 'root' and not raw:
self.runner._remote_chmod(conn, 'a+r', tmp_src, tmp_path)
if raw:
@@ -362,6 +362,12 @@ class ActionModule(object):
diff['after_header'] = source
diff['after'] = src.read()
if self.runner.no_log:
if 'before' in diff:
diff['before'] = ""
if 'after' in diff:
diff["after"] = " [[ Diff output has been hidden because 'no_log: true' was specified for this result ]]"
return diff
def _remove_tempfile_if_content_defined(self, content, content_tempfile):


@@ -93,23 +93,26 @@ class ActionModule(object):
# template is different from the remote value
# if showing diffs, we need to get the remote value
dest_contents = ''
diff = {}
if self.runner.diff:
# using persist_files to keep the temp directory around to avoid needing to grab another
dest_result = self.runner._execute_module(conn, tmp, 'slurp', "path=%s" % dest, inject=inject, persist_files=True)
diff["before"] = ""
if 'content' in dest_result.result:
dest_contents = dest_result.result['content']
if dest_result.result['encoding'] == 'base64':
dest_contents = base64.b64decode(dest_contents)
else:
raise Exception("unknown encoding, failed: %s" % dest_result.result)
diff["before"] = dest_contents
diff["before_header"] = dest
diff["after"] = resultant
diff["after_header"] = source
xfered = self.runner._transfer_str(conn, tmp, 'source', resultant)
# fix file permissions when the copy is done as a different user
if self.runner.sudo and self.runner.sudo_user != 'root' or self.runner.su and self.runner.su_user != 'root':
if self.runner.become and self.runner.become_user != 'root':
self.runner._remote_chmod(conn, 'a+r', xfered, tmp)
# run the copy module
@@ -121,12 +124,15 @@ class ActionModule(object):
)
module_args_tmp = utils.merge_module_args(module_args, new_module_args)
if self.runner.no_log:
diff["before"] = ""
diff["after"] = " [[ Diff output has been hidden because 'no_log: true' was specified for this result ]]"
if self.runner.noop_on_check(inject):
return ReturnData(conn=conn, comm_ok=True, result=dict(changed=True), diff=dict(before_header=dest, after_header=source, before=dest_contents, after=resultant))
return ReturnData(conn=conn, comm_ok=True, result=dict(changed=True), diff=diff)
else:
res = self.runner._execute_module(conn, tmp, 'win_copy', module_args_tmp, inject=inject, complex_args=complex_args)
if res.result.get('changed', False):
res.diff = dict(before=dest_contents, after=resultant)
res.diff = diff
return res
else:
# when running the file module based on the template data, we do


@@ -15,12 +15,12 @@
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
import json
import os
import base64
import socket
import struct
import time
from multiprocessing import Lock
from ansible.callbacks import vvv, vvvv
from ansible.errors import AnsibleError, AnsibleFileNotFound
from ansible.runner.connection_plugins.ssh import Connection as SSHConnection
@@ -35,6 +35,8 @@ from ansible import constants
# multiple of the value to speed up file reads.
CHUNK_SIZE=1044*20
_LOCK = Lock()
class Connection(object):
''' raw socket accelerated connection '''
@@ -50,6 +52,7 @@ class Connection(object):
self.accport = port[1]
self.is_connected = False
self.has_pipelining = False
self.become_methods_supported=['sudo']
if not self.port:
self.port = constants.DEFAULT_REMOTE_PORT
@@ -110,6 +113,15 @@ class Connection(object):
def connect(self, allow_ssh=True):
''' activates the connection object '''
# ensure only one fork tries to setup the connection, in case the
# first task for multiple hosts is delegated to the same host.
if not self.is_connected:
with(_LOCK):
return self._connect(allow_ssh)
return self
def _connect(self, allow_ssh=True):
try:
if not self.is_connected:
wrong_user = False
@@ -149,7 +161,7 @@ class Connection(object):
res = self._execute_accelerate_module()
if not res.is_successful():
raise AnsibleError("Failed to launch the accelerated daemon on %s (reason: %s)" % (self.host,res.result.get('msg')))
return self.connect(allow_ssh=False)
return self._connect(allow_ssh=False)
else:
raise AnsibleError("Failed to connect to %s:%s" % (self.host,self.accport))
self.is_connected = True
@@ -226,11 +238,11 @@ class Connection(object):
else:
return response.get('rc') == 0
def exec_command(self, cmd, tmp_path, sudo_user=None, sudoable=False, executable='/bin/sh', in_data=None, su=None, su_user=None):
def exec_command(self, cmd, tmp_path, become_user=None, sudoable=False, executable='/bin/sh', in_data=None):
''' run a command on the remote host '''
if su or su_user:
raise AnsibleError("Internal Error: this module does not support running commands via su")
if sudoable and self.runner.become and self.runner.become_method not in self.become_methods_supported:
raise AnsibleError("Internal Error: this module does not support running commands via %s" % self.runner.become_method)
if in_data:
raise AnsibleError("Internal Error: this module does not support optimized module pipelining")
@@ -238,8 +250,8 @@ class Connection(object):
if executable == "":
executable = constants.DEFAULT_EXECUTABLE
if self.runner.sudo and sudoable and sudo_user:
cmd, prompt, success_key = utils.make_sudo_cmd(self.runner.sudo_exe, sudo_user, executable, cmd)
if self.runner.become and sudoable:
cmd, prompt, success_key = utils.make_become_cmd(cmd, become_user, executable, self.runner.become_method, '', self.runner.become_exe)
vvv("EXEC COMMAND %s" % cmd)
@@ -292,8 +304,8 @@ class Connection(object):
if fd.tell() >= fstat.st_size:
last = True
data = dict(mode='put', data=base64.b64encode(data), out_path=out_path, last=last)
if self.runner.sudo:
data['user'] = self.runner.sudo_user
if self.runner.become:
data['user'] = self.runner.become_user
data = utils.jsonify(data)
data = utils.encrypt(self.key, data)
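The accelerate hunks above wrap the handshake in a module-level multiprocessing Lock so that forked workers delegating to the same host set up only one daemon; the guard pattern, reduced to its essentials (handshake elided):

```python
from multiprocessing import Lock

_LOCK = Lock()

class AcceleratedConnection:
    # Minimal sketch of the double-checked connect guard from the hunk
    # above; the real work (SSH fallback, daemon launch) is elided.
    def __init__(self):
        self.is_connected = False

    def connect(self):
        # Only one forked process enters _connect(); later callers see
        # is_connected already set and skip the lock entirely.
        if not self.is_connected:
            with _LOCK:
                return self._connect()
        return self

    def _connect(self):
        if not self.is_connected:
            # ... perform the actual socket handshake here ...
            self.is_connected = True
        return self
```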


@@ -1,5 +1,6 @@
# Based on local.py (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
# (c) 2013, Maykel Moya <mmoya@speedyrails.com>
# (c) 2015, Toshio Kuratomi <tkuratomi@ansible.com>
#
# This file is part of Ansible
#
@@ -15,15 +16,21 @@
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import distutils.spawn
import traceback
import os
import shutil
import shlex
import subprocess
from ansible import errors
from ansible import utils
from ansible.utils.unicode import to_bytes
from ansible.callbacks import vvv
import ansible.constants as C
BUFSIZE = 65536
class Connection(object):
''' Local chroot based connections '''
@@ -31,6 +38,7 @@ class Connection(object):
def __init__(self, runner, host, port, *args, **kwargs):
self.chroot = host
self.has_pipelining = False
self.become_methods_supported=C.BECOME_METHODS
if os.geteuid() != 0:
raise errors.AnsibleError("chroot connection requires running as root")
@@ -60,70 +68,94 @@ class Connection(object):
return self
def exec_command(self, cmd, tmp_path, sudo_user=None, sudoable=False, executable='/bin/sh', in_data=None, su=None, su_user=None):
''' run a command on the chroot '''
def _generate_cmd(self, executable, cmd):
if executable:
local_cmd = [self.chroot_cmd, self.chroot, executable, '-c', cmd]
else:
# Prior to Python 2.7.3, shlex couldn't handle unicode type strings
cmd = to_bytes(cmd)
cmd = shlex.split(cmd)
local_cmd = [self.chroot_cmd, self.chroot]
local_cmd += cmd
return local_cmd
if su or su_user:
raise errors.AnsibleError("Internal Error: this module does not support running commands via su")
def _buffered_exec_command(self, cmd, tmp_path, become_user=None, sudoable=False, executable='/bin/sh', in_data=None, stdin=subprocess.PIPE):
''' run a command on the chroot. This is only needed for implementing
put_file() and get_file() so that we don't have to read the whole file
into memory.
Compared to exec_command() it loses some niceties like being able to
return the process's exit code immediately.
'''
if sudoable and self.runner.become and self.runner.become_method not in self.become_methods_supported:
raise errors.AnsibleError("Internal Error: this module does not support running commands via %s" % self.runner.become_method)
if in_data:
raise errors.AnsibleError("Internal Error: this module does not support optimized module pipelining")
# We enter chroot as root so sudo stuff can be ignored
if executable:
local_cmd = [self.chroot_cmd, self.chroot, executable, '-c', cmd]
else:
local_cmd = '%s "%s" %s' % (self.chroot_cmd, self.chroot, cmd)
# We enter the chroot as root, so we ignore privilege escalation (may need fixing if we have to become a specific user [ex: postgres admin])
local_cmd = self._generate_cmd(executable, cmd)
vvv("EXEC %s" % (local_cmd), host=self.chroot)
p = subprocess.Popen(local_cmd, shell=isinstance(local_cmd, basestring),
p = subprocess.Popen(local_cmd, shell=False,
cwd=self.runner.basedir,
stdin=subprocess.PIPE,
stdin=stdin,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
return p
def exec_command(self, cmd, tmp_path, become_user=None, sudoable=False, executable='/bin/sh', in_data=None):
''' run a command on the chroot '''
p = self._buffered_exec_command(cmd, tmp_path, become_user, sudoable, executable, in_data)
stdout, stderr = p.communicate()
return (p.returncode, '', stdout, stderr)
def put_file(self, in_path, out_path):
''' transfer a file from local to chroot '''
if not out_path.startswith(os.path.sep):
out_path = os.path.join(os.path.sep, out_path)
normpath = os.path.normpath(out_path)
out_path = os.path.join(self.chroot, normpath[1:])
vvv("PUT %s TO %s" % (in_path, out_path), host=self.chroot)
if not os.path.exists(in_path):
raise errors.AnsibleFileNotFound("file or module does not exist: %s" % in_path)
try:
shutil.copyfile(in_path, out_path)
except shutil.Error:
traceback.print_exc()
raise errors.AnsibleError("failed to copy: %s and %s are the same" % (in_path, out_path))
with open(in_path, 'rb') as in_file:
try:
p = self._buffered_exec_command('dd of=%s bs=%s' % (out_path, BUFSIZE), None, stdin=in_file)
except OSError:
raise errors.AnsibleError("chroot connection requires dd command in the chroot")
try:
stdout, stderr = p.communicate()
except:
traceback.print_exc()
raise errors.AnsibleError("failed to transfer file %s to %s" % (in_path, out_path))
if p.returncode != 0:
raise errors.AnsibleError("failed to transfer file %s to %s:\n%s\n%s" % (in_path, out_path, stdout, stderr))
except IOError:
traceback.print_exc()
raise errors.AnsibleError("failed to transfer file to %s" % out_path)
raise errors.AnsibleError("file or module does not exist at: %s" % in_path)
def fetch_file(self, in_path, out_path):
''' fetch a file from chroot to local '''
if not in_path.startswith(os.path.sep):
in_path = os.path.join(os.path.sep, in_path)
normpath = os.path.normpath(in_path)
in_path = os.path.join(self.chroot, normpath[1:])
vvv("FETCH %s TO %s" % (in_path, out_path), host=self.chroot)
if not os.path.exists(in_path):
raise errors.AnsibleFileNotFound("file or module does not exist: %s" % in_path)
try:
shutil.copyfile(in_path, out_path)
except shutil.Error:
traceback.print_exc()
raise errors.AnsibleError("failed to copy: %s and %s are the same" % (in_path, out_path))
except IOError:
traceback.print_exc()
raise errors.AnsibleError("failed to transfer file to %s" % out_path)
p = self._buffered_exec_command('dd if=%s bs=%s' % (in_path, BUFSIZE), None)
except OSError:
raise errors.AnsibleError("chroot connection requires dd command in the chroot")
with open(out_path, 'wb+') as out_file:
try:
chunk = p.stdout.read(BUFSIZE)
while chunk:
out_file.write(chunk)
chunk = p.stdout.read(BUFSIZE)
except:
traceback.print_exc()
raise errors.AnsibleError("failed to transfer file %s to %s" % (in_path, out_path))
stdout, stderr = p.communicate()
if p.returncode != 0:
raise errors.AnsibleError("failed to transfer file %s to %s:\n%s\n%s" % (in_path, out_path, stdout, stderr))
def close(self):
''' terminate the connection; nothing to do here '''
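The rewritten put_file above streams through `dd` instead of shutil so the file never has to fit in memory; a standalone sketch of the pattern follows. The wrapper command is parameterized here as an assumption (in chroot.py it is `chroot <dir>`, in jail.py `jexec <jail>`):

```python
import subprocess

BUFSIZE = 65536

def put_file_via_dd(wrapper_cmd, wrapper_arg, in_path, out_path):
    # Stream in_path into the wrapped environment through `dd`, the same
    # pattern as the hunk above: stdin comes straight from the local file,
    # so dd buffers the transfer instead of Python reading it into memory.
    with open(in_path, 'rb') as in_file:
        p = subprocess.Popen(
            [wrapper_cmd, wrapper_arg, 'dd', 'of=%s' % out_path, 'bs=%s' % BUFSIZE],
            stdin=in_file, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        stdout, stderr = p.communicate()
    if p.returncode != 0:
        raise RuntimeError('failed to transfer %s to %s: %s'
                           % (in_path, out_path, stderr))
```

fetch_file is the mirror image: run `dd if=<path>` and read the process's stdout in BUFSIZE chunks into the local file.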


@@ -53,6 +53,8 @@ class Connection(object):
else:
self.port = port
self.become_methods_supported=[]
def connect(self):
''' activates the connection object '''
@@ -64,11 +66,11 @@ class Connection(object):
socket = self.context.socket(zmq.REQ)
addr = "tcp://%s:%s" % (self.host, self.port)
socket.connect(addr)
self.socket = socket
self.socket = socket
return self
def exec_command(self, cmd, tmp_path, sudo_user, sudoable=False, executable='/bin/sh', in_data=None, su_user=None, su=None):
def exec_command(self, cmd, tmp_path, become_user, sudoable=False, executable='/bin/sh', in_data=None):
''' run a command on the remote host '''
if in_data:
@@ -76,7 +78,7 @@ class Connection(object):
vvv("EXEC COMMAND %s" % cmd)
if (self.runner.sudo and sudoable) or (self.runner.su and su):
if self.runner.become and sudoable:
raise errors.AnsibleError(
"When using fireball, do not specify sudo or su to run your tasks. " +
"Instead sudo the fireball action with sudo. " +


@@ -53,16 +53,14 @@ class Connection(object):
self.client = fc.Client(self.host)
return self
def exec_command(self, cmd, tmp_path, sudo_user=None, sudoable=False,
executable='/bin/sh', in_data=None, su=None, su_user=None):
def exec_command(self, cmd, tmp_path, become_user=None, sudoable=False,
executable='/bin/sh', in_data=None):
''' run a command on the remote minion '''
if su or su_user:
raise errors.AnsibleError("Internal Error: this module does not support running commands via su")
if in_data:
raise errors.AnsibleError("Internal Error: this module does not support optimized module pipelining")
# totally ignores privilege escalation
vvv("EXEC %s" % (cmd), host=self.host)
p = self.client.command.run(cmd)[self.host]
return (p[0], '', p[1], p[2])


@@ -1,6 +1,7 @@
# Based on local.py (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
# and chroot.py (c) 2013, Maykel Moya <mmoya@speedyrails.com>
# (c) 2013, Michael Scherer <misc@zarb.org>
# (c) 2015, Toshio Kuratomi <tkuratomi@ansible.com>
#
# This file is part of Ansible
#
@@ -16,17 +17,23 @@
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import distutils.spawn
import traceback
import os
import shutil
import shlex
import subprocess
from ansible import errors
from ansible.utils.unicode import to_bytes
from ansible.callbacks import vvv
import ansible.constants as C
BUFSIZE = 65536
class Connection(object):
''' Local chroot based connections '''
''' Local BSD Jail based connections '''
def _search_executable(self, executable):
cmd = distutils.spawn.find_executable(executable)
@@ -54,20 +61,19 @@ class Connection(object):
# remove \n
return stdout[:-1]
def __init__(self, runner, host, port, *args, **kwargs):
self.jail = host
self.runner = runner
self.host = host
self.has_pipelining = False
self.become_methods_supported=C.BECOME_METHODS
if os.geteuid() != 0:
raise errors.AnsibleError("jail connection requires running as root")
self.jls_cmd = self._search_executable('jls')
self.jexec_cmd = self._search_executable('jexec')
if not self.jail in self.list_jails():
raise errors.AnsibleError("incorrect jail name %s" % self.jail)
@@ -77,9 +83,9 @@ class Connection(object):
self.port = port
def connect(self, port=None):
''' connect to the chroot; nothing to do here '''
''' connect to the jail; nothing to do here '''
vvv("THIS IS A LOCAL CHROOT DIR", host=self.jail)
vvv("THIS IS A LOCAL JAIL DIR", host=self.jail)
return self
@@ -88,63 +94,90 @@ class Connection(object):
if executable:
local_cmd = [self.jexec_cmd, self.jail, executable, '-c', cmd]
else:
local_cmd = '%s "%s" %s' % (self.jexec_cmd, self.jail, cmd)
# Prior to Python 2.7.3, shlex couldn't handle unicode type strings
cmd = to_bytes(cmd)
cmd = shlex.split(cmd)
local_cmd = [self.jexec_cmd, self.jail]
local_cmd += cmd
return local_cmd
def exec_command(self, cmd, tmp_path, sudo_user=None, sudoable=False, executable='/bin/sh', in_data=None, su=None, su_user=None):
''' run a command on the chroot '''
def _buffered_exec_command(self, cmd, tmp_path, become_user=None, sudoable=False, executable='/bin/sh', in_data=None, stdin=subprocess.PIPE):
''' run a command on the jail. This is only needed for implementing
put_file() and get_file() so that we don't have to read the whole file
into memory.
if su or su_user:
raise errors.AnsibleError("Internal Error: this module does not support running commands via su")
Compared to exec_command() it loses some niceties like being able to
return the process's exit code immediately.
'''
if sudoable and self.runner.become and self.runner.become_method not in self.become_methods_supported:
raise errors.AnsibleError("Internal Error: this module does not support running commands via %s" % self.runner.become_method)
if in_data:
raise errors.AnsibleError("Internal Error: this module does not support optimized module pipelining")
# We enter chroot as root so sudo stuff can be ignored
# We enter the jail as root, so we ignore privilege escalation (may need fixing if we have to become a specific user [ex: postgres admin])
local_cmd = self._generate_cmd(executable, cmd)
vvv("EXEC %s" % (local_cmd), host=self.jail)
p = subprocess.Popen(local_cmd, shell=isinstance(local_cmd, basestring),
p = subprocess.Popen(local_cmd, shell=False,
cwd=self.runner.basedir,
stdin=subprocess.PIPE,
stdin=stdin,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
return p
def exec_command(self, cmd, tmp_path, become_user=None, sudoable=False, executable='/bin/sh', in_data=None):
''' run a command on the jail '''
p = self._buffered_exec_command(cmd, tmp_path, become_user, sudoable, executable, in_data)
stdout, stderr = p.communicate()
return (p.returncode, '', stdout, stderr)
def _normalize_path(self, path, prefix):
if not path.startswith(os.path.sep):
path = os.path.join(os.path.sep, path)
normpath = os.path.normpath(path)
return os.path.join(prefix, normpath[1:])
def _copy_file(self, in_path, out_path):
if not os.path.exists(in_path):
raise errors.AnsibleFileNotFound("file or module does not exist: %s" % in_path)
try:
shutil.copyfile(in_path, out_path)
except shutil.Error:
traceback.print_exc()
raise errors.AnsibleError("failed to copy: %s and %s are the same" % (in_path, out_path))
except IOError:
traceback.print_exc()
raise errors.AnsibleError("failed to transfer file to %s" % out_path)
def put_file(self, in_path, out_path):
''' transfer a file from local to chroot '''
''' transfer a file from local to jail '''
out_path = self._normalize_path(out_path, self.get_jail_path())
vvv("PUT %s TO %s" % (in_path, out_path), host=self.jail)
self._copy_file(in_path, out_path)
try:
with open(in_path, 'rb') as in_file:
try:
p = self._buffered_exec_command('dd of=%s bs=%s' % (out_path, BUFSIZE), None, stdin=in_file)
except OSError:
raise errors.AnsibleError("jail connection requires dd command in the jail")
try:
stdout, stderr = p.communicate()
except:
traceback.print_exc()
raise errors.AnsibleError("failed to transfer file %s to %s" % (in_path, out_path))
if p.returncode != 0:
raise errors.AnsibleError("failed to transfer file %s to %s:\n%s\n%s" % (in_path, out_path, stdout, stderr))
except IOError:
raise errors.AnsibleError("file or module does not exist at: %s" % in_path)
def fetch_file(self, in_path, out_path):
''' fetch a file from chroot to local '''
''' fetch a file from jail to local '''
in_path = self._normalize_path(in_path, self.get_jail_path())
vvv("FETCH %s TO %s" % (in_path, out_path), host=self.jail)
self._copy_file(in_path, out_path)
try:
p = self._buffered_exec_command('dd if=%s bs=%s' % (in_path, BUFSIZE), None)
except OSError:
raise errors.AnsibleError("jail connection requires dd command in the jail")
with open(out_path, 'wb+') as out_file:
try:
chunk = p.stdout.read(BUFSIZE)
while chunk:
out_file.write(chunk)
chunk = p.stdout.read(BUFSIZE)
except:
traceback.print_exc()
raise errors.AnsibleError("failed to transfer file %s to %s" % (in_path, out_path))
stdout, stderr = p.communicate()
if p.returncode != 0:
raise errors.AnsibleError("failed to transfer file %s to %s:\n%s\n%s" % (in_path, out_path, stdout, stderr))
def close(self):
''' terminate the connection; nothing to do here '''


@@ -22,6 +22,7 @@ import os
import subprocess
from ansible import errors
from ansible.callbacks import vvv
import ansible.constants as C
class Connection(object):
''' Local lxc based connections '''
@@ -50,6 +51,7 @@ class Connection(object):
self.host = host
# port is unused, since this is local
self.port = port
self.become_methods_supported=C.BECOME_METHODS
def connect(self, port=None):
''' connect to the lxc; nothing to do here '''
@@ -65,16 +67,16 @@ class Connection(object):
local_cmd = '%s -q -c lxc:/// lxc-enter-namespace %s -- %s' % (self.cmd, self.lxc, cmd)
return local_cmd
def exec_command(self, cmd, tmp_path, sudo_user, sudoable=False, executable='/bin/sh', in_data=None, su=None, su_user=None):
def exec_command(self, cmd, tmp_path, become_user, sudoable=False, executable='/bin/sh', in_data=None):
''' run a command on the lxc container '''
if su or su_user:
raise errors.AnsibleError("Internal Error: this module does not support running commands via su")
if sudoable and self.runner.become and self.runner.become_method not in self.become_methods_supported:
raise errors.AnsibleError("Internal Error: this module does not support running commands via %s" % self.runner.become_method)
if in_data:
raise errors.AnsibleError("Internal Error: this module does not support optimized module pipelining")
# We enter lxc as root so sudo stuff can be ignored
# We ignore privilege escalation!
local_cmd = self._generate_cmd(executable, cmd)
vvv("EXEC %s" % (local_cmd), host=self.lxc)


@@ -26,6 +26,7 @@ from ansible import errors
from ansible import utils
from ansible.callbacks import vvv
class Connection(object):
''' Local based connections '''
@@ -33,31 +34,34 @@ class Connection(object):
self.runner = runner
self.host = host
# port is unused, since this is local
self.port = port
self.port = port
self.has_pipelining = False
# TODO: add su(needs tty), pbrun, pfexec
self.become_methods_supported=['sudo']
def connect(self, port=None):
''' connect to the local host; nothing to do here '''
return self
def exec_command(self, cmd, tmp_path, sudo_user=None, sudoable=False, executable='/bin/sh', in_data=None, su=None, su_user=None):
def exec_command(self, cmd, tmp_path, become_user=None, sudoable=False, executable='/bin/sh', in_data=None):
''' run a command on the local host '''
# su requires to be run from a terminal, and therefore isn't supported here (yet?)
if su or su_user:
raise errors.AnsibleError("Internal Error: this module does not support running commands via su")
if sudoable and self.runner.become and self.runner.become_method not in self.become_methods_supported:
raise errors.AnsibleError("Internal Error: this module does not support running commands via %s" % self.runner.become_method)
if in_data:
raise errors.AnsibleError("Internal Error: this module does not support optimized module pipelining")
if not self.runner.sudo or not sudoable:
if self.runner.become and sudoable:
local_cmd, prompt, success_key = utils.make_become_cmd(cmd, become_user, executable, self.runner.become_method, '-H', self.runner.become_exe)
else:
if executable:
local_cmd = executable.split() + ['-c', cmd]
else:
local_cmd = cmd
else:
local_cmd, prompt, success_key = utils.make_sudo_cmd(self.runner.sudo_exe, sudo_user, executable, cmd)
executable = executable.split()[0] if executable else None
vvv("EXEC %s" % (local_cmd), host=self.host)
@@ -66,13 +70,19 @@ class Connection(object):
stdin=subprocess.PIPE,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
if self.runner.sudo and sudoable and self.runner.sudo_pass:
if self.runner.become and sudoable and self.runner.become_pass:
fcntl.fcntl(p.stdout, fcntl.F_SETFL,
fcntl.fcntl(p.stdout, fcntl.F_GETFL) | os.O_NONBLOCK)
fcntl.fcntl(p.stderr, fcntl.F_SETFL,
fcntl.fcntl(p.stderr, fcntl.F_GETFL) | os.O_NONBLOCK)
sudo_output = ''
while not sudo_output.endswith(prompt) and success_key not in sudo_output:
become_output = ''
while success_key not in become_output:
if prompt and become_output.endswith(prompt):
break
if utils.su_prompts.check_su_prompt(become_output):
break
rfd, wfd, efd = select.select([p.stdout, p.stderr], [],
[p.stdout, p.stderr], self.runner.timeout)
if p.stdout in rfd:
@@ -81,13 +91,13 @@ class Connection(object):
chunk = p.stderr.read()
else:
stdout, stderr = p.communicate()
raise errors.AnsibleError('timeout waiting for sudo password prompt:\n' + sudo_output)
raise errors.AnsibleError('timeout waiting for %s password prompt:\n' % self.runner.become_method + become_output)
if not chunk:
stdout, stderr = p.communicate()
raise errors.AnsibleError('sudo output closed while waiting for password prompt:\n' + sudo_output)
sudo_output += chunk
if success_key not in sudo_output:
p.stdin.write(self.runner.sudo_pass + '\n')
raise errors.AnsibleError('%s output closed while waiting for password prompt:\n' % self.runner.become_method + become_output)
become_output += chunk
if success_key not in become_output:
p.stdin.write(self.runner.become_pass + '\n')
fcntl.fcntl(p.stdout, fcntl.F_SETFL, fcntl.fcntl(p.stdout, fcntl.F_GETFL) & ~os.O_NONBLOCK)
fcntl.fcntl(p.stderr, fcntl.F_SETFL, fcntl.fcntl(p.stderr, fcntl.F_GETFL) & ~os.O_NONBLOCK)
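The non-blocking read loop above keeps accumulating escalation output until either the method's success marker or its password prompt shows up. That detection logic can be reduced to a small helper; this is an illustrative sketch (the function name and return convention are mine, not Ansible's):

```python
def scan_become_output(chunks, prompt, success_key):
    """Consume output chunks until the success marker appears or the
    privilege-escalation prompt is detected, mirroring the polling
    loop in exec_command (illustrative sketch, not Ansible's API)."""
    output = ''
    for chunk in chunks:
        output += chunk
        if success_key in output:
            return output, False   # escalation already succeeded
        if prompt and output.endswith(prompt):
            return output, True    # a password must be written to stdin
    return output, False
```

The second return value stands in for the branch that writes `self.runner.become_pass` to the process's stdin.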
@@ -125,6 +125,9 @@ class Connection(object):
self.private_key_file = private_key_file
self.has_pipelining = False
# TODO: add pbrun, pfexec
self.become_methods_supported=['sudo', 'su', 'pbrun']
def _cache_key(self):
return "%s__%s__" % (self.host, self.user)
@@ -184,9 +187,12 @@ class Connection(object):
return ssh
def exec_command(self, cmd, tmp_path, sudo_user=None, sudoable=False, executable='/bin/sh', in_data=None, su=None, su_user=None):
def exec_command(self, cmd, tmp_path, become_user=None, sudoable=False, executable='/bin/sh', in_data=None):
''' run a command on the remote host '''
if self.runner.become and sudoable and self.runner.become_method not in self.become_methods_supported:
raise errors.AnsibleError("Internal Error: this module does not support running commands via %s" % self.runner.become_method)
if in_data:
raise errors.AnsibleError("Internal Error: this module does not support optimized module pipelining")
@@ -206,7 +212,7 @@ class Connection(object):
no_prompt_out = ''
no_prompt_err = ''
if not (self.runner.sudo and sudoable) and not (self.runner.su and su):
if not (self.runner.become and sudoable):
if executable:
quoted_command = executable + ' -c ' + pipes.quote(cmd)
@@ -224,50 +230,46 @@ class Connection(object):
chan.get_pty(term=os.getenv('TERM', 'vt100'),
width=int(os.getenv('COLUMNS', 0)),
height=int(os.getenv('LINES', 0)))
if self.runner.sudo or sudoable:
shcmd, prompt, success_key = utils.make_sudo_cmd(self.runner.sudo_exe, sudo_user, executable, cmd)
elif self.runner.su or su:
shcmd, prompt, success_key = utils.make_su_cmd(su_user, executable, cmd)
if self.runner.become and sudoable:
shcmd, prompt, success_key = utils.make_become_cmd(cmd, become_user, executable, self.runner.become_method, '', self.runner.become_exe)
vvv("EXEC %s" % shcmd, host=self.host)
sudo_output = ''
become_output = ''
try:
chan.exec_command(shcmd)
if self.runner.sudo_pass or self.runner.su_pass:
if self.runner.become_pass:
while True:
if success_key in sudo_output or \
(self.runner.sudo_pass and sudo_output.endswith(prompt)) or \
(self.runner.su_pass and utils.su_prompts.check_su_prompt(sudo_output)):
if success_key in become_output or \
(prompt and become_output.endswith(prompt)) or \
utils.su_prompts.check_su_prompt(become_output):
break
chunk = chan.recv(bufsize)
if not chunk:
if 'unknown user' in sudo_output:
if 'unknown user' in become_output:
raise errors.AnsibleError(
'user %s does not exist' % sudo_user)
'user %s does not exist' % become_user)
else:
raise errors.AnsibleError('ssh connection ' +
'closed waiting for password prompt')
sudo_output += chunk
become_output += chunk
if success_key not in sudo_output:
if success_key not in become_output:
if sudoable:
chan.sendall(self.runner.sudo_pass + '\n')
elif su:
chan.sendall(self.runner.su_pass + '\n')
chan.sendall(self.runner.become_pass + '\n')
else:
no_prompt_out += sudo_output
no_prompt_err += sudo_output
no_prompt_out += become_output
no_prompt_err += become_output
except socket.timeout:
raise errors.AnsibleError('ssh timed out waiting for sudo.\n' + sudo_output)
raise errors.AnsibleError('ssh timed out waiting for privilege escalation.\n' + become_output)
stdout = ''.join(chan.makefile('rb', bufsize))
stderr = ''.join(chan.makefile_stderr('rb', bufsize))
@@ -34,6 +34,7 @@ from ansible.callbacks import vvv
from ansible import errors
from ansible import utils
class Connection(object):
''' ssh based connections '''
@@ -48,6 +49,9 @@ class Connection(object):
self.HASHED_KEY_MAGIC = "|1|"
self.has_pipelining = True
# TODO: add pbrun, pfexec
self.become_methods_supported=['sudo', 'su', 'pbrun']
fcntl.lockf(self.runner.process_lockfile, fcntl.LOCK_EX)
self.cp_dir = utils.prepare_writeable_dir('$HOME/.ansible/cp',mode=0700)
fcntl.lockf(self.runner.process_lockfile, fcntl.LOCK_UN)
@@ -87,10 +91,7 @@ class Connection(object):
self.common_args += ["-o", "IdentityFile=\"%s\"" % os.path.expanduser(self.private_key_file)]
elif self.runner.private_key_file is not None:
self.common_args += ["-o", "IdentityFile=\"%s\"" % os.path.expanduser(self.runner.private_key_file)]
if self.password:
self.common_args += ["-o", "GSSAPIAuthentication=no",
"-o", "PubkeyAuthentication=no"]
else:
if not self.password:
self.common_args += ["-o", "KbdInteractiveAuthentication=no",
"-o", "PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey",
"-o", "PasswordAuthentication=no"]
@@ -140,7 +141,7 @@ class Connection(object):
os.write(self.wfd, "%s\n" % self.password)
os.close(self.wfd)
def _communicate(self, p, stdin, indata, su=False, sudoable=False, prompt=None):
def _communicate(self, p, stdin, indata, sudoable=False, prompt=None):
fcntl.fcntl(p.stdout, fcntl.F_SETFL, fcntl.fcntl(p.stdout, fcntl.F_GETFL) & ~os.O_NONBLOCK)
fcntl.fcntl(p.stderr, fcntl.F_SETFL, fcntl.fcntl(p.stderr, fcntl.F_GETFL) & ~os.O_NONBLOCK)
# We can't use p.communicate here because the ControlMaster may have stdout open as well
@@ -157,23 +158,19 @@ class Connection(object):
while True:
rfd, wfd, efd = select.select(rpipes, [], rpipes, 1)
# fail early if the sudo/su password is wrong
if self.runner.sudo and sudoable:
if self.runner.sudo_pass:
incorrect_password = gettext.dgettext(
"sudo", "Sorry, try again.")
if stdout.endswith("%s\r\n%s" % (incorrect_password,
prompt)):
raise errors.AnsibleError('Incorrect sudo password')
# fail early if the become password is wrong
if self.runner.become and sudoable:
incorrect_password = gettext.dgettext(self.runner.become_method, C.BECOME_ERROR_STRINGS[self.runner.become_method])
if stdout.endswith(prompt):
raise errors.AnsibleError('Missing sudo password')
if prompt:
if self.runner.become_pass:
if stdout.endswith("%s\r\n%s" % (incorrect_password, prompt)):
raise errors.AnsibleError('Incorrect become password')
if self.runner.su and su and self.runner.su_pass:
incorrect_password = gettext.dgettext(
"su", "Sorry")
if stdout.endswith("%s\r\n%s" % (incorrect_password, prompt)):
raise errors.AnsibleError('Incorrect su password')
if stdout.endswith(prompt):
raise errors.AnsibleError('Missing become password')
elif stdout.endswith("%s\r\n%s" % (incorrect_password, prompt)):
raise errors.AnsibleError('Incorrect become password')
if p.stdout in rfd:
dat = os.read(p.stdout.fileno(), 9000)
@@ -256,9 +253,12 @@ class Connection(object):
vvv("EXEC previous known host file not found for %s" % host)
return True
def exec_command(self, cmd, tmp_path, sudo_user=None, sudoable=False, executable='/bin/sh', in_data=None, su_user=None, su=False):
def exec_command(self, cmd, tmp_path, become_user=None, sudoable=False, executable='/bin/sh', in_data=None):
''' run a command on the remote host '''
if sudoable and self.runner.become and self.runner.become_method not in self.become_methods_supported:
raise errors.AnsibleError("Internal Error: this module does not support running commands via %s" % self.runner.become_method)
ssh_cmd = self._password_cmd()
ssh_cmd += ["ssh", "-C"]
if not in_data:
@@ -269,32 +269,32 @@ class Connection(object):
if utils.VERBOSITY > 3:
ssh_cmd += ["-vvv"]
else:
ssh_cmd += ["-v"]
if self.runner.module_name == 'raw':
ssh_cmd += ["-q"]
else:
ssh_cmd += ["-v"]
ssh_cmd += self.common_args
if self.ipv6:
ssh_cmd += ['-6']
ssh_cmd += [self.host]
if su and su_user:
sudocmd, prompt, success_key = utils.make_su_cmd(su_user, executable, cmd)
ssh_cmd.append(sudocmd)
elif not self.runner.sudo or not sudoable:
if self.runner.become and sudoable:
becomecmd, prompt, success_key = utils.make_become_cmd(cmd, become_user, executable, self.runner.become_method, '', self.runner.become_exe)
ssh_cmd.append(becomecmd)
else:
prompt = None
if executable:
ssh_cmd.append(executable + ' -c ' + pipes.quote(cmd))
else:
ssh_cmd.append(cmd)
else:
sudocmd, prompt, success_key = utils.make_sudo_cmd(self.runner.sudo_exe, sudo_user, executable, cmd)
ssh_cmd.append(sudocmd)
vvv("EXEC %s" % ' '.join(ssh_cmd), host=self.host)
not_in_host_file = self.not_in_host_file(self.host)
if C.HOST_KEY_CHECKING and not_in_host_file:
# lock around the initial SSH connectivity so the user prompt about whether to add
# the host to known hosts is not intermingled with multiprocess output.
fcntl.lockf(self.runner.process_lockfile, fcntl.LOCK_EX)
fcntl.lockf(self.runner.output_lockfile, fcntl.LOCK_EX)
@@ -306,9 +306,8 @@ class Connection(object):
no_prompt_out = ''
no_prompt_err = ''
if (self.runner.sudo and sudoable and self.runner.sudo_pass) or \
(self.runner.su and su and self.runner.su_pass):
# several cases are handled for sudo privileges with password
if sudoable and self.runner.become and self.runner.become_pass:
# several cases are handled for escalated privileges with password
# * NOPASSWD (tty & no-tty): detect success_key on stdout
# * without NOPASSWD:
# * detect prompt on stdout (tty)
@@ -317,13 +316,13 @@ class Connection(object):
fcntl.fcntl(p.stdout, fcntl.F_GETFL) | os.O_NONBLOCK)
fcntl.fcntl(p.stderr, fcntl.F_SETFL,
fcntl.fcntl(p.stderr, fcntl.F_GETFL) | os.O_NONBLOCK)
sudo_output = ''
sudo_errput = ''
become_output = ''
become_errput = ''
while True:
if success_key in sudo_output or \
(self.runner.sudo_pass and sudo_output.endswith(prompt)) or \
(self.runner.su_pass and utils.su_prompts.check_su_prompt(sudo_output)):
if success_key in become_output or \
(prompt and become_output.endswith(prompt)) or \
utils.su_prompts.check_su_prompt(become_output):
break
rfd, wfd, efd = select.select([p.stdout, p.stderr], [],
@@ -331,36 +330,33 @@ class Connection(object):
if p.stderr in rfd:
chunk = p.stderr.read()
if not chunk:
raise errors.AnsibleError('ssh connection closed waiting for sudo or su password prompt')
sudo_errput += chunk
raise errors.AnsibleError('ssh connection closed waiting for a privilege escalation password prompt')
become_errput += chunk
incorrect_password = gettext.dgettext(
"sudo", "Sorry, try again.")
if sudo_errput.strip().endswith("%s%s" % (prompt, incorrect_password)):
raise errors.AnsibleError('Incorrect sudo password')
elif prompt and sudo_errput.endswith(prompt):
stdin.write(self.runner.sudo_pass + '\n')
"become", "Sorry, try again.")
if become_errput.strip().endswith("%s%s" % (prompt, incorrect_password)):
raise errors.AnsibleError('Incorrect become password')
elif prompt and become_errput.endswith(prompt):
stdin.write(self.runner.become_pass + '\n')
if p.stdout in rfd:
chunk = p.stdout.read()
if not chunk:
raise errors.AnsibleError('ssh connection closed waiting for sudo or su password prompt')
sudo_output += chunk
raise errors.AnsibleError('ssh connection closed waiting for %s password prompt' % self.runner.become_method)
become_output += chunk
if not rfd:
# timeout. wrap up process communication
stdout = p.communicate()
raise errors.AnsibleError('ssh connection error waiting for sudo or su password prompt')
raise errors.AnsibleError('ssh connection error while waiting for %s password prompt' % self.runner.become_method)
if success_key not in sudo_output:
if sudoable:
stdin.write(self.runner.sudo_pass + '\n')
elif su:
stdin.write(self.runner.su_pass + '\n')
else:
no_prompt_out += sudo_output
no_prompt_err += sudo_errput
if success_key in become_output:
no_prompt_out += become_output
no_prompt_err += become_errput
elif sudoable:
stdin.write(self.runner.become_pass + '\n')
(returncode, stdout, stderr) = self._communicate(p, stdin, in_data, su=su, sudoable=sudoable, prompt=prompt)
(returncode, stdout, stderr) = self._communicate(p, stdin, in_data, sudoable=sudoable, prompt=prompt)
if C.HOST_KEY_CHECKING and not_in_host_file:
# lock around the initial SSH connectivity so the user prompt about whether to add
@@ -18,17 +18,17 @@
from __future__ import absolute_import
import base64
import hashlib
import imp
import os
import re
import shlex
import inspect
import traceback
import urlparse
from ansible import errors
from ansible import utils
from ansible.callbacks import vvv, vvvv, verbose
from ansible.runner.shell_plugins import powershell
from ansible.utils.unicode import to_bytes
try:
from winrm import Response
@@ -44,10 +44,6 @@ try:
except ImportError:
pass
_winrm_cache = {
# 'user:pwhash@host:port': <protocol instance>
}
def vvvvv(msg, host=None):
verbose(msg, host=host, caplevel=4)
@@ -55,8 +51,8 @@ class Connection(object):
'''WinRM connections over HTTP/HTTPS.'''
transport_schemes = {
'http': [('kerberos', 'http'), ('plaintext', 'http'), ('plaintext', 'https')],
'https': [('kerberos', 'https'), ('plaintext', 'https')],
'http': [('kerberos', 'http'), ('plaintext', 'http'), ('ssl', 'https')],
'https': [('kerberos', 'https'), ('ssl', 'https')],
}
def __init__(self, runner, host, port, user, password, *args, **kwargs):
@@ -72,30 +68,47 @@ class Connection(object):
self.shell_id = None
self.delegate = None
# Add runas support
#self.become_methods_supported=['runas']
self.become_methods_supported=[]
def _winrm_connect(self):
'''
Establish a WinRM connection over HTTP/HTTPS.
'''
# get winrm-specific connection vars
host_vars = self.runner.inventory._hosts_cache[self.delegate].get_variables()
port = self.port or 5986
vvv("ESTABLISH WINRM CONNECTION FOR USER: %s on PORT %s TO %s" % \
(self.user, port, self.host), host=self.host)
netloc = '%s:%d' % (self.host, port)
cache_key = '%s:%s@%s:%d' % (self.user, hashlib.md5(self.password).hexdigest(), self.host, port)
if cache_key in _winrm_cache:
vvvv('WINRM REUSE EXISTING CONNECTION: %s' % cache_key, host=self.host)
return _winrm_cache[cache_key]
exc = None
for transport, scheme in self.transport_schemes['http' if port == 5985 else 'https']:
if transport == 'kerberos' and not HAVE_KERBEROS:
if transport == 'kerberos' and (not HAVE_KERBEROS or not '@' in self.user):
continue
if transport == 'kerberos':
realm = self.user.split('@', 1)[1].strip() or None
else:
realm = None
endpoint = urlparse.urlunsplit((scheme, netloc, '/wsman', '', ''))
self._winrm_kwargs = dict(username=self.user, password=self.password, realm=realm)
argspec = inspect.getargspec(Protocol.__init__)
for arg in argspec.args:
if arg in ('self', 'endpoint', 'transport', 'username', 'password', 'realm'):
continue
if 'ansible_winrm_%s' % arg in host_vars:
self._winrm_kwargs[arg] = host_vars['ansible_winrm_%s' % arg]
vvvv('WINRM CONNECT: transport=%s endpoint=%s' % (transport, endpoint),
host=self.host)
protocol = Protocol(endpoint, transport=transport,
username=self.user, password=self.password)
protocol = Protocol(endpoint, transport=transport, **self._winrm_kwargs)
try:
protocol.send_message('')
_winrm_cache[cache_key] = protocol
return protocol
except WinRMTransportError, exc:
err_msg = str(exc)
@@ -107,7 +120,6 @@ class Connection(object):
if code == 401:
raise errors.AnsibleError("the username/password specified for this server was incorrect")
elif code == 411:
_winrm_cache[cache_key] = protocol
return protocol
vvvv('WINRM CONNECTION ERROR: %s' % err_msg, host=self.host)
continue
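The connection cache introduced above is keyed per credential set and endpoint, with the password reduced to an MD5 digest so plaintext credentials never sit in the cache dictionary's keys. A minimal sketch of that key scheme (the helper name is mine):

```python
import hashlib

def winrm_cache_key(user, password, host, port):
    """Build the per-connection cache key the way the diff does:
    'user:pwhash@host:port', with the password hashed (sketch)."""
    pwhash = hashlib.md5(password.encode('utf-8')).hexdigest()
    return '%s:%s@%s:%d' % (user, pwhash, host, port)

_winrm_cache = {}  # 'user:pwhash@host:port' -> protocol instance
```

A second task targeting the same host with the same credentials then reuses the cached protocol instead of renegotiating the transport.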
@@ -143,7 +155,11 @@ class Connection(object):
self.protocol = self._winrm_connect()
return self
def exec_command(self, cmd, tmp_path, sudo_user=None, sudoable=False, executable=None, in_data=None, su=None, su_user=None):
def exec_command(self, cmd, tmp_path, become_user=None, sudoable=False, executable=None, in_data=None):
if sudoable and self.runner.become and self.runner.become_method not in self.become_methods_supported:
raise errors.AnsibleError("Internal Error: this module does not support running commands via %s" % self.runner.become_method)
cmd = cmd.encode('utf-8')
cmd_parts = shlex.split(cmd, posix=False)
if '-EncodedCommand' in cmd_parts:
@@ -161,7 +177,7 @@ class Connection(object):
except Exception, e:
traceback.print_exc()
raise errors.AnsibleError("failed to exec cmd %s" % cmd)
return (result.status_code, '', result.std_out.encode('utf-8'), result.std_err.encode('utf-8'))
return (result.status_code, '', to_bytes(result.std_out), to_bytes(result.std_err))
def put_file(self, in_path, out_path):
vvv("PUT %s TO %s" % (in_path, out_path), host=self.host)
@@ -184,7 +200,7 @@ class Connection(object):
# windows command length), divide by 2.67 (UTF16LE base64 command
# encoding), then by 1.35 again (data base64 encoding).
buffer_size = int(((8190 - len(cmd)) / 2.67) / 1.35)
for offset in xrange(0, in_size, buffer_size):
for offset in xrange(0, in_size or 1, buffer_size):
try:
out_data = in_file.read(buffer_size)
if offset == 0:
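The comment above explains the chunk-size arithmetic: stay under the ~8190-character Windows command-length limit, then account for UTF16-LE base64 command encoding (~2.67x) and base64 data encoding (~1.35x). The `in_size or 1` change makes zero-byte files still get one loop iteration so an empty file is created remotely. A sketch of just that math (helper name is mine):

```python
def upload_offsets(in_size, cmd_len, limit=8190):
    """Chunk offsets for the WinRM put_file loop: derive the data
    chunk size from the Windows command-length limit, accounting for
    UTF16-LE base64 command encoding (~2.67x) and base64 data
    encoding (~1.35x). `in_size or 1` keeps exactly one iteration
    for zero-byte files (sketch of the diff's arithmetic)."""
    buffer_size = int(((limit - cmd_len) / 2.67) / 1.35)
    return list(range(0, in_size or 1, buffer_size))
```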
@@ -2,6 +2,7 @@
# and chroot.py (c) 2013, Maykel Moya <mmoya@speedyrails.com>
# and jail.py (c) 2013, Michael Scherer <misc@zarb.org>
# (c) 2015, Dagobert Michelsen <dam@baltic-online.de>
# (c) 2015, Toshio Kuratomi <tkuratomi@ansible.com>
#
# This file is part of Ansible
#
@@ -17,15 +18,20 @@
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import distutils.spawn
import traceback
import os
import shutil
import shlex
import subprocess
from subprocess import Popen,PIPE
from ansible import errors
from ansible.utils.unicode import to_bytes
from ansible.callbacks import vvv
import ansible.constants as C
BUFSIZE = 65536
class Connection(object):
''' Local zone based connections '''
@@ -41,7 +47,7 @@ class Connection(object):
cwd=self.runner.basedir,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
#stdout, stderr = p.communicate()
zones = []
for l in pipe.stdout.readlines():
# 1:work:running:/zones/work:3126dc59-9a07-4829-cde9-a816e4c5040e:native:shared
@@ -68,6 +74,7 @@ class Connection(object):
self.runner = runner
self.host = host
self.has_pipelining = False
self.become_methods_supported=C.BECOME_METHODS
if os.geteuid() != 0:
raise errors.AnsibleError("zone connection requires running as root")
@@ -93,68 +100,99 @@ class Connection(object):
# a modifier
def _generate_cmd(self, executable, cmd):
if executable:
### TODO: Why was "-c" removed from here? (vs jail.py)
local_cmd = [self.zlogin_cmd, self.zone, executable, cmd]
else:
local_cmd = '%s "%s" %s' % (self.zlogin_cmd, self.zone, cmd)
# Prior to Python 2.7.3, shlex couldn't handle unicode type strings
cmd = to_bytes(cmd)
cmd = shlex.split(cmd)
local_cmd = [self.zlogin_cmd, self.zone]
local_cmd += cmd
return local_cmd
#def exec_command(self, cmd, tmp_path, sudo_user=None, sudoable=False, executable='/bin/sh', in_data=None, su=None, su_user=None):
def exec_command(self, cmd, tmp_path, sudo_user=None, sudoable=False, executable=None, in_data=None, su=None, su_user=None):
''' run a command on the zone '''
def _buffered_exec_command(self, cmd, tmp_path, become_user=None, sudoable=False, executable=None, in_data=None, stdin=subprocess.PIPE):
''' run a command on the zone. This is only needed for implementing
put_file() get_file() so that we don't have to read the whole file
into memory.
if su or su_user:
raise errors.AnsibleError("Internal Error: this module does not support running commands via su")
compared to exec_command() it loses some niceties like being able to
return the process's exit code immediately.
'''
if sudoable and self.runner.become and self.runner.become_method not in self.become_methods_supported:
raise errors.AnsibleError("Internal Error: this module does not support running commands via %s" % self.runner.become_method)
if in_data:
raise errors.AnsibleError("Internal Error: this module does not support optimized module pipelining")
# We enter zone as root so sudo stuff can be ignored
if executable == '/bin/sh':
executable = None
# We enter the zone as root, so we ignore privilege escalation (probably need to fix in case we have to become a specific user [ex: postgres admin])
local_cmd = self._generate_cmd(executable, cmd)
vvv("EXEC %s" % (local_cmd), host=self.zone)
p = subprocess.Popen(local_cmd, shell=isinstance(local_cmd, basestring),
p = subprocess.Popen(local_cmd, shell=False,
cwd=self.runner.basedir,
stdin=subprocess.PIPE,
stdin=stdin,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
return p
def exec_command(self, cmd, tmp_path, become_user=None, sudoable=False, executable=None, in_data=None):
''' run a command on the zone '''
### TODO: Why all the precautions not to specify /bin/sh? (vs jail.py)
if executable == '/bin/sh':
executable = None
p = self._buffered_exec_command(cmd, tmp_path, become_user, sudoable, executable, in_data)
stdout, stderr = p.communicate()
return (p.returncode, '', stdout, stderr)
def _normalize_path(self, path, prefix):
if not path.startswith(os.path.sep):
path = os.path.join(os.path.sep, path)
normpath = os.path.normpath(path)
return os.path.join(prefix, normpath[1:])
def _copy_file(self, in_path, out_path):
if not os.path.exists(in_path):
raise errors.AnsibleFileNotFound("file or module does not exist: %s" % in_path)
try:
shutil.copyfile(in_path, out_path)
except shutil.Error:
traceback.print_exc()
raise errors.AnsibleError("failed to copy: %s and %s are the same" % (in_path, out_path))
except IOError:
traceback.print_exc()
raise errors.AnsibleError("failed to transfer file to %s" % out_path)
def put_file(self, in_path, out_path):
''' transfer a file from local to zone '''
out_path = self._normalize_path(out_path, self.get_zone_path())
vvv("PUT %s TO %s" % (in_path, out_path), host=self.zone)
self._copy_file(in_path, out_path)
try:
with open(in_path, 'rb') as in_file:
try:
p = self._buffered_exec_command('dd of=%s bs=%s' % (out_path, BUFSIZE), None, stdin=in_file)
except OSError:
raise errors.AnsibleError("zone connection requires dd command in the zone")
try:
stdout, stderr = p.communicate()
except:
traceback.print_exc()
raise errors.AnsibleError("failed to transfer file %s to %s" % (in_path, out_path))
if p.returncode != 0:
raise errors.AnsibleError("failed to transfer file %s to %s:\n%s\n%s" % (in_path, out_path, stdout, stderr))
except IOError:
raise errors.AnsibleError("file or module does not exist at: %s" % in_path)
def fetch_file(self, in_path, out_path):
''' fetch a file from zone to local '''
in_path = self._normalize_path(in_path, self.get_zone_path())
vvv("FETCH %s TO %s" % (in_path, out_path), host=self.zone)
self._copy_file(in_path, out_path)
try:
p = self._buffered_exec_command('dd if=%s bs=%s' % (in_path, BUFSIZE), None)
except OSError:
raise errors.AnsibleError("zone connection requires dd command in the zone")
with open(out_path, 'wb+') as out_file:
try:
chunk = p.stdout.read(BUFSIZE)
while chunk:
out_file.write(chunk)
chunk = p.stdout.read(BUFSIZE)
except:
traceback.print_exc()
raise errors.AnsibleError("failed to transfer file %s to %s" % (in_path, out_path))
stdout, stderr = p.communicate()
if p.returncode != 0:
raise errors.AnsibleError("failed to transfer file %s to %s:\n%s\n%s" % (in_path, out_path, stdout, stderr))
def close(self):
''' terminate the connection; nothing to do here '''
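The new `put_file()`/`fetch_file()` above stream through `dd` via `_buffered_exec_command()` so the whole file never has to be read into memory. The same pattern, run locally for illustration (the real plugin prefixes the command with `zlogin <zone>`; the helper name here is mine):

```python
import os
import subprocess
import tempfile

BUFSIZE = 65536

def stream_copy_with_dd(in_path, out_path, bufsize=BUFSIZE):
    """Stream a file through `dd` the way the zone plugin's put_file
    does: dd reads from the subprocess's stdin, which is the open
    source file, so memory use stays bounded by the block size.
    Here dd runs locally; the plugin runs it inside the zone."""
    with open(in_path, 'rb') as in_file:
        p = subprocess.Popen(['dd', 'of=%s' % out_path, 'bs=%s' % bufsize],
                             stdin=in_file,
                             stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        stdout, stderr = p.communicate()
        if p.returncode != 0:
            raise RuntimeError('failed to transfer %s: %s' % (in_path, stderr))
```

`fetch_file()` is the mirror image: `dd if=<path>` with the local side reading `p.stdout` in `BUFSIZE` chunks.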
@@ -88,7 +88,7 @@ def failed(*a, **kw):
def success(*a, **kw):
''' Test if task result yields success '''
return not failed(*a, **kw)
return not failed(*a, **kw) and not skipped(*a, **kw)
def changed(*a, **kw):
''' Test if task result yields changed '''
@@ -16,7 +16,6 @@
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from ansible import utils, errors
import os
import codecs
import csv
@@ -29,7 +28,7 @@ class LookupModule(object):
try:
f = codecs.open(filename, 'r', encoding='utf-8')
creader = csv.reader(f, delimiter=delimiter)
creader = csv.reader(f, delimiter=str(delimiter))
for row in creader:
if row[0] == key:
@@ -72,7 +71,7 @@ class LookupModule(object):
path = utils.path_dwim(self.basedir, paramvals['file'])
var = self.read_csv(path, key, paramvals['delimiter'], paramvals['default'], paramvals['col'])
var = self.read_csv(path, key, str(paramvals['delimiter']), paramvals['default'], paramvals['col'])
if var is not None:
if type(var) is list:
for v in var:
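The `str(delimiter)` wrapping above exists because Python 2's csv module rejected unicode delimiters; the module still requires a one-character string. A self-contained sketch of the lookup's row scan (function name and in-memory data are mine, not the plugin's API):

```python
import csv
import io

def lookup_csv(text, key, delimiter=',', col=1, default=''):
    """Mirror of the csvfile lookup's read_csv: find the row whose
    first field equals `key` and return the requested column.
    The delimiter is coerced with str() as in the diff (sketch)."""
    reader = csv.reader(io.StringIO(text), delimiter=str(delimiter))
    for row in reader:
        if row and row[0] == key:
            return row[int(col)]
    return default
```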
@@ -17,21 +17,24 @@
from ansible import utils
import os
import urllib2
try:
import json
except ImportError:
import simplejson as json
from ansible.module_utils.urls import open_url
# this can be made configurable, but should not use ansible.cfg
ANSIBLE_ETCD_URL = 'http://127.0.0.1:4001'
if os.getenv('ANSIBLE_ETCD_URL') is not None:
ANSIBLE_ETCD_URL = os.environ['ANSIBLE_ETCD_URL']
class etcd():
def __init__(self, url=ANSIBLE_ETCD_URL):
class Etcd(object):
def __init__(self, url=ANSIBLE_ETCD_URL, validate_certs=True):
self.url = url
self.baseurl = '%s/v1/keys' % (self.url)
self.validate_certs = validate_certs
def get(self, key):
url = "%s/%s" % (self.baseurl, key)
@@ -39,7 +42,7 @@ class etcd():
data = None
value = ""
try:
r = urllib2.urlopen(url)
r = open_url(url, validate_certs=self.validate_certs)
data = r.read()
except:
return value
@@ -61,7 +64,6 @@ class LookupModule(object):
def __init__(self, basedir=None, **kwargs):
self.basedir = basedir
self.etcd = etcd()
def run(self, terms, inject=None, **kwargs):
@@ -70,9 +72,13 @@ class LookupModule(object):
if isinstance(terms, basestring):
terms = [ terms ]
validate_certs = kwargs.get('validate_certs', True)
etcd = Etcd(validate_certs=validate_certs)
ret = []
for term in terms:
key = term.split()[0]
value = self.etcd.get(key)
value = etcd.get(key)
ret.append(value)
return ret
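The refactor above constructs the `Etcd` client per run so `validate_certs` can be passed through from the lookup's kwargs, and each key is fetched from etcd's v1 keys endpoint. A sketch of just the URL plumbing, without the network call (hostname below is illustrative):

```python
import os

ANSIBLE_ETCD_URL = os.getenv('ANSIBLE_ETCD_URL', 'http://127.0.0.1:4001')

class Etcd(object):
    """Sketch of the refactored lookup class: certificate validation
    is plumbed through instead of calling urllib2 directly (the real
    class fetches via ansible.module_utils.urls.open_url)."""
    def __init__(self, url=ANSIBLE_ETCD_URL, validate_certs=True):
        self.baseurl = '%s/v1/keys' % url
        self.validate_certs = validate_certs

    def key_url(self, key):
        # URL queried for a single key in etcd's v1 API
        return '%s/%s' % (self.baseurl, key)
```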
@@ -151,15 +151,26 @@ class LookupModule(object):
)
elif self.count is not None:
# convert count to end
self.end = self.start + self.count * self.stride - 1
if self.count != 0:
self.end = self.start + self.count * self.stride - 1
else:
self.start = 0
self.end = 0
self.stride = 0
del self.count
if self.end < self.start:
raise AnsibleError("can't count backwards")
if self.stride > 0 and self.end < self.start:
raise AnsibleError("to count backwards make stride negative")
if self.stride < 0 and self.end > self.start:
raise AnsibleError("to count forward don't make stride negative")
if self.format.count('%') != 1:
raise AnsibleError("bad formatting string: %s" % self.format)
def generate_sequence(self):
numbers = xrange(self.start, self.end + 1, self.stride)
if self.stride > 0:
adjust = 1
else:
adjust = -1
numbers = xrange(self.start, self.end + adjust, self.stride)
for i in numbers:
try:
@@ -192,13 +203,13 @@ class LookupModule(object):
)
self.sanity_check()
results.extend(self.generate_sequence())
if self.stride != 0:
results.extend(self.generate_sequence())
except AnsibleError:
raise
except Exception:
except Exception, e:
raise AnsibleError(
"unknown error generating sequence"
"unknown error generating sequence: %s" % str(e)
)
return results
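The changes above do three things: `count` converts to `end` via `end = start + count*stride - 1` (with `count == 0` producing an empty sequence), the stride's sign is checked against the direction of the range, and the range bound is adjusted by the stride's sign so counting down works. A condensed sketch (positive-stride `count` shown; the function name is mine):

```python
def sequence(start, end=None, count=None, stride=1):
    """Sketch of the sequence lookup's count-to-end conversion and
    negative-stride handling from the diff."""
    if count is not None:
        if count != 0:
            end = start + count * stride - 1
        else:
            return []          # count of 0 yields an empty sequence
    if stride > 0 and end < start:
        raise ValueError("to count backwards make stride negative")
    if stride < 0 and end > start:
        raise ValueError("to count forward don't make stride negative")
    adjust = 1 if stride > 0 else -1  # inclusive bound in either direction
    return list(range(start, end + adjust, stride))
```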
@@ -18,6 +18,9 @@
from ansible import utils
import urllib2
from ansible.module_utils.urls import open_url, ConnectionError, SSLValidationError
from ansible.utils.unicode import to_unicode
class LookupModule(object):
def __init__(self, basedir=None, **kwargs):
@@ -30,19 +33,25 @@ class LookupModule(object):
if isinstance(terms, basestring):
terms = [ terms ]
validate_certs = kwargs.get('validate_certs', True)
ret = []
for term in terms:
try:
r = urllib2.Request(term)
response = urllib2.urlopen(r)
except URLError, e:
utils.warnings("Failed lookup url for %s : %s" % (term, str(e)))
response = open_url(term, validate_certs=validate_certs)
except urllib2.URLError as e:
utils.warning("Failed lookup url for %s : %s" % (term, str(e)))
continue
except HTTPError, e:
utils.warnings("Recieved HTTP error for %s : %s" % (term, str(e)))
except urllib2.HTTPError as e:
utils.warning("Received HTTP error for %s : %s" % (term, str(e)))
continue
except SSLValidationError as e:
utils.warning("Error validating the server's certificate for %s: %s" % (term, str(e)))
continue
except ConnectionError as e:
utils.warning("Error connecting to %s: %s" % (term, str(e)))
continue
for line in response.read().splitlines():
ret.append(line)
ret.append(to_unicode(line))
return ret

lib/ansible/runner/shell_plugins/powershell.py Normal file → Executable file

@@ -22,7 +22,7 @@ import random
import shlex
import time
_common_args = ['PowerShell', '-NoProfile', '-NonInteractive']
_common_args = ['PowerShell', '-NoProfile', '-NonInteractive','-ExecutionPolicy', 'Unrestricted']
# Primarily for testing, allow explicitly specifying PowerShell version via
# an environment variable.
@@ -19,9 +19,7 @@ import errno
import sys
import re
import os
import shlex
import yaml
import copy
import optparse
import operator
from ansible import errors
@@ -29,8 +27,7 @@ from ansible import __version__
from ansible.utils.display_functions import *
from ansible.utils.plugins import *
from ansible.utils.su_prompts import *
from ansible.utils.hashing import secure_hash, secure_hash_s, checksum, checksum_s, md5, md5s
from ansible.callbacks import display
from ansible.utils.hashing import secure_hash, secure_hash_s, checksum, checksum_s, md5, md5s #unused here but 'reexported'
from ansible.module_utils.splitter import split_args, unquote
from ansible.module_utils.basic import heuristic_log_sanitize
from ansible.utils.unicode import to_bytes, to_unicode
@@ -45,11 +42,11 @@ import pipes
import random
import difflib
import warnings
import traceback
import getpass
import sys
import subprocess
import contextlib
import tempfile
from multiprocessing import Lock
from vault import VaultLib
@@ -63,6 +60,7 @@ LOOKUP_REGEX = re.compile(r'lookup\s*\(')
PRINT_CODE_REGEX = re.compile(r'(?:{[{%]|[%}]})')
CODE_REGEX = re.compile(r'(?:{%|%})')
_LOCK = Lock()
try:
# simplejson can be much faster if it's available
@@ -128,8 +126,15 @@ def key_for_hostname(hostname):
key_path = os.path.expanduser(C.ACCELERATE_KEYS_DIR)
if not os.path.exists(key_path):
os.makedirs(key_path, mode=0700)
os.chmod(key_path, int(C.ACCELERATE_KEYS_DIR_PERMS, 8))
# avoid race with multiple forks trying to create paths on host
# but limit when locking is needed to creation only
with(_LOCK):
if not os.path.exists(key_path):
# use a temp directory and rename to ensure the directory
# searched for only appears after permissions applied.
tmp_dir = tempfile.mkdtemp(dir=os.path.dirname(key_path))
os.chmod(tmp_dir, int(C.ACCELERATE_KEYS_DIR_PERMS, 8))
os.rename(tmp_dir, key_path)
elif not os.path.isdir(key_path):
raise errors.AnsibleError('ACCELERATE_KEYS_DIR is not a directory.')
@@ -140,22 +145,28 @@ def key_for_hostname(hostname):
# use new AES keys every 2 hours, which means fireball must not allow running for longer either
if not os.path.exists(key_path) or (time.time() - os.path.getmtime(key_path) > 60*60*2):
key = AesKey.Generate()
fd = os.open(key_path, os.O_WRONLY | os.O_CREAT, int(C.ACCELERATE_KEYS_FILE_PERMS, 8))
fh = os.fdopen(fd, 'w')
fh.write(str(key))
fh.close()
return key
else:
if stat.S_IMODE(os.stat(key_path).st_mode) != int(C.ACCELERATE_KEYS_FILE_PERMS, 8):
raise errors.AnsibleError('Incorrect permissions on the key file for this host. Use `chmod 0%o %s` to correct this issue.' % (int(C.ACCELERATE_KEYS_FILE_PERMS, 8), key_path))
fh = open(key_path)
key = AesKey.Read(fh.read())
fh.close()
return key
# avoid race with multiple forks trying to create key
# but limit when locking is needed to creation only
with(_LOCK):
if not os.path.exists(key_path) or (time.time() - os.path.getmtime(key_path) > 60*60*2):
key = AesKey.Generate()
# use temp file to ensure file only appears once it has
# desired contents and permissions
with tempfile.NamedTemporaryFile(mode='w', dir=os.path.dirname(key_path), delete=False) as fh:
tmp_key_path = fh.name
fh.write(str(key))
os.chmod(tmp_key_path, int(C.ACCELERATE_KEYS_FILE_PERMS, 8))
os.rename(tmp_key_path, key_path)
return key
if stat.S_IMODE(os.stat(key_path).st_mode) != int(C.ACCELERATE_KEYS_FILE_PERMS, 8):
raise errors.AnsibleError('Incorrect permissions on the key file for this host. Use `chmod 0%o %s` to correct this issue.' % (int(C.ACCELERATE_KEYS_FILE_PERMS, 8), key_path))
with open(key_path) as fh:
return AesKey.Read(fh.read())
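The same trick is applied to the key file itself: write a sibling temporary file, fix its mode, then rename it over the destination. A hedged sketch (`write_key_atomically` is an illustrative helper, not the patch's API):

```python
import os
import tempfile

def write_key_atomically(key_path, contents, mode=0o600):
    # NamedTemporaryFile in the destination directory keeps the rename on
    # one filesystem; readers see either the old key or the complete new
    # one, never a half-written or wrongly-permissioned file.
    with tempfile.NamedTemporaryFile(mode='w', dir=os.path.dirname(key_path),
                                     delete=False) as fh:
        fh.write(contents)
        tmp_path = fh.name
    os.chmod(tmp_path, mode)
    os.rename(tmp_path, key_path)
```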
def encrypt(key, msg):
return key.Encrypt(msg)
return key.Encrypt(msg.encode('utf-8'))
def decrypt(key, msg):
try:
@ -229,9 +240,9 @@ def write_tree_file(tree, hostname, buf):
# TODO: might be nice to append playbook runs per host in a similar way
# in which case, we'd want append mode.
path = os.path.join(tree, hostname)
fd = open(path, "w+")
fd.write(buf)
fd.close()
buf = to_bytes(buf)
with open(path, 'wb+') as fd:
fd.write(buf)
def is_failed(result):
''' is a given JSON result a failed result? '''
@ -260,10 +271,10 @@ def check_conditional(conditional, basedir, inject, fail_on_undefined=False):
conditional = conditional.replace("jinja2_compare ","")
# allow variable names
if conditional in inject and '-' not in str(inject[conditional]):
conditional = inject[conditional]
if conditional in inject and '-' not in to_unicode(inject[conditional], nonstring='simplerepr'):
conditional = to_unicode(inject[conditional], nonstring='simplerepr')
conditional = template.template(basedir, conditional, inject, fail_on_undefined=fail_on_undefined)
original = str(conditional).replace("jinja2_compare ","")
original = to_unicode(conditional, nonstring='simplerepr').replace("jinja2_compare ","")
# a Jinja2 evaluation that results in something Python can eval!
presented = "{%% if %s %%} True {%% else %%} False {%% endif %%}" % conditional
conditional = template.template(basedir, presented, inject)
@ -342,9 +353,6 @@ def path_dwim_relative(original, dirname, source, playbook_base, check=True):
''' find one file in a directory one level up in a dir named dirname relative to current '''
# (used by roles code)
from ansible.utils import template
basedir = os.path.dirname(original)
if os.path.islink(basedir):
basedir = unfrackpath(basedir)
@ -537,8 +545,6 @@ def _clean_data_struct(orig_data, from_remote=False, from_inventory=False):
def parse_json(raw_data, from_remote=False, from_inventory=False, no_exceptions=False):
''' this version for module return data only '''
orig_data = raw_data
# ignore stuff like tcgetattr spewage or other warnings
data = filter_leading_non_json_lines(raw_data)
@ -807,23 +813,27 @@ def merge_hash(a, b):
''' recursively merges hash b into a
keys from b take precedence over keys from a '''
result = {}
# we check here as well as in combine_vars() since this
# function can work recursively with nested dicts
_validate_both_dicts(a, b)
for dicts in a, b:
# next, iterate over b keys and values
for k, v in dicts.iteritems():
# if there's already such key in a
# and that key contains dict
if k in result and isinstance(result[k], dict):
# merge those dicts recursively
result[k] = merge_hash(a[k], v)
else:
# otherwise, just copy a value from b to a
result[k] = v
# if a is empty or equal to b, return b
if a == {} or a == b:
return b.copy()
# if b is empty the below unfolds quickly
result = a.copy()
# next, iterate over b keys and values
for k, v in b.iteritems():
# if there's already such key in a
# and that key contains dict
if k in result and isinstance(result[k], dict) and isinstance(v, dict):
# merge those dicts recursively
result[k] = merge_hash(result[k], v)
else:
# otherwise, just copy a value from b to a
result[k] = v
return result
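The rewritten `merge_hash` iterates only over `b` and recurses only when both sides hold dicts, which avoids the crash when a scalar in `a` is shadowed by a dict in `b`. A Python 3 rendition of the new logic (`items()` in place of `iteritems()`):

```python
def merge_hash(a, b):
    # keys from b take precedence; the early return mirrors the patch's
    # short-circuit for empty or identical inputs
    if a == {} or a == b:
        return b.copy()
    result = a.copy()
    for k, v in b.items():
        if k in result and isinstance(result[k], dict) and isinstance(v, dict):
            result[k] = merge_hash(result[k], v)  # merge nested dicts
        else:
            result[k] = v                         # otherwise b wins outright
    return result
```

Nested dicts are combined key by key, while a dict in `b` simply replaces a scalar of the same name in `a`.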
@ -992,14 +1002,12 @@ def base_parser(constants=C, usage="", output_opts=False, runas_opts=False,
default=constants.DEFAULT_HOST_LIST)
parser.add_option('-e', '--extra-vars', dest="extra_vars", action="append",
help="set additional variables as key=value or YAML/JSON", default=[])
parser.add_option('-u', '--user', default=constants.DEFAULT_REMOTE_USER, dest='remote_user',
help='connect as this user (default=%s)' % constants.DEFAULT_REMOTE_USER)
parser.add_option('-k', '--ask-pass', default=False, dest='ask_pass', action='store_true',
help='ask for SSH password')
parser.add_option('--private-key', default=C.DEFAULT_PRIVATE_KEY_FILE, dest='private_key_file',
parser.add_option('--private-key', default=constants.DEFAULT_PRIVATE_KEY_FILE, dest='private_key_file',
help='use this file to authenticate the connection')
parser.add_option('-K', '--ask-sudo-pass', default=False, dest='ask_sudo_pass', action='store_true',
help='ask for sudo password')
parser.add_option('--ask-su-pass', default=False, dest='ask_su_pass', action='store_true',
help='ask for su password')
parser.add_option('--ask-vault-pass', default=False, dest='ask_vault_pass', action='store_true',
help='ask for vault password')
parser.add_option('--vault-password-file', default=constants.DEFAULT_VAULT_PASSWORD_FILE,
@ -1025,22 +1033,35 @@ def base_parser(constants=C, usage="", output_opts=False, runas_opts=False,
help='log output to this directory')
if runas_opts:
parser.add_option("-s", "--sudo", default=constants.DEFAULT_SUDO, action="store_true",
dest='sudo', help="run operations with sudo (nopasswd)")
# priv user defaults to root later on to enable detecting when this option was given here
parser.add_option('-K', '--ask-sudo-pass', default=constants.DEFAULT_ASK_SUDO_PASS, dest='ask_sudo_pass', action='store_true',
help='ask for sudo password (deprecated, use become)')
parser.add_option('--ask-su-pass', default=constants.DEFAULT_ASK_SU_PASS, dest='ask_su_pass', action='store_true',
help='ask for su password (deprecated, use become)')
parser.add_option("-s", "--sudo", default=constants.DEFAULT_SUDO, action="store_true", dest='sudo',
help="run operations with sudo (nopasswd) (deprecated, use become)")
parser.add_option('-U', '--sudo-user', dest='sudo_user', default=None,
help='desired sudo user (default=root)') # Can't default to root because we need to detect when this option was given
parser.add_option('-u', '--user', default=constants.DEFAULT_REMOTE_USER,
dest='remote_user', help='connect as this user (default=%s)' % constants.DEFAULT_REMOTE_USER)
help='desired sudo user (default=root) (deprecated, use become)')
parser.add_option('-S', '--su', default=constants.DEFAULT_SU, action='store_true',
help='run operations with su (deprecated, use become)')
parser.add_option('-R', '--su-user', default=None,
help='run operations with su as this user (default=%s) (deprecated, use become)' % constants.DEFAULT_SU_USER)
# consolidated privilege escalation (become)
parser.add_option("-b", "--become", default=constants.DEFAULT_BECOME, action="store_true", dest='become',
help="run operations with become (nopasswd implied)")
parser.add_option('--become-method', dest='become_method', default=constants.DEFAULT_BECOME_METHOD, type='string',
help="privilege escalation method to use (default=%s), valid choices: [ %s ]" % (constants.DEFAULT_BECOME_METHOD, ' | '.join(constants.BECOME_METHODS)))
parser.add_option('--become-user', default=None, dest='become_user', type='string',
help='run operations as this user (default=%s)' % constants.DEFAULT_BECOME_USER)
parser.add_option('--ask-become-pass', default=False, dest='become_ask_pass', action='store_true',
help='ask for privilege escalation password')
parser.add_option('-S', '--su', default=constants.DEFAULT_SU,
action='store_true', help='run operations with su')
parser.add_option('-R', '--su-user', help='run operations with su as this '
'user (default=%s)' % constants.DEFAULT_SU_USER)
if connect_opts:
parser.add_option('-c', '--connection', dest='connection',
default=C.DEFAULT_TRANSPORT,
help="connection type to use (default=%s)" % C.DEFAULT_TRANSPORT)
default=constants.DEFAULT_TRANSPORT,
help="connection type to use (default=%s)" % constants.DEFAULT_TRANSPORT)
if async_opts:
parser.add_option('-P', '--poll', default=constants.DEFAULT_POLL_INTERVAL, type='int',
@ -1059,7 +1080,6 @@ def base_parser(constants=C, usage="", output_opts=False, runas_opts=False,
help="when changing (small) files and templates, show the differences in those files; works great with --check"
)
return parser
def parse_extra_vars(extra_vars_opts, vault_pass):
@ -1106,41 +1126,58 @@ def ask_vault_passwords(ask_vault_pass=False, ask_new_vault_pass=False, confirm_
return vault_pass, new_vault_pass
def ask_passwords(ask_pass=False, ask_sudo_pass=False, ask_su_pass=False, ask_vault_pass=False):
def ask_passwords(ask_pass=False, become_ask_pass=False, ask_vault_pass=False, become_method=C.DEFAULT_BECOME_METHOD):
sshpass = None
sudopass = None
supass = None
becomepass = None
vaultpass = None
sudo_prompt = "sudo password: "
su_prompt = "su password: "
become_prompt = ''
if ask_pass:
sshpass = getpass.getpass(prompt="SSH password: ")
become_prompt = "%s password[defaults to SSH password]: " % become_method.upper()
if sshpass:
sshpass = to_bytes(sshpass, errors='strict', nonstring='simplerepr')
sudo_prompt = "sudo password [defaults to SSH password]: "
su_prompt = "su password [defaults to SSH password]: "
else:
become_prompt = "%s password: " % become_method.upper()
if ask_sudo_pass:
sudopass = getpass.getpass(prompt=sudo_prompt)
if ask_pass and sudopass == '':
sudopass = sshpass
if sudopass:
sudopass = to_bytes(sudopass, errors='strict', nonstring='simplerepr')
if ask_su_pass:
supass = getpass.getpass(prompt=su_prompt)
if ask_pass and supass == '':
supass = sshpass
if supass:
supass = to_bytes(supass, errors='strict', nonstring='simplerepr')
if become_ask_pass:
becomepass = getpass.getpass(prompt=become_prompt)
if ask_pass and becomepass == '':
becomepass = sshpass
if becomepass:
becomepass = to_bytes(becomepass)
if ask_vault_pass:
vaultpass = getpass.getpass(prompt="Vault password: ")
if vaultpass:
vaultpass = to_bytes(vaultpass, errors='strict', nonstring='simplerepr').strip()
return (sshpass, sudopass, supass, vaultpass)
return (sshpass, becomepass, vaultpass)
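The separate sudo/su prompts collapse into one become prompt derived from the method name; when an SSH password was collected first, it doubles as the default. The prompt construction in isolation (a sketch of just that branch, not the full `getpass` flow):

```python
def become_prompt(become_method, ask_pass):
    # e.g. 'SUDO password: ', or with ask_pass set,
    # 'SU password[defaults to SSH password]: '
    if ask_pass:
        return "%s password[defaults to SSH password]: " % become_method.upper()
    return "%s password: " % become_method.upper()
```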
def choose_pass_prompt(options):
if options.ask_su_pass:
return 'su'
elif options.ask_sudo_pass:
return 'sudo'
return options.become_method
def normalize_become_options(options):
options.become_ask_pass = options.become_ask_pass or options.ask_sudo_pass or options.ask_su_pass or C.DEFAULT_BECOME_ASK_PASS
options.become_user = options.become_user or options.sudo_user or options.su_user or C.DEFAULT_BECOME_USER
if options.become:
pass
elif options.sudo:
options.become = True
options.become_method = 'sudo'
elif options.su:
options.become = True
options.become_method = 'su'
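`normalize_become_options` maps the deprecated flags onto the consolidated settings with a fixed precedence: an explicit `--become` wins, then `--sudo`, then `--su`. Sketched over a plain dict instead of optparse `Values` (an assumption for testability):

```python
def normalize_become(opts):
    # precedence: explicit become > sudo > su; legacy flags keep working
    if not opts.get('become'):
        if opts.get('sudo'):
            opts['become'], opts['become_method'] = True, 'sudo'
        elif opts.get('su'):
            opts['become'], opts['become_method'] = True, 'su'
    return opts
```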
def do_encrypt(result, encrypt, salt_size=None, salt=None):
if PASSLIB_AVAILABLE:
@ -1194,38 +1231,64 @@ def boolean(value):
else:
return False
def make_become_cmd(cmd, user, shell, method, flags=None, exe=None):
"""
helper function for connection plugins to create privilege escalation commands
"""
randbits = ''.join(chr(random.randint(ord('a'), ord('z'))) for x in xrange(32))
success_key = 'BECOME-SUCCESS-%s' % randbits
prompt = None
becomecmd = None
shell = shell or '$SHELL'
if method == 'sudo':
# Rather than detect if sudo wants a password this time, -k makes sudo always ask for
# a password if one is required. Passing a quoted compound command to sudo (or sudo -s)
# directly doesn't work, so we shellquote it with pipes.quote() and pass the quoted
# string to the user's shell. We loop reading output until we see the randomly-generated
# sudo prompt set with the -p option.
prompt = '[sudo via ansible, key=%s] password: ' % randbits
exe = exe or C.DEFAULT_SUDO_EXE
becomecmd = '%s -k && %s %s -S -p "%s" -u %s %s -c %s' % \
(exe, exe, flags or C.DEFAULT_SUDO_FLAGS, prompt, user, shell, pipes.quote('echo %s; %s' % (success_key, cmd)))
elif method == 'su':
exe = exe or C.DEFAULT_SU_EXE
flags = flags or C.DEFAULT_SU_FLAGS
becomecmd = '%s %s %s -c "%s -c %s"' % (exe, flags, user, shell, pipes.quote('echo %s; %s' % (success_key, cmd)))
elif method == 'pbrun':
prompt = 'assword:'
exe = exe or 'pbrun'
flags = flags or ''
becomecmd = '%s -b %s -u %s "%s"' % (exe, flags, user, pipes.quote('echo %s; %s' % (success_key,cmd)))
elif method == 'pfexec':
exe = exe or 'pfexec'
flags = flags or ''
# No user as it uses it's own exec_attr to figure it out
becomecmd = '%s %s "%s"' % (exe, flags, pipes.quote('echo %s; %s' % (success_key,cmd)))
if becomecmd is None:
raise errors.AnsibleError("Privilege escalation method not found: %s" % method)
return (('%s -c ' % shell) + pipes.quote(becomecmd), prompt, success_key)
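The double quoting in `make_become_cmd` is the subtle part: the payload is quoted once so the escalation tool's inner shell treats `echo <key>; <cmd>` as a single argument, then the whole pipeline is quoted again for the outer `shell -c`. A reduced sudo-only sketch (`make_sudo_wrapper` is hypothetical; `shlex.quote` is the Python 3 home of `pipes.quote`):

```python
import shlex

def make_sudo_wrapper(cmd, user='root', exe='sudo', shell='/bin/sh',
                      success_key='BECOME-SUCCESS-example'):
    # inner quoting: success key + command become one argument to the
    # user's shell; outer quoting: the sudo pipeline becomes one argument
    # to the connection's shell
    inner = shlex.quote('echo %s; %s' % (success_key, cmd))
    becomecmd = '%s -k && %s -S -p "password:" -u %s %s -c %s' % (
        exe, exe, user, shell, inner)
    return ('%s -c ' % shell) + shlex.quote(becomecmd)
```

The `-k` forces sudo to re-ask for a password when one is required, and the randomized success key lets the caller detect, by reading output, exactly when escalation succeeded.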
def make_sudo_cmd(sudo_exe, sudo_user, executable, cmd):
"""
helper function for connection plugins to create sudo commands
"""
# Rather than detect if sudo wants a password this time, -k makes
# sudo always ask for a password if one is required.
# Passing a quoted compound command to sudo (or sudo -s)
# directly doesn't work, so we shellquote it with pipes.quote()
# and pass the quoted string to the user's shell. We loop reading
# output until we see the randomly-generated sudo prompt set with
# the -p option.
randbits = ''.join(chr(random.randint(ord('a'), ord('z'))) for x in xrange(32))
prompt = '[sudo via ansible, key=%s] password: ' % randbits
success_key = 'SUDO-SUCCESS-%s' % randbits
sudocmd = '%s -k && %s %s -S -p "%s" -u %s %s -c %s' % (
sudo_exe, sudo_exe, C.DEFAULT_SUDO_FLAGS,
prompt, sudo_user, executable or '$SHELL', pipes.quote('echo %s; %s' % (success_key, cmd)))
return ('/bin/sh -c ' + pipes.quote(sudocmd), prompt, success_key)
return make_become_cmd(cmd, sudo_user, executable, 'sudo', C.DEFAULT_SUDO_FLAGS, sudo_exe)
def make_su_cmd(su_user, executable, cmd):
"""
Helper function for connection plugins to create direct su commands
"""
# TODO: work on this function
randbits = ''.join(chr(random.randint(ord('a'), ord('z'))) for x in xrange(32))
success_key = 'SUDO-SUCCESS-%s' % randbits
sudocmd = '%s %s %s -c "%s -c %s"' % (
C.DEFAULT_SU_EXE, C.DEFAULT_SU_FLAGS, su_user, executable or '$SHELL',
pipes.quote('echo %s; %s' % (success_key, cmd))
)
return ('/bin/sh -c ' + pipes.quote(sudocmd), None, success_key)
return make_become_cmd(cmd, su_user, executable, 'su', C.DEFAULT_SU_FLAGS, C.DEFAULT_SU_EXE)
def get_diff(diff):
# called by --diff usage in playbook and runner via callbacks
@ -1313,6 +1376,14 @@ def safe_eval(expr, locals={}, include_exceptions=False):
http://stackoverflow.com/questions/12523516/using-ast-and-whitelists-to-make-pythons-eval-safe
'''
# define certain JSON types
# eg. JSON booleans are unknown to python eval()
JSON_TYPES = {
'false': False,
'null': None,
'true': True,
}
# this is the whitelist of AST nodes we are going to
# allow in the evaluation. Any node type other than
# those listed here will raise an exception in our custom
@ -1376,7 +1447,7 @@ def safe_eval(expr, locals={}, include_exceptions=False):
parsed_tree = ast.parse(expr, mode='eval')
cnv.visit(parsed_tree)
compiled = compile(parsed_tree, expr, 'eval')
result = eval(compiled, {}, locals)
result = eval(compiled, JSON_TYPES, dict(locals))
if include_exceptions:
return (result, None)
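Passing `JSON_TYPES` as the globals of `eval()` is what lets unquoted JSON literals survive evaluation: `true`, `false`, and `null` simply resolve as names. A minimal reproduction (`eval_json_literals` is illustrative; the real `safe_eval` also whitelists AST node types first):

```python
import ast

JSON_TYPES = {'false': False, 'null': None, 'true': True}

def eval_json_literals(expr, locals=None):
    # compile first so syntax errors surface before eval; JSON names are
    # injected via globals, user variables via locals
    parsed = ast.parse(expr, mode='eval')
    compiled = compile(parsed, expr, 'eval')
    return eval(compiled, dict(JSON_TYPES), dict(locals or {}))
```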
@ -1433,12 +1504,13 @@ def listify_lookup_plugin_terms(terms, basedir, inject):
def combine_vars(a, b):
_validate_both_dicts(a, b)
if C.DEFAULT_HASH_BEHAVIOUR == "merge":
return merge_hash(a, b)
else:
return dict(a.items() + b.items())
_validate_both_dicts(a, b)
result = a.copy()
result.update(b)
return result
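`combine_vars` now validates once and uses `copy()`/`update()` instead of `dict(a.items() + b.items())`, an idiom that breaks on Python 3 where `items()` returns views. Both hash behaviours side by side, as a self-contained sketch:

```python
def combine_vars(a, b, behaviour='replace'):
    if behaviour == 'merge':
        # recursive merge: nested dicts are combined key by key
        result = a.copy()
        for k, v in b.items():
            if k in result and isinstance(result[k], dict) and isinstance(v, dict):
                result[k] = combine_vars(result[k], v, 'merge')
            else:
                result[k] = v
        return result
    # 'replace' (the default): top-level keys from b shadow a wholesale
    result = a.copy()
    result.update(b)
    return result
```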
def random_password(length=20, chars=C.DEFAULT_PASSWORD_CHARS):
'''Return a random password string of length containing only chars.'''
@ -1577,9 +1649,9 @@ def update_hash(hash, key, new_value):
hash[key] = value
def censor_unlogged_data(data):
'''
'''
used when the no_log: True attribute is passed to a task to keep data from a callback.
NOT intended to prevent variable registration, but only things from showing up on
NOT intended to prevent variable registration, but only things from showing up on
screen
'''
new_data = {}
@ -1589,5 +1661,19 @@ def censor_unlogged_data(data):
new_data['censored'] = 'results hidden due to no_log parameter'
return new_data
def check_mutually_exclusive_privilege(options, parser):
# privilege escalation command line arguments need to be mutually exclusive
if (options.su or options.su_user or options.ask_su_pass) and \
(options.sudo or options.sudo_user or options.ask_sudo_pass) or \
(options.su or options.su_user or options.ask_su_pass) and \
(options.become or options.become_user or options.become_ask_pass) or \
(options.sudo or options.sudo_user or options.ask_sudo_pass) and \
(options.become or options.become_user or options.become_ask_pass):
parser.error("Sudo arguments ('--sudo', '--sudo-user', and '--ask-sudo-pass') "
"and su arguments ('-su', '--su-user', and '--ask-su-pass') "
"and become arguments ('--become', '--become-user', and '--ask-become-pass')"
" are exclusive of each other")
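The chained condition is easier to read as three option families of which at most one may be active. An equivalent formulation over a plain dict (`privilege_groups_in_conflict` is a hypothetical helper, not the patch's API):

```python
def privilege_groups_in_conflict(opts):
    # sudo, su and become arguments are mutually exclusive; any two
    # families set at once is an error
    families = (
        ('sudo', 'sudo_user', 'ask_sudo_pass'),
        ('su', 'su_user', 'ask_su_pass'),
        ('become', 'become_user', 'become_ask_pass'),
    )
    active = sum(any(opts.get(k) for k in fam) for fam in families)
    return active > 1
```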


@ -20,6 +20,8 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
from ansible.errors import AnsibleError
# Note, sha1 is the only hash algorithm compatible with python2.4 and with
# FIPS-140 mode (as of 11-2014)
@ -63,7 +65,7 @@ def secure_hash(filename, hash_func=sha1):
block = infile.read(blocksize)
infile.close()
except IOError, e:
raise errors.AnsibleError("error while accessing the file %s, error was: %s" % (filename, e))
raise AnsibleError("error while accessing the file %s, error was: %s" % (filename, e))
return digest.hexdigest()
# The checksum algorithm must match with the algorithm in ShellModule.checksum() method


@ -32,7 +32,8 @@ options:
I(auth_url), I(username), I(password), I(project_name) and any
information about domains if the cloud supports them. For other plugins,
this param will need to contain whatever parameters that auth plugin
requires. This parameter is not needed if a named cloud is provided.
requires. This parameter is not needed if a named cloud is provided or
OpenStack OS_* environment variables are present.
required: false
auth_plugin:
description:
@ -84,5 +85,6 @@ notes:
can come from a yaml config file in /etc/ansible/openstack.yaml,
/etc/openstack/clouds.yaml or ~/.config/openstack/clouds.yaml, then from
standard environment variables, then finally by explicit parameters in
plays.
plays. More information can be found at
U(http://docs.openstack.org/developer/os-client-config)
'''


@ -165,7 +165,7 @@ class PluginLoader(object):
else:
suffixes = ['.py', '']
potential_names = frozenset('%s%s' % (name, s) for s in suffixes)
potential_names = tuple('%s%s' % (name, s) for s in suffixes)
for full_name in potential_names:
if full_name in self._plugin_path_cache:
return self._plugin_path_cache[full_name]
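The frozenset-to-tuple change matters because the loop probes candidates in order: with suffixes `['.py', '']` the `.py` form must be checked before the extensionless name, and frozenset iteration order is arbitrary. Demonstrated:

```python
name = 'debug'
suffixes = ['.py', '']
# a tuple keeps the declared order, so 'debug.py' is always probed
# before the extensionless 'debug'; a frozenset gives no such guarantee
potential_names = tuple('%s%s' % (name, s) for s in suffixes)
```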


@ -15,12 +15,14 @@
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
import sys
import os
import re
import codecs
import jinja2
from jinja2.runtime import StrictUndefined
from jinja2.exceptions import TemplateSyntaxError
from jinja2.utils import missing
import yaml
import json
from ansible import errors
@ -33,7 +35,7 @@ import ast
import traceback
from ansible.utils.string_functions import count_newlines_from_end
from ansible.utils import to_bytes
from ansible.utils import to_bytes, to_unicode
class Globals(object):
@ -117,8 +119,11 @@ def template(basedir, varname, templatevars, lookup_fatal=True, depth=0, expand_
varname = "{{%s}}" % varname
if isinstance(varname, basestring):
if '{{' in varname or '{%' in varname:
varname = template_from_string(basedir, varname, templatevars, fail_on_undefined)
if ('{{' in varname and '}}' in varname ) or ( '{%' in varname and '%}' in varname ):
try:
varname = template_from_string(basedir, varname, templatevars, fail_on_undefined)
except errors.AnsibleError, e:
raise errors.AnsibleError("Failed to template %s: %s" % (varname, str(e)))
if (varname.startswith("{") and not varname.startswith("{{")) or varname.startswith("["):
eval_results = utils.safe_eval(varname, locals=templatevars, include_exceptions=True)
@ -154,16 +159,23 @@ class _jinja2_vars(object):
extras is a list of locals to also search for variables.
'''
def __init__(self, basedir, vars, globals, fail_on_undefined, *extras):
def __init__(self, basedir, vars, globals, fail_on_undefined, locals=None, *extras):
self.basedir = basedir
self.vars = vars
self.globals = globals
self.fail_on_undefined = fail_on_undefined
self.extras = extras
self.locals = dict()
if isinstance(locals, dict):
for key, val in locals.iteritems():
if key[:2] == 'l_' and val is not missing:
self.locals[key[2:]] = val
def __contains__(self, k):
if k in self.vars:
return True
if k in self.locals:
return True
for i in self.extras:
if k in i:
return True
@ -174,6 +186,8 @@ class _jinja2_vars(object):
def __getitem__(self, varname):
from ansible.runner import HostVars
if varname not in self.vars:
if varname in self.locals:
return self.locals[varname]
for i in self.extras:
if varname in i:
return i[varname]
@ -184,6 +198,7 @@ class _jinja2_vars(object):
var = self.vars[varname]
# HostVars is special, return it as-is, as is the special variable
# 'vars', which contains the vars structure
var = to_unicode(var, nonstring="passthru")
if isinstance(var, dict) and varname == "vars" or isinstance(var, HostVars):
return var
else:
@ -196,7 +211,7 @@ class _jinja2_vars(object):
'''
if locals is None:
return self
return _jinja2_vars(self.basedir, self.vars, self.globals, self.fail_on_undefined, locals, *self.extras)
return _jinja2_vars(self.basedir, self.vars, self.globals, self.fail_on_undefined, locals=locals, *self.extras)
class J2Template(jinja2.environment.Template):
'''


@ -5,7 +5,7 @@ To create an Ansible DEB package:
sudo apt-get install python-paramiko python-yaml python-jinja2 python-httplib2 python-setuptools sshpass
sudo apt-get install cdbs debhelper dpkg-dev git-core reprepro python-support fakeroot
git clone git://github.com/ansible/ansible.git
git clone git://github.com/ansible/ansible.git --recursive
cd ansible
make deb


@ -1,8 +1,52 @@
ansible (1.9) unstable; urgency=low
ansible (%VERSION%-%RELEASE%~%DIST%) %DIST%; urgency=low
* 1.9 release (PENDING)
* %VERSION% release
-- Ansible, Inc. <support@ansible.com> Wed, 21 Oct 2015 04:29:00 -0500
-- Ansible, Inc. <support@ansible.com> %DATE%
ansible (1.9.6) unstable; urgency=low
* 1.9.6
-- Ansible, Inc. <support@ansible.com> Fri, 15 Apr 2016 14:50:07 -0400
ansible (1.9.5) unstable; urgency=low
* 1.9.5
-- Ansible, Inc. <support@ansible.com> Mon, 21 Mar 2016 18:55:50 -0400
ansible (1.9.4) unstable; urgency=low
* 1.9.4
-- Ansible, Inc. <support@ansible.com> Fri, 09 Oct 2015 15:00:00 -0500
ansible (1.9.3) unstable; urgency=low
* 1.9.3
-- Ansible, Inc. <support@ansible.com> Thu, 03 Sep 2015 18:30:00 -0500
ansible (1.9.2) unstable; urgency=low
* 1.9.2
-- Ansible, Inc. <support@ansible.com> Wed, 24 Jun 2015 14:00:00 -0500
ansible (1.9.1) unstable; urgency=low
* 1.9.1
-- Ansible, Inc. <support@ansible.com> Mon, 27 Apr 2015 17:00:00 -0500
ansible (1.9.0.1) unstable; urgency=low
* 1.9.0.1
-- Ansible, Inc. <support@ansible.com> Wed, 25 Mar 2015 15:00:00 -0500
ansible (1.8.4) unstable; urgency=low


@ -1,5 +1,6 @@
%define name ansible
%define name ansible1.9
%define ansible_version $VERSION
%define ansible_release $RELEASE
%if 0%{?rhel} == 5
%define __python /usr/bin/python26
@ -7,12 +8,12 @@
Name: %{name}
Version: %{ansible_version}
Release: 1%{?dist}
Release: %{ansible_release}%{?dist}
Url: http://www.ansible.com
Summary: SSH-based application deployment, configuration management, and IT orchestration platform
License: GPLv3
Group: Development/Libraries
Source: http://releases.ansible.com/ansible/%{name}-%{version}.tar.gz
Source: http://releases.ansible.com/ansible/ansible-%{version}.tar.gz
BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-buildroot
%{!?python_sitelib: %global python_sitelib %(%{__python} -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())")}
@ -73,6 +74,9 @@ Requires: python-setuptools
Requires: sshpass
Provides: ansible = %{version}-%{release}
Conflicts: ansible
%description
Ansible is a radically simple model-driven configuration management,
@ -82,7 +86,7 @@ on remote nodes. Extension modules can be written in any language and
are transferred to managed machines automatically.
%prep
%setup -q
%setup -qn ansible-%{version}
%build
%{__python} setup.py build
@ -110,6 +114,27 @@ rm -rf %{buildroot}
%changelog
* Fri Apr 15 2016 Ansible, Inc. <support@ansible.com> - 1.9.6-1
- Release 1.9.6-1
* Mon Mar 21 2016 Ansible, Inc. <support@ansible.com> - 1.9.5-1
- Release 1.9.5-1
* Fri Oct 09 2015 Ansible, Inc. <support@ansible.com> - 1.9.4
- Release 1.9.4
* Thu Sep 03 2015 Ansible, Inc. <support@ansible.com> - 1.9.3
- Release 1.9.3
* Wed Jun 24 2015 Ansible, Inc. <support@ansible.com> - 1.9.2
- Release 1.9.2
* Mon Apr 27 2015 Ansible, Inc. <support@ansible.com> - 1.9.1
- Release 1.9.1
* Wed Mar 25 2015 Ansible, Inc. <support@ansible.com> - 1.9.0
- Release 1.9.0
* Thu Feb 19 2015 Ansible, Inc. <support@ansible.com> - 1.8.4
- Release 1.8.4


@ -17,9 +17,9 @@
import os
import urllib
import urllib2
from ansible import utils
from ansible.module_utils.urls import open_url
try:
import prettytable
@ -77,7 +77,7 @@ class CallbackModule(object):
url = ('%s?auth_token=%s' % (self.msg_uri, self.token))
try:
response = urllib2.urlopen(url, urllib.urlencode(params))
response = open_url(url, data=urllib.urlencode(params))
return response.read()
except:
utils.warning('Could not submit message to hipchat')


@ -212,7 +212,7 @@ class ConsulInventory(object):
'''loads the data for a sinle node adding it to various groups based on
metadata retrieved from the kv store and service availablity'''
index, node_data = self.consul_api.catalog.node(node, datacenter)
index, node_data = self.consul_api.catalog.node(node, dc=datacenter)
node = node_data['Node']
self.add_node_to_map(self.nodes, 'all', node)
self.add_metadata(node_data, "consul_datacenter", datacenter)


@ -334,23 +334,24 @@ class Ec2Inventory(object):
self.write_to_cache(self.inventory, self.cache_path_cache)
self.write_to_cache(self.index, self.cache_path_index)
def connect(self, region):
''' create connection to api server'''
if self.eucalyptus:
conn = boto.connect_euca(host=self.eucalyptus_host)
conn.APIVersion = '2010-08-31'
else:
conn = ec2.connect_to_region(region)
# connect_to_region will fail "silently" by returning None if the region name is wrong or not supported
if conn is None:
self.fail_with_error("region name: %s likely not supported, or AWS is down. connection to region failed." % region)
return conn
def get_instances_by_region(self, region):
''' Makes an AWS EC2 API call to the list of instances in a particular
region '''
try:
if self.eucalyptus:
conn = boto.connect_euca(host=self.eucalyptus_host)
conn.APIVersion = '2010-08-31'
else:
conn = ec2.connect_to_region(region)
# connect_to_region will fail "silently" by returning None if the region name is wrong or not supported
if conn is None:
print("region name: %s likely not supported, or AWS is down. connection to region failed." % region)
sys.exit(1)
conn = self.connect(region)
reservations = []
if self.ec2_instance_filters:
for filter_key, filter_values in self.ec2_instance_filters.iteritems():
@ -363,10 +364,12 @@ class Ec2Inventory(object):
self.add_instance(instance, region)
except boto.exception.BotoServerError, e:
if not self.eucalyptus:
print "Looks like AWS is down again:"
print e
sys.exit(1)
if e.error_code == 'AuthFailure':
error = self.get_auth_error_message()
else:
backend = 'Eucalyptus' if self.eucalyptus else 'AWS'
error = "Error connecting to %s backend.\n%s" % (backend, e.message)
self.fail_with_error(error)
def get_rds_instances_by_region(self, region):
''' Makes an AWS API call to the list of RDS instances in a particular
@ -379,23 +382,37 @@ class Ec2Inventory(object):
for instance in instances:
self.add_rds_instance(instance, region)
except boto.exception.BotoServerError, e:
error = e.message
if e.error_code == 'AuthFailure':
error = self.get_auth_error_message()
if not e.reason == "Forbidden":
print "Looks like AWS RDS is down: "
print e
sys.exit(1)
error = "Looks like AWS RDS is down:\n%s" % e.message
self.fail_with_error(error)
def get_auth_error_message(self):
''' create an informative error message if there is an issue authenticating'''
errors = ["Authentication error retrieving ec2 inventory."]
if None in [os.environ.get('AWS_ACCESS_KEY_ID'), os.environ.get('AWS_SECRET_ACCESS_KEY')]:
errors.append(' - No AWS_ACCESS_KEY_ID or AWS_SECRET_ACCESS_KEY environment vars found')
else:
errors.append(' - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment vars found but may not be correct')
boto_paths = ['/etc/boto.cfg', '~/.boto', '~/.aws/credentials']
boto_config_found = list(p for p in boto_paths if os.path.isfile(os.path.expanduser(p)))
if len(boto_config_found) > 0:
errors.append(" - Boto configs found at '%s', but the credentials contained may not be correct" % ', '.join(boto_config_found))
else:
errors.append(" - No Boto config found at any expected location '%s'" % ', '.join(boto_paths))
return '\n'.join(errors)
def fail_with_error(self, err_msg):
'''log an error to std err for ansible-playbook to consume and exit'''
sys.stderr.write(err_msg)
sys.exit(1)
def get_instance(self, region, instance_id):
''' Gets details about a specific instance '''
if self.eucalyptus:
conn = boto.connect_euca(self.eucalyptus_host)
conn.APIVersion = '2010-08-31'
else:
conn = ec2.connect_to_region(region)
# connect_to_region will fail "silently" by returning None if the region name is wrong or not supported
if conn is None:
print("region name: %s likely not supported, or AWS is down. connection to region failed." % region)
sys.exit(1)
conn = self.connect(region)
reservations = conn.get_all_instances([instance_id])
for reservation in reservations:
@ -492,9 +509,8 @@ class Ec2Inventory(object):
if self.nested_groups:
self.push_group(self.inventory, 'security_groups', key)
except AttributeError:
print 'Package boto seems a bit older.'
print 'Please upgrade boto >= 2.3.0.'
sys.exit(1)
self.fail_with_error('\n'.join(['Package boto seems a bit older.',
'Please upgrade boto >= 2.3.0.']))
# Inventory: Group by tag keys
if self.group_by_tag_keys:
@ -587,9 +603,9 @@ class Ec2Inventory(object):
self.push_group(self.inventory, 'security_groups', key)
except AttributeError:
print 'Package boto seems a bit older.'
print 'Please upgrade boto >= 2.3.0.'
sys.exit(1)
self.fail_with_error('\n'.join(['Package boto seems a bit older.',
'Please upgrade boto >= 2.3.0.']))
# Inventory: Group by engine
if self.group_by_rds_engine:
@ -785,4 +801,3 @@ class Ec2Inventory(object):
# Run the script
Ec2Inventory()


@ -72,6 +72,16 @@ Author: Eric Johnson <erjohnso@google.com>
Version: 0.0.1
'''
__requires__ = ['pycrypto>=2.6']
try:
import pkg_resources
except ImportError:
# Use pkg_resources to find the correct versions of libraries and set
# sys.path appropriately when there are multiversion installs. We don't
# fail here as there is code that better expresses the errors where the
# library is used.
pass
USER_AGENT_PRODUCT="Ansible-gce_inventory_plugin"
USER_AGENT_VERSION="v1"


@ -115,7 +115,7 @@ class VMwareInventory(object):
else:
cache_max_age = 0
cache_stat = os.stat(cache_file)
if (cache_stat.st_mtime + cache_max_age) < time.time():
if (cache_stat.st_mtime + cache_max_age) >= time.time():
with open(cache_file) as cache:
return json.load(cache)
return default
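The inverted comparison meant the cache was returned exactly when it had expired and rebuilt while still fresh. The corrected check in standalone form (`read_cache` is an illustrative name):

```python
import json
import os
import time

def read_cache(cache_file, cache_max_age, default=None):
    # the cache is valid only while mtime + max_age has not yet passed
    if os.path.isfile(cache_file):
        if os.stat(cache_file).st_mtime + cache_max_age >= time.time():
            with open(cache_file) as cache:
                return json.load(cache)
    return default
```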


@ -22,7 +22,7 @@ setup(name='ansible',
url='http://ansible.com/',
license='GPLv3',
install_requires=['paramiko', 'jinja2', "PyYAML", 'setuptools', 'pycrypto >= 2.6'],
package_dir={ 'ansible': 'lib/ansible' },
package_dir={ '': 'lib' },
packages=find_packages('lib'),
package_data={
'': ['module_utils/*.ps1', 'modules/core/windows/*.ps1', 'modules/extras/windows/*.ps1'],

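The setup.py change above swaps a single explicit `package_dir` mapping for a root mapping onto `lib/`, so `find_packages('lib')` enumerates `ansible` and all of its subpackages instead of relying on a hand-maintained mapping. A small self-contained sketch of how that pair behaves (hypothetical package layout, built in a temp dir):

```python
import os
import tempfile

from setuptools import find_packages

# Build a throwaway lib/ layout mirroring the one setup.py assumes.
root = tempfile.mkdtemp()
for pkg in ('lib/ansible', 'lib/ansible/utils'):
    os.makedirs(os.path.join(root, pkg))
    open(os.path.join(root, pkg, '__init__.py'), 'w').close()

# package_dir={'': 'lib'} tells setuptools every package lives under
# lib/; find_packages('lib') then discovers them all automatically.
packages = find_packages(os.path.join(root, 'lib'))
```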
test-requirements.txt

@@ -0,0 +1,9 @@
#
# Test requirements
#
nose
mock
passlib
coverage
coveralls


@@ -56,6 +56,20 @@ test_group_by:
test_handlers:
ansible-playbook test_handlers.yml -i inventory.handlers -e @$(VARS_FILE) $(CREDENTIALS_ARG) -v $(TEST_FLAGS)
# Not forcing, should only run on successful host
[ "$$(ansible-playbook test_force_handlers.yml --tags normal -i inventory.handlers -e @$(VARS_FILE) $(CREDENTIALS_ARG) -v $(TEST_FLAGS) | egrep -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_B" ]
# Forcing from command line
[ "$$(ansible-playbook test_force_handlers.yml --tags normal -i inventory.handlers --force-handlers -e @$(VARS_FILE) $(CREDENTIALS_ARG) -v $(TEST_FLAGS) | egrep -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing from command line, should only run later tasks on unfailed hosts
[ "$$(ansible-playbook test_force_handlers.yml --tags normal -i inventory.handlers --force-handlers -e @$(VARS_FILE) $(CREDENTIALS_ARG) -v $(TEST_FLAGS) | egrep -o CALLED_TASK_. | sort | uniq | xargs)" = "CALLED_TASK_B CALLED_TASK_D CALLED_TASK_E" ]
# Forcing from command line, should call handlers even if all hosts fail
[ "$$(ansible-playbook test_force_handlers.yml --tags normal -i inventory.handlers --force-handlers -e @$(VARS_FILE) $(CREDENTIALS_ARG) -v -e fail_all=yes $(TEST_FLAGS) | egrep -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing from ansible.cfg
[ "$$(ANSIBLE_FORCE_HANDLERS=true ansible-playbook --tags normal test_force_handlers.yml -i inventory.handlers -e @$(VARS_FILE) $(CREDENTIALS_ARG) -v $(TEST_FLAGS) | egrep -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing true in play
[ "$$(ansible-playbook test_force_handlers.yml --tags force_true_in_play -i inventory.handlers -e @$(VARS_FILE) $(CREDENTIALS_ARG) -v $(TEST_FLAGS) | egrep -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing false in play, which overrides command line
[ "$$(ansible-playbook test_force_handlers.yml --force-handlers --tags force_false_in_play -i inventory.handlers -e @$(VARS_FILE) $(CREDENTIALS_ARG) -v $(TEST_FLAGS) | egrep -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_B" ]
test_hash:
ANSIBLE_HASH_BEHAVIOUR=replace ansible-playbook test_hash.yml -i $(INVENTORY) $(CREDENTIALS_ARG) -v -e '{"test_hash":{"extra_args":"this is an extra arg"}}'
@@ -84,11 +98,11 @@ test_winrm:
test_tags:
# Run everything by default
[ "$$(ansible-playbook --list-tasks test_tags.yml -i $(INVENTORY) -e @$(VARS_FILE) $(CREDENTIALS_ARG) -v $(TEST_FLAGS) | fgrep Task_with | xargs)" = "Task_with_tag Task_with_always_tag Task_without_tag" ]
[ "$$(ansible-playbook --list-tasks test_tags.yml -i $(INVENTORY) -e @$(VARS_FILE) $(CREDENTIALS_ARG) -v $(TEST_FLAGS) | fgrep Task_with | xargs)" = "Task_with_tag TAGS: [tag] Task_with_always_tag TAGS: [always] Task_without_tag TAGS: []" ]
# Run the exact tags, and always
[ "$$(ansible-playbook --list-tasks --tags tag test_tags.yml -i $(INVENTORY) -e @$(VARS_FILE) $(CREDENTIALS_ARG) -v $(TEST_FLAGS) | fgrep Task_with | xargs)" = "Task_with_tag Task_with_always_tag" ]
[ "$$(ansible-playbook --list-tasks --tags tag test_tags.yml -i $(INVENTORY) -e @$(VARS_FILE) $(CREDENTIALS_ARG) -v $(TEST_FLAGS) | fgrep Task_with | xargs)" = "Task_with_tag TAGS: [tag] Task_with_always_tag TAGS: [always]" ]
# Skip one tag
[ "$$(ansible-playbook --list-tasks --skip-tags tag test_tags.yml -i $(INVENTORY) -e @$(VARS_FILE) $(CREDENTIALS_ARG) -v $(TEST_FLAGS) | fgrep Task_with | xargs)" = "Task_with_always_tag Task_without_tag" ]
[ "$$(ansible-playbook --list-tasks --skip-tags tag test_tags.yml -i $(INVENTORY) -e @$(VARS_FILE) $(CREDENTIALS_ARG) -v $(TEST_FLAGS) | fgrep Task_with | xargs)" = "Task_with_always_tag TAGS: [always] Task_without_tag TAGS: []" ]
cloud: amazon rackspace


@@ -3,6 +3,8 @@
roles:
# In destructive because it creates and removes a user
- { role: test_sudo, tags: test_sudo}
#- { role: test_su, tags: test_su} # wait till su support is added to local connection, needs tty
- { role: test_become, tags: test_become}
- { role: test_service, tags: test_service }
# Current pip unconditionally uses md5. We can re-enable if pip switches
# to a different hash or allows us to not check md5
@@ -15,3 +17,4 @@
- { role: test_mysql_db, tags: test_mysql_db}
- { role: test_mysql_user, tags: test_mysql_user}
- { role: test_mysql_variables, tags: test_mysql_variables}
- { role: test_docker, tags: test_docker}


@@ -15,6 +15,7 @@ invenoverride ansible_ssh_host=127.0.0.1 ansible_connection=local
[all:vars]
extra_var_override=FROM_INVENTORY
inven_var=inventory_var
unicode_host_var=CaféEñyei
[inven_overridehosts:vars]
foo=foo


@@ -0,0 +1 @@
testing tilde expansion with become


@@ -0,0 +1,77 @@
- include_vars: default.yml
- name: Create test user
become: True
become_user: root
user:
name: "{{ become_test_user }}"
- name: test becoming user
shell: whoami
become: True
become_user: "{{ become_test_user }}"
register: results
- assert:
that:
- "results.stdout == '{{ become_test_user }}'"
- name: tilde expansion honors become in file
become: True
become_user: "{{ become_test_user }}"
file:
path: "~/foo.txt"
state: touch
- name: check that the path in the user's home dir was created
stat:
path: "~{{ become_test_user }}/foo.txt"
register: results
- assert:
that:
- "results.stat.exists == True"
- "results.stat.path|dirname|basename == '{{ become_test_user }}'"
- name: tilde expansion honors become in template
become: True
become_user: "{{ become_test_user }}"
template:
src: "bar.j2"
dest: "~/bar.txt"
- name: check that the path in the user's home dir was created
stat:
path: "~{{ become_test_user }}/bar.txt"
register: results
- assert:
that:
- "results.stat.exists == True"
- "results.stat.path|dirname|basename == '{{ become_test_user }}'"
- name: tilde expansion honors become in copy
become: True
become_user: "{{ become_test_user }}"
copy:
src: baz.txt
dest: "~/baz.txt"
- name: check that the path in the user's home dir was created
stat:
path: "~{{ become_test_user }}/baz.txt"
register: results
- assert:
that:
- "results.stat.exists == True"
- "results.stat.path|dirname|basename == '{{ become_test_user }}'"
- name: Remove test user and their home dir
become: True
become_user: root
user:
name: "{{ become_test_user }}"
state: "absent"
remove: "yes"


@@ -0,0 +1 @@
{{ become_test_user }}

Some files were not shown because too many files have changed in this diff.