Adds 'chroot' connection for executing modules chrooted to
a local dir. Requires running ansible as root.
chroot dirs should be specified in the inventory like any
other host.
You can do things like:
$ sudo -E ansible -vvv -f 1 -i "./chroot1,./chroot2" -c chroot \
all -m setup
$ sudo -E ansible-playbook -vvv -f 1 -i "./chroot1,./chroot2" \
-c chroot some-playbook.yml
some-playbook.yml:
---
- hosts: all
  tasks:
    - name: echo something
      shell: echo "Yaaay!" >/tmp/foobar.txt
    - name: install less
      apt: pkg=less state=latest
The common module boilerplate is now a BSD licensed snippet so that it's ok to write proprietary modules. The actual license of Ansible (GPLv3) and of any modules
written for Ansible (any license) does not change.
Jinja extensions add features to the Jinja2 templating engine. This
patch allows extension modules to be loaded into the templating engine via an
ansible.cfg configuration key (jinja_extensions).
The default behaviour doesn't change (no extensions are loaded).
Requested extensions can be added, comma-separated, in ansible.cfg
Adds whitespace handling in the jinja_extensions config
Added whitespace handling in the jinja_extensions configuration directive, so
things stay safe if the user adds spaces around the commas in the directive's
list.
Adds config example for jinja_extensions
Added a config example with multiple extensions for jinja_extensions
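As a rough illustration (not the actual config-loading code), parsing a comma-separated jinja_extensions value with stray spaces around the commas and handing it to Jinja2 could look like the sketch below; the extension names are only examples:

```
import jinja2

# Illustrative only: a jinja_extensions value as it might appear in ansible.cfg,
# with spaces around the commas.
raw_value = "jinja2.ext.do , jinja2.ext.i18n"

# Strip whitespace around each entry and drop empty items.
extensions = [e.strip() for e in raw_value.split(",") if e.strip()]

# Hand the requested extensions to the templating engine.
env = jinja2.Environment(extensions=extensions)
print(env.from_string("{% do [].append(1) %}ok").render())
```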
If we need to acquire a PTY for sudo's use, then it should really
inherit the capabilities of the calling environment. This is what
OpenSSH does, and so it makes sense to copy this behaviour for the
paramiko connection type.
Closes: #2065
Signed-off-by: martin f. krafft <madduck@madduck.net>
Postpone the paramiko.Channel.get_pty until we know sudo is used. If
sudo is not used, then we do not need a PTY. In fact, the paramiko docs
explicitly state that it's not desirable to allocate a PTY for a simple
exec_command.
Signed-off-by: martin f. krafft <madduck@madduck.net>
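A minimal sketch of the combined behaviour of the two changes above (function and parameter names are illustrative, not the actual plugin code): the PTY is only requested when sudo will be used, and it inherits the calling terminal's type and size, as OpenSSH does:

```
import os
import paramiko

def open_exec_channel(transport, needs_sudo):
    # Open a session channel; skip PTY allocation for a plain exec_command.
    chan = transport.open_session()
    if needs_sudo:
        # Let the PTY inherit the calling environment, as OpenSSH does.
        chan.get_pty(term=os.getenv('TERM', 'vt100'),
                     width=int(os.getenv('COLUMNS', 0)),
                     height=int(os.getenv('LINES', 0)))
    return chan
```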
If it is a directory, change the destination path by appending the
basename of the source file, as is done if the destination ends with a
/, and try to get the MD5 of the new path.
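A short sketch of the described behaviour (resolve_dest and the md5 helper are illustrative names, not the actual module code):

```
import hashlib
import os

def resolve_dest(src, dest):
    # When dest is (or points to) a directory, append the source basename,
    # just as is already done when dest ends with a '/'.
    if dest.endswith('/') or os.path.isdir(dest):
        dest = os.path.join(dest, os.path.basename(src))
    return dest

def md5(path):
    # MD5 of the resolved destination path, or None if it does not exist yet.
    if not os.path.exists(path):
        return None
    return hashlib.md5(open(path, 'rb').read()).hexdigest()
```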
Instead of having to remember when to use which one, rename template_ds
to template and move the last bit of code from template to varReplace
(which gets used for all string replacements, in the end).
This means that you can template any data type without worrying about
whether it's a string or not, and the right thing will happen.
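A minimal sketch of that idea (simplified, not the actual templating code): template() recurses over lists and dicts and hands every string to varReplace(), so callers no longer care which data type they hold:

```
def varReplace(text, vars):
    # Stand-in for the real varReplace(): substitute ${name} occurrences.
    for name, value in vars.items():
        text = text.replace('${%s}' % name, str(value))
    return text

def template(data, vars):
    # Strings go through variable replacement; containers are walked
    # recursively; anything else is returned untouched.
    if isinstance(data, str):
        return varReplace(data, vars)
    if isinstance(data, list):
        return [template(item, vars) for item in data]
    if isinstance(data, dict):
        return dict((k, template(v, vars)) for k, v in data.items())
    return data
```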
* improves error handling and reporting
* uses run_command to reduce code
* fails faster on errors, instead of surfacing raw return codes and tracebacks
* the key can now also be specified as data, versus needing to wget it from a file
Hash variables are currently overridden if they are redefined. This
doesn't let the user refine hash entries or override selected keys,
which can, for some, be a desirable feature.
This patch lets the user force hash merging by setting the
hash_behaviour value to "merge" (without the quotes) in ansible.cfg.
By default, however, ansible behaves as it always has; if any value
besides "merge" is used ("replace" is suggested in the example ansible.cfg
file), it will also behave as before.
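A minimal sketch of the "merge" behaviour, assuming a recursive helper along these lines (illustrative, not the exact utils code):

```
def merge_hash(a, b):
    # Recursively merge dict b into dict a; on non-dict conflicts, b wins.
    merged = dict(a)
    for key, value in b.items():
        if key in merged and isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key] = merge_hash(merged[key], value)
        else:
            merged[key] = value
    return merged

# With hash_behaviour=merge, {'user': {'name': 'x'}} refined by
# {'user': {'shell': '/bin/zsh'}} yields {'user': {'name': 'x', 'shell': '/bin/zsh'}}.
```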
PluginLoader.add_directory() can receive None from, for example,
Inventory.add_directory(self.basedir()) if host_list is a custom list.
None has no reasonable interpretation other than to ignore it.
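The guard is essentially this (a sketch; the attribute name is illustrative):

```
class PluginLoader(object):
    def __init__(self):
        self._extra_dirs = []  # illustrative; the real loader keeps more state

    def add_directory(self, directory):
        # None can arrive via Inventory.add_directory(self.basedir()) when the
        # host list is a literal list; the only sensible thing is to ignore it.
        if directory is None:
            return
        self._extra_dirs.append(directory)
```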
Adds -i to make_sudo_cmd so the target user's environment gets loaded when configurations like this are used:
- hosts: ubuntu
  name: Install ruby for the configured ruby user
  sudo: True
  sudo_user: rubyuser
  # should be ${ruby_user}, but can't for now because of #1665
  tasks:
    - name: Gets current ruby version
      action: shell rbenv version
      register: ruby_current_version
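A very rough sketch of the intent, not the actual make_sudo_cmd (prompt handling and proper quoting are omitted); the key part is '-i', which makes sudo start a login shell so the target user's environment is loaded:

```
def make_sudo_cmd(sudo_user, cmd):
    # '-i' runs the command through the target user's login shell so their
    # profile (and e.g. rbenv initialisation) is loaded; '-S' still reads the
    # password from stdin. Quoting is deliberately simplified here.
    return "sudo -S -u %s -i '%s'" % (sudo_user, cmd)
```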
* Rename fail_on_rc_non_zero to check_rc, much more succinct.
* Simplify method definition
* Fix command module and drop the shell=shell option; whether to use a
shell is determined by whether args is a list.
This adds a helper method that modules can call to execute a command via
subprocess. It takes two arguments: the command to run and
keyword options that control how the process is executed. Supported
options are: fail_on_rc_non_zero, close_fds, and executable.
fail_on_rc_non_zero will call fail_json if the command fails. If
args is a list, the command will be run with shell=False; otherwise, if it is
a string, it will be run with shell=True. On completion, run_command() returns
the return code, stdout, and stderr.
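A condensed sketch of the helper as described, using the check_rc name from the rename above (the real module_common code handles more edge cases; module stands for the AnsibleModule instance):

```
import subprocess

def run_command(module, args, check_rc=False, close_fds=False, executable=None):
    # A list runs without a shell; a plain string goes through the shell.
    shell = not isinstance(args, list)
    p = subprocess.Popen(args, shell=shell, close_fds=close_fds,
                         executable=executable,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p.communicate()
    rc = p.returncode
    if check_rc and rc != 0:
        module.fail_json(cmd=args, rc=rc, stdout=out, stderr=err,
                         msg="command failed")
    return rc, out, err
```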
For compatibility with older releases, as well as to avoid things like
action: raw executable= show status
to communicate with devices that don't have sh.
This commit extends the 'when_' conditions to failed and changed
json results.
Additionally it makes when_{set,unset,failed,changed,int,str,flt}
behave more similarly in that they all accept and/or/not logic
Two problems here:
* unchecked exception handling, and an erroneous assumption as to why
an exception might fire
* although the file module expands the path, when using file_args
the unexpanded path is passed
Expected result: ~/path/to/file should work fine
Actual result: an exception, because the file is not found, with a message
about not being able to get the SELinux context
Added two additional template variables
* template_fullpath - absolute path to the template
* template_run_date - date that the template was rendered
Documented these additional variables in the module documentation
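A small illustration of how the two variables could be added to the variable dictionary before rendering (everything except the two variable names is an assumption):

```
import datetime
import os

def add_template_meta(vars, template_path):
    # Expose the template's absolute path and the render timestamp to the template.
    vars = dict(vars)
    vars['template_fullpath'] = os.path.abspath(template_path)
    vars['template_run_date'] = datetime.datetime.now()
    return vars
```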
Paths might have to be expanded for some operations. It seems that paths
containing '~' are not expanded.
Using os.path.expanduser in the appropriate places solves the problem, but
this might be required in many other places.
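The fix amounts to a call like this wherever a user-supplied path is consumed (a sketch, not a list of the actual call sites):

```
import os

def expand_path(path):
    # '~' and '~user' are only expanded explicitly; the OS will not do it for us.
    return os.path.expanduser(path)

assert expand_path('~/path/to/file').startswith(os.path.expanduser('~'))
```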
It seems that os.path.basename(__file__) can return a unicode
string. In this case syslog.openlog fails. Forcing the result
to a string causes the resulting error to go away.
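The workaround is essentially this (a sketch of the described coercion):

```
import os
import syslog

# syslog.openlog() can fail when handed a unicode object, so force the
# module name to a plain string first.
syslog.openlog(str(os.path.basename(__file__)))
syslog.syslog(syslog.LOG_NOTICE, 'module invoked')
```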
Since we use 'raw' heavily on equipment where 'command' and 'shell' are not (yet) working (and python may need to be installed first using raw), these improvements are necessary in order to write more complex scripts (with return code handling and separated stdout/stderr).
This change includes the following changes:
- exec_command() now returns the return code of the command
- _low_level_exec_command() now returns a dict, including 'rc', 'stdout' and 'stderr'
- all users of the above interfaces have been improved to make use of the above changes
- all connection plugins have been modified to return rc and stderr
- fix the newline problem (stdout and stderr would have excess newlines)
In a future commit I intend to add assertions or error handling code to verify the return code in the places where that wasn't done. Since only the output was available, the return code was ignored, even though we expect it to be 0.
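A sketch of the new shape of the return value (simplified; the actual runner and connection-plugin signatures differ slightly):

```
def _low_level_exec_command(conn, cmd, tmp, sudoable=False):
    # Connection plugins now hand back the return code and stderr as well.
    rc, stdout, stderr = conn.exec_command(cmd, tmp, sudoable=sudoable)
    return dict(rc=rc,
                stdout=stdout.rstrip('\n'),   # avoid the excess-newline problem
                stderr=stderr.rstrip('\n'))
```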
Three changes:
* Add set_default_selinux_context() to module_common that sets
a file's context according to the defaults in the policy
* In atomic_replace(), set the default context for the file if
selinux is enabled and the destination file does not exist.
* In authorized_key, set the default context when creating
$HOME/.ssh and $HOME/.ssh/authorized_keys. If these already
exist, this won't touch them.
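A sketch of set_default_selinux_context(), assuming the libselinux Python bindings and with error handling trimmed:

```
import os
import selinux

def set_default_selinux_context(path):
    # Look up the default context for this path from the policy and apply it.
    if not selinux.is_selinux_enabled():
        return
    rc, context = selinux.matchpathcon(path, os.lstat(path).st_mode)
    if rc == 0:
        selinux.lsetfilecon(path, context)
```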
I guess my previous pull request was confusing; by changing the message to something we already do for tasks, it becomes clearer.
Just like we say:
TASK: [foo bar]
skipping: [system01]
The message is now clearer:
PLAY [wagawaga] *******************************
skipping: no hosts matched
It makes it clear that we are skipping the play, just as is done for a task when a condition is not met.
This allows patterns such as webservers:!debian:&datacenter1 to target
hosts in the webservers group, that are not in the debian group, but are
in the datacenter1 group. It also parses patterns left to right.
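A sketch of the left-to-right evaluation (hosts_matching is a stand-in for the single-pattern lookup, not the actual inventory API):

```
def evaluate_pattern(pattern, hosts_matching):
    # ':' separates subpatterns; '!' removes matches, '&' intersects,
    # anything else adds. Evaluation is strictly left to right.
    hosts = []
    for p in pattern.split(':'):
        if p.startswith('!'):
            excluded = hosts_matching(p[1:])
            hosts = [h for h in hosts if h not in excluded]
        elif p.startswith('&'):
            required = hosts_matching(p[1:])
            hosts = [h for h in hosts if h in required]
        else:
            hosts.extend(h for h in hosts_matching(p) if h not in hosts)
    return hosts
```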
I hit the following exception because errno is referenced but not imported.
```
fatal: [system01] => failed to parse: Traceback (most recent call last):
File "/root/.ansible/tmp/ansible-1354644532.37-246102819320352/copy", line 782, in <module>
main()
File "/root/.ansible/tmp/ansible-1354644532.37-246102819320352/copy", line 117, in main
module.atomic_replace(dest_tmp, dest)
File "/root/.ansible/tmp/ansible-1354644532.37-246102819320352/copy", line 772, in atomic_replace
if e.errno != errno.EPERM:
NameError: global name 'errno' is not defined
```
This ensures we don't litter remote systems with temporary directories
that don't get cleaned up, and it speeds things up by not having
to touch every node.
commit 48069adf0f
Author: Gregory Duchatelet <skygreg@gmail.com>
Date: Tue Nov 27 10:13:08 2012 +0100
Removing this plugin from this branch.
commit 15400fffe6
Author: Gregory Duchatelet <skygreg@gmail.com>
Date: Tue Nov 27 09:53:16 2012 +0100
Enhance _match function in inventory with regex.
--limit ~regex can be used to filter hosts or groups with a regex.
Tested on cli and ansible-playbook.
commit 63c1b2e17e
Author: Gregory Duchatelet <skygreg@gmail.com>
Date: Tue Nov 27 09:03:41 2012 +0100
Revert pull request #1684
commit 7c2c6fee3a
Merge: f023a2f dd5a847
Author: Gregory Duchatelet <skygreg@gmail.com>
Date: Tue Nov 27 08:52:53 2012 +0100
Merge remote branch 'upstream/devel' into devel
commit f023a2f3df
Author: Gregory Duchatelet <skygreg@gmail.com>
Date: Mon Nov 26 20:52:27 2012 +0100
Add an inventory plugin to fetch groups and host from our CMDB.
commit c64193b4c6
Author: Gregory Duchatelet <skygreg@gmail.com>
Date: Mon Nov 26 20:43:30 2012 +0100
Added the possibility to filter hosts from a group with a regex, separating
the group name and the regex with a ~
Usage in group pattern: group~filterpattern
Samples:
ansible group~server-0[1236] -m ping
ansible web~proxy -m ping
ansible web~(proxy|frontend) -m ping
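A sketch of the _match() enhancement mentioned above (the --limit ~regex form); the group~filterpattern form additionally splits on '~' before matching. This is illustrative, not the exact inventory code:

```
import fnmatch
import re

def _match(name, pattern):
    # A leading '~' switches from shell-style globbing to a regex search,
    # e.g. --limit '~server-0[1236]'; otherwise behave as before.
    if pattern.startswith('~'):
        return re.search(pattern[1:], name) is not None
    return fnmatch.fnmatch(name, pattern)
```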
Add constant DEFAULT_MODULE_LANG that defaults to C. Can be set via
environment variable ANSIBLE_MODULE_LANG or configuration variable
module_lang. Updated test-module to have the same behavior.
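As a rough illustration of the lookup order (environment variable first, then ansible.cfg, then the built-in default); get_config here is a simplified stand-in, not the actual constants code:

```
import os

def get_config(parser, section, key, env_var, default):
    # Environment variable wins, then the ansible.cfg entry, then the default.
    if env_var in os.environ:
        return os.environ[env_var]
    if parser is not None and parser.has_option(section, key):
        return parser.get(section, key)
    return default

# e.g. DEFAULT_MODULE_LANG = get_config(p, 'defaults', 'module_lang',
#                                       'ANSIBLE_MODULE_LANG', 'C')
```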
This change avoids the "tcgetattr: Invalid argument" error by making sure the ssh we start does have a proper pseudo-tty.
We could also check whether our current terminal is a proper terminal (by doing a tcgetattr ourselves) but I don't think this adds anything.
This closes #1662 (if all use-cases have been tested: sudo, passwd)
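One way to picture the change (a sketch, not the exact connection-plugin code): the ssh command line forces pseudo-tty allocation, so the remote side has a real terminal and tcgetattr does not fail:

```
def build_ssh_cmd(host, remote_cmd, common_args=None):
    # '-tt' forces a pseudo-tty even though ansible's stdin is not a terminal,
    # which avoids "tcgetattr: Invalid argument" on the remote side.
    return ['ssh', '-tt', '-q'] + (common_args or []) + [host, remote_cmd]
```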
Otherwise, a host in two groups, A and B, using a variable defined both
in group A and in all, will get the value from all, as B's variables will
include the all group's variables.
Partially fixes #1647.
global_vars has higher precedence than inventory. Putting the all
group's variables into it overrides all other groups and hosts.
Partially fixes #1647.
As reported on the mailing list, the user received a ValueError when the port number was not templated (fixed in #1649) and therefore was not an integer. This change catches the exception and provides a proper error, so the cause is clearer.
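The handling amounts to something like this (parse_port and the example variable name are illustrative, not the actual inventory code):

```
def parse_port(host, port_value):
    # An untemplated value such as '${web_port}' is not an integer; report it
    # clearly instead of letting the bare ValueError bubble up.
    try:
        return int(port_value)
    except (ValueError, TypeError):
        raise ValueError("invalid port for host %s: %r" % (host, port_value))
```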
Executive summary: skipping a host corrupts a variable (when it is registered)
We have a play consisting of multiple tasks that check a condition; if one of these tasks fails we want to skip all subsequent tasks in the playbook. I noticed that if we skip a task because a certain condition is met, and this task has a register attribute, I lose the value in the variable. This means we cannot use that variable in subsequent tasks to evaluate, because it was skipped:
```
- action: command test -d /some/directory
  register: task
- action: command test -f /some/directory/file
  register: task
  only_if: '${task.rc} == 0'
- action: do something else
  only_if: '${task.rc} == 0'
```
In the above example, if the second task is skipped (because the first failed), the third action will end with a "SyntaxError: invalid syntax" complaining about the unsubstituted ${task.rc} (even though it was set by the first task and used for skipping the second).
The following play demonstrates the problem:
```
- name: Test register on ignored tasks
  hosts: all
  gather_facts: no
  vars:
    skip: true
    task: { 'rc': 666 }
  tasks:
    - action: debug msg='skip = ${skip}, task.rc = ${task.rc}'
    - name: Skip this task, just to test if task has changed
      action: command ls
      register: task
      only_if: '${skip} != True'
    - action: debug msg='skip = ${skip}, task.rc = ${task.rc}'
    - name: Now use task value
      action: command echo 'Works !'
      only_if: '${task.rc} == 0'
```
And the enclosed fix fixes the above problem.
After spending 10 minutes trying to find which playbook had an action/local_action missing, I changed the error to include the task name (if set). The error was eventually caused by my having added a name to a task without removing the dash before the existing action.