make os_subnet behave like os_network in terms of returning information
about the created resource. With this commit, os_subnet will return the
created subnet in `subnet` and the subnet id in `id`.
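A minimal sketch of what that exit might look like inside the module (the `module` and `subnet` names stand in for the AnsibleModule instance and the shade-created subnet; this is illustrative, not the literal patch):

```py
# Illustrative only: return the created subnet the same way os_network
# returns the created network.
module.exit_json(changed=True,
                 subnet=subnet,      # full subnet dict from shade
                 id=subnet['id'])    # convenience copy of its id
```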
Specifically, the stat module now has a checksum_algorithm parameter.
This lets the module utilize one of the hash algorithms available on the host
to return the checksum of the file.
This change is backwards compatible. The checksum_algorithm defaults to
sha1 and still returns its result in the stat.checksum property.
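A rough sketch of the idea (not the module's actual code), using hashlib to pick the algorithm by name:

```py
import hashlib

def file_checksum(path, checksum_algorithm='sha1'):
    # Illustrative: any algorithm name hashlib knows on this host works,
    # e.g. 'sha1', 'sha256', 'md5'.
    digest = hashlib.new(checksum_algorithm)
    with open(path, 'rb') as f:
        for block in iter(lambda: f.read(65536), b''):
            digest.update(block)
    return digest.hexdigest()
```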
Allow the 'interfaces' attribute to represent internal router
interfaces, composed of subnet names, and the 'external_fixed_ips'
attribute to represent external interface subnet/IP.
This commit adds some unit tests for the `cloud.openstack.os_server`
module. These tests exercise `_network_args` thoroughly and
`_create_server` lightly.
These tests will **fail** until #2275 lands.
To run the tests:
pip install -r test-requirements.txt
PYTHONPATH=$PWD py.test
The existing code was receiving a list of strings and erroneously
assuming it was being given a list of dictionaries, leading it to fail
with:
AttributeError: 'str' object has no attribute 'get'
This commit corrects the list handling code to check the type of each
item and handle it appropriately. Also, based on bcoca's comment
in #2253, the code removes the special case for a string-only argument.
By transforming string arguments into dicts and then handling them like
any other dict argument, this also permits arguments of the form:
nics: net-name=mynet
Or:
nics: port-name=mynet
Previous versions of this code only supported `net-id` and `port-id` in
string specifications.
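A simplified, illustrative version of that normalization (the real helper in os_server is `_network_args`; this is not its actual code):

```py
def normalize_nics(nics):
    # Accept dicts as-is and turn 'key=value' strings into dicts, so
    # 'net-id=...', 'net-name=...', 'port-id=...' and 'port-name=...'
    # all flow through the same dict-based code path.
    normalized = []
    for nic in nics:
        if isinstance(nic, str):
            key, _, value = nic.partition('=')
            normalized.append({key: value})
        else:
            normalized.append(dict(nic))
    return normalized
```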
There was a parameter in the docs called 'public_ip' that didn't
actually exist. Additionally, auto_floating_ip is not consistent with
the underlying parameter which is auto_ip - for no good reason.
Add auto_ip as the real parameter, and then make public_ip and
auto_floating_ip aliases for it for backwards compatibility.
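A sketch of how the argument spec could express that (the default shown here is an assumption):

```py
# Illustrative argument_spec entry: auto_ip is the real parameter and the
# old names are kept as aliases for backwards compatibility.
argument_spec = dict(
    auto_ip=dict(type='bool', default=True,
                 aliases=['auto_floating_ip', 'public_ip']),
)
```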
Fixes #2301
This patch adds support for setting metadata key/value through a string
argument. Variables can now be used for both the metadata key and
value.
example:
meta: "{{ var1 }}:SomeValue,key:{{ var2 }}"
- Changed=true is now reported on a new volume.
- Only detach a volume when the instance is specified as 'None' or '', rather than whenever no instance is specified at all.
- Fix regression caused by 6b27cdc whereby no volume is created if id or Name is not supplied.
- Remove unnecessary empty aliases.
- Corrected the example to use an acceptable parameter for iops.
- Added exception handling to the get_all_instances call.
- Moved the attachment state validation code to the attach_volume function rather than the create_volume function.
- Refactored attach_volume and detach_volume so that the changed state can be passed back to the caller.
- Created a get_volume_info function so that state=present and state=list can return the same data (see the sketch below). Also added instance_id as a returned value in the attachment_set dict.
- Updated the AWS connection method so that a boto profile can be used.
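A sketch of the shape such a helper might return, using boto's Volume/AttachmentSet attributes (the exact fields are an assumption, not the module's code):

```py
def get_volume_info(volume):
    # Illustrative: one common structure for both state=present and
    # state=list, including instance_id in attachment_set.
    attachment = volume.attach_data
    return {
        'id': volume.id,
        'zone': volume.zone,
        'attachment_set': {
            'attach_time': attachment.attach_time,
            'device': attachment.device,
            'instance_id': attachment.instance_id,
            'status': attachment.status,
        },
    }
```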
When pulling an image using Docker 1.8, it seems the output
JSON stream has an empty dict at the very end. This causes
ansible to fail when pulling an image, as it's expecting a
status message in that dict which it uses to determine whether
it had to download the image or not. As a bit of an ugly hack
for that which remains backward compatible, try the last item
in the stream, and if it's an empty dict, take the last-but-one
item instead.
The strip() is needed as the exact value appears to be '{}\r\n';
we could just match that, but it seems like the kind of thing
where maybe it'd happen to just be '{}\n' or '{}' or something
in some cases, so let's just use strip() in case.
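A sketch of that fallback (the `lines` name for the newline-split pull output is illustrative):

```py
import json

last = lines[-1]
if last.strip() == '{}':   # Docker 1.8 appends an empty dict to the stream
    last = lines[-2]       # so fall back to the last-but-one entry
status = json.loads(last).get('status', '')
```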
A recent change [1] in docker between v1.8.2 and v1.8.3 changed what
is returned in the json when inspecting an image. Five variables which
could have been expected before will now be omitted when empty. Only
one of those variables, ExposedPorts, is used by the docker module.
Unfortunately there was also no API version change on this, so it
can't easily be corrected by pinning the API to the older version.
This does a get() which will return None if the variable is not in the
dict formed from the json that was returned. Everything else works the
same way.
[1] 9098628b29
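A sketch of the defensive lookup (the `image_config` name is illustrative):

```py
# With docker 1.8.3 'ExposedPorts' can be omitted entirely when empty,
# so use .get() instead of a direct key lookup.
exposed_ports = image_config.get('ExposedPorts')   # None when absent
if exposed_ports is not None:
    for port in exposed_ports:
        pass  # ... existing per-port handling ...
```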
Without this, «ec2: state=stopped instance_ids=…» would fail with a
traceback like this:
if inst.get_attribute('sourceDestCheck')['sourceDestCheck'] != source_dest_check:
NameError: global name 'source_dest_check' is not defined
Detached head detection seems to have broken somewhere along the way
because git decided to change how that situation looks when doing a 'git
branch -a', which is performed by get_branches().
This is how git 1.7.1 displays this situation (which works):
shell> git branch -a
* (no branch)
master
This is the output from git 1.8.3.1 (which does not work):
shell> git branch -a
* (detached from e132711)
master
It looks like this same wording is used in the most recent version of
git (2.6.1 as of writing this).
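A sketch of a check that copes with both wordings (the real module parses the output of get_branches(); this helper is only illustrative):

```py
def head_is_detached(branches):
    # branches: lines from `git branch -a`
    for branch in branches:
        if branch.startswith('*') and (
                '(no branch)' in branch or 'detached from' in branch):
            return True
    return False
```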
Both `source_dest_check` and `termination_protection` variables are not
available within the scope of the startstopec2 instance method. This just
pulls them from module.params.
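A minimal sketch of that change, assuming the usual AnsibleModule object is in scope as `module`:

```py
# Read the values inside the function instead of relying on names from an
# enclosing scope.
source_dest_check = module.params.get('source_dest_check')
termination_protection = module.params.get('termination_protection')
```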
With shade > 0.13.0, networks can be created that are externally
accessible. This adds a parameter for that.
Also, add RETURN documentation and 'if __name__' check around call
to main().
I think commit 720aeffca2 introduced a bug: the ElastiCacheManager
__init__ method has a number of positional arguments, like so.
```py
def __init__(self, module, name, engine, cache_engine_version, node_type,
             num_nodes, cache_port, parameter_group, cache_subnet_group,
             cache_security_groups, security_group_ids, zone, wait,
             hard_modify, region, **aws_connect_kwargs):
```
But then later in the code the positional arguments are passed in
like this.
```py
elasticache_manager = ElastiCacheManager(module, name, engine,
                                         cache_engine_version, node_type,
                                         num_nodes, cache_port,
                                         cache_subnet_group,
                                         cache_security_groups,
                                         security_group_ids, parameter_group, zone, wait,
                                         hard_modify, region, **aws_connect_kwargs)
```
If you count, you can see that cache_subnet_group is being passed in
where the manager expects parameter_group.
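Presumably the fix is just to pass the arguments in the order the signature declares them; a corrected call would look something like this (sketch, not the literal patch):

```py
elasticache_manager = ElastiCacheManager(module, name, engine,
                                         cache_engine_version, node_type,
                                         num_nodes, cache_port,
                                         parameter_group,
                                         cache_subnet_group,
                                         cache_security_groups,
                                         security_group_ids, zone, wait,
                                         hard_modify, region, **aws_connect_kwargs)
```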
There can be instances during an Ansible play where the list of subnets
currently available from OpenStack is required. This update provides
subnet list functionality as a new os_subnets_facts module.
There can be instances during an Ansible play where the list of networks
currently available from OpenStack is required. This update provides
network list functionality as a new os_networks_facts module.
An attempt to make clear how privilege escalation works with respect to the src/source host and dest/destination host. One existing note was incorporated into three new ones, one for each case.
It is not documented in [the Ansible doc page][1] nor
[the BSD setfacl man entry][2] (which means it might not be compatible
with BSD) so removing it does not break the API.
On the other hand, it does not conform with POSIX 1003.1e DRAFT
STANDARD 17 according to the [Linux setfacl man entry][3] so safer to
remove.
Finally, the most important reason: in non-POSIX 1003.1e mode, only ACL
entries without the permissions field are accepted, so having an
optional field here is very much error-prone.
[1]: http://docs.ansible.com/ansible/acl_module.html
[2]: http://www.freebsd.org/cgi/man.cgi?format=html&query=setfacl(1)
[3]: http://linuxcommand.org/man_pages/setfacl1.html
This patch allows the hostname module to detect and set the hostname for a
Kali Linux 2.0 installation. Without this patch, the hostname module raises
the following error
hostname module cannot be used on platform Linux (Kali)
Kali is based off of Debian.
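For context, a sketch of the kind of per-distribution subclass the hostname module uses; the exact class and strategy names here are assumptions, not necessarily the patch itself:

```py
# Sketch only: the hostname module maps (platform, distribution) to a
# hostname strategy. Kali being Debian-based, it can reuse the Debian one.
class KaliHostname(Hostname):
    platform = 'Linux'
    distribution = 'Kali'
    strategy_class = DebianStrategy
```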
Fixes https://github.com/ansible/ansible/issues/11768
Test plan:
- (in a Vagrant VM) created a user 'bob' with no ssh key
- ran the following playbook in check mode:
---
- hosts: trusty
  tasks:
    - user: name=bob state=present generate_ssh_key=yes
- saw that ansible-playbook reported "changes=1"
- saw that /home/bob/.ssh was still absent
- ran the playbook for real
- saw that /home/bob/.ssh was created
- ran the playbook in check mode again
- saw that ansible-playbook reported no changes
- tried a variation with a different username for a user that didn't
exist: ansible-playbook --check worked correctly (no errors, reported
"changed")
PR #1651 fixed issue #1515, but the requirement for path to be defined is unnecessarily strict. If the user has previously been created, a path isn't necessary.
This patch properly fixes bug 1226 without introducing a breaking
change to idempotency which was introduced in PR #1358
We can properly assign permissions to databases with a '.' in the name
of the database as well as assign privileges to all databases as
specified with '*'
While this change doesn't break the creation, it does break
idempotency. This change will convert '*.*' to '`*`.*' which is
functionally the same, however when the user_mod() function looks up
the current privileges with privileges_get() it will read '*.*'
Since '*.*' != '`*`.*' it will go through the process of updating the
privileges, always resulting in a 'changed' result.
This reverts commit db9ab9b262.
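To illustrate the idempotency point (this is not the patch itself): quoting has to leave the '*' wildcard alone so that what gets written matches what privileges_get() later reads back; the helper name below is made up:

```py
def quote_db(db):
    # Backtick-quote real database names (which may contain '.'), but never
    # the '*' wildcard, so "*.*" stays "*.*" and comparisons still match.
    if db == '*':
        return db
    return '`%s`' % db
```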
- Make build_entry compatible with Python 2.4
- Re-add missing warning/comment that was forgotten while refactoring
- Replace `all()` with a good ol' for-loop for Python 2.4 compatibility (see the sketch below)
- Make a condition check more explicit (when `state` is `query`)
- Make sure this module can only be run on a Linux distribution
- Add a note about Linux-only support in the documentation
- Set the version in which recursive support was added, 2.0
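As referenced above, the `all()` replacement is just the usual explicit-loop pattern; a generic illustration (not the module's code):

```py
def all_true(items):
    # Python 2.4 has no all() builtin, so spell the loop out.
    for item in items:
        if not item:
            return False
    return True
```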
By default `.get()` will return `None` on a key that doesn't exist. This
causes a `TypeError` in the `for` loop a few lines down. This change simply
returns an iterable type to avoid the error.
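A minimal sketch of the pattern, with made-up names for the dict and key:

```py
# Give .get() an iterable default so the for-loop never receives None.
for entry in config.get('entries', []):
    pass  # ... existing per-entry handling ...
```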
I have a task like this in a playbook. The ansible_ssh_user is 'root'
for this host.
- cron:
    hour: 00
    job: /home/backup/backup.sh
    name: baserock.org data backup
    user: backup
Running it gave me the following error:
TASK: [backup cron job, runs every day at midnight] ***************************
failed: [baserock-backup1] => {"failed": true}
msg: crontab: can't open '/tmp/crontabvVjoZe': Permission denied
crontab: user backup cannot read /tmp/crontabvVjoZe
The temporary file created by the 'cron' module is created with the
Python tempfile.mkstemp() function. This creates a file that is readable
only by 'root' (mode 600). The Busybox `crontab` program then checks if
the file is readable by the 'backup' user, and fails if it isn't. So we
need to make sure the file is world-readable before running `crontab`.
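A sketch of the fix (the `path` name is illustrative; the point is making the temp file readable by the crontab user):

```py
import os
import stat

# Make the temporary crontab file world-readable before invoking
# `crontab` as the target user.
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH)
```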