Previously, the `promote` command in the `rds` module would always return OK and never actually promote an instance. This was because `promote_db_instance()` had its conditions backwards: if the instance had the `replication_source` attribute indicating that it **was** a replica, it would set `changed = False` and do nothing. If the instance **wasn't** a replica, it would attempt to run `boto.rds.promote_read_replica()`, which would always fail.
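A minimal sketch of the corrected logic, assuming (as described above) that a replica exposes a `replication_source` attribute; the helper name and boto.rds calls here are illustrative, not the module's exact code:

```py
import boto.rds

# Sketch only: promote when the instance *is* a replica (it has a
# replication_source), otherwise report no change.
def promote_db_instance(conn, instance_name):
    instance = conn.get_all_dbinstances(instance_name)[0]
    if getattr(instance, 'replication_source', None):
        conn.promote_read_replica(instance_name)
        return True   # changed
    return False      # already a standalone instance, nothing to do

conn = boto.rds.connect_to_region('us-east-1')  # region is illustrative
```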
'exact_count' and 'state' are mutually exclusive options; they should not appear together in the following examples:
- # Enforce that 5 running instances named "database" with a "dbtype" of "postgres" example and
- # Enforce that 5 instances with a tag "foo" are running
The min_disk and min_ram parameters were not being passed to
the shade API, and they also need to be integer values. Also
updated the descriptions of these parameters for clarity.
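A small sketch of the parameter handling, assuming the values arrive via `module.params` and are forwarded to the shade call (names are illustrative):

```py
# Sketch only: cast to int so shade receives integers rather than strings.
kwargs = {}
for key in ('min_disk', 'min_ram'):
    value = module.params.get(key)
    if value is not None:
        kwargs[key] = int(value)
# kwargs is then passed along with the other image arguments to shade.
```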
Previously the docker module hard-coded the default logging driver. This meant
that if the docker daemon was started with a different logging driver, the ansible
module would continually restart the container when run.
This fix adds a call to docker.Client.info(), which is inspected when a logging
driver is not supplied in the playbook; the container is only restarted if
the logging driver applied to it differs from the daemon's configured default.
In practice, this has resolved issues with using alternative logging drivers.
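A rough sketch of the check, assuming docker-py's `Client.info()` exposes the daemon default under the 'LoggingDriver' key and that `container` is the inspect output for the running container (the 'json-file' fallback is illustrative):

```py
# Sketch only: fall back to the daemon's configured default when the
# playbook does not set a log driver, and restart only on a real mismatch.
daemon_default = client.info().get('LoggingDriver', 'json-file')
wanted = module.params.get('log_driver') or daemon_default
current = container['HostConfig']['LogConfig'].get('Type')
needs_restart = current != wanted
```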
There was no db restore example. I've provided one that shows how to do the restore, then add a security group (you cannot add the security group during the restore step -- it has to be done in a modify step afterward). Also, I show how to get the endpoint.
Absent function was not working on users with a login profile
also fixed the exception handling
fixed the delete user function
now works with or without a login profile (password)
typo
1. allow interface IDs and VPC peering connections as route targets
2. set state to "terminated" when VPC is removed
3. fix some comment typos
updates per PR comments
Give the user a course of action in the case where the suggestions do not
work. This will hopefully allow us to work through any further issues
much faster.
This commit enables using TLS when using the docker_image module. It
also removes the default for docker_url, which previously prevented checking
for DOCKER_HOST, a saner default. This allows you to use
docker_image on OS X, but more documentation is needed.
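A sketch of the intended fallback order (the unix socket default shown here is illustrative):

```py
import os

# Sketch only: prefer an explicit docker_url, then DOCKER_HOST, then the
# local unix socket.
docker_url = (module.params.get('docker_url')
              or os.environ.get('DOCKER_HOST')
              or 'unix://var/run/docker.sock')
```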
* devel: (84 commits)
Document and return an error if httplib2 >= 0.7 is not present. We
since find doesn't make changes, support check mode and gather data for other tasks in check mode
Correct typo in yum module docs
Update doc to reflect password is required if adding a new user
Update error message to be more explicit
Simplify logic to handle options set to empty string
Fix to issue 12912. Supply 'force' to install of python-apt.
Note the difference between yum package groups and environment groups.
rearranged systemd check, removed redundant systemctl check, fixed unused cmd and state var assignments
added earlier paths to systemd
make os_router return a top level 'id' key
Version bump for new beta 2.0.0-0.4.beta2
allow os_port to accept a list of security groups
allow os_server to accept a list of security groups
Add capability for stat module to use more hash algorithms
allow empty description attribute for os_security_group
Update hostname.py
simpler way to check if systemd is the init system
make os_keypair return a top level 'id' key
make os_flavor return a top-level 'id' key
...
Have `os_server_facts` call `list_servers` rather than `get_server`, and
treat the `server` parameter as a wildcard pattern. This permits one to
get facts on a single server:
- os_server_facts:
    server: webserver1
On multiple servers:
- os_server_facts:
    server: webserver*
Or on all servers:
- os_server_facts:
Introduces a `detailed` parameter to request additional server details
at the cost of additional API calls.
Since we now have several exceptions to the assumption that the
result of the pull would be on the last status line returned by
docker-py's pull(), I've changed the function so that it looks
through the status lines and returns what it finds there.
Despite the repeated `break`s, the code seems simpler and a little
more coherent like this. From what I've checked using
`https://github.com/jlafon/ansible-profile`, the execution time is
mostly the same.
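The shape of the change, roughly (a sketch, not the module's exact code; it assumes pull() yields newline-separated JSON status lines):

```py
import json

# Sketch only: walk the status lines from the end and return the first
# recognizable pull result instead of trusting the very last line.
def find_pull_status(pull_output):
    for line in reversed(pull_output.splitlines()):
        line = line.strip()
        if not line:
            continue
        try:
            status = json.loads(line).get('status', '')
        except ValueError:
            continue
        if 'Downloaded newer image' in status or 'Image is up to date' in status:
            return status
    return None
```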
With this commit, the `security_groups` attribute for `os_port` will
accept either a comma-delimited string or a YAML list. That is, either
this:
- os_port:
    [...]
    security_groups: group1,group2
Or this:
- os_port:
    [...]
    security_groups:
      - group1
      - group2
This commit allows the `security_groups` parameter of the `os_server`
module to be either a YAML list or a comma-delimited string (much like
the `nics` attribute). E.g., this:
- os_nova_server:
    [...]
    security_groups:
      - default
      - webserver
Or this:
- os_nova_server:
    [...]
    security_groups: default,webserver
The `os_security_group` module would fail if there was no `description:`
attribute:
localhost | FAILED! => {
"changed": false,
"failed": true,
"msg": "Error creating security group larstest: Invalid input for
description. Reason: 'None' is not a valid string."
}
This commit makes the default description `''` rather than `None`.
The `os_network` module was incorrectly returning changed=False whether
or not the network was created. This commit makes the changed return
value useful.
make os_subnet behave like os_network in terms of returning information
about the created resource. With this commit, os_subnet will return the
created subnet in `subnet` and the subnet id in `id`.
When this was treated as a boolean, sphinx was leaving the Default
column on http://docs.ansible.com/ansible/ec2_module.html blank,
implying it would use AWS's default. In reality, it passes False, which
overrides the default at AWS (because of this silently passed False, it is
possible to boot an instance without EBS optimization even when AWS
claims the instance type will always have it).
Allow the 'interfaces' attribute to represent internal router
interfaces, composed of subnet names, and the 'external_fixed_ips'
attribute to represent external interface subnet/IP.
The existing code was receiving a list of strings and erroneously
assuming it was being given a list of dictionaries, leading it to fail
with:
AttributeError: 'str' object has no attribute 'get'
This commit corrects the list handling code to check the type of each
item and handle it appropriately. Also, based on bcoca's comment
in #2253, this code removes the special case for a string-only argument.
By transforming string arguments into dicts and then handling them like
any other dict argument, this also permits arguments of the form:
nics: net-name=mynet
Or:
nics: port-name=mynet
Previous versions of this code only supported `net-id` and `port-id` in
string specifications.
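A sketch of the normalization this describes (the helper name is illustrative): strings of the form 'key=value' become single-entry dicts, so the rest of the code only ever deals with dicts.

```py
# Sketch only: accept both dict and 'key=value' string items in nics.
def normalize_nics(nics):
    normalized = []
    for nic in nics:
        if isinstance(nic, str):
            key, _, value = nic.partition('=')
            nic = {key.strip(): value.strip()}
        normalized.append(nic)
    return normalized

# normalize_nics(['net-id=1234', {'port-id': 'abcd'}, 'net-name=mynet'])
# -> [{'net-id': '1234'}, {'port-id': 'abcd'}, {'net-name': 'mynet'}]
```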
There was a parameter in the docs called 'public_ip' that didn't
actually exist. Additionally, auto_floating_ip is not consistent with
the underlying parameter which is auto_ip - for no good reason.
Add auto_ip as the real parameter, and then make public_ip and
auto_floating_ip aliases for it for backwards compatibility.
Fixes #2301
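In terms of the module's argument spec, the change amounts to something like this (abbreviated sketch; the default shown is illustrative):

```py
# Sketch only: auto_ip is the real parameter; the old names stay usable
# as aliases for backwards compatibility.
argument_spec = dict(
    auto_ip=dict(default=True, type='bool',
                 aliases=['auto_floating_ip', 'public_ip']),
)
```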
Changed=true now reported on new volume.
Only detach volume when instance is specified as 'None' or '' rather than whenever instance is not specified at all
Fix regression caused by 6b27cdc whereby no volume is created if id or Name is not supplied
Remove unnecessary empty aliases
Corrected example to use acceptable parameter for iops
Added exception handling to get_all_instances call
Moved the attachment state validation code to attach_volume function rather than create_volume function
Refactored attach_volume and detach_volume so that changed state can be passed back to the caller
Created get_volume_info function so that state=present and state=list can return the same data. Also added instance_id as a returned value in attachment_set dict
Updated aws connection method so that boto profile can be used
When pulling an image using Docker 1.8, it seems the output
JSON stream has an empty dict at the very end. This causes
ansible to fail when pulling an image, as it's expecting a
status message in that dict which it uses to determine whether
it had to download the image or not. As a bit of an ugly hack
for that which remains backward compatible, try the last item
in the stream, and if it's an empty dict, take the last-but-one
item instead.
The strip() is needed as the exact value appears to be '{}\r\n';
we could just match that, but it seems like the kind of thing
where maybe it'd happen to just be '{}\n' or '{}' or something
in some cases, so let's just use strip() in case.
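A sketch of the workaround described above (variable names are illustrative):

```py
import json

# Sketch only: if the final line of the pull stream is just '{}', look at
# the line before it instead.
lines = [l for l in pull_output.splitlines() if l.strip()]
last_line = lines[-1]
if last_line.strip() == '{}' and len(lines) > 1:
    last_line = lines[-2]
status = json.loads(last_line).get('status', '')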
A recent change [1] in docker between v1.8.2 and v1.8.3 changed what
is returned in the json when inspecting an image. Five variables which
could have been expected before will now be omitted when empty. Only
one of those variables is being addressed in the docker module: ExposedPorts.
Unfortunately there was also no API version change for this, so it
can't be easily corrected by pinning the API to the older version.
This does a get() which will return None if the variable is not in the
dict formed from the json that was returned. Everything else works the
same way.
[1] 9098628b29
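Concretely, the change is along these lines (a sketch; `image_config` stands in for the dict built from the inspect JSON):

```py
# Sketch only: .get() yields None when Docker >= 1.8.3 omits the empty
# field, where a plain ['ExposedPorts'] lookup would raise KeyError.
expected_ports = image_config.get('ExposedPorts')
```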
Without this, «ec2: state=stopped instance_ids=…» would fail with a
traceback like this:
if inst.get_attribute('sourceDestCheck')['sourceDestCheck'] != source_dest_check:
NameError: global name 'source_dest_check' is not defined
This patch adds support for setting metadata key/value pairs through a string
argument. Variables can now be used for both the metadata key and
value.
example:
meta: "{{ var1 }}:SomeValue,key:{{ var2 }}"
Neither the `source_dest_check` nor the `termination_protection` variable is
available within the scope of the startstopec2 instance method. This just
pulls them from module.params.
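The fix is essentially to read the values inside that method (a sketch, with variable names matching the traceback above):

```py
# Sketch only: pull the options from module.params so they are in scope.
source_dest_check = module.params.get('source_dest_check')
termination_protection = module.params.get('termination_protection')
```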
The pysphere VIVirtualMachine.clone() method supports specifying a VM
folder to place the VM in after the clone has completed. This exposes
that functionality to playbooks.
Also documents that creating VMs has always been able to place VMs in a specific
folder.
Before this patch:
- Command was matched if the 'Command' field of the docker-py
representation of a Docker container ended with the 'command' passed
to the Ansible docker module by the user.
- That can give false positives and false negatives.
- For example:
a) If 'command' was set up with more than one space,
like 'command=sleep  123', it would never be matched again
with a container(s) launched by this task,
because after launching, the command would be normalized and
appear, in the docker-py API call, just as 'sleep 123' - with one
space. This is the false negative case.
b) If 'entrypoint + command = command', for example
'sleep + 123 = sleep 123', the module would give a false positive
match.
This patch fixes it by making the matching more explicit - against the
'Config' -> 'Cmd' field of the 'docker inspect' output, provided by the docker-py
API, and with proper normalization of user input by splitting it into
tokens with 'shlex.split()'.
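A sketch of the stricter comparison (the helper name is illustrative; it assumes the docker-py inspect dict exposes the command under 'Config' -> 'Cmd'):

```py
import shlex

# Sketch only: compare token lists instead of doing a suffix match on the
# rendered command string.
def command_matches(container, wanted_command):
    if not wanted_command:
        return True
    actual_cmd = (container.get('Config') or {}).get('Cmd') or []
    return actual_cmd == shlex.split(wanted_command)
```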
I think in commit 720aeffca2
a bug was introduced where the ElastiCacheManager init method has
a number of positional arguments, like so.
```py
def __init__(self, module, name, engine, cache_engine_version, node_type,
num_nodes, cache_port, parameter_group, cache_subnet_group,
cache_security_groups, security_group_ids, zone, wait,
hard_modify, region, **aws_connect_kwargs):
```
But then later in the code the positional arguments are passed in
like this.
```py
elasticache_manager = ElastiCacheManager(module, name, engine,
cache_engine_version, node_type,
num_nodes, cache_port,
cache_subnet_group,
cache_security_groups,
security_group_ids, parameter_group, zone, wait,
hard_modify, region, **aws_connect_kwargs)
```
If you count, you can see that cache_subnet_group is being passed in
where the manager expects to see parameter_group.
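For comparison, a call with the arguments in the order the signature quoted above expects (parameter_group before cache_subnet_group) would look like this:

```py
elasticache_manager = ElastiCacheManager(module, name, engine,
                                         cache_engine_version, node_type,
                                         num_nodes, cache_port,
                                         parameter_group,
                                         cache_subnet_group,
                                         cache_security_groups,
                                         security_group_ids, zone, wait,
                                         hard_modify, region,
                                         **aws_connect_kwargs)
```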