vsphere_guest can now deploy a template using a datacenter and hostname
as the target, instead of requiring a cluster and resource_pool.
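A sketch of the new form (parameter names, including the esxi dict, are assumed from the module documentation of the time):

```yaml
- name: Deploy a VM from a template onto a specific host
  vsphere_guest:
    vcenter_hostname: vcenter.example.com
    username: admin
    password: secret
    guest: newvm001
    from_template: yes
    template_src: centos-template
    esxi:
      datacenter: DC1
      hostname: esx01.example.com
```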
This commit fixes #951.
Signed-off-by: Vyronas Tsingaras <vtsingaras@it.auth.gr>
Port mapping with this module only works for ports that are exposed, either in the Dockerfile or via additional arguments. This differs from the command-line Docker client, which will also map ports that are not exposed.
This comment makes the behaviour more obvious.
* Only install yum-utils if needed (because we're going to use repoquery)
* Add a warning message explaining why the slower repoquery was used
rather than the yum API.
The issue is that the yum API prints an error message if the rhn-plugin
is in use and no RHN certificate is available. So instead of always
preferring repoquery, we now prefer repoquery only when the rhn-plugin
is enabled.
Updating the os_ironic module to the most recent version, accounting for
changes in the Ansible devel branch and the shade library since the
original creation of the module.
If execution of supervisorctl was not successful (exit code > 0), the
module silently suppressed the error and returned changed=false, which
results in an OK task state.
This is very confusing: when supervisorctl needs authentication and the
credentials are not specified in the module or are incorrect, services
are not restarted/started/stopped, yet no error is raised.
All of the ansible OpenStack modules are driven by a clouds.yaml config
file which is processed by os-client-config. Expose the data returned by
that library to enable playbooks to iterate over available clouds.
Since the YAML data format is a subset of JSON, it is trivial to convert
the former to the latter. This means that we can use YAML templates to
build cloudformation stacks, as long as we translate them before passing
them to the AWS API. I figure this could potentially be quite popular in
the Ansible world, since we already use so much YAML for our playbooks.
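For example, a stack task might point at a YAML template and let the module translate it (a sketch; `template_format` is the assumed name of the new option):

```yaml
- name: Launch a stack from a YAML template
  cloudformation:
    stack_name: my-stack
    state: present
    region: us-east-1
    template: files/stack.yml
    template_format: yaml
```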
When a SVN repository has some svn:externals properties, files will be
reported with the X attribute, and lines will be added at the end to
list externals statuses with a text looking like
"Performing status on external item at ....".
Such lines were counted as a local modification by the regex, and the
module reported a change even though there was none.
To have a clean (and parsable) "svn status" output, it is recommended
to use the --quiet option. The externals will only appear if they have
been modified. With this option on, it seems even safer to consider
there are local modifications when "svn status" outputs anything.
boto can throw SSLError when timeouts occur (among other SSL errors). Catch these so proper JSON can be returned, and also add the ability to retry the operation.
There's an open issue in boto for this: https://github.com/boto/boto/issues/2409
Here's a sample stacktrace that inspired me to work on this. I'm on 1.7, but there are no meaningful differences in the 1.8 release that would affect this. I've added line breaks to the trace for readability.
failed to parse: Traceback (most recent call last):
File "/home/ubuntu/.ansible/tmp/ansible-tmp-1419895753.17-160808281985012/s3", line 2031, in <module> main()
File "/home/ubuntu/.ansible/tmp/ansible-tmp-1419895753.17-160808281985012/s3", line 353, in main download_s3file(module, s3, bucket, obj, dest)
File "/home/ubuntu/.ansible/tmp/ansible-tmp-1419895753.17-160808281985012/s3", line 234, in download_s3file key.get_contents_to_filename(dest)
File "/usr/local/lib/python2.7/dist-packages/boto/s3/key.py", line 1665, in get_contents_to_filename response_headers=response_headers)
File "/usr/local/lib/python2.7/dist-packages/boto/s3/key.py", line 1603, in get_contents_to_file response_headers=response_headers)
File "/usr/local/lib/python2.7/dist-packages/boto/s3/key.py", line 1435, in get_file query_args=None)
File "/usr/local/lib/python2.7/dist-packages/boto/s3/key.py", line 1488, in _get_file_internal for bytes in self:
File "/usr/local/lib/python2.7/dist-packages/boto/s3/key.py", line 368, in next data = self.resp.read(self.BufferSize)
File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 416, in read return httplib.HTTPResponse.read(self, amt)
File "/usr/lib/python2.7/httplib.py", line 567, in read s = self.fp.read(amt)
File "/usr/lib/python2.7/socket.py", line 380, in read data = self._sock.recv(left)
File "/usr/lib/python2.7/ssl.py", line 341, in recv return self.read(buflen)
File "/usr/lib/python2.7/ssl.py", line 260, in read return self._sslobj.read(len) ssl.SSLError: The read operation timed out
* If a db user belonged to a role which had a privilege, the user would
not have the privilege added as the role gave the appearance that the
user already had it. Fixed to always check the privileges specific to
the user.
* Make fewer db queries to determine if privileges need to be changed
and change them (was four for each privilege. Now two for each object
that has a set of privileges changed).
Use `has_table_privilege` and `has_database_privilege`
to test whether a user already has a privilege before
granting it, or whether a user doesn't have a privilege
before revoking it.
By default docker-py uses the latest version of the Docker API. This is
not always desirable, and this patch adds an option to specify the
version that should be used.
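Hypothetical usage of the new knob (the option name `docker_api_version` is an assumption here):

```yaml
- name: Run a container against a pinned Docker API version
  docker:
    image: example/app
    name: app
    state: started
    docker_api_version: "1.15"
```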
This prevents errors when the login_user does not have 'ALL'
permissions, and the 'priv' value contains fewer permissions than are
held by an existing user. This is particularly an issue when using an
Amazon Web Services RDS instance, as there is no (accessible) user with
'ALL' permissions on *.*.
This module supports a few of the server actions that are easy to
initially implement. Other actions require input and provide return
values in the API calls that will be more difficult to implement, and
thus are not part of this initial commit.
This adds a must_exist option to the service module, which gives callers the
ability to be tolerant of services that do not exist. This allows for
opportunistic manipulation of a list of services if they happen to exist on the
host. While failed_when could be used, it's difficult to track all the
different error strings that might come from various service tools regarding a
missing service.
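For example (a sketch of the new option):

```yaml
- name: Stop whichever of these services exists on the host
  service: name={{ item }} state=stopped must_exist=no
  with_items:
    - apache2
    - httpd
```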
adding stickiness support to ec2_elb_lb.py (squashed commit):
* removing policy if enabled is no
* adding sanity checks
* removing debugging
* check if policy exists before deleting
* updating version_added to 2.0
* Fix docs to specify when python2.6+ is required (due to a library
dep). This helps us know when it is okay to use python2.6+ syntax in
the file.
* remove BabyJson returns. See #1211. This commit fixes all but the
OpenStack modules.
* Use if __name__ == '__main__' to only run the main part of the module
if the module is run as a program. This allows for the potential to
unittest the code later.
Context: I recently discovered that when setting a fact, key=value pairs and complex arguments differ in how the fact is stored. For example, when attempting to pass complex arguments as key=value pairs, the result can be stored as a unicode string as opposed to an object/list/etc.
I'm hoping the example update above will better demonstrate and instruct people to use complex arguments instead of key=value pairs in certain situations.
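As an illustration (not the module's documented example), the two spellings below do not store the same thing:

```yaml
# key=value form: the value arrives as the unicode string "True"
- set_fact: one_fact=True

# complex-args form: the value keeps its native type (a boolean)
- set_fact:
    one_fact: True
```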
If an EC2 instance is already associated with an EIP address, we use
that address rather than allocating a new EIP address and associating
it with the instance.
Fixes #35.
Update/fix to support specifying cidr_ip as a list.
Unicode isn't compatible with python2, so we needed some other
solution to this problem. The simplest approach: if the ip item
isn't already a list, simply convert it to one, and we're done.
Thanks to @mspiegle for this suggestion.
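With this change a rule can carry a single CIDR or a list, e.g. (illustrative):

```yaml
- ec2_group:
    name: web
    description: web security group
    rules:
      - proto: tcp
        from_port: 443
        to_port: 443
        cidr_ip:
          - 10.0.0.0/8
          - 192.168.0.0/16
```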
Remove `USAGE` from the `VALID_PRIVS` dict for both database and
table because it is not a valid privilege for either (and it
breaks the implementation of `has_table_privilege` and
`has_database_privilege`).
See http://www.postgresql.org/docs/9.0/static/sql-grant.html
For read-only databases, users should not change when no changes
are required.
Don't issue ALTER ROLE when role attribute flags, the user's password
or the expiry time are not changing.
In certain cases (hashed passwords in the DB, but the password
argument is not hashed) passlib.hash is required to avoid
running ALTER ROLE.
The default value set by the module was a value of None for the
config_file parameter, which propagates into the connect method
call, overriding the stated default in the method.
Instead, the default should be set within the parameter
specification so the file check is not asked to check None.
Do not attempt to attach an already attached volume.
Likewise, do not attempt to detach a volume that is not
attached.
This version adds support for check mode.
The ordering of disabling/enabling yum repositories matters, and
the yum module was mixing and matching the order. Specifically,
when yum-utils isn't installed, the codepath which uses the yum
python module was incorrectly ordering enabling and disabling.
The preferred order is to disable repositories and then enable them
to prevent clobbering. This was previously discussed in
ansible/ansible#5255 and incompletely addressed in 0cca4a3.
When subscribing a system with an activationkey, it seems (sometimes?)
required to pass the "--org <number>" parameter to subscription-manager.
Activation Keys can be created through the Red Hat Customer Portal, and
a subscription can be attached to those. This makes it easy to register
systems without passing username/passwords around.
The organisation ID can be retrieved by executing the following command
on a registered system (*not* the account number):
# subscription-manager identity
URL: https://access.redhat.com/management/activation_keys
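A registration task might then look like this (a sketch; `org_id` is the assumed parameter name):

```yaml
- name: Register the system using an activation key
  redhat_subscription:
    state: present
    activationkey: my-activation-key
    org_id: 1234567
```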
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Ken Dreyer <kdreyer@redhat.com>
Includes commits for:
* Don't return change if the password is not set
* Set the group to nogroup if none is specified
* Set an uid if none is specified
* Test if SHADOWFILE is set (for Darwin)
* remove unused uid
Prior to this commit, Ansible would pass '--activationkey "<value>"' as a
literal string, which the remote server would interpret as a single
argument to subscription-manager.
This led to the following failure message when using an activation key:
subscription-manager: error: no such option: --activationkey "mykey"
Update the arguments so that the remote server will properly interpret
them as two separate values.
boto's rds2 renamed `vpc_security_groups` to `vpc_security_group_ids`
and changed from a list of `VPCSecurityGroupMembership` to just a
list of ids. This accommodates that change when rds2 is being used.
Upstart scripts are being incorrectly identified as SysV init scripts
due to a logic error in the `service` module.
Because upstart uses multiple commands (`/sbin/start`, `/sbin/stop`,
etc.) for managing service state, the codepath for upstart sets
`self.svc_cmd` to an empty string on line 451.
Empty strings are considered a non-truthy value in Python, so
conditionals which are checking the state of `self.svc_cmd` should
explicitly compare it to `None` to avoid overlooking the fact that
the service may be controlled by an upstart script.
Some places ([AWS RDS](https://forums.aws.amazon.com/thread.jspa?threadID=151248)) don't have, or don't allow, access to the `pg_authid` table. The only reason that is necessary is to check for a password change.
This flag is a workaround so passwords can only be set at creation time. It isn't as elegant as changing the password down the line, but it fixes the longstanding issue #297 that prevented this from being useful on AWS RDS.
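Hypothetical usage (the flag name `no_password_changes` is an assumption here):

```yaml
- name: Manage a user on RDS without reading pg_authid
  postgresql_user:
    name: appuser
    password: "{{ app_db_password }}"
    no_password_changes: yes
```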
body_format is a new optional argument that enables handling of JSON or
YAML serialization format for the body argument.
When set to either 'json' or 'yaml', the body argument can be a dict or list.
The body will be encoded, and the Content-Type HTTP header will be set,
according to the body_format argument.
Example:
- name: Facette - Create memory graph
  uri:
    method: POST
    url: http://facette/api/v1/library/graphs
    status_code: 201
    body_format: json
    body:
      name: "{{ ansible_fqdn }} - Memory usage"
      attributes:
        Source: "{{ ansible_fqdn }}"
      link: "1947a490-8ac6-4bf2-47c1-ff74272f8b32"
The return code of "service pf onestatus" is usually zero on FreeBSD (tested with FreeBSD 10.0), even if pf is not running. So the service module always thinks that pf is running, even when it needs to be started.
I tried a playbook with the following (accidentally wrong) task:
tasks:
- name: authorized key test
authorized_key: key=/home/sam/.ssh/id_rsa.pub key_options='command="/foo/bar"' user=sam
I got the following traceback:
TASK: [authorized key test] ***************************************************
failed: [localhost] => {"failed": true, "parsed": false}
Traceback (most recent call last):
File "/home/sam/.ansible/tmp/ansible-tmp-1427110003.65-277897441194582/authorized_key", line 2515, in <module>
main()
File "/home/sam/.ansible/tmp/ansible-tmp-1427110003.65-277897441194582/authorized_key", line 460, in main
results = enforce_state(module, module.params)
File "/home/sam/.ansible/tmp/ansible-tmp-1427110003.65-277897441194582/authorized_key", line 385, in enforce_state
parsed_new_key = (parsed_new_key[0], parsed_new_key[1], parsed_options, parsed_new_key[3])
TypeError: 'NoneType' object has no attribute '__getitem__'
With this fix, I see the expected error instead:
TASK: [authorized key test] ***************************************************
failed: [localhost] => {"failed": true}
msg: invalid key specified: /home/sam/.ssh/id_rsa.pub
In cases when the python-apt package is not installed, ansible will
attempt to install it. After this attempt, it tries to import the
needed apt modules, but forgets to import the apt.debfile module.
The result is that playbooks that use the dpkg argument on a machine
that does not initially have the python-apt package available will
fail with the following error:
AttributeError: 'module' object has no attribute 'debfile'
This patch adds the appropriate import to the apt module to ensure
that necessary libraries are available in cases when the dpkg argument
is being used on a system that does not initially have the python-apt
package installed.
The following cases work for me now:
- Create new ASG with tags
- Update tags on ASG (create/change/delete)
In short, the module should now work as expected
wrt tagging. The previous code did not work at all
with latest boto for me (serialization errors) and
the logic was buggy anyway; e.g. removed tags
would never get deleted from ec2.
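For reference, tags in the shape the module now handles (illustrative):

```yaml
- ec2_asg:
    name: webtier
    min_size: 2
    max_size: 6
    tags:
      - environment: production
        propagate_at_launch: yes
      - role: web
```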
This will account for settings that are provided by the hierarchy of
Dockerfiles used to construct your image, rather than only accounting
for settings provided to the module directly.
This allows setting the pid namespace for a container. Currently only
the 'host' pid namespace is supported.
This requires Docker 1.4.1 and docker-py 1.0.0
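Illustrative task (assuming the option is exposed as `pid`):

```yaml
- name: Run a debugging container in the host PID namespace
  docker:
    image: example/debug
    name: debug
    pid: host
```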
Organize each state into a distinct function for readability and composability.
Rework `present` to create but not start containers. Add a `restarted` state
to unconditionally restart a container and a `reloaded` state to restart a
container if and only if its configuration is incorrect. Store our most recent
knowledge about container states in a ContainerSet object. Improve the value
registered by this task to include not only the inspect data from any changed
containers, but also action counters in their native form, a summary message
for all actions taken, and a `reload_reasons` key to store a human-readable
diagnostic to determine why each container was reloaded.
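For example, a task using the new `reloaded` state (a sketch):

```yaml
- name: Converge the web container to its declared configuration
  docker:
    name: web
    image: example/web:latest
    state: reloaded
  register: docker_result

# docker_result carries the inspect data, action counters, summary
# message and reload_reasons described above
- debug: var=docker_result
```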
Don't pass the volumes_from argument to the Docker create_container method.
If the volumes_from argument is passed to the create_container method, Docker
raises the following exception:
docker.errors.DockerException: 'volumes_from' parameter has no effect on
create_container(). It has been moved to start()
Consider the following response body (content) of a REST/JSON web service containing escaped quotation marks:
```json
{ "key": "\"works\"" }
```
Decoding this string with anything other than the raw codec loses the backslash that serves as the JSON escape, so a later json.loads will fail to parse it.
Inspired by [this thread](https://groups.google.com/forum/#!topic/ansible-project/kymtiloDme4) on the mailing list and the following python shell code:
```python
import json

string = r'{ "key": "\"works\"" }'
json.loads(string)                               # parses fine
json.loads(string.decode('raw_unicode_escape'))  # backslash preserved, parses fine
json.loads(string.decode('unicode_escape'))      # ValueError: the escape is consumed
```
Ports are integer values but the old code was assuming they were
strings. When login_port is put into playbook complex_args as an
integer the code would fail. This update makes the argument validation
ensure we have an integer, so we can send that value directly to the
relevant APIs.
Fixes #818
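With the fix, a port supplied as a bare integer under complex args validates cleanly (illustrative; the module name is just an example):

```yaml
- mysql_db:
    name: appdb
    state: present
    login_port: 3307
```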
If insertbefore/insertafter didn't match anything, the lineinfile module
was doing nothing instead of adding the line at the end of the file as
it's supposed to.
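For example, with this fix the following task appends the line at EOF when the pattern never matches (a sketch):

```yaml
- lineinfile:
    dest: /etc/example.conf
    insertafter: '^# BANNER'
    line: 'option=value'
    state: present
```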