CloudTrail is the AWS auditing service. The module is fairly simple, but ensuring CloudTrail remains enabled is very important for configuration management/devops/security. That's why I created it as a module.
This commit adds the overlayfs type to the lxc_container module. In
adding the overlayfs type, the commit also adds the ability to clone a
container. While cloning is not locked down to only the overlayfs
container backend, it is of particular interest when using the overlayfs
backend because it provides for amazingly fast snapshots.
Changes to the resource types and documentation have been added covering
how the new backend type can be used along with the clone operation.
This PR addresses a question asked by @fghaas on the original merged
pull request for overlayfs support,
https://github.com/ansible/ansible-modules-extras/pull/123.
The overlayfs archive function is a first-class function and allows
containers to be backed up using all methods, which brings support up
to that of all other storage backends.
The option parsing object within the module was performing a split
on an '=' sign and assuming that there would only ever be one '='
in a user-provided option. Sadly, that assumption is incorrect: the
list comprehension that builds the options list needs to split on
only the first occurrence of an '=' sign in a given option string.
This commit adds the required change to make it possible for options
to contain additional '=' signs and be handled correctly.
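A minimal illustration of the fix (a hedged sketch, not the module's exact code):

    opts = ['lxc.utsname=test01', 'lxc.environment=PATH=/usr/bin:/bin']

    # Splitting on every '=' yields three items for the second option and
    # breaks [key, value] unpacking downstream.
    broken = [opt.split('=') for opt in opts]

    # Splitting on only the first '=' always yields exactly [key, value].
    fixed = [opt.split('=', 1) for opt in opts]
    # fixed == [['lxc.utsname', 'test01'],
    #           ['lxc.environment', 'PATH=/usr/bin:/bin']]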
The volume create methods were making an assumption about the unit
sizes presented by the `vgdisplay` and the `lvdisplay` commands. To
correct the assumption the commands will now enforce a unit size of
"g", which will always convert sizes to gigabytes. This was an issue
brought up by @hughsaunders.
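A hedged sketch of the approach; the exact `vgdisplay` output line parsed by the module may differ from the illustrative comment:

    import subprocess

    def vg_size_gb(vg_name):
        # --units g forces gigabyte output regardless of LVM's default units
        out = subprocess.check_output(['vgdisplay', '--units', 'g', vg_name])
        for line in out.decode().splitlines():
            if 'VG Size' in line:
                # e.g. "  VG Size   <100.00 g" -> take the numeric field
                return float(line.split()[-2].lstrip('<'))
        raise ValueError('could not determine size of %s' % vg_name)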
The new module will allow users to control LXC containers from Ansible.
The module was built for use with LXC >= 1.0 and implements most of
what can be done using the various lxc clients with regard to running
containers. This first module is geared only at managing LXC containers.
The module provides:
build containers
destroy containers
archive containers
info from a single container
start / stop / restart containers
run commands within containers
add/modify lxc config for a container
supports backends including LVM
The example was showing how to use the `files` option to pass in a local file as an authorized public key for root. While this works, it's a bit sloppy, given that there's a specific option, `key_name`, which will use one of the public keys on your Rackspace account and add it as an authorized key for root. In our case, one of our admins didn't notice the `key_name` option because they scrolled straight to the example and saw the `files` strategy.
I propose that the example still show `files`, but not using a root public key as an example, and instead also demonstrate the `key_name` option so that it's clear from the example how to get the initial root public key deployed.
Commit 311ec543af ("If not specified, do not modify subnet/route_tables for ec2 VPCs") mostly fixed the problem, except that it left defaults in place for subnets and route_tables, so not specifying them still deleted them.
Taking a page out of the ec2 config, make sure that all of the
OpenStack modules handle the inbound auth config in the same way.
The one outlier is keystone wrt auth_url.
The OpenStack client utilities consume a set of input environment
variables for things like username and auth_url, so it's very
common for OpenStack users to have such settings set in their
environment. Indeed, things like devstack also output a shell file
to be sourced to set them. Although in a playbook it's entirely
expected that variables should be used to pass in system settings
like api passwords, for ad-hoc command line usage, needing to pass
in five parameters which are almost certainly in the environment
already reduces the utility.
Grab the environment variables and inject them as default. Special care
is taken to ensure that in the case where the values are not found, the
behavior of which parameters are required is not altered.
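A minimal sketch of the idea, assuming the conventional OS_* environment variables (the parameter names below are illustrative, not necessarily the exact module_utils spec):

    import os

    def openstack_argument_spec():
        spec = dict(
            login_username=dict(default=os.environ.get('OS_USERNAME')),
            login_password=dict(default=os.environ.get('OS_PASSWORD')),
            login_tenant_name=dict(default=os.environ.get('OS_TENANT_NAME')),
            auth_url=dict(default=os.environ.get('OS_AUTH_URL')),
            region_name=dict(default=os.environ.get('OS_REGION_NAME')),
        )
        # An unset variable leaves the default as None, so which parameters
        # are required is not altered.
        return spec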
It is possible to create an instance, terminate the instance, and then
attempt to recreate the instance with the same parameters. In this case
`ec2.run_instances` returns a reservation list containing the instance ids,
but the logic gets stuck waiting for the instances to exist in the call to
`ec2.get_all_instances`, even when `wait` is set to no.
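One hedged way to sketch the handling (the helper name is hypothetical; the committed fix may differ):

    import time
    from boto.exception import EC2ResponseError

    def wait_for_instances(ec2, instance_ids, timeout=300):
        # Newly run instances may not be visible to get_all_instances()
        # immediately, so poll until they exist instead of assuming they do.
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                reservations = ec2.get_all_instances(instance_ids=instance_ids)
            except EC2ResponseError:
                reservations = []  # ids not yet visible (eventual consistency)
            found = [i for r in reservations for i in r.instances]
            if len(found) == len(instance_ids):
                return found
            time.sleep(5)
        raise RuntimeError('timed out waiting for instances to exist')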
The provisioning module knows more about how nova deals with IP
addresses now. Ensure that the inventory module is similarly smart
by separating out the logic into the shared openstack module_utils.
During the state check, also check IP address information. This gets us
two things. The most obvious is that for direct IP management, a
change to the config will be reflected in the config of the instance. But
also, if we succeed in creating the instance but fail in adding an IP,
this lets us re-run and arrive in the state we were expecting.
The fun part about having multiple vendors providing the same cloud
is that while their APIs are the same, what they do with their metadata
tends to be ... fun. So in order to be able to express sanely what you
want without needing to stick tons of unreadable UUIDs in your config,
it turns out that sometimes you need to further filter image and flavor
names. Specific examples are (deprecated) images in HP Cloud and the
Standard and Performance flavors on Rackspace.
Putting UUIDs and numeric identifiers in playbooks is fragile, especially
with cloud providers who change them out from under you. Asking for
Ubuntu 14.04 is consistent; the UUID associated with it is not. Add
mutually exclusive parameters to allow specifying images by name and
flavors by RAM amount.
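A hedged sketch of what name- and RAM-based selection can look like (hypothetical helper names, written against the legacy novaclient API):

    import re

    def find_image(nova, name_re, exclude_re=None):
        # Match images by name, optionally filtering out e.g. '(deprecated)'.
        for image in nova.images.list():
            if re.search(name_re, image.name):
                if exclude_re and re.search(exclude_re, image.name):
                    continue
                return image
        return None

    def find_flavor_by_ram(nova, ram_mb):
        # Pick the smallest flavor satisfying the requested amount of RAM.
        candidates = [f for f in nova.flavors.list() if f.ram >= ram_mb]
        return min(candidates, key=lambda f: f.ram) if candidates else None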
The floating-ip extension, while pretty ubiquitous, is not a
foregone conclusion. Specifically, Rackspace, while also
served by the rax module, is a valid OpenStack cloud and can
be interacted with directly via nova interfaces.
Add support for determining public and private IPs for
OpenStack clouds that don't use floating ips by reading
the public and private keys from the addresses dict.
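A minimal sketch, assuming the cloud labels its networks 'public' and 'private' in the addresses dict (labels vary by provider):

    def extract_ips(server):
        public, private = [], []
        for label, entries in server.addresses.items():
            for entry in entries:
                if label == 'public':
                    public.append(entry['addr'])
                elif label == 'private':
                    private.append(entry['addr'])
        return public, private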
If the region name is specified in the config, we need to pass it
in to the nova client constructor. Since key_name is similarly optional,
go ahead and handle both parameters the same.
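A hedged sketch of the pattern against the legacy novaclient v1_1 API (parameter plumbing is illustrative):

    from novaclient.v1_1 import client as nova_client

    def build_nova_client(params):
        # Pass optional settings only when they were actually supplied, so an
        # absent region_name is omitted rather than sent as None.
        kwargs = {}
        if params.get('region_name'):
            kwargs['region_name'] = params['region_name']
        return nova_client.Client(params['login_username'],
                                  params['login_password'],
                                  params['login_tenant_name'],
                                  params['auth_url'],
                                  service_type='compute', **kwargs)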
The desired behaviors for getting a floating IP associated with a pool
and getting a floating IP not associated with a pool are just different
enough that following them as one set of nested ifs is tricky. Split
the function into two: one for the pool logic and one for the non-pool logic.
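A hedged sketch of the split, with hypothetical helper names:

    def get_floating_ip_from_pool(nova, pool):
        # Reuse an unattached IP from the requested pool, else allocate one.
        for fip in nova.floating_ips.list():
            if fip.pool == pool and fip.instance_id is None:
                return fip
        return nova.floating_ips.create(pool=pool)

    def get_any_floating_ip(nova):
        # No pool concept here: take any unattached IP, else allocate one.
        for fip in nova.floating_ips.list():
            if fip.instance_id is None:
                return fip
        return nova.floating_ips.create()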
Several azure fixes/improvements, including:
* Improve the failure message when python-azure is not installed
* Improve required-argument handling
* Fix a traceback on instance termination when the variable
'deployment' was not set
* Fix a traceback (#8298) when creating instances using the newer SDK
Update the instance info after the action is taken; otherwise the module
will return the info about the instance that it got prior to the action.
So if you had a task to start an instance:

    - ec2:
        instance_ids: ...
        state: running
      register: ec2_info

the registered data would have an empty public_dns_name, public_ip,
private_dns_name, and private_ip.
The current (hard-coded) retry interval of 500 seconds can cause ansible to have excessive run-times in the case of many domains. `retry_interval` provides a way to customize the wait between retries of calls to route53.
Some environments that utilize an SSL terminator with a self-signed
certificate cannot use the publicURL without getting certificate
verify errors. This allows using the internalURL, which in my case
is HTTP and not HTTPS.
Closes issue: #8057
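A minimal sketch of selecting the internal endpoint with the legacy novaclient v1_1 API (parameter plumbing is illustrative):

    from novaclient.v1_1 import client as nova_client

    def build_internal_client(params):
        # endpoint_type picks which URL from the keystone service catalog the
        # client talks to; 'internalURL' avoids the SSL-terminated publicURL.
        return nova_client.Client(params['login_username'],
                                  params['login_password'],
                                  params['login_tenant_name'],
                                  params['auth_url'],
                                  service_type='compute',
                                  endpoint_type='internalURL')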
The following patch adds a missing 'msg=' keyword. An exception is raised
in ansible if this block is reached during the execution of the module:

    TypeError: fail_json() takes exactly 1 argument (2 given)

With the 'msg=' added, you get a more informative error. For example:

    msg: No settings provided to update_domain().
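The change in miniature, `module` being the AnsibleModule instance:

    # Before: a positional argument raises
    # TypeError: fail_json() takes exactly 1 argument (2 given)
    module.fail_json("No settings provided to update_domain().")

    # After: the keyword argument produces the informative failure message.
    module.fail_json(msg="No settings provided to update_domain().")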