* FreeBSD does not support --omit-header and --absolute-names
* The option for following symlinks with getfacl is different on FreeBSD
* ZFS on FreeBSD uses NFSv4 ACLs, which use a slightly different syntax
* FreeBSD does not have a --test flag, so always return 'True'
* FreeBSD does not have the --omit-header option, so we have to filter the output ourselves
* Mark FreeBSD as working for the acl module
* commands argument now accepts dict arguments [1]
* waitfor has been renamed to wait_for with an alias to waitfor
* only show commands are allowed when check mode is specified
* config mode is no longer allowed in the command stack
* add argument match with valid values any, all
[1] The commands argument will now accept a dict argument that can
specify the output format of the command. To specify a dict argument
use the form of { command: <str>, output: <str>, prompt: <str>,
response: <str> }. Command and output are required arguments. Output
accepts valid values text and json. See the task sketch below.
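A minimal task sketch of the dict form described above; the module name
(shown here as ios_command), the commands, and the wait_for expression are
placeholders for illustration:

    - ios_command:
        commands:
          - { command: "show version", output: "text" }
        wait_for:
          - "result[0] contains IOS"
        match: any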
* add src argument to provide path to config file
* add new choice to match used to ignore current running config
* add update argument with choices merge, replace or check
* add backup argument to backup current running config to control host
* add defaults argument to control collection of config with or without defaults
* add save argument to save current running config to startup config
* add state argument to control state of config file
* deprecated force argument, use match=none instead (see the task sketch below)
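A hedged task sketch combining the arguments listed above; the module name
and file path are placeholders:

    - ios_config:
        src: configs/switch01.cfg   # path to the config file
        match: none                 # ignore the current running config (replaces force)
        backup: yes                 # back up the running config to the control host
        save: yes                   # save the running config to the startup config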
The CommandRunner will not allow duplicate commands to be added to the
command stack. This fix catches the exception and continues if a
duplicate command is about to be added to the runner instance.
* commands argument now accepts dict arguments [1]
* waitfor has been renamed to wait_for with an alias to waitfor
* only show commands are allowed when check mode is specified
* config mode is no longer allowed in the command stack
* add argument match with valid values any, all
[1] The commands argument will now accept a dict argument that can
specify the output format of the command. To specify a dict argument
use the form of { command: <str>, output: <str>, prompt: <str>,
response: <str> }. Command and output are required arguments. Output
accepts valid values text and json.
* add src argument to provide path to config file
* add new choice to match used to ignore current running config
* add update argument with choices merge or check
* add backup argument to backup current running config to control host
* add defaults argument to control collection of config with or without defaults
* add save argument to save current running config to startup config
* add state argument to control state of config file
* deprecated force argument, use match=none instead
* merge changes from ios shared module functions into ios_config.
* add src argument to provide path to config file
* add new choice to match used to ignore current running config
* add update argument with choices merge or check
* add backup argument to backup current running config to control host
* add defaults argument to control collection of config with or without defaults
* add save argument to save current running config to startup config
* add state argument to control state of config file
* deprecated force argument, use match=none instead
* commands argument now accepts dict arguments [1]
* waitfor has been renamed to wait_for with an alias to waitfor
* only show commands are allowed when check mode is specified
* config mode is no longer allowed in the command stack
* add argument match with valid values any, all
[1] The commands argument will now accept a dict argument that can
specify the output format of the command. To specify a dict argument
use the form of { command: <str>, output: <str>, prompt: <str>,
response: <str> }. Command and output are required arguments. Output
accepts valid values text and json.
Importing a (sign-only) subkey with the apt_key module always fails;
however, the actual keyring gets created and contains the correct keys.
Apparently the all_keys function skips the subkeys, hence the problem.
Fixes #4365
* make HEAD parsing more robust
* Fail the module for any splitter errors
* fix combining depth and version on filepath urls by prepending file://
Addresses #907
* Made some changes to determine the branch name more reliably (it may contain slashes now).
* Made determination of the branch name more reliable, as per comment on PR #907
- Removed required_if.
- Fixed doc strings.
- Removed debug output being appended to actions.
- Put import of basics at bottom to be consistent with other docker modules
- Added 'containers' alias to 'connected' param
- Put facts in ansible_facts.ansible_docker_network (see the task sketch below)
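A hedged task sketch using the connected/containers parameter noted above;
network and container names are placeholders:

    - docker_network:
        name: appnet
        connected:        # alias: containers
          - web01
          - db01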
* Git: Determine if remote URL is being changed
Ansible reported there were no changes when only the remote URL for a
repo was changed. This properly tracks and reports when the remote URL
for a repo changes.
Fixes #4006
* Fix handling of local repo paths
* Git: Use newer method for fetching remote URL
* Git: use ls-remote to fetch remote URL
Using ls-remote to fetch remote URL is supported in earlier versions
of Git compared to using remote command.
* Maintain previous behavior for older Git versions
Previously, whether or not the remote URL changed was not factored
into the command's changed status. Git versions prior to 1.7.5 lack the
functionality used for fetching a repo's remote URL, so these versions
will update the remote URL without affecting the changed status.
When you try to unarchive remote files with the option copy=no, the code always fails, as evidenced in issue #4202. That happens because the conditional checking "if remote_src=no or copy=yes" will always be true, since the default values are remote_src=no and copy=yes.
My modification only changes the condition from or to and; that way it is only true when both variables keep their default values, and otherwise you can unarchive remote files (see the task sketch below).
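A minimal sketch of the remote unarchive case this fix enables; paths are
placeholders:

    - unarchive:
        src: /tmp/archive.tar.gz   # archive already present on the target host
        dest: /opt/extracted
        copy: no                   # do not copy the archive from the control host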
* Add diffmode support to git module
This patch adds missing diffmode support to the git module.
* Remodel get_diff() and calls to it
As proposed by @abadger
* Ensure we fetch the required object before performing a diff
Also we handle the return code ourselves, so don't leave this up to run_command().
Now that there is a general-purpose `Fact` helper to detect whether systemd
is active, we can rely on that to apply SystemdStrategy.
Detecting the presence of systemd at runtime is more reliable than
distribution-version-based heuristics (e.g., Debian and Ubuntu allow the
user to change the default init system, Gentoo allows switching as
well, and so on).
A capital "S" appears when the the setuid or setgid bit are set but have no effect. Likewise, a capital "T" appears when the sticky bit is set but it has no effect.
During check mode (`--check`), the variable `changed` could be
used uninitialized, yielding this error:
`UnboundLocalError: local variable 'changed' referenced before assignment`
This changeset simply initializes it to False.
* error handling for importing a non-existent db
* create db on import state and give a suitable message on deleting a db
* handle all possible cases of the db existing or not (see the task sketch below)
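A minimal sketch of the import case described above; db name and dump path
are placeholders:

    - mysql_db:
        name: appdb
        state: import
        target: /tmp/appdb.sql   # dump to import; the db is created if it does not exist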
* Check mode fixes for ec2_vpc_net module
Returns VPC object information
Detects state change for VPC, DHCP options, and tags in check mode
* Early exit on VPC creation in check mode
The default VPC egress rules were being left in the egress rules for
purging in check mode. This ensures that the module returns the correct
change state during check mode.
By default, ssh-keygen will pick a suitable default key size
for all types of keys. By hardcoding the number of bits to the
RSA default, we make life harder for people picking elliptic
curve keys, so this commit makes ssh-keygen use its own default
unless specified otherwise by the playbook (see the task sketch below).
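A hedged sketch, assuming this refers to the user module's key generation;
user name and key type are placeholders:

    - user:
        name: deploy
        generate_ssh_key: yes
        ssh_key_type: ecdsa   # no ssh_key_bits set, so ssh-keygen picks its own default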
sysrc(8) does not exit with non-zero status when encountering a
permission error.
By using service(8) `service <name> enabled`, we now check the actual
semantics expressed through calling sysrc(8), i.e. we check if the
service enablement worked from the rc(8) system's perspective.
Note that in case service(8) detects the wrong value is still set,
we still output the sysrc(8) output in the fail_json() call:
the user can derive the exact reason of failure from sysrc(8) output.
AWS security group names are unique only within a VPC (restated: the VPC
and group name form a unique key).
When attaching security groups to an ELB, the ec2_elb_lb module would
erroneously find security groups of the same name in other VPCs, thus
causing an error stating as much.
To eliminate the error, we check that we are attaching subnets (implying
that we are in a VPC), grab the vpc_id of the 0th subnet, and filter
the list of security groups on this VPC. In other cases, no such filter
is applied (filters=None).
EC2 Security Group names are unique given a VPC. When a group_name
value is specified in a rule, if the group_name does not exist in the
provided vpc_id it should create the group as per the documentation.
The groups dictionary uses group_names as keys, so it is possible to
find a group in another VPC with the desired name. This causes
an error because the security group being acted on and the security group
referenced in the rule are in two different VPCs.
To prevent this issue, we check whether vpc_id is defined and, if so,
check that the VPCs match; otherwise we treat the group as new.
From the documentation[1] one would assume that CAPABILITY_IAM could
simply be replaced with CAPABILITY_NAMED_IAM; this has empirically been
shown not to be the case.
1: "If you have IAM resources, you can specify either capability. If you
have IAM resources with custom names, you must specify
CAPABILITY_NAMED_IAM."
http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_CreateStack.html
Previously, when the attributes of a GCE firewall changed, they were ignored. This PR changes that behavior and now updates them.
Note that the "update" also removes attributes that are not specified.
An overview of the firewall rule behavior is as follows:
1. firewall name in GCP, state=absent in PLAYBOOK: Delete from GCP
2. firewall name in PLAYBOOK, not in GCP: Add to GCP.
3. firewall name in GCP, name not in PLAYBOOK: No change.
4. firewall names exist in both GCP and PLAYBOOK, attributes differ: Update GCP to match attributes from PLAYBOOK.
The current module fails when it tries to assign floating IPs to a server
that already has them, and either fails or reports "changed=True" when no
IP was added.
Removing a floating IP doesn't require the address:
the server name/id is enough to remove a floating IP (see the task sketch below).
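A minimal sketch of removing a floating IP by server only; the server name is
a placeholder:

    - os_floating_ip:
        server: web01
        state: absent   # no floating IP address required, per the change above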
This parameter was actually added in 2.0. It's just that the
documentation in previous versions of the module was wrong (it said the
name was "network" rather than "name"). I've renamed the parameter in
the documentation of prior versions so ansible-module-validate should no
longer think that this is a new parameter.
The module would raise a KeyError trying to find the save_config key
which is not present in the argument_spec. This was caused by the
check_args() function. Since the ios shared argument spec isn't used,
the check_args function is not needed and has been removed.
This removes the get_module() factory function and directly creates
an instance of NetworkModule. This commit includes some minor clean
up to transition to the ios shared module for 2.2
The shade update_router() call will return None if the router is
not actually updated. This will cause the module to fail if we
do not protect against that.
* using check mode will now block all commands except show commands
* module will no longer allow config mode commands
* check args for unused values and issue a warning
This refactors the ios_config module to use the network module added
in 2.2 to simplify common network functions
new features
* add src, dest arguments for working with config
* results now return flag if the config was saved or not
* adds append argument for updating the dest file (when dest is used)
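A hedged task sketch of the new src argument described above; the file path
is a placeholder:

    - ios_config:
        src: configs/core01.cfg   # config source; results include a flag for whether config was saved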
The os_server module could automatically generate a floating IP for
the user with auto_ip=true, but we didn't allow for this FIP to be
automatically deleted when deleting the instance, which is a bug.
Add a new option called delete_fip that enables this.
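A minimal sketch of the new option; the server name is a placeholder:

    - os_server:
        name: web01
        state: absent
        delete_fip: yes   # also delete the floating IP that was auto-allocated via auto_ip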
* Revert PR #3575 since it causes problems related to exclude patterns
By using a different method for getting archive file lists and extracting, we introduced new problems related to excluding based on gtar patterns.
As a result, files that would be excluded by gtar would still be in the file list. Implementing our own gtar-compatible pattern exclusion mechanism is near to impossible (believe me, we looked at it...). The best way is to look at the original problem and deal with that, and ensure that extraction and file lists are done with the exact same tool and exact same options.
The solution is to decode the octal unicode representation in gtar's output back to unicode. Since gtar has no problem extracting these files in LANG=C, we simply have to compensate for it.
This reverts #3575 and fixes #11348.
* Implement codecs.escape_decode() instead of decode("string_escape") for python3
* remove unused variables
* fetch branch name instead of HEAD
fix #3782, which was introduced by f1bacc1d3f
* disable git depth option for old git versions
fixes #3782
git support for `--depth` did not fully work in old git versions (before 1.8.2)
fall back to full clones/fetches on those versions
* raise required git version to 1.9.1 for depth option
* use correct depth argument in switch_version
* service module: use sysrc on FreeBSD
sysrc(8) is the designated userland program to edit rc files on FreeBSD.
It first appeared in FreeBSD 9.2, hence is available on all supported
versions of FreeBSD.
Side effect: fixes #2664
* Incorporate changes suggested by bcoca.
- Use `get_bin_path` to find sysrc binary.
- Only use sysrc when available (support for legacy versions of FreeBSD)
Without this, ansible 2.1 will convert some arguments that are
meant to be dict or list type to their str representation.
Signed-off-by: Patrick Uiterwijk <puiterwijk@redhat.com>
Change the file mode arg to 'raw' ala file args
Following the file_common_args model, change the
type of the 'mode' arg here to type='raw' with no
default arg value.
The default mode for file creation is the module
constant DEFAULT_SOURCES_PER, and is used if no
mode is specified.
A default mode of 0644 (and not specified as int or str)
would get converted to an octal 420, resulting in the
sources file being created with mode '0420' instead of '0644'
Fixes #16370
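A hedged task sketch, assuming this describes the apt_repository module's mode
argument; the repo line is a placeholder:

    - apt_repository:
        repo: "deb http://archive.example.org/debian stable main"
        mode: "0644"   # pass the mode as a string to avoid int/octal conversion surprises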
The `source_dest_check` and `termination_protection` variables are being
assigned twice in ec2.py, likely due to an incorrect merge somewhere
along the line.
* A few more sanity checks for detecting unzip output that's not a file entry
Also note that there's a rounding error somewhere in the mtime
comparison code.
* Fix reference to sub-array
Because the async_status module will read from the same file that
the async_wrapper module is writing, it's possible that the file
may not be fully synced during a read, causing spurious failures.
Use a temp file to do an atomic operation on the file. We can't
use atomic_move() here as that doesn't work properly under async.
Also, let's not read concurrently from the same file the subprocess
is writing to. Instead, capture stdout/stderr via PIPE and write to
the file to avoid nasty races.
* This adds support for the CommandRunner to handle executing commands on
the remote device.
* It also changes the waitfor argument to wait_for to remain compatible
with other modules and adds an alias for waitfor.
* Restricts commands to show commands only when check mode is specified.
* add version_added to wait_for doc string
The IAM group modules were not receiving the `module` object, but they
use `module.fail_json()` in their exception handlers. This patch passes
through the module object so the real errors from boto are exposed,
rather than errors about "NoneType has no method `fail_json`".
$source check causes:
FAILED! => {"changed": false, "failed": true, "msg": "A parameter cannot be found that matches parameter name 'Source'."}
$Params.Remove causes:
FAILED! => {"changed": false, "failed": true, "msg": "Method invocation failed because [System.Management.Automation.PSCustomObject] does not contain a method named 'Remove'."}
* Change documented options for os_networks_facts
os_network_facts currently lists 'network' as an available option, taking the Name or ID. In Ansible 2.0.2 to 2.2.0, this is not valid. Options 'name' and 'id' should be used instead.
* Update os_networks_facts.py
* Update os_networks_facts.py
Set version_added to the only accepted value
* Update os_networks_facts.py
Removed inappropriate 'ID' parameter
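A minimal sketch using the documented 'name' option instead of 'network'; the
network name is a placeholder:

    - os_networks_facts:
        name: public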
When `version` is not specified, it defaults to "HEAD". "HEAD" is not a
remote tag, and it's not listed in the output of get_branches(), so we'd
keep repo_updated at the default value (None) and then return early with
changed=True in --check mode, even when before == after.
Fixes #3024.
Ceph Object Gateway (Ceph RGW) is an object storage interface built on top of
librados to provide applications with a RESTful gateway to Ceph Storage
Clusters:
http://docs.ceph.com/docs/master/radosgw/
This patch adds the required bits to use the RGW S3 RESTful API properly.
Signed-off-by: Javier M. Mellid <jmunhoz@igalia.com>
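A hedged sketch of pointing the s3 module at a Ceph RGW endpoint; the rgw
flag, endpoint URL, bucket, and paths are assumptions for illustration:

    - s3:
        bucket: backups
        object: /etc-backup.tar.gz
        src: /tmp/etc-backup.tar.gz
        mode: put
        s3_url: http://rgw.example.com
        rgw: yes   # treat s3_url as a Ceph RGW endpoint rather than AWS S3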
Prior to Python 2.5, SystemExit was a subclass of Exception.
In Py2.4, this is causing extra error output on valid sys.exit(0).
(Toshio) Call sys.exit from inside of the SystemExit exception handler so py2.4 and py2.5+ behaviour matches
* Fixing compile-time errors with respect to a) exception handling for Python 3 in util, b) problematic octal usage (fixed) and c) print json_dump -> print(json_dump(xyz)) ... et al
* This code was not Python 2.4 compliant. Octal codes and exception handling are now working with Py 2.4, 2.6, & 3.5.
* Fixing formatting (or rather reverting a non-2.4-compatible change). Works in compile & runtime checking.
* a) revert to using print to sys.stderr, not fail_json; b) fixed var name in exception
* Python 3 compatible print (print >>sys.stderr will generate a TypeError - now uses sys.stderr.write instead).
The "Developing Modules" documentation states:
Include a minimum of dependencies if possible. If there are
dependencies, document them at the top of the module file, and have
the module raise JSON error messages when the import fails.
When docker_service runs on a remote host without PyYAML it crashes with
ImportError.
This patch raises a JSON error message when import fails, but only if
the PyYAML module is actually used. It's only needed when the
"definition" parameter is given.
Signed-off-by: Stefan Hajnoczi <stefanha@gmail.com>
The example for delete=yes does not specify recursive, although it is
required. In addition, the wording for the delete option is confusing
about where files are really deleted from. This should clarify that (see the task sketch below).
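A minimal sketch of the documented combination; paths are placeholders:

    - synchronize:
        src: files/site/
        dest: /srv/www/site/
        recursive: yes   # required for delete to take effect
        delete: yes      # removes files from dest that are absent from src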
* Update GitHub templates to reflect ansible/ansible
Update the GitHub templates to what is used for some time on ansible/ansible
For more information, see ansible/ansible#15961
* Small improvement from ansible/ansible
* Improve the unzip output scraping
Ensure we capture the complete file (also when it includes spaces).
Drop lines that do not conform (in length) to what we expect (e.g. header/footer).
This fixes #3813
* Fix how split() works
* remove unused variables
* fetch branch name instead of HEAD
fix #3782, which was introduced by f1bacc1d3f
* disable git depth option for old git versions
fixes #3782
git support for `--depth` did not fully work in old git versions (before 1.8.2)
fall back to full clones/fetches on those versions
Reading the entire tar file into memory can result in out-of-memory
conditions such as this traceback:
Traceback (most recent call last):
File "/tmp/ansible_YELTSu/ansible_module_docker_image.py", line 486, in load_image
self.client.load_image(image_data)
File "/usr/local/lib/python2.7/dist-packages/docker/api/image.py", line 147, in load_image
res = self._post(self._url("/images/load"), data=data)
...
File "/usr/lib/python2.7/httplib.py", line 997, in endheaders
self._send_output(message_body)
File "/usr/lib/python2.7/httplib.py", line 848, in _send_output
msg += message_body
MemoryError
Luckily docker-py's load_image(), which calls requests post(), accepts a
file-like object instead of a string. Pass in the file object to avoid
reading the full file into memory. This allows larger tar files to load
successfully.
Signed-off-by: Stefan Hajnoczi <stefanha@gmail.com>
* This moves the lines in the code that parse the `env` and `env_file` options for docker to the end of the `__init__()` function.
This is needed because the `_check_capabilities` function needs both a working `self.client` and a proper `self.docker_py_versioninfo`.
`_check_capabilities` is used by `ensure_capabilities`, which is, in turn, used by `get_environment`.
This means that before this commit, the environment variables could not be loaded because both `self.client` and `self.docker_py_versioninfo` were not set at that time.
This commit fixes that by putting the environment variable parsing after those two.
The default pagination is every 100 items with a maximum of 1000 from
Amazon. This properly uses the marker returned by Amazon to concatenate
the various pages from the results.
This fixes #2440.