This fixes a condition where an exception is raised when collecting `interface`
facts and the transport is set to nxapi in the nxos_nxapi module.
fixes ansible/ansible#17691
* Fixing bind mount on Linux
* The latest update from jtyr doesn't pass integration tests.
Manually select the changes that are necessary to fix the bug with
unmounting
When setting state=absent the nxos_nxapi module would always try to remove
the configuration regardless of the current state of the device. This will
fix that problem.
This also updates the docstring to correctly reflect https as default=no
fixes #4955
depends on ansible/ansible#17728
Fixes #4063.
Tar does not use this parameter on extraction (-x) or diff (-d) (the
only two cases where it is passed in unarchive). It only uses it on
creation:
https://www.gnu.org/software/tar/manual/html_section/tar_33.html
Providing `unarchive` with a file mode of `0755` (octal) makes it pass
the argument `--mode 493` (493 = 0755 in decimal) to `tar`, which then
fails while verifying it (because it contains an invalid octal char
'9'). Not passing the parameter to tar solves the issue.
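A minimal illustration of the mismatch (not the module code): an octal literal stored as a Python int prints as its decimal value, which is what ended up after `--mode` on the tar command line.
```
# Octal 0755 is the integer 493; formatting it with %d/str() yields "493",
# which tar rejects as an octal mode because of the '9'.
mode = 0o755
print(mode)                 # 493
print('--mode %d' % mode)   # --mode 493  <- what unarchive effectively passed
print('--mode %o' % mode)   # --mode 755  <- the octal form tar would have accepted
```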
* Add support for password aging on Solaris
* Fix shadow file editing when {MIN,MAX,WARN}WEEKS is not set in /etc/default/passwd
* Un-break with python3
* _Really_ un-break with python3
* Pip: handle parsing different pip commands
* Pip: use 'pip list' when available
* Pip: explicitly check which command is used
* Pip: add error checking when fetching packages
* fixed docstring referencing old arguments
* changed out lxml for xml library to avoid import errors
* fixed issue when trying to confirm a commit that will end up being a NOOP
* fixed issue for passing replace argument to load_config method
* fixes exception thrown when sending commands to device
* fixes exception thrown when retrieving current resource instance
* fixes issue where netconf would be configured in some instances when state
was set to absent
* now returns the command string sent to the remote device
* fixes argument name to be netconf_port with alias to listens_on
Rather than just checking whether a package with the right
name is installed, use `local_nvra` to check whether the
version/release/arch differs too.
Remove `local_name` as it is a shortcut too far.
Fixes #3807. Fixes #4529.
ansible-doc -vvvv -l show this warning:
[WARNING]: While constructing a mapping from /home/misc/checkout/git/ansible/lib/ansible/modules/core/network/junos/junos_config.py,
line 88, column 5, found a duplicate dict key (required). Using last defined value only.
The eos_eapi module would not configure the port if the protocol wasn't
configured, as reported in #4905. This changes the behavior to allow
the port to be configured independently.
fixes #4905
The AWS API requires that any termination policy list that includes
`Default` must end with `Default`. The attribute sorting caused any list
of attributes to be sorted lexically, so a list like
`["OldestLaunchConfiguration", "Default"]` would be changed to
`["Default", "OldestLaunchConfiguration"]` because `Default` comes earlier
alphabetically. This caused calls to fail with BotoServerError, per #4069.
This commit also adds proper tracebacks to all BotoServerError fail_json
calls.
Closes #4069
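A hedged sketch of the required ordering (the helper name is hypothetical, not the module's code): compare or normalize attribute lists however you like, but keep `Default` at the end of a termination policy.
```
def normalize_termination_policies(policies):
    """Hypothetical helper: preserve the given order of policies but force
    'Default' (if present) to the end, as the AWS API requires."""
    others = [p for p in policies if p != 'Default']
    if 'Default' in policies:
        others.append('Default')
    return others

print(normalize_termination_policies(["OldestLaunchConfiguration", "Default"]))
# ['OldestLaunchConfiguration', 'Default'] -- no longer reordered alphabetically
```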
* added Py2.4 and YAML Documentation fixes
* added no_log for password
* incorporated additional review comments
* remove type for options block
* fix type for pn_multiprotocol
CERN maintains its own fork of "Scientific Linux",
which identifies as "Scientific Linux CERN SLC".
This commit lets Ansible know that this is yet
another variant of RHEL.
- Use range instead of xrange.
- Use python3-apt package for python 3.
- Eliminate unsupported for/else/raise usage.
- Use list on dict.items when modifying dict.
- Update requirements documentation.
Also made non-intrusive style fixes (adding blank lines).
Previously, the calculation of the number of terminated instances assumed
all instances were in the first reservation returned by AWS. If that is not
the case, the calculated number of terminated instances never reaches the
number of instances and the module always times out. By unpacking the
instances from every reservation we get an accurate count and the module
exits correctly.
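A rough boto 2 sketch of that unpacking (function name assumed, not the module's exact code):
```
def count_terminated(ec2_connection, instance_ids):
    # get_all_instances() returns reservations; each reservation holds a list
    # of instances, so flatten across *all* reservations before counting.
    reservations = ec2_connection.get_all_instances(instance_ids=instance_ids)
    instances = [inst for res in reservations for inst in res.instances]
    return len([inst for inst in instances if inst.state == 'terminated'])
```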
Currently instances with multiple ENIs can't be started or stopped
because sourceDestCheck is a per-interface attribute, but we use the
boto global access to it (which only works when there is a single ENI).
This patch handles multiple ENIs and applies sourceDestCheck across
all interfaces the same way.
Fixes #3234
The session keyword is no longer needed or supported in the load_config()
method for eos. This fixes an issue in eos_template where the session
keyword was still being sent.
* remove redundant if submodules_updated
* speed up git by reducing remote commands
* run fetch only once
* run ls-remote less
* don't run ls-remote if one would run fetch anyhow
* remove unnecessary remote_branch check in clone
* kept if depth and version given
* fix fetch on old git versions
The daemonizing code here is taken from an ActiveState recipe, which
includes changing to / as a general best practice. While that is
normally true to allow for deleting the directory that the daemon
process started in, in this case it is not relevant as this is not
intended to be an actual long-running daemon.
Issue ansible/ansible#17466
* Added Solaris support to the mount module.
* Added checking so that if a non-standard fstab file is specified it will
still work in Solaris without breaking existing functionality.
* Added a check to avoid writing duplicate vfstab entries on Solaris
* Added "version_added" to new boot option
This means we will have to unarchive the complete archive if a single change is found.
Unfortunately we cannot fix this for `unzip`, the only hope is a pure-python reimplementation.
This fixes problems reported in the comments of #3810
os.getlogin() returns the user logged in on the controlling terminal. However
'crontab' only looks for the login name of the process' real user id which
pwd.getpwuid(os.getuid())[0] does provide.
While in most cases there is no difference, the former might fail under certain
circumstances (e.g. a lxc container connected by attachment without login),
throwing the error 'OSError: [Errno 25] Inappropriate ioctl for device'.
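The change boils down to one call (illustrative snippet):
```
import os
import pwd

# Works even without a controlling terminal (e.g. lxc attach), where
# os.getlogin() raises "OSError: [Errno 25] Inappropriate ioctl for device".
login_name = pwd.getpwuid(os.getuid())[0]
```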
* 'before' and 'after' are now only applied to 'lines'
* remove update argument
* update doc strings
* add path argument when performing config difference
* removes update argument
* adds `config` option to replace argument
* moves session management into shared module
* cleans up doc strings
* `before` and `after` args now only apply to lines
If you apply `wait=yes` and use `instance_tags` as your filter for
stopping/starting EC2 instances, this stack trace happens:
```
An exception occurred during task execution. The full traceback is:
Traceback (most recent call last):
  File "/tmp/ryansb/ansible_FwE8VR/ansible_module_ec2.py", line 1540, in <module>
    main()
  File "/tmp/ryansb/ansible_FwE8VR/ansible_module_ec2.py", line 1514, in main
    (changed, instance_dict_array, new_instance_ids) = startstop_instances(module, ec2, instance_ids, state, instance_tags)
  File "/tmp/ryansb/ansible_FwE8VR/ansible_module_ec2.py", line 1343, in startstop_instances
    if len(matched_instances) < len(instance_ids):
TypeError: object of type 'NoneType' has no len()

fatal: [localhost -> localhost]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_name": "ec2"}, "module_stderr": "Traceback (most recent call last):\n  File \"/tmp/ryansb/ansible_FwE8VR/ansible_module_ec2.py\", line 1540, in <module>\n    main()\n  File \"/tmp/ryansb/ansible_FwE8VR/ansible_module_ec2.py\", line 1514, in main\n    (changed, instance_dict_array, new_instance_ids) = startstop_instances(module, ec2, instance_ids, state, instance_tags)\n  File \"/tmp/ryansb/ansible_FwE8VR/ansible_module_ec2.py\", line 1343, in startstop_instances\n    if len(matched_instances) < len(instance_ids):\nTypeError: object of type 'NoneType' has no len()\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
```
That's because the `instance_ids` variable is None if not supplied
in the task. That means the instances that result from the instance_tags
query aren't going to be included in the wait loop. To fix this, a list
needs to be kept of instances with matching tags and that list needs to
be added to `instance_ids` before the wait loop.
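A hedged sketch of that fix (variable and helper names assumed, not the module's exact code):
```
def merge_matched_instances(instance_ids, matched_instances):
    """Hypothetical helper: make sure tag-matched instances are part of
    instance_ids so the wait loop can track them; instance_ids may be None
    when only instance_tags was supplied (hence the len(None) above)."""
    ids = list(instance_ids or [])
    for inst in matched_instances:
        if inst.id not in ids:
            ids.append(inst.id)
    return ids
```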
* Ensure unicode characters in zip-compressed filenames work correctly
Another corner-case we are fixing hoping it doesn't break anything else.
This fixes:
- The correct encoding of unicode paths internally (so the filenames we scrape from the output match those returned by zipfile)
- Disable LANG=C for the unzip command (because it breaks the unicode output, unlike on gtar)
* Fix for python3 and other suggestions from @abadger
Before this, all spot instance requests would fail because the code
_always_ called module.fail_json when the parameter was set (which it
always was, because the module parameter's default was set to 'stop').
As the comment said, this parameter doesn't make sense for spot
instances at all, so the error message was also misleading.
* fixes issue where configuration was not being loaded (#4704)
* fixes issue where defaults were not included when argument was set to True
tested on EOS 4.15.4F
When the configuration is compared and the results deserialized, the
dumps() function returns a string. This coerces the return value to a list
in case before and/or after needs to be applied.
fixes #4707
* fixes save argument to be type bool
* now properly sets the returned changed flag based on diff
* updates docstring RETURNS to add backup_path
* removes unneeded state argument
tested on EOS 4.15.4F
Updates the junos_netconf module with changes to load the
NetworkModule instead of the get_module factory method. This
update is part of the 2.2 refactor of network modules
* adds src argument to load configuration from disk
* adds src_format to set the source file format
* adds update argument with choices merge or replace
* deprecates the replace argument in favor of update=replace
This provides support for passing additional positional parameters to async_wrapper.
Additional parameters will be used by the upcoming async support for Windows.
* Python3 fixes to copy, file, and stat so that the connection integration tests can be run
* Forgot to audit the helper functions as well.
* Fix dest to reflect b_dest (found by @mattclay)
* commands argument now accepts a dict argument
* rpcs argument now accepts a dict argument
* waitfor has been renamed to wait_for with an alias to waitfor
* only show commands are allowed when check mode is specified
* config mode is no longer allowed in the command stack
* add argument match with valid values any, all
SELinux has, since 2012, used a configuration file to
convert boolean names from an old name to a new name,
preserving backward compatibility.
However, this has to be done explicitly when using the python
bindings, and the module was not doing it.
The OpenShift Ansible scripts use this construct to detect whether
a boolean exists or not:
    - name: Check for existence of virt_sandbox_use_nfs seboolean
      command: getsebool virt_sandbox_use_nfs
      register: virt_sandbox_use_nfs_output
      failed_when: false
      changed_when: false

    - name: Set seboolean to allow nfs storage plugin access from containers(sandbox)
      seboolean:
        name: virt_sandbox_use_nfs
        state: yes
        persistent: yes
      when: virt_sandbox_use_nfs_output.rc == 0
On a system where virt_sandbox_use_nfs does not exist, this works. But
on a system where virt_sandbox_use_nfs is an alias for virt_use_nfs (as on
Fedora 24), this fails because the seboolean module is not aware of the alias.
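A minimal sketch of the idea, assuming the libselinux python bindings expose `selinux_boolean_sub()` (the alias-substitution call added to libselinux around 2012); older bindings may lack it, hence the fallback:
```
import selinux

def resolve_boolean_name(name):
    # Map an old/alias boolean name (e.g. virt_sandbox_use_nfs) to its current
    # name (e.g. virt_use_nfs) via the booleans.subs_dist configuration file.
    try:
        return selinux.selinux_boolean_sub(name)
    except AttributeError:
        return name  # bindings too old to know about aliases
```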
On python3, we want to use the surrogateescape error handler if
available for filesystem paths and the like. On python2, we have to use
strict in these circumstances. Use the new error strategy for to_text,
to_bytes, and to_native that allows this.
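Usage looks roughly like this (Ansible's module_utils text helpers; the path is just an example):
```
from ansible.module_utils._text import to_bytes, to_text

# 'surrogate_or_strict' picks surrogateescape on python3 and falls back to
# strict on python2, matching the behavior described above.
b_path = to_bytes('/tmp/some-path', errors='surrogate_or_strict')
path = to_text(b_path, errors='surrogate_or_strict')
```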
In Ansible 2.x this module gives `changed = True` for all privileges
that are specified including a table with
priv: "database.table:GRANT"
Mysql returns escaped names in the format
`database`.`tables`:GRANT
However in PR #1358, which was intended to support dotted database names
(a crazy idea to begin with), the quotes for the table name were left
out, leading to `curr_priv != new_priv`.
This means that the idempotency comparison between new_priv and
curr_priv always reports 'changed'.
This PR re-introduces quoting to the table part of the priv.
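A rough sketch of the quoting being restored (helper name hypothetical; the real module handles more edge cases such as wildcards and dotted database names):
```
def quote_priv_name(db_table):
    """Hypothetical helper: quote both parts so the computed priv matches
    MySQL's escaped form, e.g. database.table -> `database`.`table`."""
    db, table = db_table.rsplit('.', 1)
    def q(part):
        return part if part == '*' else '`%s`' % part.strip('`')
    return '%s.%s' % (q(db), q(table))

print(quote_priv_name('database.table'))  # `database`.`table`
```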
dicts no longer have an iteritems method in Python 3; it was replaced
by items, so we need to use six.
Traceback (most recent call last):
  File "/tmp/ansible_hjd7d65c/ansible_module_mysql_user.py", line 587, in <module>
    main()
  File "/tmp/ansible_hjd7d65c/ansible_module_mysql_user.py", line 571, in main
    changed = user_add(cursor, user, host, host_all, password, encrypted, priv, module.check_mode)
  File "/tmp/ansible_hjd7d65c/ansible_module_mysql_user.py", line 239, in user_add
    for db_table, priv in new_priv.iteritems():
AttributeError: 'dict' object has no attribute 'iteritems'
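The fix pattern, roughly, is to go through six so the same loop works on both python versions:
```
from ansible.module_utils.six import iteritems

new_priv = {'`db`.`table`': 'GRANT', '*.*': 'USAGE'}
# iteritems() maps to dict.iteritems() on python2 and dict.items() on python3.
for db_table, priv in iteritems(new_priv):
    print(db_table, priv)
```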
Using something like:
  - name: Create ssh keys
    user:
      name: root
      generate_ssh_key: yes
    register: key
results in this traceback on Fedora 24:
Traceback (most recent call last):
  File "/tmp/ansible_jm5d4vlh/ansible_module_user.py", line 2170, in <module>
    main()
  File "/tmp/ansible_jm5d4vlh/ansible_module_user.py", line 2108, in main
    (rc, out, err) = user.modify_user()
  File "/tmp/ansible_jm5d4vlh/ansible_module_user.py", line 660, in modify_user
    return self.modify_user_usermod()
  File "/tmp/ansible_jm5d4vlh/ansible_module_user.py", line 417, in modify_user_usermod
    has_append = self._check_usermod_append()
  File "/tmp/ansible_jm5d4vlh/ansible_module_user.py", line 405, in _check_usermod_append
    lines = helpout.split('\n')
TypeError: a bytes-like object is required, not 'str'
Traceback (most recent call last):
  File "/tmp/ansible_csqv781s/ansible_module_systemd.py", line 374, in <module>
    main()
  File "/tmp/ansible_csqv781s/ansible_module_systemd.py", line 263, in main
    for line in out.split('\n'): # systemd can have multiline values delimited with {}
* adds support for default facts subset
* adds support for config facts subset
* maintain legacy facts from ops_facts pre-2.2
Tested on Openswitch 0.4.0
* add src argument to provide path to config file
* add new choice to match used to ignore current running config
* add update argument with choices merge or check
* add backup argument to backup current running config to control host
* add save argument to save current running config to startup config
* add state argument to control state of config file
* deprecated force argument, use match=none instead
Note: this module only supports transport=cli
Tested on OpenSwitch 0.4.0
* commands argument now accepts a dict argument
* waitfor has been renamed to wait_for with an alias to waitfor
* only show commands are allowed when check mode is specified
* config mode is no longer allowed in the command stack
* add argument match with valid values any, all
Tested on OpenSwitch 0.4.0
* add support for vrf configurations
* add support for configuring the qos value for eapi
* add config argument to specify the device running-config
Tested on EOS 4.15.4F
IOS devices only support a single command output format, structured
text. This removes the ability to specify the command output format
when providing complex arguments to the commands argument.
Due to a mixup of the group/role/user and policy names, policies with
the same name as the group/role/user they are attached to would never be
updated after creation. To fix that, we needed two changes to the logic
of policy comparison:
- Compare the new policy name to *all* matching policies, not just the
first in lexicographical order
- Compare the new policy name to the matching ones, not to the IAM
object the policy is attached to
We got an error while switching to an existing local branch
because the git module cannot find the branch in get_branches()
if color.branch=always is set in the git config.
One of the usual issues is that run_command returns bytes,
so we have to either adapt our strings to be bytes too,
or convert the output to text.
This results in this kind of traceback:
Traceback (most recent call last):
  File "/tmp/ansible_ej32yu2w/ansible_module_git.py", line 1009, in <module>
    main()
  File "/tmp/ansible_ej32yu2w/ansible_module_git.py", line 873, in main
    git_version_used = git_version(git_path, module)
  File "/tmp/ansible_ej32yu2w/ansible_module_git.py", line 788, in git_version
    rematch = re.search('git version (.*)$', out)
  File "/usr/lib64/python3.5/re.py", line 173, in search
    return _compile(pattern, flags).search(string)
TypeError: cannot use a string pattern on a bytes-like object
Another issue is filter() returning a filter object instead of a list on python3.
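Both issues boil down to the same kind of change (a sketch, not the exact module code): decode run_command() output to text before applying string patterns, and materialize filter() results:
```
import re
from ansible.module_utils._text import to_native

def parse_git_version(out):
    out = to_native(out)  # run_command() hands back bytes on python3
    match = re.search('git version (.*)$', out)
    return match.group(1) if match else None

branches = list(filter(None, ['master', '', 'devel']))  # filter() is lazy on python3
```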
* key maps are now frozenset instead of dict objects
* FactsBase now includes utility functions for transforming json data structures
Tested on NXOS 7.3(0)D1(1)
* adds support for std network facts
* adds support for default facts subset
* adds support for config facts subset
* adds support for interface facts subset
* adds support for hardware facts subset
Tested on IOS-XR 6.0.0
* adds support for std network facts
* adds support for default facts subset
* adds support for config facts subset
* adds support for interface facts subset
* adds support for hardware facts subset
* maintains backwards compatibility with the 2.1 facts module
Tested on NXOS 7.3(0)D1(1)
* adds support for std network facts
* adds support for default facts subset
* adds support for config facts subset
* adds support for interface facts subset
* adds support for hardware facts subset
Tested on EOS 4.15.4F
* add src argument to provide path to config file
* add new choice to match used to ignore current running config
* add update argument with choices merge, replace or check
* add backup argument to backup current running config to control host
* add comment argument to provide comment to commit
* deprecated force argument, use match=none instead
Lineinfile deals heavily with Unicode text files. It makes some sense to deal
with it all as byte strings, so there is a lot of work done here to
show that we're dealing with byte strings throughout.
The ssh_public_keys parameter must be a list, otherwise it will give the error:
"argument ssh_public_keys is of type <type 'dict'> and we were unable to convert to list"
* Improve the correct handling of gtar and unzip options
Add the option --show-transformed-names when extra_opts is being used
Ignore bogus warnings related to empty filenames
Properly quote _and_ escape filenames for unzip command
Rewrite gtar options and provide run_command with array, not string
This fixes #2480 and #4109.
* Make check-mode work for zip-files
Check-mode was disabled for zip-files since gtar did not support it.
This change enables check-mode support for zip-files, but does skip the task when used with gtar.
(Best of both worlds)
Also remove unused compress_mode variable.
This replaces PR #4401, the changes overlap somewhat so I merged them
* commands argument now accepts a dict argument
* waitfor has been renamed to wait_for with an alias to waitfor
* only show commands are allowed when check mode is specified
* config mode is no longer allowed in the command stack
* add argument match with valid values any, all
* commands argument now accepts a dict argument[1]
* waitfor has been renamed to wait_for with an alias to waitfor
* only show commands are allowed when check mode is specified
* config mode is no longer allowed in the command stack
* add argument match with valid values any, all
[1] The commands argument will now accept a dict argument that can
specify the output format of the command. To specify a dict argument
use the form of { command: <str>, output: <str>, prompt: <str>,
response: <str> }.
* arguments for vyos_config for 2.2 are now complete
* adds loading config file from disk (src argument)
* removes unsupported rollback argument
* changes update_config to update with options merge or check
* changes backup_config to backup
* add state argument for state of configuration file
* adds backup argument to backup current configuration
* adds save argument to control if active config is saved to disk
* adds comment argument for setting commit comment
* adds match argument to control configuration match
Tested with VyOS 1.7
* Import module_utils at the top
* Fix python3 by marking literals combined with stdout/stderr as byte
literals
* Mark parameters as type=path where appropriate
* FreeBSD does not support --omit-header and --absolute-names
* The option for following symlinks with getfacl is different on FreeBSD
* ZFS on FreeBSD uses NFSv4 ACLs, which use a slightly different syntax
* FreeBSD does not have a --test flag, so always return 'True'
* FreeBSD does not have the --omit-headers option, so we have to filter it ourselves
* Mark FreeBSD as working for the acl module
* commands argument now accepts a dict argument[1]
* waitfor has been renamed to wait_for with an alias to waitfor
* only show commands are allowed when check mode is specified
* config mode is no longer allowed in the command stack
* add argument match with valid values any, all
[1] The commands argument will now accept a dict argument that can
specify the output format of the command. To specify a dict argument
use the form of { command: <str>, output: <str>, prompt: <str>,
response: <str> }. Command and output are required arguments. Output
accepts valid values text and json.
* add src argument to provide path to config file
* add new choice to match used to ignore current running config
* add update argument with choices merge, replace or check
* add backup argument to backup current running config to control host
* add defaults argument to control collection of config with or without defaults
* add save argument to save current running config to startup config
* add state argument to control state of config file
* deprecated force argument, use match=none instead
The CommandRunner will not allow duplicate commands to be added to the
command stack. This fix catches the exception and continues when an
attempt is made to add a duplicate command to the runner instance.
* commands argument now accepts a dict argument[1]
* waitfor has been renamed to wait_for with an alias to waitfor
* only show commands are allowed when check mode is specified
* config mode is no longer allowed in the command stack
* add argument match with valid values any, all
[1] The commands argument will now accept a dict argument that can
specify the output format of the command. To specify a dict argument
use the form of { command: <str>, output: <str>, prompt: <str>,
response: <str> }. Command and output are required arguments. Output
accepts valid values text and json.
* add src argument to provide path to config file
* add new choice to match used to ignore current running config
* add update argument with choices merge or check
* add backup argument to backup current running config to control host
* add defaults argument to control collection of config with or without defaults
* add save argument to save current running config to startup config
* add state argument to control state of config file
* deprecated force argument, use match=none instead
* merge changes from ios shared module functions into ios_config.
* add src argument to provide path to config file
* add new choice to match used to ignore current running config
* add update argument with choices merge or check
* add backup argument to backup current running config to control host
* add defaults argument to control collection of config with or without defaults
* add save argument to save current running config to startup config
* add state argument to control state of config file
* deprecated force argument, use match=none instead
* commands argument now accepts a dict argument[1]
* waitfor has been renamed to wait_for with an alias to waitfor
* only show commands are allowed when check mode is specified
* config mode is no longer allowed in the command stack
* add argument match with valid values any, all
[1] The commands argument will now accept a dict argument that can
specify the output format of the command. To specify a dict argument
use the form of { command: <str>, output: <str>, prompt: <str>,
response: <str> }. Command and output are required arguments. Output
accepts valid values text and json.
Importing a (sign-only) subkey with the apt_key module always fails,
however the actual keyring gets created and contains the correct keys.
Apparently the all_keys function skips the subkeys, hence the problem.
Fixes #4365
* make HEAD parsing more robust
* Fail the module for any splitter errors
* fix combining depth and version on filepath urls by prepending file://
Addresses #907
* Made some changes to make branch name determination more reliable (it may contain slashes now).
* Determination of branch name more reliable, as per comment on PR #907
- Removed required_if.
- Fixed doc strings.
- Removed debug output being appended to actions.
- Put import of basics at bottom to be consistent with other docker modules
- Added 'containers' alias to 'connected' param
- Put facts in ansible_facts.ansible_docker_network
* Git: Determine if remote URL is being changed
Ansible reported there were no changes when only the remote URL for a
repo was changed. This properly tracks and reports when the remote URL
for a repo changes.
Fixes #4006
* Fix handling of local repo paths
* Git: Use newer method for fetching remote URL
* Git: use ls-remote to fetch remote URL
Using ls-remote to fetch remote URL is supported in earlier versions
of Git compared to using remote command.
* Maintain previous behavior for older Git versions
Previously, whether or not the remote URL changed was not factored
into the command's changed status. Git versions prior to 1.7.5 lack the
functionality used for fetching a repo's remote URL, so these versions
will update the remote URL without affecting the changed status.
When you try to unarchive remote files with the option copy=no the code always fails, as evidenced in issue #4202. That happens because the conditional "if remote_src=no or copy=yes" will always be true, since their default values are remote_src=no and copy=yes.
The modification is simply to change the condition from or to and; that way it is only true when both variables keep their default values, and otherwise you can unarchive remote files.
* Add diffmode support to git module
This patch adds missing diffmode support to the git module.
* Remodel get_diff() and calls to it
As proposed by @abadger
* Ensure we fetch the required object before performing a diff
Also we handle the return code ourselves, so don't leave this up to run_command().
Now that there is a general purpose `Fact` helper to detect if systemd
is active, we can rely on that to apply SystemdStrategy.
Detecting the presence of systemd at runtime is more reliable than
distribution-version-based heuristics (e.g. Debian and Ubuntu allow the
user to change the default init system, Gentoo allows switching as
well, and so on).
A capital "S" appears when the the setuid or setgid bit are set but have no effect. Likewise, a capital "T" appears when the sticky bit is set but it has no effect.
During check mode (`--check`), the variable changed could be
used uninitialized, yielding this error:
`UnboundLocalError: local variable 'changed' referenced before assignment`
This changeset simply initializes it to False.
* error handling for importing non-existent db
* creating db on import state and suitable message on deleting db
* handling all possible cases when db exists/not-exists
* Check mode fixes for ec2_vpc_net module
Returns VPC object information
Detects state change for VPC, DHCP options, and tags in check mode
* Early exit on VPC creation in check mode
The default VPC egress rule was being left in the egress rules for
purging in check mode. This ensures that the module returns the correct
changed state during check mode.
By default, ssh-keygen will pick a suitable default key size
for all types of keys. By hardcoding the number of bits to the
RSA default, we make life harder for people picking elliptic
curve keys, so this commit makes ssh-keygen use its own default
unless specified otherwise by the playbook.
sysrc(8) does not exit with non-zero status when encountering a
permission error.
By using service(8) `service <name> enabled`, we now check the actual
semantics expressed through calling sysrc(8), i.e. we check if the
service enablement worked from the rc(8) system's perspective.
Note that in case service(8) detects the wrong value is still set,
we still include the sysrc(8) output in the fail_json() call, so
the user can derive the exact reason of failure from that output.
AWS security groups are unique by name only within a VPC (restated, the VPC
and group name form a unique key).
When attaching security groups to an ELB, the ec2_elb_lb module would
erroneously find security groups of the same name in other VPCs, causing
an error stating as much.
To eliminate the error, we check whether we are attaching subnets (implying
that we are in a VPC), grab the vpc_id of the 0th subnet, and filter
the list of security groups on this VPC. In other cases, no such filter
is applied (filters=None).
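A hedged boto 2 sketch of that filter (connection and helper names assumed, not the module's exact code):
```
def candidate_security_groups(ec2_connection, vpc_connection, subnets):
    """Hypothetical helper: fetch the security groups that name matching
    should be performed against."""
    filters = None
    if subnets:  # attaching subnets implies we are operating inside a VPC
        vpc_id = vpc_connection.get_all_subnets(subnet_ids=[subnets[0]])[0].vpc_id
        filters = {'vpc-id': vpc_id}
    # In the non-VPC case no filter is applied (filters=None), as before.
    return ec2_connection.get_all_security_groups(filters=filters)
```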
EC2 Security Group names are unique given a VPC. When a group_name
value is specified in a rule, if the group_name does not exist in the
provided vpc_id it should create the group as per the documentation.
The groups dictionary uses group names as keys, so it is possible to
find a group with the desired name in another VPC. This causes
an error because the security group being acted on and the security group
referenced in the rule are in two different VPCs.
To prevent this issue, we check to see if vpc_id is defined and if so
check that VPCs match, else we treat the group as new.
While from the documentation[1] one would assume that CAPABILITY_NAMED_IAM
can simply replace CAPABILITY_IAM, this has empirically been shown
not to be the case.
1: "If you have IAM resources, you can specify either capability. If you
have IAM resources with custom names, you must specify
CAPABILITY_NAMED_IAM."
http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_CreateStack.html
Previously, when the attributes of a GCE firewall changed, they were ignored. This PR changes that behavior and now updates them.
Note that the "update" also removes attributes that are not specified.
An overview of the firewall rule behavior is as follows:
1. firewall name in GCP, state=absent in PLAYBOOK: Delete from GCP
2. firewall name in PLAYBOOK, not in GCP: Add to GCP.
3. firewall name in GCP, name not in PLAYBOOK: No change.
4. firewall names exist in both GCP and PLAYBOOK, attributes differ: Update GCP to match attributes from PLAYBOOK.
The current module fails when it tries to assign floating IPs to a server
that already has them, and either fails or reports "changed=True" when no
IP was added.
Removing a floating IP doesn't require the address;
the server name/id is enough to remove it.
This parameter was actually added in 2.0. It's just that the
documentation in previous versions of the module was wrong (it said the
name was "network" rather than "name"). I've renamed the parameter in
the documentation of prior versions so ansible-module-validate should no
longer think that this is a new parameter.
The module would raise a KeyError trying to find the save_config key
which is not present in the argument_spec. This was caused by the
check_args() function. Since the ios shared argument spec isn't used
the check_args function is not needed and has been removed.
This removes the get_module() factory function and directly creates
an instance of NetworkModule. This commit includes some minor clean
up to transition to the ios shared module for 2.2
The shade update_router() call will return None if the router is
not actually updated. This will cause the module to fail if we
do not protect against that.
* using check mode will now block all commands except show commands
* module will no longer allow config mode commands
* check args for unused values and issue warning
This refactors the ios_config module to use the network module added
in 2.2 to simplify common network functions
new features
* add src, dest arguments for working with config
* results now return flag if the config was saved or not
* adds append argument for updating the dest file (when dest is used)
The os_server module could automatically generate a floating IP for
the user with auto_ip=true, but we didn't allow for this FIP to be
automatically deleted when deleting the instance, which is a bug.
Add a new option called delete_fip that enables this.
* Revert PR #3575 since it causes problems related to exclude patterns
By using a different method for getting archive filelists and extracting, we introduced new problems related to excluding based on gtar patterns.
As a result, files that would be excluded by gtar would still be in the filelist. Implementing our own gtar-compatible pattern exclusion mechanism is near to impossible (believe me, we looked at it...). The best way is to look at the original problem and deal with that, and ensure that extraction and filelists are done with the exact same tool and exact same options.
The solution is to decode the octal unicode representation in gtar's output back to unicode. Since gtar has no problem extracting these files in LANG=C, we simply has to compensate for it.
This reverts #3575 and fixes #11348.
* Implement codecs.escape_decode() instead of decode("string_escape") for python3
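The decoding step in isolation looks roughly like this (the filename is just an example):
```
import codecs

listed = b'caf\\303\\251.txt'   # how gtar prints "café.txt" under LANG=C
real = codecs.escape_decode(listed)[0].decode('utf-8')
print(real)                     # café.txt -- now matches the file gtar actually extracts
```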
* remove unused variables
* fetch branch name instead of HEAD
fix #3782, which was introduced by f1bacc1d3f
* disable git depth option for old git versions
fixes #3782
git support for `--depth` did not fully work in old git versions (before 1.8.2)
fall back to full clones/fetches on those versions
* raise required git version to 1.9.1 for depth option
* use correct depth argument in switch_version
* service module: use sysrc on FreeBSD
sysrc(8) is the designated userland program to edit rc files on FreeBSD.
It first appeared in FreeBSD 9.2, hence is available on all supported
versions of FreeBSD.
Side effect: fixes#2664
* Incorporate changes suggested by bcoca.
- Use `get_bin_path` to find sysrc binary.
- Only use sysrc when available (support for legacy versions of FreeBSD)
Without this, ansible 2.1 will convert some arguments that are
meant to be dict or list type to their str representation.
Signed-off-by: Patrick Uiterwijk <puiterwijk@redhat.com>
Change the file mode arg to 'raw' ala file args
Following the file_common_args model, change the
type of the 'mode' arg here to type='raw' with no
default arg value.
The default mode for file creation is the module
constant DEFAULT_SOURCES_PER, and is used if no
mode is specified.
A default mode of 0644 (and not specified as int or str)
would get converted to the integer 420, which is then treated as
octal, resulting in the sources file being created with mode '0420'
instead of '0644'.
Fixes #16370
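In miniature, the conversion problem described above:
```
default_mode = 0o644                 # an int default; its decimal value is 420
print(default_mode)                  # 420
misread = int(str(default_mode), 8)  # "420" re-read as an octal string
print('%o' % misread)                # 420 -> the sources file ends up as mode 0420, not 0644
```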
The `source_dest_check` and `termination_protection` variables are being
assigned twice in ec2.py, likely due to an incorrect merge somewhere
along the line.
* A few more sanity checks for detecting unzip output that's not a file entry
Also note that there's a rounding error somewhere in the mtime
comparison code.
* Fix reference to sub-array
Because the async_status module will read from the same file that
the async_wrapper module is writing, it's possible that the file
may not be fully synced during a read, causing spurious failures.
Use a temp file to do an atomic operation on the file. We can't
use atomic_move() here as that doesn't work properly under async.
Also, let's not read concurrently from the same file the subprocess
is writing to. Instead, capture stdout/stderr via PIPE and write to
the file to avoid nasty races.
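A sketch of the atomic-update idea (names assumed, not the module's exact code): write the job status to a temporary file in the same directory and rename it into place, so async_status never sees a half-written document.
```
import json
import os
import tempfile

def write_job_status(path, status):
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path))
    try:
        with os.fdopen(fd, 'w') as f:
            json.dump(status, f)
        os.rename(tmp_path, path)  # atomic on POSIX within one filesystem
    except Exception:
        os.unlink(tmp_path)
        raise
```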
* This adds support the CommandRunner to handle executing commands on
the remote device.
* It also changes the waitfor argument to wait_for to remain compatible
with other modules and adds an alias for waitfor.
* Restricts commands to show commands only when check mode is specified.
* add version_added to wait_for doc string
The IAM group modules were not receiving the `module` object, but they
use `module.fail_json()` in their exception handlers. This patch passes
through the module object so the real errors from boto are exposed,
rather than errors about "NoneType has no method `fail_json`".
$source check causes:
FAILED! => {"changed": false, "failed": true, "msg": "A parameter cannot be found that matches parameter name 'Source'."}
$Params.Remove causes:
FAILED! => {"changed": false, "failed": true, "msg": "Method invocation failed because [System.Management.Automation.PSCustomObject] does not contain a method named 'Remove'."}
* Change documented options for os_networks_facts
os_network_facts currently lists 'network' as an available option, taking the Name or ID. In Ansible 2.0.2 to 2.2.0, this is not valid. Options 'name' and 'id' should be used instead.
* Update os_networks_facts.py
* Update os_networks_facts.py
Set version_added to the only accepted value
* Update os_networks_facts.py
Removed inappropriate 'ID' parameter
When `version` is not specified, it defaults to "HEAD". "HEAD" is not a
remote tag, and it's not listed in the output of get_branches(), so we'd
keep repo_updated at the default value (None) and then return early with
changed=True in --check mode, even when before == after.
Fixes #3024.
Ceph Object Gateway (Ceph RGW) is an object storage interface built on top of
librados to provide applications with a RESTful gateway to Ceph Storage
Clusters:
http://docs.ceph.com/docs/master/radosgw/
This patch adds the required bits to use the RGW S3 RESTful API properly.
Signed-off-by: Javier M. Mellid <jmunhoz@igalia.com>
Prior to Python 2.5, SystemExit was a subclass of Exception.
In Py2.4, this is causing extra error output on valid sys.exit(0).
(Toshio) Call sys.exit from inside of the SystemExit exception handler so py2.4 and py2.5+ behaviour matches
* Fixing compile time errors in regard to a) exception handling for Python 3 in util, also: b) problematic octal usage (fixed) and c) print json_dump -> print(json_dump(xyz)) ... et al
* This code was not Python 2.4 compliant. Octal codes and exception handling are now working with Py 2.4, 2.6, & 3.5.
* Fixing formating (or rather reverting an non 2.4 compatible change). Works in compile & runtime checking.
* a) revert to use print sys.stderr not fail_json; b) fixed var name in exception
* Python 3 compatible print (print >>sys.stderr will generate a TypeError - now uses sys.stderr.write instead).
The "Developing Modules" documentation states:
Include a minimum of dependencies if possible. If there are
dependencies, document them at the top of the module file, and have
the module raise JSON error messages when the import fails.
When docker_service runs on a remote host without PyYAML it crashes with
ImportError.
This patch raises a JSON error message when import fails, but only if
the PyYAML module is actually used. It's only needed when the
"definition" parameter is given.
Signed-off-by: Stefan Hajnoczi <stefanha@gmail.com>
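The pattern, roughly as applied here (module and parameter handling simplified, names illustrative):
```
try:
    import yaml
    HAS_YAML = True
except ImportError:
    HAS_YAML = False

def load_definition(module, definition):
    # Only the 'definition' parameter needs PyYAML, so the check lives on this path.
    if not HAS_YAML:
        module.fail_json(msg="PyYAML is required when the 'definition' parameter is used")
    return yaml.safe_load(definition)
```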
The example for delete=yes does not specify recursive although it is
required. In addition, the wording for the delete option is confusing
about where files are really deleted from. This should clarify that.
* Update GitHub templates to reflect ansible/ansible
Update the GitHub templates to what is used for some time on ansible/ansible
For more information, see ansible/ansible#15961
* Small improvement from ansible/ansible
* Improve the unzip output scraping
Ensure we capture the complete file (also when it includes spaces).
Drop lines that do not conform (in length) to what we expect (e.g. header/footer).
This fixes #3813
* Fix how split() works
* remove unused variables
* fetch branch name instead of HEAD
fix #3782, which was introduced by f1bacc1d3f
* disable git depth option for old git versions
fixes #3782
git support for `--depth` did not fully work in old git versions (before 1.8.2)
fall back to full clones/fetches on those versions
Reading the entire tar file into memory can result in out-of-memory
conditions such as this traceback:
Traceback (most recent call last):
  File "/tmp/ansible_YELTSu/ansible_module_docker_image.py", line 486, in load_image
    self.client.load_image(image_data)
  File "/usr/local/lib/python2.7/dist-packages/docker/api/image.py", line 147, in load_image
    res = self._post(self._url("/images/load"), data=data)
  ...
  File "/usr/lib/python2.7/httplib.py", line 997, in endheaders
    self._send_output(message_body)
  File "/usr/lib/python2.7/httplib.py", line 848, in _send_output
    msg += message_body
MemoryError
Luckily docker-py's load_image(), which calls requests post(), accepts a
file-like object instead of a string. Pass in the file object to avoid
reading the full file into memory. This allows larger tar files to load
successfully.
Signed-off-by: Stefan Hajnoczi <stefanha@gmail.com>
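The change in miniature (docker-py 1.x style client; the path is just an example):
```
import docker

client = docker.Client()
# Passing the open file object lets requests stream the upload instead of
# holding the whole tar in memory.
with open('/tmp/image.tar', 'rb') as image_tar:
    client.load_image(image_tar)
```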
* This moves the lines in the code that parse the `env` and `env_file` options for docker to the end of the `__init__()` function.
This is needed because the `_check_capabilities` function needs both a working `self.client` and a proper `self.docker_py_versioninfo`.
`_check_capabilities` is used by `ensure_capabilities` which is, in turn, used by `get_environment`.
This means that before this commit, the environment variables could not be loaded because both `self.client` and `self.docker_py_versioninfo` were not set at that time.
This commit fixes that by putting the environment variable parsing after those two.
The default pagination is every 100 items with a maximum of 1000 from
Amazon. This properly uses the marker returned by Amazon to concatenate
the various pages from the results.
This fixes#2440.
* Fixing error exception handling for python. Does not need to be compatible with Python2.4 b/c boto is Python 2.6 and above.
* Fixing compile time errors IRT error exception handling for Python 3.5.
This does not need to be compatible with Python2.4 b/c Boto is Python 2.6 and above.
Fix KeyError: 'prepared' while installing dependencies using deb=<file>.deb.
This error shows up when --diff was not passed and the deb file has dependencies not yet installed.
Closes #3752.
Currently, when writing a user's crontab, ansible calls
crontab <file> -u <user>
This is incorrect according to crontab(1) on both FreeBSD and Linux,
which suggests that the file argument should be last.
At least on FreeBSD, this leads to incorrect cron module behavior which
writes to root's crontab instead of the user's.
Fallback to unzip if zipfile fails and hope that unzip can deal with it
(sites have an easier time upgrading the unzip utility than all of
python).
https://bugs.python.org/issue3997
Fixes #3560
packaging/language/pip.py:
  virtualenv option:
    Mention that the virtualenv is created if it does not exist.
    (Explicit is better than implicit.)
    Mention other relevant options.
  notes:
    initialized -> created
  Wrap long lines.
This is to address this error:
fatal: [site]: FAILED! => {"changed": false, "failed": true, "msg": "Failed to connect to S3: Region does not seem to be available for awsmodule boto.s3. If the region definitely exists, you may need to upgrade boto or extend with endpoints_path"}
Commit 0dd58e9 changed the logic so an exception is thrown (by
`connect_to_aws`) before the `s3 is None` check is performed. This
changes the `None` check to a catch so the old logic can compensate.
This fixes passing the update variable to str()
so that it avoids the exception when ops.dc.read()
returns a dictionary which contains non-string keys.
This is due to the fact that some of the key types in the
OpenSwitch schema are actually defined as integer,
and the ops.dc declarative config module encodes those
as integers inside the dictionary. This could be
the right encoding from the schema point of view,
but someone needs to convert them to strings
somewhere, as JSON keys should be strings.
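A tiny illustration of that conversion (helper name hypothetical):
```
def stringify_keys(obj):
    # JSON object keys must be strings, so integer keys coming out of the
    # declarative-config read get an explicit str() pass.
    if isinstance(obj, dict):
        return dict((str(k), stringify_keys(v)) for k, v in obj.items())
    if isinstance(obj, list):
        return [stringify_keys(v) for v in obj]
    return obj

print(stringify_keys({4: {'name': 'vlan4'}}))  # {'4': {'name': 'vlan4'}}
```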