* Add git_config module
This module can be used for reading and writing git configuration at all
three scopes (local, global and system). It supports --diff and --check
out of the box.
This module is based on the following gist:
https://gist.github.com/mgedmin/b38c74e2d25cb4f47908
I tidied it up and added support for the following:
- Reading values on top of writing them
- Reading and writing values at any scope
The original author is credited in the documentation for the module.
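For illustration, the underlying approach amounts to shelling out to `git config` with a per-scope flag; a minimal sketch using plain subprocess calls (names here are illustrative, not the module's actual helpers):

```
import subprocess

def read_git_config(name, scope='global', cwd=None):
    # scope is one of: local, global, system
    cmd = ['git', 'config', '--%s' % scope, '--get', name]
    proc = subprocess.Popen(cmd, cwd=cwd,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, _ = proc.communicate()
    return out.decode().strip() if proc.returncode == 0 else None

def write_git_config(name, value, scope='global', cwd=None):
    subprocess.check_call(['git', 'config', '--%s' % scope, name, value], cwd=cwd)
```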
* Respond to review feedback
- Improve documentation by adding choices for parameters and requirements
for the module, and add the missing description for the scope parameter.
- Fail gracefully when git is not installed (following the example of the
puppet module).
- Remove trailing whitespace.
* Change repo parameter to type 'path'
This ensures that all paths are automatically expanded appropriately.
* Set locale to C before running commands to ensure consistent error messages
This is important to ensure error message parsing occurs correctly.
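A sketch of the idea using plain subprocess (the module itself goes through its own command helper): forcing the C locale keeps stderr in English regardless of the host locale, so string matching on error messages stays reliable.

```
import os
import subprocess

env = dict(os.environ, LANG='C', LC_ALL='C', LC_MESSAGES='C')
proc = subprocess.Popen(['git', 'config', '--get', 'user.email'],
                        env=env, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()
# err can now be parsed without worrying about translated messages
```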
* Adjust comment
As is done in other ansible modules, this adds the __main__ check
to the module so that the module code itself can be used as a library,
for instance when testing the code.
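The convention looks like this (a schematic example, not the module's actual body):

```
def main():
    # all module logic is kept inside main()
    pass

# importing this file (e.g. from a test) no longer runs the module;
# only direct execution does
if __name__ == '__main__':
    main()
```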
It's not particularly obvious that removing an application will remove it
from ufw's own state, potentially leaving ports open on your box if you
upload your configuration.
Whilst this applies to a lot of things in Ansible, firewall rules might
cross some sort of line that justifies such a warning in this instance.
Signed-off-by: Chris Lamb <chris@chris-lamb.co.uk>
1) Removed kubectl functionality. We'll move that into a different
module in the future. Also removed post/put/patch/delete options,
as they are not Ansible best practice.
2) Expanded error handling in areas where tracebacks were most likely,
based on bad data from users, etc.
3) Added an 'insecure' option and made the password param optional, to
enable the use of the local insecure port.
4) Allowed the data (both inline and from the file) to support multiple
items via a list. This is common in YAML files where multiple docs
are used to create/remove multiple resources in one shot.
5) General bug fixing.
* Remove support for ancient zypper versions
Even SLES11 has zypper 1.x.
* zypper_repository: don't silently ignore repo changes
So far, when a repo URL changed, this was silently ignored (leading to
incorrect package installations) due to this code:
elif 'already exists. Please use another alias' in stderr:
changed = False
Removing this reveals that we correctly detect that a repo definition
has changes (via repo_subset), but report this not as a change but as a
nonexistent repo. This currently makes us bail out silently in the above
statement.
To fix this, distinguish between non-existent and modified repos and
remove the repo first in case of modifications (since there is no force
option in zypper to overwrite it and 'zypper mr' uses different
arguments).
To do this we have to identify a repo by name, alias or url.
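A minimal sketch of that decision logic, with repos represented as plain dicts purely for illustration (not the module's actual code):

```
def plan_repo_change(existing, desired):
    """Return (changed, actions) for one repo definition.

    existing -- dict describing the configured repo, or None if absent
    desired  -- dict describing the repo from the task
    """
    if existing is None:
        return True, ['add']
    if existing != desired:
        # zypper offers no force/overwrite option for adding, and
        # 'zypper mr' takes different arguments, so drop and re-add the repo
        return True, ['remove', 'add']
    return False, []
```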
* Don't fail on empty values
This unbreaks deleting repositories
* refactor zypper_repository module
* add properties enabled and priority
* allow changing of one property and correctly report changed
* allow overwrite of multiple repositories by alias and URL
* cleanup of unused code and more structuring
* respect enabled option
* make zypper_repository conform to python2.4
* allow repo deletion only by alias
* check for non-existent url field and use alias instead
* remove empty notes and aliases
* add version_added for priority and overwrite_multiple
* add version requirement on zypper and distribution
* zypper 1.0 is enough and exists
* make suse versions note, not requirement
based on comment by @alxgu
* add vmware maintenance mode support
* changed version number in documentation
* updated version_added to 2.0 since CI is failing
* changed version to 2.0 due to CI - error asking for 2.1
* added RETURN
* updated formatting of return values and added some to clarify actions taken
* Support for masquerade settings
Ability to enable and disable masquerade settings from ansible via:
- firewalld: mapping=masquerade state=disabled permanent=true zone=dmz
Placeholder added (mapping) to support masquerade and port_forward
choices initially - port_forward not implemented yet.
* Permanent and Immediate zone handling differentiated
* Corrected naming abstraction for masquerading functionality
Removed mapping tag with port_forward choices - not applicable!
* Added version info for new masquerade option
Pull Request #2017 failing due to missing version info
* Add SQS queue policy attachment functionality
SQS queue has no attribute 'Policy' until one is attached, so this special
case must be handled uniquely
SQS queue Policy can now be passed in as json
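A sketch of how the missing-attribute case can be handled (illustrative helper, not the module's exact code):

```
import json

def policy_needs_update(existing_attributes, desired_policy):
    # A queue with no policy attached has no 'Policy' key at all, so fall
    # back to an empty document instead of raising a KeyError.
    current = json.loads(existing_attributes.get('Policy', '{}'))
    return current != desired_policy
```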
container_config:
- "lxc.network.ipv4.gateway=auto"
- "lxc.network.ipv4=192.0.2.1"
might try to override lxc.network.ipv4.gateway in the second entry as both
start with "lxc.network.ipv4".
use a regular expression to find a line that contains (optional) whitespace
and an = after the key.
Signed-off-by: Evgeni Golov <evgeni@golov.de>
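A small illustration of the matching rule: the regex is anchored on the full key and only allows optional whitespace before the '=', so a longer key that merely shares the prefix no longer matches.

```
import re

key = 'lxc.network.ipv4'
pattern = re.compile(r'%s\s*=' % re.escape(key))

print(bool(pattern.match('lxc.network.ipv4 = 192.0.2.1')))     # True
print(bool(pattern.match('lxc.network.ipv4.gateway = auto')))  # False: different key
```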
before this change, the following would produce four entries:
container_config:
- "lxc.network.flags=up"
- "lxc.network.flags =up"
- "lxc.network.flags= up"
- "lxc.network.flags = up"
let's strip the whitespace and insert only one "lxc.network.flags = up"
into the final config
Signed-off-by: Evgeni Golov <evgeni@golov.de>
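Roughly, the normalisation boils down to something like this (illustrative only):

```
def normalize_config_line(line):
    key, _, value = line.partition('=')
    return '%s = %s' % (key.strip(), value.strip())

variants = [
    'lxc.network.flags=up',
    'lxc.network.flags =up',
    'lxc.network.flags= up',
    'lxc.network.flags = up',
]
# all four variants collapse to the single entry 'lxc.network.flags = up'
print(set(normalize_config_line(v) for v in variants))
```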
The previous version of my regexp did not take into account packages
such as 'p5-Perl-Tidy' or 'p5-Test-Output', so use a greedy match up to
the last occurrence of '-' for matching the package. This regex has
been extensively tested using all packages as provided by pkgsrc-2016Q1[1].
Footnotes:
[1] http://cvsweb.netbsd.org/bsdweb.cgi/pkgsrc/?only_with_tag=pkgsrc-2016Q1
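For illustration, the greedy match splits on the last '-' only (package version invented for the example):

```
import re

# '.+' is greedy, so everything up to the *last* '-' is the package name
m = re.match(r'^(.+)-([^-]+)$', 'p5-Test-Output-1.03')
print(m.group(1))  # p5-Test-Output
print(m.group(2))  # 1.03
```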
Since user_key and app_token are used for authentication, I
suspect both of them should be kept secret.
According to the API manual, https://pushover.net/api
priority goes from -2 to 2, so the argument should be constrained.
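In Ansible module terms that would look roughly like this (a hedged sketch of the argument spec, not necessarily the module's exact one):

```
# credentials are hidden from logs; priority is limited to the documented range
argument_spec = dict(
    user_key=dict(required=True, no_log=True),
    app_token=dict(required=True, no_log=True),
    priority=dict(required=False, default='0',
                  choices=['-2', '-1', '0', '1', '2']),
)
```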
- make path to pkgin a global and stop passing it around; it's not going
to change while ansible is running
- add support for several new options:
* upgrade
* full_upgrade
* force
* clean
- allow for update_cache to be run in the same task as upgrading/installing
packages instead of needing a separate task for that
Only a small issue in results.
If the type is ingress, we rely on the IP address, but in the results we also return the network.
Resolving the IP address works without zone params. If the IP address is not located in the default zone and the zone param is not set,
the network won't be found, because the default zone was used for the network query listing.
However, since the network param is not used for type ingress, we skip returning the network in the results.
At the moment, this only works when 'enable' equals 'yes' or 'no'.
While I'm on it, I also fixed a typo in the example and added a required
parameter.
* VMware datacenter module rewritten so it does not hold the pyvmomi context and objects in the Ansible module object
fixed exception handling
added datacenter destroy result, moved checks
changed wrong value
wrong value again... need some sleep
* check_mode fixes
* state defaults to present, default changed to true
* module check fixes
Note that since cpanm version 1.6926 its messages are sent to stdout
when previously they were sent to stderr.
Also there is no need to initialize out_cpanm and err_cpanm and
check for their truthiness as module.run_command() and str.find()
take care of that.
* added stdout and stderr outputs
Added stdout and stderr outputs of the results from composer, as the current msg output strips \n, which makes it very hard to read when debugging
* using stdout for fail_json
using stdout for fail_json so we get the stdout_lines array
with the default umask tar will create a world-readable archive of the
container, which may contain sensitive data
Signed-off-by: Evgeni Golov <evgeni@golov.de>
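A sketch of the fix idea with the standard library (paths are illustrative): tighten the umask before writing the archive so the file is created owner-readable only.

```
import os
import tarfile

old_umask = os.umask(0o077)   # files created from here on are owner-only
try:
    tar = tarfile.open('/tmp/container-backup.tar.gz', 'w:gz')
    try:
        tar.add('/var/lib/lxc/mycontainer', arcname='mycontainer')
    finally:
        tar.close()
finally:
    os.umask(old_umask)
```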
* do not use a predictable filename for the LXC attach script
* don't use predictable filenames for LXC attach script logging
* don't set a predictable archive_path
this should prevent symlink attacks which could result in
* data corruption
* data leakage
* privilege escalation
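One common way to get an unpredictable path is to let the OS pick it (a sketch of the general technique, not necessarily what the module ended up doing):

```
import os
import tempfile

# mkstemp creates the file atomically with a random name and 0600 mode,
# so a pre-planted symlink at a guessable path cannot be followed
fd, attach_script_path = tempfile.mkstemp(prefix='lxc-attach-')
os.close(fd)
```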
otherwise deploying user-containers fails, as these require information
from ~/.config/lxc/default.conf that the LXC tools will load if no
--config was supplied
Signed-off-by: Evgeni Golov <evgeni@golov.de>
This change adds a note to the win_scheduled_task module
docs that indicates Windows Server 2012 or later is required.
This is because the module relies on the Get-ScheduledTask
cmdlet, which is a part of the Server 2012 OS. Previous
versions, like Server 2008, simply can't work with this
module.
The range_search() API was added to the shade library in version
1.5.0 so let's check for that and let the user know they need to
upgrade if they try to use it.
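The check can be as simple as comparing versions up front; a sketch, assuming shade exposes __version__ (in the module proper the failure path would be module.fail_json):

```
from distutils.version import StrictVersion

import shade

if StrictVersion(shade.__version__) < StrictVersion('1.5.0'):
    # in the module this would be module.fail_json(msg=...)
    raise SystemExit('shade >= 1.5.0 is required for range_search(); '
                     'please upgrade the shade library')
```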
Addition of an os_ironic_inspect module to leverage the OpenStack
Baremetal inspector add-on to ironic or ironic driver out-of-band
hardware introspection, if supported and configured.
The manual check to see if get_bin_path() returned anything is
redundant, because we pass True to the required parameter of
get_bin_path(). This automatically causes the task to fail if the pacman
binary isn't available. Therefore, the code within the if statement
being removed is never called.
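For context, required=True already produces the failure, so the manual check is dead code; schematically:

```
from ansible.module_utils.basic import AnsibleModule

def main():
    module = AnsibleModule(argument_spec=dict(name=dict(required=True)))
    # with required=True, AnsibleModule fails the task itself when pacman
    # is missing -- no manual 'if not pacman_path: fail_json(...)' needed
    pacman_path = module.get_bin_path('pacman', required=True)
    module.exit_json(changed=False, pacman=pacman_path)

if __name__ == '__main__':
    main()
```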
-e or --execute [1] allows executing a specific piece of Puppet code,
such as a class.
For example, in puppet you would run:
puppet apply -e 'include ::mymodule'
In Ansible this becomes:
puppet: execute='include ::mymodule'
[1] http://docs.puppetlabs.com/puppet/latest/reference/man/apply.html#OPTIONS
win_unzip fails to extract files when either src or dest contains
complex paths such as "..\..\" or "C:\\Program Files" (double slashes).
Fix this by fetching absolute path of both before invoking CopyHere
method.
Set type int for the various port parameters (and so avoid converting them later)
Set no_log=True for the login_password
Verify that db is an int, to avoid a conversion
Do a sorted comparison of the list of security groups supplied via `module.params.get('security_groups')` and the list of security groups fetched via `get_sec_group_list(eni.groups)`. This fixes an incorrect "The specified address is already in use" error if the order of security groups in those lists differs.
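The comparison itself reduces to an order-insensitive check, roughly:

```
def groups_differ(desired, current):
    # ['sg-a', 'sg-b'] and ['sg-b', 'sg-a'] are the same set of groups,
    # so only a real membership difference counts as a change
    return sorted(desired) != sorted(current)

print(groups_differ(['sg-a', 'sg-b'], ['sg-b', 'sg-a']))  # False
print(groups_differ(['sg-a'], ['sg-a', 'sg-b']))          # True
```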
I changed the logic here to always use the 'netsh ... show rule' keywords as keys for the $fwsettings map, while the translation (e.g. Enabled -> enable) is performed when invoking the 'netsh ... add rule' command.
I tested rule creation, and rule creation when the rule already existed, on Windows Server 2012.
Currently the module doesn't explicitly close the file handle. This
wraps the reading of the private key in a try/finally block to ensure
the file is properly closed.
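The pattern is the classic one (a sketch of the shape of the change, not the module's exact code):

```
def read_private_key(path):
    f = open(path, 'rb')
    try:
        return f.read()
    finally:
        # runs even if read() raises, so the handle is never leaked
        f.close()
```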
When passing a package version that parses as a number (e.g. `1.9`), the version should be converted to a string before being concatenated to the package name.
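For illustration (names invented), the problem and fix look like this:

```
name = 'some-package'   # illustrative package name
version = 1.9           # YAML parses an unquoted 1.9 as a float

# name + '-' + version would raise TypeError, so coerce first
package = name + '-' + str(version)   # 'some-package-1.9'
```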
add exit_json code to successfully exit when you want to delete an already
deleted host.
Without this, the playbook fails with
`Specify at least one group for creating host`
which is not a correct message.
New module to retrieve facts about existing instance flavors.
By default, facts on all available flavors will be returned.
This can be narrowed by naming a flavor or specifying criteria
about flavor RAM or VCPUs.
- original parameter comment was probably a copy&paste error
- new comment highlights that firewall rules can be
added or removed by altering this parameter
Session_id is unused in update_session, changed is always explicitly
set in all exit_json calls, and consul_client.session.destroy returns True
or False, which is neither used nor checked later.
TRACE:
while parsing a block mapping
in "<string>", line 33, column 13:
description: resulting state of ...
^
expected <block end>, but found ','
in "lxc_container.RETURN", line 419, column 53:
... "/tmp/test-container-config.tar",
ERROR: RETURN is not valid YAML. Line 419 column 53
- "action" style invoking is a legacy way to call modules
- the examples were updated to the typical style of calling complex
modules:
ovirt:
parameter1: value1
parameter2: value2
...
The os_project module instantiates the openstack cloud object
by passing the module params as kwargs.
As the params contain a key named 'domain_id', this is used
for domain in the OpenStack connection, instead of the domain value
the user specifies on the OSCC clouds.yaml or OpenStack envvars.
This fix corrects this by popping the 'domain_id' key, so we
keep the value but it is not passed on later in module.params.
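The shape of the fix, illustrated on a plain dict (values are illustrative):

```
def split_domain_id(params):
    params = dict(params)   # don't mutate the caller's dict
    # keep the value for our own use, but never forward it as a connection
    # kwarg, where it would be mistaken for the auth domain
    domain_id = params.pop('domain_id', None)
    return domain_id, params

domain_id, cloud_kwargs = split_domain_id({'name': 'demo', 'domain_id': 'default'})
```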
When this module was written back in May 2015 we were using 1.9.x. Being lazy, I passed the objects that the other functions would need in as params. What I have noticed is that in 2.0 exit_json tries to jsonify those complex objects and fails. This PR resolves that issue with the vmware_vm_vss_dvs_migrate module.
@kamsz reported this issue in https://github.com/ansible/ansible-modules-extras/pull/1568
Playbook
```
- name: Migrate VCSA to vDS
local_action:
module: vmware_vm_vss_dvs_migrate
hostname: "{{ mgmt_ip_address }}"
username: "{{ vcsa_user }}"
password: "{{ vcsa_pass }}"
vm_name: "{{ hostname }}"
dvportgroup_name: Management
```
Module Testing
```
TASK [Migrate VCSA to vDS] *****************************************************
task path: /opt/autodeploy/projects/emmet/site_deploy.yml:260
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454695546.3-207189190861859 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454695546.3-207189190861859 )" )
localhost PUT /tmp/tmpkzD4pF TO /root/.ansible/tmp/ansible-tmp-1454695546.3-207189190861859/vmware_vm_vss_dvs_migrate
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454695546.3-207189190861859/vmware_vm_vss_dvs_migrate; rm -rf "/root/.ansible/tmp/ansible-tmp-1454695546.3-207189190861859/" > /dev/null 2>&1
changed: [foundation-vcsa -> localhost] => {"changed": true, "invocation": {"module_args": {"dvportgroup_name": "Management", "hostname": "172.27.0.100", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "username": "root", "vm_name": "cscvcatmp001"}, "module_name": "vmware_vm_vss_dvs_migrate"}, "result": null}
```
When this module was written back in May 2015 we were using 1.9.x. Being lazy, I passed the objects that the other functions would need in as params. What I have noticed is that in 2.0 exit_json tries to jsonify those complex objects and fails. This PR resolves that issue with the vmware_dvs_portgroup module.
@kamsz reported this issue in https://github.com/ansible/ansible-modules-extras/pull/1568
Playbook
```
- name: Create Management portgroup
local_action:
module: vmware_dvs_portgroup
hostname: "{{ mgmt_ip_address }}"
username: "{{ vcsa_user }}"
password: "{{ vcsa_pass }}"
portgroup_name: Management
switch_name: dvSwitch
vlan_id: "{{ hostvars[groups['foundation_esxi'][0]].mgmt_vlan_id }}"
num_ports: 120
portgroup_type: earlyBinding
state: present
```
Module Testing
```
TASK [Create Management portgroup] *********************************************
task path: /opt/autodeploy/projects/emmet/tasks/deploy/dvs_network.yml:17
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693809.13-142252676354410 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693809.13-142252676354410 )" )
localhost PUT /tmp/tmpeQ8M1U TO /root/.ansible/tmp/ansible-tmp-1454693809.13-142252676354410/vmware_dvs_portgroup
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454693809.13-142252676354410/vmware_dvs_portgroup; rm -rf "/root/.ansible/tmp/ansible-tmp-1454693809.13-142252676354410/" > /dev/null 2>&1
changed: [foundation-vcsa -> localhost] => {"changed": true, "invocation": {"module_args": {"hostname": "172.27.0.100", "num_ports": 120, "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "portgroup_name": "Management", "portgroup_type": "earlyBinding", "state": "present", "switch_name": "dvSwitch", "username": "root", "vlan_id": 2700}, "module_name": "vmware_dvs_portgroup"}, "result": "None"}
```
When this module was written back in May 2015 we were using 1.9.x. Being lazy, I passed the objects that the other functions would need in as params. What I have noticed is that in 2.0 exit_json tries to jsonify those complex objects and fails. This PR resolves that issue with the vmware_cluster module.
@kamsz reported this issue in https://github.com/ansible/ansible-modules-extras/pull/1568
Playbook
```
- name: Create Cluster
local_action:
module: vmware_cluster
hostname: "{{ mgmt_ip_address }}"
username: "{{ vcsa_user }}"
password: "{{ vcsa_pass }}"
datacenter_name: "{{ mgmt_vdc }}"
cluster_name: "{{ mgmt_cluster }}"
enable_ha: True
enable_drs: True
enable_vsan: True
```
Module testing
```
TASK [Create Cluster] **********************************************************
task path: /opt/autodeploy/projects/emmet/site_deploy.yml:188
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693788.92-14097560271233 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693788.92-14097560271233 )" )
localhost PUT /tmp/tmpAJfdPb TO /root/.ansible/tmp/ansible-tmp-1454693788.92-14097560271233/vmware_cluster
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454693788.92-14097560271233/vmware_cluster; rm -rf "/root/.ansible/tmp/ansible-tmp-1454693788.92-14097560271233/" > /dev/null 2>&1
changed: [foundation-vcsa -> localhost] => {"changed": true, "invocation": {"module_args": {"cluster_name": "Foundation", "datacenter_name": "Test-Lab", "enable_drs": true, "enable_ha": true, "enable_vsan": true, "hostname": "172.27.0.100", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "state": "present", "username": "root"}, "module_name": "vmware_cluster"}}
```
win_uri uses "Invoke-WebRequest" under the covers, which apparently
uses Internet Explorer to parse a webpage. The problem is that if a user
has never run Internet Explorer, it will be unable to do that. The
workaround for this is to set the "-UseBasicParsing" flag.
The only advantage to having the Internet Explorer parsed page is
that you can then access the DOM as if it were a powershell
object. That doesn't seem super useful for Ansible to be able
to do, so I set the default to be "-UseBasicParsing".
When this module was written back in May 2015 we were using 1.9.x. Being lazy, I passed the objects that the other functions would need in as params. What I have noticed is that in 2.0 exit_json tries to jsonify those complex objects and fails. This PR resolves that issue with the vmware_dvswitch module.
@kamsz reported this issue in https://github.com/ansible/ansible-modules-extras/pull/1568
Playbook
```
- name: Create dvswitch
local_action:
module: vmware_dvswitch
hostname: "{{ mgmt_ip_address }}"
username: "{{ vcsa_user }}"
password: "{{ vcsa_pass }}"
datacenter_name: "{{ mgmt_vdc }}"
switch_name: dvSwitch
mtu: 1500
uplink_quantity: 2
discovery_proto: lldp
discovery_operation: both
state: present
```
Module Testing
```
TASK [Create dvswitch] *********************************************************
task path: /opt/autodeploy/projects/emmet/tasks/deploy/dvs_network.yml:3
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693792.01-113207408596014 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693792.01-113207408596014 )" )
localhost PUT /tmp/tmptb3e2c TO /root/.ansible/tmp/ansible-tmp-1454693792.01-113207408596014/vmware_dvswitch
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454693792.01-113207408596014/vmware_dvswitch; rm -rf "/root/.ansible/tmp/ansible-tmp-1454693792.01-113207408596014/" > /dev/null 2>&1
changed: [foundation-vcsa -> localhost] => {"changed": true, "invocation": {"module_args": {"datacenter_name": "Test-Lab", "discovery_operation": "both", "discovery_proto": "lldp", "hostname": "172.27.0.100", "mtu": 1500, "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "state": "present", "switch_name": "dvSwitch", "uplink_quantity": 2, "username": "root"}, "module_name": "vmware_dvswitch"}, "result": "'vim.dvs.VmwareDistributedVirtualSwitch:dvs-9'"}
```
dnf: name=PACKAGE state=latest is responsible for two use cases:
- to install a package if not already installed.
- to update the package to the latest if already installed.
The latter use case is not handled properly, as base.upgrade does not
throw dnf.exceptions.MarkingError if a package is not installed.
Setting base.conf.best = True ensures a package is installed or
updated to the latest when calling base.install.
Sign-off: jsilhan@redhat.com
Sign-off: jchaloup@redhat.com
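A hedged sketch of the behaviour with python-dnf (package name illustrative; the module's real flow adds repo and GPG handling around this):

```
import dnf

base = dnf.Base()
base.conf.best = True        # insist on the latest available candidate
base.read_all_repos()
base.fill_sack()

# with best=True this installs the package if absent and upgrades it to the
# latest version if an older one is already installed
base.install('nano')
base.resolve()
```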
While returning puppet logs as ansible stdout is useful in some cases,
there are also cases where it's more destructive than helpful. For
those, local logging to syslog so that the ansible logging makes sense
is very useful.
This defaults to stdout so that behavior does not change for people.
The bigip_api method was changed in the module_utils function definition
to include the validate_certs option but the bigip_virtual_server module
was not updated accordingly. This patch updates the method so that the
error message below is not returned to the user
received exception: bigip_api() takes exactly 4 arguments (3 given)
This moves the validation of properties to the zfs command itself. The
properties and their choices were not really correct anyway due to
differences between OpenZFS and Solaris/ZFS.
The patch module has a few missing items and inconsistencies in its
documentation, a few of which are addressed here.
Within Ansible documentation, the choices for boolean values are
commonly 'yes' and 'no'. We standardise the options on that.
'remote_src' documentation uses 'False' and 'True' for its documentation,
so these have been updated in both the choices and default.
'src' documentation refers to 'remote_src', so is updated to use
the 'no' choice.
'backup' did not describe its options and default at all, so we add
them.
'binary' default used 'False', but specified the type as 'bool' which is
implicitly documented as 'yes'/'no', so we make that 'no' as well.
cloudstack: cs_instance: fix do not require name to be set to avoid clashes
Require one of display_name or name. If both are given, name is used as the identifier.
cloudstack: fix name is not case insensitive
cloudstack: cs_template: implement state=extracted
Update f5 validate_certs functionality to do the right thing on multiple python versions
This requires the implementation in the module_utils code here
https://github.com/ansible/ansible/pull/13667 to function
fixed domain_id to actually be supported
also added domain as an alias
alt fixes #1437
Simplify the code and remove use_unsafe_shell=True
While there is no security issue with this shell snippet, it
is better to not rely on shell and avoid use_unsafe_shell.
Fix for issue #1074. Now able to create a volume without replicas.
Improved fix for #1074. Both None and '' transform to fqdn.
Fix for ansible-modules-extras issue #1080
This prevents failing when a playbook describes a volume deletion and
is launched more than once.
Without this fix, if you run the playbook a second time, it will fail.
Loop compatibility for dry run exception handling
Route table deletion dry run handler
Fixing regression in propagating_vgw_ids default value
Adjusting truthiness of changed attribute for route manipulation
Updating propagating_vgw_ids default in docstring
One inconvenience this addresses is the fact that
you have to pass the user's tags even if you just want to
add a permission rule
Signed-off-by: Marian Rusu <rusumarian91@gmail.com>