* VMware datacenter module rewritten so it does not hold the pyvmomi context and objects in the Ansible module object
fixed exception handling
added datacenter destroy result, moved checks
corrected wrong values
* check_mode fixes
* state defaults to present, default changed to true
* module check fixes
Previously, matching keys in container_config was prefix-based, so

container_config:
- "lxc.network.ipv4.gateway=auto"
- "lxc.network.ipv4=192.0.2.1"

might wrongly override lxc.network.ipv4.gateway with the second entry, as both keys start with "lxc.network.ipv4".
Use a regular expression to find a line that contains (optional) whitespace and an = after the key; see the sketch below.
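A minimal sketch of that matching, assuming line-oriented config entries; `config_lines` and the key are illustrative:

```
import re

key = 'lxc.network.ipv4'
# Require the exact key, then optional whitespace, then '=', so that
# 'lxc.network.ipv4.gateway' no longer matches a query for 'lxc.network.ipv4'.
key_re = re.compile(r'^%s(\s+)?=' % re.escape(key))

config_lines = [
    'lxc.network.ipv4.gateway=auto',
    'lxc.network.ipv4=192.0.2.1',
]
matching = [line for line in config_lines if key_re.match(line)]
# -> ['lxc.network.ipv4=192.0.2.1']; the gateway line is untouched
```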
Signed-off-by: Evgeni Golov <evgeni@golov.de>
Before, the following would produce four entries:
container_config:
- "lxc.network.flags=up"
- "lxc.network.flags =up"
- "lxc.network.flags= up"
- "lxc.network.flags = up"
Let's strip the whitespace and insert only one "lxc.network.flags = up"
into the final config (see the sketch below).
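A short sketch of that normalization; the variable names are illustrative:

```
raw = 'lxc.network.flags = up'

# Strip whitespace around both key and value so all four variants above
# collapse into one canonical entry.
key, value = (part.strip() for part in raw.split('=', 1))
entry = '%s = %s' % (key, value)
assert entry == 'lxc.network.flags = up'
```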
Signed-off-by: Evgeni Golov <evgeni@golov.de>
With the default umask, tar will create a world-readable archive of the
container, which may contain sensitive data.
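A minimal sketch of the mitigation, assuming the archive is created from Python; create_archive() is a hypothetical helper that runs tar:

```
import os

# Tighten the umask around archive creation so the tarball is owner-only.
old_umask = os.umask(0o077)   # new files now default to mode 0600
try:
    create_archive()          # hypothetical helper that invokes tar
finally:
    os.umask(old_umask)       # restore the previous umask
```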
Signed-off-by: Evgeni Golov <evgeni@golov.de>
* do not use a predictable filename for the LXC attach script
* don't use predictable filenames for LXC attach script logging
* don't set a predictable archive_path
This should prevent symlink attacks, which could result in:
* data corruption
* data leakage
* privilege escalation
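An illustrative sketch of avoiding predictable paths with the tempfile module; the prefixes here are assumptions, not the module's actual names:

```
import tempfile

# Unpredictable, mode-0600 file for the attach script (delete=False because
# the script is executed later, not while the handle is open)
attach_script = tempfile.NamedTemporaryFile(prefix='lxc-attach-script-',
                                            delete=False)

# Unpredictable, mode-0700 directory to hold the container archive
archive_dir = tempfile.mkdtemp(prefix='lxc-archive-')
```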
Otherwise deploying user containers fails, as these require information
from ~/.config/lxc/default.conf that the LXC tools will load if no
--config was supplied.
Signed-off-by: Evgeni Golov <evgeni@golov.de>
The range_search() API was added to the shade library in version
1.5.0 so let's check for that and let the user know they need to
upgrade if they try to use it.
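A minimal sketch of such a guard, assuming an AnsibleModule instance named `module` and that shade exposes `__version__`:

```
from distutils.version import StrictVersion
import shade

# Fail early with a clear message instead of an AttributeError from shade.
if StrictVersion(shade.__version__) < StrictVersion('1.5.0'):
    module.fail_json(msg="shade >= 1.5.0 is required to use range_search()")
```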
Do a sorted comparison of the list of security groups supplied via `module.params.get('security_groups')` and the list of security groups fetched via `get_sec_group_list(eni.groups)`. This fixes an incorrect "The specified address is already in use" error when the two lists differ only in order.
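A minimal sketch of the comparison, with names taken from the description above; the update step is a hypothetical placeholder:

```
# Order-insensitive comparison of desired vs. attached security groups.
desired = module.params.get('security_groups')
current = get_sec_group_list(eni.groups)

if sorted(desired) != sorted(current):
    # Only a genuine difference in membership triggers a modification.
    update_security_groups(eni, desired)  # hypothetical update step
```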
TRACE:
while parsing a block mapping
in "<string>", line 33, column 13:
description: resulting state of ...
^
expected <block end>, but found ','
in "lxc_container.RETURN", line 419, column 53:
... "/tmp/test-container-config.tar",
ERROR: RETURN is not valid YAML. Line 419 column 53
- "action" style invoking is a legacy way to call modules
- the examples were updated to the typical style of calling complex
modules:
  ovirt:
      parameter1: value1
      parameter2: value2
  ...
Addition of an os_ironic_inspect module to leverage the OpenStack
Baremetal Inspector add-on to ironic, or an ironic driver's out-of-band
hardware introspection, if supported and configured.
The os_project module instantiates the openstack cloud object
by passing the module params kwargs.
As the params contain a key named 'domain_id', this is used as the domain
for the OpenStack connection, instead of the domain value the user
specifies in the OSCC clouds.yaml or the OpenStack envvars.
This fix corrects that by popping the 'domain_id' key, so we keep the
value but it is not passed on later in module.params.
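A minimal sketch of the fix as described, assuming a shade-based connection built from module.params:

```
# Pop 'domain_id' so the value is kept for the module's own use but no
# longer reaches the connection kwargs built from module.params.
domain_id = module.params.pop('domain_id')

# The domain from clouds.yaml / OpenStack envvars now applies again.
cloud = shade.openstack_cloud(**module.params)
```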
Currently the module doesn't explicitly close the file handle. This
wraps the reading of the private key in a try/finally block to ensure
the file is properly closed.
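A sketch of the pattern described, with `key_path` as an assumed variable holding the private key location:

```
key_file = open(key_path)
try:
    private_key = key_file.read()
finally:
    key_file.close()  # runs even if read() raises, so the handle never leaks
```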
When this module was written back in May 2015 we were using 1.9.x. Being lazy, I passed in as params the objects that the other functions would need. What I have noticed is that in 2.0, exit_json tries to jsonify those complex objects and fails. This PR resolves that issue for the vmware_vm_vss_dvs_migrate module.
@kamsz reported this issue in https://github.com/ansible/ansible-modules-extras/pull/1568
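A hypothetical before/after sketch of the problem; the names are illustrative, not the module's actual code:

```
# Before: a live pyVmomi object ends up among the values exit_json() must
# serialize, which fails under Ansible 2.0.
module.params['content'] = connect_to_api(module)  # vim object, not JSON-serializable

# After: keep pyVmomi objects in local state and return only plain values.
content = connect_to_api(module)
# ... perform the migration using `content` ...
module.exit_json(changed=True)
```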
Playbook
```
- name: Migrate VCSA to vDS
local_action:
module: vmware_vm_vss_dvs_migrate
hostname: "{{ mgmt_ip_address }}"
username: "{{ vcsa_user }}"
password: "{{ vcsa_pass }}"
vm_name: "{{ hostname }}"
dvportgroup_name: Management
```
Module Testing
```
TASK [Migrate VCSA to vDS] ****************************************************
task path: /opt/autodeploy/projects/emmet/site_deploy.yml:260
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454695546.3-207189190861859 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454695546.3-207189190861859 )" )
localhost PUT /tmp/tmpkzD4pF TO /root/.ansible/tmp/ansible-tmp-1454695546.3-207189190861859/vmware_vm_vss_dvs_migrate
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454695546.3-207189190861859/vmware_vm_vss_dvs_migrate; rm -rf "/root/.ansible/tmp/ansible-tmp-1454695546.3-207189190861859/" > /dev/null 2>&1
changed: [foundation-vcsa -> localhost] => {"changed": true, "invocation": {"module_args": {"dvportgroup_name": "Management", "hostname": "172.27.0.100", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "username": "root", "vm_name": "cscvcatmp001"}, "module_name": "vmware_vm_vss_dvs_migrate"}, "result": null}
```
When this module was written back in May 2015 we were using 1.9.x. Being lazy, I passed in as params the objects that the other functions would need. What I have noticed is that in 2.0, exit_json tries to jsonify those complex objects and fails. This PR resolves that issue for the vmware_dvs_portgroup module.
@kamsz reported this issue in https://github.com/ansible/ansible-modules-extras/pull/1568
Playbook
```
- name: Create Management portgroup
local_action:
module: vmware_dvs_portgroup
hostname: "{{ mgmt_ip_address }}"
username: "{{ vcsa_user }}"
password: "{{ vcsa_pass }}"
portgroup_name: Management
switch_name: dvSwitch
vlan_id: "{{ hostvars[groups['foundation_esxi'][0]].mgmt_vlan_id }}"
num_ports: 120
portgroup_type: earlyBinding
state: present
```
Module Testing
```
TASK [Create Management portgroup] *********************************************
task path: /opt/autodeploy/projects/emmet/tasks/deploy/dvs_network.yml:17
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693809.13-142252676354410 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693809.13-142252676354410 )" )
localhost PUT /tmp/tmpeQ8M1U TO /root/.ansible/tmp/ansible-tmp-1454693809.13-142252676354410/vmware_dvs_portgroup
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454693809.13-142252676354410/vmware_dvs_portgroup; rm -rf "/root/.ansible/tmp/ansible-tmp-1454693809.13-142252676354410/" > /dev/null 2>&1
changed: [foundation-vcsa -> localhost] => {"changed": true, "invocation": {"module_args": {"hostname": "172.27.0.100", "num_ports": 120, "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "portgroup_name": "Management", "portgroup_type": "earlyBinding", "state": "present", "switch_name": "dvSwitch", "username": "root", "vlan_id": 2700}, "module_name": "vmware_dvs_portgroup"}, "result": "None"}
```
When this module was written back in May 2015 we were using 1.9.x. Being lazy, I passed in as params the objects that the other functions would need. What I have noticed is that in 2.0, exit_json tries to jsonify those complex objects and fails. This PR resolves that issue for the vmware_dvswitch module.
@kamsz reported this issue in https://github.com/ansible/ansible-modules-extras/pull/1568
Playbook
```
- name: Create dvswitch
local_action:
module: vmware_dvswitch
hostname: "{{ mgmt_ip_address }}"
username: "{{ vcsa_user }}"
password: "{{ vcsa_pass }}"
datacenter_name: "{{ mgmt_vdc }}"
switch_name: dvSwitch
mtu: 1500
uplink_quantity: 2
discovery_proto: lldp
discovery_operation: both
state: present
```
Module Testing
```
TASK [Create dvswitch] *********************************************************
task path: /opt/autodeploy/projects/emmet/tasks/deploy/dvs_network.yml:3
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693792.01-113207408596014 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693792.01-113207408596014 )" )
localhost PUT /tmp/tmptb3e2c TO /root/.ansible/tmp/ansible-tmp-1454693792.01-113207408596014/vmware_dvswitch
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454693792.01-113207408596014/vmware_dvswitch; rm -rf "/root/.ansible/tmp/ansible-tmp-1454693792.01-113207408596014/" > /dev/null 2>&1
changed: [foundation-vcsa -> localhost] => {"changed": true, "invocation": {"module_args": {"datacenter_name": "Test-Lab", "discovery_operation": "both", "discovery_proto": "lldp", "hostname": "172.27.0.100", "mtu": 1500, "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "state": "present", "switch_name": "dvSwitch", "uplink_quantity": 2, "username": "root"}, "module_name": "vmware_dvswitch"}, "result": "'vim.dvs.VmwareDistributedVirtualSwitch:dvs-9'"}
```
When this module was written back in May 2015 we were using 1.9.x. Being lazy, I passed in as params the objects that the other functions would need. What I have noticed is that in 2.0, exit_json tries to jsonify those complex objects and fails. This PR resolves that issue for the vmware_cluster module.
@kamsz reported this issue in https://github.com/ansible/ansible-modules-extras/pull/1568
Playbook
```
- name: Create Cluster
local_action:
module: vmware_cluster
hostname: "{{ mgmt_ip_address }}"
username: "{{ vcsa_user }}"
password: "{{ vcsa_pass }}"
datacenter_name: "{{ mgmt_vdc }}"
cluster_name: "{{ mgmt_cluster }}"
enable_ha: True
enable_drs: True
enable_vsan: True
```
Module testing
```
TASK [Create Cluster] **********************************************************
task path: /opt/autodeploy/projects/emmet/site_deploy.yml:188
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693788.92-14097560271233 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693788.92-14097560271233 )" )
localhost PUT /tmp/tmpAJfdPb TO /root/.ansible/tmp/ansible-tmp-1454693788.92-14097560271233/vmware_cluster
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454693788.92-14097560271233/vmware_cluster; rm -rf "/root/.ansible/tmp/ansible-tmp-1454693788.92-14097560271233/" > /dev/null 2>&1
changed: [foundation-vcsa -> localhost] => {"changed": true, "invocation": {"module_args": {"cluster_name": "Foundation", "datacenter_name": "Test-Lab", "enable_drs": true, "enable_ha": true, "enable_vsan": true, "hostname": "172.27.0.100", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "state": "present", "username": "root"}, "module_name": "vmware_cluster"}}
```
cloudstack: cs_instance: fix: do not require name to be set, to avoid clashes
Require one of display_name or name. If both are given, name is used as the identifier.
cloudstack: fix name not being case insensitive
cloudstack: cs_template: implement state=extracted
Update f5 validate_certs functionality to do the right thing on multiple Python versions.
This requires the module_utils implementation in
https://github.com/ansible/ansible/pull/13667 to function.
fixed domain_id to actually be supported
also added domain as an alias
alternative fix for #1437
Simplify the code and remove use_unsafe_shell=True.
While there is no security issue with this shell snippet, it is better
not to rely on a shell and to avoid use_unsafe_shell.
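An illustrative sketch of the general pattern with AnsibleModule's run_command helper; the command itself is a placeholder:

```
# Passing argv as a list runs the command without a shell, so
# use_unsafe_shell=True is no longer needed.
rc, out, err = module.run_command(['lxc-ls', '--active'])
```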
Fix for issue #1074. Now able to create a volume without replicas.
Improved fix for #1074. Both None and '' transform to the FQDN.
Fix for ansible-modules-extras issue #1080
New module to retrieve facts about existing instance flavors.
By default, facts on all available flavors will be returned.
This can be narrowed by naming a flavor or specifying criteria
about flavor RAM or VCPUs.
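As a hedged illustration of that narrowing using the shade library directly (the filter values are arbitrary, and this is not the module's actual implementation):

```
import shade

cloud = shade.openstack_cloud()
flavors = cloud.list_flavors()

# Narrow by RAM and VCPU criteria, analogous to the module's filters.
wanted = [f for f in flavors if f['ram'] >= 2048 and f['vcpus'] == 2]
```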
Loop compatibility for dry run exception handling
Route table deletion dry run handler
Fixing regression in propagating_vgw_ids default value
Adjusting truthiness of changed attribute for route manipulation
Updating propagating_vgw_ids default in docstring