* add vmware maintenance mode support
* changed version number in documentation
* updated version_added to 2.0 since CI is failing
* changed version to 2.0 due to CI error asking for 2.1
* added RETURN
* updated formatting of return values and added some to clarify actions taken
* Add SQS queue policy attachment functionality
SQS queue has no attribute 'Policy' until one is attached, so this special
case must be handled uniquely
SQS queue Policy can now be passed in as JSON
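A minimal sketch of the special case, using the classic boto Queue API (the helper name and structure here are illustrative, not the module's actual code):
```python
import json

def policy_matches(queue, desired_policy):
    # A fresh SQS queue has no 'Policy' attribute at all until one is
    # attached, so a missing attribute must be read as "no policy set"
    # rather than treated as an error.
    attrs = queue.get_attributes('Policy')
    current = json.loads(attrs['Policy']) if 'Policy' in attrs else None
    return current == desired_policy
```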
Only a small issue in the results.
If the type is ingress, we rely on the IP address, but in the results we also return the network.
Resolving the IP address works without the zone param. But if the IP address is not located in the default zone and the zone param is not set,
the network won't be found, because the default zone was used for the network listing query.
However, since the network param is not used for type ingress, we skip returning the network in the results.
* VMware datacenter module rewritten so it does not hold the pyVmomi context and objects in the AnsibleModule object
fixed exception handling
added datacenter destroy result, moved checks
changed wrong value
wrong value again... need some sleep
* check_mode fixes
* state defaults to present, default changed to true
* module check fixes
a container_config like
```
container_config:
  - "lxc.network.ipv4.gateway=auto"
  - "lxc.network.ipv4=192.0.2.1"
```
might try to override lxc.network.ipv4.gateway in the second entry, as both
entries start with "lxc.network.ipv4".
use a regular expression instead: match the key followed by (optional)
whitespace and an =.
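A minimal sketch of the matching logic (standalone, not the module's actual code):
```python
import re

def line_has_key(line, key):
    # Match the full key followed by optional whitespace and '=', so
    # "lxc.network.ipv4" no longer matches "lxc.network.ipv4.gateway".
    return re.match(r'%s\s*=' % re.escape(key), line) is not None

assert line_has_key('lxc.network.ipv4 = 192.0.2.1', 'lxc.network.ipv4')
assert not line_has_key('lxc.network.ipv4.gateway=auto', 'lxc.network.ipv4')
```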
Signed-off-by: Evgeni Golov <evgeni@golov.de>
before, the following would produce four entries:
```
container_config:
  - "lxc.network.flags=up"
  - "lxc.network.flags =up"
  - "lxc.network.flags= up"
  - "lxc.network.flags = up"
```
let's strip the whitespace and insert only one "lxc.network.flags = up"
into the final config
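A sketch of the normalization (illustrative only):
```python
def normalize(entry):
    # All whitespace variants around '=' reduce to one canonical form,
    # so duplicates collapse to a single config line.
    key, _, value = entry.partition('=')
    return '%s = %s' % (key.strip(), value.strip())

entries = ['lxc.network.flags=up', 'lxc.network.flags =up',
           'lxc.network.flags= up', 'lxc.network.flags = up']
assert set(normalize(e) for e in entries) == {'lxc.network.flags = up'}
```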
Signed-off-by: Evgeni Golov <evgeni@golov.de>
with the default umask, tar will create a world-readable archive of the
container, which may contain sensitive data
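A sketch of the fix: tighten the umask around the archive creation (the paths below are placeholders):
```python
import os
import tarfile

# 0o077 masks out all group/other permission bits, so the archive is
# created 0600 (owner-only) instead of world-readable.
old_umask = os.umask(0o077)
try:
    with tarfile.open('/tmp/container-archive.tar', 'w') as tar:
        tar.add('/var/lib/lxc/mycontainer')  # placeholder container path
finally:
    os.umask(old_umask)
```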
Signed-off-by: Evgeni Golov <evgeni@golov.de>
* do not use a predictable filename for the LXC attach script
* don't use predictable filenames for LXC attach script logging
* don't set a predictable archive_path
this should prevent symlink attacks (see the sketch after this list), which could result in:
* data corruption
* data leakage
* privilege escalation
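The general technique is random, atomically created temp files; a minimal sketch (not the module's actual code):
```python
import os
import tempfile

# mkstemp() creates the file with O_CREAT | O_EXCL and an unpredictable
# name, so an attacker cannot pre-plant a symlink at a guessable path.
fd, attach_script = tempfile.mkstemp(prefix='lxc-attach-script-')
with os.fdopen(fd, 'w') as f:
    f.write('#!/bin/sh\n')
os.chmod(attach_script, 0o700)
```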
otherwise deploying user containers fails, as these require information
from ~/.config/lxc/default.conf, which the LXC tools will load if no
--config was supplied
Signed-off-by: Evgeni Golov <evgeni@golov.de>
The range_search() API was added to the shade library in version
1.5.0 so let's check for that and let the user know they need to
upgrade if they try to use it.
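A sketch of the guard (assuming shade exposes __version__ via pbr; 'module' is the AnsibleModule instance):
```python
from distutils.version import StrictVersion

def check_range_search_support(module, shade):
    # range_search() first appeared in shade 1.5.0; fail cleanly on
    # older releases instead of blowing up with an AttributeError.
    if StrictVersion(shade.__version__) < StrictVersion('1.5.0'):
        module.fail_json(msg='shade >= 1.5.0 is required to use range_search()')
```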
Do a sorted comparison of the list of security groups supplied via `module.params.get('security_groups')` and the list of security groups fetched via `get_sec_group_list(eni.groups)`. This fixes an incorrect "The specified address is already in use" error when the order of security groups in those lists differs.
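A standalone sketch of the comparison:
```python
def groups_differ(requested, attached):
    # Order-insensitive comparison: the same set of groups in a
    # different order must not be reported as a change.
    return sorted(requested) != sorted(attached)

assert not groups_differ(['sg-a', 'sg-b'], ['sg-b', 'sg-a'])
assert groups_differ(['sg-a'], ['sg-a', 'sg-b'])
```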
```
TRACE:
while parsing a block mapping
  in "<string>", line 33, column 13:
    description: resulting state of ...
    ^
expected <block end>, but found ','
  in "lxc_container.RETURN", line 419, column 53:
    ... "/tmp/test-container-config.tar",
ERROR: RETURN is not valid YAML. Line 419 column 53
```
- "action" style invoking is a legacy way to call modules
- the examples were updated to the typical style of calling complex
modules:
ovirt:
parameter1: value1
parameter2: value2
...
Addition of an os_ironic_inspect module to leverage the OpenStack
Baremetal Inspector add-on to Ironic, or an Ironic driver's out-of-band
hardware introspection, if supported and configured.
The os_project module instantiates the OpenStack cloud object
by passing the module params as kwargs.
As the params contain a key named 'domain_id', this is used
as the domain for the OpenStack connection, instead of the domain value
the user specifies in the OSCC clouds.yaml or the OpenStack envvars.
This fix corrects this by popping the 'domain_id' key, so we
keep the value but it is not passed on later via module.params.
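A sketch of the fix (an illustrative helper, not the module's actual code):
```python
def connection_params(module):
    # Pop 'domain_id' from a copy of the params: the value is kept for
    # the project call, but never leaks into the cloud connection kwargs.
    params = dict(module.params)
    domain_id = params.pop('domain_id', None)
    return params, domain_id
```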
Currently the module doesn't explicitly close the file handle. This
wraps the reading of the private key in a try/finally block to ensure
the file is properly closed.
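The pattern, as a standalone sketch:
```python
def read_private_key(key_path):
    f = open(key_path)
    try:
        return f.read()
    finally:
        # Runs whether read() succeeds or raises, so the file
        # descriptor never leaks.
        f.close()
```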
When this module was written back in May 2015, we were using 1.9.x. Being lazy, I stashed the objects that the other functions would need in the module params. What I have noticed is that in 2.0, exit_json tries to jsonify those complex objects and fails. This PR resolves that issue for the vmware_vm_vss_dvs_migrate module.
@kamsz reported this issue in https://github.com/ansible/ansible-modules-extras/pull/1568
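A standalone illustration of the failure mode, using a stand-in class rather than a real pyVmomi object:
```python
import json

class ManagedObjectRef(object):
    """Stand-in for a complex pyVmomi object."""

try:
    # This is effectively what exit_json does in 2.0 when complex
    # objects end up in module.params.
    json.dumps({'result': ManagedObjectRef()})
except TypeError as e:
    print('serialization fails: %s' % e)

# The fix: hand exit_json only plain, JSON-serializable data.
print(json.dumps({'result': str(ManagedObjectRef())}))
```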
Playbook
```
- name: Migrate VCSA to vDS
local_action:
module: vmware_vm_vss_dvs_migrate
hostname: "{{ mgmt_ip_address }}"
username: "{{ vcsa_user }}"
password: "{{ vcsa_pass }}"
vm_name: "{{ hostname }}"
dvportgroup_name: Management
```
Module Testing
```
TASK [Migrate VCSA to vDS] *****************************************************
task path: /opt/autodeploy/projects/emmet/site_deploy.yml:260
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454695546.3-207189190861859 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454695546.3-207189190861859 )" )
localhost PUT /tmp/tmpkzD4pF TO /root/.ansible/tmp/ansible-tmp-1454695546.3-207189190861859/vmware_vm_vss_dvs_migrate
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454695546.3-207189190861859/vmware_vm_vss_dvs_migrate; rm -rf "/root/.ansible/tmp/ansible-tmp-1454695546.3-207189190861859/" > /dev/null 2>&1
changed: [foundation-vcsa -> localhost] => {"changed": true, "invocation": {"module_args": {"dvportgroup_name": "Management", "hostname": "172.27.0.100", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "username": "root", "vm_name": "cscvcatmp001"}, "module_name": "vmware_vm_vss_dvs_migrate"}, "result": null}
```
When this module was written back in May 2015, we were using 1.9.x. Being lazy, I stashed the objects that the other functions would need in the module params. What I have noticed is that in 2.0, exit_json tries to jsonify those complex objects and fails. This PR resolves that issue for the vmware_dvs_portgroup module.
@kamsz reported this issue in https://github.com/ansible/ansible-modules-extras/pull/1568
Playbook
```
- name: Create Management portgroup
local_action:
module: vmware_dvs_portgroup
hostname: "{{ mgmt_ip_address }}"
username: "{{ vcsa_user }}"
password: "{{ vcsa_pass }}"
portgroup_name: Management
switch_name: dvSwitch
vlan_id: "{{ hostvars[groups['foundation_esxi'][0]].mgmt_vlan_id }}"
num_ports: 120
portgroup_type: earlyBinding
state: present
```
Module Testing
```
TASK [Create Management portgroup] *********************************************
task path: /opt/autodeploy/projects/emmet/tasks/deploy/dvs_network.yml:17
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693809.13-142252676354410 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693809.13-142252676354410 )" )
localhost PUT /tmp/tmpeQ8M1U TO /root/.ansible/tmp/ansible-tmp-1454693809.13-142252676354410/vmware_dvs_portgroup
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454693809.13-142252676354410/vmware_dvs_portgroup; rm -rf "/root/.ansible/tmp/ansible-tmp-1454693809.13-142252676354410/" > /dev/null 2>&1
changed: [foundation-vcsa -> localhost] => {"changed": true, "invocation": {"module_args": {"hostname": "172.27.0.100", "num_ports": 120, "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "portgroup_name": "Management", "portgroup_type": "earlyBinding", "state": "present", "switch_name": "dvSwitch", "username": "root", "vlan_id": 2700}, "module_name": "vmware_dvs_portgroup"}, "result": "None"}
```
When this module was written back in May 2015, we were using 1.9.x. Being lazy, I stashed the objects that the other functions would need in the module params. What I have noticed is that in 2.0, exit_json tries to jsonify those complex objects and fails. This PR resolves that issue for the vmware_dvswitch module.
@kamsz reported this issue in https://github.com/ansible/ansible-modules-extras/pull/1568
Playbook
```
- name: Create dvswitch
local_action:
module: vmware_dvswitch
hostname: "{{ mgmt_ip_address }}"
username: "{{ vcsa_user }}"
password: "{{ vcsa_pass }}"
datacenter_name: "{{ mgmt_vdc }}"
switch_name: dvSwitch
mtu: 1500
uplink_quantity: 2
discovery_proto: lldp
discovery_operation: both
state: present
```
Module Testing
```
TASK [Create dvswitch] *********************************************************
task path: /opt/autodeploy/projects/emmet/tasks/deploy/dvs_network.yml:3
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693792.01-113207408596014 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693792.01-113207408596014 )" )
localhost PUT /tmp/tmptb3e2c TO /root/.ansible/tmp/ansible-tmp-1454693792.01-113207408596014/vmware_dvswitch
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454693792.01-113207408596014/vmware_dvswitch; rm -rf "/root/.ansible/tmp/ansible-tmp-1454693792.01-113207408596014/" > /dev/null 2>&1
changed: [foundation-vcsa -> localhost] => {"changed": true, "invocation": {"module_args": {"datacenter_name": "Test-Lab", "discovery_operation": "both", "discovery_proto": "lldp", "hostname": "172.27.0.100", "mtu": 1500, "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "state": "present", "switch_name": "dvSwitch", "uplink_quantity": 2, "username": "root"}, "module_name": "vmware_dvswitch"}, "result": "'vim.dvs.VmwareDistributedVirtualSwitch:dvs-9'"}
```
When this module was written back in May 2015, we were using 1.9.x. Being lazy, I stashed the objects that the other functions would need in the module params. What I have noticed is that in 2.0, exit_json tries to jsonify those complex objects and fails. This PR resolves that issue for the vmware_cluster module.
@kamsz reported this issue in https://github.com/ansible/ansible-modules-extras/pull/1568
Playbook
```
- name: Create Cluster
local_action:
module: vmware_cluster
hostname: "{{ mgmt_ip_address }}"
username: "{{ vcsa_user }}"
password: "{{ vcsa_pass }}"
datacenter_name: "{{ mgmt_vdc }}"
cluster_name: "{{ mgmt_cluster }}"
enable_ha: True
enable_drs: True
enable_vsan: True
```
Module Testing
```
TASK [Create Cluster] **********************************************************
task path: /opt/autodeploy/projects/emmet/site_deploy.yml:188
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693788.92-14097560271233 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693788.92-14097560271233 )" )
localhost PUT /tmp/tmpAJfdPb TO /root/.ansible/tmp/ansible-tmp-1454693788.92-14097560271233/vmware_cluster
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454693788.92-14097560271233/vmware_cluster; rm -rf "/root/.ansible/tmp/ansible-tmp-1454693788.92-14097560271233/" > /dev/null 2>&1
changed: [foundation-vcsa -> localhost] => {"changed": true, "invocation": {"module_args": {"cluster_name": "Foundation", "datacenter_name": "Test-Lab", "enable_drs": true, "enable_ha": true, "enable_vsan": true, "hostname": "172.27.0.100", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "state": "present", "username": "root"}, "module_name": "vmware_cluster"}}
```