Commit graph

12778 commits

Author SHA1 Message Date
Brian Coca
864bf4e19e added new puppet path to bin search
fixes #1835
2016-12-08 11:33:54 -05:00
Brian Coca
b1281319c1 updated to match core module forms 2016-12-08 11:33:54 -05:00
Hans-Joachim Kliemeck
bb27e38578 corrected replacement of last backslash 2016-12-08 11:33:54 -05:00
Hans-Joachim Kliemeck
cd0e97dc77 corrected requirements 2016-12-08 11:33:54 -05:00
Hans-Joachim Kliemeck
5f9eaf193e fixed problems related to path input 2016-12-08 11:33:54 -05:00
Hans-Joachim Kliemeck
0d01a36dd9 first implementation of win_share module 2016-12-08 11:33:54 -05:00
Marcin Dobosz
6f68db5c1a Fix win_iis_webapppool module to not null ref when removing an apppool using PS4 2016-12-08 11:33:54 -05:00
nitzmahone
092c3ccbde fix default arg handling and error messages in win_file_version 2016-12-08 11:33:54 -05:00
nitzmahone
db58300aa7 fix missing bracket in win_file_version 2016-12-08 11:33:54 -05:00
Sam Liu
a077c4bc9d fix some errors to pass the CI build. 2016-12-08 11:33:53 -05:00
Sam Liu
b174416895 Fixed: exception swallowing 2016-12-08 11:33:53 -05:00
Sam Liu
d5fe7633e2 new module win_file_version 2016-12-08 11:33:53 -05:00
liquidat
87bc5fcb24 remove legacy action style from examples
- "action" style invoking is a legacy way to call modules
- the examples were updated to the typical style of calling complex
  modules:

ovirt:
  parameter1: value1
  parameter2: value2
  ...
2016-12-08 11:33:53 -05:00
Matt Martz
019a944fb3 Catch errors related to insufficient (old) versions of pexpect. Fixes #13660 2016-12-08 11:33:53 -05:00
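For illustration, a minimal sketch of the kind of guard the commit above describes, assuming the failure surfaces as a TypeError from pexpect.run() when an old release rejects a newer keyword argument (the function name and message are hypothetical, not the module's actual code):
```
def run_with_pexpect(cmd, timeout=30):
    try:
        import pexpect
    except ImportError:
        raise SystemExit("pexpect is not installed")
    try:
        # an old pexpect rejects newer keyword arguments with a TypeError
        return pexpect.run(cmd, timeout=timeout, withexitstatus=True, echo=False)
    except TypeError as exc:
        raise SystemExit("Insufficient (old) version of pexpect installed: %s" % exc)

if __name__ == "__main__":
    output, rc = run_with_pexpect("echo hello")
    print(rc, output)
```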
Will Keeling
876fe06290 Better handling of package groups in pacman module 2016-12-08 11:33:53 -05:00
Jonathan Mainguy
ac8b171da4 fixes bug where puppet fails if logdest is not specified 2016-12-08 11:33:53 -05:00
Toshio Kuratomi
c602d49d42 Fail earlier when the dnf module is not installed, as we use a dnf utility function to determine whether we have permission to install packages. 2016-12-08 11:33:53 -05:00
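A minimal sketch of the ordering described above, assuming the module fails via fail_json when the dnf Python bindings are missing (names are illustrative, not the actual module code):
```
try:
    import dnf  # noqa: F401
    HAS_DNF = True
except ImportError:
    HAS_DNF = False

def check_can_install(fail_json):
    # fail up front: the permission check further down relies on a dnf utility
    # function, so it cannot even run without the bindings present
    if not HAS_DNF:
        fail_json(msg="python dnf bindings are required for this module")
    # ... permission check using dnf's helpers would follow here ...

if __name__ == "__main__":
    def fail_json(msg):
        raise SystemExit(msg)
    check_can_install(fail_json)
    print("dnf bindings available")
```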
Jiri Tyr
bb194b03bc Removing parameter from yum_repository module 2016-12-08 11:33:53 -05:00
Jiri Tyr
709ae10207 Adding more options to the yum_repository module 2016-12-08 11:33:53 -05:00
Eike Frost
85f6bb4d8e Check whether interface-list exists before querying its length 2016-12-08 11:33:53 -05:00
Ricardo Carrillo Cruz
9fea94b5bf Fix instantiation of openstack_cloud object in os_project
The os_project module instantiates the OpenStack cloud object
by passing the module params as kwargs.
As the params contain a key named 'domain_id', that value is used
as the domain for the OpenStack connection, instead of the domain value
the user specifies in the OSCC clouds.yaml or OpenStack envvars.
This fix corrects this by popping the 'domain_id' key, so we
keep its value but it is not passed on later via module.params.
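A minimal sketch of that idea, assuming the connection is built from module params passed as kwargs (the helper name is hypothetical and the shade call is shown commented out):
```
def split_connection_kwargs(module_params):
    params = dict(module_params)               # work on a copy of module.params
    domain_id = params.pop('domain_id', None)  # keep the value for the module's own use
    # cloud = shade.openstack_cloud(**params)  # the connection no longer sees domain_id
    return domain_id, params

if __name__ == "__main__":
    domain_id, conn_kwargs = split_connection_kwargs({'name': 'demo', 'domain_id': 'default'})
    print(domain_id, 'domain_id' in conn_kwargs)   # -> default False
```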
2016-12-08 11:33:53 -05:00
Darek Kaczyński
17a6cea512 ecs_task module documentation fixes 2016-12-08 11:33:53 -05:00
Joseph Callen
773db55233 Resolves issue with vmware_migrate_vmk module for v2.0
When this module was written back in May 2015 we were using 1.9.x. Being lazy, I added to the params the objects that the other functions would need. What I have noticed is that in 2.0, exit_json tries to jsonify those complex objects and fails. This PR resolves that issue with the vmware_migrate_vmk module (a minimal sketch of the failure mode follows the testing output below).

@kamsz reported this issue in https://github.com/ansible/ansible-modules-extras/pull/1568

Playbook
```
    - name: Migrate Management vmk
      local_action:
        module: vmware_migrate_vmk
        hostname: "{{ mgmt_ip_address }}"
        username: "{{ vcsa_user }}"
        password: "{{ vcsa_pass }}"
        esxi_hostname: "{{ hostvars[item].hostname }}"
        device: vmk1
        current_switch_name: temp_vswitch
        current_portgroup_name: esx-mgmt
        migrate_switch_name: dvSwitch
        migrate_portgroup_name: Management
      with_items: groups['foundation_esxi']
```

Module Testing
```
TASK [Migrate Management vmk] **************************************************
task path: /opt/autodeploy/projects/emmet/tasks/deploy/migrate_vmk.yml:3
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454695485.85-245405603184252 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454695485.85-245405603184252 )" )
localhost PUT /tmp/tmpdlhr6t TO /root/.ansible/tmp/ansible-tmp-1454695485.85-245405603184252/vmware_migrate_vmk
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454695485.85-245405603184252/vmware_migrate_vmk; rm -rf "/root/.ansible/tmp/ansible-tmp-1454695485.85-245405603184252/" > /dev/null 2>&1
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454695490.35-143738865490168 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454695490.35-143738865490168 )" )
localhost PUT /tmp/tmpqfZqh1 TO /root/.ansible/tmp/ansible-tmp-1454695490.35-143738865490168/vmware_migrate_vmk
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454695490.35-143738865490168/vmware_migrate_vmk; rm -rf "/root/.ansible/tmp/ansible-tmp-1454695490.35-143738865490168/" > /dev/null 2>&1
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454695491.96-124154332968882 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454695491.96-124154332968882 )" )
localhost PUT /tmp/tmpf3rKZq TO /root/.ansible/tmp/ansible-tmp-1454695491.96-124154332968882/vmware_migrate_vmk
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454695491.96-124154332968882/vmware_migrate_vmk; rm -rf "/root/.ansible/tmp/ansible-tmp-1454695491.96-124154332968882/" > /dev/null 2>&1
ok: [foundation-vcsa -> localhost] => (item=foundation-esxi-01) => {"changed": false, "invocation": {"module_args": {"current_portgroup_name": "esx-mgmt", "current_switch_name": "temp_vswitch", "device": "vmk1", "esxi_hostname": "cscesxtmp001", "hostname": "172.27.0.100", "migrate_portgroup_name": "Management", "migrate_switch_name": "dvSwitch", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "username": "root"}, "module_name": "vmware_migrate_vmk"}, "item": "foundation-esxi-01"}
ok: [foundation-vcsa -> localhost] => (item=foundation-esxi-02) => {"changed": false, "invocation": {"module_args": {"current_portgroup_name": "esx-mgmt", "current_switch_name": "temp_vswitch", "device": "vmk1", "esxi_hostname": "cscesxtmp002", "hostname": "172.27.0.100", "migrate_portgroup_name": "Management", "migrate_switch_name": "dvSwitch", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "username": "root"}, "module_name": "vmware_migrate_vmk"}, "item": "foundation-esxi-02"}
ok: [foundation-vcsa -> localhost] => (item=foundation-esxi-03) => {"changed": false, "invocation": {"module_args": {"current_portgroup_name": "esx-mgmt", "current_switch_name": "temp_vswitch", "device": "vmk1", "esxi_hostname": "cscesxtmp003", "hostname": "172.27.0.100", "migrate_portgroup_name": "Management", "migrate_switch_name": "dvSwitch", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "username": "root"}, "module_name": "vmware_migrate_vmk"}, "item": "foundation-esxi-03"}
```
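A minimal, self-contained sketch of the failure mode described above, using a stand-in class instead of a real pyVmomi object: exit_json() has to JSON-serialize everything it is given, so a rich SDK object fails while its string representation does not.
```
import json

class HostSystem(object):
    """Stand-in for a complex SDK object that json cannot serialize."""
    def __init__(self, moid):
        self.moid = moid
    def __repr__(self):
        return "'vim.HostSystem:%s'" % self.moid

host = HostSystem("host-15")

try:
    # roughly what happens when a raw object ends up in the exit_json payload
    json.dumps({"changed": True, "result": host})
except TypeError as exc:
    print("jsonify fails:", exc)

# passing a simple representation instead serializes cleanly
print(json.dumps({"changed": True, "result": repr(host)}))
```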
2016-12-08 11:33:53 -05:00
Joseph Callen
eece1346ab missing doc fragment 2016-12-08 11:33:53 -05:00
Joseph Callen
3721a4647c Resolves issue with vmware_vm_vss_dvs_migrate module for v2.0
When this module was written back in May 2015 we were using 1.9.x. Being lazy, I added to the params the objects that the other functions would need. What I have noticed is that in 2.0, exit_json tries to jsonify those complex objects and fails. This PR resolves that issue with the vmware_vm_vss_dvs_migrate module.

@kamsz reported this issue in https://github.com/ansible/ansible-modules-extras/pull/1568

Playbook
```
    - name: Migrate VCSA to vDS
      local_action:
        module: vmware_vm_vss_dvs_migrate
        hostname: "{{ mgmt_ip_address }}"
        username: "{{ vcsa_user }}"
        password: "{{ vcsa_pass }}"
        vm_name: "{{ hostname }}"
        dvportgroup_name: Management
```

Module Testing
```
TASK [Migrate VCSA to vDS] *****************************************************
task path: /opt/autodeploy/projects/emmet/site_deploy.yml:260
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454695546.3-207189190861859 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454695546.3-207189190861859 )" )
localhost PUT /tmp/tmpkzD4pF TO /root/.ansible/tmp/ansible-tmp-1454695546.3-207189190861859/vmware_vm_vss_dvs_migrate
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454695546.3-207189190861859/vmware_vm_vss_dvs_migrate; rm -rf "/root/.ansible/tmp/ansible-tmp-1454695546.3-207189190861859/" > /dev/null 2>&1
changed: [foundation-vcsa -> localhost] => {"changed": true, "invocation": {"module_args": {"dvportgroup_name": "Management", "hostname": "172.27.0.100", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "username": "root", "vm_name": "cscvcatmp001"}, "module_name": "vmware_vm_vss_dvs_migrate"}, "result": null}

```
2016-12-08 11:33:53 -05:00
Joseph Callen
cef9e42896 Resolves issue with vmware_host module for v2.0
When this module was written back in May 2015 we were using 1.9.x. Being lazy, I added to the params the objects that the other functions would need. What I have noticed is that in 2.0, exit_json tries to jsonify those complex objects and fails. This PR resolves that issue with the vmware_host module.

@kamsz reported this issue in https://github.com/ansible/ansible-modules-extras/pull/1568

Playbook
```
    - name: Add Host
      local_action:
        module: vmware_host
        hostname: "{{ mgmt_ip_address }}"
        username: "{{ vcsa_user }}"
        password: "{{ vcsa_pass }}"
        datacenter_name: "{{ mgmt_vdc }}"
        cluster_name: "{{ mgmt_cluster }}"
        esxi_hostname: "{{ hostvars[item].hostname }}"
        esxi_username: "{{ esxi_username }}"
        esxi_password: "{{ site_passwd }}"
        state: present
      with_items: groups['foundation_esxi']
```

Module Testing
```
TASK [Add Host] ****************************************************************
task path: /opt/autodeploy/projects/emmet/site_deploy.yml:214
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693866.1-87710459703937 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693866.1-87710459703937 )" )
localhost PUT /tmp/tmppmr9i9 TO /root/.ansible/tmp/ansible-tmp-1454693866.1-87710459703937/vmware_host
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454693866.1-87710459703937/vmware_host; rm -rf "/root/.ansible/tmp/ansible-tmp-1454693866.1-87710459703937/" > /dev/null 2>&1
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693943.8-75870536677834 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693943.8-75870536677834 )" )
localhost PUT /tmp/tmpVB81f2 TO /root/.ansible/tmp/ansible-tmp-1454693943.8-75870536677834/vmware_host
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454693943.8-75870536677834/vmware_host; rm -rf "/root/.ansible/tmp/ansible-tmp-1454693943.8-75870536677834/" > /dev/null 2>&1
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693991.56-163414752982563 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693991.56-163414752982563 )" )
localhost PUT /tmp/tmpFB7VQB TO /root/.ansible/tmp/ansible-tmp-1454693991.56-163414752982563/vmware_host
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454693991.56-163414752982563/vmware_host; rm -rf "/root/.ansible/tmp/ansible-tmp-1454693991.56-163414752982563/" > /dev/null 2>&1
changed: [foundation-vcsa -> localhost] => (item=foundation-esxi-01) => {"changed": true, "invocation": {"module_args": {"cluster_name": "Foundation", "datacenter_name": "Test-Lab", "esxi_hostname": "cscesxtmp001", "esxi_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "esxi_username": "root", "hostname": "172.27.0.100", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "state": "present", "username": "root"}, "module_name": "vmware_host"}, "item": "foundation-esxi-01", "result": "'vim.HostSystem:host-15'"}
changed: [foundation-vcsa -> localhost] => (item=foundation-esxi-02) => {"changed": true, "invocation": {"module_args": {"cluster_name": "Foundation", "datacenter_name": "Test-Lab", "esxi_hostname": "cscesxtmp002", "esxi_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "esxi_username": "root", "hostname": "172.27.0.100", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "state": "present", "username": "root"}, "module_name": "vmware_host"}, "item": "foundation-esxi-02", "result": "'vim.HostSystem:host-20'"}
changed: [foundation-vcsa -> localhost] => (item=foundation-esxi-03) => {"changed": true, "invocation": {"module_args": {"cluster_name": "Foundation", "datacenter_name": "Test-Lab", "esxi_hostname": "cscesxtmp003", "esxi_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "esxi_username": "root", "hostname": "172.27.0.100", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "state": "present", "username": "root"}, "module_name": "vmware_host"}, "item": "foundation-esxi-03", "result": "'vim.HostSystem:host-21'"}

```
2016-12-08 11:33:53 -05:00
Joseph Callen
fb3bb746b2 Resolves issue with vmware_dvs_portgroup module for v2.0
When this module was written back in May 2015 we were using 1.9.x. Being lazy, I added to the params the objects that the other functions would need. What I have noticed is that in 2.0, exit_json tries to jsonify those complex objects and fails. This PR resolves that issue with the vmware_dvs_portgroup module.

@kamsz reported this issue in https://github.com/ansible/ansible-modules-extras/pull/1568

Playbook
```
    - name: Create Management portgroup
      local_action:
        module: vmware_dvs_portgroup
        hostname: "{{ mgmt_ip_address }}"
        username: "{{ vcsa_user }}"
        password: "{{ vcsa_pass }}"
        portgroup_name: Management
        switch_name: dvSwitch
        vlan_id: "{{ hostvars[groups['foundation_esxi'][0]].mgmt_vlan_id }}"
        num_ports: 120
        portgroup_type: earlyBinding
        state: present
```

Module Testing
```
TASK [Create Management portgroup] *********************************************
task path: /opt/autodeploy/projects/emmet/tasks/deploy/dvs_network.yml:17
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693809.13-142252676354410 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693809.13-142252676354410 )" )
localhost PUT /tmp/tmpeQ8M1U TO /root/.ansible/tmp/ansible-tmp-1454693809.13-142252676354410/vmware_dvs_portgroup
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454693809.13-142252676354410/vmware_dvs_portgroup; rm -rf "/root/.ansible/tmp/ansible-tmp-1454693809.13-142252676354410/" > /dev/null 2>&1
changed: [foundation-vcsa -> localhost] => {"changed": true, "invocation": {"module_args": {"hostname": "172.27.0.100", "num_ports": 120, "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "portgroup_name": "Management", "portgroup_type": "earlyBinding", "state": "present", "switch_name": "dvSwitch", "username": "root", "vlan_id": 2700}, "module_name": "vmware_dvs_portgroup"}, "result": "None"}
```
2016-12-08 11:33:52 -05:00
Joseph Callen
9bdc59a6ac Resolves issue with vmware_cluster module for v2.0
When this module was written back in May 2015 we were using 1.9.x. Being lazy, I added to the params the objects that the other functions would need. What I have noticed is that in 2.0, exit_json tries to jsonify those complex objects and fails. This PR resolves that issue with the vmware_cluster module.

@kamsz reported this issue in https://github.com/ansible/ansible-modules-extras/pull/1568

Playbook
```
    - name: Create Cluster
      local_action:
        module: vmware_cluster
        hostname: "{{ mgmt_ip_address }}"
        username: "{{ vcsa_user }}"
        password: "{{ vcsa_pass }}"
        datacenter_name: "{{ mgmt_vdc }}"
        cluster_name: "{{ mgmt_cluster }}"
        enable_ha: True
        enable_drs: True
        enable_vsan: True
```

Module testing
```
TASK [Create Cluster] **********************************************************
task path: /opt/autodeploy/projects/emmet/site_deploy.yml:188
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693788.92-14097560271233 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693788.92-14097560271233 )" )
localhost PUT /tmp/tmpAJfdPb TO /root/.ansible/tmp/ansible-tmp-1454693788.92-14097560271233/vmware_cluster
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454693788.92-14097560271233/vmware_cluster; rm -rf "/root/.ansible/tmp/ansible-tmp-1454693788.92-14097560271233/" > /dev/null 2>&1
changed: [foundation-vcsa -> localhost] => {"changed": true, "invocation": {"module_args": {"cluster_name": "Foundation", "datacenter_name": "Test-Lab", "enable_drs": true, "enable_ha": true, "enable_vsan": true, "hostname": "172.27.0.100", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "state": "present", "username": "root"}, "module_name": "vmware_cluster"}}
```
2016-12-08 11:33:52 -05:00
Joseph Callen
0aa4f867de Resolves issue with vmware_dvs_host module for v2.0
When this module was written back in May 2015 we were using 1.9.x. Being lazy, I added to the params the objects that the other functions would need. What I have noticed is that in 2.0, exit_json tries to jsonify those complex objects and fails. This PR resolves that issue with the vmware_dvs_host module.

@kamsz reported this issue in https://github.com/ansible/ansible-modules-extras/pull/1568

Playbook
```
    - name: Add Host to dVS
      local_action:
        module: vmware_dvs_host
        hostname: "{{ mgmt_ip_address }}"
        username: "{{ vcsa_user }}"
        password: "{{ vcsa_pass }}"
        esxi_hostname: "{{ hostvars[item].hostname }}"
        switch_name: dvSwitch
        vmnics: "{{ dvs_vmnic }}"
        state: present
      with_items: groups['foundation_esxi']
```
Module Testing
```
TASK [Add Host to dVS] *********************************************************
task path: /opt/autodeploy/projects/emmet/site_deploy.yml:234
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454694039.6-259977654985844 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454694039.6-259977654985844 )" )
localhost PUT /tmp/tmpGrHqbd TO /root/.ansible/tmp/ansible-tmp-1454694039.6-259977654985844/vmware_dvs_host
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454694039.6-259977654985844/vmware_dvs_host; rm -rf "/root/.ansible/tmp/ansible-tmp-1454694039.6-259977654985844/" > /dev/null 2>&1
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454694058.76-121920794239796 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454694058.76-121920794239796 )" )
localhost PUT /tmp/tmpkP7DPu TO /root/.ansible/tmp/ansible-tmp-1454694058.76-121920794239796/vmware_dvs_host
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454694058.76-121920794239796/vmware_dvs_host; rm -rf "/root/.ansible/tmp/ansible-tmp-1454694058.76-121920794239796/" > /dev/null 2>&1
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454694090.2-33641188152663 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454694090.2-33641188152663 )" )
localhost PUT /tmp/tmp216NwV TO /root/.ansible/tmp/ansible-tmp-1454694090.2-33641188152663/vmware_dvs_host
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454694090.2-33641188152663/vmware_dvs_host; rm -rf "/root/.ansible/tmp/ansible-tmp-1454694090.2-33641188152663/" > /dev/null 2>&1
changed: [foundation-vcsa -> localhost] => (item=foundation-esxi-01) => {"changed": true, "invocation": {"module_args": {"esxi_hostname": "cscesxtmp001", "hostname": "172.27.0.100", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "state": "present", "switch_name": "dvSwitch", "username": "root", "vmnics": ["vmnic2"]}, "module_name": "vmware_dvs_host"}, "item": "foundation-esxi-01", "result": "None"}
changed: [foundation-vcsa -> localhost] => (item=foundation-esxi-02) => {"changed": true, "invocation": {"module_args": {"esxi_hostname": "cscesxtmp002", "hostname": "172.27.0.100", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "state": "present", "switch_name": "dvSwitch", "username": "root", "vmnics": ["vmnic2"]}, "module_name": "vmware_dvs_host"}, "item": "foundation-esxi-02", "result": "None"}
changed: [foundation-vcsa -> localhost] => (item=foundation-esxi-03) => {"changed": true, "invocation": {"module_args": {"esxi_hostname": "cscesxtmp003", "hostname": "172.27.0.100", "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "state": "present", "switch_name": "dvSwitch", "username": "root", "vmnics": ["vmnic2"]}, "module_name": "vmware_dvs_host"}, "item": "foundation-esxi-03", "result": "None"}
```
2016-12-08 11:33:52 -05:00
Rene Moser
fb4c299f13 cloudstack: new module cs_zone 2016-12-08 11:33:52 -05:00
Rene Moser
3a6fd536ab cloudstack: new module cs_cluster 2016-12-08 11:33:52 -05:00
Rene Moser
2b21212dc6 cloudstack: new module cs_pod 2016-12-08 11:33:52 -05:00
Rene Moser
7d1a4db9ee cloudstack: new module cs_instance_facts 2016-12-08 11:33:52 -05:00
Rene Moser
b609250cfd cloudstack: add new module cs_resourcelimit 2016-12-08 11:33:52 -05:00
Rene Moser
595eb1f8f1 cloudstack: new module cs_configuration 2016-12-08 11:33:52 -05:00
Eike Frost
9779792b07 return as unchanged if macro update is unnecessary 2016-12-08 11:33:52 -05:00
Konstantin Shalygin
3956549e6c Fix recurse delete. Add force update_cache feature. 2016-12-08 11:33:52 -05:00
Corwin Brown
62e8f46390 Converting result to snake_case before returning 2016-12-08 11:33:52 -05:00
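A generic sketch of the kind of conversion the commit above names, assuming camelCase keys in the result dictionary (the helpers are illustrative, not the module's actual code):
```
import re

def to_snake_case(name):
    # insert an underscore before each interior capital, then lowercase
    return re.sub(r'(?<!^)(?=[A-Z])', '_', name).lower()

def snakify(result):
    return {to_snake_case(key): value for key, value in result.items()}

if __name__ == "__main__":
    print(snakify({"StatusCode": 200, "ContentLength": 42}))
    # -> {'status_code': 200, 'content_length': 42}
```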
Corwin Brown
ac620b79dd Added UseBasicParsing flag
win_uri uses "Invoke-WebRequest" under the covers, which apparently
uses Internet Explorer to parse a webpage. The problem is that if a user
has never run Internet Explorer, it will be unable to do that. The
workaround is to set the "-UseBasicParsing" flag.

The only advantage of having the Internet Explorer-parsed page is
that you can then access the DOM as if it were a PowerShell
object. That doesn't seem especially useful for Ansible to be able
to do, so I set the default to "-UseBasicParsing".
2016-12-08 11:33:52 -05:00
Corwin Brown
20284fed88 bug fixes 2016-12-08 11:33:52 -05:00
Corwin Brown
88e4faa1ac Using Get-AnsibleParam
conflict

typo
2016-12-08 11:33:52 -05:00
Corwin Brown
a979624b88 Adding win_uri module 2016-12-08 11:33:52 -05:00
Marcos Diez
ece891baec Updated database/misc/mongodb_user.py, the docs now explain how to add a read user to the local/oplog db 2016-12-08 11:33:51 -05:00
Matt Martz
4842758fd1 Choices should be a list of true/false not the string BOOLEANS 2016-12-08 11:33:51 -05:00
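A generic, hypothetical illustration of the distinction the commit above names (not the actual module's argument spec): 'choices' should be a list of accepted values, because a bare string turns the membership test into a substring check.
```
# Hypothetical argument specs: 'choices' must be a list of accepted values.
WRONG = dict(enabled=dict(choices='BOOLEANS'))
RIGHT = dict(enabled=dict(type='bool', choices=[True, False]))

if __name__ == "__main__":
    print('yes' in WRONG['enabled']['choices'])   # False: substring check against 'BOOLEANS'
    print(True in RIGHT['enabled']['choices'])    # True: membership in an explicit list
```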
Matt Martz
402a996430 Don't call sys.exit in sns_topic, use HAS_BOTO to fail 2016-12-08 11:33:51 -05:00
Matt Martz
27be34ef9d DOCUMENTATION fixes for a few modules 2016-12-08 11:33:51 -05:00
Matt Martz
e3cffb0de4 Fix version_added for recently added modules 2016-12-08 11:33:51 -05:00
Joseph Callen
9ab5b367bd Resolves issue with vmware_dvswitch module for v2.0
When this module was written back in May 2015 we were using 1.9.x. Being lazy, I added to the params the objects that the other functions would need. What I have noticed is that in 2.0, exit_json tries to jsonify those complex objects and fails. This PR resolves that issue with the vmware_dvswitch module.

@kamsz reported this issue in https://github.com/ansible/ansible-modules-extras/pull/1568

Playbook
```
    - name: Create dvswitch
      local_action:
        module: vmware_dvswitch
        hostname: "{{ mgmt_ip_address }}"
        username: "{{ vcsa_user }}"
        password: "{{ vcsa_pass }}"
        datacenter_name: "{{ mgmt_vdc }}"
        switch_name: dvSwitch
        mtu: 1500
        uplink_quantity: 2
        discovery_proto: lldp
        discovery_operation: both
        state: present
```
Module Testing
```
TASK [Create dvswitch] *********************************************************
task path: /opt/autodeploy/projects/emmet/tasks/deploy/dvs_network.yml:3
ESTABLISH LOCAL CONNECTION FOR USER: root
localhost EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693792.01-113207408596014 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454693792.01-113207408596014 )" )
localhost PUT /tmp/tmptb3e2c TO /root/.ansible/tmp/ansible-tmp-1454693792.01-113207408596014/vmware_dvswitch
localhost EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1454693792.01-113207408596014/vmware_dvswitch; rm -rf "/root/.ansible/tmp/ansible-tmp-1454693792.01-113207408596014/" > /dev/null 2>&1
changed: [foundation-vcsa -> localhost] => {"changed": true, "invocation": {"module_args": {"datacenter_name": "Test-Lab", "discovery_operation": "both", "discovery_proto": "lldp", "hostname": "172.27.0.100", "mtu": 1500, "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER", "state": "present", "switch_name": "dvSwitch", "uplink_quantity": 2, "username": "root"}, "module_name": "vmware_dvswitch"}, "result": "'vim.dvs.VmwareDistributedVirtualSwitch:dvs-9'"}
```
2016-12-08 11:33:51 -05:00
Ronny
bb417d2b62 Update zabbix_host.py
Use the existing proxy when updating a host unless a proxy is specified. Before this change, the proxy was always set to none (0) when updating.
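A minimal sketch of the described behaviour, assuming the host's current proxy is exposed as proxy_hostid (the helper name is hypothetical):
```
def resolve_proxy_id(requested_proxy_id, existing_host):
    if requested_proxy_id:                       # proxy given in the task
        return requested_proxy_id
    return existing_host.get('proxy_hostid', 0)  # otherwise keep what is already set

if __name__ == "__main__":
    host = {'hostid': '10105', 'proxy_hostid': '10270'}
    print(resolve_proxy_id(None, host))      # keeps 10270
    print(resolve_proxy_id('10300', host))   # explicit proxy wins
```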
2016-12-08 11:33:51 -05:00
Rene Moser
5344701557 cloudstack: cs_instance: implement updating security groups
ACS API implemented in 4.8; has no effect on versions < 4.8.
2016-12-08 11:33:51 -05:00