ansible/cloud/vmware/vmware_vm_vss_dvs_migrate.py


#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2015, Joseph Callen <jcallen () csc.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
DOCUMENTATION = '''
---
module: vmware_vm_vss_dvs_migrate
short_description: Migrates a virtual machine from a standard vSwitch to a distributed vSwitch
description:
    - Migrates a virtual machine from a standard vSwitch to a distributed vSwitch.
version_added: 2.0
author: "Joseph Callen (@jcpowermac)"
notes:
    - Tested on vSphere 5.5
requirements:
    - "python >= 2.6"
    - PyVmomi
options:
    vm_name:
        description:
            - Name of the virtual machine to migrate to a dvSwitch.
        required: True
    dvportgroup_name:
        description:
            - Name of the distributed portgroup to migrate the virtual machine to.
        required: True
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = '''
- name: Migrate VCSA to vDS
  local_action:
    module: vmware_vm_vss_dvs_migrate
    hostname: vcenter_ip_or_hostname
    username: vcenter_username
    password: vcenter_password
    vm_name: virtual_machine_name
    dvportgroup_name: distributed_portgroup_name
'''
try:
    from pyVmomi import vim, vmodl
    HAS_PYVMOMI = True
except ImportError:
    HAS_PYVMOMI = False


class VMwareVmVssDvsMigrate(object):
    def __init__(self, module):
        self.module = module
        self.content = connect_to_api(module)
        self.vm = None
        self.vm_name = module.params['vm_name']
        self.dvportgroup_name = module.params['dvportgroup_name']
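
    # Dispatch on the VM's current NIC backing: 'absent' (no distributed
    # backing yet) runs the migration, 'present' exits without changes.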
    def process_state(self):
        vm_nic_states = {
            'absent': self.migrate_network_adapter_vds,
            'present': self.state_exit_unchanged,
        }
        vm_nic_states[self.check_vm_network_state()]()
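
    # Look up the target distributed portgroup by name; returns None if it
    # does not exist in this vCenter inventory.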
    def find_dvspg_by_name(self):
        vmware_distributed_port_group = get_all_objs(self.content, [vim.dvs.DistributedVirtualPortgroup])
        for dvspg in vmware_distributed_port_group:
            if dvspg.name == self.dvportgroup_name:
                return dvspg
        return None

    def find_vm_by_name(self):
        virtual_machines = get_all_objs(self.content, [vim.VirtualMachine])
        for vm in virtual_machines:
            if vm.name == self.vm_name:
                return vm
        return None
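
    # Build a single ReconfigVM_Task spec that rebinds each virtual NIC to a
    # DistributedVirtualPortBackingInfo pointing at the target portgroup's
    # dvSwitch UUID and portgroup key, then wait for the task to complete.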
    def migrate_network_adapter_vds(self):
        vm_configspec = vim.vm.ConfigSpec()
        nic = vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo()
        port = vim.dvs.PortConnection()
        devicespec = vim.vm.device.VirtualDeviceSpec()

        pg = self.find_dvspg_by_name()
        if pg is None:
            self.module.fail_json(msg="The distributed portgroup %s was not found" % self.dvportgroup_name)

        dvswitch = pg.config.distributedVirtualSwitch
        port.switchUuid = dvswitch.uuid
        port.portgroupKey = pg.key
        nic.port = port

        for device in self.vm.config.hardware.device:
            if isinstance(device, vim.vm.device.VirtualEthernetCard):
                devicespec.device = device
                devicespec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
                devicespec.device.backing = nic
                vm_configspec.deviceChange.append(devicespec)

        task = self.vm.ReconfigVM_Task(vm_configspec)
        changed, result = wait_for_task(task)
        self.module.exit_json(changed=changed, result=result)

    def state_exit_unchanged(self):
        self.module.exit_json(changed=False)
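
    # Return 'present' if any NIC already uses distributed-port backing,
    # otherwise 'absent' (the VM is still on a standard vSwitch).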
    def check_vm_network_state(self):
        try:
            self.vm = self.find_vm_by_name()

            if self.vm is None:
                self.module.fail_json(msg="A virtual machine with name %s does not exist" % self.vm_name)
            for device in self.vm.config.hardware.device:
                if isinstance(device, vim.vm.device.VirtualEthernetCard):
                    if isinstance(device.backing, vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo):
                        return 'present'
            return 'absent'
        except vmodl.RuntimeFault as runtime_fault:
            self.module.fail_json(msg=runtime_fault.msg)
        except vmodl.MethodFault as method_fault:
            self.module.fail_json(msg=method_fault.msg)
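

# Module entry point: extend the shared VMware argument spec with the two
# parameters this module requires, then hand control to the helper class.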
def main():
    argument_spec = vmware_argument_spec()
    argument_spec.update(dict(vm_name=dict(required=True, type='str'),
                              dvportgroup_name=dict(required=True, type='str')))

    module = AnsibleModule(argument_spec=argument_spec, supports_check_mode=False)

    if not HAS_PYVMOMI:
        module.fail_json(msg='pyvmomi is required for this module')

    vmware_vmnic_migrate = VMwareVmVssDvsMigrate(module)
    vmware_vmnic_migrate.process_state()
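

# module_utils imports sit at the bottom of the file, following the
# Ansible 2.0-era convention for modules.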
from ansible.module_utils.vmware import *
from ansible.module_utils.basic import *
if __name__ == '__main__':
    main()