Merge branch 'devel' into mazer_role_loader

* devel: (30 commits)
  Prevent data being truncated over persistent connection socket (#43885)
  Fix eos_command integration test failures (#43922)
  Update iosxr cliconf plugin (#43837)
  win_domain modules: ensure Netlogon service is still running after promotion (#43703)
  openvswitch_db : Handle column value conversion and idempotency in no_key case (#43869)
  Fix typo
  Fix spelling of ansbile to ansible (#43898)
  added platform guides for NOS and VOSS (#43854)
  Fix download URL for yum integration test.
  New module for managing EMC VNX Block storage (#42945)
  Docker integration tests: factorize setup (#42306)
  VMware: datastore selection (#35812)
  Remove unnecessary features from cli_command (#43829)
  [doc] import_role: mention version from which behavior changed and fix some typos (#43843)
  Add source interface and use-vrf features (#43418)
  Fix unreferenced msg from vmware_host (#43872)
  set supports_generate_diff to False vyos (#43873)
  add group_by_os_family in azure dynamic inventory (#40702)
  ansible-test: Create public key creating Windows targets (#43760)
  azure_rm_loadbalancer_facts.py: list() takes at least 2 arguments fix (#29046) (#29050)
  ...
Author: Adrian Likins
Date: 2018-08-10 10:29:18 -04:00
Commit: 1431900da2
135 changed files with 1337 additions and 794 deletions

.github/BOTMETA.yml

@ -465,7 +465,7 @@ files:
$modules/network/dellos9/: dhivyap skg-net
$modules/network/edgeos/: samdoran
$modules/network/enos/: amuraleedhar
$modules/network/eos/: privateip trishnaguha
$modules/network/eos/: trishnaguha
$modules/network/exos/: rdvencioneck
$modules/network/f5/:
ignored: Etienne-Carriere mhite mryanlam perzizzle srvg wojtek0806 JoeReifel $team_networking
@ -474,8 +474,8 @@ files:
$modules/network/fortios/: bjolivot
$modules/network/illumos/: xen0l
$modules/network/interface/: $team_networking
$modules/network/ios/: privateip rcarrillocruz
$modules/network/iosxr/: privateip rcarrillocruz gdpak
$modules/network/ios/: rcarrillocruz
$modules/network/iosxr/: rcarrillocruz gdpak
$modules/network/ironware/: paulquack
$modules/network/junos/: Qalthos ganeshrn
$modules/network/layer2/: $team_networking
@ -496,7 +496,7 @@ files:
$modules/network/ordnance/: alexanderturner djh00t
$modules/network/ovs/:
ignored: stygstra
maintainers: privateip rcarrillocruz
maintainers: rcarrillocruz
$modules/network/panos/: ivanbojer jtschichold
$modules/network/panos/panos_address.py: itdependsnetworks ivanbojer jtschichold
$modules/network/protocol/: $team_networking
@ -1245,7 +1245,7 @@ macros:
team_avi: ericsysmin grastogi23 khaltore
team_azure: haroldwongms nitzmahone trstringer yuwzho xscript zikalino
team_cloudstack: resmo dpassante
team_cumulus: isharacomix jrrivers privateip
team_cumulus: isharacomix jrrivers
team_cyberark_conjur: jvanderhoof ryanprior
team_extreme: bigmstone LindsayHill
team_ipa: Nosmoht Akasurde fxfitz
@ -1253,13 +1253,13 @@ macros:
team_meraki: dagwieers kbreit
team_netapp: hulquest lmprice ndswartz amit0701 schmots1 carchi8py
team_netscaler: chiradeep giorgos-nikolopoulos
team_netvisor: Qalthos amitsi gundalow privateip
team_networking: Qalthos ganeshrn gundalow privateip rcarrillocruz trishnaguha gdpak
team_netvisor: Qalthos amitsi
team_networking: Qalthos ganeshrn rcarrillocruz trishnaguha gdpak
team_nso: cmoberg cnasten tbjurman
team_nxos: mikewiebe privateip rahushen rcarrillocruz trishnaguha tstoner
team_nxos: mikewiebe rahushen rcarrillocruz trishnaguha tstoner
team_onyx: samerd
team_openstack: emonty omgjlk juliakreger rcarrillocruz shrews thingee dagnello
team_openswitch: Qalthos gundalow privateip gdpak
team_openswitch: Qalthos gdpak
team_rabbitmq: chrishoffman manuel-sousa hyperized
team_rhn: alikins barnabycourt flossware vritant
team_tower: ghjm jlaska matburt wwitzel3 simfarm ryanpetrello rooftopcellist AlanCoding


@ -0,0 +1,8 @@
---
name: 🔥 Security bug report
about: How to report security vulnerabilities
---
For all security related bugs, email security@ansible.com instead of using this issue tracker and you will receive a prompt response.
For more information, see https://docs.ansible.com/ansible/latest/community/reporting_bugs_and_features.html


@ -12,6 +12,7 @@ except Exception:
pass
import fcntl
import hashlib
import os
import signal
import socket
@ -36,6 +37,23 @@ from ansible.utils.display import Display
from ansible.utils.jsonrpc import JsonRpcServer
def read_stream(byte_stream):
size = int(byte_stream.readline().strip())
data = byte_stream.read(size)
if len(data) < size:
raise Exception("EOF found before data was complete")
data_hash = to_text(byte_stream.readline().strip())
if data_hash != hashlib.sha1(data).hexdigest():
raise Exception("Read {0} bytes, but data did not match checksum".format(size))
# restore escaped loose \r characters
data = data.replace(br'\r', b'\r')
return data
@contextmanager
def file_lock(lock_path):
"""
@ -204,25 +222,8 @@ def main():
try:
# read the play context data via stdin, which means depickling it
cur_line = stdin.readline()
init_data = b''
while cur_line.strip() != b'#END_INIT#':
if cur_line == b'':
raise Exception("EOF found before init data was complete")
init_data += cur_line
cur_line = stdin.readline()
cur_line = stdin.readline()
vars_data = b''
while cur_line.strip() != b'#END_VARS#':
if cur_line == b'':
raise Exception("EOF found before vars data was complete")
vars_data += cur_line
cur_line = stdin.readline()
# restore escaped loose \r characters
vars_data = vars_data.replace(br'\r', b'\r')
vars_data = read_stream(stdin)
init_data = read_stream(stdin)
if PY3:
pc_data = cPickle.loads(init_data, encoding='bytes')


@ -0,0 +1,3 @@
bugfixes:
- win_domain - ensure the Netlogon service is up and running after promoting host to controller - https://github.com/ansible/ansible/issues/39235
- win_domain_controller - ensure the Netlogon service is up and running after promoting host to controller - https://github.com/ansible/ansible/issues/39235


@ -19,4 +19,5 @@ include_powerstate=yes
group_by_resource_group=yes
group_by_location=yes
group_by_security_group=yes
group_by_os_family=yes
group_by_tag=yes


@ -261,6 +261,7 @@ AZURE_CONFIG_SETTINGS = dict(
group_by_location='AZURE_GROUP_BY_LOCATION',
group_by_security_group='AZURE_GROUP_BY_SECURITY_GROUP',
group_by_tag='AZURE_GROUP_BY_TAG',
group_by_os_family='AZURE_GROUP_BY_OS_FAMILY',
use_private_ip='AZURE_USE_PRIVATE_IP'
)
@ -572,6 +573,7 @@ class AzureInventory(object):
self.replace_dash_in_groups = False
self.group_by_resource_group = True
self.group_by_location = True
self.group_by_os_family = True
self.group_by_security_group = True
self.group_by_tag = True
self.include_powerstate = True
@ -706,7 +708,7 @@ class AzureInventory(object):
host_vars['os_disk'] = dict(
name=machine.storage_profile.os_disk.name,
operating_system_type=machine.storage_profile.os_disk.os_type.value
operating_system_type=machine.storage_profile.os_disk.os_type.value.lower()
)
if self.include_powerstate:
@ -811,10 +813,16 @@ class AzureInventory(object):
host_name = self._to_safe(vars['name'])
resource_group = self._to_safe(vars['resource_group'])
operating_system_type = self._to_safe(vars['os_disk']['operating_system_type'].lower())
security_group = None
if vars.get('security_group'):
security_group = self._to_safe(vars['security_group'])
if self.group_by_os_family:
if not self._inventory.get(operating_system_type):
self._inventory[operating_system_type] = []
self._inventory[operating_system_type].append(host_name)
if self.group_by_resource_group:
if not self._inventory.get(resource_group):
self._inventory[resource_group] = []


@ -9,7 +9,7 @@
# TODO:
# * more jq examples
# * optional folder heriarchy
# * optional folder hierarchy
"""
$ jq '._meta.hostvars[].config' data.json | head
@ -38,9 +38,8 @@ import sys
import uuid
from time import time
import six
from jinja2 import Environment
from six import integer_types, string_types
from six import integer_types, PY3
from six.moves import configparser
try:
@ -235,7 +234,7 @@ class VMWareInventory(object):
'groupby_custom_field': False}
}
if six.PY3:
if PY3:
config = configparser.ConfigParser()
else:
config = configparser.SafeConfigParser()


@ -28,7 +28,7 @@ Emacs
A free, open-source text editor and IDE that supports auto-indentation, syntax highlighting and built in terminal shell(among other things).
* `yaml-mode <https://github.com/yoshiki/yaml-mode>`_ - YAML highlighting and syntax checking.
* `inja2-mode <https://github.com/paradoxxxzero/jinja2-mode>`_ - Jinja2 highlighting and syntax checking.
* `jinja2-mode <https://github.com/paradoxxxzero/jinja2-mode>`_ - Jinja2 highlighting and syntax checking.
* `magit-mode <https://github.com/magit/magit>`_ - Git porcelain within Emacs.


@ -15,9 +15,11 @@ Some Ansible Network platforms support multiple connection types, privilege esca
platform_ios
platform_ironware
platform_junos
platform_nos
platform_nxos
platform_routeros
platform_slxos
platform_voss
.. _settings_by_platform:
@ -43,8 +45,12 @@ Settings by Platform
+-------------------+-------------------------+----------------------+----------------------+------------------+------------------+
| Extreme IronWare | ``ironware`` | in v. >=2.5 | N/A | N/A | in v. >=2.5 |
+-------------------+-------------------------+----------------------+----------------------+------------------+------------------+
| Extreme NOS | ``nos`` | in v. >=2.7 | N/A | N/A | N/A |
+-------------------+-------------------------+----------------------+----------------------+------------------+------------------+
| Extreme SLX-OS | ``slxos`` | in v. >=2.6 | N/A | N/A | N/A |
+-------------------+-------------------------+----------------------+----------------------+------------------+------------------+
| Extreme VOSS | ``voss`` | in v. >=2.7 | N/A | N/A | N/A |
+-------------------+-------------------------+----------------------+----------------------+------------------+------------------+
| F5 BIG-IP | N/A | N/A | N/A | N/A | in v. >=2.0 |
+-------------------+-------------------------+----------------------+----------------------+------------------+------------------+
| F5 BIG-IQ | N/A | N/A | N/A | N/A | in v. >=2.0 |


@ -0,0 +1,70 @@
.. _nos_platform_options:
***************************************
NOS Platform Options
***************************************
Extreme NOS Ansible modules only support CLI connections today. ``httpapi`` modules may be added in future.
This page offers details on how to use ``network_cli`` on NOS in Ansible >= 2.7.
.. contents:: Topics
Connections Available
================================================================================
+---------------------------+-----------------------------------------------+
|.. | CLI |
+===========================+===============================================+
| **Protocol** | SSH |
+---------------------------+-----------------------------------------------+
| | **Credentials** | | uses SSH keys / SSH-agent if present |
| | | | accepts ``-u myuser -k`` if using password |
+---------------------------+-----------------------------------------------+
| **Indirect Access** | via a bastion (jump host) |
+---------------------------+-----------------------------------------------+
| | **Connection Settings** | | ``ansible_connection: network_cli`` |
| | | | |
| | | | |
| | | | |
| | | | |
+---------------------------+-----------------------------------------------+
| | **Enable Mode** | | not supported by NOS |
| | (Privilege Escalation) | | |
+---------------------------+-----------------------------------------------+
| **Returned Data Format** | ``stdout[0].`` |
+---------------------------+-----------------------------------------------+
NOS does not support ``ansible_connection: local``. You must use ``ansible_connection: network_cli``.
Using CLI in Ansible >= 2.7
================================================================================
Example CLI ``group_vars/nos.yml``
----------------------------------
.. code-block:: yaml
ansible_connection: network_cli
ansible_network_os: nos
ansible_user: myuser
ansible_ssh_pass: !vault...
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q bastion01"'
- If you are using SSH keys (including an ssh-agent) you can remove the ``ansible_ssh_pass`` configuration.
- If you are accessing your host directly (not through a bastion/jump host) you can remove the ``ansible_ssh_common_args`` configuration.
- If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the ``ProxyCommand`` directive. To prevent secrets from leaking out (for example in ``ps`` output), SSH does not support providing passwords via environment variables.
Example CLI Task
----------------
.. code-block:: yaml
- name: Get version information (nos)
nos_command:
command: "show version"
register: show_ver
when: ansible_network_os == 'nos'
.. include:: shared_snippets/SSH_warning.txt


@ -0,0 +1,70 @@
.. _voss_platform_options:
***************************************
VOSS Platform Options
***************************************
Extreme VOSS Ansible modules only support CLI connections today. This page offers details on how to
use ``network_cli`` on VOSS in Ansible >= 2.7.
.. contents:: Topics
Connections Available
================================================================================
+---------------------------+-----------------------------------------------+
|.. | CLI |
+===========================+===============================================+
| **Protocol** | SSH |
+---------------------------+-----------------------------------------------+
| | **Credentials** | | uses SSH keys / SSH-agent if present |
| | | | accepts ``-u myuser -k`` if using password |
+---------------------------+-----------------------------------------------+
| **Indirect Access** | via a bastion (jump host) |
+---------------------------+-----------------------------------------------+
| | **Connection Settings** | | ``ansible_connection: network_cli`` |
| | | | |
| | | | |
| | | | |
| | | | |
+---------------------------+-----------------------------------------------+
| | **Enable Mode** | | supported - use ``ansible_become: yes`` |
| | (Privilege Escalation) | | with ``ansible_become_method: enable`` |
+---------------------------+-----------------------------------------------+
| **Returned Data Format** | ``stdout[0].`` |
+---------------------------+-----------------------------------------------+
VOSS does not support ``ansible_connection: local``. You must use ``ansible_connection: network_cli``.
Using CLI in Ansible >= 2.7
================================================================================
Example CLI ``group_vars/voss.yml``
-----------------------------------
.. code-block:: yaml
ansible_connection: network_cli
ansible_network_os: voss
ansible_user: myuser
ansible_become: yes
ansible_become_method: enable
ansible_ssh_pass: !vault...
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q bastion01"'
- If you are using SSH keys (including an ssh-agent) you can remove the ``ansible_ssh_pass`` configuration.
- If you are accessing your host directly (not through a bastion/jump host) you can remove the ``ansible_ssh_common_args`` configuration.
- If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the ``ProxyCommand`` directive. To prevent secrets from leaking out (for example in ``ps`` output), SSH does not support providing passwords via environment variables.
Example CLI Task
----------------
.. code-block:: yaml
- name: Retrieve VOSS info
voss_command:
commands: show sys-info
when: ansible_network_os == 'voss'
.. include:: shared_snippets/SSH_warning.txt


@ -31,7 +31,7 @@ include_role and import_role variable exposure
In Ansible 2.7 a new module argument named ``public`` was added to the ``include_role`` module that dictates whether or not the role's ``defaults`` and ``vars`` will be exposed outside of the role, allowing those variables to be used by later tasks. This value defaults to ``public: False``, matching current behavior.
``import_role`` does not support the ``public`` argument, and will unconditionally expose the role's ``defaults`` and ``vars`` to the rest of the playbook. This functinality brings ``import_role`` into closer alignment with roles listed within the ``roles`` header in a play.
``import_role`` does not support the ``public`` argument, and will unconditionally expose the role's ``defaults`` and ``vars`` to the rest of the playbook. This functionality brings ``import_role`` into closer alignment with roles listed within the ``roles`` header in a play.
There is an important difference in the way that ``include_role`` (dynamic) will expose the role's variables, as opposed to ``import_role`` (static). ``import_role`` is a pre-processor, and the ``defaults`` and ``vars`` are evaluated at playbook parsing, making the variables available to tasks and roles listed at any point in the play. ``include_role`` is a conditional task, and the ``defaults`` and ``vars`` are evaluated at execution time, making the variables available to tasks and roles listed *after* the ``include_role`` task.
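
A minimal sketch of the difference described above (the role name ``myrole`` and the variable ``myrole_setting`` are hypothetical and not taken from this change): with ``include_role`` the role's ``defaults`` and ``vars`` become visible to later tasks only when ``public: yes`` is set, while ``import_role`` exposes them unconditionally from parse time.

.. code-block:: yaml

   - hosts: localhost
     tasks:
       # Dynamic include: role defaults/vars stay private unless public is set
       - include_role:
           name: myrole
           public: yes

       # Readable only because public: yes was set above, and only in tasks
       # listed after the include_role task
       - debug:
           var: myrole_setting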


@ -328,6 +328,7 @@ By default hosts are grouped by:
* security group name
* tag key
* tag key_value
* os_disk operating_system_type (Windows/Linux)
You can control host groupings and host selection by either defining environment variables or creating an
azure_rm.ini file in your current working directory.
@ -344,6 +345,7 @@ Control grouping using the following variables defined in the environment:
* AZURE_GROUP_BY_LOCATION=yes
* AZURE_GROUP_BY_SECURITY_GROUP=yes
* AZURE_GROUP_BY_TAG=yes
* AZURE_GROUP_BY_OS_FAMILY=yes
Select hosts within specific resource groups by assigning a comma separated list to:
@ -390,7 +392,7 @@ file will contain the following:
group_by_location=yes
group_by_security_group=yes
group_by_tag=yes
group_by_os_family=yes
Examples
........
@ -402,6 +404,12 @@ Here are some examples using the inventory script:
# Execute /bin/uname on all instances in the Testing resource group
$ ansible -i azure_rm.py Testing -m shell -a "/bin/uname -a"
# Execute win_ping on all Windows instances
$ ansible -i azure_rm.py windows -m win_ping
# Execute win_ping on all Windows instances
$ ansible -i azure_rm.py winux -m ping
# Use the inventory script to print instance specific information
$ ./ansible/contrib/inventory/azure_rm.py --host my_instance_host_name --resource-groups=Testing --pretty


@ -123,7 +123,7 @@ VAULT_VERSION_MAX = 1.0
# object. The dictionary values are tuples, to account for aliases
# in variable names.
COMMON_CONNECTION_VARS = frozenset(set(('ansible_connection', 'ansbile_host', 'ansible_user', 'ansible_shell_executable',
COMMON_CONNECTION_VARS = frozenset(set(('ansible_connection', 'ansible_host', 'ansible_user', 'ansible_shell_executable',
'ansible_port', 'ansible_pipelining', 'ansible_password', 'ansible_timeout',
'ansible_shell_type', 'ansible_module_compression', 'ansible_private_key_file')))


@ -10,14 +10,15 @@ import time
import json
import subprocess
import sys
import termios
import traceback
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleParserError, AnsibleUndefinedVariable, AnsibleConnectionFailure, AnsibleActionFail, AnsibleActionSkip
from ansible.executor.task_result import TaskResult
from ansible.module_utils.six import iteritems, string_types, binary_type
from ansible.module_utils.six.moves import cPickle
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.connection import write_to_file_descriptor
from ansible.playbook.conditional import Conditional
from ansible.playbook.task import Task
from ansible.template import Templar
@ -920,28 +921,24 @@ class TaskExecutor:
[python, find_file_in_path('ansible-connection'), to_text(os.getppid())],
stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE
)
stdin = os.fdopen(master, 'wb', 0)
os.close(slave)
# Need to force a protocol that is compatible with both py2 and py3.
# That would be protocol=2 or less.
# Also need to force a protocol that excludes certain control chars as
# stdin in this case is a pty and control chars will cause problems.
# that means only protocol=0 will work.
src = cPickle.dumps(self._play_context.serialize(), protocol=0)
stdin.write(src)
stdin.write(b'\n#END_INIT#\n')
# We need to set the pty into noncanonical mode. This ensures that we
# can receive lines longer than 4095 characters (plus newline) without
# truncating.
old = termios.tcgetattr(master)
new = termios.tcgetattr(master)
new[3] = new[3] & ~termios.ICANON
src = cPickle.dumps(variables, protocol=0)
# remaining \r fail to round-trip the socket
src = src.replace(b'\r', br'\r')
stdin.write(src)
stdin.write(b'\n#END_VARS#\n')
try:
termios.tcsetattr(master, termios.TCSANOW, new)
write_to_file_descriptor(master, variables)
write_to_file_descriptor(master, self._play_context.serialize())
stdin.flush()
(stdout, stderr) = p.communicate()
stdin.close()
(stdout, stderr) = p.communicate()
finally:
termios.tcsetattr(master, termios.TCSANOW, old)
os.close(master)
if p.returncode == 0:
result = json.loads(to_text(stdout, errors='surrogate_then_replace'))


@ -27,6 +27,7 @@
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import os
import hashlib
import json
import socket
import struct
@ -36,6 +37,30 @@ import uuid
from functools import partial
from ansible.module_utils._text import to_bytes, to_text
from ansible.module_utils.six import iteritems
from ansible.module_utils.six.moves import cPickle
def write_to_file_descriptor(fd, obj):
"""Handles making sure all data is properly written to file descriptor fd.
In particular, that data is encoded in a character stream-friendly way and
that all data gets written before returning.
"""
# Need to force a protocol that is compatible with both py2 and py3.
# That would be protocol=2 or less.
# Also need to force a protocol that excludes certain control chars as
# stdin in this case is a pty and control chars will cause problems.
# that means only protocol=0 will work.
src = cPickle.dumps(obj, protocol=0)
# raw \r characters will not survive pty round-trip
# They should be rehydrated on the receiving end
src = src.replace(b'\r', br'\r')
data_hash = to_bytes(hashlib.sha1(src).hexdigest())
os.write(fd, b'%d\n' % len(src))
os.write(fd, src)
os.write(fd, b'%s\n' % data_hash)
def send_data(s, data):


@ -132,8 +132,7 @@ def to_commands(module, commands):
def run_commands(module, commands, check_rc=True):
connection = get_connection(module)
try:
out = connection.run_commands(commands=commands, check_rc=check_rc)
return out
return connection.run_commands(commands=commands, check_rc=check_rc)
except ConnectionError as exc:
module.fail_json(msg=to_text(exc))


@ -199,8 +199,7 @@ def build_xml_subtree(container_ele, xmap, param=None, opcode=None):
def build_xml(container, xmap=None, params=None, opcode=None):
'''
"""
Builds netconf xml rpc document from meta-data
Args:
@ -240,8 +239,7 @@ def build_xml(container, xmap=None, params=None, opcode=None):
</banners>
</config>
:returns: xml rpc document as a string
'''
"""
if opcode == 'filter':
root = etree.Element("filter", type="subtree")
elif opcode in ('delete', 'merge'):
@ -285,30 +283,17 @@ def etree_findall(root, node):
def is_cliconf(module):
capabilities = get_device_capabilities(module)
network_api = capabilities.get('network_api')
if network_api not in ('cliconf', 'netconf'):
module.fail_json(msg=('unsupported network_api: {!s}'.format(network_api)))
return False
if network_api == 'cliconf':
return True
return False
return True if capabilities.get('network_api') == 'cliconf' else False
def is_netconf(module):
capabilities = get_device_capabilities(module)
network_api = capabilities.get('network_api')
if network_api not in ('cliconf', 'netconf'):
module.fail_json(msg=('unsupported network_api: {!s}'.format(network_api)))
return False
if network_api == 'netconf':
if not HAS_NCCLIENT:
module.fail_json(msg=('ncclient is not installed'))
module.fail_json(msg='ncclient is not installed')
if not HAS_XML:
module.fail_json(msg=('lxml is not installed'))
module.fail_json(msg='lxml is not installed')
return True
return False
@ -348,12 +333,15 @@ def commit_config(module, comment=None, confirmed=False, confirm_timeout=None,
conn = get_connection(module)
reply = None
try:
if check:
reply = conn.validate()
else:
if is_netconf(module):
if is_netconf(module):
if check:
reply = conn.validate()
else:
reply = conn.commit(confirmed=confirmed, timeout=confirm_timeout, persist=persist)
elif is_cliconf(module):
elif is_cliconf(module):
if check:
module.fail_json(msg="Validate configuration is not supported with network_cli connection type")
else:
reply = conn.commit(comment=comment, label=label)
except ConnectionError as exc:
module.fail_json(msg=to_text(exc, errors='surrogate_then_replace'))
@ -380,10 +368,10 @@ def get_config(module, config_filter=None, source='running'):
# Note: Does not cache config in favour of latest config on every get operation.
try:
out = conn.get_config(source=source, filter=config_filter)
if is_netconf(module):
out = to_xml(conn.get_config(source=source, filter=config_filter))
elif is_cliconf(module):
out = conn.get_config(source=source, flags=config_filter)
cfg = out.strip()
except ConnectionError as exc:
module.fail_json(msg=to_text(exc, errors='surrogate_then_replace'))
@ -429,10 +417,6 @@ def load_config(module, command_filter, commit=False, replace=False,
pass
elif is_cliconf(module):
# to keep the pre-cliconf behaviour, make a copy, avoid adding commands to input list
cmd_filter = deepcopy(command_filter)
# If label is present check if label already exist before entering
# config mode
try:
if label:
old_label = check_existing_commit_labels(conn, label)
@ -442,67 +426,22 @@ def load_config(module, command_filter, commit=False, replace=False,
' an earlier commit, please choose a different label'
' and rerun task' % label
)
cmd_filter.insert(0, 'configure terminal')
if admin:
cmd_filter.insert(0, 'admin')
conn.edit_config(cmd_filter)
response = conn.edit_config(candidate=command_filter, commit=commit, admin=admin, replace=replace, comment=comment, label=label)
if module._diff:
diff = get_config_diff(module)
if replace:
cmd = list()
cmd.append({'command': 'commit replace',
'prompt': 'This commit will replace or remove the entire running configuration',
'answer': 'yes'})
cmd.append('end')
conn.edit_config(cmd)
elif commit:
commit_config(module, comment=comment, label=label)
conn.edit_config('end')
if admin:
conn.edit_config('exit')
else:
conn.discard_changes()
diff = response.get('diff')
except ConnectionError as exc:
module.fail_json(msg=to_text(exc, errors='surrogate_then_replace'))
return diff
def run_command(module, commands):
conn = get_connection(module)
responses = list()
for cmd in to_list(commands):
try:
if isinstance(cmd, str):
cmd = json.loads(cmd)
command = cmd.get('command', None)
prompt = cmd.get('prompt', None)
answer = cmd.get('answer', None)
sendonly = cmd.get('sendonly', False)
newline = cmd.get('newline', True)
except:
command = cmd
prompt = None
answer = None
sendonly = False
newline = True
try:
out = conn.get(command=command, prompt=prompt, answer=answer, sendonly=sendonly, newline=newline)
except ConnectionError as exc:
module.fail_json(msg=to_text(exc))
try:
out = to_text(out, errors='surrogate_or_strict')
except UnicodeError:
module.fail_json(msg=u'Failed to decode output from {0}: {1}'.format(cmd, to_text(out)))
responses.append(out)
return responses
def run_commands(module, commands, check_rc=True):
connection = get_connection(module)
try:
return connection.run_commands(commands=commands, check_rc=check_rc)
except ConnectionError as exc:
module.fail_json(msg=to_text(exc))
def copy_file(module, src, dst, proto='scp'):


@ -0,0 +1,34 @@
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# (c) 2018 Luca 'remix_tj' Lorenzetto
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
emc_vnx_argument_spec = {
'sp_address': dict(type='str', required=True),
'sp_user': dict(type='str', required=False, default='sysadmin'),
'sp_password': dict(type='str', required=False, default='sysadmin',
no_log=True),
}


@ -114,6 +114,7 @@ EXAMPLES = '''
state: present
purge_subscriptions: False
register: topic_info
- name: Deliver feedback to topic instead of owner email
aws_ses_identity:
identity: example@example.com
@ -134,6 +135,7 @@ EXAMPLES = '''
state: present
purge_subscriptions: False
register: topic_info
- name: Delivery notifications to topic
aws_ses_identity:
identity: example@example.com


@ -163,6 +163,7 @@ EXAMPLES = '''
query: hosted_zone
max_items: 1
register: first_facts
- name: example for using next_marker
route53_facts:
query: hosted_zone


@ -416,6 +416,7 @@ class AzureRMDeploymentManager(AzureRMModuleBase):
self.wait_for_deployment_completion = None
self.wait_for_deployment_polling_period = None
self.tags = None
self.append_tags = None
self.results = dict(
deployment=dict(),
@ -429,7 +430,7 @@ class AzureRMDeploymentManager(AzureRMModuleBase):
def exec_module(self, **kwargs):
for key in list(self.module_arg_spec.keys()) + ['tags']:
for key in list(self.module_arg_spec.keys()) + ['append_tags', 'tags']:
setattr(self, key, kwargs[key])
if self.state == 'present':
@ -454,10 +455,14 @@ class AzureRMDeploymentManager(AzureRMModuleBase):
self.results['changed'] = True
self.results['msg'] = 'deployment succeeded'
else:
if self.resource_group_exists(self.resource_group_name):
self.destroy_resource_group()
self.results['changed'] = True
self.results['msg'] = "deployment deleted"
try:
if self.get_resource_group(self.resource_group_name):
self.destroy_resource_group()
self.results['changed'] = True
self.results['msg'] = "deployment deleted"
except CloudError:
# resource group does not exist
pass
return self.results
@ -484,6 +489,15 @@ class AzureRMDeploymentManager(AzureRMModuleBase):
uri=self.template_link
)
if self.append_tags and self.tags:
try:
rg = self.get_resource_group(self.resource_group_name)
if rg.tags:
self.tags = dict(self.tags, **rg.tags)
except CloudError:
# resource group does not exist
pass
params = self.rm_models.ResourceGroup(location=self.location, tags=self.tags)
try:
@ -531,19 +545,6 @@ class AzureRMDeploymentManager(AzureRMModuleBase):
self.fail("Delete resource group and deploy failed with status code: %s and message: %s" %
(e.status_code, e.message))
def resource_group_exists(self, resource_group):
'''
Return True/False based on existence of requested resource group.
:param resource_group: string. Name of a resource group.
:return: boolean
'''
try:
self.rm_client.resource_groups.get(resource_group)
except CloudError:
return False
return True
def _get_failed_nested_operations(self, current_operations):
new_operations = []
for operation in current_operations:

View file

@ -64,6 +64,10 @@ EXAMPLES = '''
- name: Get facts for all load balancers
azure_rm_loadbalancer_facts:
- name: Get facts for all load balancers in a specific resource group
azure_rm_loadbalancer_facts:
resource_group: TestRG
- name: Get facts by tags
azure_rm_loadbalancer_facts:
tags:
@ -152,10 +156,16 @@ class AzureRMLoadBalancerFacts(AzureRMModuleBase):
self.log('List all load balancers')
try:
response = self.network_client.load_balancers.list()
except AzureHttpError as exc:
self.fail('Failed to list all items - {}'.format(str(exc)))
if self.resource_group:
try:
response = self.network_client.load_balancers.list(self.resource_group)
except AzureHttpError as exc:
self.fail('Failed to list items in resource group {} - {}'.format(self.resource_group, str(exc)))
else:
try:
response = self.network_client.load_balancers.list_all()
except AzureHttpError as exc:
self.fail('Failed to list all items - {}'.format(str(exc)))
results = []
for item in response:


@ -95,6 +95,7 @@ options:
EXAMPLES = '''
# Start a server (if it does not exist) and register the server details
- name: Start cloudscale.ch server
cloudscale_server:
name: my-shiny-cloudscale-server


@ -87,6 +87,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/devstorage.full_control
state: present
register: bucket
- name: create a backend bucket
gcp_compute_backend_bucket:
name: testObject


@ -247,6 +247,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: instancegroup
- name: create a http health check
gcp_compute_http_health_check:
name: 'httphealthcheck-backendservice'
@ -261,6 +262,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: healthcheck
- name: create a backend service
gcp_compute_backend_service:
name: testObject

View file

@ -161,6 +161,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: address
- name: create a target pool
gcp_compute_target_pool:
name: 'targetpool-forwardingrule'
@ -172,6 +173,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: targetpool
- name: create a forwarding rule
gcp_compute_forwarding_rule:
name: testObject


@ -160,6 +160,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: globaladdress
- name: create a instance group
gcp_compute_instance_group:
name: 'instancegroup-globalforwardingrule'
@ -171,6 +172,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: instancegroup
- name: create a http health check
gcp_compute_http_health_check:
name: 'httphealthcheck-globalforwardingrule'
@ -185,6 +187,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: healthcheck
- name: create a backend service
gcp_compute_backend_service:
name: 'backendservice-globalforwardingrule'
@ -200,6 +203,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: backendservice
- name: create a url map
gcp_compute_url_map:
name: 'urlmap-globalforwardingrule'
@ -211,6 +215,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: urlmap
- name: create a target http proxy
gcp_compute_target_http_proxy:
name: 'targethttpproxy-globalforwardingrule'
@ -222,6 +227,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: httpproxy
- name: create a global forwarding rule
gcp_compute_global_forwarding_rule:
name: testObject


@ -191,6 +191,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: disk
- name: create a image
gcp_compute_image:
name: testObject


@ -364,6 +364,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: disk
- name: create a network
gcp_compute_network:
name: 'network-instance'
@ -374,6 +375,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: network
- name: create a address
gcp_compute_address:
name: 'address-instance'
@ -385,6 +387,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: address
- name: create a instance
gcp_compute_instance:
name: testObject


@ -108,6 +108,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: network
- name: create a instance group
gcp_compute_instance_group:
name: testObject


@ -116,6 +116,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: network
- name: create a address
gcp_compute_address:
name: 'address-instancetemplate'
@ -127,6 +128,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: address
- name: create a instance template
gcp_compute_instance_template:
name: "{{ resource_name }}"
@ -150,6 +152,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: instancetemplate
- name: create a instance group manager
gcp_compute_instance_group_manager:
name: testObject


@ -365,6 +365,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: network
- name: create a address
gcp_compute_address:
name: 'address-instancetemplate'
@ -376,6 +377,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: address
- name: create a instance template
gcp_compute_instance_template:
name: testObject


@ -130,6 +130,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: network
- name: create a route
gcp_compute_route:
name: testObject


@ -117,6 +117,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: network
- name: create a subnetwork
gcp_compute_subnetwork:
name: 'ansiblenet'


@ -79,6 +79,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: instancegroup
- name: create a http health check
gcp_compute_http_health_check:
name: 'httphealthcheck-targethttpproxy'
@ -93,6 +94,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: healthcheck
- name: create a backend service
gcp_compute_backend_service:
name: 'backendservice-targethttpproxy'
@ -108,6 +110,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: backendservice
- name: create a url map
gcp_compute_url_map:
name: 'urlmap-targethttpproxy'
@ -119,6 +122,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: urlmap
- name: create a target http proxy
gcp_compute_target_http_proxy:
name: testObject


@ -84,6 +84,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: instancegroup
- name: create a http health check
gcp_compute_http_health_check:
name: 'httphealthcheck-targethttpsproxy'
@ -98,6 +99,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: healthcheck
- name: create a backend service
gcp_compute_backend_service:
name: 'backendservice-targethttpsproxy'
@ -113,6 +115,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: backendservice
- name: create a url map
gcp_compute_url_map:
name: 'urlmap-targethttpsproxy'
@ -124,6 +127,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: urlmap
- name: create a ssl certificate
gcp_compute_ssl_certificate:
name: 'sslcert-targethttpsproxy'
@ -160,6 +164,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: sslcert
- name: create a target https proxy
gcp_compute_target_https_proxy:
name: testObject


@ -90,6 +90,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: instancegroup
- name: create a health check
gcp_compute_health_check:
name: 'healthcheck-targetsslproxy'
@ -108,6 +109,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: healthcheck
- name: create a backend service
gcp_compute_backend_service:
name: 'backendservice-targetsslproxy'
@ -123,6 +125,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: backendservice
- name: create a ssl certificate
gcp_compute_ssl_certificate:
name: 'sslcert-targetsslproxy'
@ -159,6 +162,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: sslcert
- name: create a target ssl proxy
gcp_compute_target_ssl_proxy:
name: testObject


@ -85,6 +85,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: instancegroup
- name: create a health check
gcp_compute_health_check:
name: 'healthcheck-targettcpproxy'
@ -103,6 +104,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: healthcheck
- name: create a backend service
gcp_compute_backend_service:
name: 'backendservice-targettcpproxy'
@ -118,6 +120,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: backendservice
- name: create a target tcp proxy
gcp_compute_target_tcp_proxy:
name: testObject


@ -155,6 +155,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: instancegroup
- name: create a http health check
gcp_compute_http_health_check:
name: 'httphealthcheck-urlmap'
@ -169,6 +170,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: healthcheck
- name: create a backend service
gcp_compute_backend_service:
name: 'backendservice-urlmap'
@ -184,6 +186,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/compute
state: present
register: backendservice
- name: create a url map
gcp_compute_url_map:
name: testObject


@ -221,6 +221,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/cloud-platform
state: present
register: cluster
- name: create a node pool
gcp_container_node_pool:
name: testObject


@ -89,6 +89,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/ndev.clouddns.readwrite
state: present
register: managed_zone
- name: create a resource record set
gcp_dns_resource_record_set:
name: 'www.testzone-4.com.'


@ -98,6 +98,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/pubsub
state: present
register: topic
- name: create a subscription
gcp_pubsub_subscription:
name: testObject


@ -108,6 +108,7 @@ EXAMPLES = '''
- https://www.googleapis.com/auth/devstorage.full_control
state: present
register: bucket
- name: create a bucket access control
gcp_storage_bucket_access_control:
bucket: "{{ bucket }}"


@ -128,7 +128,7 @@ result:
'''
try:
from pyVmomi import vim, vmodl
from pyVmomi import vim
except ImportError as e:
pass


@ -77,7 +77,7 @@ licenses:
'''
try:
from pyVmomi import vim, vmodl
from pyVmomi import vim
HAS_PYVMOMI = True
except ImportError:
HAS_PYVMOMI = False


@ -81,7 +81,7 @@ dest_file:
import os
try:
from pyVmomi import vim, vmodl
from pyVmomi import vim
except ImportError:
pass


@ -78,11 +78,6 @@ result:
sample: "Datastore cluster 'DSC2' created successfully."
"""
try:
from pyVmomi import vim, vmodl
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.vmware import PyVmomi, vmware_argument_spec, wait_for_task
from ansible.module_utils._text import to_native


@ -107,7 +107,7 @@ drs_rule_facts:
'''
try:
from pyVmomi import vim, vmodl
from pyVmomi import vim
except ImportError:
pass


@ -325,6 +325,14 @@ options:
- This parameter is case sensitive.
- If set, then overrides C(customization) parameter values.
version_added: '2.6'
datastore:
description:
- Specify datastore or datastore cluster to provision virtual machine.
- 'This will take precedence over "disk.datastore" parameter.'
- This parameter is useful to override datastore or datastore cluster setting.
- For example, when user has different datastore or datastore cluster for templates and virtual machines.
- Please see example for more usage.
version_added: '2.7'
extends_documentation_fragment: vmware.documentation
'''
@ -508,6 +516,22 @@ EXAMPLES = r'''
uuid: "{{ vm_uuid }}"
state: poweredoff
delegate_to: localhost
- name: Deploy a virtual machine in a datastore different from the datastore of the template
vmware_guest:
hostname: "{{ vcenter_hostname }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
name: "{{ vm_name }}"
state: present
template: "{{ template_name }}"
# Here datastore can be different which holds template
datastore: "{{ virtual_machine_datastore }}"
hardware:
memory_mb: 512
num_cpus: 2
scsi: paravirtual
delegate_to: localhost
'''
RETURN = r'''
@ -523,9 +547,7 @@ import time
HAS_PYVMOMI = False
try:
import pyVmomi
from pyVmomi import vim, vmodl
HAS_PYVMOMI = True
except ImportError:
pass
@ -1989,7 +2011,19 @@ class PyVmomiHelper(PyVmomi):
resource_pool = self.get_resource_pool()
# set the destination datastore for VM & disks
(datastore, datastore_name) = self.select_datastore(vm_obj)
if self.params['datastore']:
# Give precedence to datastore value provided by user
# User may want to deploy VM to specific datastore.
datastore_name = self.params['datastore']
# Check if user has provided datastore cluster first
datastore_cluster = self.cache.find_obj(self.content, [vim.StoragePod], datastore_name)
if datastore_cluster:
# If user specified datastore cluster so get recommended datastore
datastore_name = self.get_recommended_datastore(datastore_cluster_obj=datastore_cluster)
# Check if get_recommended_datastore or user specified datastore exists or not
datastore = self.cache.find_obj(self.content, [vim.Datastore], datastore_name)
else:
(datastore, datastore_name) = self.select_datastore(vm_obj)
self.configspec = vim.vm.ConfigSpec()
self.configspec.deviceChange = []
@ -2297,6 +2331,7 @@ def main():
customization=dict(type='dict', default={}, no_log=True),
customization_spec=dict(type='str', default=None),
vapp_properties=dict(type='list', default=[]),
datastore=dict(type='str'),
)
module = AnsibleModule(argument_spec=argument_spec,


@ -87,12 +87,6 @@ instance:
sample: None
"""
try:
import pyVmomi
from pyVmomi import vim
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_text
from ansible.module_utils.vmware import PyVmomi, vmware_argument_spec


@ -152,15 +152,9 @@ instance:
}
"""
try:
import pyVmomi
from pyVmomi import vim
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_native
from ansible.module_utils.vmware import PyVmomi, vmware_argument_spec, connect_to_api, wait_for_task
from ansible.module_utils.vmware import PyVmomi, vmware_argument_spec, wait_for_task
class PyVmomiHelper(PyVmomi):


@ -221,7 +221,6 @@ instance:
import time
try:
import pyVmomi
from pyVmomi import vim
except ImportError:
pass


@ -110,13 +110,6 @@ from ansible.module_utils._text import to_native
from ansible.module_utils.vmware import PyVmomi, gather_vm_facts, vmware_argument_spec
try:
import pyVmomi
from pyVmomi import vim
except ImportError:
pass
class PyVmomiHelper(PyVmomi):
def __init__(self, module):
super(PyVmomiHelper, self).__init__(module)


@ -265,7 +265,7 @@ class VMwareHost(PyVmomi):
host_connect_spec.sslThumbprint = task_error.thumbprint
else:
self.module.fail_json(msg="Failed to add host %s to vCenter: %s" % (self.esxi_hostname,
to_native(task_error.msg)))
to_native(task_error)))
except vmodl.fault.NotSupported:
self.module.fail_json(msg="Failed to add host %s to vCenter as host is"
" being added to a folder %s whose childType"


@ -103,7 +103,7 @@ facts:
'''
try:
from pyVmomi import vim, vmodl
from pyVmomi import vim
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule


@ -107,7 +107,7 @@ from ansible.module_utils.basic import AnsibleModule, bytes_to_human
from ansible.module_utils.vmware import PyVmomi, vmware_argument_spec, find_obj
try:
from pyVmomi import vim, vmodl
from pyVmomi import vim
except ImportError:
pass


@ -110,7 +110,7 @@ rule_set_state:
'''
try:
from pyVmomi import vim, vmodl
from pyVmomi import vim
except ImportError:
pass


@ -122,7 +122,7 @@ results:
'''
try:
from pyvmomi import vim, vmodl
from pyvmomi import vim
except ImportError:
pass


@ -94,7 +94,7 @@ RETURN = r'''#
'''
try:
from pyVmomi import vim, vmodl
from pyVmomi import vim
except ImportError:
pass


@ -111,11 +111,6 @@ result:
}
'''
try:
from pyVmomi import vim, vmodl
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.vmware import PyVmomi, vmware_argument_spec, wait_for_task, TaskError
from ansible.module_utils._text import to_native


@ -66,11 +66,6 @@ local_user_facts:
]
'''
try:
from pyVmomi import vim, vmodl
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.vmware import PyVmomi, vmware_argument_spec


@ -78,7 +78,7 @@ resource_pool_facts:
'''
try:
from pyVmomi import vim, vmodl
from pyVmomi import vim
except ImportError:
pass


@ -109,11 +109,6 @@ scsi_tgt_facts:
}
"""
try:
from pyVmomi import vim, vmodl
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.vmware import PyVmomi, vmware_argument_spec


@ -87,7 +87,7 @@ virtual_machines:
'''
try:
from pyVmomi import vim, vmodl
from pyVmomi import vim
except ImportError:
pass


@ -251,7 +251,10 @@ def main():
name=switch,
nodeId=node_id,
podId=pod_id,
rn='nodep-{0}'.format(serial),
# NOTE: Originally we were sending 'rn', but now we need 'dn' for idempotency
# FIXME: Did this change with ACI version ?
dn='uni/controller/nodeidentpol/nodep-{0}'.format(serial),
# rn='nodep-{0}'.format(serial),
role=role,
serial=serial,
)


@ -312,8 +312,11 @@ def main():
pod_id = module.params['pod_id']
leafs = module.params['leafs']
if leafs is not None:
# Users are likely to use integers for leaf IDs, which would raise an exception when using the join method
leafs = [str(leaf) for leaf in module.params['leafs']]
# Process leafs, and support dash-delimited leafs
leafs = []
for leaf in module.params['leafs']:
# Users are likely to use integers for leaf IDs, which would raise an exception when using the join method
leafs.extend(str(leaf).split('-'))
if len(leafs) == 1:
if interface_type != 'vpc':
leafs = leafs[0]


@ -87,6 +87,7 @@ EXAMPLES = '''
name: "my-logical-device"
state: present
register: logical_device
- name: "Save Logical Device into a JSON file 2/3"
copy:
content: "{{ logical_device.value | to_nice_json }}"


@ -87,6 +87,7 @@ EXAMPLES = '''
name: "my-rack-type"
state: present
register: rack_type
- name: "Save Rack Type into a JSON file 2/3"
copy:
content: "{{ rack_type.value | to_nice_json }}"


@ -96,6 +96,7 @@ EXAMPLES = '''
name: "my-template"
state: present
register: template
- name: "Save Template into a JSON file 2/3"
copy:
content: "{{ template.value | to_nice_json }}"


@ -16,212 +16,113 @@ DOCUMENTATION = """
module: cli_command
version_added: "2.7"
author: "Nathaniel Case (@qalthos)"
short_description: Run arbitrary commands on cli-based network devices
short_description: Run a cli command on cli-based network devices
description:
- Sends an arbitrary set of commands to a network device and returns the
results read from the device. This module includes an argument that
will cause the module to wait for a specific condition before returning
or timing out if the condition is not met.
notes:
- Tested against EOS 4.15
- Sends a command to a network device and returns the result read from the device.
options:
commands:
command:
description:
- The commands to send to the remote EOS device over the
configured provider. The resulting output from the command
is returned. If the I(wait_for) argument is provided, the
module is not returned until the condition is satisfied or
the number of I(retries) has been exceeded.
- The command to send to the remote network device. The resulting output
from the command is returned, unless I(sendonly) is set.
required: true
wait_for:
prompt:
description:
- Specifies what to evaluate from the output of the command
and what conditionals to apply. This argument will cause
the task to wait for a particular conditional to be true
before moving forward. If the conditional is not true
by the configured retries, the task fails.
Note - With I(wait_for) the value in C(result['stdout']) can be accessed
using C(result), that is to access C(result['stdout'][0]) use C(result[0]) See examples.
aliases: ['waitfor']
version_added: "2.2"
match:
- A single regex pattern or a sequence of patterns to evaluate the expected
prompt from I(command).
required: false
answer:
description:
- The I(match) argument is used in conjunction with the
I(wait_for) argument to specify the match policy. Valid
values are C(all) or C(any). If the value is set to C(all)
then all conditionals in the I(wait_for) must be satisfied. If
the value is set to C(any) then only one of the values must be
satisfied.
default: all
choices: ['any', 'all']
version_added: "2.2"
retries:
- The answer to reply with if I(prompt) is matched.
required: false
sendonly:
description:
- Specifies the number of retries a command should be tried
before it is considered failed. The command is run on the
target device every retry and evaluated against the I(wait_for)
conditionals.
default: 10
interval:
description:
- Configures the interval in seconds to wait between retries
of the command. If the command does not pass the specified
conditional, the interval indicates how to long to wait before
trying the command again.
default: 1
- The boolean value, that when set to true will send I(command) to the
device but not wait for a result.
type: bool
default: false
required: false
"""
EXAMPLES = """
- name: run show version on remote devices
cli_command:
commands: show version
command: show version
- name: run show version and check to see if output contains Arista
- name: run command with json formatted output
cli_command:
commands: show version
wait_for: result[0] contains Arista
command: show version | json
- name: run multiple commands on remote nodes
- name: run command expecting user confirmation
cli_command:
commands:
- show version
- show interfaces
- name: run multiple commands and evaluate the output
cli_command:
commands:
- show version
- show interfaces
wait_for:
- result[0] contains Arista
- result[1] contains Loopback0
- name: run commands and specify the output format
cli_command:
commands:
- command: show version
output: json
command: commit replace
prompt: This commit will replace or remove the entire running configuration
answer: yes
"""
RETURN = """
stdout:
description: The set of responses from the commands
returned: always apart from low level errors (such as action plugin)
type: list
sample: ['...', '...']
stdout_lines:
description: The value of stdout split into a list
returned: always apart from low level errors (such as action plugin)
type: list
sample: [['...', '...'], ['...'], ['...']]
failed_conditions:
description: The list of conditionals that have failed
returned: failed
type: list
sample: ['...', '...']
description: The response from the command
returned: when sendonly is false
type: string
sample: 'Version: VyOS 1.1.7[...]'
json:
description: A dictionary representing a JSON-formatted response
returned: when the device response is valid JSON
type: dict
sample: |
{
"architecture": "i386",
"bootupTimestamp": 1532649700.56,
"modelName": "vEOS",
"version": "4.15.9M"
[...]
}
"""
import time
from ansible.module_utils._text import to_text
from ansible.module_utils.six import string_types
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.connection import Connection, ConnectionError
from ansible.module_utils.network.common.parsing import Conditional
from ansible.module_utils.network.common.utils import ComplexList
VALID_KEYS = ['command', 'output', 'prompt', 'response']
def to_lines(output):
lines = []
for item in output:
if isinstance(item, string_types):
item = to_text(item).split('\n')
lines.append(item)
return lines
def parse_commands(module, warnings):
transform = ComplexList(dict(
command=dict(key=True),
output=dict(),
prompt=dict(),
answer=dict()
), module)
commands = transform(module.params['commands'])
if module.check_mode:
for item in list(commands):
if not item['command'].startswith('show'):
warnings.append(
'Only show commands are supported when using check_mode, not '
'executing %s' % item['command']
)
commands.remove(item)
return commands
def main():
"""entry point for module execution
"""
argument_spec = dict(
commands=dict(type='list', required=True),
wait_for=dict(type='list', aliases=['waitfor']),
match=dict(default='all', choices=['all', 'any']),
retries=dict(default=10, type='int'),
interval=dict(default=1, type='int')
command=dict(type='str', required=True),
prompt=dict(type='list', required=False),
answer=dict(type='str', required=False),
sendonly=dict(type='bool', default=False, required=False),
)
module = AnsibleModule(argument_spec=argument_spec,
required_together = [['prompt', 'response']]
module = AnsibleModule(argument_spec=argument_spec, required_together=required_together,
supports_check_mode=True)
if module.check_mode and not module.params['command'].startswith('show'):
module.fail_json(
msg='Only show commands are supported when using check_mode, not '
'executing %s' % module.params['command']
)
warnings = list()
result = {'changed': False, 'warnings': warnings}
wait_for = module.params['wait_for'] or list()
try:
conditionals = [Conditional(c) for c in wait_for]
except AttributeError as exc:
module.fail_json(msg=to_text(exc))
commands = parse_commands(module, warnings)
retries = module.params['retries']
interval = module.params['interval']
match = module.params['match']
connection = Connection(module._socket_path)
for attempt in range(retries):
responses = []
response = ''
try:
response = connection.get(**module.params)
except ConnectionError as exc:
module.fail_json(msg=to_text(exc, errors='surrogate_then_replace'))
if not module.params['sendonly']:
try:
for command in commands:
responses.append(connection.get(**command))
except ConnectionError as exc:
module.fail_json(msg=to_text(exc, errors='surrogate_then_replace'))
result['json'] = module.from_json(response)
except ValueError:
pass
for item in list(conditionals):
if item(responses):
if match == 'any':
conditionals = list()
break
conditionals.remove(item)
if not conditionals:
break
time.sleep(interval)
if conditionals:
failed_conditions = [item.raw for item in conditionals]
msg = 'One or more conditional statements have not been satisfied'
module.fail_json(msg=msg, failed_conditions=failed_conditions)
result.update({
'stdout': responses,
'stdout_lines': to_lines(responses)
})
result.update({
'stdout': response,
})
module.exit_json(**result)
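
A minimal, dependency-free sketch of the retry/conditional pattern used in main() above; plain callables stand in for ansible's Conditional objects and the command runner is a placeholder you would supply:

import time

def evaluate_conditionals(run_commands, conditionals, match='all', retries=10, interval=1):
    """Re-run the commands until the wait_for conditionals are satisfied."""
    pending = list(conditionals)
    for _ in range(retries):
        responses = run_commands()
        for item in list(pending):
            if item(responses):
                if match == 'any':
                    return responses      # one satisfied conditional is enough
                pending.remove(item)      # match == 'all': tick this one off
        if not pending:
            return responses
        time.sleep(interval)
    raise RuntimeError('One or more conditional statements have not been satisfied')

# Hypothetical usage: wait until the first response mentions 'Arista'
# evaluate_conditionals(lambda: ['Arista vEOS ...'], [lambda r: 'Arista' in r[0]])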

View file

@ -84,6 +84,7 @@ EXAMPLES = '''
commands:
- show interface swp1
register: output
- name: Print Status Of Interface
debug:
var: output
@ -93,6 +94,7 @@ EXAMPLES = '''
commands:
- show interface json
register: output
- name: Print Interface Details
debug:
var: output["msg"]
@ -126,6 +128,7 @@ EXAMPLES = '''
commands:
- show bgp summary json
register: output
- name: Print BGP Status In JSON
debug:
var: output["msg"]

View file

@ -61,7 +61,7 @@ EXAMPLES = """
src: running_cfg_ios1.txt
- name: copy file from ios to common location at /tmp
network_put:
net_get:
src: running_cfg_sw1.txt
dest: /tmp/ios1.txt
"""

View file

@ -122,7 +122,7 @@ failed_conditions:
import time
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.network.iosxr.iosxr import run_command, iosxr_argument_spec
from ansible.module_utils.network.iosxr.iosxr import run_commands, iosxr_argument_spec
from ansible.module_utils.network.iosxr.iosxr import command_spec
from ansible.module_utils.network.common.parsing import Conditional
from ansible.module_utils.six import string_types
@ -188,7 +188,7 @@ def main():
match = module.params['match']
while retries > 0:
responses = run_command(module, commands)
responses = run_commands(module, commands)
for item in list(conditionals):
if item(responses):

View file

@ -366,6 +366,8 @@ def run(module, result):
running_config = get_running_config(module)
commands = None
replace_file_path = None
if match != 'none' and replace != 'config':
commands = candidate_config.difference(running_config, path=path, match=match, replace=replace)
elif replace_config:
@ -380,6 +382,7 @@ def run(module, result):
module.fail_json(msg='Copy of config file to the node failed')
commands = ['load harddisk:/ansible_config.txt']
replace_file_path = 'harddisk:/ansible_config.txt'
else:
commands = candidate_config.items
@ -399,7 +402,7 @@ def run(module, result):
commit = not check_mode
diff = load_config(
module, commands, commit=commit,
replace=replace_config, comment=comment, admin=admin,
replace=replace_file_path, comment=comment, admin=admin,
label=label
)
if diff:

View file

@ -118,7 +118,7 @@ ansible_net_neighbors:
import re
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.network.iosxr.iosxr import iosxr_argument_spec, run_command
from ansible.module_utils.network.iosxr.iosxr import iosxr_argument_spec, run_commands
from ansible.module_utils.six import iteritems
from ansible.module_utils.six.moves import zip
@ -407,7 +407,7 @@ def main():
try:
for inst in instances:
commands = inst.commands()
responses = run_command(module, commands)
responses = run_commands(module, commands)
results = dict(zip(commands, responses))
inst.populate(results)
facts.update(inst.facts)

View file

@ -195,7 +195,7 @@ import collections
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.network.iosxr.iosxr import get_config, load_config, build_xml
from ansible.module_utils.network.iosxr.iosxr import run_command, iosxr_argument_spec, get_oper
from ansible.module_utils.network.iosxr.iosxr import run_commands, iosxr_argument_spec, get_oper
from ansible.module_utils.network.iosxr.iosxr import is_netconf, is_cliconf, etree_findall, etree_find
from ansible.module_utils.network.common.utils import conditional, remove_default_spec
@ -382,7 +382,7 @@ class CliConfiguration(ConfigBase):
sleep(want_item['delay'])
command = 'show interfaces {!s}'.format(want_item['name'])
out = run_command(self._module, command)[0]
out = run_commands(self._module, command)[0]
if want_state in ('up', 'down'):
match = re.search(r'%s (\w+)' % 'line protocol is', out, re.M)

View file

@ -41,6 +41,18 @@ options:
description:
- Hostname or IP Address for remote logging (when dest is 'server').
version_added: '2.7'
use_vrf:
description:
- VRF to be used while configuring remote logging (when dest is 'server').
version_added: '2.7'
interface_type:
description:
- Type of interface to be used when configuring Source-Interface for logging (e.g., 'Ethernet', 'mgmt').
version_added: '2.7'
interface:
description:
- Interface number to be used when configuring Source-Interface for logging (e.g., '1/1', '1/3', '0').
version_added: '2.7'
name:
description:
- If value of C(dest) is I(logfile) it indicates file-name.
@ -91,6 +103,19 @@ EXAMPLES = """
facility: daemon
facility_level: 0
state: absent
- name: Configure Remote Logging
nxos_logging:
dest: server
remote_server: test-syslogserver.com
facility: auth
facility_level: 1
use_vrf: management
state: present
- name: Configure Source Interface for Logging
nxos_logging:
interface_type: mgmt
interface: 0
state: present
- name: Configure logging using aggregate
nxos_logging:
@ -144,6 +169,9 @@ def map_obj_to_commands(updates):
if w['dest'] == 'server':
commands.append('no logging server {}'.format(w['remote_server']))
if w['interface_type']:
commands.append('no logging source-interface')
if state == 'present' and w not in have:
if w['facility'] is None:
if w['dest']:
@ -154,26 +182,44 @@ def map_obj_to_commands(updates):
commands.append('logging logfile {} {}'.format(w['name'], w['dest_level']))
elif w['dest'] == 'server':
if w['dest_level']:
commands.append('logging server {0} {1}'.format(
w['remote_server'], w['dest_level']))
if w['facility_level']:
if w['use_vrf']:
commands.append('logging server {0} {1} use-vrf {2}'.format(
w['remote_server'], w['facility_level'], w['use_vrf']))
else:
commands.append('logging server {0} {1}'.format(
w['remote_server'], w['facility_level']))
else:
commands.append('logging server {0}'.format(w['remote_server']))
else:
pass
if w['use_vrf']:
commands.append('logging server {0} use-vrf {1}'.format(
w['remote_server'], w['use_vrf']))
else:
commands.append('logging server {0}'.format(w['remote_server']))
if w['facility']:
if w['dest'] == 'server':
if w['dest_level']:
commands.append('logging server {0} {1} facility {2}'.format(
w['remote_server'], w['dest_level'], w['facility']))
if w['facility_level']:
if w['use_vrf']:
commands.append('logging server {0} {1} facility {2} use-vrf {3}'.format(
w['remote_server'], w['facility_level'], w['facility'], w['use_vrf']))
else:
commands.append('logging server {0} {1} facility {2}'.format(
w['remote_server'], w['facility_level'], w['facility']))
else:
commands.append('logging server {0} facility {1}'.format(w['remote_server'],
w['facility']))
if w['use_vrf']:
commands.append('logging server {0} facility {1} use-vrf {2}'.format(
w['remote_server'], w['facility'], w['use_vrf']))
else:
commands.append('logging server {0} facility {1}'.format(w['remote_server'],
w['facility']))
else:
commands.append('logging level {} {}'.format(w['facility'],
w['facility_level']))
if w['interface_type']:
commands.append('logging source-interface {0} {1}'.format(w['interface_type'],
w['interface']))
return commands
@ -214,7 +260,7 @@ def parse_dest_level(line, dest, name):
pass
return level
if dest is not None:
if dest and dest != 'server':
if dest == 'logfile':
match = re.search(r'logging logfile {} (\S+)'.format(name), line, re.M)
if match:
@ -232,10 +278,15 @@ def parse_dest_level(line, dest, name):
return dest_level
def parse_facility_level(line, facility):
def parse_facility_level(line, facility, dest):
facility_level = None
if facility is not None:
if dest == 'server':
match = re.search(r'logging server (?:\S+) (\d+)', line, re.M)
if match:
facility_level = match.group(1)
elif facility is not None:
match = re.search(r'logging level {} (\S+)'.format(facility), line, re.M)
if match:
facility_level = match.group(1)
@ -253,6 +304,37 @@ def parse_facility(line):
return facility
def parse_use_vrf(line, dest):
use_vrf = None
if dest and dest == 'server':
match = re.search(r'logging server (?:\S+) (?:\d+) use-vrf (\S+)', line, re.M)
if match:
use_vrf = match.group(1)
return use_vrf
def parse_interface_type(line):
interface_type = None
match = re.search(r'logging source-interface (\D+)', line, re.M)
if match:
interface_type = match.group(1)
return interface_type
def parse_interface(line):
interface = None
match = re.search(r'logging source-interface (?:\D+)(\d*([/]?\d+))', line, re.M)
if match:
interface = match.group(1)
return interface
def map_config_to_obj(module):
obj = []
@ -280,10 +362,13 @@ def map_config_to_obj(module):
obj.append({'dest': dest,
'remote_server': parse_remote_server(line, dest),
'use_vrf': parse_use_vrf(line, dest),
'name': parse_name(line, dest),
'facility': facility,
'dest_level': parse_dest_level(line, dest, parse_name(line, dest)),
'facility_level': parse_facility_level(line, facility)})
'facility_level': parse_facility_level(line, facility, dest),
'interface_type': parse_interface_type(line),
'interface': parse_interface(line)})
cmd = [{'command': 'show logging | section enabled | section console', 'output': 'text'},
{'command': 'show logging | section enabled | section monitor', 'output': 'text'}]
@ -306,7 +391,10 @@ def map_config_to_obj(module):
'name': None,
'facility': None,
'dest_level': dest_level,
'facility_level': None})
'facility_level': None,
'use_vrf': None,
'interface_type': None,
'interface': None})
return obj
@ -317,10 +405,13 @@ def map_params_to_obj(module):
if 'aggregate' in module.params and module.params['aggregate']:
args = {'dest': '',
'remote_server': '',
'use_vrf': '',
'name': '',
'facility': '',
'dest_level': '',
'facility_level': ''}
'facility_level': '',
'interface_type': '',
'interface': ''}
for c in module.params['aggregate']:
d = c.copy()
@ -335,6 +426,9 @@ def map_params_to_obj(module):
if d['facility_level'] is not None:
d['facility_level'] = str(d['facility_level'])
if d['interface_type']:
d['interface'] = str(d['interface'])
if 'state' not in d:
d['state'] = module.params['state']
@ -353,10 +447,13 @@ def map_params_to_obj(module):
obj.append({
'dest': module.params['dest'],
'remote_server': module.params['remote_server'],
'use_vrf': module.params['use_vrf'],
'name': module.params['name'],
'facility': module.params['facility'],
'dest_level': dest_level,
'facility_level': facility_level,
'interface_type': module.params['interface_type'],
'interface': module.params['interface'],
'state': module.params['state']
})
return obj
@ -370,8 +467,11 @@ def main():
name=dict(),
facility=dict(),
remote_server=dict(),
use_vrf=dict(),
dest_level=dict(type='int', aliases=['level']),
facility_level=dict(type='int'),
interface_type=dict(),
interface=dict(),
state=dict(default='present', choices=['present', 'absent']),
aggregate=dict(type='list')
)
@ -383,6 +483,7 @@ def main():
module = AnsibleModule(argument_spec=argument_spec,
required_if=required_if,
required_together=[['interface_type', 'interface']],
supports_check_mode=True)
warnings = list()
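
A small standalone sketch of what the new parse_use_vrf, parse_interface_type, parse_interface and server-aware parse_facility_level helpers extract; the regexes are copied from the functions above, the sample 'show logging' lines are hypothetical:

import re

server_line = 'logging server test-syslogserver.com 1 use-vrf management'
facility_level = re.search(r'logging server (?:\S+) (\d+)', server_line).group(1)            # '1'
use_vrf = re.search(r'logging server (?:\S+) (?:\d+) use-vrf (\S+)', server_line).group(1)   # 'management'

intf_line = 'logging source-interface mgmt0'
interface_type = re.search(r'logging source-interface (\D+)', intf_line).group(1)            # 'mgmt'
interface = re.search(r'logging source-interface (?:\D+)(\d*([/]?\d+))', intf_line).group(1)  # '0'

print(facility_level, use_vrf, interface_type, interface)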

View file

@ -120,15 +120,14 @@ def map_obj_to_commands(want, have, module):
"%(col)s"
commands.append(templatized_command % module.params)
else:
if want == have:
# Nothing to commit
return commands
if module.params['key'] is None:
templatized_command = "%(ovs-vsctl)s -t %(timeout)s set %(table)s %(record)s " \
"%(col)s=%(value)s"
commands.append(templatized_command % module.params)
elif 'key' not in have.keys():
templatized_command = "%(ovs-vsctl)s -t %(timeout)s add %(table)s %(record)s " \
"%(col)s %(key)s=%(value)s"
commands.append(templatized_command % module.params)
elif want['value'] != have['value']:
else:
templatized_command = "%(ovs-vsctl)s -t %(timeout)s set %(table)s %(record)s " \
"%(col)s:%(key)s=%(value)s"
commands.append(templatized_command % module.params)
@ -171,7 +170,7 @@ def map_config_to_obj(module):
obj['key'] = module.params['key']
obj['value'] = col_value_to_dict[module.params['key']]
else:
obj['value'] = col_value.strip()
obj['value'] = str(col_value.strip())
return obj
@ -199,7 +198,7 @@ def main():
'record': {'required': True},
'col': {'required': True},
'key': {'required': False},
'value': {'required': True},
'value': {'required': True, 'type': 'str'},
'timeout': {'default': 5, 'type': 'int'},
}
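
For reference, the keyed 'set' branch retained above renders its ovs-vsctl call through Python %-formatting of module.params; a standalone sketch with hypothetical parameter values:

params = {
    'ovs-vsctl': 'ovs-vsctl',
    'timeout': 5,
    'table': 'open_vswitch',
    'record': '.',
    'col': 'other_config',
    'key': 'pmd-cpu-mask',
    'value': '0x3',
}
templatized_command = "%(ovs-vsctl)s -t %(timeout)s set %(table)s %(record)s " \
                      "%(col)s:%(key)s=%(value)s"
print(templatized_command % params)
# ovs-vsctl -t 5 set open_vswitch . other_config:pmd-cpu-mask=0x3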

View file

@ -0,0 +1,173 @@
#!/usr/bin/python
#
# Copyright (c) 2018, Luca 'remix_tj' Lorenzetto <lorenzetto.luca@gmail.com>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
#
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = '''
---
module: emc_vnx_sg_member
short_description: Manage storage group member on EMC VNX
version_added: "2.7"
description:
- "This module manages the members of an existing storage group."
extends_documentation_fragment:
- emc.emc_vnx
options:
name:
description:
- Name of the Storage group to manage.
required: true
lunid:
description:
- Lun id to be added.
required: true
state:
description:
- Indicates the desired lunid state.
- C(present) ensures specified lunid is present in the Storage Group.
- C(absent) ensures specified lunid is absent from Storage Group.
default: present
choices: [ "present", "absent"]
author:
- Luca 'remix_tj' Lorenzetto (@remixtj)
'''
EXAMPLES = '''
- name: Add lun to storage group
emc_vnx_sg_member:
name: sg01
sp_address: sp1a.fqdn
sp_user: sysadmin
sp_password: sysadmin
lunid: 100
state: present
- name: Remove lun from storage group
emc_vnx_sg_member:
name: sg01
sp_address: sp1a.fqdn
sp_user: sysadmin
sp_password: sysadmin
lunid: 100
state: absent
'''
RETURN = '''
hluid:
description: LUNID that hosts attached to the storage group will see.
type: int
returned: success
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_native
from ansible.module_utils.storage.emc.emc_vnx import emc_vnx_argument_spec
try:
from storops import VNXSystem
from storops.exception import VNXCredentialError, VNXStorageGroupError, \
VNXAluAlreadyAttachedError, VNXAttachAluError, VNXDetachAluNotFoundError
HAS_LIB = True
except ImportError:
HAS_LIB = False
def run_module():
module_args = dict(
name=dict(type='str', required=True),
lunid=dict(type='int', required=True),
state=dict(default='present', choices=['present', 'absent']),
)
module_args.update(emc_vnx_argument_spec)
result = dict(
changed=False,
hluid=None
)
module = AnsibleModule(
argument_spec=module_args,
supports_check_mode=True
)
if not HAS_LIB:
module.fail_json(msg='storops library (0.5.10 or greater) is missing. '
'Install with pip install storops'
)
sp_user = module.params['sp_user']
sp_address = module.params['sp_address']
sp_password = module.params['sp_password']
alu = module.params['lunid']
# if the user is working with this module in only check mode we do not
# want to make any changes to the environment, just return the current
# state with no modifications
if module.check_mode:
return result
try:
vnx = VNXSystem(sp_address, sp_user, sp_password)
sg = vnx.get_sg(module.params['name'])
if sg.existed:
if module.params['state'] == 'present':
if not sg.has_alu(alu):
try:
result['hluid'] = sg.attach_alu(alu)
result['changed'] = True
except VNXAluAlreadyAttachedError:
result['hluid'] = sg.get_hlu(alu)
except (VNXAttachAluError, VNXStorageGroupError) as e:
module.fail_json(msg='Error attaching {0}: '
'{1} '.format(alu, to_native(e)),
**result)
else:
result['hluid'] = sg.get_hlu(alu)
if module.params['state'] == 'absent' and sg.has_alu(alu):
try:
sg.detach_alu(alu)
result['changed'] = True
except VNXDetachAluNotFoundError:
# being not attached when using absent is OK
pass
except VNXStorageGroupError as e:
module.fail_json(msg='Error detaching alu {0}: '
'{1} '.format(alu, to_native(e)),
**result)
else:
module.fail_json(msg='No such storage group named '
'{0}'.format(module.params['name']),
**result)
except VNXCredentialError as e:
module.fail_json(msg='{0}'.format(to_native(e)), **result)
module.exit_json(**result)
def main():
run_module()
if __name__ == '__main__':
main()
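
Outside of Ansible, the storops calls the module leans on look roughly like the sketch below; the host, credentials and LUN id are placeholders, only methods already used above are exercised, and it requires storops plus a reachable VNX array:

from storops import VNXSystem

vnx = VNXSystem('sp1a.fqdn', 'sysadmin', 'sysadmin')
sg = vnx.get_sg('sg01')
if sg.existed:
    if sg.has_alu(100):
        hluid = sg.get_hlu(100)       # already attached: report the existing HLU
    else:
        hluid = sg.attach_alu(100)    # attach and capture the HLU hosts will see
    print(hluid)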

View file

@ -56,8 +56,9 @@ options:
default: 'no'
notes:
- Handlers are made available to the whole play.
- Variables defined in C(vars) and C(default) for the role are exposed at playbook parsing time. Due to this,
these variables will be accessible to roles and tasks executed before the the location of the C(import_role) task.
- "Since Ansible 2.7: variables defined in C(vars) and C(defaults) for the role are exposed at playbook parsing time.
Due to this, these variables will be accessible to roles and tasks executed before the location of the
C(import_role) task."
- Unlike C(include_role), variable exposure is not configurable; these variables will always be exposed.
'''

View file

@ -62,6 +62,7 @@ EXAMPLES = '''
tower_job_launch:
job_template: "My Job Template"
register: job
- name: Wait for job max 120s
tower_job_wait:
job_id: job.id

View file

@ -46,6 +46,7 @@ EXAMPLES = '''
tower_job_launch:
job_template: "My Job Template"
register: job
- name: Wait for job max 120s
tower_job_wait:
job_id: job.id

View file

@ -75,6 +75,17 @@ If(-not $forest) {
$iaf = Install-ADDSForest @install_forest_args
$result.reboot_required = $iaf.RebootRequired
# The Netlogon service is set to auto start but is not started. This is
# required for Ansible to connect back to the host and reboot in a
# later task. Even if this fails Ansible can still connect but only
# with ansible_winrm_transport=basic so we just display a warning if
# this fails.
try {
Start-Service -Name Netlogon
} catch {
Add-Warning -obj $result -message "Failed to start the Netlogon service after promoting the host, Ansible may be unable to connect until the host is manually rebooted: $($_.Exception.Message)"
}
}
}

View file

@ -213,7 +213,20 @@ Try {
}
$install_result = Install-ADDSDomainController -NoRebootOnCompletion -Force @install_params
Write-DebugLog "Installation completed, needs reboot..."
Write-DebugLog "Installation complete, trying to start the Netlogon service"
# The Netlogon service is set to auto start but is not started. This is
# required for Ansible to connect back to the host and reboot in a
# later task. Even if this fails Ansible can still connect but only
# with ansible_winrm_transport=basic so we just display a warning if
# this fails.
try {
Start-Service -Name Netlogon
} catch {
Write-DebugLog "Failed to start the Netlogon service: $($_.Exception.Message)"
Add-Warning -obj $result -message "Failed to start the Netlogon service after promoting the host, Ansible may be unable to connect until the host is manually rebooted: $($_.Exception.Message)"
}
Write-DebugLog "Domain Controller setup completed, needs reboot..."
}
}
member_server {

View file

@ -175,7 +175,7 @@ class CliconfBase(AnsiblePlugin):
:param source: The configuration source to return from the device.
This argument accepts either `running` or `startup` as valid values.
:param flag: For devices that support configuration filtering, this
:param flags: For devices that support configuration filtering, this
keyword argument is used to filter the returned configuration.
The use of this keyword argument is device dependent and will be
silently ignored on devices that do not support it.
@ -212,7 +212,8 @@ class CliconfBase(AnsiblePlugin):
response on executing configuration commands and platform relevant data.
{
"diff": "",
"response": []
"response": [],
"request": []
}
"""
@ -264,9 +265,11 @@ class CliconfBase(AnsiblePlugin):
'supports_onbox_diff: <bool>, # identify if on box diff capability is supported or not
'supports_generate_diff: <bool>, # identify if diff capability is supported within plugin
'supports_multiline_delimiter: <bool>, # identify if multiline delimiter is supported within config
'supports_diff_match: <bool>, # identify if match is supported
'supports_diff_ignore_lines: <bool>, # identify if ignore line in diff is supported
'supports_config_replace': <bool>, # identify if running config replace with candidate config is supported
'supports_diff_match: <bool>, # identify if match is supported
'supports_diff_ignore_lines: <bool>, # identify if ignore line in diff is supported
'supports_config_replace': <bool>, # identify if running config replace with candidate config is supported
'supports_admin': <bool>, # identify if admin configure mode is supported or not
'supports_commit_label': <bool>, # identify if commit label is supported or not
}
'format': [list of supported configuration format],
'diff_match': [list of supported match values],
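
On the module side, these flags are typically consumed by deserialising the JSON string that get_capabilities() returns; a minimal sketch, assuming 'connection' is an ansible.module_utils.connection.Connection proxy bound to a cliconf plugin:

import json

def supports_config_diff(connection):
    """Return True if the attached cliconf plugin can produce a config diff."""
    capabilities = json.loads(connection.get_capabilities())
    operations = capabilities.get('device_operations', {})
    return bool(operations.get('supports_onbox_diff') or operations.get('supports_generate_diff'))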

View file

@ -214,7 +214,7 @@ class Cliconf(CliconfBase):
raise ValueError("'replace' value %s in invalid, valid values are %s" % (diff_replace, ', '.join(option_values['diff_replace'])))
# prepare candidate configuration
candidate_obj = NetworkConfig(indent=3)
candidate_obj = NetworkConfig(indent=3, ignore_lines=diff_ignore_lines)
candidate_obj.load(candidate)
if running and diff_match != 'none' and diff_replace != 'config':

View file

@ -105,7 +105,7 @@ class Cliconf(CliconfBase):
raise ValueError("'replace' value %s in invalid, valid values are %s" % (diff_replace, ', '.join(option_values['diff_replace'])))
# prepare candidate configuration
candidate_obj = NetworkConfig(indent=1)
candidate_obj = NetworkConfig(indent=1, ignore_lines=diff_ignore_lines)
want_src, want_banners = self._extract_banners(candidate)
candidate_obj.load(want_src)

View file

@ -19,12 +19,13 @@
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import collections
import re
import json
from itertools import chain
from ansible.errors import AnsibleConnectionFailure
from ansible.module_utils._text import to_text
from ansible.module_utils.connection import ConnectionError
from ansible.module_utils.network.common.utils import to_list
from ansible.plugins.cliconf import CliconfBase
@ -56,56 +57,167 @@ class Cliconf(CliconfBase):
return device_info
def get_config(self, source='running', format='text', filter=None):
def configure(self, admin=False):
prompt = to_text(self._connection.get_prompt(), errors='surrogate_or_strict').strip()
if not prompt.endswith(')#'):
if admin and 'admin-' not in prompt:
self.send_command('admin')
self.send_command('configure terminal')
def abort(self, admin=False):
prompt = to_text(self._connection.get_prompt(), errors='surrogate_or_strict').strip()
if prompt.endswith(')#'):
self.send_command('abort')
if admin and 'admin-' in prompt:
self.send_command('exit')
def get_config(self, source='running', format='text', flags=None):
if source not in ['running']:
raise ValueError("fetching configuration from %s is not supported" % source)
lookup = {'running': 'running-config'}
if source not in lookup:
return self.invalid_params("fetching configuration from %s is not supported" % source)
if filter:
cmd = 'show {0} {1}'.format(lookup[source], filter)
else:
cmd = 'show {0}'.format(lookup[source])
cmd = 'show {0} '.format(lookup[source])
cmd += ' '.join(to_list(flags))
cmd = cmd.strip()
return self.send_command(cmd)
def edit_config(self, commands=None):
for cmd in chain(to_list(commands)):
try:
if isinstance(cmd, str):
cmd = json.loads(cmd)
command = cmd.get('command', None)
prompt = cmd.get('prompt', None)
answer = cmd.get('answer', None)
sendonly = cmd.get('sendonly', False)
newline = cmd.get('newline', True)
except:
command = cmd
prompt = None
answer = None
sendonly = None
newline = None
def edit_config(self, candidate=None, commit=True, admin=False, replace=None, comment=None, label=None):
operations = self.get_device_operations()
self.check_edit_config_capabiltiy(operations, candidate, commit, replace, comment)
self.send_command(command=command, prompt=prompt, answer=answer, sendonly=sendonly, newline=newline)
resp = {}
results = []
requests = []
self.configure(admin=admin)
if replace:
candidate = 'load {0}'.format(replace)
for line in to_list(candidate):
if not isinstance(line, collections.Mapping):
line = {'command': line}
cmd = line['command']
results.append(self.send_command(**line))
requests.append(cmd)
diff = self.get_diff(admin=admin)
config_diff = diff.get('config_diff')
if config_diff or replace:
resp['diff'] = config_diff
if commit:
self.commit(comment=comment, label=label, replace=replace)
else:
self.discard_changes()
self.abort(admin=admin)
resp['request'] = requests
resp['response'] = results
return resp
def get_diff(self, admin=False):
self.configure(admin=admin)
diff = {'config_diff': None}
response = self.send_command('show commit changes diff')
for item in response.splitlines():
if item and item[0] in ['<', '+', '-']:
diff['config_diff'] = response
break
return diff
def get(self, command=None, prompt=None, answer=None, sendonly=False, newline=True, output=None):
if output:
raise ValueError("'output' value %s is not supported for get" % output)
return self.send_command(command=command, prompt=prompt, answer=answer, sendonly=sendonly, newline=newline)
def commit(self, comment=None, label=None):
if comment and label:
command = 'commit label {0} comment {1}'.format(label, comment)
elif comment:
command = 'commit comment {0}'.format(comment)
elif label:
command = 'commit label {0}'.format(label)
def commit(self, comment=None, label=None, replace=None):
cmd_obj = {}
if replace:
cmd_obj['command'] = 'commit replace'
cmd_obj['prompt'] = 'This commit will replace or remove the entire running configuration'
cmd_obj['answer'] = 'yes'
else:
command = 'commit'
self.send_command(command)
if comment and label:
cmd_obj['command'] = 'commit label {0} comment {1}'.format(label, comment)
elif comment:
cmd_obj['command'] = 'commit comment {0}'.format(comment)
elif label:
cmd_obj['command'] = 'commit label {0}'.format(label)
else:
cmd_obj['command'] = 'commit'
self.send_command(**cmd_obj)
def run_commands(self, commands=None, check_rc=True):
if commands is None:
raise ValueError("'commands' value is required")
responses = list()
for cmd in to_list(commands):
if not isinstance(cmd, collections.Mapping):
cmd = {'command': cmd}
output = cmd.pop('output', None)
if output:
raise ValueError("'output' value %s is not supported for run_commands" % output)
try:
out = self.send_command(**cmd)
except AnsibleConnectionFailure as e:
if check_rc:
raise
out = getattr(e, 'err', e)
if out is not None:
try:
out = to_text(out, errors='surrogate_or_strict').strip()
except UnicodeError:
raise ConnectionError(msg=u'Failed to decode output from %s: %s' % (cmd, to_text(out)))
try:
out = json.loads(out)
except ValueError:
pass
responses.append(out)
return responses
def discard_changes(self):
self.send_command('abort')
def get_device_operations(self):
return {
'supports_diff_replace': False,
'supports_commit': True,
'supports_rollback': True,
'supports_defaults': False,
'supports_onbox_diff': True,
'supports_commit_comment': True,
'supports_multiline_delimiter': False,
'supports_diff_match': False,
'supports_diff_ignore_lines': False,
'supports_generate_diff': False,
'supports_replace': True,
'supports_admin': True,
'supports_commit_label': True
}
def get_option_values(self):
return {
'format': ['text'],
'diff_match': [],
'diff_replace': [],
'output': []
}
def get_capabilities(self):
result = {}
result['rpc'] = self.get_base_rpc() + ['commit', 'discard_changes']
result['rpc'] = self.get_base_rpc() + ['commit', 'discard_changes', 'get_diff', 'configure', 'exit']
result['network_api'] = 'cliconf'
result['device_info'] = self.get_device_info()
result['device_operations'] = self.get_device_operations()
result.update(self.get_option_values())
return json.dumps(result)
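
The change-detection rule in get_diff() above is simply "any output line starting with <, + or - means there is a pending diff"; a standalone sketch with hypothetical 'show commit changes diff' output:

def has_pending_changes(response):
    """Mirror the loop in Cliconf.get_diff above."""
    for item in response.splitlines():
        if item and item[0] in ['<', '+', '-']:
            return True
    return False

sample = 'Building configuration...\n+ interface Loopback0\n+  description test\n'
print(has_pending_changes(sample))   # True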

View file

@ -110,10 +110,14 @@ class Cliconf(CliconfBase):
if diff:
resp['diff'] = diff
if commit:
self.commit(comment=comment)
if commit:
self.commit(comment=comment)
else:
self.discard_changes()
else:
self.discard_changes()
for cmd in ['top', 'exit']:
self.send_command(cmd)
resp['request'] = requests
resp['response'] = results
@ -166,7 +170,11 @@ class Cliconf(CliconfBase):
return resp
def get_diff(self, rollback_id=None):
return self.compare_configuration(rollback_id=rollback_id)
diff = {'config_diff': None}
response = self.compare_configuration(rollback_id=rollback_id)
if response:
diff['config_diff'] = response
return diff
def get_device_operations(self):
return {

View file

@ -115,7 +115,7 @@ class Cliconf(CliconfBase):
raise ValueError("'replace' value %s in invalid, valid values are %s" % (diff_replace, ', '.join(option_values['diff_replace'])))
# prepare candidate configuration
candidate_obj = NetworkConfig(indent=2)
candidate_obj = NetworkConfig(indent=2, ignore_lines=diff_ignore_lines)
candidate_obj.load(candidate)
if running and diff_match != 'none' and diff_replace != 'config':
@ -215,7 +215,7 @@ class Cliconf(CliconfBase):
try:
out = json.loads(out)
except ValueError:
out = to_text(out, errors='surrogate_or_strict').strip()
pass
responses.append(out)
return responses

View file

@ -227,7 +227,7 @@ class Cliconf(CliconfBase):
'supports_multiline_delimiter': False,
'supports_diff_match': True,
'supports_diff_ignore_lines': False,
'supports_generate_diff': True,
'supports_generate_diff': False,
'supports_replace': False
}

View file

@ -386,7 +386,7 @@ class Connection(ConnectionBase):
raise AnsibleConnectionFailure(msg)
# sudo usually requires a PTY (cf. requiretty option), therefore
# we give it one by default (pty=True in ansble.cfg), and we try
# we give it one by default (pty=True in ansible.cfg), and we try
# to initialise from the calling environment when sudoable is enabled
if self.get_option('pty') and sudoable:
chan.get_pty(term=os.getenv('TERM', 'vt100'), width=int(os.getenv('COLUMNS', 0)), height=int(os.getenv('LINES', 0)))

View file

@ -34,12 +34,12 @@ import pty
import json
import subprocess
import sys
import termios
from ansible import constants as C
from ansible.plugins.connection import ConnectionBase
from ansible.module_utils._text import to_text
from ansible.module_utils.six.moves import cPickle
from ansible.module_utils.connection import Connection as SocketConnection
from ansible.module_utils.connection import Connection as SocketConnection, write_to_file_descriptor
from ansible.errors import AnsibleError
try:
@ -109,26 +109,24 @@ class Connection(ConnectionBase):
[python, find_file_in_path('ansible-connection'), to_text(os.getppid())],
stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE
)
stdin = os.fdopen(master, 'wb', 0)
os.close(slave)
# Need to force a protocol that is compatible with both py2 and py3.
# That would be protocol=2 or less.
# Also need to force a protocol that excludes certain control chars as
# stdin in this case is a pty and control chars will cause problems.
# that means only protocol=0 will work.
src = cPickle.dumps(self._play_context.serialize(), protocol=0)
stdin.write(src)
stdin.write(b'\n#END_INIT#\n')
# We need to set the pty into noncanonical mode. This ensures that we
# can receive lines longer than 4095 characters (plus newline) without
# truncating.
old = termios.tcgetattr(master)
new = termios.tcgetattr(master)
new[3] = new[3] & ~termios.ICANON
src = cPickle.dumps({'ansible_command_timeout': self.get_option('persistent_command_timeout')}, protocol=0)
stdin.write(src)
stdin.write(b'\n#END_VARS#\n')
try:
termios.tcsetattr(master, termios.TCSANOW, new)
write_to_file_descriptor(master, {'ansible_command_timeout': self.get_option('persistent_command_timeout')})
write_to_file_descriptor(master, self._play_context.serialize())
stdin.flush()
(stdout, stderr) = p.communicate()
stdin.close()
(stdout, stderr) = p.communicate()
finally:
termios.tcsetattr(master, termios.TCSANOW, old)
os.close(master)
if p.returncode == 0:
result = json.loads(to_text(stdout, errors='surrogate_then_replace'))
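
For context, the lines removed above performed the following pty tweak before writing the pickled play context: clearing ICANON lifts the 4095-character canonical line limit so long writes are not truncated. A standalone sketch of that dance, kept purely to document what the old code did; how the new write_to_file_descriptor helper avoids the truncation is not visible in this hunk:

import termios

def set_noncanonical(fd):
    """Put a pty fd into noncanonical mode; return the old attrs so the caller can restore them."""
    old = termios.tcgetattr(fd)
    new = termios.tcgetattr(fd)
    new[3] = new[3] & ~termios.ICANON   # local flags live at index 3
    termios.tcsetattr(fd, termios.TCSANOW, new)
    return old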

View file

@ -0,0 +1,43 @@
#
# Copyright (c) 2018, Luca 'remix_tj' Lorenzetto <lorenzetto.luca@gmail.com>
#
# This file is part of Ansible
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
class ModuleDocFragment(object):
DOCUMENTATION = """
options:
- See respective platform section for more details
requirements:
- See respective platform section for more details
notes:
- Ansible modules are available for EMC VNX.
"""
# Documentation fragment for VNX (emc_vnx)
EMC_VNX = """
options:
sp_address:
description:
- Address of the SP of target/secondary storage.
required: true
sp_user:
description:
- Username for accessing SP.
default: sysadmin
required: false
sp_password:
description:
- Password for accessing SP.
default: sysadmin
required: false
requirements:
- An EMC VNX Storage device.
- Ansible 2.7.
- storops (0.5.10 or greater). Install using 'pip install storops'.
notes:
- The modules prefixed with emc_vnx are built to support the EMC VNX storage platform.
"""

View file

@ -0,0 +1,3 @@
---
dependencies:
- setup_docker

View file

@ -1,14 +1,2 @@
- include: RedHat.yml
when: ansible_os_family == 'RedHat' and ansible_distribution != 'Fedora' and ansible_distribution_major_version != '6'
- include: Fedora.yml
when: ansible_distribution == 'Fedora'
- include: OpenSuse.yml
when: ansible_os_family == 'Suse'
- include: Ubuntu.yml
when: ansible_os_family == 'Debian'
- include: test_secrets.yml
- include_tasks: test_secrets.yml
when: ansible_os_family != 'RedHat' or ansible_distribution_major_version != '6'

Some files were not shown because too many files have changed in this diff.