Merge branch 'devel' into mazer_role_loader

* devel: (50 commits)
  Add new module for Redfish APIs (#41656)
  VMware Module - vmware_guest_move (#42149)
  Lenovo port to persistence 1 (#43194)
  VMware: new module: vmware_guest_boot_manager (#40609)
  fixes #42042 (#42939)
  VMware: new module: vmware_category_facts (#39894)
  VMware: Dynamic Inventory plugin (#37456)
  Validate and reject if csr_path is not supplied when provider is not assertonly (#41385)
  VMware: new module : vmware_guest_custom_attributes (#38114)
  VMware: new module: vmware_guest_attribute_defs (#38144)
  VMware: Fix mark as virtual machine method (#40521)
  Ironware: Deprecate provider, support network_cli (#43285)
  feat: Add a enable_accelerated_networking flag in module + tests; fixes #41218  (#42109)
  fixing aiuth source (#42923)
  VMware: handle special characters in datacenter name (#42922)
  VMware: update examples in vmware_vm_shell (#42410)
  VMWare: refactor vmware_vm_shell module (#39957)
  VMware: additional VSAN facts about Hostsystem (#40456)
  nxos cliconf plugin refactor (#43203)
  Correcting conditionals looping (#43331)
  ...
Adrian Likins 2018-07-27 12:40:36 -04:00
commit dc91f6a38e
138 changed files with 6079 additions and 1226 deletions


@ -23,3 +23,4 @@ omit =
*/python*/distutils/*
*/pyshared/*
*/pytest
*/AnsiballZ_*.py

.github/BOTMETA.yml vendored

@ -714,6 +714,12 @@ files:
maintainers: $team_windows
ignored: angstwad georgefrank h0nIg
$modules/windows/win_security_policy.py: rndmh3ro defionscode
bin/ansible-connection:
keywords:
- persistent connection
labels:
- networking
maintainers: $team_networking
contrib/inventory:
keywords:
- dynamic inventory script


@ -0,0 +1,2 @@
bugfixes:
- plugins/inventory/openstack.py - Do not create group with empty name if region is not set


@ -0,0 +1,5 @@
---
minor_changes:
- Ansible-2.7 changes the Ansiballz strategy for running modules remotely so
that invoking a module only needs to invoke python once per module on the
remote machine instead of twice.


@ -0,0 +1,5 @@
---
deprecated_features:
- Modules will no longer be able to rely on the __file__ attribute pointing to
a real file. If your third party module is using __file__ for something it
should be changed before 2.8. See the 2.7 porting guide for more information.


@ -0,0 +1,5 @@
---
minor_changes:
- Added the from_yaml_all filter to parse multi-document yaml strings.
Refer to the appropriate entry which has been added to the user_guide
playbooks_filters.rst document.


@ -0,0 +1,10 @@
---
minor_changes:
- In Ansible-2.4 and above, Ansible passes the temporary directory a module
should use to the module. This is done via a module parameter
(_ansible_tmpdir). An earlier version of this which was also prototyped in
Ansible-2.4 development used an environment variable, ANSIBLE_REMOTE_TMP to
pass this information to the module instead. When we switched to using
a module parameter, the environment variable was left in by mistake.
Ansible-2.7 removes that variable. Any third party modules which relied on
it should use the module parameter instead.
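The switch described above can be illustrated without Ansible itself. A minimal sketch of the parameter-based plumbing, simulating the JSON arguments blob Ansible hands to a module (the ``path`` parameter and its value here are illustrative, not taken from any real module):

```python
import json
import tempfile

# Simulate the JSON arguments blob Ansible passes to a module: internal
# "_ansible_*" options travel alongside the user's parameters, so no
# ANSIBLE_REMOTE_TMP environment variable is needed.
args_blob = json.dumps({
    "ANSIBLE_MODULE_ARGS": {
        "path": "/etc/motd",                       # user-supplied parameter
        "_ansible_tmpdir": tempfile.gettempdir(),  # injected by Ansible
    }
})

params = json.loads(args_blob)["ANSIBLE_MODULE_ARGS"]
# Module side: pop the internal option off before handling user parameters.
tmpdir = params.pop("_ansible_tmpdir")
print(tmpdir)
```

In real module code this plumbing is wrapped for you: on Ansible 2.5 and later, use the ``tmpdir`` attribute of an ``AnsibleModule`` instance rather than reading the parameter directly.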


@ -0,0 +1,5 @@
---
minor_changes:
- Explicit encoding for the output of the template module, to be able
to generate non-utf8 files from a utf-8 template.
(https://github.com/ansible/proposals/issues/121)
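Outside of Ansible, the behavior being added boils down to encoding the rendered text before writing it out. A standalone Python sketch of that idea (not the template module's actual implementation; the file name and template text are made up):

```python
import os
import tempfile

# A utf-8 template renders to unicode text; an explicit output encoding
# lets that text be written back out as, e.g., latin-1 instead of utf-8.
rendered = u"caf\u00e9 menu: {name}".format(name="espresso")

out = os.path.join(tempfile.mkdtemp(), "menu.txt")
with open(out, "w", encoding="iso-8859-1") as f:  # explicit output encoding
    f.write(rendered)

with open(out, "rb") as f:
    raw = f.read()
print(raw)  # b'caf\xe9 menu: espresso' - latin-1 bytes, not utf-8's b'caf\xc3\xa9...'
```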


@ -1,24 +1,96 @@
************************
Other Tools And Programs
************************
The Ansible community provides several useful tools for working with the Ansible project. This is a list
of some of the most popular of these tools.
.. contents:: Topics
- `PR by File <https://ansible.sivel.net/pr/byfile.html>`_ shows a current list of all open pull requests by individual file. An essential tool for Ansible module maintainers.
- `Ansible Lint <https://github.com/willthames/ansible-lint>`_ is a widely used, highly configurable best-practices linter for Ansible playbooks.
- `Ansible Review <http://willthames.github.io/2016/06/28/announcing-ansible-review.html>`_ is an extension of Ansible Lint designed for code review.
- `jctanner's Ansible Tools <https://github.com/jctanner/ansible-tools>`_ is a miscellaneous collection of useful helper scripts for Ansible development.
- `Ansigenome <https://github.com/nickjj/ansigenome>`_ is a command line tool designed to help you manage your Ansible roles.
- `Awesome Ansible <https://github.com/jdauphant/awesome-ansible>`_ is a collaboratively curated list of awesome Ansible resources.
- `Ansible Inventory Grapher <http://github.com/willthames/ansible-inventory-grapher>`_ can be used to visually display inventory inheritance hierarchies and at what level a variable is defined in inventory.
- `Molecule <http://github.com/metacloud/molecule>`_ A testing framework for Ansible plays and roles.
- `ARA Records Ansible <http://github.com/openstack/ara>`_ ARA Records Ansible playbook runs and makes the recorded data available and intuitive for users and systems.
########################
Other Tools And Programs
########################
The Ansible community uses a range of tools for working with the Ansible project. This is a list of some of the most popular of these tools.
If you know of any other tools that should be added, this list can be updated by clicking "Edit on GitHub" on the top right of this page.
***************
Popular Editors
***************
Atom
====
An open-source, free GUI text editor created and maintained by GitHub. You can keep track of git project
changes, commit from the GUI, and see what branch you are on. You can customize the themes for different colors and install syntax highlighting packages for different languages. You can install Atom on Linux, macOS and Windows. Useful Atom plugins include:
* `language-yaml <https://github.com/atom/language-yaml>`_ - YAML highlighting for Atom.
PyCharm
=======
A full IDE (integrated development environment) for Python software development. It ships with everything you need to write python scripts and complete software, including support for YAML syntax highlighting. It's a little overkill for writing roles/playbooks, but it can be a very useful tool if you write modules and submit code for Ansible. Can be used to debug the Ansible engine.
Sublime
=======
A closed-source, subscription GUI text editor. You can customize the GUI with themes and install packages for language highlighting and other refinements. You can install Sublime on Linux, macOS and Windows. Useful Sublime plugins include:
* `GitGutter <https://packagecontrol.io/packages/GitGutter>`_ - shows information about files in a git repository.
* `SideBarEnhancements <https://packagecontrol.io/packages/SideBarEnhancements>`_ - provides enhancements to the operations on Sidebar of Files and Folders.
* `Sublime Linter <https://packagecontrol.io/packages/SublimeLinter>`_ - a code-linting framework for Sublime Text 3.
* `Pretty YAML <https://packagecontrol.io/packages/Pretty%20YAML>`_ - prettifies YAML for Sublime Text 2 and 3.
* `Yamllint <https://packagecontrol.io/packages/SublimeLinter-contrib-yamllint>`_ - a Sublime wrapper around yamllint.
Visual Studio Code
==================
An open-source, free GUI text editor created and maintained by Microsoft. Useful Visual Studio Code plugins include:
* `YAML Support by Red Hat <https://marketplace.visualstudio.com/items?itemName=redhat.vscode-yaml>`_ - provides YAML support via yaml-language-server with built-in Kubernetes and Kedge syntax support.
* `Ansible Syntax Highlighting Extension <https://marketplace.visualstudio.com/items?itemName=haaaad.ansible>`_ - YAML & Jinja2 support.
* `Visual Studio Code extension for Ansible <https://marketplace.visualstudio.com/items?itemName=vscoss.vscode-ansible>`_ - provides autocompletion, syntax highlighting.
vim
===
An open-source, free command-line text editor. Useful vim plugins include:
* `Ansible vim <https://github.com/pearofducks/ansible-vim>`_ - vim syntax plugin for Ansible 2.x, it supports YAML playbooks, Jinja2 templates, and Ansible's hosts files.
*****************
Development Tools
*****************
Finding related issues and PRs
==============================
There are various ways to find existing issues and pull requests (PRs)
- `PR by File <https://ansible.sivel.net/pr/byfile.html>`_ - shows a current list of all open pull requests by individual file. An essential tool for Ansible module maintainers.
- `jctanner's Ansible Tools <https://github.com/jctanner/ansible-tools>`_ - miscellaneous collection of useful helper scripts for Ansible development.
******************************
Tools for Validating Playbooks
******************************
- `Ansible Lint <https://github.com/willthames/ansible-lint>`_ - widely used, highly configurable best-practices linter for Ansible playbooks.
- `Ansible Review <https://github.com/willthames/ansible-review>`_ - an extension of Ansible Lint designed for code review.
- `Molecule <http://github.com/metacloud/molecule>`_ is a testing framework for Ansible plays and roles.
***********
Other Tools
***********
- `Ansible cmdb <https://github.com/fboender/ansible-cmdb>`_ - takes the output of Ansible's fact gathering and converts it into a static HTML overview page containing system configuration information.
- `Ansible Inventory Grapher <http://github.com/willthames/ansible-inventory-grapher>`_ - visually displays inventory inheritance hierarchies and at what level a variable is defined in inventory.
- `Ansible Shell <https://github.com/dominis/ansible-shell>`_ - an interactive shell for Ansible with built-in tab completion for all the modules.
- `Ansible Silo <https://github.com/groupon/ansible-silo>`_ - a self-contained Ansible environment via Docker.
- `Ansigenome <https://github.com/nickjj/ansigenome>`_ - a command line tool designed to help you manage your Ansible roles.
- `ARA <http://github.com/openstack/ara>`_ - records Ansible playbook runs and makes the recorded data available and intuitive for users and systems by integrating with Ansible as a callback plugin.
- `Awesome Ansible <https://github.com/jdauphant/awesome-ansible>`_ - a collaboratively curated list of awesome Ansible resources.
- `AWX <https://github.com/ansible/awx>`_ - provides a web-based user interface, REST API, and task engine built on top of Ansible. AWX is the upstream project for Tower, a commercial derivative of AWX.
- `Mitogen for Ansible <https://mitogen.readthedocs.io/en/latest/ansible.html>`_ - uses the `Mitogen <https://github.com/dw/mitogen/>`_ library to execute Ansible playbooks in a more efficient way (decreases the execution time).
- `OpsTools-ansible <https://github.com/centos-opstools/opstools-ansible>`_ - uses Ansible to configure an environment that provides the support of `OpsTools <https://wiki.centos.org/SpecialInterestGroup/OpsTools>`_, namely centralized logging and analysis, availability monitoring, and performance monitoring.
- `TD4A <https://github.com/cidrblock/td4a>`_ - a template designer for automation. TD4A is a visual design aid for building and testing jinja2 templates. It will combine data in yaml format with a jinja2 template and render the output.


@ -12,6 +12,7 @@ Some Ansible Network platforms support multiple connection types, privilege esca
platform_eos
platform_ios
platform_ironware
platform_junos
platform_nxos
@ -20,30 +21,32 @@ Some Ansible Network platforms support multiple connection types, privilege esca
Settings by Platform
================================
+----------------+-------------------------+----------------------+----------------------+------------------+------------------+
|.. | | ``ansible_connection:`` settings available |
+----------------+-------------------------+----------------------+----------------------+------------------+------------------+
| Network OS | ``ansible_network_os:`` | network_cli | netconf | httpapi | local |
+================+=========================+======================+======================+==================+==================+
| Arista EOS* | ``eos`` | in v. >=2.5 | N/A | in v. >=2.6 | in v. >=2.4 |
+----------------+-------------------------+----------------------+----------------------+------------------+------------------+
| Cisco ASA | ``asa`` | in v. >=2.5 | N/A | N/A | in v. >=2.4 |
+----------------+-------------------------+----------------------+----------------------+------------------+------------------+
| Cisco IOS* | ``ios`` | in v. >=2.5 | N/A | N/A | in v. >=2.4 |
+----------------+-------------------------+----------------------+----------------------+------------------+------------------+
| Cisco IOS XR* | ``iosxr`` | in v. >=2.5 | N/A | N/A | in v. >=2.4 |
+----------------+-------------------------+----------------------+----------------------+------------------+------------------+
| Cisco NX-OS* | ``nxos`` | in v. >=2.5 | N/A | in v. >=2.6 | in v. >=2.4 |
+----------------+-------------------------+----------------------+----------------------+------------------+------------------+
| F5 BIG-IP | N/A | N/A | N/A | N/A | in v. >=2.0 |
+----------------+-------------------------+----------------------+----------------------+------------------+------------------+
| F5 BIG-IQ | N/A | N/A | N/A | N/A | in v. >=2.0 |
+----------------+-------------------------+----------------------+----------------------+------------------+------------------+
| Junos OS* | ``junos`` | in v. >=2.5 | in v. >=2.5 | N/A | in v. >=2.4 |
+----------------+-------------------------+----------------------+----------------------+------------------+------------------+
| Nokia SR OS | ``sros`` | in v. >=2.5 | N/A | N/A | in v. >=2.4 |
+----------------+-------------------------+----------------------+----------------------+------------------+------------------+
| VyOS* | ``vyos`` | in v. >=2.5 | N/A | N/A | in v. >=2.4 |
+----------------+-------------------------+----------------------+----------------------+------------------+------------------+
+------------------+-------------------------+----------------------+----------------------+------------------+------------------+
|.. | | ``ansible_connection:`` settings available |
+------------------+-------------------------+----------------------+----------------------+------------------+------------------+
| Network OS | ``ansible_network_os:`` | network_cli | netconf | httpapi | local |
+==================+=========================+======================+======================+==================+==================+
| Arista EOS* | ``eos`` | in v. >=2.5 | N/A | in v. >=2.6 | in v. >=2.4 |
+------------------+-------------------------+----------------------+----------------------+------------------+------------------+
| Cisco ASA | ``asa`` | in v. >=2.5 | N/A | N/A | in v. >=2.4 |
+------------------+-------------------------+----------------------+----------------------+------------------+------------------+
| Cisco IOS* | ``ios`` | in v. >=2.5 | N/A | N/A | in v. >=2.4 |
+------------------+-------------------------+----------------------+----------------------+------------------+------------------+
| Cisco IOS XR* | ``iosxr`` | in v. >=2.5 | N/A | N/A | in v. >=2.4 |
+------------------+-------------------------+----------------------+----------------------+------------------+------------------+
| Cisco NX-OS* | ``nxos`` | in v. >=2.5 | N/A | in v. >=2.6 | in v. >=2.4 |
+------------------+-------------------------+----------------------+----------------------+------------------+------------------+
| Extreme IronWare | ``ironware`` | in v. >=2.5 | N/A | N/A | in v. >=2.5 |
+------------------+-------------------------+----------------------+----------------------+------------------+------------------+
| F5 BIG-IP | N/A | N/A | N/A | N/A | in v. >=2.0 |
+------------------+-------------------------+----------------------+----------------------+------------------+------------------+
| F5 BIG-IQ | N/A | N/A | N/A | N/A | in v. >=2.0 |
+------------------+-------------------------+----------------------+----------------------+------------------+------------------+
| Junos OS* | ``junos`` | in v. >=2.5 | in v. >=2.5 | N/A | in v. >=2.4 |
+------------------+-------------------------+----------------------+----------------------+------------------+------------------+
| Nokia SR OS | ``sros`` | in v. >=2.5 | N/A | N/A | in v. >=2.4 |
+------------------+-------------------------+----------------------+----------------------+------------------+------------------+
| VyOS* | ``vyos`` | in v. >=2.5 | N/A | N/A | in v. >=2.4 |
+------------------+-------------------------+----------------------+----------------------+------------------+------------------+
`*` Maintained by Ansible Network Team
`*` Maintained by Ansible Network Team


@ -0,0 +1,70 @@
.. _ironware_platform_options:
***************************************
IronWare Platform Options
***************************************
IronWare supports Enable Mode (Privilege Escalation). This page offers details on how to use Enable Mode on IronWare in Ansible 2.7.
.. contents:: Topics
Connections Available
================================================================================
+---------------------------+-----------------------------------------------+
|.. | CLI |
+===========================+===============================================+
| **Protocol** | SSH |
+---------------------------+-----------------------------------------------+
| | **Credentials** | | uses SSH keys / SSH-agent if present |
| | | | accepts ``-u myuser -k`` if using password |
+---------------------------+-----------------------------------------------+
| **Indirect Access** | via a bastion (jump host) |
+---------------------------+-----------------------------------------------+
| | **Connection Settings** | | ``ansible_connection: network_cli`` |
| | | | |
| | | | |
+---------------------------+-----------------------------------------------+
| | **Enable Mode** | | supported - use ``ansible_become: yes`` |
| | (Privilege Escalation) | | with ``ansible_become_method: enable`` |
| | | | and ``ansible_become_pass:`` |
+---------------------------+-----------------------------------------------+
| **Returned Data Format** | ``stdout[0].`` |
+---------------------------+-----------------------------------------------+
For legacy playbooks, IronWare still supports ``ansible_connection: local``. We recommend modernizing to use ``ansible_connection: network_cli`` as soon as possible.
Using CLI in Ansible 2.6
================================================================================
Example CLI ``group_vars/mlx.yml``
----------------------------------
.. code-block:: yaml
ansible_connection: network_cli
ansible_network_os: ironware
ansible_user: myuser
ansible_ssh_pass: !vault...
ansible_become: yes
ansible_become_method: enable
ansible_become_pass: !vault...
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q bastion01"'
- If you are using SSH keys (including an ssh-agent) you can remove the ``ansible_ssh_pass`` configuration.
- If you are accessing your host directly (not through a bastion/jump host) you can remove the ``ansible_ssh_common_args`` configuration.
- If you are accessing your host through a bastion/jump host, you cannot include your SSH password in the ``ProxyCommand`` directive. To prevent secrets from leaking out (for example in ``ps`` output), SSH does not support providing passwords via environment variables.
Example CLI Task
----------------
.. code-block:: yaml
- name: Backup current switch config (ironware)
ironware_config:
backup: yes
register: backup_ironware_location
when: ansible_network_os == 'ironware'
.. include:: shared_snippets/SSH_warning.txt


@ -107,7 +107,7 @@ Deprecated
While all items listed here will show a deprecation warning message, they still work as they did in 1.9.x. Please note that they will be removed in 2.2 (Ansible always waits two major releases to remove a deprecated feature).
* Bare variables in ``with_`` loops should instead use the ``"{ {var }}"`` syntax, which helps eliminate ambiguity.
* Bare variables in ``with_`` loops should instead use the ``"{{ var }}"`` syntax, which helps eliminate ambiguity.
* The ansible-galaxy text format requirements file. Users should use the YAML format for requirements instead.
* Undefined variables within a ``with_`` loop's list currently do not interrupt the loop, but they do issue a warning; in the future, they will issue an error.
* Using dictionary variables to set all task parameters is unsafe and will be removed in a future version. For example::


@ -38,6 +38,26 @@ There is an important difference in the way that ``include_role`` (dynamic) will
Deprecated
==========
Expedited Deprecation: Use of ``__file__`` in ``AnsibleModule``
---------------------------------------------------------------
.. note:: The use of the ``__file__`` variable is deprecated in Ansible 2.7 and **will be eliminated in Ansible 2.8**. This is much quicker than our usual 4-release deprecation cycle.
We are deprecating the use of the ``__file__`` variable to refer to the file containing the currently-running code. This common Python technique for finding a filesystem path does not always work (even in vanilla Python). Sometimes a Python module can be imported from a virtual location (like inside of a zip file). When this happens, the ``__file__`` variable will reference a virtual location pointing to inside of the zip file. This can cause problems if, for instance, the code was trying to use ``__file__`` to find the directory containing the python module to write some temporary information.
Before the introduction of AnsiBallZ in Ansible 2.1, using ``__file__`` worked in ``AnsibleModule`` sometimes, but any module that used it would fail when pipelining was turned on (because the module would be piped into the python interpreter's standard input, so ``__file__`` wouldn't contain a file path). AnsiBallZ unintentionally made using ``__file__`` always work, by always creating a temporary file for ``AnsibleModule`` to reside in.
Ansible 2.8 will no longer create a temporary file for ``AnsibleModule``; instead it will read the file out of a zip file. This change should speed up module execution, but it does mean that starting with Ansible 2.8, referencing ``__file__`` will always fail in ``AnsibleModule``.
If you are the author of a third-party module which uses ``__file__`` with ``AnsibleModule``, please update your module(s) now, while the use of ``__file__`` is deprecated but still available. The most common use of ``__file__`` is to find a directory to write a temporary file. In Ansible 2.5 and above, you can use the ``tmpdir`` attribute on an ``AnsibleModule`` instance instead, as shown in this code from the :ref:`apt module <apt_module>`:
.. code-block:: diff
- tempdir = os.path.dirname(__file__)
- package = os.path.join(tempdir, to_native(deb.rsplit('/', 1)[1]))
+ package = os.path.join(module.tmpdir, to_native(deb.rsplit('/', 1)[1]))
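The zip-import failure mode described above is easy to reproduce outside of Ansible. A minimal standalone sketch (the ``payload.zip`` and ``mymod`` names are hypothetical):

```python
import os
import sys
import tempfile
import zipfile

# Build a tiny zip archive containing a module, then import it from the zip.
zpath = os.path.join(tempfile.mkdtemp(), "payload.zip")
with zipfile.ZipFile(zpath, "w") as z:
    z.writestr("mymod.py", "WHERE = __file__\n")

sys.path.insert(0, zpath)
import mymod  # loaded by zipimport, not from a real .py file on disk

print(mymod.WHERE)                      # a virtual path inside payload.zip
assert not os.path.isfile(mymod.WHERE)  # the "file" does not exist on disk
```

Any code that does ``os.path.dirname(__file__)`` here gets a directory that cannot be opened, which is exactly why ``module.tmpdir`` is the safe replacement.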
Using a loop on a package module via squash_actions
---------------------------------------------------

View file

@ -32,7 +32,7 @@ Cleaning Duty
Engine Improvements
-------------------
- Make ``become`` plugin based. `pr #38861 <https://github.com/ansible/ansible/pull/38861>`_
- Make ``become`` plugin based. `pr #38861 <https://github.com/ansible/ansible/pull/38861>`_
- Introduce a ``live`` keyword to provide modules the ability to push intermediate (live) updates `pr #13620 <https://github.com/ansible/ansible/pull/13620>`_
- Add content_path for mazer installed content `pr #42867 <https://github.com/ansible/ansible/pull/42867/>`_
- Investigate what it will take to utilise the work performed by Mitogen maintainers. `pr #41749 <https://github.com/ansible/ansible/pull/41749>`_, `branch <https://github.com/jimi-c/ansible/tree/abadger-ansiballz-one-interpreter>`_ and talk to jimi-c
@ -48,12 +48,12 @@ Core Modules
- Include feature changes and improvements
- Create new argument `apply` that will allow for included tasks to inherit explicitly provided attributes. `pr #39236 <https://github.com/ansible/ansible/pull/39236>`_
- Create new argument ``apply`` that will allow for included tasks to inherit explicitly provided attributes. `pr #39236 <https://github.com/ansible/ansible/pull/39236>`_
- Create "private" functionality for allowing vars/defaults to be exposed outside of roles. `pr #41330 <https://github.com/ansible/ansible/pull/41330>`_
- Provide a parameter for the `template` module to output to different encoding formats `pr
- Provide a parameter for the ``template`` module to output to different encoding formats `pr
#42171 <https://github.com/ansible/ansible/pull/42171>`_
- `reboot` module for Linux hosts (@sdoran)
- ``reboot`` module for Linux hosts (@samdoran) `pr #35205 <https://github.com/ansible/ansible/pull/35205>`_
Cloud Modules
-------------


@ -94,12 +94,12 @@ If you use more than one CloudStack region, you can define as many sections as y
key = api key
secret = api secret
[exmaple_cloud_one]
[example_cloud_one]
endpoint = https://cloud-one.example.com/client/api
key = api key
secret = api secret
[exmaple_cloud_two]
[example_cloud_two]
endpoint = https://cloud-two.example.com/client/api
key = api key
secret = api secret
@ -127,8 +127,8 @@ Or by looping over a regions list if you want to do the task in every region:
api_region: "{{ item }}"
loop:
- exoscale
- exmaple_cloud_one
- exmaple_cloud_two
- example_cloud_one
- example_cloud_two
Environment Variables
`````````````````````


@ -47,6 +47,22 @@ for example::
- set_fact:
myvar: "{{ result.stdout | from_json }}"
.. versionadded:: 2.7
To parse multi-document yaml strings, the ``from_yaml_all`` filter is provided.
The ``from_yaml_all`` filter will return a generator of parsed yaml documents.
for example::
tasks:
- shell: cat /some/path/to/multidoc-file.yaml
register: result
- debug:
msg: '{{ item }}'
loop: '{{ result.stdout | from_yaml_all | list }}'
.. _forcing_variables_to_be_defined:
Forcing Variables To Be Defined


@ -402,8 +402,6 @@ Support
~~~~~~~
For more information about Red Hat's support of this @{ plugin_type }@,
please refer to this `Knowledge Base article <https://access.redhat.com/articles/rhel-top-support-policies/>`_
{% else %}
This @{ plugin_type }@ is flagged as **preview** which means that @{module_states['preview']}@.
{% endif %}
{% endif %}


@ -156,7 +156,7 @@ def boilerplate_module(modfile, args, interpreters, check, destfile):
task_vars=task_vars
)
if module_style == 'new' and 'ANSIBALLZ_WRAPPER = True' in to_native(module_data):
if module_style == 'new' and '_ANSIBALLZ_WRAPPER = True' in to_native(module_data):
module_style = 'ansiballz'
modfile2_path = os.path.expanduser(destfile)
@ -192,7 +192,7 @@ def ansiballz_setup(modfile, modname, interpreters):
debug_dir = lines[1].strip()
argsfile = os.path.join(debug_dir, 'args')
modfile = os.path.join(debug_dir, 'ansible_module_%s.py' % modname)
modfile = os.path.join(debug_dir, '__main__.py')
print("* ansiballz module detected; extracted module source to: %s" % debug_dir)
return modfile, argsfile


@ -70,7 +70,7 @@ _MODULE_UTILS_PATH = os.path.join(os.path.dirname(__file__), '..', 'module_utils
ANSIBALLZ_TEMPLATE = u'''%(shebang)s
%(coding)s
ANSIBALLZ_WRAPPER = True # For test-module script to tell this is a ANSIBALLZ_WRAPPER
_ANSIBALLZ_WRAPPER = True # For test-module script to tell this is a ANSIBALLZ_WRAPPER
# This code is part of Ansible, but is an independent component.
# The code in this particular templatable string, and this templatable string
# only, is BSD licensed. Modules which end up using this snippet, which is
@ -98,201 +98,203 @@ ANSIBALLZ_WRAPPER = True # For test-module script to tell this is a ANSIBALLZ_WR
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import os
import os.path
import sys
import __main__
def _ansiballz_main():
import os
import os.path
import sys
import __main__
# For some distros and python versions we pick up this script in the temporary
# directory. This leads to problems when the ansible module masks a python
# library that another import needs. We have not figured out what about the
# specific distros and python versions causes this to behave differently.
#
# Tested distros:
# Fedora23 with python3.4 Works
# Ubuntu15.10 with python2.7 Works
# Ubuntu15.10 with python3.4 Fails without this
# Ubuntu16.04.1 with python3.5 Fails without this
# To test on another platform:
# * use the copy module (since this shadows the stdlib copy module)
# * Turn off pipelining
# * Make sure that the destination file does not exist
# * ansible ubuntu16-test -m copy -a 'src=/etc/motd dest=/var/tmp/m'
# This will traceback in shutil. Looking at the complete traceback will show
# that shutil is importing copy which finds the ansible module instead of the
# stdlib module
scriptdir = None
try:
scriptdir = os.path.dirname(os.path.realpath(__main__.__file__))
except (AttributeError, OSError):
# Some platforms don't set __file__ when reading from stdin
# OSX raises OSError if using abspath() in a directory we don't have
# permission to read (realpath calls abspath)
pass
if scriptdir is not None:
sys.path = [p for p in sys.path if p != scriptdir]
# For some distros and python versions we pick up this script in the temporary
# directory. This leads to problems when the ansible module masks a python
# library that another import needs. We have not figured out what about the
# specific distros and python versions causes this to behave differently.
#
# Tested distros:
# Fedora23 with python3.4 Works
# Ubuntu15.10 with python2.7 Works
# Ubuntu15.10 with python3.4 Fails without this
# Ubuntu16.04.1 with python3.5 Fails without this
# To test on another platform:
# * use the copy module (since this shadows the stdlib copy module)
# * Turn off pipelining
# * Make sure that the destination file does not exist
# * ansible ubuntu16-test -m copy -a 'src=/etc/motd dest=/var/tmp/m'
# This will traceback in shutil. Looking at the complete traceback will show
# that shutil is importing copy which finds the ansible module instead of the
# stdlib module
scriptdir = None
try:
scriptdir = os.path.dirname(os.path.realpath(__main__.__file__))
except (AttributeError, OSError):
# Some platforms don't set __file__ when reading from stdin
# OSX raises OSError if using abspath() in a directory we don't have
# permission to read (realpath calls abspath)
pass
if scriptdir is not None:
sys.path = [p for p in sys.path if p != scriptdir]
import base64
import shutil
import zipfile
import tempfile
import subprocess
import base64
import imp
import shutil
import tempfile
import zipfile
if sys.version_info < (3,):
bytes = str
PY3 = False
else:
unicode = str
PY3 = True
ZIPDATA = """%(zipdata)s"""
def invoke_module(module, modlib_path, json_params):
pythonpath = os.environ.get('PYTHONPATH')
if pythonpath:
os.environ['PYTHONPATH'] = ':'.join((modlib_path, pythonpath))
if sys.version_info < (3,):
bytes = str
MOD_DESC = ('.py', 'U', imp.PY_SOURCE)
PY3 = False
else:
os.environ['PYTHONPATH'] = modlib_path
unicode = str
MOD_DESC = ('.py', 'r', imp.PY_SOURCE)
PY3 = True
p = subprocess.Popen([%(interpreter)s, module], env=os.environ, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
(stdout, stderr) = p.communicate(json_params)
ZIPDATA = """%(zipdata)s"""
if not isinstance(stderr, (bytes, unicode)):
stderr = stderr.read()
if not isinstance(stdout, (bytes, unicode)):
stdout = stdout.read()
if PY3:
sys.stderr.buffer.write(stderr)
sys.stdout.buffer.write(stdout)
else:
sys.stderr.write(stderr)
sys.stdout.write(stdout)
return p.returncode
# Note: temp_path isn't needed once we switch to zipimport
def invoke_module(modlib_path, temp_path, json_params):
# When installed via setuptools (including python setup.py install),
# ansible may be installed with an easy-install.pth file. That file
# may load the system-wide install of ansible rather than the one in
# the module. sitecustomize is the only way to override that setting.
z = zipfile.ZipFile(modlib_path, mode='a')
def debug(command, zipped_mod, json_params):
# The code here normally doesn't run. It's only used for debugging on the
# remote machine.
#
# The subcommands in this function make it easier to debug ansiballz
# modules. Here's the basic steps:
#
# Run ansible with the environment variable: ANSIBLE_KEEP_REMOTE_FILES=1 and -vvv
# to save the module file remotely::
# $ ANSIBLE_KEEP_REMOTE_FILES=1 ansible host1 -m ping -a 'data=october' -vvv
#
# Part of the verbose output will tell you where on the remote machine the
# module was written to::
# [...]
# <host1> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o
# PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o
# ControlPath=/home/badger/.ansible/cp/ansible-ssh-%%h-%%p-%%r -tt rhel7 '/bin/sh -c '"'"'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8
# LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/badger/.ansible/tmp/ansible-tmp-1461173013.93-9076457629738/ping'"'"''
# [...]
#
# Log in to the remote machine and run the module file from the previous
# step with the explode subcommand to extract the module payload into
# source files::
# $ ssh host1
# $ /usr/bin/python /home/badger/.ansible/tmp/ansible-tmp-1461173013.93-9076457629738/ping explode
# Module expanded into:
# /home/badger/.ansible/tmp/ansible-tmp-1461173408.08-279692652635227/ansible
#
# You can now edit the source files to instrument the code or experiment with
# different parameter values. When you're ready to run the code you've modified
# (instead of the code from the actual zipped module), use the execute subcommand like this::
# $ /usr/bin/python /home/badger/.ansible/tmp/ansible-tmp-1461173013.93-9076457629738/ping execute
# py3: modlib_path will be text, py2: it's bytes. Need bytes at the end
sitecustomize = u'import sys\\nsys.path.insert(0,"%%s")\\n' %% modlib_path
sitecustomize = sitecustomize.encode('utf-8')
# Use a ZipInfo to work around zipfile limitation on hosts with
# clocks set to a pre-1980 year (for instance, Raspberry Pi)
zinfo = zipfile.ZipInfo()
zinfo.filename = 'sitecustomize.py'
zinfo.date_time = ( %(year)i, %(month)i, %(day)i, %(hour)i, %(minute)i, %(second)i)
z.writestr(zinfo, sitecustomize)
# Note: Remove the following section when we switch to zipimport
# Write the module to disk for imp.load_module
module = os.path.join(temp_path, '__main__.py')
with open(module, 'wb') as f:
f.write(z.read('__main__.py'))
f.close()
# End pre-zipimport section
z.close()
# Okay to use __file__ here because we're running from a kept file
basedir = os.path.join(os.path.abspath(os.path.dirname(__file__)), 'debug_dir')
args_path = os.path.join(basedir, 'args')
script_path = os.path.join(basedir, 'ansible_module_%(ansible_module)s.py')
# Put the zipped up module_utils we got from the controller first in the python path so that we
# can monkeypatch the right basic
sys.path.insert(0, modlib_path)
if command == 'explode':
# transform the ZIPDATA into an exploded directory of code and then
# print the path to the code. This is an easy way for people to look
# at the code on the remote machine for debugging it in that
# environment
z = zipfile.ZipFile(zipped_mod)
for filename in z.namelist():
if filename.startswith('/'):
raise Exception('Something wrong with this module zip file: should not contain absolute paths')
# Monkeypatch the parameters into basic
from ansible.module_utils import basic
basic._ANSIBLE_ARGS = json_params
%(coverage)s
# Run the module! By importing it as '__main__', it thinks it is executing as a script
with open(module, 'rb') as mod:
imp.load_module('__main__', mod, module, MOD_DESC)
dest_filename = os.path.join(basedir, filename)
if dest_filename.endswith(os.path.sep) and not os.path.exists(dest_filename):
os.makedirs(dest_filename)
else:
directory = os.path.dirname(dest_filename)
if not os.path.exists(directory):
os.makedirs(directory)
f = open(dest_filename, 'wb')
f.write(z.read(filename))
f.close()
# write the args file
f = open(args_path, 'wb')
f.write(json_params)
f.close()
print('Module expanded into:')
print('%%s' %% basedir)
exitcode = 0
elif command == 'execute':
# Execute the exploded code instead of executing the module from the
# embedded ZIPDATA. This allows people to easily run their modified
# code on the remote machine to see how changes will affect it.
# This differs slightly from default Ansible execution of Python modules
# as it passes the arguments to the module via a file instead of stdin.
# Set pythonpath to the debug dir
pythonpath = os.environ.get('PYTHONPATH')
if pythonpath:
os.environ['PYTHONPATH'] = ':'.join((basedir, pythonpath))
else:
os.environ['PYTHONPATH'] = basedir
p = subprocess.Popen([%(interpreter)s, script_path, args_path],
env=os.environ, shell=False, stdout=subprocess.PIPE,
stderr=subprocess.PIPE, stdin=subprocess.PIPE)
(stdout, stderr) = p.communicate()
if not isinstance(stderr, (bytes, unicode)):
stderr = stderr.read()
if not isinstance(stdout, (bytes, unicode)):
stdout = stdout.read()
if PY3:
sys.stderr.buffer.write(stderr)
sys.stdout.buffer.write(stdout)
else:
sys.stderr.write(stderr)
sys.stdout.write(stdout)
return p.returncode
elif command == 'excommunicate':
# This attempts to run the module in-process (by importing a main
# function and then calling it). It is not the way ansible generally
# invokes the module so it won't work in every case. It is here to
# aid certain debuggers which work better when the code doesn't change
# from one process to another but there may be problems that occur
# when using this that are only artifacts of how we're invoking here,
# not actual bugs (as they don't affect the real way that we invoke
# ansible modules)
# stub the args and python path
sys.argv = ['%(ansible_module)s', args_path]
sys.path.insert(0, basedir)
from ansible_module_%(ansible_module)s import main
main()
print('WARNING: Module returned to wrapper instead of exiting')
# Ansible modules must exit themselves
print('{"msg": "New-style module did not handle its own exit", "failed": true}')
sys.exit(1)
else:
print('WARNING: Unknown debug command. Doing nothing.')
exitcode = 0
return exitcode
def debug(command, zipped_mod, json_params):
# The code here normally doesn't run. It's only used for debugging on the
# remote machine.
#
# The subcommands in this function make it easier to debug ansiballz
# modules. Here's the basic steps:
#
# Run ansible with the environment variable: ANSIBLE_KEEP_REMOTE_FILES=1 and -vvv
# to save the module file remotely::
# $ ANSIBLE_KEEP_REMOTE_FILES=1 ansible host1 -m ping -a 'data=october' -vvv
#
# Part of the verbose output will tell you where on the remote machine the
# module was written to::
# [...]
# <host1> SSH: EXEC ssh -C -q -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o
# PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o
# ControlPath=/home/badger/.ansible/cp/ansible-ssh-%%h-%%p-%%r -tt rhel7 '/bin/sh -c '"'"'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8
# LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/badger/.ansible/tmp/ansible-tmp-1461173013.93-9076457629738/ping'"'"''
# [...]
#
# Log in to the remote machine and run the module file from the previous
# step with the explode subcommand to extract the module payload into
# source files::
# $ ssh host1
# $ /usr/bin/python /home/badger/.ansible/tmp/ansible-tmp-1461173013.93-9076457629738/ping explode
# Module expanded into:
# /home/badger/.ansible/tmp/ansible-tmp-1461173408.08-279692652635227/ansible
#
# You can now edit the source files to instrument the code or experiment with
# different parameter values. When you're ready to run the code you've modified
# (instead of the code from the actual zipped module), use the execute subcommand like this::
# $ /usr/bin/python /home/badger/.ansible/tmp/ansible-tmp-1461173013.93-9076457629738/ping execute
# Okay to use __file__ here because we're running from a kept file
basedir = os.path.join(os.path.abspath(os.path.dirname(__file__)), 'debug_dir')
args_path = os.path.join(basedir, 'args')
script_path = os.path.join(basedir, '__main__.py')
if command == 'excommunicate':
print('The excommunicate debug command is deprecated and will be removed in 2.11. Use execute instead.')
command = 'execute'
if command == 'explode':
# transform the ZIPDATA into an exploded directory of code and then
# print the path to the code. This is an easy way for people to look
# at the code on the remote machine for debugging it in that
# environment
z = zipfile.ZipFile(zipped_mod)
for filename in z.namelist():
if filename.startswith('/'):
raise Exception('Something wrong with this module zip file: should not contain absolute paths')
dest_filename = os.path.join(basedir, filename)
if dest_filename.endswith(os.path.sep) and not os.path.exists(dest_filename):
os.makedirs(dest_filename)
else:
directory = os.path.dirname(dest_filename)
if not os.path.exists(directory):
os.makedirs(directory)
f = open(dest_filename, 'wb')
f.write(z.read(filename))
f.close()
# write the args file
f = open(args_path, 'wb')
f.write(json_params)
f.close()
print('Module expanded into:')
print('%%s' %% basedir)
exitcode = 0
elif command == 'execute':
# Execute the exploded code instead of executing the module from the
# embedded ZIPDATA. This allows people to easily run their modified
# code on the remote machine to see how changes will affect it.
# Set pythonpath to the debug dir
sys.path.insert(0, basedir)
# read in the args file which the user may have modified
with open(args_path, 'rb') as f:
json_params = f.read()
# Monkeypatch the parameters into basic
from ansible.module_utils import basic
basic._ANSIBLE_ARGS = json_params
# Run the module! By importing it as '__main__', it thinks it is executing as a script
import imp
with open(script_path, 'r') as f:
importer = imp.load_module('__main__', f, script_path, ('.py', 'r', imp.PY_SOURCE))
# Ansible modules must exit themselves
print('{"msg": "New-style module did not handle its own exit", "failed": true}')
sys.exit(1)
else:
print('WARNING: Unknown debug command. Doing nothing.')
exitcode = 0
return exitcode
if __name__ == '__main__':
#
# See comments in the debug() method for information on debugging
#
@@ -304,40 +306,19 @@ if __name__ == '__main__':
# There's a race condition with the controller removing the
# remote_tmpdir and this module executing under async. So we cannot
# store this in remote_tmpdir (use system tempdir instead)
temp_path = tempfile.mkdtemp(prefix='ansible_')
# Only need to use [ansible_module]_payload_ in the temp_path until we move to zipimport
# (this helps ansible-test produce coverage stats)
temp_path = tempfile.mkdtemp(prefix='ansible_%(ansible_module)s_payload_')
zipped_mod = os.path.join(temp_path, 'ansible_modlib.zip')
modlib = open(zipped_mod, 'wb')
modlib.write(base64.b64decode(ZIPDATA))
modlib.close()
zipped_mod = os.path.join(temp_path, 'ansible_%(ansible_module)s_payload.zip')
with open(zipped_mod, 'wb') as modlib:
modlib.write(base64.b64decode(ZIPDATA))
if len(sys.argv) == 2:
exitcode = debug(sys.argv[1], zipped_mod, ANSIBALLZ_PARAMS)
else:
z = zipfile.ZipFile(zipped_mod, mode='r')
module = os.path.join(temp_path, 'ansible_module_%(ansible_module)s.py')
f = open(module, 'wb')
f.write(z.read('ansible_module_%(ansible_module)s.py'))
f.close()
# When installed via setuptools (including python setup.py install),
# ansible may be installed with an easy-install.pth file. That file
# may load the system-wide install of ansible rather than the one in
# the module. sitecustomize is the only way to override that setting.
z = zipfile.ZipFile(zipped_mod, mode='a')
# py3: zipped_mod will be text, py2: it's bytes. Need bytes at the end
sitecustomize = u'import sys\\nsys.path.insert(0,"%%s")\\n' %% zipped_mod
sitecustomize = sitecustomize.encode('utf-8')
# Use a ZipInfo to work around zipfile limitation on hosts with
# clocks set to a pre-1980 year (for instance, Raspberry Pi)
zinfo = zipfile.ZipInfo()
zinfo.filename = 'sitecustomize.py'
zinfo.date_time = ( %(year)i, %(month)i, %(day)i, %(hour)i, %(minute)i, %(second)i)
z.writestr(zinfo, sitecustomize)
z.close()
exitcode = invoke_module(module, zipped_mod, ANSIBALLZ_PARAMS)
# Note: temp_path isn't needed once we switch to zipimport
invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)
finally:
try:
shutil.rmtree(temp_path)
@@ -345,6 +326,33 @@ if __name__ == '__main__':
# tempdir creation probably failed
pass
sys.exit(exitcode)
if __name__ == '__main__':
_ansiballz_main()
'''
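The template writes `sitecustomize.py` through an explicit `ZipInfo` because `ZipFile.writestr()` otherwise stamps entries with the current clock, and the ZIP format cannot represent years before 1980 (a Raspberry Pi without an RTC can boot with such a clock). The workaround in isolation, using an in-memory archive and a made-up timestamp:

```python
import io
import zipfile

buf = io.BytesIO()
payload = b'import sys\nsys.path.insert(0, "/tmp/modlib.zip")\n'

with zipfile.ZipFile(buf, mode='a') as z:
    # Supplying a ZipInfo with an explicit, valid date avoids the
    # ValueError zipfile raises when the host clock predates 1980
    zinfo = zipfile.ZipInfo()
    zinfo.filename = 'sitecustomize.py'
    zinfo.date_time = (2018, 7, 27, 12, 0, 0)
    z.writestr(zinfo, payload)

# The entry round-trips like any normally written member
with zipfile.ZipFile(buf) as z:
    recovered = z.read('sitecustomize.py')
```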
ANSIBALLZ_COVERAGE_TEMPLATE = '''
# Access to the working directory is required by coverage.
# Some platforms, such as macOS, may not allow querying the working directory when using become to drop privileges.
try:
os.getcwd()
except OSError:
os.chdir('/')
os.environ['COVERAGE_FILE'] = '%(coverage_output)s'
import atexit
import coverage
cov = coverage.Coverage(config_file='%(coverage_config)s')
def atexit_coverage():
cov.stop()
cov.save()
atexit.register(atexit_coverage)
cov.start()
'''
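The coverage template relies on `atexit` because Ansible modules call `sys.exit()` themselves, and handlers registered with `atexit` still run on a normal `sys.exit()`. A sketch of that pattern, with a stand-in recorder in place of `coverage.Coverage` so it runs without the coverage package installed:

```python
import subprocess
import sys
import textwrap

# Child script mirroring the template's stop/save-at-exit pattern;
# FakeCoverage stands in for coverage.Coverage for illustration
child = textwrap.dedent("""
    import atexit, sys

    class FakeCoverage:
        def stop(self):
            print('stopped')
        def save(self):
            print('saved')

    cov = FakeCoverage()

    def atexit_coverage():
        cov.stop()
        cov.save()

    atexit.register(atexit_coverage)
    sys.exit(3)  # the module exits itself; the atexit hook still fires
""")

proc = subprocess.run([sys.executable, '-c', child],
                      capture_output=True, text=True)
```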
@@ -759,7 +767,7 @@ def _find_module_utils(module_name, b_module_data, module_path, module_args, tas
to_bytes(__author__) + b'"\n')
zf.writestr('ansible/module_utils/__init__.py', b'from pkgutil import extend_path\n__path__=extend_path(__path__,__name__)\n')
zf.writestr('ansible_module_%s.py' % module_name, b_module_data)
zf.writestr('__main__.py', b_module_data)
py_module_cache = {('__init__',): (b'', '[builtin]')}
recursive_finder(module_name, b_module_data, py_module_names, py_module_cache, zf)
@@ -805,6 +813,18 @@ def _find_module_utils(module_name, b_module_data, module_path, module_args, tas
interpreter_parts = interpreter.split(u' ')
interpreter = u"'{0}'".format(u"', '".join(interpreter_parts))
coverage_config = os.environ.get('_ANSIBLE_COVERAGE_CONFIG')
if coverage_config:
# Enable code coverage analysis of the module.
# This feature is for internal testing and may change without notice.
coverage = ANSIBALLZ_COVERAGE_TEMPLATE % dict(
coverage_config=coverage_config,
coverage_output=os.environ['_ANSIBLE_COVERAGE_OUTPUT']
)
else:
coverage = ''
now = datetime.datetime.utcnow()
output.write(to_bytes(ACTIVE_ANSIBALLZ_TEMPLATE % dict(
zipdata=zipdata,
@@ -819,6 +839,7 @@ def _find_module_utils(module_name, b_module_data, module_path, module_args, tas
hour=now.hour,
minute=now.minute,
second=now.second,
coverage=coverage,
)))
b_module_data = output.getvalue()


@@ -23,8 +23,7 @@ except ImportError:
AZURE_COMMON_ARGS = dict(
auth_source=dict(
type='str',
choices=['auto', 'cli', 'env', 'credential_file', 'msi'],
default='auto'
choices=['auto', 'cli', 'env', 'credential_file', 'msi']
),
profile=dict(type='str'),
subscription_id=dict(type='str', no_log=True),


@@ -62,6 +62,7 @@ PASS_BOOLS = ('no_log', 'debug', 'diff')
# The functions available here can be used to do many common tasks,
# to simplify development of Python modules.
import __main__
import atexit
import locale
import os


@@ -70,13 +70,19 @@ class PkgMgrFactCollector(BaseFactCollector):
_platform = 'Generic'
required_facts = set(['distribution'])
def _check_rh_versions(self, collected_facts):
def _check_rh_versions(self, pkg_mgr_name, collected_facts):
if collected_facts['ansible_distribution'] == 'Fedora':
try:
if int(collected_facts['ansible_distribution_major_version']) < 15:
pkg_mgr_name = 'yum'
if int(collected_facts['ansible_distribution_major_version']) < 23:
for yum in [pkg_mgr for pkg_mgr in PKG_MGRS if pkg_mgr['name'] == 'yum']:
if os.path.exists(yum['path']):
pkg_mgr_name = 'yum'
break
else:
pkg_mgr_name = 'dnf'
for dnf in [pkg_mgr for pkg_mgr in PKG_MGRS if pkg_mgr['name'] == 'dnf']:
if os.path.exists(dnf['path']):
pkg_mgr_name = 'dnf'
break
except ValueError:
# If there's some new magical Fedora version in the future,
# just default to dnf
@@ -92,14 +98,14 @@ class PkgMgrFactCollector(BaseFactCollector):
if os.path.exists(pkg['path']):
pkg_mgr_name = pkg['name']
# apt is easily installable and supported by distros other than those
# that are debian based, this handles some of those scenarios as they
# are reported/requested
if pkg_mgr_name == 'apt' and collected_facts['ansible_os_family'] in ["RedHat", "Altlinux"]:
if collected_facts['ansible_os_family'] == 'RedHat':
pkg_mgr_name = self._check_rh_versions(collected_facts)
elif collected_facts['ansible_os_family'] == 'Altlinux':
# Handle distro family defaults when more than one package manager is
# installed, the ansible_fact entry should be the default package
# manager provided by the distro.
if collected_facts['ansible_os_family'] == "RedHat":
if pkg_mgr_name not in ('yum', 'dnf'):
pkg_mgr_name = self._check_rh_versions(pkg_mgr_name, collected_facts)
elif collected_facts['ansible_os_family'] == 'Altlinux':
if pkg_mgr_name == 'apt':
pkg_mgr_name = 'apt_rpm'
# pacman has become available by distros other than those that are Arch
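The Fedora yum/dnf selection in the hunk above can be sketched as a simplified, self-contained re-implementation: releases before Fedora 23 prefer yum, later ones prefer dnf, and either choice is confirmed by checking that the manager's binary actually exists. The `PKG_MGRS` table and the injected `path_exists` callable are illustrative stand-ins, not Ansible's own definitions:

```python
# Illustrative subset of the PKG_MGRS table from the real collector
PKG_MGRS = [
    {'name': 'yum', 'path': '/usr/bin/yum'},
    {'name': 'dnf', 'path': '/usr/bin/dnf'},
]


def check_rh_versions(pkg_mgr_name, collected_facts, path_exists):
    """Pick yum vs dnf on Fedora by major version, keeping the caller's
    choice if the preferred manager's binary is not present."""
    if collected_facts.get('ansible_distribution') != 'Fedora':
        return pkg_mgr_name
    try:
        major = int(collected_facts['ansible_distribution_major_version'])
    except ValueError:
        # Some new magical Fedora version in the future: default to dnf
        return 'dnf'
    wanted = 'yum' if major < 23 else 'dnf'
    for mgr in PKG_MGRS:
        if mgr['name'] == wanted and path_exists(mgr['path']):
            return wanted
    return pkg_mgr_name
```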


@@ -164,7 +164,8 @@ def run_commands(module, commands, check_rc=True):
def run_cnos_commands(module, commands, check_rc=True):
retVal = ''
enter_config = {'command': 'configure terminal', 'prompt': None, 'answer': None}
enter_config = {'command': 'configure terminal', 'prompt': None,
'answer': None}
exit_config = {'command': 'end', 'prompt': None, 'answer': None}
commands.insert(0, enter_config)
commands.append(exit_config)
@@ -212,128 +213,97 @@ def get_defaults_flag(module):
return 'full'
def interfaceConfig(
obj, deviceType, prompt, timeout, interfaceArg1,
interfaceArg2, interfaceArg3, interfaceArg4, interfaceArg5,
interfaceArg6, interfaceArg7, interfaceArg8, interfaceArg9):
def interfaceConfig(module, prompt, functionality, answer):
retVal = ""
command = "interface "
newPrompt = prompt
if(interfaceArg1 == "port-aggregation"):
command = command + " " + interfaceArg1 + " " + interfaceArg2 + "\n"
interfaceArg1 = functionality
interfaceArg2 = module.params['interfaceRange']
interfaceArg3 = module.params['interfaceArg1']
interfaceArg4 = module.params['interfaceArg2']
interfaceArg5 = module.params['interfaceArg3']
interfaceArg6 = module.params['interfaceArg4']
interfaceArg7 = module.params['interfaceArg5']
interfaceArg8 = module.params['interfaceArg6']
interfaceArg9 = module.params['interfaceArg7']
deviceType = module.params['deviceType']
if(interfaceArg1 == "port-channel"):
command = command + " " + interfaceArg1 + " " + interfaceArg2
# debugOutput(command)
value = checkSanityofVariable(
deviceType, "portchannel_interface_value", interfaceArg2)
if(value == "ok"):
newPrompt = "(config-if)#"
retVal = retVal + \
waitForDeviceResponse(command, newPrompt, timeout, obj)
cmd = [{'command': command, 'prompt': None, 'answer': None}]
else:
value = checkSanityofVariable(
deviceType, "portchannel_interface_range", interfaceArg2)
if(value == "ok"):
newPrompt = "(config-if-range)#"
retVal = retVal + \
waitForDeviceResponse(command, newPrompt, timeout, obj)
cmd = [{'command': command, 'prompt': None, 'answer': None}]
else:
value = checkSanityofVariable(
deviceType, "portchannel_interface_string", interfaceArg2)
if(value == "ok"):
newPrompt = "(config-if-range)#"
if '/' in interfaceArg2:
newPrompt = "(config-if)#"
retVal = retVal + \
waitForDeviceResponse(command, newPrompt, timeout, obj)
cmd = [{'command': command, 'prompt': None,
'answer': None}]
else:
retVal = "Error-102"
return retVal
retVal = retVal + interfaceLevel2Config(
obj, deviceType, newPrompt, timeout, interfaceArg3, interfaceArg4,
interfaceArg5, interfaceArg6, interfaceArg7, interfaceArg8,
interfaceArg9)
retVal = retVal + interfaceLevel2Config(module, cmd, prompt, answer)
elif(interfaceArg1 == "ethernet"):
# command = command + interfaceArg1 + " 1/"
value = checkSanityofVariable(
deviceType, "ethernet_interface_value", interfaceArg2)
if(value == "ok"):
newPrompt = "(config-if)#"
command = command + interfaceArg1 + " 1/" + interfaceArg2 + " \n"
retVal = retVal + \
waitForDeviceResponse(command, newPrompt, timeout, obj)
command = command + interfaceArg1 + " 1/" + interfaceArg2
cmd = [{'command': command, 'prompt': None, 'answer': None}]
else:
value = checkSanityofVariable(
deviceType, "ethernet_interface_range", interfaceArg2)
if(value == "ok"):
command = command + \
interfaceArg1 + " 1/" + interfaceArg2 + " \n"
newPrompt = "(config-if-range)#"
retVal = retVal + \
waitForDeviceResponse(command, newPrompt, timeout, obj)
command = command + interfaceArg1 + " 1/" + interfaceArg2
cmd = [{'command': command, 'prompt': None, 'answer': None}]
else:
value = checkSanityofVariable(
deviceType, "ethernet_interface_string", interfaceArg2)
if(value == "ok"):
command = command + \
interfaceArg1 + " " + interfaceArg2 + "\n"
newPrompt = "(config-if-range)#"
if '/' in interfaceArg2:
newPrompt = "(config-if)#"
retVal = retVal + \
waitForDeviceResponse(command, newPrompt, timeout, obj)
command = command + interfaceArg1 + " " + interfaceArg2
cmd = [{'command': command, 'prompt': None,
'answer': None}]
else:
retVal = "Error-102"
return retVal
retVal = retVal + interfaceLevel2Config(
obj, deviceType, newPrompt, timeout, interfaceArg3, interfaceArg4,
interfaceArg5, interfaceArg6, interfaceArg7, interfaceArg8,
interfaceArg9)
retVal = retVal + interfaceLevel2Config(module, cmd, prompt, answer)
elif(interfaceArg1 == "loopback"):
value = checkSanityofVariable(
deviceType, "loopback_interface_value", interfaceArg2)
if(value == "ok"):
newPrompt = "(config-if)#"
command = command + interfaceArg1 + " " + interfaceArg2 + "\n"
retVal = retVal + \
waitForDeviceResponse(command, newPrompt, timeout, obj)
command = command + interfaceArg1 + " " + interfaceArg2
cmd = [{'command': command, 'prompt': None, 'answer': None}]
else:
retVal = "Error-102"
return retVal
retVal = retVal + interfaceLevel2Config(
obj, deviceType, newPrompt, timeout, interfaceArg3, interfaceArg4,
interfaceArg5, interfaceArg6, interfaceArg7, interfaceArg8,
interfaceArg9)
retVal = retVal + interfaceLevel2Config(module, cmd, prompt, answer)
elif(interfaceArg1 == "mgmt"):
value = checkSanityofVariable(
deviceType, "mgmt_interface_value", interfaceArg2)
if(value == "ok"):
newPrompt = "(config-if)#"
command = command + interfaceArg1 + " " + interfaceArg2 + "\n"
retVal = retVal + \
waitForDeviceResponse(command, newPrompt, timeout, obj)
command = command + interfaceArg1 + " " + interfaceArg2
cmd = [{'command': command, 'prompt': None, 'answer': None}]
else:
retVal = "Error-102"
return retVal
retVal = retVal + interfaceLevel2Config(
obj, deviceType, newPrompt, timeout, interfaceArg3, interfaceArg4,
interfaceArg5, interfaceArg6, interfaceArg7, interfaceArg8,
interfaceArg9)
retVal = retVal + interfaceLevel2Config(module, cmd, prompt, answer)
elif(interfaceArg1 == "vlan"):
value = checkSanityofVariable(
deviceType, "vlan_interface_value", interfaceArg2)
if(value == "ok"):
newPrompt = "(config-if)#"
command = command + interfaceArg1 + " " + interfaceArg2 + "\n"
retVal = retVal + \
waitForDeviceResponse(command, newPrompt, timeout, obj)
command = command + interfaceArg1 + " " + interfaceArg2
cmd = [{'command': command, 'prompt': None, 'answer': None}]
else:
retVal = "Error-102"
return retVal
retVal = retVal + interfaceLevel2Config(
obj, deviceType, newPrompt, timeout, interfaceArg3, interfaceArg4,
interfaceArg5, interfaceArg6, interfaceArg7, interfaceArg8,
interfaceArg9)
retVal = retVal + interfaceLevel2Config(module, cmd, prompt, answer)
else:
retVal = "Error-102"
@@ -341,14 +311,20 @@ def interfaceConfig(
# EOM
def interfaceLevel2Config(
obj, deviceType, prompt, timeout, interfaceL2Arg1, interfaceL2Arg2,
interfaceL2Arg3, interfaceL2Arg4, interfaceL2Arg5, interfaceL2Arg6,
interfaceL2Arg7):
def interfaceLevel2Config(module, cmd, prompt, answer):
retVal = ""
command = ""
if(interfaceL2Arg1 == "aggregation-group"):
# debugOutput("aggregation-group")
interfaceL2Arg1 = module.params['interfaceArg1']
interfaceL2Arg2 = module.params['interfaceArg2']
interfaceL2Arg3 = module.params['interfaceArg3']
interfaceL2Arg4 = module.params['interfaceArg4']
interfaceL2Arg5 = module.params['interfaceArg5']
interfaceL2Arg6 = module.params['interfaceArg6']
interfaceL2Arg7 = module.params['interfaceArg7']
deviceType = module.params['deviceType']
if(interfaceL2Arg1 == "channel-group"):
# debugOutput("channel-group")
command = interfaceL2Arg1 + " "
value = checkSanityofVariable(
deviceType, "aggregation_group_no", interfaceL2Arg2)
@@ -583,8 +559,8 @@ def interfaceLevel2Config(
retVal = "Error-205"
return retVal
elif (interfaceL2Arg1 == "bridge-port"):
# debugOutput("bridge-port")
elif (interfaceL2Arg1 == "switchport"):
# debugOutput("switchport")
command = interfaceL2Arg1 + " "
if(interfaceL2Arg2 is None):
command = command.strip()
@@ -1335,26 +1311,27 @@ def interfaceLevel2Config(
retVal = "Error-233"
return retVal
command = command + "\n"
# debugOutput(command)
retVal = retVal + waitForDeviceResponse(command, prompt, timeout, obj)
inner_cmd = [{'command': command, 'prompt': None, 'answer': None}]
cmd.extend(inner_cmd)
retVal = retVal + str(run_cnos_commands(module, cmd))
# Come back to config mode
if((prompt == "(config-if)#") or (prompt == "(config-if-range)#")):
command = "exit \n"
command = "exit"
# debugOutput(command)
retVal = retVal + \
waitForDeviceResponse(command, "(config)#", timeout, obj)
cmd = [{'command': command, 'prompt': None, 'answer': None}]
# retVal = retVal + str(run_cnos_commands(module, cmd))
return retVal
# EOM
def portChannelConfig(
obj, deviceType, prompt, timeout, portChArg1, portChArg2, portChArg3,
portChArg4, portChArg5, portChArg6, portChArg7):
retVal = ""
command = ""
if(portChArg1 == "port-aggregation" and prompt == "(config)#"):
def portChannelConfig(module, prompt, answer):
retVal = ''
command = ''
portChArg1 = module.params['interfaceArg1']
portChArg2 = module.params['interfaceArg2']
portChArg3 = module.params['interfaceArg3']
if(portChArg1 == "port-channel" and prompt == "(config)#"):
command = command + portChArg1 + " load-balance ethernet "
if(portChArg2 == "destination-ip" or
portChArg2 == "destination-mac" or
@@ -1373,13 +1350,14 @@ def portChannelConfig(
command = command + ""
elif(portChArg3 == "source-interface"):
command = command + portChArg3
cmd = [{'command': command, 'prompt': None, 'answer': None}]
retVal = retVal + str(run_cnos_commands(module, cmd))
else:
retVal = "Error-231"
return retVal
else:
retVal = "Error-232"
return retVal
# EOM
@@ -2554,15 +2532,17 @@ def createVlan(module, prompt, answer):
# EOM
def vlagConfig(
obj, deviceType, prompt, timeout, vlagArg1, vlagArg2, vlagArg3,
vlagArg4):
def vlagConfig(module, prompt, answer):
retVal = ""
# Wait time to get response from server
timeout = timeout
retVal = ''
# vlag config command happens here.
command = "vlag "
command = 'vlag '
vlagArg1 = module.params['vlagArg1']
vlagArg2 = module.params['vlagArg2']
vlagArg3 = module.params['vlagArg3']
vlagArg4 = module.params['vlagArg4']
deviceType = module.params['deviceType']
if(vlagArg1 == "enable"):
# debugOutput("enable")
@@ -2592,7 +2572,7 @@ def vlagConfig(
elif(vlagArg1 == "isl"):
# debugOutput("isl")
command = command + vlagArg1 + " port-aggregation "
command = command + vlagArg1 + " port-channel "
value = checkSanityofVariable(
deviceType, "vlag_port_aggregation", vlagArg2)
if(value == "ok"):
@@ -2651,7 +2631,7 @@ def vlagConfig(
if(value == "ok"):
command = command + vlagArg2
if(vlagArg3 is not None):
command = command + " port-aggregation "
command = command + " port-channel "
value = checkSanityofVariable(
deviceType, "vlag_port_aggregation", vlagArg3)
if(value == "ok"):
@@ -2718,10 +2698,8 @@ def vlagConfig(
return retVal
# debugOutput(command)
command = command + "\n"
# debugOutput(command)
retVal = retVal + waitForDeviceResponse(command, "(config)#", timeout, obj)
cmd = [{'command': command, 'prompt': None, 'answer': None}]
retVal = retVal + str(run_cnos_commands(module, cmd))
return retVal
# EOM


@@ -1356,7 +1356,7 @@ g8272_cnos = {'vlan_id': 'INTEGER_VALUE:1-3999',
'portchannel_ipv6_address': 'IPV6Address:',
'portchannel_ipv6_options': 'TEXT_OPTIONS:address,dhcp,\
link-local,nd,neighbor',
'interface_speed': 'TEXT_OPTIONS:1000,10000,40000,auto',
'interface_speed': 'TEXT_OPTIONS:1000,10000,40000',
'stormcontrol_options': 'TEXT_OPTIONS:broadcast,multicast,\
unicast',
'stormcontrol_level': 'FLOAT:',


@@ -162,9 +162,10 @@ class Cli:
return response
def get_diff(self, candidate=None, running=None, match='line', diff_ignore_lines=None, path=None, replace='line'):
def get_diff(self, candidate=None, running=None, diff_match='line', diff_ignore_lines=None, path=None, diff_replace='line'):
conn = self._get_connection()
return conn.get_diff(candidate=candidate, running=running, match=match, diff_ignore_lines=diff_ignore_lines, path=path, replace=replace)
return conn.get_diff(candidate=candidate, running=running, diff_match=diff_match, diff_ignore_lines=diff_ignore_lines, path=path,
diff_replace=diff_replace)
class Eapi:
@@ -361,17 +362,17 @@ class Eapi:
return result
# get_diff added here to support connection=local and transport=eapi scenario
def get_diff(self, candidate, running=None, match='line', diff_ignore_lines=None, path=None, replace='line'):
def get_diff(self, candidate, running=None, diff_match='line', diff_ignore_lines=None, path=None, diff_replace='line'):
diff = {}
# prepare candidate configuration
candidate_obj = NetworkConfig(indent=3)
candidate_obj.load(candidate)
if running and match != 'none' and replace != 'config':
if running and diff_match != 'none' and diff_replace != 'config':
# running configuration
running_obj = NetworkConfig(indent=3, contents=running, ignore_lines=diff_ignore_lines)
configdiffobjs = candidate_obj.difference(running_obj, path=path, match=match, replace=replace)
configdiffobjs = candidate_obj.difference(running_obj, path=path, match=diff_match, replace=diff_replace)
else:
configdiffobjs = candidate_obj.items
@@ -424,6 +425,6 @@ def load_config(module, config, commit=False, replace=False):
return conn.load_config(config, commit, replace)
def get_diff(self, candidate=None, running=None, match='line', diff_ignore_lines=None, path=None, replace='line'):
def get_diff(self, candidate=None, running=None, diff_match='line', diff_ignore_lines=None, path=None, diff_replace='line'):
conn = self.get_connection()
return conn.get_diff(candidate=candidate, running=running, match=match, diff_ignore_lines=diff_ignore_lines, path=path, replace=replace)
return conn.get_diff(candidate=candidate, running=running, diff_match=diff_match, diff_ignore_lines=diff_ignore_lines, path=path, diff_replace=diff_replace)


@@ -36,6 +36,7 @@ from ansible.module_utils._text import to_text
from ansible.module_utils.basic import env_fallback, return_values
from ansible.module_utils.network.common.utils import to_list, ComplexList
from ansible.module_utils.connection import Connection, ConnectionError
from ansible.module_utils.network.common.config import NetworkConfig, dumps
from ansible.module_utils.six import iteritems, string_types
from ansible.module_utils.urls import fetch_url
@@ -138,7 +139,7 @@ class Cli:
return self._device_configs[cmd]
except KeyError:
connection = self._get_connection()
out = connection.get_config(flags=flags)
out = connection.get_config(filter=flags)
cfg = to_text(out, errors='surrogate_then_replace').strip()
self._device_configs[cmd] = cfg
return cfg
@@ -153,37 +154,42 @@ class Cli:
except ConnectionError as exc:
self._module.fail_json(msg=to_text(exc))
def load_config(self, config, return_error=False, opts=None):
def load_config(self, config, return_error=False, opts=None, replace=None):
"""Sends configuration commands to the remote device
"""
if opts is None:
opts = {}
connection = self._get_connection()
msgs = []
responses = []
try:
responses = connection.edit_config(config)
msg = json.loads(responses)
resp = connection.edit_config(config, replace=replace)
if isinstance(resp, collections.Mapping):
resp = resp['response']
except ConnectionError as e:
code = getattr(e, 'code', 1)
message = getattr(e, 'err', e)
err = to_text(message, errors='surrogate_then_replace')
if opts.get('ignore_timeout') and code:
msgs.append(code)
return msgs
responses.append(code)
return responses
elif code and 'no graceful-restart' in err:
if 'ISSU/HA will be affected if Graceful Restart is disabled' in err:
msg = ['']
msgs.extend(msg)
return msgs
responses.extend(msg)
return responses
else:
self._module.fail_json(msg=err)
elif code:
self._module.fail_json(msg=err)
msgs.extend(msg)
return msgs
responses.extend(resp)
return responses
def get_diff(self, candidate=None, running=None, diff_match='line', diff_ignore_lines=None, path=None, diff_replace='line'):
conn = self._get_connection()
return conn.get_diff(candidate=candidate, running=running, diff_match=diff_match, diff_ignore_lines=diff_ignore_lines, path=path,
diff_replace=diff_replace)
def get_capabilities(self):
        """Returns platform info of the remote device
@@ -371,10 +377,14 @@ class Nxapi:
return responses
def load_config(self, commands, return_error=False, opts=None):
def load_config(self, commands, return_error=False, opts=None, replace=None):
"""Sends the ordered set of commands to the device
"""
if replace:
commands = 'config replace {0}'.format(replace)
commands = to_list(commands)
msg = self.send_request(commands, output='config', check_status=True,
return_error=return_error, opts=opts)
if return_error:
@@ -382,6 +392,24 @@ class Nxapi:
else:
return []
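The `replace` argument above short-circuits the normal command flow: when a replace file is given, the ordered command set is swapped for a single `config replace` invocation. A minimal sketch of that argument handling in plain Python (`prepare_commands` and the inlined `to_list` are hypothetical stand-ins, not part of the module):

```python
def to_list(val):
    # Mirrors ansible.module_utils.network.common.utils.to_list for this sketch
    if isinstance(val, (list, tuple, set)):
        return list(val)
    return [val] if val is not None else []

def prepare_commands(commands, replace=None):
    """If a replace file is given, tell the device to replace its running
    config from that file instead of applying the individual commands."""
    if replace:
        commands = 'config replace {0}'.format(replace)
    return to_list(commands)
```

With a replace file the device-side copy does the work, so only one command is sent regardless of how many lines the candidate config contains.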
def get_diff(self, candidate=None, running=None, diff_match='line', diff_ignore_lines=None, path=None, diff_replace='line'):
diff = {}
# prepare candidate configuration
candidate_obj = NetworkConfig(indent=2)
candidate_obj.load(candidate)
if running and diff_match != 'none' and diff_replace != 'config':
# running configuration
running_obj = NetworkConfig(indent=2, contents=running, ignore_lines=diff_ignore_lines)
configdiffobjs = candidate_obj.difference(running_obj, path=path, match=diff_match, replace=diff_replace)
else:
configdiffobjs = candidate_obj.items
diff['config_diff'] = dumps(configdiffobjs, 'commands') if configdiffobjs else ''
return diff
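Stripped of the `NetworkConfig` machinery, the control flow of `get_diff` above can be sketched with plain line sets (`simple_get_diff` is a hypothetical illustration, not the module's implementation):

```python
def simple_get_diff(candidate, running=None, diff_match='line', diff_replace='line'):
    """Sketch of Nxapi.get_diff: compare a candidate config against running.

    Mirrors the branch above: a line-level difference is computed only when a
    running config is supplied and neither diff_match='none' nor a full
    diff_replace='config' was requested; otherwise the whole candidate is
    treated as the diff.
    """
    candidate_lines = [line for line in candidate.splitlines() if line.strip()]
    if running and diff_match != 'none' and diff_replace != 'config':
        running_lines = set(running.splitlines())
        configdiff = [line for line in candidate_lines if line not in running_lines]
    else:
        configdiff = candidate_lines
    return {'config_diff': '\n'.join(configdiff)}
```

The real implementation additionally honors `path` (section scoping) and `diff_ignore_lines`, which this sketch omits.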
def get_device_info(self):
device_info = {}
@@ -460,9 +488,9 @@ def run_commands(module, commands, check_rc=True):
return conn.run_commands(to_command(module, commands), check_rc)
def load_config(module, config, return_error=False, opts=None):
def load_config(module, config, return_error=False, opts=None, replace=None):
conn = get_connection(module)
return conn.load_config(config, return_error, opts)
return conn.load_config(config, return_error, opts, replace=replace)
def get_capabilities(module):
@@ -470,6 +498,11 @@ def get_capabilities(module):
return conn.get_capabilities()
def get_diff(self, candidate=None, running=None, diff_match='line', diff_ignore_lines=None, path=None, diff_replace='line'):
conn = self.get_connection()
return conn.get_diff(candidate=candidate, running=running, diff_match=diff_match, diff_ignore_lines=diff_ignore_lines, path=path, diff_replace=diff_replace)
def normalize_interface(name):
"""Return the normalized interface name
"""

View file

@@ -0,0 +1,920 @@
# Copyright (c) 2017-2018 Dell EMC Inc.
# GNU General Public License v3.0+ (see LICENSE or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import json
import re
from ansible.module_utils.urls import open_url
from ansible.module_utils.six.moves.urllib.error import URLError, HTTPError
HEADERS = {'content-type': 'application/json'}
class RedfishUtils(object):
def __init__(self, creds, root_uri):
self.root_uri = root_uri
self.creds = creds
self._init_session()
return
    # The following functions send GET/POST/PATCH/DELETE requests
def get_request(self, uri):
try:
resp = open_url(uri, method="GET",
url_username=self.creds['user'],
url_password=self.creds['pswd'],
force_basic_auth=True, validate_certs=False,
timeout=10, use_proxy=False)
data = json.loads(resp.read())
except HTTPError as e:
return {'ret': False, 'msg': "HTTP Error: %s" % e.code}
except URLError as e:
return {'ret': False, 'msg': "URL Error: %s" % e.reason}
# Almost all errors should be caught above, but just in case
except:
return {'ret': False, 'msg': "Unknown error"}
return {'ret': True, 'data': data}
def post_request(self, uri, pyld, hdrs):
try:
resp = open_url(uri, data=json.dumps(pyld),
headers=hdrs, method="POST",
url_username=self.creds['user'],
url_password=self.creds['pswd'],
force_basic_auth=True, validate_certs=False,
use_proxy=False)
except HTTPError as e:
return {'ret': False, 'msg': "HTTP Error: %s" % e.code}
except URLError as e:
return {'ret': False, 'msg': "URL Error: %s" % e.reason}
# Almost all errors should be caught above, but just in case
except:
return {'ret': False, 'msg': "Unknown error"}
return {'ret': True, 'resp': resp}
def patch_request(self, uri, pyld, hdrs):
try:
resp = open_url(uri, data=json.dumps(pyld),
headers=hdrs, method="PATCH",
url_username=self.creds['user'],
url_password=self.creds['pswd'],
force_basic_auth=True, validate_certs=False,
use_proxy=False)
except HTTPError as e:
return {'ret': False, 'msg': "HTTP Error: %s" % e.code}
except URLError as e:
return {'ret': False, 'msg': "URL Error: %s" % e.reason}
# Almost all errors should be caught above, but just in case
except:
return {'ret': False, 'msg': "Unknown error"}
return {'ret': True, 'resp': resp}
def delete_request(self, uri, pyld, hdrs):
try:
resp = open_url(uri, data=json.dumps(pyld),
headers=hdrs, method="DELETE",
url_username=self.creds['user'],
url_password=self.creds['pswd'],
force_basic_auth=True, validate_certs=False,
use_proxy=False)
except HTTPError as e:
return {'ret': False, 'msg': "HTTP Error: %s" % e.code}
except URLError as e:
return {'ret': False, 'msg': "URL Error: %s" % e.reason}
# Almost all errors should be caught above, but just in case
except:
return {'ret': False, 'msg': "Unknown error"}
return {'ret': True, 'resp': resp}
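Every helper above returns a `{'ret': bool, ...}` envelope, and the rest of the class short-circuits with `if response['ret'] is False: return response` after each call. That calling pattern, sketched with stub steps (`fetch_chain` is a hypothetical illustration of the idiom, not part of the module):

```python
def fetch_chain(steps):
    """Run callables in order; stop at the first {'ret': False} envelope,
    mirroring the `if response['ret'] is False: return response` idiom
    used throughout RedfishUtils."""
    data = None
    for step in steps:
        response = step(data)
        if response['ret'] is False:
            return response  # propagate the failure envelope unchanged
        data = response.get('data')
    return {'ret': True, 'data': data}
```

Because failures are returned rather than raised, the module layer can pass the `msg` straight to `fail_json` without wrapping every request in try/except.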
def _init_session(self):
pass
def _find_accountservice_resource(self, uri):
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
return response
data = response['data']
if 'AccountService' not in data:
return {'ret': False, 'msg': "AccountService resource not found"}
else:
account_service = data["AccountService"]["@odata.id"]
response = self.get_request(self.root_uri + account_service)
if response['ret'] is False:
return response
data = response['data']
accounts = data['Accounts']['@odata.id']
if accounts[-1:] == '/':
accounts = accounts[:-1]
self.accounts_uri = accounts
return {'ret': True}
def _find_systems_resource(self, uri):
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
return response
data = response['data']
if 'Systems' not in data:
return {'ret': False, 'msg': "Systems resource not found"}
else:
systems = data["Systems"]["@odata.id"]
response = self.get_request(self.root_uri + systems)
if response['ret'] is False:
return response
data = response['data']
for member in data[u'Members']:
systems_service = member[u'@odata.id']
self.systems_uri = systems_service
return {'ret': True}
def _find_updateservice_resource(self, uri):
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
return response
data = response['data']
if 'UpdateService' not in data:
return {'ret': False, 'msg': "UpdateService resource not found"}
else:
update = data["UpdateService"]["@odata.id"]
self.update_uri = update
response = self.get_request(self.root_uri + update)
if response['ret'] is False:
return response
data = response['data']
firmware_inventory = data['FirmwareInventory'][u'@odata.id']
self.firmware_uri = firmware_inventory
return {'ret': True}
def _find_chassis_resource(self, uri):
chassis_service = []
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
return response
data = response['data']
if 'Chassis' not in data:
return {'ret': False, 'msg': "Chassis resource not found"}
else:
chassis = data["Chassis"]["@odata.id"]
response = self.get_request(self.root_uri + chassis)
if response['ret'] is False:
return response
data = response['data']
for member in data[u'Members']:
chassis_service.append(member[u'@odata.id'])
self.chassis_uri_list = chassis_service
return {'ret': True}
def _find_managers_resource(self, uri):
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
return response
data = response['data']
if 'Managers' not in data:
return {'ret': False, 'msg': "Manager resource not found"}
else:
manager = data["Managers"]["@odata.id"]
response = self.get_request(self.root_uri + manager)
if response['ret'] is False:
return response
data = response['data']
for member in data[u'Members']:
manager_service = member[u'@odata.id']
self.manager_uri = manager_service
return {'ret': True}
def get_logs(self):
log_svcs_uri_list = []
list_of_logs = []
# Find LogService
response = self.get_request(self.root_uri + self.manager_uri)
if response['ret'] is False:
return response
data = response['data']
if 'LogServices' not in data:
return {'ret': False, 'msg': "LogServices resource not found"}
# Find all entries in LogServices
logs_uri = data["LogServices"]["@odata.id"]
response = self.get_request(self.root_uri + logs_uri)
if response['ret'] is False:
return response
data = response['data']
for log_svcs_entry in data[u'Members']:
response = self.get_request(self.root_uri + log_svcs_entry[u'@odata.id'])
if response['ret'] is False:
return response
_data = response['data']
log_svcs_uri_list.append(_data['Entries'][u'@odata.id'])
# For each entry in LogServices, get log name and all log entries
for log_svcs_uri in log_svcs_uri_list:
logs = {}
list_of_log_entries = []
response = self.get_request(self.root_uri + log_svcs_uri)
if response['ret'] is False:
return response
data = response['data']
logs['Description'] = data['Description']
# Get all log entries for each type of log found
for logEntry in data[u'Members']:
# I only extract some fields - Are these entry names standard?
list_of_log_entries.append(dict(
Name=logEntry[u'Name'],
Created=logEntry[u'Created'],
Message=logEntry[u'Message'],
Severity=logEntry[u'Severity']))
log_name = log_svcs_uri.split('/')[-1]
logs[log_name] = list_of_log_entries
list_of_logs.append(logs)
# list_of_logs[logs{list_of_log_entries[entry{}]}]
return {'ret': True, 'entries': list_of_logs}
def clear_logs(self):
# Find LogService
response = self.get_request(self.root_uri + self.manager_uri)
if response['ret'] is False:
return response
data = response['data']
if 'LogServices' not in data:
return {'ret': False, 'msg': "LogServices resource not found"}
# Find all entries in LogServices
logs_uri = data["LogServices"]["@odata.id"]
response = self.get_request(self.root_uri + logs_uri)
if response['ret'] is False:
return response
data = response['data']
for log_svcs_entry in data[u'Members']:
response = self.get_request(self.root_uri + log_svcs_entry["@odata.id"])
if response['ret'] is False:
return response
_data = response['data']
# Check to make sure option is available, otherwise error is ugly
if "Actions" in _data:
if "#LogService.ClearLog" in _data[u"Actions"]:
self.post_request(self.root_uri + _data[u"Actions"]["#LogService.ClearLog"]["target"], {}, HEADERS)
if response['ret'] is False:
return response
return {'ret': True}
def get_storage_controller_inventory(self):
result = {}
controllers_details = []
controller_list = []
# Find Storage service
response = self.get_request(self.root_uri + self.systems_uri)
if response['ret'] is False:
return response
data = response['data']
if 'SimpleStorage' not in data:
return {'ret': False, 'msg': "SimpleStorage resource not found"}
# Get a list of all storage controllers and build respective URIs
storage_uri = data["SimpleStorage"]["@odata.id"]
response = self.get_request(self.root_uri + storage_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for controller in data[u'Members']:
controller_list.append(controller[u'@odata.id'])
for c in controller_list:
uri = self.root_uri + c
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
controllers_details.append(dict(
Name=data[u'Name'],
Health=data[u'Status'][u'Health']))
result["entries"] = controllers_details
return result
def get_disk_inventory(self):
result = {}
disks_details = []
controller_list = []
# Find Storage service
response = self.get_request(self.root_uri + self.systems_uri)
if response['ret'] is False:
return response
data = response['data']
if 'SimpleStorage' not in data:
return {'ret': False, 'msg': "SimpleStorage resource not found"}
# Get a list of all storage controllers and build respective URIs
storage_uri = data["SimpleStorage"]["@odata.id"]
response = self.get_request(self.root_uri + storage_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for controller in data[u'Members']:
controller_list.append(controller[u'@odata.id'])
for c in controller_list:
uri = self.root_uri + c
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
for device in data[u'Devices']:
disks_details.append(dict(
Controller=data[u'Name'],
Name=device[u'Name'],
Manufacturer=device[u'Manufacturer'],
Model=device[u'Model'],
State=device[u'Status'][u'State'],
Health=device[u'Status'][u'Health']))
result["entries"] = disks_details
return result
def restart_manager_gracefully(self):
result = {}
key = "Actions"
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + self.manager_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
action_uri = data[key]["#Manager.Reset"]["target"]
payload = {'ResetType': 'GracefulRestart'}
response = self.post_request(self.root_uri + action_uri, payload, HEADERS)
if response['ret'] is False:
return response
return {'ret': True}
def manage_system_power(self, command):
result = {}
key = "Actions"
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + self.systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
action_uri = data[key]["#ComputerSystem.Reset"]["target"]
# Define payload accordingly
if command == "PowerOn":
payload = {'ResetType': 'On'}
elif command == "PowerForceOff":
payload = {'ResetType': 'ForceOff'}
elif command == "PowerGracefulRestart":
payload = {'ResetType': 'GracefulRestart'}
elif command == "PowerGracefulShutdown":
payload = {'ResetType': 'GracefulShutdown'}
else:
return {'ret': False, 'msg': 'Invalid Command'}
response = self.post_request(self.root_uri + action_uri, payload, HEADERS)
if response['ret'] is False:
return response
result['ret'] = True
return result
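The if/elif ladder above maps module-level commands onto Redfish `ResetType` values. The same mapping expressed as a lookup table (`power_payload` and the table are a hypothetical refactoring sketch, not the module's code):

```python
# Module command -> Redfish ResetType, as in the ladder above
RESET_TYPES = {
    'PowerOn': 'On',
    'PowerForceOff': 'ForceOff',
    'PowerGracefulRestart': 'GracefulRestart',
    'PowerGracefulShutdown': 'GracefulShutdown',
}

def power_payload(command):
    """Return the Redfish reset payload for a command, or an error envelope
    matching the module's {'ret': False, 'msg': ...} convention."""
    if command not in RESET_TYPES:
        return {'ret': False, 'msg': 'Invalid Command'}
    return {'ResetType': RESET_TYPES[command]}
```

A table keeps the command list and its validation in one place, which matters once more reset types (e.g. NMI) are added.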
def list_users(self):
result = {}
# listing all users has always been slower than other operations, why?
allusers = []
allusers_details = []
response = self.get_request(self.root_uri + self.accounts_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for users in data[u'Members']:
allusers.append(users[u'@odata.id']) # allusers[] are URIs
# for each user, get details
for uri in allusers:
response = self.get_request(self.root_uri + uri)
if response['ret'] is False:
return response
data = response['data']
if not data[u'UserName'] == "": # only care if name is not empty
allusers_details.append(dict(
Id=data[u'Id'],
Name=data[u'Name'],
UserName=data[u'UserName'],
RoleId=data[u'RoleId']))
result["entries"] = allusers_details
return result
def add_user(self, user):
uri = self.root_uri + self.accounts_uri + "/" + user['userid']
username = {'UserName': user['username']}
pswd = {'Password': user['userpswd']}
roleid = {'RoleId': user['userrole']}
enabled = {'Enabled': True}
for payload in username, pswd, roleid, enabled:
response = self.patch_request(uri, payload, HEADERS)
if response['ret'] is False:
return response
return {'ret': True}
def enable_user(self, user):
uri = self.root_uri + self.accounts_uri + "/" + user['userid']
payload = {'Enabled': True}
response = self.patch_request(uri, payload, HEADERS)
if response['ret'] is False:
return response
return {'ret': True}
def delete_user(self, user):
uri = self.root_uri + self.accounts_uri + "/" + user['userid']
payload = {'UserName': ""}
response = self.patch_request(uri, payload, HEADERS)
if response['ret'] is False:
return response
return {'ret': True}
def disable_user(self, user):
uri = self.root_uri + self.accounts_uri + "/" + user['userid']
payload = {'Enabled': False}
response = self.patch_request(uri, payload, HEADERS)
if response['ret'] is False:
return response
return {'ret': True}
def update_user_role(self, user):
uri = self.root_uri + self.accounts_uri + "/" + user['userid']
payload = {'RoleId': user['userrole']}
response = self.patch_request(uri, payload, HEADERS)
if response['ret'] is False:
return response
return {'ret': True}
def update_user_password(self, user):
uri = self.root_uri + self.accounts_uri + "/" + user['userid']
payload = {'Password': user['userpswd']}
response = self.patch_request(uri, payload, HEADERS)
if response['ret'] is False:
return response
return {'ret': True}
def get_firmware_inventory(self):
result = {}
firmware = {}
response = self.get_request(self.root_uri + self.firmware_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for device in data[u'Members']:
d = device[u'@odata.id']
d = d.replace(self.firmware_uri, "") # leave just device name
if "Installed" in d:
uri = self.root_uri + self.firmware_uri + d
# Get details for each device that is relevant
response = self.get_request(uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
firmware[data[u'Name']] = data[u'Version']
result["entries"] = firmware
return result
def get_manager_attributes(self):
result = {}
manager_attributes = {}
attributes_id = "Attributes"
response = self.get_request(self.root_uri + self.manager_uri + "/" + attributes_id)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for attribute in data[u'Attributes'].items():
manager_attributes[attribute[0]] = attribute[1]
result["entries"] = manager_attributes
return result
def get_bios_attributes(self):
result = {}
bios_attributes = {}
key = "Bios"
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + self.systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
bios_uri = data[key]["@odata.id"]
response = self.get_request(self.root_uri + bios_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for attribute in data[u'Attributes'].items():
bios_attributes[attribute[0]] = attribute[1]
result["entries"] = bios_attributes
return result
def get_bios_boot_order(self):
result = {}
boot_device_list = []
boot_device_details = []
key = "Bios"
bootsources = "BootSources"
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + self.systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
bios_uri = data[key]["@odata.id"]
# Get boot mode first as it will determine what attribute to read
response = self.get_request(self.root_uri + bios_uri)
if response['ret'] is False:
return response
data = response['data']
boot_mode = data[u'Attributes']["BootMode"]
if boot_mode == "Uefi":
boot_seq = "UefiBootSeq"
else:
boot_seq = "BootSeq"
response = self.get_request(self.root_uri + self.systems_uri + "/" + bootsources)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
boot_device_list = data[u'Attributes'][boot_seq]
for b in boot_device_list:
boot_device = {}
boot_device["Index"] = b[u'Index']
boot_device["Name"] = b[u'Name']
boot_device["Enabled"] = b[u'Enabled']
boot_device_details.append(boot_device)
result["entries"] = boot_device_details
return result
def set_bios_default_settings(self):
result = {}
key = "Bios"
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + self.systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
bios_uri = data[key]["@odata.id"]
# Extract proper URI
response = self.get_request(self.root_uri + bios_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
reset_bios_settings_uri = data["Actions"]["#Bios.ResetBios"]["target"]
response = self.post_request(self.root_uri + reset_bios_settings_uri, {}, HEADERS)
if response['ret'] is False:
return response
return {'ret': True}
def set_one_time_boot_device(self, bootdevice):
result = {}
key = "Bios"
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + self.systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
bios_uri = data[key]["@odata.id"]
response = self.get_request(self.root_uri + bios_uri)
if response['ret'] is False:
return response
data = response['data']
boot_mode = data[u'Attributes']["BootMode"]
if boot_mode == "Uefi":
payload = {"Boot": {"BootSourceOverrideTarget": "UefiTarget", "UefiTargetBootSourceOverride": bootdevice}}
else:
payload = {"Boot": {"BootSourceOverrideTarget": bootdevice}}
response = self.patch_request(self.root_uri + self.systems_uri, payload, HEADERS)
if response['ret'] is False:
return response
return {'ret': True}
def set_manager_attributes(self, attr):
attributes = "Attributes"
# Example: manager_attr = {\"name\":\"value\"}
# Check if value is a number. If so, convert to int.
if attr['mgr_attr_value'].isdigit():
manager_attr = "{\"%s\": %i}" % (attr['mgr_attr_name'], int(attr['mgr_attr_value']))
else:
manager_attr = "{\"%s\": \"%s\"}" % (attr['mgr_attr_name'], attr['mgr_attr_value'])
payload = {"Attributes": json.loads(manager_attr)}
response = self.patch_request(self.root_uri + self.manager_uri + "/" + attributes, payload, HEADERS)
if response['ret'] is False:
return response
return {'ret': True}
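`set_manager_attributes` builds its PATCH body by formatting a JSON string and parsing it back; the `isdigit()` check means all-digit attribute values are sent as integers rather than strings. The same conversion sketched with a direct dict (`build_attr_payload` is hypothetical, and the attribute names in the test are illustrative):

```python
def build_attr_payload(name, value):
    """Coerce an all-digit attribute value to int, as the isdigit() branch
    above does, and wrap it in the Redfish 'Attributes' envelope."""
    coerced = int(value) if value.isdigit() else value
    return {'Attributes': {name: coerced}}
```

Building the dict directly avoids the escaping pitfalls of string-formatting JSON (e.g. a value containing a quote character).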
def set_bios_attributes(self, attr):
result = {}
key = "Bios"
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + self.systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
bios_uri = data[key]["@odata.id"]
# Extract proper URI
response = self.get_request(self.root_uri + bios_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
set_bios_attr_uri = data["@Redfish.Settings"]["SettingsObject"]["@odata.id"]
# Example: bios_attr = {\"name\":\"value\"}
bios_attr = "{\"" + attr['bios_attr_name'] + "\":\"" + attr['bios_attr_value'] + "\"}"
payload = {"Attributes": json.loads(bios_attr)}
response = self.patch_request(self.root_uri + set_bios_attr_uri, payload, HEADERS)
if response['ret'] is False:
return response
return {'ret': True}
def create_bios_config_job(self):
result = {}
key = "Bios"
jobs = "Jobs"
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + self.systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
bios_uri = data[key]["@odata.id"]
# Extract proper URI
response = self.get_request(self.root_uri + bios_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
set_bios_attr_uri = data["@Redfish.Settings"]["SettingsObject"]["@odata.id"]
payload = {"TargetSettingsURI": set_bios_attr_uri, "RebootJobType": "PowerCycle"}
response = self.post_request(self.root_uri + self.manager_uri + "/" + jobs, payload, HEADERS)
if response['ret'] is False:
return response
response_output = response['resp'].__dict__
job_id = response_output["headers"]["Location"]
job_id = re.search("JID_.+", job_id).group()
return {'ret': True, 'msg': 'Config job created', 'job_id': job_id}
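The job id above is recovered from the `Location` header of the POST response with a regex. In isolation, with a made-up header value (`extract_job_id` is a hypothetical helper):

```python
import re

def extract_job_id(location_header):
    """Pull the JID_... token out of a Location header, as above.
    Returns None when no job id is present, instead of raising on a
    failed re.search()."""
    match = re.search(r"JID_.+", location_header)
    return match.group() if match else None
```

Guarding the `None` case avoids an `AttributeError` if the service returns a Location header without a job id.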
def get_fan_inventory(self):
result = {}
fan_details = []
key = "Thermal"
# Go through list
for chassis_uri in self.chassis_uri_list:
response = self.get_request(self.root_uri + chassis_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
if key in data:
                # Found an entry for "Thermal" information, i.e. fans
thermal_uri = data[key]["@odata.id"]
response = self.get_request(self.root_uri + thermal_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for device in data[u'Fans']:
fan_details.append(dict(
# There is more information available but this is most important
Name=device[u'FanName'],
RPMs=device[u'Reading'],
State=device[u'Status'][u'State'],
Health=device[u'Status'][u'Health']))
result["entries"] = fan_details
return result
def get_cpu_inventory(self):
result = {}
cpu_details = []
cpu_list = []
key = "Processors"
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + self.systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
processors_uri = data[key]["@odata.id"]
# Get a list of all CPUs and build respective URIs
response = self.get_request(self.root_uri + processors_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for cpu in data[u'Members']:
cpu_list.append(cpu[u'@odata.id'])
for c in cpu_list:
uri = self.root_uri + c
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
cpu_details.append(dict(
Name=data[u'Id'],
Manufacturer=data[u'Manufacturer'],
Model=data[u'Model'],
MaxSpeedMHz=data[u'MaxSpeedMHz'],
TotalCores=data[u'TotalCores'],
TotalThreads=data[u'TotalThreads'],
State=data[u'Status'][u'State'],
Health=data[u'Status'][u'Health']))
result["entries"] = cpu_details
return result
def get_nic_inventory(self):
result = {}
nic_details = []
nic_list = []
key = "EthernetInterfaces"
# Search for 'key' entry and extract URI from it
response = self.get_request(self.root_uri + self.systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
ethernetinterfaces_uri = data[key]["@odata.id"]
# Get a list of all network controllers and build respective URIs
response = self.get_request(self.root_uri + ethernetinterfaces_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for nic in data[u'Members']:
nic_list.append(nic[u'@odata.id'])
for n in nic_list:
nic = {}
uri = self.root_uri + n
response = self.get_request(uri)
if response['ret'] is False:
return response
data = response['data']
nic['Name'] = data[u'Name']
nic['FQDN'] = data[u'FQDN']
for d in data[u'IPv4Addresses']:
nic['IPv4'] = d[u'Address']
if 'GateWay' in d: # not always available
nic['Gateway'] = d[u'GateWay']
nic['SubnetMask'] = d[u'SubnetMask']
for d in data[u'IPv6Addresses']:
nic['IPv6'] = d[u'Address']
for d in data[u'NameServers']:
nic['NameServers'] = d
nic['MACAddress'] = data[u'PermanentMACAddress']
nic['SpeedMbps'] = data[u'SpeedMbps']
nic['MTU'] = data[u'MTUSize']
nic['AutoNeg'] = data[u'AutoNeg']
if 'Status' in data: # not available when power is off
nic['Health'] = data[u'Status'][u'Health']
nic['State'] = data[u'Status'][u'State']
nic_details.append(nic)
result["entries"] = nic_details
return result
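`get_nic_inventory` copies optional keys such as `GateWay` only when the controller actually reports them. That pattern in isolation (`summarize_ipv4` is a hypothetical sketch over plain dicts):

```python
def summarize_ipv4(address_entries):
    """Collect IPv4 details, copying 'GateWay' only when present —
    it is not reported by every controller, as the comment above notes."""
    nic = {}
    for entry in address_entries:
        nic['IPv4'] = entry['Address']
        if 'GateWay' in entry:  # optional key
            nic['Gateway'] = entry['GateWay']
        nic['SubnetMask'] = entry['SubnetMask']
    return nic
```

Note that, as in the loop above, a NIC with several IPv4 addresses ends up reporting only the last one, since each iteration overwrites the same keys.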
def get_psu_inventory(self):
result = {}
psu_details = []
psu_list = []
# Get a list of all PSUs and build respective URIs
response = self.get_request(self.root_uri + self.systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
for psu in data[u'Links'][u'PoweredBy']:
psu_list.append(psu[u'@odata.id'])
for p in psu_list:
uri = self.root_uri + p
response = self.get_request(uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
psu = {}
psu['Name'] = data[u'Name']
psu['Model'] = data[u'Model']
psu['SerialNumber'] = data[u'SerialNumber']
psu['PartNumber'] = data[u'PartNumber']
if 'Manufacturer' in data: # not available in all generations
psu['Manufacturer'] = data[u'Manufacturer']
psu['FirmwareVersion'] = data[u'FirmwareVersion']
psu['PowerCapacityWatts'] = data[u'PowerCapacityWatts']
psu['PowerSupplyType'] = data[u'PowerSupplyType']
psu['Status'] = data[u'Status'][u'State']
psu['Health'] = data[u'Status'][u'Health']
psu_details.append(psu)
result["entries"] = psu_details
return result
def get_system_inventory(self):
result = {}
inventory = {}
response = self.get_request(self.root_uri + self.systems_uri)
if response['ret'] is False:
return response
result['ret'] = True
data = response['data']
# There could be more information to extract
inventory['Status'] = data[u'Status'][u'Health']
inventory['HostName'] = data[u'HostName']
inventory['PowerState'] = data[u'PowerState']
inventory['Model'] = data[u'Model']
inventory['Manufacturer'] = data[u'Manufacturer']
inventory['PartNumber'] = data[u'PartNumber']
inventory['SystemType'] = data[u'SystemType']
inventory['AssetTag'] = data[u'AssetTag']
inventory['ServiceTag'] = data[u'SKU']
inventory['SerialNumber'] = data[u'SerialNumber']
inventory['BiosVersion'] = data[u'BiosVersion']
inventory['MemoryTotal'] = data[u'MemorySummary'][u'TotalSystemMemoryGiB']
inventory['MemoryHealth'] = data[u'MemorySummary'][u'Status'][u'Health']
inventory['CpuCount'] = data[u'ProcessorSummary'][u'Count']
inventory['CpuModel'] = data[u'ProcessorSummary'][u'Model']
inventory['CpuHealth'] = data[u'ProcessorSummary'][u'Status'][u'Health']
datadict = data[u'Boot']
if 'BootSourceOverrideMode' in datadict.keys():
inventory['BootSourceOverrideMode'] = data[u'Boot'][u'BootSourceOverrideMode']
else:
# Not available in earlier server generations
inventory['BootSourceOverrideMode'] = "Not available"
if 'TrustedModules' in data:
for d in data[u'TrustedModules']:
if 'InterfaceType' in d.keys():
inventory['TPMInterfaceType'] = d[u'InterfaceType']
inventory['TPMStatus'] = d[u'Status'][u'State']
else:
# Not available in earlier server generations
inventory['TPMInterfaceType'] = "Not available"
inventory['TPMStatus'] = "Not available"
result["entries"] = inventory
return result

View file

@@ -90,7 +90,7 @@ def find_obj(content, vimtype, name, first=True):
# Select the first match
if first is True:
for obj in obj_list:
if obj.name == name:
if to_text(obj.name) == to_text(name):
return obj
# If no object found, return None
@@ -437,8 +437,10 @@ def list_snapshots(vm):
result['snapshots'] = list_snapshots_recursively(vm.snapshot.rootSnapshotList)
current_snapref = vm.snapshot.currentSnapshot
current_snap_obj = get_current_snap_obj(vm.snapshot.rootSnapshotList, current_snapref)
result['current_snapshot'] = deserialize_snapshot_obj(current_snap_obj[0])
if current_snap_obj:
result['current_snapshot'] = deserialize_snapshot_obj(current_snap_obj[0])
else:
result['current_snapshot'] = dict()
return result
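The fix above guards against `get_current_snap_obj` returning an empty list (a VM with snapshots but no current one), instead of indexing `[0]` unconditionally. The pattern in isolation (`current_snapshot_entry` is a hypothetical sketch):

```python
def current_snapshot_entry(matches, deserialize):
    """Return the deserialized current snapshot, or an empty dict when the
    search produced no match — avoiding an IndexError on matches[0]."""
    if matches:
        return deserialize(matches[0])
    return dict()
```

Returning an empty dict keeps the result schema stable for callers that always expect a `current_snapshot` key.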

View file

@@ -94,12 +94,13 @@ EXAMPLES = '''
resource_group: TestGroup
name: testserver
sku:
name: MYSQLB50
tier: Basic
capacity: 100
name: GP_Gen4_2
tier: GeneralPurpose
capacity: 2
location: eastus
storage_mb: 1024
enforce_ssl: True
version: 5.6
admin_username: cloudsa
admin_password: password
'''

View file

@@ -158,6 +158,12 @@ options:
type: bool
default: 'no'
version_added: 2.5
enable_accelerated_networking:
description:
        - Specifies whether the network interface should be created with the accelerated networking feature.
type: bool
version_added: 2.7
default: False
create_with_security_group:
description:
        - Specifies whether a default security group should be created with the NIC. Only applies when creating a new NIC.
@@ -257,6 +263,14 @@ EXAMPLES = '''
- name: backendaddrpool1
load_balancer: loadbalancer001
- name: Create a network interface in accelerated networking mode
azure_rm_networkinterface:
name: nic005
resource_group: Testing
virtual_network_name: vnet001
subnet_name: subnet001
enable_accelerated_networking: True
- name: Delete network interface
azure_rm_networkinterface:
resource_group: Testing
@@ -364,6 +378,7 @@ def nic_to_dict(nic):
enable_ip_forwarding=nic.enable_ip_forwarding,
provisioning_state=nic.provisioning_state,
etag=nic.etag,
enable_accelerated_networking=nic.enable_accelerated_networking,
)
@@ -386,6 +401,7 @@ class AzureRMNetworkInterface(AzureRMModuleBase):
resource_group=dict(type='str', required=True),
name=dict(type='str', required=True),
location=dict(type='str'),
enable_accelerated_networking=dict(type='bool', default=False),
create_with_security_group=dict(type='bool', default=True),
security_group=dict(type='raw', aliases=['security_group_name']),
state=dict(default='present', choices=['present', 'absent']),
@@ -409,6 +425,7 @@ class AzureRMNetworkInterface(AzureRMModuleBase):
self.name = None
self.location = None
self.create_with_security_group = None
self.enable_accelerated_networking = None
self.security_group = None
self.private_ip_address = None
self.private_ip_allocation_method = None
@@ -489,6 +506,12 @@ class AzureRMNetworkInterface(AzureRMModuleBase):
self.log("CHANGED: add or remove network interface {0} network security group".format(self.name))
changed = True
if self.enable_accelerated_networking != bool(results.get('enable_accelerated_networking')):
self.log("CHANGED: Accelerated Networking set to {0} (previously {1})".format(
self.enable_accelerated_networking,
results.get('enable_accelerated_networking')))
changed = True
if not changed:
nsg = self.get_security_group(self.security_group['resource_group'], self.security_group['name'])
if nsg and results.get('network_security_group') and results['network_security_group'].get('id') != nsg.id:
@@ -567,6 +590,7 @@ class AzureRMNetworkInterface(AzureRMModuleBase):
location=self.location,
tags=self.tags,
ip_configurations=nic_ip_configurations,
enable_accelerated_networking=self.enable_accelerated_networking,
network_security_group=nsg
)
self.results['state'] = self.create_or_update_nic(nic)
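The change detection added for `enable_accelerated_networking` compares the desired flag against `bool(results.get(...))`, so a NIC whose state dict predates the feature (no key present) is treated as disabled rather than raising. A sketch of that comparison, with a hypothetical helper name (`results` is shaped like the dict produced by `nic_to_dict()`):

```python
# Hypothetical helper mirroring the comparison added above: a missing key
# in the existing-NIC results dict is coerced to False via bool().
def needs_update(desired, results):
    return desired != bool(results.get('enable_accelerated_networking'))

print(needs_update(True, {}))                                         # True
print(needs_update(False, {'enable_accelerated_networking': False}))  # False
print(needs_update(True, {'enable_accelerated_networking': True}))    # False
```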

View file

@@ -94,9 +94,9 @@ EXAMPLES = '''
resource_group: TestGroup
name: testserver
sku:
name: PGSQLS100
tier: Basic
capacity: 100
name: GP_Gen4_2
tier: GeneralPurpose
capacity: 2
location: eastus
storage_mb: 1024
enforce_ssl: True

View file

@@ -104,8 +104,8 @@ EXAMPLES = '''
name: clh0002
type: Standard_RAGRS
tags:
- testing: testing
- delete: on-exit
testing: testing
delete: on-exit
'''

View file

@@ -25,7 +25,7 @@ deprecated:
description:
- This is the original Ansible module for managing the Docker container life cycle.
- NOTE - Additional and newer modules are available. For the latest on orchestrating containers with Ansible
visit our Getting Started with Docker Guide at U(https://github.com/ansible/ansible/blob/devel/docs/docsite/rst/guide_docker.rst).
visit our Getting Started with Docker Guide at U(https://github.com/ansible/ansible/blob/devel/docs/docsite/rst/scenario_guides/guide_docker.rst).
options:
count:
description:

View file

@@ -0,0 +1,131 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Ansible Project
# Copyright: (c) 2018, Abhijeet Kasurde <akasurde@redhat.com>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = r'''
---
module: vmware_category_facts
short_description: Gather facts about VMware tag categories
description:
- This module can be used to gather facts about VMware tag categories.
- The tag feature was introduced in vSphere 6, so this module is not supported on earlier versions of vSphere.
- All variables and VMware object names are case sensitive.
version_added: '2.7'
author:
- Abhijeet Kasurde (@Akasurde)
notes:
- Tested on vSphere 6.5
requirements:
- python >= 2.6
- PyVmomi
- vSphere Automation SDK
- vCloud Suite SDK
extends_documentation_fragment: vmware_rest_client.documentation
'''
EXAMPLES = r'''
- name: Gather facts about tag categories
vmware_category_facts:
hostname: "{{ vcenter_server }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_pass }}"
register: all_tag_category_facts
- name: Gather category id from given tag category
vmware_category_facts:
hostname: "{{ vcenter_server }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_pass }}"
register: tag_category_results
- set_fact:
category_id: "{{ item.category_id }}"
with_items: "{{ tag_category_results.tag_category_facts|json_query(query) }}"
vars:
query: "[?category_name==`Category0001`]"
- debug: var=category_id
'''
RETURN = r'''
tag_category_facts:
description: metadata of tag categories
returned: always
type: list
sample: [
{
"category_associable_types": [],
"category_cardinality": "MULTIPLE",
"category_description": "awesome description",
"category_id": "urn:vmomi:InventoryServiceCategory:e785088d-6981-4b1c-9fb8-1100c3e1f742:GLOBAL",
"category_name": "Category0001",
"category_used_by": []
},
{
"category_associable_types": [
"VirtualMachine"
],
"category_cardinality": "SINGLE",
"category_description": "another awesome description",
"category_id": "urn:vmomi:InventoryServiceCategory:ae5b7c6c-e622-4671-9b96-76e93adb70f2:GLOBAL",
"category_name": "template_tag",
"category_used_by": []
}
]
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.vmware_rest_client import VmwareRestClient
try:
from com.vmware.cis.tagging_client import Category
except ImportError:
pass
class VmwareCategoryFactsManager(VmwareRestClient):
def __init__(self, module):
super(VmwareCategoryFactsManager, self).__init__(module)
self.category_service = Category(self.connect)
def get_all_tag_categories(self):
"""Retrieve all tag category information."""
global_tag_categories = []
for category in self.category_service.list():
category_obj = self.category_service.get(category)
global_tag_categories.append(
dict(
category_description=category_obj.description,
category_used_by=category_obj.used_by,
category_cardinality=str(category_obj.cardinality),
category_associable_types=category_obj.associable_types,
category_id=category_obj.id,
category_name=category_obj.name,
)
)
self.module.exit_json(changed=False, tag_category_facts=global_tag_categories)
def main():
argument_spec = VmwareRestClient.vmware_client_argument_spec()
module = AnsibleModule(argument_spec=argument_spec, supports_check_mode=True)
vmware_category_facts = VmwareCategoryFactsManager(module)
vmware_category_facts.get_all_tag_categories()
if __name__ == '__main__':
main()
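The `json_query` filter in the EXAMPLES above selects a category id by name. A plain-Python equivalent of that selection, using the category names from the sample RETURN data (the ids here are shortened placeholders, not real vCenter URNs):

```python
# Sample data shaped like the RETURN documentation; ids are placeholders.
categories = [
    {'category_name': 'Category0001', 'category_id': 'id-0001'},
    {'category_name': 'template_tag', 'category_id': 'id-0002'},
]

# Equivalent of the json_query expression "[?category_name==`Category0001`]"
# followed by extracting category_id from each match.
matches = [c['category_id'] for c in categories
           if c['category_name'] == 'Category0001']
print(matches)  # ['id-0001']
```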

View file

@@ -144,6 +144,8 @@ options:
Please check VMware documentation for correct virtual machine hardware version.
Incorrect hardware version may lead to failure in deployment. If hardware version is already equal to the given
version then no action is taken. version_added: 2.6'
- ' - C(boot_firmware) (string): Choose which firmware should be used to boot the virtual machine.
Allowed values are "bios" and "efi". version_added: 2.7'
guest_id:
description:
@@ -387,6 +389,7 @@ EXAMPLES = r'''
hotremove_cpu: True
hotadd_memory: False
version: 12 # Hardware version of virtual machine
boot_firmware: "efi"
cdrom:
type: iso
iso_path: "[datastore1] livecd.iso"
@@ -707,11 +710,11 @@ class PyVmomiCache(object):
""" Wrapper around find_obj to set datacenter context """
result = find_obj(content, types, name)
if result and confine_to_datacenter:
if self.get_parent_datacenter(result).name != self.dc_name:
if to_text(self.get_parent_datacenter(result).name) != to_text(self.dc_name):
result = None
objects = self.get_all_objs(content, types, confine_to_datacenter=True)
for obj in objects:
if name is None or obj.name == name:
if name is None or to_text(obj.name) == to_text(name):
return obj
return result
@@ -946,6 +949,15 @@ class PyVmomiHelper(PyVmomi):
if vm_obj is None or self.configspec.memoryReservationLockedToMax != vm_obj.config.memoryReservationLockedToMax:
self.change_detected = True
if 'boot_firmware' in self.params['hardware']:
boot_firmware = self.params['hardware']['boot_firmware'].lower()
if boot_firmware not in ('bios', 'efi'):
self.module.fail_json(msg="hardware.boot_firmware value is invalid [%s]."
" Need one of ['bios', 'efi']." % boot_firmware)
self.configspec.firmware = boot_firmware
if vm_obj is None or self.configspec.firmware != vm_obj.config.firmware:
self.change_detected = True
def configure_cdrom(self, vm_obj):
# Configure the VM CD-ROM
if "cdrom" in self.params and self.params["cdrom"]:
@@ -2186,27 +2198,44 @@ class PyVmomiHelper(PyVmomi):
# Mark Template as VM
elif not self.params['is_template'] and self.current_vm_obj.config.template:
if self.params['resource_pool']:
resource_pool = self.select_resource_pool_by_name(self.params['resource_pool'])
resource_pool = self.get_resource_pool()
kwargs = dict(pool=resource_pool)
if resource_pool is None:
self.module.fail_json(msg='Unable to find resource pool "%(resource_pool)s"' % self.params)
if self.params.get('esxi_hostname', None):
host_system_obj = self.select_host()
kwargs.update(host=host_system_obj)
self.current_vm_obj.MarkAsVirtualMachine(pool=resource_pool)
try:
self.current_vm_obj.MarkAsVirtualMachine(**kwargs)
except vim.fault.InvalidState as invalid_state:
self.module.fail_json(msg="Virtual machine is not marked"
" as template : %s" % to_native(invalid_state.msg))
except vim.fault.InvalidDatastore as invalid_ds:
self.module.fail_json(msg="Converting template to virtual machine"
" operation cannot be performed on the"
" target datastores: %s" % to_native(invalid_ds.msg))
except vim.fault.CannotAccessVmComponent as cannot_access:
self.module.fail_json(msg="Failed to convert template to virtual machine"
" as the operation was unable to access a virtual machine"
" component: %s" % to_native(cannot_access.msg))
except vmodl.fault.InvalidArgument as invalid_argument:
self.module.fail_json(msg="Failed to convert template to virtual machine"
" due to : %s" % to_native(invalid_argument.msg))
except Exception as generic_exc:
self.module.fail_json(msg="Failed to convert template to virtual machine"
" due to generic error : %s" % to_native(generic_exc))
# Automatically update VMWare UUID when converting template to VM.
# This avoids an interactive prompt during VM startup.
uuid_action = [x for x in self.current_vm_obj.config.extraConfig if x.key == "uuid.action"]
if not uuid_action:
uuid_action_opt = vim.option.OptionValue()
uuid_action_opt.key = "uuid.action"
uuid_action_opt.value = "create"
self.configspec.extraConfig.append(uuid_action_opt)
self.change_detected = True
# Automatically update VMWare UUID when converting template to VM.
# This avoids an interactive prompt during VM startup.
uuid_action = [x for x in self.current_vm_obj.config.extraConfig if x.key == "uuid.action"]
if not uuid_action:
uuid_action_opt = vim.option.OptionValue()
uuid_action_opt.key = "uuid.action"
uuid_action_opt.value = "create"
self.configspec.extraConfig.append(uuid_action_opt)
self.change_detected = True
change_applied = True
else:
self.module.fail_json(msg="Resource pool must be specified when converting template to VM!")
change_applied = True
vm_facts = self.gather_facts(self.current_vm_obj)
return {'changed': change_applied, 'failed': False, 'instance': vm_facts}
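The `uuid.action` handling above appends an `extraConfig` option only when no entry with that key exists yet, so converting a template to a VM does not trigger an interactive UUID prompt at power-on. A sketch of that guard, with a tiny stand-in for `vim.option.OptionValue`:

```python
# Tiny stand-in for vim.option.OptionValue; just a key/value pair.
class OptionValue:
    def __init__(self, key=None, value=None):
        self.key, self.value = key, value

extra_config = [OptionValue('foo', 'bar')]

# Append uuid.action only if no entry with that key exists yet.
if not [x for x in extra_config if x.key == 'uuid.action']:
    extra_config.append(OptionValue('uuid.action', 'create'))

print([(x.key, x.value) for x in extra_config])
# [('foo', 'bar'), ('uuid.action', 'create')]
```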

View file

@@ -0,0 +1,335 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright: (c) 2018, Ansible Project
# Copyright: (c) 2018, Abhijeet Kasurde <akasurde@redhat.com>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = '''
---
module: vmware_guest_boot_manager
short_description: Manage boot options for the given virtual machine
description:
- This module can be used to manage boot options for the given virtual machine.
version_added: 2.7
author:
- Abhijeet Kasurde (@Akasurde) <akasurde@redhat.com>
notes:
- Tested on vSphere 6.5
requirements:
- "python >= 2.6"
- PyVmomi
options:
name:
description:
- Name of the VM to work with.
- This is required if C(uuid) parameter is not supplied.
uuid:
description:
- UUID of the instance to manage if known, this is VMware's BIOS UUID.
- This is required if C(name) parameter is not supplied.
boot_order:
description:
- List of the boot devices.
default: []
name_match:
description:
- If multiple virtual machines match the name, use the first or last one found.
default: 'first'
choices: ['first', 'last']
boot_delay:
description:
- Delay in milliseconds before starting the boot sequence.
default: 0
enter_bios_setup:
description:
- If set to C(True), the virtual machine automatically enters BIOS setup the next time it boots.
- The virtual machine resets this flag, so that subsequent boots proceed normally.
type: 'bool'
default: False
boot_retry_enabled:
description:
- If set to C(True), a virtual machine that fails to boot will try to boot again after C(boot_retry_delay) expires.
- If set to C(False), the virtual machine waits indefinitely for user intervention.
type: 'bool'
default: False
boot_retry_delay:
description:
- Specify the time in milliseconds between virtual machine boot failure and subsequent attempt to boot again.
- If set, will automatically set C(boot_retry_enabled) to C(True) as this parameter is required.
default: 0
boot_firmware:
description:
- Choose which firmware should be used to boot the virtual machine.
choices: ["bios", "efi"]
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = r'''
- name: Change virtual machine's boot order and related parameters
vmware_guest_boot_manager:
hostname: "{{ vcenter_server }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_pass }}"
name: testvm
boot_delay: 2000
enter_bios_setup: True
boot_retry_enabled: True
boot_retry_delay: 22300
boot_firmware: bios
boot_order:
- floppy
- cdrom
- ethernet
- disk
register: vm_boot_order
'''
RETURN = r"""
vm_boot_status:
description: metadata about boot order of virtual machine
returned: always
type: dict
sample: {
"current_boot_order": [
"floppy",
"disk",
"ethernet",
"cdrom"
],
"current_boot_delay": 2000,
"current_boot_retry_delay": 22300,
"current_boot_retry_enabled": true,
"current_enter_bios_setup": true,
"current_boot_firmware": "bios",
"previous_boot_delay": 10,
"previous_boot_retry_delay": 10000,
"previous_boot_retry_enabled": true,
"previous_enter_bios_setup": false,
"previous_boot_firmware": "bios",
"previous_boot_order": [
"ethernet",
"cdrom",
"floppy",
"disk"
],
}
"""
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_native
from ansible.module_utils.vmware import PyVmomi, vmware_argument_spec, find_vm_by_id, wait_for_task, TaskError
try:
from pyVmomi import vim
except ImportError:
pass
class VmBootManager(PyVmomi):
def __init__(self, module):
super(VmBootManager, self).__init__(module)
self.name = self.params['name']
self.uuid = self.params['uuid']
self.vm = None
def _get_vm(self):
vms = []
if self.uuid:
vm_obj = find_vm_by_id(self.content, vm_id=self.uuid, vm_id_type="uuid")
if vm_obj is None:
self.module.fail_json(msg="Failed to find the virtual machine with UUID : %s" % self.uuid)
vms = [vm_obj]
elif self.name:
objects = self.get_managed_objects_properties(vim_type=vim.VirtualMachine, properties=['name'])
for temp_vm_object in objects:
if temp_vm_object.obj.name == self.name:
vms.append(temp_vm_object.obj)
if vms:
if self.params.get('name_match') == 'first':
self.vm = vms[0]
elif self.params.get('name_match') == 'last':
self.vm = vms[-1]
else:
self.module.fail_json(msg="Failed to find virtual machine using %s" % (self.name or self.uuid))
@staticmethod
def humanize_boot_order(boot_order):
results = []
for device in boot_order:
if isinstance(device, vim.vm.BootOptions.BootableCdromDevice):
results.append('cdrom')
elif isinstance(device, vim.vm.BootOptions.BootableDiskDevice):
results.append('disk')
elif isinstance(device, vim.vm.BootOptions.BootableEthernetDevice):
results.append('ethernet')
elif isinstance(device, vim.vm.BootOptions.BootableFloppyDevice):
results.append('floppy')
return results
def ensure(self):
self._get_vm()
valid_device_strings = ['cdrom', 'disk', 'ethernet', 'floppy']
boot_order_list = []
for device_order in self.params.get('boot_order'):
if device_order not in valid_device_strings:
self.module.fail_json(msg="Invalid device found [%s], please specify device from ['%s']" % (device_order,
"', '".join(valid_device_strings)))
if device_order == 'cdrom':
first_cdrom = [device for device in self.vm.config.hardware.device if isinstance(device, vim.vm.device.VirtualCdrom)]
if first_cdrom:
boot_order_list.append(vim.vm.BootOptions.BootableCdromDevice())
elif device_order == 'disk':
first_hdd = [device for device in self.vm.config.hardware.device if isinstance(device, vim.vm.device.VirtualDisk)]
if first_hdd:
boot_order_list.append(vim.vm.BootOptions.BootableDiskDevice(deviceKey=first_hdd[0].key))
elif device_order == 'ethernet':
first_ether = [device for device in self.vm.config.hardware.device if isinstance(device, vim.vm.device.VirtualEthernetCard)]
if first_ether:
boot_order_list.append(vim.vm.BootOptions.BootableEthernetDevice(deviceKey=first_ether[0].key))
elif device_order == 'floppy':
first_floppy = [device for device in self.vm.config.hardware.device if isinstance(device, vim.vm.device.VirtualFloppy)]
if first_floppy:
boot_order_list.append(vim.vm.BootOptions.BootableFloppyDevice())
change_needed = False
kwargs = dict()
if len(boot_order_list) != len(self.vm.config.bootOptions.bootOrder):
kwargs.update({'bootOrder': boot_order_list})
change_needed = True
else:
for i in range(0, len(boot_order_list)):
boot_device_type = type(boot_order_list[i])
vm_boot_device_type = type(self.vm.config.bootOptions.bootOrder[i])
if boot_device_type != vm_boot_device_type:
kwargs.update({'bootOrder': boot_order_list})
change_needed = True
if self.vm.config.bootOptions.bootDelay != self.params.get('boot_delay'):
kwargs.update({'bootDelay': self.params.get('boot_delay')})
change_needed = True
if self.vm.config.bootOptions.enterBIOSSetup != self.params.get('enter_bios_setup'):
kwargs.update({'enterBIOSSetup': self.params.get('enter_bios_setup')})
change_needed = True
if self.vm.config.bootOptions.bootRetryEnabled != self.params.get('boot_retry_enabled'):
kwargs.update({'bootRetryEnabled': self.params.get('boot_retry_enabled')})
change_needed = True
if self.vm.config.bootOptions.bootRetryDelay != self.params.get('boot_retry_delay'):
if not self.vm.config.bootOptions.bootRetryEnabled:
kwargs.update({'bootRetryEnabled': True})
kwargs.update({'bootRetryDelay': self.params.get('boot_retry_delay')})
change_needed = True
boot_firmware_required = False
if self.vm.config.firmware != self.params.get('boot_firmware'):
change_needed = True
boot_firmware_required = True
changed = False
results = dict(
previous_boot_order=self.humanize_boot_order(self.vm.config.bootOptions.bootOrder),
previous_boot_delay=self.vm.config.bootOptions.bootDelay,
previous_enter_bios_setup=self.vm.config.bootOptions.enterBIOSSetup,
previous_boot_retry_enabled=self.vm.config.bootOptions.bootRetryEnabled,
previous_boot_retry_delay=self.vm.config.bootOptions.bootRetryDelay,
previous_boot_firmware=self.vm.config.firmware,
current_boot_order=[],
)
if change_needed:
vm_conf = vim.vm.ConfigSpec()
vm_conf.bootOptions = vim.vm.BootOptions(**kwargs)
if boot_firmware_required:
vm_conf.firmware = self.params.get('boot_firmware')
task = self.vm.ReconfigVM_Task(vm_conf)
try:
changed, result = wait_for_task(task)
except TaskError as e:
self.module.fail_json(msg="Failed to perform reconfigure virtual"
" machine %s for boot order due to: %s" % (self.name or self.uuid,
to_native(e)))
results.update(
{
'current_boot_order': self.humanize_boot_order(self.vm.config.bootOptions.bootOrder),
'current_boot_delay': self.vm.config.bootOptions.bootDelay,
'current_enter_bios_setup': self.vm.config.bootOptions.enterBIOSSetup,
'current_boot_retry_enabled': self.vm.config.bootOptions.bootRetryEnabled,
'current_boot_retry_delay': self.vm.config.bootOptions.bootRetryDelay,
'current_boot_firmware': self.vm.config.firmware,
}
)
self.module.exit_json(changed=changed, vm_boot_status=results)
def main():
argument_spec = vmware_argument_spec()
argument_spec.update(
name=dict(type='str'),
uuid=dict(type='str'),
boot_order=dict(
type='list',
default=[],
),
name_match=dict(
choices=['first', 'last'],
default='first'
),
boot_delay=dict(
type='int',
default=0,
),
enter_bios_setup=dict(
type='bool',
default=False,
),
boot_retry_enabled=dict(
type='bool',
default=False,
),
boot_retry_delay=dict(
type='int',
default=0,
),
boot_firmware=dict(
type='str',
choices=['efi', 'bios'],
)
)
module = AnsibleModule(
argument_spec=argument_spec,
required_one_of=[
['name', 'uuid']
],
mutually_exclusive=[
['name', 'uuid']
],
)
pyv = VmBootManager(module)
pyv.ensure()
if __name__ == '__main__':
main()
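The change detection in `ensure()` treats two boot orders as different when their lengths differ or any positional entry differs. A simplified sketch in which plain strings stand in for the `vim.vm.BootOptions` device classes (the real code compares the *types* of the entries, position by position):

```python
# Plain strings stand in for BootableCdromDevice, BootableDiskDevice, etc.
def boot_order_changed(wanted, current):
    # Different lengths always mean a reconfigure is needed.
    if len(wanted) != len(current):
        return True
    # Otherwise compare position by position.
    return any(w != c for w, c in zip(wanted, current))

print(boot_order_changed(['disk', 'cdrom'], ['cdrom', 'disk']))  # True
print(boot_order_changed(['disk'], ['disk']))                    # False
```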

View file

@@ -0,0 +1,164 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Ansible Project
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = '''
---
module: vmware_guest_custom_attribute_defs
short_description: Manage custom attribute definitions for virtual machines from VMware
description:
- This module can be used to add, remove and list custom attribute definitions for the given virtual machine from VMware.
version_added: 2.7
author:
- Jimmy Conner
- Abhijeet Kasurde (@Akasurde) <akasurde@redhat.com>
notes:
- Tested on vSphere 6.5
requirements:
- "python >= 2.6"
- PyVmomi
options:
attribute_key:
description:
- Name of the custom attribute definition.
- This parameter is required if C(state) is set to C(present) or C(absent).
required: False
state:
description:
- Manage definition of custom attributes.
- If set to C(present) and the definition is not present, then the custom attribute definition is created.
- If set to C(present) and the definition is present, then no action is taken.
- If set to C(absent) and the definition is present, then the custom attribute definition is removed.
- If set to C(absent) and the definition is absent, then no action is taken.
default: 'present'
choices: ['present', 'absent']
required: True
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = '''
- name: List VMWare Attribute Definitions
vmware_guest_custom_attribute_defs:
hostname: 192.168.1.209
username: administrator@vsphere.local
password: vmware
validate_certs: no
state: list
delegate_to: localhost
register: defs
- name: Add VMWare Attribute Definition
vmware_guest_custom_attribute_defs:
hostname: 192.168.1.209
username: administrator@vsphere.local
password: vmware
validate_certs: no
state: present
attribute_key: custom_attr_def_1
delegate_to: localhost
register: defs
- name: Remove VMWare Attribute Definition
vmware_guest_custom_attribute_defs:
hostname: 192.168.1.209
username: administrator@vsphere.local
password: vmware
validate_certs: no
state: absent
attribute_key: custom_attr_def_1
delegate_to: localhost
register: defs
'''
RETURN = """
custom_attribute_defs:
description: list of all current attribute definitions
returned: always
type: list
sample: ["sample_5", "sample_4"]
"""
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.vmware import PyVmomi, vmware_argument_spec
try:
import pyVmomi
from pyVmomi import vim
except ImportError:
pass
class VmAttributeDefManager(PyVmomi):
def __init__(self, module):
super(VmAttributeDefManager, self).__init__(module)
self.custom_field_mgr = self.content.customFieldsManager.field
def remove_custom_def(self, field):
changed = False
f = dict()
for x in self.custom_field_mgr:
if x.name == field:
changed = True
if not self.module.check_mode:
self.content.customFieldsManager.RemoveCustomFieldDef(key=x.key)
break
f[x.name] = (x.key, x.managedObjectType)
return {'changed': changed, 'failed': False, 'custom_attribute_defs': list(f.keys())}
def add_custom_def(self, field):
changed = False
found = False
f = dict()
for x in self.custom_field_mgr:
if x.name == field:
found = True
f[x.name] = (x.key, x.managedObjectType)
if not found:
changed = True
if not self.module.check_mode:
new_field = self.content.customFieldsManager.AddFieldDefinition(name=field, moType=vim.VirtualMachine)
f[new_field.name] = (new_field.key, new_field.type)
return {'changed': changed, 'failed': False, 'custom_attribute_defs': list(f.keys())}
def main():
argument_spec = vmware_argument_spec()
argument_spec.update(
attribute_key=dict(type='str'),
state=dict(type='str', default='present', choices=['absent', 'present']),
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
required_if=[
['state', 'present', ['attribute_key']],
['state', 'absent', ['attribute_key']],
]
)
pyv = VmAttributeDefManager(module)
results = dict(changed=False, custom_attribute_defs=list())
if module.params['state'] == "present":
results = pyv.add_custom_def(module.params['attribute_key'])
elif module.params['state'] == "absent":
results = pyv.remove_custom_def(module.params['attribute_key'])
module.exit_json(**results)
if __name__ == '__main__':
main()
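`add_custom_def()` above only creates a definition when its name is absent, which keeps repeated runs idempotent. A minimal sketch of that check, with a plain dict standing in for the vCenter custom fields manager:

```python
# A plain dict stands in for the vCenter custom fields manager; only the
# presence of a name matters for change detection, the values are arbitrary.
defs = {'sample_4': 1, 'sample_5': 2}

def add_def(name):
    changed = name not in defs
    if changed:
        defs[name] = len(defs) + 1
    return changed

print(add_def('custom_attr_def_1'))  # True  (created)
print(add_def('custom_attr_def_1'))  # False (already present)
```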

View file

@@ -0,0 +1,226 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright, (c) 2018, Ansible Project
# Copyright, (c) 2018, Abhijeet Kasurde <akasurde@redhat.com>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = '''
---
module: vmware_guest_custom_attributes
short_description: Manage custom attributes from VMware for the given virtual machine
description:
- This module can be used to add, remove and update custom attributes for the given virtual machine.
version_added: 2.7
author:
- Jimmy Conner
- Abhijeet Kasurde (@Akasurde)
notes:
- Tested on vSphere 6.5
requirements:
- "python >= 2.6"
- PyVmomi
options:
name:
description:
- Name of the virtual machine to work with.
required: True
state:
description:
- The action to take.
- If set to C(present), then custom attribute is added or updated.
- If set to C(absent), then custom attribute is removed.
default: 'present'
choices: ['present', 'absent']
uuid:
description:
- UUID of the virtual machine to manage if known. This is VMware's unique identifier.
- This parameter is required if C(name) is not supplied.
folder:
description:
- Absolute path to find an existing guest.
- This parameter is required if C(name) is supplied and multiple virtual machines with the same name are found.
datacenter:
description:
- Name of the datacenter where the virtual machine is located.
required: True
attributes:
description:
- A list of name and value pairs for the custom attributes that need to be managed.
- The value of a custom attribute is not required and will be ignored if C(state) is set to C(absent).
default: []
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = '''
- name: Add virtual machine custom attributes
vmware_guest_custom_attributes:
hostname: 192.168.1.209
username: administrator@vsphere.local
password: vmware
uuid: 421e4592-c069-924d-ce20-7e7533fab926
state: present
attributes:
- name: MyAttribute
value: MyValue
delegate_to: localhost
register: attributes
- name: Add multiple virtual machine custom attributes
vmware_guest_custom_attributes:
hostname: 192.168.1.209
username: administrator@vsphere.local
password: vmware
uuid: 421e4592-c069-924d-ce20-7e7533fab926
state: present
attributes:
- name: MyAttribute
value: MyValue
- name: MyAttribute2
value: MyValue2
delegate_to: localhost
register: attributes
- name: Remove virtual machine Attribute
vmware_guest_custom_attributes:
hostname: 192.168.1.209
username: administrator@vsphere.local
password: vmware
uuid: 421e4592-c069-924d-ce20-7e7533fab926
state: absent
attributes:
- name: MyAttribute
delegate_to: localhost
register: attributes
'''
RETURN = """
custom_attributes:
description: metadata about the virtual machine attributes
returned: always
type: dict
sample: {
"mycustom": "my_custom_value",
"mycustom_2": "my_custom_value_2",
"sample_1": "sample_1_value",
"sample_2": "sample_2_value",
"sample_3": "sample_3_value"
}
"""
try:
from pyVmomi import vim
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.vmware import PyVmomi, vmware_argument_spec
class VmAttributeManager(PyVmomi):
def __init__(self, module):
super(VmAttributeManager, self).__init__(module)
self.custom_field_mgr = self.content.customFieldsManager.field
def set_custom_field(self, vm, user_fields):
result_fields = dict()
change_list = list()
changed = False
for field in user_fields:
field_key = self.check_exists(field['name'])
found = False
field_value = field.get('value', '')
for k, v in [(x.name, v.value) for x in self.custom_field_mgr for v in vm.customValue if x.key == v.key]:
if k == field['name']:
found = True
if v != field_value:
if not self.module.check_mode:
self.content.customFieldsManager.SetField(entity=vm, key=field_key.key, value=field_value)
result_fields[k] = field_value
change_list.append(True)
if not found and field_value != "":
if not field_key and not self.module.check_mode:
field_key = self.content.customFieldsManager.AddFieldDefinition(name=field['name'], moType=vim.VirtualMachine)
change_list.append(True)
if not self.module.check_mode:
self.content.customFieldsManager.SetField(entity=vm, key=field_key.key, value=field_value)
result_fields[field['name']] = field_value
if any(change_list):
changed = True
return {'changed': changed, 'failed': False, 'custom_attributes': result_fields}
def check_exists(self, field):
for x in self.custom_field_mgr:
if x.name == field:
return x
return False
def main():
argument_spec = vmware_argument_spec()
argument_spec.update(
datacenter=dict(type='str'),
name=dict(required=True, type='str'),
folder=dict(type='str'),
uuid=dict(type='str'),
state=dict(type='str', default='present',
choices=['absent', 'present']),
attributes=dict(
type='list',
default=[],
options=dict(
name=dict(type='str', required=True),
value=dict(type='str'),
)
),
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
required_one_of=[['name', 'uuid']],
)
if module.params.get('folder'):
# FindByInventoryPath() does not require an absolute path
# so we should leave the input folder path unmodified
module.params['folder'] = module.params['folder'].rstrip('/')
pyv = VmAttributeManager(module)
results = {'changed': False, 'failed': False, 'instance': dict()}
# Check if the virtual machine exists before continuing
vm = pyv.get_vm()
if vm:
# virtual machine already exists
if module.params['state'] == "present":
results = pyv.set_custom_field(vm, module.params['attributes'])
elif module.params['state'] == "absent":
results = pyv.set_custom_field(vm, module.params['attributes'])
module.exit_json(**results)
else:
# virtual machine does not exist
module.fail_json(msg="Unable to manage custom attributes for non-existent"
" virtual machine %s" % (module.params.get('name') or module.params.get('uuid')))
if __name__ == '__main__':
main()
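The nested comprehension in `set_custom_field()` joins field *definitions* (which carry the names) to the VM's `customValue` entries (which carry the values) on the shared key. A plain-Python sketch of that join, with tuples standing in for the pyVmomi objects:

```python
# (key, name) pairs stand in for field definitions; (key, value) pairs for
# the VM's customValue entries. Matching on the shared key yields name->value.
defs = [('k1', 'mycustom'), ('k2', 'sample_1')]
vm_values = [('k1', 'my_custom_value')]

current = {name: value
           for key, name in defs
           for vkey, value in vm_values
           if key == vkey}
print(current)  # {'mycustom': 'my_custom_value'}
```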

View file

@ -0,0 +1,218 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2018, Jose Angel Munoz <josea.munoz () gmail.com>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = r'''
---
module: vmware_guest_move
short_description: Moves virtual machines in vCenter
description:
- This module can be used to move virtual machines between folders.
version_added: '2.7'
author:
- Jose Angel Munoz (@imjoseangel)
notes:
- Tested on vSphere 5.5 and vSphere 6.5
requirements:
- python >= 2.6
- PyVmomi
options:
name:
description:
- Name of the existing virtual machine to move.
- This is required if C(uuid) is not supplied.
uuid:
description:
- UUID of the virtual machine to manage if known, this is VMware's unique identifier.
- This is required if C(name) is not supplied.
name_match:
description:
- If multiple virtual machines match the name, use the first or last found.
default: 'first'
choices: [ first, last ]
dest_folder:
description:
- Absolute path to move an existing guest.
- The dest_folder should include the datacenter. ESX's datacenter is ha-datacenter.
- This parameter is case sensitive.
- 'Examples:'
- ' dest_folder: /ha-datacenter/vm'
- ' dest_folder: ha-datacenter/vm'
- ' dest_folder: /datacenter1/vm'
- ' dest_folder: datacenter1/vm'
- ' dest_folder: /datacenter1/vm/folder1'
- ' dest_folder: datacenter1/vm/folder1'
- ' dest_folder: /folder1/datacenter1/vm'
- ' dest_folder: folder1/datacenter1/vm'
- ' dest_folder: /folder1/datacenter1/vm/folder2'
required: True
datacenter:
description:
- Destination datacenter for the move operation.
required: True
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = r'''
- name: Move Virtual Machine
vmware_guest_move:
hostname: "{{ vcenter_ip }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
datacenter: datacenter
validate_certs: False
name: testvm-1
dest_folder: datacenter/vm/prodvms
- name: Get VM UUID
vmware_guest_facts:
hostname: "{{ vcenter_ip }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
datacenter: "{{ datacenter }}"
folder: "/{{datacenter}}/vm"
name: "{{ vm_name }}"
register: vm_facts
- name: Get UUID from previous task and pass it to this task
vmware_guest_move:
hostname: "{{ vcenter_ip }}"
username: "{{ vcenter_username }}"
password: "{{ vcenter_password }}"
validate_certs: no
uuid: "{{ vm_facts.instance.hw_product_uuid }}"
dest_folder: "/DataCenter/vm/path/to/new/folder/where/we/want"
datacenter: "{{ datacenter }}"
delegate_to: localhost
register: facts
'''
RETURN = """
instance:
description: metadata about the virtual machine
returned: always
type: dict
sample: {
"annotation": null,
"current_snapshot": null,
"customvalues": {},
"guest_consolidation_needed": false,
"guest_question": null,
"guest_tools_status": null,
"guest_tools_version": "0",
"hw_cores_per_socket": 1,
"hw_datastores": [
"LocalDS_0"
],
"hw_esxi_host": "DC0_H0",
"hw_eth0": {
"addresstype": "generated",
"ipaddresses": null,
"label": "ethernet-0",
"macaddress": "00:0c:29:6b:34:2c",
"macaddress_dash": "00-0c-29-6b-34-2c",
"summary": "DVSwitch: 43cdd1db-1ef7-4016-9bbe-d96395616199"
},
"hw_files": [
"[LocalDS_0] DC0_H0_VM0/DC0_H0_VM0.vmx"
],
"hw_folder": "/F0/DC0/vm/F0",
"hw_guest_full_name": null,
"hw_guest_ha_state": null,
"hw_guest_id": "otherGuest",
"hw_interfaces": [
"eth0"
],
"hw_is_template": false,
"hw_memtotal_mb": 32,
"hw_name": "DC0_H0_VM0",
"hw_power_status": "poweredOn",
"hw_processor_count": 1,
"hw_product_uuid": "581c2808-64fb-45ee-871f-6a745525cb29",
"instance_uuid": "8bcb0b6e-3a7d-4513-bf6a-051d15344352",
"ipv4": null,
"ipv6": null,
"module_hw": true,
"snapshots": []
}
"""
try:
import pyVmomi
from pyVmomi import vim
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_native
from ansible.module_utils.vmware import PyVmomi, vmware_argument_spec, connect_to_api, wait_for_task
class PyVmomiHelper(PyVmomi):
def __init__(self, module):
super(PyVmomiHelper, self).__init__(module)
def main():
argument_spec = vmware_argument_spec()
argument_spec.update(
name=dict(type='str'),
name_match=dict(
type='str', choices=['first', 'last'], default='first'),
uuid=dict(type='str'),
dest_folder=dict(type='str', required=True),
datacenter=dict(type='str', required=True),
)
module = AnsibleModule(
argument_spec=argument_spec, required_one_of=[['name', 'uuid']])
# FindByInventoryPath() does not require an absolute path
# so we should leave the input folder path unmodified
module.params['dest_folder'] = module.params['dest_folder'].rstrip('/')
pyv = PyVmomiHelper(module)
search_index = pyv.content.searchIndex
# Check if the VM exists before continuing
vm = pyv.get_vm()
# VM exists
if vm:
try:
vm_path = pyv.get_vm_path(pyv.content, vm).lstrip('/')
vm_full = vm_path + '/' + vm.name
folder = search_index.FindByInventoryPath(
module.params['dest_folder'])
vm_to_move = search_index.FindByInventoryPath(vm_full)
if vm_path != module.params['dest_folder'].lstrip('/'):
move_task = folder.MoveInto([vm_to_move])
changed, err = wait_for_task(move_task)
if changed:
module.exit_json(
changed=True, instance=pyv.gather_facts(vm))
else:
module.exit_json(instance=pyv.gather_facts(vm))
except Exception as exc:
module.fail_json(msg="Failed to move VM with exception %s" %
to_native(exc))
else:
module.fail_json(msg="Unable to find VM %s to move to %s" % (
(module.params.get('uuid') or module.params.get('name')),
module.params.get('dest_folder')))
if __name__ == '__main__':
main()
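The move decision in main() above hinges on comparing the VM's current inventory path with dest_folder after normalizing slashes. A minimal sketch of that comparison (the helper name is hypothetical, not part of the module):

```python
def same_folder(current_path, dest_folder):
    # With leading/trailing slashes stripped, '/DC0/vm/' and 'DC0/vm'
    # refer to the same inventory folder, so no MoveInto task is needed.
    return current_path.strip('/') == dest_folder.strip('/')
```

This mirrors why the module calls rstrip('/') on dest_folder before the comparison: FindByInventoryPath() accepts both forms, but string equality does not.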

View file

@@ -1,7 +1,9 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2017, Wei Gao <gaowei3@qq.com>
# Copyright: (c) 2018, Ansible Project
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
@@ -21,6 +23,7 @@ description:
- This module can be used to gather facts like CPU, memory, datastore, network and system facts about an ESXi host system.
- Please specify the hostname or IP address of the ESXi host system as C(hostname).
- If the hostname or IP address of a vCenter is provided as C(hostname), then information about the first ESXi host system is returned.
- VSAN facts were added in Ansible 2.7.
version_added: 2.5
author:
- Wei Gao (@woshihaoren)
@@ -38,6 +41,16 @@ EXAMPLES = '''
password: password
register: host_facts
delegate_to: localhost
- name: Get VSAN Cluster UUID from host facts
vmware_host_facts:
hostname: esxi_ip_or_hostname
username: username
password: password
register: host_facts
- set_fact:
cluster_uuid: "{{ host_facts['ansible_facts']['vsan_cluster_uuid'] }}"
'''
RETURN = '''
@@ -84,7 +97,10 @@ ansible_facts:
},
"macaddress": "52:54:00:56:7d:59",
"mtu": 1500
}
},
"vsan_cluster_uuid": null,
"vsan_node_uuid": null,
"vsan_health": "unknown",
}
'''
@@ -111,8 +127,25 @@ class VMwareHostFactManager(PyVmomi):
ansible_facts.update(self.get_datastore_facts())
ansible_facts.update(self.get_network_facts())
ansible_facts.update(self.get_system_facts())
ansible_facts.update(self.get_vsan_facts())
self.module.exit_json(changed=False, ansible_facts=ansible_facts)
def get_vsan_facts(self):
config_mgr = self.host.configManager.vsanSystem
if config_mgr is None:
return {
'vsan_cluster_uuid': None,
'vsan_node_uuid': None,
'vsan_health': "unknown",
}
status = config_mgr.QueryHostStatus()
return {
'vsan_cluster_uuid': status.uuid,
'vsan_node_uuid': status.nodeUuid,
'vsan_health': status.health,
}
def get_cpu_facts(self):
return {
'ansible_processor': self.host.summary.hardware.cpuModel,

View file

@@ -1,17 +1,20 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright: (c) 2015-16, Ritesh Khadgaray <khadgaray () gmail.com>
# Copyright: (c) 2018, Ansible Project
# Copyright: (c) 2018, Abhijeet Kasurde <akasurde@redhat.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
ANSIBLE_METADATA = {
'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'
}
DOCUMENTATION = '''
DOCUMENTATION = r'''
---
module: vmware_vm_shell
short_description: Run commands in a VMware guest operating system
@@ -22,86 +25,92 @@ author:
- Ritesh Khadgaray (@ritzk)
- Abhijeet Kasurde (@Akasurde)
notes:
- Tested on vSphere 5.5
- Only the first match against vm_id is used, even if there are multiple matches
- Tested on vSphere 5.5, 6.0 and 6.5.
- Only the first match against vm_id is used, even if there are multiple matches.
requirements:
- "python >= 2.6"
- PyVmomi
options:
datacenter:
description:
- The datacenter hosting the virtual machine.
- If set, it will help to speed up virtual machine search.
description:
- The datacenter hosting the virtual machine.
- If set, it will help to speed up virtual machine search.
cluster:
description:
- The cluster hosting the virtual machine.
- If set, it will help to speed up virtual machine search.
description:
- The cluster hosting the virtual machine.
- If set, it will help to speed up virtual machine search.
folder:
description:
- Destination folder, absolute or relative path to find an existing guest or create the new guest.
- The folder should include the datacenter. ESX's datacenter is ha-datacenter
- 'Examples:'
- ' folder: /ha-datacenter/vm'
- ' folder: ha-datacenter/vm'
- ' folder: /datacenter1/vm'
- ' folder: datacenter1/vm'
- ' folder: /datacenter1/vm/folder1'
- ' folder: datacenter1/vm/folder1'
- ' folder: /folder1/datacenter1/vm'
- ' folder: folder1/datacenter1/vm'
- ' folder: /folder1/datacenter1/vm/folder2'
- ' folder: vm/folder2'
- ' folder: folder2'
default: /vm
version_added: "2.4"
description:
- Destination folder, absolute or relative path to find an existing guest or create the new guest.
- The folder should include the datacenter. ESX's datacenter is ha-datacenter.
- 'Examples:'
- ' folder: /ha-datacenter/vm'
- ' folder: ha-datacenter/vm'
- ' folder: /datacenter1/vm'
- ' folder: datacenter1/vm'
- ' folder: /datacenter1/vm/folder1'
- ' folder: datacenter1/vm/folder1'
- ' folder: /folder1/datacenter1/vm'
- ' folder: folder1/datacenter1/vm'
- ' folder: /folder1/datacenter1/vm/folder2'
version_added: "2.4"
vm_id:
description:
- Name of the virtual machine to work with.
required: True
description:
- Name of the virtual machine to work with.
required: True
vm_id_type:
description:
- The VMware identification method by which the virtual machine will be identified.
default: vm_name
choices:
- 'uuid'
- 'dns_name'
- 'inventory_path'
- 'vm_name'
description:
- The VMware identification method by which the virtual machine will be identified.
default: vm_name
choices: ['uuid', 'dns_name', 'inventory_path', 'vm_name']
vm_username:
description:
- The user to log in to the virtual machine.
required: True
description:
- The user to log in to the virtual machine.
required: True
vm_password:
description:
- The password used to log in to the virtual machine.
required: True
description:
- The password used to log in to the virtual machine.
required: True
vm_shell:
description:
- The absolute path to the program to start.
- On Linux, shell is executed via bash.
required: True
description:
- The absolute path to the program to start.
- On Linux, shell is executed via bash.
required: True
vm_shell_args:
description:
- The argument to the program.
default: " "
description:
- The argument to the program.
- The characters which must be escaped to the shell should also be escaped on the command line provided.
default: " "
vm_shell_env:
description:
- Comma separated list of environment variables, specified in the guest OS notation.
description:
- Comma separated list of environment variables, specified in the guest OS notation.
vm_shell_cwd:
description:
- The current working directory of the application from which it will be run.
description:
- The current working directory of the application from which it will be run.
wait_for_process:
description:
- If set to C(True), module will wait for process to complete in the given virtual machine.
default: False
type: bool
version_added: 2.7
timeout:
description:
- Timeout in seconds.
- If set to positive integers, then C(wait_for_process) will honor this parameter and will exit after this timeout.
default: 3600
version_added: 2.7
extends_documentation_fragment: vmware.documentation
'''
EXAMPLES = '''
- name: Run command inside a vm
EXAMPLES = r'''
- name: Run command inside a virtual machine
vmware_vm_shell:
hostname: myVSphere
username: myUsername
password: mySecret
datacenter: myDatacenter
folder: /vm
vm_id: NameOfVM
hostname: "{{ vcenter_server }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_pass }}"
datacenter: "{{ datacenter }}"
folder: /"{{datacenter}}"/vm
vm_id: "{{ vm_name }}"
vm_username: root
vm_password: superSecret
vm_shell: /bin/echo
@@ -112,90 +121,228 @@ EXAMPLES = '''
vm_shell_cwd: "/tmp"
delegate_to: localhost
register: shell_command_output
- name: Run command inside a virtual machine with wait and timeout
vmware_vm_shell:
hostname: "{{ vcenter_server }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_pass }}"
datacenter: "{{ datacenter }}"
folder: /"{{datacenter}}"/vm
vm_id: NameOfVM
vm_username: root
vm_password: superSecret
vm_shell: /bin/sleep
vm_shell_args: 100
wait_for_process: True
timeout: 2000
delegate_to: localhost
register: shell_command_with_wait_timeout
- name: Change user password in the guest machine
vmware_vm_shell:
hostname: "{{ vcenter_server }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_pass }}"
datacenter: "{{ datacenter }}"
folder: /"{{datacenter}}"/vm
vm_id: "{{ vm_name }}"
vm_username: sample
vm_password: old_password
vm_shell: "/bin/echo"
vm_shell_args: "-e 'old_password\nnew_password\nnew_password' | passwd sample > /tmp/$$.txt 2>&1"
delegate_to: localhost
- name: Change hostname of guest machine
vmware_vm_shell:
hostname: "{{ vcenter_server }}"
username: "{{ vcenter_user }}"
password: "{{ vcenter_pass }}"
validate_certs: no
datacenter: "{{ datacenter }}"
folder: /"{{datacenter}}"/vm
vm_id: "{{ vm_name }}"
vm_username: testUser
vm_password: SuperSecretPassword
vm_shell: "/usr/bin/hostnamectl"
vm_shell_args: "set-hostname new_hostname > /tmp/$$.txt 2>&1"
delegate_to: localhost
'''
RETURN = r'''
results:
description: metadata about the new process after completion with wait_for_process
returned: on success
type: dict
sample:
{
"cmd_line": "\"/bin/sleep\" 1",
"end_time": "2018-04-26T05:03:21+00:00",
"exit_code": 0,
"name": "sleep",
"owner": "dev1",
"start_time": "2018-04-26T05:03:19+00:00",
"uuid": "564db1e2-a3ff-3b0e-8b77-49c25570bb66",
}
'''
import time
try:
from pyVmomi import vim, vmodl
except ImportError:
pass
from ansible.module_utils._text import to_native
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.vmware import (connect_to_api, find_cluster_by_name, find_datacenter_by_name,
find_vm_by_id, HAS_PYVMOMI, vmware_argument_spec)
from ansible.module_utils.vmware import (PyVmomi, find_cluster_by_name,
find_datacenter_by_name, find_vm_by_id,
vmware_argument_spec)
# https://github.com/vmware/pyvmomi-community-samples/blob/master/samples/execute_program_in_vm.py
def execute_command(content, vm, params):
vm_username = params['vm_username']
vm_password = params['vm_password']
program_path = params['vm_shell']
args = params['vm_shell_args']
env = params['vm_shell_env']
cwd = params['vm_shell_cwd']
class VMwareShellManager(PyVmomi):
def __init__(self, module):
super(VMwareShellManager, self).__init__(module)
datacenter_name = module.params['datacenter']
cluster_name = module.params['cluster']
folder = module.params['folder']
self.pm = self.content.guestOperationsManager.processManager
self.timeout = self.params.get('timeout', 3600)
self.wait_for_pid = self.params.get('wait_for_process', False)
creds = vim.vm.guest.NamePasswordAuthentication(username=vm_username, password=vm_password)
cmdspec = vim.vm.guest.ProcessManager.ProgramSpec(arguments=args, envVariables=env, programPath=program_path, workingDirectory=cwd)
cmdpid = content.guestOperationsManager.processManager.StartProgramInGuest(vm=vm, auth=creds, spec=cmdspec)
datacenter = None
if datacenter_name:
datacenter = find_datacenter_by_name(self.content, datacenter_name)
if not datacenter:
module.fail_json(changed=False, msg="Unable to find %(datacenter)s datacenter" % module.params)
return cmdpid
cluster = None
if cluster_name:
cluster = find_cluster_by_name(self.content, cluster_name, datacenter)
if not cluster:
module.fail_json(changed=False, msg="Unable to find %(cluster)s cluster" % module.params)
if module.params['vm_id_type'] == 'inventory_path':
vm = find_vm_by_id(self.content,
vm_id=module.params['vm_id'],
vm_id_type="inventory_path",
folder=folder)
else:
vm = find_vm_by_id(self.content,
vm_id=module.params['vm_id'],
vm_id_type=module.params['vm_id_type'],
datacenter=datacenter, cluster=cluster)
if not vm:
module.fail_json(msg='Unable to find virtual machine.')
tools_status = vm.guest.toolsStatus
if tools_status in ['toolsNotInstalled', 'toolsNotRunning']:
self.module.fail_json(msg="VMware Tools is not installed or is not running in the guest."
" VMware Tools are necessary to run this module.")
try:
self.execute_command(vm, module.params)
except vmodl.RuntimeFault as runtime_fault:
module.fail_json(changed=False, msg=to_native(runtime_fault.msg))
except vmodl.MethodFault as method_fault:
module.fail_json(changed=False, msg=to_native(method_fault.msg))
except Exception as e:
module.fail_json(changed=False, msg=to_native(e))
def execute_command(self, vm, params):
# https://github.com/vmware/pyvmomi-community-samples/blob/master/samples/execute_program_in_vm.py
vm_username = params['vm_username']
vm_password = params['vm_password']
program_path = params['vm_shell']
args = params['vm_shell_args']
env = params['vm_shell_env']
cwd = params['vm_shell_cwd']
credentials = vim.vm.guest.NamePasswordAuthentication(username=vm_username,
password=vm_password)
cmd_spec = vim.vm.guest.ProcessManager.ProgramSpec(arguments=args,
envVariables=env,
programPath=program_path,
workingDirectory=cwd)
res = self.pm.StartProgramInGuest(vm=vm, auth=credentials, spec=cmd_spec)
if self.wait_for_pid:
res_data = self.wait_for_process(vm, res, credentials)
results = dict(uuid=vm.summary.config.uuid,
owner=res_data.owner,
start_time=res_data.startTime.isoformat(),
end_time=res_data.endTime.isoformat(),
exit_code=res_data.exitCode,
name=res_data.name,
cmd_line=res_data.cmdLine)
if res_data.exitCode != 0:
results['msg'] = "Failed to execute command"
results['changed'] = False
results['failed'] = True
self.module.fail_json(**results)
else:
results['changed'] = True
results['failed'] = False
self.module.exit_json(**results)
else:
self.module.exit_json(changed=True, uuid=vm.summary.config.uuid, msg=res)
def process_exists_in_guest(self, vm, pid, creds):
res = self.pm.ListProcessesInGuest(vm, creds, pids=[pid])
if not res:
return False, None
res = res[0]
if res.exitCode is None:
return True, ''
elif res.exitCode >= 0:
return False, res
else:
return True, res
def wait_for_process(self, vm, pid, creds):
start_time = time.time()
while True:
current_time = time.time()
process_status, res_data = self.process_exists_in_guest(vm, pid, creds)
if not process_status:
return res_data
elif current_time - start_time >= self.timeout:
break
else:
time.sleep(5)
def main():
argument_spec = vmware_argument_spec()
argument_spec.update(dict(datacenter=dict(type='str'),
cluster=dict(type='str'),
folder=dict(type='str', default='/vm'),
vm_id=dict(type='str', required=True),
vm_id_type=dict(default='vm_name', type='str', choices=['inventory_path', 'uuid', 'dns_name', 'vm_name']),
vm_username=dict(type='str', required=True),
vm_password=dict(type='str', no_log=True, required=True),
vm_shell=dict(type='str', required=True),
vm_shell_args=dict(default=" ", type='str'),
vm_shell_env=dict(type='list'),
vm_shell_cwd=dict(type='str')))
argument_spec.update(
dict(
datacenter=dict(type='str'),
cluster=dict(type='str'),
folder=dict(type='str'),
vm_id=dict(type='str', required=True),
vm_id_type=dict(default='vm_name', type='str',
choices=['inventory_path', 'uuid', 'dns_name', 'vm_name']),
vm_username=dict(type='str', required=True),
vm_password=dict(type='str', no_log=True, required=True),
vm_shell=dict(type='str', required=True),
vm_shell_args=dict(default=" ", type='str'),
vm_shell_env=dict(type='list'),
vm_shell_cwd=dict(type='str'),
wait_for_process=dict(type='bool', default=False),
timeout=dict(type='int', default=3600),
)
)
module = AnsibleModule(argument_spec=argument_spec,
supports_check_mode=False,
required_if=[['vm_id_type', 'inventory_path', ['folder']]],
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=False,
required_if=[
['vm_id_type', 'inventory_path', ['folder']]
],
)
if not HAS_PYVMOMI:
module.fail_json(changed=False, msg='pyvmomi is required for this module')
datacenter_name = module.params['datacenter']
cluster_name = module.params['cluster']
folder = module.params['folder']
content = connect_to_api(module)
datacenter = None
if datacenter_name:
datacenter = find_datacenter_by_name(content, datacenter_name)
if not datacenter:
module.fail_json(changed=False, msg="Unable to find %(datacenter)s datacenter" % module.params)
cluster = None
if cluster_name:
cluster = find_cluster_by_name(content, cluster_name, datacenter)
if not cluster:
module.fail_json(changed=False, msg="Unable to find %(cluster)s cluster" % module.params)
if module.params['vm_id_type'] == 'inventory_path':
vm = find_vm_by_id(content, vm_id=module.params['vm_id'], vm_id_type="inventory_path", folder=folder)
else:
vm = find_vm_by_id(content, vm_id=module.params['vm_id'], vm_id_type=module.params['vm_id_type'], datacenter=datacenter, cluster=cluster)
if not vm:
module.fail_json(msg='Unable to find virtual machine.')
try:
msg = execute_command(content, vm, module.params)
module.exit_json(changed=True, uuid=vm.summary.config.uuid, msg=msg)
except vmodl.RuntimeFault as runtime_fault:
module.fail_json(changed=False, msg=runtime_fault.msg)
except vmodl.MethodFault as method_fault:
module.fail_json(changed=False, msg=method_fault.msg)
except Exception as e:
module.fail_json(changed=False, msg=str(e))
vm_shell_mgr = VMwareShellManager(module)
if __name__ == '__main__':
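The wait_for_process/process_exists_in_guest pair above is a poll-until-done-or-timeout pattern: probe the guest, return the result once the process exits, give up after the timeout. A standalone sketch of the same loop (names are illustrative, not part of the module):

```python
import time

def wait_for(poll, timeout=3600, interval=5):
    # poll() returns (still_running, payload); return the payload once
    # the probed condition completes, or None when the timeout elapses,
    # mirroring VMwareShellManager.wait_for_process.
    deadline = time.time() + timeout
    while time.time() < deadline:
        still_running, payload = poll()
        if not still_running:
            return payload
        time.sleep(interval)
    return None
```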

View file

@@ -1038,6 +1038,9 @@ def main():
except AttributeError:
module.fail_json(msg='You need to have PyOpenSSL>=0.15')
if module.params['provider'] != 'assertonly' and module.params['csr_path'] is None:
module.fail_json(msg='csr_path is required when provider is not assertonly')
base_dir = os.path.dirname(module.params['path'])
if not os.path.isdir(base_dir):
module.fail_json(

View file

@@ -106,6 +113,13 @@ options:
may be specified as a symbolic mode (for example, C(u+rwx) or C(u=rw,g=r,o=r)). As of
version 2.6, the mode may also be the special string C(preserve). C(preserve) means that
the file will be given the same permissions as the source file."
output_encoding:
description:
- Overrides the encoding used to write the template file defined by C(dest).
- It defaults to C('utf-8'), but any encoding supported by Python can be used.
- The source template file must always be encoded using C('utf-8'), for homogeneity.
default: 'utf-8'
version_added: "2.7"
notes:
- For Windows you can use M(win_template) which uses '\\r\\n' as C(newline_sequence).
- Including a string that uses a date in the template will result in the template being marked 'changed' each time

View file

@@ -125,7 +125,7 @@ EXAMPLES = '''
server_url: http://127.0.0.1
template_name: Test Template
template_json:
zabbix_export:
zabbix_export:
version: '3.2'
templates:
- name: Template for Testing
@@ -417,7 +417,8 @@ class Template(object):
# old api version support here
api_version = self._zapi.api_version()
# updateExisting for application removed from zabbix api after 3.2
if LooseVersion(api_version) <= LooseVersion('3.2.x'):
if LooseVersion(api_version).version[:2] <= LooseVersion(
'3.2').version:
update_rules['applications']['updateExisting'] = True
self._zapi.configuration.import_({

View file

@@ -261,12 +261,13 @@ import datetime
import json
import os
import shutil
import sys
import tempfile
import traceback
from collections import Mapping, Sequence
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six import iteritems, string_types
from ansible.module_utils.six import PY2, iteritems, string_types
from ansible.module_utils.six.moves.urllib.parse import urlencode, urlsplit
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.urls import fetch_url, url_argument_spec
@@ -564,10 +565,8 @@ def main():
ukey = key.replace("-", "_").lower()
uresp[ukey] = value
try:
if 'location' in uresp:
uresp['location'] = absolute_location(url, uresp['location'])
except KeyError:
pass
# Default content_encoding to try
content_encoding = 'utf-8'
@@ -581,7 +580,8 @@ def main():
js = json.loads(u_content)
uresp['json'] = js
except:
pass
if PY2:
sys.exc_clear() # Avoid false positive traceback in fail_json() on Python 2
else:
u_content = to_text(content, encoding=content_encoding)
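The absolute_location() call in the hunk above resolves a possibly-relative Location header against the request URL. The standard library covers this resolution directly; a sketch of the idea (not the uri module's exact helper):

```python
from urllib.parse import urljoin

def absolute_location(base_url, location):
    # Resolve a relative redirect target against the URL that produced
    # it; an already-absolute location passes through unchanged.
    return urljoin(base_url, location)
```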

View file

@@ -557,6 +557,8 @@ class CloudflareAPI(object):
do_update = True
if (params['priority'] is not None) and ('priority' in cur_record) and (cur_record['priority'] != params['priority']):
do_update = True
if ('proxied' in new_record) and ('proxied' in cur_record) and (cur_record['proxied'] != params['proxied']):
do_update = True
if ('data' in new_record) and ('data' in cur_record):
if (cur_record['data'] != new_record['data']):
do_update = True

View file

@@ -0,0 +1,191 @@
#!/usr/bin/python
# Copyright (c) 2018 Red Hat, Inc.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: nios_naptr_record
version_added: "2.6"
author: "Blair Rampling (@brampling)"
short_description: Configure Infoblox NIOS NAPTR records
description:
- Adds and/or removes instances of NAPTR record objects from
Infoblox NIOS servers. This module manages NIOS C(record:naptr) objects
using the Infoblox WAPI interface over REST.
requirements:
- infoblox_client
extends_documentation_fragment: nios
options:
name:
description:
- Specifies the fully qualified hostname to add or remove from
the system
required: true
view:
description:
- Sets the DNS view to associate this NAPTR record with. The DNS
view must already be configured on the system
required: true
default: default
aliases:
- dns_view
order:
description:
- Configures the order (0-65535) for this NAPTR record. This parameter
specifies the order in which the NAPTR rules are applied when
multiple rules are present.
required: true
preference:
description:
- Configures the preference (0-65535) for this NAPTR record. The
preference field determines the order NAPTR records are processed
when multiple records with the same order parameter are present.
required: true
replacement:
description:
- Configures the replacement field for this NAPTR record.
For nonterminal NAPTR records, this field specifies the
next domain name to look up.
required: true
services:
description:
- Configures the services field (128 characters maximum) for this
NAPTR record. The services field contains protocol and service
identifiers, such as "http+E2U" or "SIPS+D2T".
required: false
flags:
description:
- Configures the flags field for this NAPTR record. These control the
interpretation of the fields for an NAPTR record object. Supported
values for the flags field are "U", "S", "P" and "A".
required: false
regexp:
description:
- Configures the regexp field for this NAPTR record. This is the
regular expression-based rewriting rule of the NAPTR record. This
should be a POSIX compliant regular expression, including the
substitution rule and flags. Refer to RFC 2915 for the field syntax
details.
required: false
ttl:
description:
- Configures the TTL to be associated with this NAPTR record
extattrs:
description:
- Allows for the configuration of Extensible Attributes on the
instance of the object. This argument accepts a set of key / value
pairs for configuration.
comment:
description:
- Configures a text string comment to be associated with the instance
of this object. The provided text string will be configured on the
object instance.
state:
description:
- Configures the intended state of the instance of the object on
the NIOS server. When this value is set to C(present), the object
is configured on the device and when this value is set to C(absent)
the value is removed (if necessary) from the device.
default: present
choices:
- present
- absent
'''
EXAMPLES = '''
- name: configure a NAPTR record
nios_naptr_record:
name: '*.subscriber-100.ansiblezone.com'
order: 1000
preference: 10
replacement: replacement1.network.ansiblezone.com
state: present
provider:
host: "{{ inventory_hostname_short }}"
username: admin
password: admin
connection: local
- name: add a comment to an existing NAPTR record
nios_naptr_record:
name: '*.subscriber-100.ansiblezone.com'
order: 1000
preference: 10
replacement: replacement1.network.ansiblezone.com
comment: this is a test comment
state: present
provider:
host: "{{ inventory_hostname_short }}"
username: admin
password: admin
connection: local
- name: remove a NAPTR record from the system
nios_naptr_record:
name: '*.subscriber-100.ansiblezone.com'
order: 1000
preference: 10
replacement: replacement1.network.ansiblezone.com
state: absent
provider:
host: "{{ inventory_hostname_short }}"
username: admin
password: admin
connection: local
'''
RETURN = ''' # '''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six import iteritems
from ansible.module_utils.net_tools.nios.api import WapiModule
def main():
''' Main entry point for module execution
'''
ib_spec = dict(
name=dict(required=True, ib_req=True),
view=dict(default='default', aliases=['dns_view'], ib_req=True),
order=dict(type='int', ib_req=True),
preference=dict(type='int', ib_req=True),
replacement=dict(ib_req=True),
services=dict(),
flags=dict(),
regexp=dict(),
ttl=dict(type='int'),
extattrs=dict(type='dict'),
comment=dict(),
)
argument_spec = dict(
provider=dict(required=True),
state=dict(default='present', choices=['present', 'absent'])
)
argument_spec.update(ib_spec)
argument_spec.update(WapiModule.provider_spec)
module = AnsibleModule(argument_spec=argument_spec,
supports_check_mode=True)
wapi = WapiModule(module)
result = wapi.run('record:naptr', ib_spec)
module.exit_json(**result)
if __name__ == '__main__':
main()
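The order and preference semantics described in the documentation above determine how resolvers process multiple NAPTR records: ascending order first, then ascending preference among equal orders (RFC 2915). A small illustration (the data and helper are hypothetical, not part of the module):

```python
def naptr_processing_order(records):
    # Apply NAPTR rules by ascending 'order'; ties within an order are
    # broken by ascending 'preference'.
    return sorted(records, key=lambda r: (r['order'], r['preference']))
```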

View file

@@ -59,10 +59,11 @@ options:
zone:
description:
- DNS record will be modified on this C(zone).
required: true
- When omitted, DNS will be queried to attempt to find the correct zone.
- Starting with Ansible 2.7 this parameter is optional.
record:
description:
- Sets the DNS record to modify.
- Sets the DNS record to modify. When C(zone) is omitted, this has to be absolute (ending with a dot).
required: true
type:
description:
@@ -172,11 +173,23 @@ class RecordManager(object):
def __init__(self, module):
self.module = module
if module.params['zone'][-1] != '.':
self.zone = module.params['zone'] + '.'
if module.params['zone'] is None:
if module.params['record'][-1] != '.':
self.module.fail_json(msg='record must be absolute when omitting zone parameter')
try:
self.zone = dns.resolver.zone_for_name(self.module.params['record']).to_text()
except (dns.exception.Timeout, dns.resolver.NoNameservers, dns.resolver.NoRootSOA) as e:
self.module.fail_json(msg='Zone resolver error (%s): %s' % (e.__class__.__name__, to_native(e)))
if self.zone is None:
self.module.fail_json(msg='Unable to find zone, dnspython returned None')
else:
self.zone = module.params['zone']
if self.zone[-1] != '.':
self.zone += '.'
if module.params['key_name']:
try:
self.keyring = dns.tsigkeyring.from_text({
@@ -332,7 +345,7 @@ def main():
key_name=dict(required=False, type='str'),
key_secret=dict(required=False, type='str', no_log=True),
key_algorithm=dict(required=False, default='hmac-md5', choices=tsig_algs, type='str'),
zone=dict(required=True, type='str'),
zone=dict(required=False, default=None, type='str'),
record=dict(required=True, type='str'),
type=dict(required=False, default='A', type='str'),
ttl=dict(required=False, default=3600, type='int'),

View file

@@ -57,7 +57,7 @@ extends_documentation_fragment: cnos
options:
interfaceRange:
description:
- This specifies the interface range in which the port aggregation is envisaged
- This specifies the interface range in which the port channel is envisaged
required: Yes
default: Null
interfaceOption:
@@ -65,21 +65,21 @@ options:
- This specifies the attribute you specify subsequent to interface command
required: Yes
default: Null
choices: [None, ethernet, loopback, mgmt, port-aggregation, vlan]
choices: [None, ethernet, loopback, mgmt, port-channel, vlan]
interfaceArg1:
description:
- This is an overloaded interface first argument. Usage of this argument can be found in the User Guide referenced above.
required: Yes
default: Null
choices: [aggregation-group, bfd, bridgeport, description, duplex, flowcontrol, ip, ipv6, lacp, lldp,
choices: [channel-group, bfd, switchport, description, duplex, flowcontrol, ip, ipv6, lacp, lldp,
load-interval, mac, mac-address, mac-learn, microburst-detection, mtu, service, service-policy,
shutdown, snmp, spanning-tree, speed, storm-control, vlan, vrrp, port-aggregation]
shutdown, snmp, spanning-tree, speed, storm-control, vlan, vrrp, port-channel]
interfaceArg2:
description:
- This is an overloaded interface second argument. Usage of this argument can be found in the User Guide referenced above.
required: No
default: Null
choices: [aggregation-group number, access or mode or trunk, description, auto or full or half,
choices: [channel-group number, access or mode or trunk, description, auto or full or half,
receive or send, port-priority, suspend-individual, timeout, receive or transmit or trap-notification,
tlv-select, Load interval delay in seconds, counter, Name for the MAC Access List, mac-address in HHHH.HHHH.HHHH format,
THRESHOLD Value in unit of buffer cell, <64-9216> MTU in bytes-<64-9216> for L2 packet,<576-9216> for L3 IPv4 packet,
@@ -132,7 +132,7 @@ options:
EXAMPLES = '''
Tasks : The following are examples of using the module cnos_interface. These are written in the main.yml file of the tasks directory.
---
- name: Test Interface Ethernet - aggregation-group
- name: Test Interface Ethernet - channel-group
cnos_interface:
host: "{{ inventory_hostname }}"
username: "{{ hostvars[inventory_hostname]['ansible_ssh_user'] }}"
@@ -142,11 +142,11 @@ Tasks : The following are examples of using the module cnos_interface. These are
outputfile: "./results/test_interface_{{ inventory_hostname }}_output.txt"
interfaceOption: 'ethernet'
interfaceRange: 1
interfaceArg1: "aggregation-group"
interfaceArg1: "channel-group"
interfaceArg2: 33
interfaceArg3: "on"
- name: Test Interface Ethernet - bridge-port
- name: Test Interface Ethernet - switchport
cnos_interface:
host: "{{ inventory_hostname }}"
username: "{{ hostvars[inventory_hostname]['ansible_ssh_user'] }}"
@@ -156,11 +156,11 @@ Tasks : The following are examples of using the module cnos_interface. These are
outputfile: "./results/test_interface_{{ inventory_hostname }}_output.txt"
interfaceOption: 'ethernet'
interfaceRange: 33
interfaceArg1: "bridge-port"
interfaceArg1: "switchport"
interfaceArg2: "access"
interfaceArg3: 33
- name: Test Interface Ethernet - bridgeport mode
- name: Test Interface Ethernet - switchport mode
cnos_interface:
host: "{{ inventory_hostname }}"
username: "{{ hostvars[inventory_hostname]['ansible_ssh_user'] }}"
@@ -170,7 +170,7 @@ Tasks : The following are examples of using the module cnos_interface. These are
outputfile: "./results/test_interface_{{ inventory_hostname }}_output.txt"
interfaceOption: 'ethernet'
interfaceRange: 33
interfaceArg1: "bridge-port"
interfaceArg1: "switchport"
interfaceArg2: "mode"
interfaceArg3: "access"
@@ -505,70 +505,23 @@ def main():
interfaceArg7=dict(required=False),),
supports_check_mode=False)
username = module.params['username']
password = module.params['password']
enablePassword = module.params['enablePassword']
interfaceRange = module.params['interfaceRange']
interfaceOption = module.params['interfaceOption']
interfaceArg1 = module.params['interfaceArg1']
interfaceArg2 = module.params['interfaceArg2']
interfaceArg3 = module.params['interfaceArg3']
interfaceArg4 = module.params['interfaceArg4']
interfaceArg5 = module.params['interfaceArg5']
interfaceArg6 = module.params['interfaceArg6']
interfaceArg7 = module.params['interfaceArg7']
outputfile = module.params['outputfile']
hostIP = module.params['host']
deviceType = module.params['deviceType']
output = ""
if not HAS_PARAMIKO:
module.fail_json(msg='paramiko is required for this module')
# Create instance of SSHClient object
remote_conn_pre = paramiko.SSHClient()
# Automatically add untrusted hosts (make sure okay for security policy in your environment)
remote_conn_pre.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# initiate SSH connection with the switch
remote_conn_pre.connect(hostIP, username=username, password=password)
time.sleep(2)
# Use invoke_shell to establish an 'interactive session'
remote_conn = remote_conn_pre.invoke_shell()
time.sleep(2)
# Enable and enter configure terminal then send command
output = output + cnos.waitForDeviceResponse("\n", ">", 2, remote_conn)
output = output + cnos.enterEnableModeForDevice(enablePassword, 3, remote_conn)
# Make terminal length = 0
output = output + cnos.waitForDeviceResponse("terminal length 0\n", "#", 2, remote_conn)
# Go to config mode
output = output + cnos.waitForDeviceResponse("configure device\n", "(config)#", 2, remote_conn)
output = ''
# Send the CLi command
if(interfaceOption is None or interfaceOption == ""):
output = output + cnos.interfaceConfig(remote_conn, deviceType, "(config)#", 2, None, interfaceRange,
interfaceArg1, interfaceArg2, interfaceArg3, interfaceArg4, interfaceArg5, interfaceArg6, interfaceArg7)
output = output + cnos.interfaceConfig(module, "(config)#", None, None)
elif(interfaceOption == "ethernet"):
output = output + cnos.interfaceConfig(remote_conn, deviceType, "(config)#", 2, "ethernet", interfaceRange,
interfaceArg1, interfaceArg2, interfaceArg3, interfaceArg4, interfaceArg5, interfaceArg6, interfaceArg7)
output = output + cnos.interfaceConfig(module, "(config)#", "ethernet", None)
elif(interfaceOption == "loopback"):
output = output + cnos.interfaceConfig(remote_conn, deviceType, "(config)#", 2, "loopback", interfaceRange,
interfaceArg1, interfaceArg2, interfaceArg3, interfaceArg4, interfaceArg5, interfaceArg6, interfaceArg7)
output = output + cnos.interfaceConfig(module, "(config)#", "loopback", None)
elif(interfaceOption == "mgmt"):
output = output + cnos.interfaceConfig(remote_conn, deviceType, "(config)#", 2, "mgmt", interfaceRange,
interfaceArg1, interfaceArg2, interfaceArg3, interfaceArg4, interfaceArg5, interfaceArg6, interfaceArg7)
elif(interfaceOption == "port-aggregation"):
output = output + cnos.interfaceConfig(remote_conn, deviceType, "(config)#", 2, "port-aggregation", interfaceRange,
interfaceArg1, interfaceArg2, interfaceArg3, interfaceArg4, interfaceArg5, interfaceArg6, interfaceArg7)
output = output + cnos.interfaceConfig(module, "(config)#", "mgmt", None)
elif(interfaceOption == "port-channel"):
output = output + cnos.interfaceConfig(module, "(config)#", "port-channel", None)
elif(interfaceOption == "vlan"):
output = output + cnos.interfaceConfig(remote_conn, deviceType, "(config)#", 2, "vlan", interfaceRange,
interfaceArg1, interfaceArg2, interfaceArg3, interfaceArg4, interfaceArg5, interfaceArg6, interfaceArg7)
output = output + cnos.interfaceConfig(module, "(config)#", "vlan", None)
else:
output = "Invalid interface option \n"
# Save it into the file
@@ -579,7 +532,7 @@ def main():
# Logic to check when changes occur or not
errorMsg = cnos.checkOutputForError(output)
if(errorMsg is None):
module.exit_json(changed=True, msg="Interface Configuration is done")
module.exit_json(changed=True, msg="Interface Configuration is Accomplished")
else:
module.fail_json(msg=errorMsg)

View file

@@ -32,10 +32,10 @@ DOCUMENTATION = '''
---
module: cnos_portchannel
author: "Anil Kumar Muraleedharan (@amuraleedhar)"
short_description: Manage portchannel (port aggregation) configuration on devices running Lenovo CNOS
short_description: Manage portchannel (port channel) configuration on devices running Lenovo CNOS
description:
- This module allows you to work with port aggregation related configurations. The operators
used are overloaded to ensure control over switch port aggregation configurations. Apart
- This module allows you to work with port channel related configurations. The operators
used are overloaded to ensure control over switch port channel configurations. Apart
from the regular device connection related attributes, there are five LAG arguments which are
overloaded variables that will perform further configurations. They are interfaceArg1, interfaceArg2,
interfaceArg3, interfaceArg4, and interfaceArg5. For more details on how to use these arguments, see
@@ -50,7 +50,7 @@ extends_documentation_fragment: cnos
options:
interfaceRange:
description:
- This specifies the interface range in which the port aggregation is envisaged
- This specifies the interface range in which the port channel is envisaged
required: Yes
default: Null
interfaceArg1:
@@ -58,15 +58,15 @@ options:
- This is an overloaded Port Channel first argument. Usage of this argument can be found in the User Guide referenced above.
required: Yes
default: Null
choices: [aggregation-group, bfd, bridgeport, description, duplex, flowcontrol, ip, ipv6, lacp, lldp,
choices: [channel-group, bfd, bridgeport, description, duplex, flowcontrol, ip, ipv6, lacp, lldp,
load-interval, mac, mac-address, mac-learn, microburst-detection, mtu, service, service-policy,
shutdown, snmp, spanning-tree, speed, storm-control, vlan, vrrp, port-aggregation]
shutdown, snmp, spanning-tree, speed, storm-control, vlan, vrrp, port-channel]
interfaceArg2:
description:
- This is an overloaded Port Channel second argument. Usage of this argument can be found in the User Guide referenced above.
required: No
default: Null
choices: [aggregation-group number, access or mode or trunk, description, auto or full or half,
choices: [channel-group number, access or mode or trunk, description, auto or full or half,
receive or send, port-priority, suspend-individual, timeout, receive or transmit or trap-notification,
tlv-select, Load interval delay in seconds, counter, Name for the MAC Access List, mac-address in HHHH.HHHH.HHHH format,
THRESHOLD Value in unit of buffer cell, <64-9216> MTU in bytes-<64-9216> for L2 packet,<576-9216> for
@@ -118,7 +118,7 @@ options:
EXAMPLES = '''
Tasks : The following are examples of using the module cnos_portchannel. These are written in the main.yml file of the tasks directory.
---
- name: Test Port Channel - aggregation-group
- name: Test Port Channel - channel-group
cnos_portchannel:
host: "{{ inventory_hostname }}"
username: "{{ hostvars[inventory_hostname]['ansible_ssh_user'] }}"
@@ -126,11 +126,11 @@ Tasks : The following are examples of using the module cnos_portchannel. These a
deviceType: "{{ hostvars[inventory_hostname]['deviceType'] }}"
outputfile: "./results/test_portchannel_{{ inventory_hostname }}_output.txt"
interfaceRange: 33
interfaceArg1: "aggregation-group"
interfaceArg1: "channel-group"
interfaceArg2: 33
interfaceArg3: "on"
- name: Test Port Channel - aggregation-group - Interface Range
- name: Test Port Channel - channel-group - Interface Range
cnos_portchannel:
host: "{{ inventory_hostname }}"
username: "{{ hostvars[inventory_hostname]['ansible_ssh_user'] }}"
@@ -138,7 +138,7 @@ Tasks : The following are examples of using the module cnos_portchannel. These a
deviceType: "{{ hostvars[inventory_hostname]['deviceType'] }}"
outputfile: "./results/test_portchannel_{{ inventory_hostname }}_output.txt"
interfaceRange: "1/1-2"
interfaceArg1: "aggregation-group"
interfaceArg1: "channel-group"
interfaceArg2: 33
interfaceArg3: "on"
@@ -237,17 +237,6 @@ Tasks : The following are examples of using the module cnos_portchannel. These a
interfaceArg3: 2
interfaceArg4: 33
#- name: Test Port Channel - mac
# cnos_portchannel:
# host: "{{ inventory_hostname }}"
# username: "{{ hostvars[inventory_hostname]['ansible_ssh_user'] }}"
# password: "{{ hostvars[inventory_hostname]['ansible_ssh_pass'] }}"
# deviceType: "{{ hostvars[inventory_hostname]['deviceType'] }}"
# outputfile: "./results/test_portchannel_{{ inventory_hostname }}_output.txt"
# interfaceRange: 33,
# interfaceArg1: "mac"
# interfaceArg2: "copp-system-acl-vlag-hc"
- name: Test Port Channel - microburst-detection
cnos_portchannel:
host: "{{ inventory_hostname }}"
@@ -305,16 +294,16 @@ Tasks : The following are examples of using the module cnos_portchannel. These a
interfaceArg2: "broadcast"
interfaceArg3: 12.5
#- name: Test Port Channel - vlan
# cnos_portchannel:
# host: "{{ inventory_hostname }}"
# username: "{{ hostvars[inventory_hostname]['ansible_ssh_user'] }}"
# password: "{{ hostvars[inventory_hostname]['ansible_ssh_pass'] }}"
# deviceType: "{{ hostvars[inventory_hostname]['deviceType'] }}"
# outputfile: "./results/test_portchannel_{{ inventory_hostname }}_output.txt"
# interfaceRange: 33
# interfaceArg1: "vlan"
# interfaceArg2: "disable"
- name: Test Port Channel - vlan
cnos_portchannel:
host: "{{ inventory_hostname }}"
username: "{{ hostvars[inventory_hostname]['ansible_ssh_user'] }}"
password: "{{ hostvars[inventory_hostname]['ansible_ssh_pass'] }}"
deviceType: "{{ hostvars[inventory_hostname]['deviceType'] }}"
outputfile: "./results/test_portchannel_{{ inventory_hostname }}_output.txt"
interfaceRange: 33
interfaceArg1: "vlan"
interfaceArg2: "disable"
- name: Test Port Channel - vrrp
cnos_portchannel:
@@ -378,35 +367,6 @@ Tasks : The following are examples of using the module cnos_portchannel. These a
interfaceArg2: "port"
interfaceArg3: "anil"
- name: Test Port Channel - bfd
cnos_portchannel:
host: "{{ inventory_hostname }}"
username: "{{ hostvars[inventory_hostname]['ansible_ssh_user'] }}"
password: "{{ hostvars[inventory_hostname]['ansible_ssh_pass'] }}"
deviceType: "{{ hostvars[inventory_hostname]['deviceType'] }}"
outputfile: "./results/test_portchannel_{{ inventory_hostname }}_output.txt"
interfaceRange: 33
interfaceArg1: "bfd"
interfaceArg2: "interval"
interfaceArg3: 55
interfaceArg4: 55
interfaceArg5: 33
- name: Test Port Channel - bfd
cnos_portchannel:
host: "{{ inventory_hostname }}"
username: "{{ hostvars[inventory_hostname]['ansible_ssh_user'] }}"
password: "{{ hostvars[inventory_hostname]['ansible_ssh_pass'] }}"
deviceType: "{{ hostvars[inventory_hostname]['deviceType'] }}"
outputfile: "./results/test_portchannel_{{ inventory_hostname }}_output.txt"
interfaceRange: 33
interfaceArg1: "bfd"
interfaceArg2: "ipv4"
interfaceArg3: "authentication"
interfaceArg4: "meticulous-keyed-md5"
interfaceArg5: "key-chain"
interfaceArg6: "mychain"
'''
RETURN = '''
msg:
@@ -417,11 +377,6 @@ msg:
'''
import sys
try:
import paramiko
HAS_PARAMIKO = True
except ImportError:
HAS_PARAMIKO = False
import time
import socket
import array
@@ -457,56 +412,13 @@ def main():
interfaceArg7=dict(required=False),),
supports_check_mode=False)
username = module.params['username']
password = module.params['password']
enablePassword = module.params['enablePassword']
interfaceRange = module.params['interfaceRange']
interfaceArg1 = module.params['interfaceArg1']
interfaceArg2 = module.params['interfaceArg2']
interfaceArg3 = module.params['interfaceArg3']
interfaceArg4 = module.params['interfaceArg4']
interfaceArg5 = module.params['interfaceArg5']
interfaceArg6 = module.params['interfaceArg6']
interfaceArg7 = module.params['interfaceArg7']
outputfile = module.params['outputfile']
hostIP = module.params['host']
deviceType = module.params['deviceType']
output = ""
if not HAS_PARAMIKO:
module.fail_json(msg='paramiko is required for this module')
# Create instance of SSHClient object
remote_conn_pre = paramiko.SSHClient()
# Automatically add untrusted hosts (make sure okay for security policy in your environment)
remote_conn_pre.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# initiate SSH connection with the switch
remote_conn_pre.connect(hostIP, username=username, password=password)
time.sleep(2)
# Use invoke_shell to establish an 'interactive session'
remote_conn = remote_conn_pre.invoke_shell()
time.sleep(2)
# Enable and enter configure terminal then send command
output = output + cnos.waitForDeviceResponse("\n", ">", 2, remote_conn)
output = output + cnos.enterEnableModeForDevice(enablePassword, 3, remote_conn)
# Make terminal length = 0
output = output + cnos.waitForDeviceResponse("terminal length 0\n", "#", 2, remote_conn)
# Go to config mode
output = output + cnos.waitForDeviceResponse("configure device\n", "(config)#", 2, remote_conn)
output = ''
# Send the CLi command
if(interfaceArg1 == "port-aggregation"):
output = output + cnos.portChannelConfig(remote_conn, deviceType, "(config)#", 2, interfaceArg1,
interfaceArg2, interfaceArg3, interfaceArg4, interfaceArg5, interfaceArg6, interfaceArg7)
if(module.params['interfaceArg1'] == "port-channel"):
output = output + cnos.portChannelConfig(module, "(config)#", None)
else:
output = output + cnos.interfaceConfig(remote_conn, deviceType, "(config)#", 2, "port-aggregation", interfaceRange,
interfaceArg1, interfaceArg2, interfaceArg3, interfaceArg4, interfaceArg5, interfaceArg6, interfaceArg7)
output = output + cnos.interfaceConfig(module, "(config)#", "port-channel", None)
# Save it into the file
file = open(outputfile, "a")
@@ -516,7 +428,7 @@ def main():
# Logic to check when changes occur or not
errorMsg = cnos.checkOutputForError(output)
if(errorMsg is None):
module.exit_json(changed=True, msg="Port Aggregation configuration is done")
module.exit_json(changed=True, msg="Port Channel Configuration is done")
else:
module.fail_json(msg=errorMsg)

View file

@@ -252,11 +252,6 @@ msg:
'''
import sys
try:
import paramiko
HAS_PARAMIKO = True
except ImportError:
HAS_PARAMIKO = False
import time
import socket
import array
@@ -291,54 +286,11 @@ def main():
vlagArg4=dict(required=False),),
supports_check_mode=False)
username = module.params['username']
password = module.params['password']
enablePassword = module.params['enablePassword']
outputfile = module.params['outputfile']
hostIP = module.params['host']
deviceType = module.params['deviceType']
vlagArg1 = module.params['vlagArg1']
vlagArg2 = module.params['vlagArg2']
vlagArg3 = module.params['vlagArg3']
vlagArg4 = module.params['vlagArg4']
output = ""
if not HAS_PARAMIKO:
module.fail_json(msg='paramiko is required for this module')
# Create instance of SSHClient object
remote_conn_pre = paramiko.SSHClient()
# Automatically add untrusted hosts (make sure okay for security policy in
# your environment)
remote_conn_pre.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# initiate SSH connection with the switch
remote_conn_pre.connect(hostIP, username=username, password=password)
time.sleep(2)
# Use invoke_shell to establish an 'interactive session'
remote_conn = remote_conn_pre.invoke_shell()
time.sleep(2)
# Enable and enter configure terminal then send command
output = output + cnos.waitForDeviceResponse("\n", ">", 2, remote_conn)
output = output + \
cnos.enterEnableModeForDevice(enablePassword, 3, remote_conn)
# Make terminal length = 0
output = output + \
cnos.waitForDeviceResponse("terminal length 0\n", "#", 2, remote_conn)
# Go to config mode
output = output + \
cnos.waitForDeviceResponse(
"configure device\n", "(config)#", 2, remote_conn)
# Send the CLi command
output = output + cnos.vlagConfig(
remote_conn, deviceType, "(config)#", 2, vlagArg1, vlagArg2, vlagArg3,
vlagArg4)
output = output + str(cnos.vlagConfig(module, '(config)#', None))
# Save it into the file
file = open(outputfile, "a")
@@ -348,7 +300,7 @@ def main():
# need to add logic to check when changes occur or not
errorMsg = cnos.checkOutputForError(output)
if(errorMsg is None):
module.exit_json(changed=True, msg="vlag configurations accomplished")
module.exit_json(changed=True, msg="VLAG configurations accomplished")
else:
module.fail_json(msg=errorMsg)

View file

@@ -382,7 +382,8 @@ def main():
candidate = get_candidate(module)
running = get_running_config(module, contents, flags=flags)
response = connection.get_diff(candidate=candidate, running=running, match=match, diff_ignore_lines=diff_ignore_lines, path=path, replace=replace)
response = connection.get_diff(candidate=candidate, running=running, diff_match=match, diff_ignore_lines=diff_ignore_lines, path=path,
diff_replace=replace)
config_diff = response['config_diff']
if config_diff:

View file

@@ -420,7 +420,8 @@ def main():
candidate = get_candidate_config(module)
running = get_running_config(module, contents, flags=flags)
response = connection.get_diff(candidate=candidate, running=running, match=match, diff_ignore_lines=diff_ignore_lines, path=path, replace=replace)
response = connection.get_diff(candidate=candidate, running=running, diff_match=match, diff_ignore_lines=diff_ignore_lines, path=path,
diff_replace=replace)
config_diff = response['config_diff']
banner_diff = response['banner_diff']

View file

@@ -24,7 +24,9 @@ description:
the netconf system service running on Junos devices. This module
can be used to easily enable the Netconf API. Netconf provides
a programmatic interface for working with configuration and state
resources as defined in RFC 6242.
resources as defined in RFC 6242. If C(netconf_port) is not specified
in the task, netconf will be enabled on port 830 by default.
extends_documentation_fragment: junos
options:
netconf_port:
@@ -50,6 +52,9 @@ notes:
- Tested against vSRX JUNOS version 15.1X49-D15.4, vqfx-10000 JUNOS Version 15.1X53-D60.4.
- Recommended connection is C(network_cli). See L(the Junos OS Platform Options,../network/user_guide/platform_junos.html).
- This module also works with C(local) connections for legacy playbooks.
- If C(netconf_port) is not specified in the task, netconf will be enabled on port 830 by default.
  Although C(netconf_port) can be any value from 1 through 65535, avoid configuring access on a port
  that is normally assigned to another service, to prevent potential resource conflicts.
"""
EXAMPLES = """

View file

@@ -100,7 +100,7 @@ options:
the modified lines are pushed to the device in configuration
mode. If the replace argument is set to I(block) then the entire
command block is pushed to the device in configuration mode if any
line is not correct. I(replace config) is supported only on Nexus 9K device.
line is not correct. replace I(config) is supported only on Nexus 9K devices.
default: line
choices: ['line', 'block', 'config']
force:
@@ -281,7 +281,7 @@ backup_path:
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.connection import ConnectionError
from ansible.module_utils.network.common.config import NetworkConfig, dumps
from ansible.module_utils.network.nxos.nxos import get_config, load_config, run_commands
from ansible.module_utils.network.nxos.nxos import get_config, load_config, run_commands, get_connection
from ansible.module_utils.network.nxos.nxos import get_capabilities
from ansible.module_utils.network.nxos.nxos import nxos_argument_spec
from ansible.module_utils.network.nxos.nxos import check_args as nxos_check_args
@@ -296,19 +296,21 @@ def get_running_config(module, config=None):
else:
flags = ['all']
contents = get_config(module, flags=flags)
return NetworkConfig(indent=2, contents=contents)
return contents
def get_candidate(module):
candidate = NetworkConfig(indent=2)
candidate = ''
if module.params['src']:
if module.params['replace'] != 'config':
candidate.load(module.params['src'])
candidate = module.params['src']
if module.params['replace'] == 'config':
candidate.load('config replace {0}'.format(module.params['replace_src']))
candidate = 'config replace {0}'.format(module.params['replace_src'])
elif module.params['lines']:
candidate_obj = NetworkConfig(indent=2)
parents = module.params['parents'] or list()
candidate.add(module.params['lines'], parents=parents)
candidate_obj.add(module.params['lines'], parents=parents)
candidate = dumps(candidate_obj, 'raw')
return candidate
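The refactored `get_candidate` above now returns a plain string instead of a `NetworkConfig` object. The lines-with-parents branch can be approximated in isolation; the indentation rule below is an assumption, since `NetworkConfig.add` and `dumps(obj, 'raw')` are Ansible-internal:

```python
def build_candidate(lines, parents=None, indent=2):
    # Rough stand-in for NetworkConfig.add + dumps(obj, 'raw'): each parent
    # opens a section, and children are indented one level per parent.
    parents = parents or []
    out = [' ' * (indent * depth) + p for depth, p in enumerate(parents)]
    out.extend(' ' * (indent * len(parents)) + line for line in lines)
    return '\n'.join(out)
```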
@@ -404,7 +406,12 @@ def main():
if '9K' not in os_platform:
module.fail_json(msg='replace: config is supported only on Nexus 9K series switches')
if module.params['replace_src']:
diff_ignore_lines = module.params['diff_ignore_lines']
path = module.params['parents']
connection = get_connection(module)
contents = None
replace_src = module.params['replace_src']
if replace_src:
if module.params['replace'] != 'config':
module.fail_json(msg='replace: config is required with replace_src')
@@ -414,48 +421,51 @@ def main():
if module.params['backup']:
result['__backup__'] = contents
if any((module.params['src'], module.params['lines'], module.params['replace_src'])):
if any((module.params['src'], module.params['lines'], replace_src)):
match = module.params['match']
replace = module.params['replace']
commit = not module.check_mode
candidate = get_candidate(module)
if match != 'none' and replace != 'config':
config = get_running_config(module, config)
path = module.params['parents']
configobjs = candidate.difference(config, match=match, replace=replace, path=path)
else:
configobjs = candidate.items
if configobjs:
commands = dumps(configobjs, 'commands').split('\n')
if module.params['before']:
commands[:0] = module.params['before']
if module.params['after']:
commands.extend(module.params['after'])
result['commands'] = commands
result['updates'] = commands
if not module.check_mode:
load_config(module, commands)
running = get_running_config(module, contents)
if replace_src:
commands = candidate.split('\n')
result['commands'] = result['updates'] = commands
if commit:
load_config(module, commands, replace=replace_src)
result['changed'] = True
else:
response = connection.get_diff(candidate=candidate, running=running, diff_match=match, diff_ignore_lines=diff_ignore_lines, path=path,
diff_replace=replace)
config_diff = response['config_diff']
if config_diff:
commands = config_diff.split('\n')
if module.params['before']:
commands[:0] = module.params['before']
if module.params['after']:
commands.extend(module.params['after'])
result['commands'] = commands
result['updates'] = commands
if commit:
load_config(module, commands, replace=replace_src)
result['changed'] = True
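In the diff-based branch above, the diff text becomes the command list with the optional `before`/`after` commands spliced around it. That assembly, extracted as a standalone sketch:

```python
def assemble_commands(config_diff, before=None, after=None):
    # Split the diff text into commands, then splice any before/after
    # commands around it, mirroring the branch above.
    commands = config_diff.split('\n')
    if before:
        commands[:0] = before
    if after:
        commands.extend(after)
    return commands
```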
running_config = module.params['running_config']
startup_config = None
diff_ignore_lines = module.params['diff_ignore_lines']
if module.params['save_when'] == 'always' or module.params['save']:
save_config(module, result)
elif module.params['save_when'] == 'modified':
output = execute_show_commands(module, ['show running-config', 'show startup-config'])
running_config = NetworkConfig(indent=1, contents=output[0], ignore_lines=diff_ignore_lines)
startup_config = NetworkConfig(indent=1, contents=output[1], ignore_lines=diff_ignore_lines)
running_config = NetworkConfig(indent=2, contents=output[0], ignore_lines=diff_ignore_lines)
startup_config = NetworkConfig(indent=2, contents=output[1], ignore_lines=diff_ignore_lines)
if running_config.sha1 != startup_config.sha1:
save_config(module, result)
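The `save_when: modified` branch decides whether to save by comparing digests of the two configs (`NetworkConfig` exposes this as `.sha1`). The idea in plain Python, with a simple line-filter standing in for `ignore_lines` handling:

```python
import hashlib

def config_changed(running, startup, ignore_lines=()):
    # Stand-in for NetworkConfig(..., ignore_lines=...).sha1: drop ignored
    # lines, then compare SHA-1 digests instead of diffing line by line.
    def digest(text):
        kept = [l for l in text.splitlines() if l.strip() not in ignore_lines]
        return hashlib.sha1('\n'.join(kept).encode('utf-8')).hexdigest()
    return digest(running) != digest(startup)
```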
@@ -470,7 +480,7 @@ def main():
contents = running_config
# recreate the object in order to process diff_ignore_lines
running_config = NetworkConfig(indent=1, contents=contents, ignore_lines=diff_ignore_lines)
running_config = NetworkConfig(indent=2, contents=contents, ignore_lines=diff_ignore_lines)
if module.params['diff_against'] == 'running':
if module.check_mode:
@@ -484,14 +494,13 @@ def main():
output = execute_show_commands(module, 'show startup-config')
contents = output[0]
else:
contents = output[0]
contents = startup_config.config_text
elif module.params['diff_against'] == 'intended':
contents = module.params['intended_config']
if contents is not None:
base_config = NetworkConfig(indent=1, contents=contents, ignore_lines=diff_ignore_lines)
base_config = NetworkConfig(indent=2, contents=contents, ignore_lines=diff_ignore_lines)
if running_config.sha1 != base_config.sha1:
if module.params['diff_against'] == 'intended':

View file

@@ -114,8 +114,7 @@ commands:
import re
from ansible.module_utils.network.nxos.nxos import get_config, load_config
from ansible.module_utils.network.nxos.nxos import get_config, load_config, run_commands
from ansible.module_utils.network.nxos.nxos import nxos_argument_spec, check_args
from ansible.module_utils.basic import AnsibleModule
@@ -123,7 +122,7 @@ from ansible.module_utils.basic import AnsibleModule
DEST_GROUP = ['console', 'logfile', 'module', 'monitor', 'server']
def map_obj_to_commands(updates, module):
def map_obj_to_commands(updates):
commands = list()
want, have = updates
@@ -286,6 +285,29 @@ def map_config_to_obj(module):
'dest_level': parse_dest_level(line, dest, parse_name(line, dest)),
'facility_level': parse_facility_level(line, facility)})
cmd = [{'command': 'show logging | section enabled | section console', 'output': 'text'},
{'command': 'show logging | section enabled | section monitor', 'output': 'text'}]
default_data = run_commands(module, cmd)
for line in default_data:
flag = False
match = re.search(r'Logging (\w+):(?:\s+) (?:\w+) (?:\W)Severity: (\w+)', str(line), re.M)
if match:
if match.group(1) == 'console' and match.group(2) == 'critical':
dest_level = '2'
flag = True
elif match.group(1) == 'monitor' and match.group(2) == 'notifications':
dest_level = '5'
flag = True
if flag:
obj.append({'dest': match.group(1),
'remote_server': None,
'name': None,
'facility': None,
'dest_level': dest_level,
'facility_level': None})
return obj
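The new default-severity detection above hinges on one regular expression. It can be exercised against a plausible `show logging` line; the sample text below is an assumption shaped like the output the module parses, not captured device output:

```python
import re

# Hypothetical 'show logging' line of the shape the module's regex expects.
sample = 'Logging console:                enabled (Severity: critical)'
match = re.search(r'Logging (\w+):(?:\s+) (?:\w+) (?:\W)Severity: (\w+)',
                  sample, re.M)
# group(1) is the destination, group(2) the default severity keyword,
# which the module then maps to a numeric dest_level ('critical' -> '2').
```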

View file

@@ -208,7 +208,7 @@ class Hardware(FactsBase):
self.facts['memfree_mb'] = int(round(int(self.parse_memfree(data)) / 1024, 0))
def parse_memtotal(self, data):
match = re.search(r'TotalMemory: (\d+)\s', data, re.M)
match = re.search(r'Total\s*Memory: (\d+)\s', data, re.M)
if match:
return match.group(1)
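The loosened pattern above (`\s*` between `Total` and `Memory`) accepts both spellings of the memory line. A quick check, with sample strings assumed for illustration:

```python
import re

def parse_memtotal(data):
    # \s* tolerates both 'TotalMemory:' and 'Total Memory:' spellings.
    match = re.search(r'Total\s*Memory: (\d+)\s', data, re.M)
    return match.group(1) if match else None
```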

View file

@@ -212,10 +212,10 @@ def main():
break
conditionals.remove(item)
if not conditionals:
break
if not conditionals:
break
time.sleep(interval)
time.sleep(interval)
if conditionals:
failed_conditions = [item.raw for item in conditionals]

View file

@@ -208,7 +208,7 @@ def run(module, result):
# create loadable config that includes only the configuration updates
connection = get_connection(module)
response = connection.get_diff(candidate=candidate, running=config, match=module.params['match'])
response = connection.get_diff(candidate=candidate, running=config, diff_match=module.params['match'])
commands = response.get('config_diff')
sanitize_config(commands, result)

View file

@@ -0,0 +1,266 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2017-2018 Dell EMC Inc.
# GNU General Public License v3.0+ (see LICENSE or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'status': ['preview'],
'supported_by': 'community',
'metadata_version': '1.1'}
DOCUMENTATION = '''
---
module: redfish_facts
version_added: "2.7"
short_description: Manages Out-Of-Band controllers using Redfish APIs
description:
- Builds Redfish URIs locally and sends them to remote OOB controllers to
get information back.
- Information retrieved is placed in a location specified by the user.
options:
category:
required: false
description:
- List of categories to execute on OOB controller
default: ['Systems']
command:
required: false
description:
- List of commands to execute on OOB controller
baseuri:
required: true
description:
- Base URI of OOB controller
user:
required: true
description:
- User for authentication with OOB controller
password:
required: true
description:
- Password for authentication with OOB controller
author: "Jose Delarosa (github: jose-delarosa)"
'''
EXAMPLES = '''
- name: Get CPU inventory
redfish_facts:
category: Systems
command: GetCpuInventory
baseuri: "{{ baseuri }}"
user: "{{ user }}"
password: "{{ password }}"
- name: Get fan inventory
redfish_facts:
category: Chassis
command: GetFanInventory
baseuri: "{{ baseuri }}"
user: "{{ user }}"
password: "{{ password }}"
- name: Get default inventory information
redfish_facts:
baseuri: "{{ baseuri }}"
user: "{{ user }}"
password: "{{ password }}"
- name: Get several inventories
redfish_facts:
category: Systems
command: GetNicInventory,GetPsuInventory,GetBiosAttributes
baseuri: "{{ baseuri }}"
user: "{{ user }}"
password: "{{ password }}"
- name: Get default system inventory and user information
redfish_facts:
category: Systems,Accounts
baseuri: "{{ baseuri }}"
user: "{{ user }}"
password: "{{ password }}"
- name: Get default system, user and firmware information
redfish_facts:
category: ["Systems", "Accounts", "Update"]
baseuri: "{{ baseuri }}"
user: "{{ user }}"
password: "{{ password }}"
- name: Get all information available in the Manager category
redfish_facts:
category: Manager
command: all
baseuri: "{{ baseuri }}"
user: "{{ user }}"
password: "{{ password }}"
- name: Get all information available in all categories
redfish_facts:
category: all
command: all
baseuri: "{{ baseuri }}"
user: "{{ user }}"
password: "{{ password }}"
'''
RETURN = '''
result:
description: different results depending on task
returned: always
type: dict
sample: List of CPUs on system
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.redfish_utils import RedfishUtils
CATEGORY_COMMANDS_ALL = {
"Systems": ["GetSystemInventory", "GetPsuInventory", "GetCpuInventory",
"GetNicInventory", "GetStorageControllerInventory",
"GetDiskInventory", "GetBiosAttributes", "GetBiosBootOrder"],
"Chassis": ["GetFanInventory"],
"Accounts": ["ListUsers"],
"Update": ["GetFirmwareInventory"],
"Manager": ["GetManagerAttributes", "GetLogs"],
}
CATEGORY_COMMANDS_DEFAULT = {
"Systems": "GetSystemInventory",
"Chassis": "GetFanInventory",
"Accounts": "ListUsers",
"Update": "GetFirmwareInventory",
"Manager": "GetManagerAttributes"
}
def main():
result = {}
resource = {}
category_list = []
module = AnsibleModule(
argument_spec=dict(
category=dict(type='list', default=['Systems']),
command=dict(type='list'),
baseuri=dict(required=True),
user=dict(required=True),
password=dict(required=True, no_log=True),
),
supports_check_mode=False
)
# admin credentials used for authentication
creds = {'user': module.params['user'],
'pswd': module.params['password']}
# Build root URI
root_uri = "https://" + module.params['baseuri']
rf_uri = "/redfish/v1"
rf_utils = RedfishUtils(creds, root_uri)
# Build Category list
if "all" in module.params['category']:
for entry in CATEGORY_COMMANDS_ALL:
category_list.append(entry)
else:
# one or more categories specified
category_list = module.params['category']
for category in category_list:
command_list = []
# Build Command list for each Category
if category in CATEGORY_COMMANDS_ALL:
if not module.params['command']:
# True if we don't specify a command --> use default
command_list.append(CATEGORY_COMMANDS_DEFAULT[category])
elif "all" in module.params['command']:
for entry in range(len(CATEGORY_COMMANDS_ALL[category])):
command_list.append(CATEGORY_COMMANDS_ALL[category][entry])
# one or more commands
else:
command_list = module.params['command']
# Verify that all commands are valid
for cmd in command_list:
# Fail if even one command given is invalid
if cmd not in CATEGORY_COMMANDS_ALL[category]:
module.fail_json(msg="Invalid Command: %s" % cmd)
else:
# Fail if even one category given is invalid
module.fail_json(msg="Invalid Category: %s" % category)
# Organize by Categories / Commands
if category == "Systems":
# execute only if we find a Systems resource
resource = rf_utils._find_systems_resource(rf_uri)
if resource['ret'] is False:
module.fail_json(msg=resource['msg'])
for command in command_list:
if command == "GetSystemInventory":
result["system"] = rf_utils.get_system_inventory()
elif command == "GetPsuInventory":
result["psu"] = rf_utils.get_psu_inventory()
elif command == "GetCpuInventory":
result["cpu"] = rf_utils.get_cpu_inventory()
elif command == "GetNicInventory":
result["nic"] = rf_utils.get_nic_inventory()
elif command == "GetStorageControllerInventory":
result["storage_controller"] = rf_utils.get_storage_controller_inventory()
elif command == "GetDiskInventory":
result["disk"] = rf_utils.get_disk_inventory()
elif command == "GetBiosAttributes":
result["bios_attribute"] = rf_utils.get_bios_attributes()
elif command == "GetBiosBootOrder":
result["bios_boot_order"] = rf_utils.get_bios_boot_order()
elif category == "Chassis":
# execute only if we find Chassis resource
resource = rf_utils._find_chassis_resource(rf_uri)
if resource['ret'] is False:
module.fail_json(msg=resource['msg'])
for command in command_list:
if command == "GetFanInventory":
result["fan"] = rf_utils.get_fan_inventory()
elif category == "Accounts":
# execute only if we find an Account service resource
resource = rf_utils._find_accountservice_resource(rf_uri)
if resource['ret'] is False:
module.fail_json(msg=resource['msg'])
for command in command_list:
if command == "ListUsers":
result["user"] = rf_utils.list_users()
elif category == "Update":
# execute only if we find UpdateService resources
resource = rf_utils._find_updateservice_resource(rf_uri)
if resource['ret'] is False:
module.fail_json(msg=resource['msg'])
for command in command_list:
if command == "GetFirmwareInventory":
result["firmware"] = rf_utils.get_firmware_inventory()
elif category == "Manager":
# execute only if we find a Manager service resource
resource = rf_utils._find_managers_resource(rf_uri)
if resource['ret'] is False:
module.fail_json(msg=resource['msg'])
for command in command_list:
if command == "GetManagerAttributes":
result["manager_attributes"] = rf_utils.get_manager_attributes()
elif command == "GetLogs":
result["log"] = rf_utils.get_logs()
# Return data back
module.exit_json(ansible_facts=dict(redfish_facts=result))
if __name__ == '__main__':
main()
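The category/command expansion implemented in main() can be followed in a trimmed, standalone sketch (shortened command tables for brevity):

```python
CATEGORY_COMMANDS_ALL = {
    "Systems": ["GetSystemInventory", "GetCpuInventory"],
    "Chassis": ["GetFanInventory"],
}
CATEGORY_COMMANDS_DEFAULT = {
    "Systems": "GetSystemInventory",
    "Chassis": "GetFanInventory",
}

def resolve(category_param, command_param):
    # mirror of the module's selection logic, trimmed for illustration
    categories = (list(CATEGORY_COMMANDS_ALL) if "all" in category_param
                  else category_param)
    plan = {}
    for category in categories:
        if category not in CATEGORY_COMMANDS_ALL:
            raise ValueError("Invalid Category: %s" % category)
        if not command_param:
            commands = [CATEGORY_COMMANDS_DEFAULT[category]]
        elif "all" in command_param:
            commands = list(CATEGORY_COMMANDS_ALL[category])
        else:
            commands = list(command_param)
        for cmd in commands:
            if cmd not in CATEGORY_COMMANDS_ALL[category]:
                raise ValueError("Invalid Command: %s" % cmd)
        plan[category] = commands
    return plan

print(resolve(["Systems"], None))  # {'Systems': ['GetSystemInventory']}
```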


@@ -56,10 +56,10 @@ options:
description:
- List of file extensions to read when using C(dir).
default: [yaml, yml, json]
ignore_unkown_extensions:
ignore_unknown_extensions:
version_added: "2.7"
description:
- Ignore unkown file extensions within the directory. This allows users to specify a directory containing vars files
- Ignore unknown file extensions within the directory. This allows users to specify a directory containing vars files
that are intermingled with non-vars files (for example, a directory with a README in it and vars files)
default: False
free-form:
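The flag's effect on directory scanning can be sketched roughly; the helper name and its exact failure behavior here are illustrative, not the module's actual internals:

```python
import os

VALID_EXTENSIONS = ['yaml', 'yml', 'json']  # the documented default list

def select_vars_files(filenames, ignore_unknown_extensions=False):
    selected = []
    for name in filenames:
        ext = os.path.splitext(name)[1].lstrip('.')
        if ext in VALID_EXTENSIONS:
            selected.append(name)
        elif not ignore_unknown_extensions:
            raise ValueError('unknown extension: %s' % name)
    return selected

print(select_vars_files(['vars.yml', 'README.md'],
                        ignore_unknown_extensions=True))  # ['vars.yml']
```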


@@ -321,8 +321,6 @@ class ActionBase(with_metaclass(ABCMeta, object)):
self._connection._shell.tmpdir = rc
if not become_unprivileged:
self._connection._shell.env.update({'ANSIBLE_REMOTE_TMP': self._connection._shell.tmpdir})
return rc
def _should_remove_tmp_path(self, tmp_path):
@@ -764,7 +762,7 @@ class ActionBase(with_metaclass(ABCMeta, object)):
tmpdir = self._connection._shell.tmpdir
remote_module_filename = self._connection._shell.get_remote_filename(module_path)
remote_module_path = self._connection._shell.join_path(tmpdir, remote_module_filename)
remote_module_path = self._connection._shell.join_path(tmpdir, 'AnsiballZ_%s' % remote_module_filename)
args_file_path = None
if module_style in ('old', 'non_native_want_json', 'binary'):
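A small illustration of the new naming convention; the tmpdir and module name below are made up for the example:

```python
import posixpath

# the wrapped module payload now carries an AnsiballZ_ prefix, which makes
# it easy to identify in the remote tmpdir (and to omit from coverage,
# per the .coveragerc change in this merge)
tmpdir = '/home/user/.ansible/tmp/ansible-tmp-1234'  # illustrative path
remote_module_filename = 'setup'                     # illustrative module name
remote_module_path = posixpath.join(tmpdir, 'AnsiballZ_%s' % remote_module_filename)
print(remote_module_path.rsplit('/', 1)[1])  # AnsiballZ_setup
```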


@@ -103,7 +103,7 @@ class ActionModule(ActionBase):
raise AnsibleError('{0} is not a valid option in include_vars'.format(arg))
if dirs and files:
raise AnsibleError("Your are mixing file only and dir only arguments, these are incompatible")
raise AnsibleError("You are mixing file only and dir only arguments, these are incompatible")
# set internal vars from args
self._set_args()


@@ -21,11 +21,10 @@ __metaclass__ = type
import sys
import copy
import json
from ansible import constants as C
from ansible.module_utils._text import to_text
from ansible.module_utils.connection import Connection
from ansible.module_utils.connection import Connection, ConnectionError
from ansible.plugins.action.normal import ActionModule as _ActionModule
from ansible.module_utils.network.common.utils import load_provider
from ansible.module_utils.network.ironware.ironware import ironware_provider_spec
@@ -42,55 +41,59 @@ class ActionModule(_ActionModule):
def run(self, tmp=None, task_vars=None):
del tmp # tmp no longer has any effect
if self._play_context.connection != 'local':
return dict(
failed=True,
msg='invalid connection specified, expected connection=local, '
'got %s' % self._play_context.connection
)
socket_path = None
provider = load_provider(ironware_provider_spec, self._task.args)
if self._play_context.connection == 'network_cli':
provider = self._task.args.get('provider', {})
if any(provider.values()):
display.warning('provider is unnecessary when using network_cli and will be ignored')
del self._task.args['provider']
elif self._play_context.connection == 'local':
provider = load_provider(ironware_provider_spec, self._task.args)
pc = copy.deepcopy(self._play_context)
pc.connection = 'network_cli'
pc.network_os = 'ironware'
pc.remote_addr = provider['host'] or self._play_context.remote_addr
pc.port = int(provider['port'] or self._play_context.port or 22)
pc.remote_user = provider['username'] or self._play_context.connection_user
pc.password = provider['password'] or self._play_context.password
pc.private_key_file = provider['ssh_keyfile'] or self._play_context.private_key_file
pc.become = provider['authorize'] or False
if pc.become:
pc.become_method = 'enable'
pc.become_pass = provider['auth_pass']
pc = copy.deepcopy(self._play_context)
pc.connection = 'network_cli'
pc.network_os = 'ironware'
pc.remote_addr = provider['host'] or self._play_context.remote_addr
pc.port = int(provider['port'] or self._play_context.port or 22)
pc.remote_user = provider['username'] or self._play_context.connection_user
pc.password = provider['password'] or self._play_context.password
pc.private_key_file = provider['ssh_keyfile'] or self._play_context.private_key_file
command_timeout = int(provider['timeout'] or C.PERSISTENT_COMMAND_TIMEOUT)
pc.become = provider['authorize'] or False
if pc.become:
pc.become_method = 'enable'
pc.become_pass = provider['auth_pass']
display.vvv('using connection plugin %s (was local)' % pc.connection, pc.remote_addr)
connection = self._shared_loader_obj.connection_loader.get('persistent', pc, sys.stdin)
display.vvv('using connection plugin %s (was local)' % pc.connection, pc.remote_addr)
connection = self._shared_loader_obj.connection_loader.get('persistent', pc, sys.stdin)
connection.set_options(direct={'persistent_command_timeout': command_timeout})
command_timeout = int(provider['timeout']) if provider['timeout'] else connection.get_option('persistent_command_timeout')
connection.set_options(direct={'persistent_command_timeout': command_timeout})
socket_path = connection.run()
socket_path = connection.run()
display.vvvv('socket_path: %s' % socket_path, pc.remote_addr)
if not socket_path:
return {'failed': True,
'msg': 'unable to open shell. Please see: ' +
'https://docs.ansible.com/ansible/network_debug_troubleshooting.html#unable-to-open-shell'}
display.vvvv('socket_path: %s' % socket_path, pc.remote_addr)
if not socket_path:
return {'failed': True,
'msg': 'unable to open shell. Please see: ' +
'https://docs.ansible.com/ansible/network_debug_troubleshooting.html#unable-to-open-shell'}
task_vars['ansible_socket'] = socket_path
else:
return {'failed': True, 'msg': 'Connection type %s is not valid for this module' % self._play_context.connection}
# make sure we are in the right cli context, which should be
# enable mode and not config mode
if socket_path is None:
socket_path = self._connection.socket_path
conn = Connection(socket_path)
out = conn.get_prompt()
if to_text(out, errors='surrogate_then_replace').strip().endswith(')#'):
display.vvvv('wrong context, sending end to device', self._play_context.remote_addr)
conn.send_command('end')
task_vars['ansible_socket'] = socket_path
if self._play_context.become_method == 'enable':
self._play_context.become = False
self._play_context.become_method = None
try:
out = conn.get_prompt()
while to_text(out, errors='surrogate_then_replace').strip().endswith(')#'):
display.vvvv('wrong context, sending exit to device', self._play_context.remote_addr)
conn.send_command('exit')
out = conn.get_prompt()
except ConnectionError as exc:
return {'failed': True, 'msg': to_text(exc)}
result = super(ActionModule, self).run(task_vars=task_vars)
return result
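The new prompt loop keeps sending exit until the device leaves configuration context; it can be exercised against a stub connection (the stub and helper below are purely illustrative):

```python
def leave_config_context(conn, decode=lambda b: b.decode()):
    # keep sending 'exit' until the prompt no longer ends with ')#'
    out = conn.get_prompt()
    while decode(out).strip().endswith(')#'):
        conn.send_command('exit')
        out = conn.get_prompt()

class StubConn:
    def __init__(self, prompts):
        self._prompts = list(prompts)
        self.sent = []
    def get_prompt(self):
        # return the next prompt, sticking on the last one
        return self._prompts.pop(0) if len(self._prompts) > 1 else self._prompts[0]
    def send_command(self, cmd):
        self.sent.append(cmd)

conn = StubConn([b'sw1(config)#', b'sw1#'])
leave_config_context(conn)
print(conn.sent)  # ['exit']
```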


@@ -58,6 +58,7 @@ class ActionModule(ActionBase):
block_end_string = self._task.args.get('block_end_string', None)
trim_blocks = boolean(self._task.args.get('trim_blocks', True), strict=False)
lstrip_blocks = boolean(self._task.args.get('lstrip_blocks', False), strict=False)
output_encoding = self._task.args.get('output_encoding', 'utf-8') or 'utf-8'
# Option `lstrip_blocks' was added in Jinja2 version 2.7.
if lstrip_blocks:
@@ -176,13 +177,14 @@ class ActionModule(ActionBase):
new_task.args.pop('variable_end_string', None)
new_task.args.pop('trim_blocks', None)
new_task.args.pop('lstrip_blocks', None)
new_task.args.pop('output_encoding', None)
local_tempdir = tempfile.mkdtemp(dir=C.DEFAULT_LOCAL_TMP)
try:
result_file = os.path.join(local_tempdir, os.path.basename(source))
with open(to_bytes(result_file, errors='surrogate_or_strict'), 'wb') as f:
f.write(to_bytes(resultant, errors='surrogate_or_strict'))
f.write(to_bytes(resultant, encoding=output_encoding, errors='surrogate_or_strict'))
new_task.args.update(
dict(
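The new output_encoding option only changes the encoding applied when the rendered text is written to the local temp file; in isolation:

```python
# rendered template text is unicode; output_encoding controls the bytes
# written out before transfer (utf-8 remains the default)
resultant = u'caf\xe9'
for output_encoding, expected in [('utf-8', b'caf\xc3\xa9'),
                                  ('latin-1', b'caf\xe9')]:
    data = resultant.encode(output_encoding)
    assert data == expected
```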


@@ -189,7 +189,7 @@ class CliconfBase(AnsiblePlugin):
pass
@abstractmethod
def edit_config(self, candidate=None, commit=True, replace=False, diff=False, comment=None):
def edit_config(self, candidate=None, commit=True, replace=None, diff=False, comment=None):
"""Loads the candidate configuration into the network device
This method will load the specified candidate config into the device
@@ -203,8 +203,10 @@ class CliconfBase(AnsiblePlugin):
:param commit: Boolean value that indicates if the device candidate
configuration should be pushed in the running configuration or discarded.
:param replace: Boolean flag to indicate if running configuration should be completely
replace by candidate configuration.
:param replace: If the value is True/False it indicates if the running configuration should be completely
replaced by the candidate configuration. It can also take a configuration file path as value;
in that case the file should already be present on the remote host at the mentioned path as a
prerequisite.
:param comment: Commit comment, provided it is supported by the remote host
:return: Returns a json string that contains the configuration applied on the remote host, the
response returned on executing the configuration commands and platform-relevant data.
@@ -341,7 +343,7 @@ class CliconfBase(AnsiblePlugin):
with ssh.open_sftp() as sftp:
sftp.get(source, destination)
def get_diff(self, candidate=None, running=None, match=None, diff_ignore_lines=None, path=None, replace=None):
def get_diff(self, candidate=None, running=None, diff_match=None, diff_ignore_lines=None, path=None, diff_replace=None):
"""
Generate diff between candidate and running configuration. If the
remote host supports onbox diff capabilities, i.e. supports_onbox_diff; in that case
@@ -350,7 +352,7 @@ class CliconfBase(AnsiblePlugin):
and running arguments are optional.
:param candidate: The configuration which is expected to be present on remote host.
:param running: The base configuration which is used to generate diff.
:param match: Instructs how to match the candidate configuration with current device configuration
:param diff_match: Instructs how to match the candidate configuration with current device configuration
Valid values are 'line', 'strict', 'exact', 'none'.
'line' - commands are matched line by line
'strict' - command lines are matched with respect to position
@@ -364,7 +366,7 @@ class CliconfBase(AnsiblePlugin):
the commands should be checked against. If the parents argument
is omitted, the commands are checked against the set of top
level or global commands.
:param replace: Instructs on the way to perform the configuration on the device.
:param diff_replace: Instructs on the way to perform the configuration on the device.
If the replace argument is set to I(line) then the modified lines are
pushed to the device in configuration mode. If the replace argument is
set to I(block) then the entire command block is pushed to the device in
@@ -396,3 +398,20 @@ class CliconfBase(AnsiblePlugin):
:return: List of returned response
"""
pass
def check_edit_config_capabiltiy(self, operations, candidate=None, commit=True, replace=None, comment=None):
if not candidate and not replace:
raise ValueError("must provide a candidate or replace to load configuration")
if commit not in (True, False):
raise ValueError("'commit' must be a bool, got %s" % commit)
if replace and not operations.get('supports_replace', False):
raise ValueError("configuration replace is not supported")
if comment and not operations.get('supports_commit_comment', False):
raise ValueError("commit comment is not supported")
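The new shared validator can be exercised standalone; this illustrative copy uses a corrected spelling of the method name and is a plain function rather than a plugin method:

```python
def check_edit_config_capability(operations, candidate=None, commit=True,
                                 replace=None, comment=None):
    # same checks as the new CliconfBase helper, as a free function
    if not candidate and not replace:
        raise ValueError("must provide a candidate or replace to load configuration")
    if commit not in (True, False):
        raise ValueError("'commit' must be a bool, got %s" % commit)
    if replace and not operations.get('supports_replace', False):
        raise ValueError("configuration replace is not supported")
    if comment and not operations.get('supports_commit_comment', False):
        raise ValueError("commit comment is not supported")

ops = {'supports_replace': False, 'supports_commit_comment': True}
check_edit_config_capability(ops, candidate=['hostname sw1'], comment='ok')
try:
    check_edit_config_capability(ops, candidate=['hostname sw1'], replace=True)
except ValueError as exc:
    print(exc)  # configuration replace is not supported
```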


@@ -59,23 +59,6 @@ class Cliconf(CliconfBase):
def __init__(self, *args, **kwargs):
super(Cliconf, self).__init__(*args, **kwargs)
self._session_support = None
if isinstance(self._connection, NetworkCli):
self.network_api = 'network_cli'
elif isinstance(self._connection, HttpApi):
self.network_api = 'eapi'
else:
raise ValueError("Invalid connection type")
def _get_command_with_output(self, command, output):
options_values = self.get_option_values()
if output not in options_values['output']:
raise ValueError("'output' value %s is invalid. Valid values are %s" % (output, ','.join(options_values['output'])))
if output == 'json' and not command.endswith('| json'):
cmd = '%s | json' % command
else:
cmd = command
return cmd
def send_command(self, command, **kwargs):
"""Executes a cli command and returns the results
@@ -83,10 +66,12 @@ class Cliconf(CliconfBase):
the results to the caller. The command output will be returned as a
string
"""
if self.network_api == 'network_cli':
if isinstance(self._connection, NetworkCli):
resp = super(Cliconf, self).send_command(command, **kwargs)
else:
elif isinstance(self._connection, HttpApi):
resp = self._connection.send_request(command, **kwargs)
else:
raise ValueError("Invalid connection type")
return resp
@enable_mode
@@ -108,32 +93,19 @@ class Cliconf(CliconfBase):
return self.send_command(cmd)
@enable_mode
def edit_config(self, candidate=None, commit=True, replace=False, comment=None):
if not candidate:
raise ValueError("must provide a candidate config to load")
if commit not in (True, False):
raise ValueError("'commit' must be a bool, got %s" % commit)
def edit_config(self, candidate=None, commit=True, replace=None, comment=None):
operations = self.get_device_operations()
if replace not in (True, False):
raise ValueError("'replace' must be a bool, got %s" % replace)
if replace and not operations['supports_replace']:
raise ValueError("configuration replace is supported only with configuration session")
if comment and not operations['supports_commit_comment']:
raise ValueError("commit comment is not supported")
self.check_edit_config_capabiltiy(operations, candidate, commit, replace, comment)
if (commit is False) and (not self.supports_sessions):
raise ValueError('check mode is not supported without configuration session')
response = {}
resp = {}
session = None
if self.supports_sessions:
session = 'ansible_%s' % int(time.time())
response.update({'session': session})
resp.update({'session': session})
self.send_command('configure session %s' % session)
if replace:
self.send_command('rollback clean-config')
@@ -141,6 +113,7 @@ class Cliconf(CliconfBase):
self.send_command('configure')
results = []
requests = []
multiline = False
for line in to_list(candidate):
if not isinstance(line, collections.Mapping):
@@ -160,15 +133,17 @@ class Cliconf(CliconfBase):
if cmd != 'end' and cmd[0] != '!':
try:
results.append(self.send_command(**line))
requests.append(cmd)
except AnsibleConnectionFailure as e:
self.discard_changes(session)
raise AnsibleConnectionFailure(e.message)
response['response'] = results
resp['request'] = requests
resp['response'] = results
if self.supports_sessions:
out = self.send_command('show session-config diffs')
if out:
response['diff'] = out.strip()
resp['diff'] = out.strip()
if commit:
self.commit()
@@ -176,7 +151,7 @@ class Cliconf(CliconfBase):
self.discard_changes(session)
else:
self.send_command('end')
return response
return resp
def get(self, command, prompt=None, answer=None, sendonly=False, output=None):
if output:
@@ -224,7 +199,7 @@ class Cliconf(CliconfBase):
responses.append(out)
return responses
def get_diff(self, candidate=None, running=None, match='line', diff_ignore_lines=None, path=None, replace='line'):
def get_diff(self, candidate=None, running=None, diff_match='line', diff_ignore_lines=None, path=None, diff_replace='line'):
diff = {}
device_operations = self.get_device_operations()
option_values = self.get_option_values()
@@ -232,26 +207,25 @@ class Cliconf(CliconfBase):
if candidate is None and device_operations['supports_generate_diff']:
raise ValueError("candidate configuration is required to generate diff")
if match not in option_values['diff_match']:
raise ValueError("'match' value %s in invalid, valid values are %s" % (match, ', '.join(option_values['diff_match'])))
if diff_match not in option_values['diff_match']:
raise ValueError("'match' value %s is invalid, valid values are %s" % (diff_match, ', '.join(option_values['diff_match'])))
if replace not in option_values['diff_replace']:
raise ValueError("'replace' value %s in invalid, valid values are %s" % (replace, ', '.join(option_values['diff_replace'])))
if diff_replace not in option_values['diff_replace']:
raise ValueError("'replace' value %s is invalid, valid values are %s" % (diff_replace, ', '.join(option_values['diff_replace'])))
# prepare candidate configuration
candidate_obj = NetworkConfig(indent=3)
candidate_obj.load(candidate)
if running and match != 'none' and replace != 'config':
if running and diff_match != 'none' and diff_replace != 'config':
# running configuration
running_obj = NetworkConfig(indent=3, contents=running, ignore_lines=diff_ignore_lines)
configdiffobjs = candidate_obj.difference(running_obj, path=path, match=match, replace=replace)
configdiffobjs = candidate_obj.difference(running_obj, path=path, match=diff_match, replace=diff_replace)
else:
configdiffobjs = candidate_obj.items
configdiff = dumps(configdiffobjs, 'commands') if configdiffobjs else ''
diff['config_diff'] = configdiff if configdiffobjs else {}
diff['config_diff'] = dumps(configdiffobjs, 'commands') if configdiffobjs else ''
return diff
@property
@@ -317,8 +291,25 @@ class Cliconf(CliconfBase):
result = {}
result['rpc'] = self.get_base_rpc()
result['device_info'] = self.get_device_info()
result['network_api'] = self.network_api
result['device_info'] = self.get_device_info()
result['device_operations'] = self.get_device_operations()
result.update(self.get_option_values())
if isinstance(self._connection, NetworkCli):
result['network_api'] = 'cliconf'
elif isinstance(self._connection, HttpApi):
result['network_api'] = 'eapi'
else:
raise ValueError("Invalid connection type")
return json.dumps(result)
def _get_command_with_output(self, command, output):
options_values = self.get_option_values()
if output not in options_values['output']:
raise ValueError("'output' value %s is invalid. Valid values are %s" % (output, ','.join(options_values['output'])))
if output == 'json' and not command.endswith('| json'):
cmd = '%s | json' % command
else:
cmd = command
return cmd
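The relocated _get_command_with_output helper is unchanged in behavior; sketched as a standalone function:

```python
def get_command_with_output(command, output, valid_outputs=('text', 'json')):
    # append the eAPI output modifier only when json output is requested
    # and the command does not already carry it
    if output not in valid_outputs:
        raise ValueError("'output' value %s is invalid. Valid values are %s"
                         % (output, ','.join(valid_outputs)))
    if output == 'json' and not command.endswith('| json'):
        return '%s | json' % command
    return command

print(get_command_with_output('show version', 'json'))  # show version | json
```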


@@ -56,7 +56,7 @@ class Cliconf(CliconfBase):
return self.send_command(cmd)
def get_diff(self, candidate=None, running=None, match='line', diff_ignore_lines=None, path=None, replace='line'):
def get_diff(self, candidate=None, running=None, diff_match='line', diff_ignore_lines=None, path=None, diff_replace='line'):
"""
Generate diff between candidate and running configuration. If the
remote host supports onbox diff capabilities, i.e. supports_onbox_diff; in that case
@@ -65,7 +65,7 @@ class Cliconf(CliconfBase):
and running arguments are optional.
:param candidate: The configuration which is expected to be present on remote host.
:param running: The base configuration which is used to generate diff.
:param match: Instructs how to match the candidate configuration with current device configuration
:param diff_match: Instructs how to match the candidate configuration with current device configuration
Valid values are 'line', 'strict', 'exact', 'none'.
'line' - commands are matched line by line
'strict' - command lines are matched with respect to position
@@ -79,7 +79,7 @@ class Cliconf(CliconfBase):
the commands should be checked against. If the parents argument
is omitted, the commands are checked against the set of top
level or global commands.
:param replace: Instructs on the way to perform the configuration on the device.
:param diff_replace: Instructs on the way to perform the configuration on the device.
If the replace argument is set to I(line) then the modified lines are
pushed to the device in configuration mode. If the replace argument is
set to I(block) then the entire command block is pushed to the device in
@@ -87,7 +87,7 @@ class Cliconf(CliconfBase):
:return: Configuration diff in json format.
{
'config_diff': '',
'banner_diff': ''
'banner_diff': {}
}
"""
@@ -98,71 +98,57 @@ class Cliconf(CliconfBase):
if candidate is None and device_operations['supports_generate_diff']:
raise ValueError("candidate configuration is required to generate diff")
if match not in option_values['diff_match']:
raise ValueError("'match' value %s in invalid, valid values are %s" % (match, ', '.join(option_values['diff_match'])))
if diff_match not in option_values['diff_match']:
raise ValueError("'match' value %s is invalid, valid values are %s" % (diff_match, ', '.join(option_values['diff_match'])))
if replace not in option_values['diff_replace']:
raise ValueError("'replace' value %s in invalid, valid values are %s" % (replace, ', '.join(option_values['diff_replace'])))
if diff_replace not in option_values['diff_replace']:
raise ValueError("'replace' value %s is invalid, valid values are %s" % (diff_replace, ', '.join(option_values['diff_replace'])))
# prepare candidate configuration
candidate_obj = NetworkConfig(indent=1)
want_src, want_banners = self._extract_banners(candidate)
candidate_obj.load(want_src)
if running and match != 'none':
if running and diff_match != 'none':
# running configuration
have_src, have_banners = self._extract_banners(running)
running_obj = NetworkConfig(indent=1, contents=have_src, ignore_lines=diff_ignore_lines)
configdiffobjs = candidate_obj.difference(running_obj, path=path, match=match, replace=replace)
configdiffobjs = candidate_obj.difference(running_obj, path=path, match=diff_match, replace=diff_replace)
else:
configdiffobjs = candidate_obj.items
have_banners = {}
configdiff = dumps(configdiffobjs, 'commands') if configdiffobjs else ''
diff['config_diff'] = configdiff if configdiffobjs else {}
diff['config_diff'] = dumps(configdiffobjs, 'commands') if configdiffobjs else ''
banners = self._diff_banners(want_banners, have_banners)
diff['banner_diff'] = banners if banners else {}
return diff
@enable_mode
def edit_config(self, candidate=None, commit=True, replace=False, comment=None):
def edit_config(self, candidate=None, commit=True, replace=None, comment=None):
resp = {}
operations = self.get_device_operations()
if not candidate:
raise ValueError("must provide a candidate config to load")
if commit not in (True, False):
raise ValueError("'commit' must be a bool, got %s" % commit)
if replace not in (True, False):
raise ValueError("'replace' must be a bool, got %s" % replace)
if comment and not operations['supports_commit_comment']:
raise ValueError("commit comment is not supported")
operations = self.get_device_operations()
if replace and not operations['supports_replace']:
raise ValueError("configuration replace is not supported")
self.check_edit_config_capabiltiy(operations, candidate, commit, replace, comment)
results = []
requests = []
if commit:
for line in chain(['configure terminal'], to_list(candidate)):
self.send_command('configure terminal')
for line in to_list(candidate):
if not isinstance(line, collections.Mapping):
line = {'command': line}
cmd = line['command']
if cmd != 'end' and cmd[0] != '!':
results.append(self.send_command(**line))
requests.append(cmd)
results.append(self.send_command('end'))
self.send_command('end')
else:
raise ValueError('check mode is not supported')
resp['response'] = results[1:-1]
resp['request'] = requests
resp['response'] = results
return resp
def get(self, command=None, prompt=None, answer=None, sendonly=False, output=None):
@@ -241,17 +227,23 @@ class Cliconf(CliconfBase):
resp = {}
banners_obj = json.loads(candidate)
results = []
requests = []
if commit:
for key, value in iteritems(banners_obj):
key += ' %s' % multiline_delimiter
for cmd in ['config terminal', key, value, multiline_delimiter, 'end']:
self.send_command('config terminal', sendonly=True)
for cmd in [key, value, multiline_delimiter]:
obj = {'command': cmd, 'sendonly': True}
results.append(self.send_command(**obj))
requests.append(cmd)
self.send_command('end', sendonly=True)
time.sleep(0.1)
results.append(self.send_command('\n'))
requests.append('\n')
resp['response'] = results[1:-1]
resp['request'] = requests
resp['response'] = results
return resp


@@ -19,6 +19,7 @@
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import collections
import json
import re
@@ -27,30 +28,30 @@ from itertools import chain
from ansible.errors import AnsibleConnectionFailure
from ansible.module_utils._text import to_bytes, to_text
from ansible.module_utils.connection import ConnectionError
from ansible.module_utils.network.common.config import NetworkConfig, dumps
from ansible.module_utils.network.common.utils import to_list
from ansible.plugins.cliconf import CliconfBase
from ansible.plugins.cliconf import CliconfBase, enable_mode
from ansible.plugins.connection.network_cli import Connection as NetworkCli
from ansible.plugins.connection.httpapi import Connection as HttpApi
class Cliconf(CliconfBase):
def send_command(self, command, prompt=None, answer=None, sendonly=False, newline=True, prompt_retry_check=False):
def __init__(self, *args, **kwargs):
super(Cliconf, self).__init__(*args, **kwargs)
def send_command(self, command, **kwargs):
"""Executes a cli command and returns the results
This method will execute the CLI command on the connection and return
the results to the caller. The command output will be returned as a
string
"""
kwargs = {'command': to_bytes(command), 'sendonly': sendonly,
'newline': newline, 'prompt_retry_check': prompt_retry_check}
if prompt is not None:
kwargs['prompt'] = to_bytes(prompt)
if answer is not None:
kwargs['answer'] = to_bytes(answer)
if isinstance(self._connection, NetworkCli):
resp = self._connection.send(**kwargs)
else:
resp = super(Cliconf, self).send_command(command, **kwargs)
elif isinstance(self._connection, HttpApi):
resp = self._connection.send_request(command, **kwargs)
else:
raise ValueError("Invalid connection type")
return resp
def get_device_info(self):
@@ -101,66 +102,169 @@ class Cliconf(CliconfBase):
return device_info
def get_config(self, source='running', format='text', flags=None):
def get_diff(self, candidate=None, running=None, diff_match='line', diff_ignore_lines=None, path=None, diff_replace='line'):
diff = {}
device_operations = self.get_device_operations()
option_values = self.get_option_values()
if candidate is None and device_operations['supports_generate_diff']:
raise ValueError("candidate configuration is required to generate diff")
if diff_match not in option_values['diff_match']:
raise ValueError("'match' value %s is invalid, valid values are %s" % (diff_match, ', '.join(option_values['diff_match'])))
if diff_replace not in option_values['diff_replace']:
raise ValueError("'replace' value %s is invalid, valid values are %s" % (diff_replace, ', '.join(option_values['diff_replace'])))
# prepare candidate configuration
candidate_obj = NetworkConfig(indent=2)
candidate_obj.load(candidate)
if running and diff_match != 'none' and diff_replace != 'config':
# running configuration
running_obj = NetworkConfig(indent=2, contents=running, ignore_lines=diff_ignore_lines)
configdiffobjs = candidate_obj.difference(running_obj, path=path, match=diff_match, replace=diff_replace)
else:
configdiffobjs = candidate_obj.items
diff['config_diff'] = dumps(configdiffobjs, 'commands') if configdiffobjs else ''
return diff
def get_config(self, source='running', format='text', filter=None):
options_values = self.get_option_values()
if format not in options_values['format']:
raise ValueError("'format' value %s is invalid. Valid values are %s" % (format, ','.join(options_values['format'])))
lookup = {'running': 'running-config', 'startup': 'startup-config'}
if source not in lookup:
return self.invalid_params("fetching configuration from %s is not supported" % source)
cmd = 'show {0} '.format(lookup[source])
if flags:
cmd += ' '.join(flags)
if format and format != 'text':
cmd += '| %s ' % format
if filter:
cmd += ' '.join(to_list(filter))
cmd = cmd.strip()
return self.send_command(cmd)
def edit_config(self, command):
responses = []
for cmd in chain(['configure'], to_list(command), ['end']):
responses.append(self.send_command(cmd))
resp = responses[1:-1]
return json.dumps(resp)
def edit_config(self, candidate=None, commit=True, replace=None, comment=None):
resp = {}
operations = self.get_device_operations()
self.check_edit_config_capabiltiy(operations, candidate, commit, replace, comment)
results = []
requests = []
def get(self, command, prompt=None, answer=None, sendonly=False):
if replace:
candidate = 'config replace {0}'.format(replace)
if commit:
self.send_command('configure terminal')
for line in to_list(candidate):
if not isinstance(line, collections.Mapping):
line = {'command': line}
cmd = line['command']
if cmd != 'end':
results.append(self.send_command(**line))
requests.append(cmd)
self.send_command('end')
else:
raise ValueError('check mode is not supported')
resp['request'] = requests
resp['response'] = results
return resp
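The commit path above wraps the candidate in `configure terminal` … `end` and records each request and response. A minimal standalone sketch of that flow, using a stub `send` callable in place of the real connection (all names here are illustrative, not the plugin API):

```python
# Minimal sketch of the edit_config flow: enter config mode, send each
# candidate line (skipping an explicit 'end'), then exit config mode.
# `send` is a stand-in for the real connection's send_command.
def apply_candidate(candidate, send):
    requests, results = [], []
    send('configure terminal')
    for line in candidate:
        cmd = line if isinstance(line, dict) else {'command': line}
        if cmd['command'] != 'end':
            results.append(send(cmd['command']))
            requests.append(cmd['command'])
    send('end')
    return {'request': requests, 'response': results}

sent = []
resp = apply_candidate(['interface Eth1/1', 'no shutdown'],
                       lambda c: sent.append(c) or c)
```

The stub transcript (`sent`) makes it easy to verify the config-mode bracketing without a device.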
def get(self, command, prompt=None, answer=None, sendonly=False, output=None):
if output:
command = self._get_command_with_output(command, output)
return self.send_command(command, prompt=prompt, answer=answer, sendonly=sendonly)
def get_capabilities(self):
result = {}
result['rpc'] = self.get_base_rpc()
result['device_info'] = self.get_device_info()
if isinstance(self._connection, NetworkCli):
result['network_api'] = 'cliconf'
else:
result['network_api'] = 'nxapi'
return json.dumps(result)
def run_commands(self, commands=None, check_rc=True):
if commands is None:
raise ValueError("'commands' value is required")
# Migrated from module_utils
def run_commands(self, commands, check_rc=True):
"""Run list of commands on remote device and return results
"""
responses = list()
for cmd in to_list(commands):
if not isinstance(cmd, collections.Mapping):
cmd = {'command': cmd}
for item in to_list(commands):
if item['output'] == 'json' and not item['command'].endswith('| json'):
cmd = '%s | json' % item['command']
elif item['output'] == 'text' and item['command'].endswith('| json'):
cmd = item['command'].rsplit('|', 1)[0]
else:
cmd = item['command']
output = cmd.pop('output', None)
if output:
cmd['command'] = self._get_command_with_output(cmd['command'], output)
try:
out = self.get(cmd)
out = self.send_command(**cmd)
except AnsibleConnectionFailure as e:
if check_rc:
raise
out = getattr(e, 'err', e)
try:
out = to_text(out, errors='surrogate_or_strict').strip()
except UnicodeError:
raise ConnectionError(msg=u'Failed to decode output from %s: %s' % (cmd, to_text(out)))
if out is not None:
try:
out = to_text(out, errors='surrogate_or_strict').strip()
except UnicodeError:
raise ConnectionError(msg=u'Failed to decode output from %s: %s' % (cmd, to_text(out)))
try:
out = json.loads(out)
except ValueError:
pass
try:
out = json.loads(out)
except ValueError:
out = to_text(out, errors='surrogate_or_strict').strip()
responses.append(out)
responses.append(out)
return responses
def get_device_operations(self):
return {
'supports_diff_replace': True,
'supports_commit': False,
'supports_rollback': False,
'supports_defaults': True,
'supports_onbox_diff': False,
'supports_commit_comment': False,
'supports_multiline_delimiter': False,
'supports_diff_match': True,
'supports_diff_ignore_lines': True,
'supports_generate_diff': True,
'supports_replace': True
}
def get_option_values(self):
return {
'format': ['text', 'json'],
'diff_match': ['line', 'strict', 'exact', 'none'],
'diff_replace': ['line', 'block', 'config'],
'output': ['text', 'json']
}
def get_capabilities(self):
result = {}
result['rpc'] = self.get_base_rpc()
result['device_info'] = self.get_device_info()
result.update(self.get_option_values())
if isinstance(self._connection, NetworkCli):
result['network_api'] = 'cliconf'
elif isinstance(self._connection, HttpApi):
result['network_api'] = 'nxapi'
else:
raise ValueError("Invalid connection type")
return json.dumps(result)
def _get_command_with_output(self, command, output):
options_values = self.get_option_values()
if output not in options_values['output']:
raise ValueError("'output' value %s is invalid. Valid values are %s" % (output, ','.join(options_values['output'])))
if output == 'json' and not command.endswith('| json'):
cmd = '%s | json' % command
elif output == 'text' and command.endswith('| json'):
cmd = command.rsplit('|', 1)[0]
else:
cmd = command
return cmd
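The `_get_command_with_output` helper above only appends or strips a `| json` pipe. A self-contained sketch of that mapping (the function name here is a stand-in, not the plugin API):

```python
# Standalone sketch of the "| json" suffix handling: append the pipe for
# JSON output, strip it for text output, and pass other commands through.
def command_with_output(command, output):
    if output == 'json' and not command.endswith('| json'):
        return '%s | json' % command
    if output == 'text' and command.endswith('| json'):
        return command.rsplit('|', 1)[0]
    return command
```

Note that `rsplit('|', 1)[0]` keeps the trailing space before the stripped pipe, matching the code above.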

View file

@@ -66,29 +66,20 @@ class Cliconf(CliconfBase):
out = self.send_command('show configuration commands')
return out
def edit_config(self, candidate=None, commit=True, replace=False, comment=None):
def edit_config(self, candidate=None, commit=True, replace=None, comment=None):
resp = {}
if not candidate:
raise ValueError('must provide a candidate config to load')
if commit not in (True, False):
raise ValueError("'commit' must be a bool, got %s" % commit)
if replace not in (True, False):
raise ValueError("'replace' must be a bool, got %s" % replace)
operations = self.get_device_operations()
if replace and not operations['supports_replace']:
raise ValueError("configuration replace is not supported")
self.check_edit_config_capabiltiy(operations, candidate, commit, replace, comment)
results = []
for cmd in chain(['configure'], to_list(candidate)):
requests = []
self.send_command('configure')
for cmd in to_list(candidate):
if not isinstance(cmd, collections.Mapping):
cmd = {'command': cmd}
results.append(self.send_command(**cmd))
requests.append(cmd['command'])
out = self.get('compare')
out = to_text(out, errors='surrogate_or_strict')
diff_config = out if not out.startswith('No changes') else None
@@ -109,7 +100,8 @@ class Cliconf(CliconfBase):
self.send_command('exit')
resp['diff'] = diff_config
resp['response'] = results[1:-1]
resp['response'] = results
resp['request'] = requests
return resp
def get(self, command=None, prompt=None, answer=None, sendonly=False, output=None):
@@ -131,7 +123,7 @@ class Cliconf(CliconfBase):
def discard_changes(self):
self.send_command('exit discard')
def get_diff(self, candidate=None, running=None, match='line', diff_ignore_lines=None, path=None, replace=None):
def get_diff(self, candidate=None, running=None, diff_match='line', diff_ignore_lines=None, path=None, diff_replace=None):
diff = {}
device_operations = self.get_device_operations()
option_values = self.get_option_values()
@@ -139,10 +131,10 @@ class Cliconf(CliconfBase):
if candidate is None and device_operations['supports_generate_diff']:
raise ValueError("candidate configuration is required to generate diff")
if match not in option_values['diff_match']:
raise ValueError("'match' value %s is invalid, valid values are %s" % (match, ', '.join(option_values['diff_match'])))
if diff_match not in option_values['diff_match']:
raise ValueError("'match' value %s is invalid, valid values are %s" % (diff_match, ', '.join(option_values['diff_match'])))
if replace:
if diff_replace:
raise ValueError("'replace' in diff is not supported")
if diff_ignore_lines:
@@ -169,7 +161,7 @@ class Cliconf(CliconfBase):
else:
candidate_commands = str(candidate).strip().split('\n')
if match == 'none':
if diff_match == 'none':
diff['config_diff'] = list(candidate_commands)
return diff

View file

@@ -206,12 +206,12 @@ class Connection(NetworkConnectionBase):
httpapi = httpapi_loader.get(self._network_os, self)
if httpapi:
display.vvvv('loaded API plugin for network_os %s' % self._network_os, host=self._play_context.remote_addr)
self._implementation_plugins.append(httpapi)
httpapi.set_become(self._play_context)
httpapi.login(self.get_option('remote_user'), self.get_option('password'))
display.vvvv('loaded API plugin for network_os %s' % self._network_os, host=self._play_context.remote_addr)
else:
raise AnsibleConnectionFailure('unable to load API plugin for network_os %s' % self._network_os)
self._implementation_plugins.append(httpapi)
cliconf = cliconf_loader.get(self._network_os, self)
if cliconf:
@@ -258,7 +258,9 @@ class Connection(NetworkConnectionBase):
return self.send(path, data, **kwargs)
raise AnsibleConnectionFailure('Could not connect to {0}: {1}'.format(self._url, exc.reason))
# Try to assign a new auth token if one is given
self._auth = self.update_auth(response) or self._auth
response_text = response.read()
return response
# Try to assign a new auth token if one is given
self._auth = self.update_auth(response, response_text) or self._auth
return response, response_text

View file

@@ -210,6 +210,12 @@ def from_yaml(data):
return data
def from_yaml_all(data):
if isinstance(data, string_types):
return yaml.safe_load_all(data)
return data
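The new `from_yaml_all` filter delegates to `yaml.safe_load_all`, which yields one document per `---`-separated section of the stream. A minimal sketch of the underlying call (requires PyYAML, the same backend the filter uses):

```python
# Parse a multi-document YAML stream, as from_yaml_all does for string
# input. safe_load_all returns a generator, hence the list() wrapper.
import yaml

stream = """---
name: doc1
---
name: doc2
"""
docs = list(yaml.safe_load_all(stream))
```

In a playbook this would typically be used as `{{ lookup('file', 'multi.yml') | from_yaml_all | list }}`.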
@environmentfilter
def rand(environment, end, start=None, step=None, seed=None):
if seed is None:
@@ -600,6 +606,7 @@ class FilterModule(object):
'to_yaml': to_yaml,
'to_nice_yaml': to_nice_yaml,
'from_yaml': from_yaml,
'from_yaml_all': from_yaml_all,
# path
'basename': partial(unicode_wrap, os.path.basename),

View file

@@ -36,7 +36,7 @@ class HttpApiBase(AnsiblePlugin):
"""
pass
def update_auth(self, response):
def update_auth(self, response, response_text):
"""Return per-request auth token.
The response should be a dictionary that can be plugged into the

View file

@@ -30,14 +30,13 @@ class HttpApi(HttpApiBase):
request = request_builder(data, output)
headers = {'Content-Type': 'application/json-rpc'}
response = self.connection.send('/command-api', request, headers=headers, method='POST')
response_text = to_text(response.read())
response, response_text = self.connection.send('/command-api', request, headers=headers, method='POST')
try:
response = json.loads(response_text)
response_text = json.loads(response_text)
except ValueError:
raise ConnectionError('Response was not valid JSON, got {0}'.format(response_text))
results = handle_response(response)
results = handle_response(response_text)
if self._become:
results = results[1:]
@@ -50,8 +49,7 @@ class HttpApi(HttpApiBase):
# Fake a prompt for @enable_mode
if self._become:
return '#'
else:
return '>'
return '>'
# Imported from module_utils
def edit_config(self, config, commit=False, replace=False):
@@ -113,7 +111,13 @@ class HttpApi(HttpApiBase):
responses = list()
def run_queue(queue, output):
response = to_list(self.send_request(queue, output=output))
try:
response = to_list(self.send_request(queue, output=output))
except Exception as exc:
if check_rc:
raise
return to_text(exc)
if output == 'json':
response = [json.loads(item) for item in response]
return response
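The `check_rc` handling above either re-raises the failure or degrades it to the error text for the caller. The pattern in isolation (names here are illustrative, not the eos plugin API):

```python
# Generic sketch of the check_rc pattern: run a request, and on failure
# either propagate the exception or return its text in the result list.
def run_with_check_rc(fetch, check_rc=True):
    try:
        return fetch()
    except Exception as exc:
        if check_rc:
            raise
        return str(exc)

def boom():
    # Stand-in for a request the device rejects.
    raise RuntimeError('command rejected')
```

With `check_rc=False` the caller sees the error message inline instead of a raised exception.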

View file

@ -27,14 +27,13 @@ class HttpApi(HttpApiBase):
request = request_builder(queue, output)
headers = {'Content-Type': 'application/json'}
response = self.connection.send('/ins', request, headers=headers, method='POST')
response_text = to_text(response.read())
response, response_text = self.connection.send('/ins', request, headers=headers, method='POST')
try:
response = json.loads(response_text)
response_text = json.loads(response_text)
except ValueError:
raise ConnectionError('Response was not valid JSON, got {0}'.format(response_text))
results = handle_response(response)
results = handle_response(response_text)
if self._become:
results = results[1:]
@@ -73,17 +72,22 @@ class HttpApi(HttpApiBase):
return responses[0]
return responses
# Migrated from module_utils
def edit_config(self, command):
def edit_config(self, candidate=None, commit=True, replace=None, comment=None):
resp = list()
responses = self.send_request(command, output='config')
operations = self.connection.get_device_operations()
self.connection.check_edit_config_capabiltiy(operations, candidate, commit, replace, comment)
if replace:
candidate = 'config replace {0}'.format(replace)
responses = self.send_request(candidate, output='config')
for response in to_list(responses):
if response != '{}':
resp.append(response)
if not resp:
resp = ['']
return json.dumps(resp)
return resp
def run_commands(self, commands, check_rc=True):
"""Runs list of commands on remote device and returns results

View file

@@ -164,7 +164,7 @@ class BaseInventoryPlugin(AnsiblePlugin):
self.templar = Templar(loader=loader)
def verify_file(self, path):
''' Verify if file is usable by this plugin, base does minimal accessability check
''' Verify if file is usable by this plugin, base does minimal accessibility check
:arg path: a string that was passed as an inventory source,
it normally is a path to a config file, but this is not a requirement,
it can also be parsed itself as the inventory data to process.
@@ -273,7 +273,7 @@ class Cacheable(object):
class Constructable(object):
def _compose(self, template, variables):
''' helper method for pluigns to compose variables for Ansible based on jinja2 expression and inventory vars'''
''' helper method for plugins to compose variables for Ansible based on jinja2 expression and inventory vars'''
t = self.templar
t.set_available_variables(variables)
return t.template('%s%s%s' % (t.environment.variable_start_string, template, t.environment.variable_end_string), disable_lookups=True)
@@ -291,7 +291,7 @@ class Constructable(object):
self.inventory.set_variable(host, varname, composite)
def _add_host_to_composed_groups(self, groups, variables, host, strict=False):
''' helper to create complex groups for plugins based on jinaj2 conditionals, hosts that meet the conditional are added to group'''
''' helper to create complex groups for plugins based on jinja2 conditionals, hosts that meet the conditional are added to group'''
# process each 'group entry'
if groups and isinstance(groups, dict):
self.templar.set_available_variables(variables)

View file

@@ -258,7 +258,8 @@ class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
groups.append(cloud)
# Create a group on region
groups.append(region)
if region:
groups.append(region)
# And one by cloud_region
groups.append("%s_%s" % (cloud, region))
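The fix above guards the bare region group so that an unset region no longer produces a group with an empty name. A sketch of the full grouping logic, additionally assuming the combined `cloud_region` name sits under the same guard (the hunk shown leaves that append outside it):

```python
# Build inventory group names for a host, skipping region-derived groups
# when region is empty so no empty-named or dangling "cloud_" group is made.
def region_groups(cloud, region):
    groups = [cloud]
    if region:
        groups.append(region)
        groups.append('%s_%s' % (cloud, region))
    return groups
```

A host in cloud `mycloud` with no region then only lands in the `mycloud` group.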

View file

@@ -0,0 +1,418 @@
#
# Copyright: (c) 2018, Ansible Project
# Copyright: (c) 2018, Abhijeet Kasurde <akasurde@redhat.com>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: vmware_vm_inventory
plugin_type: inventory
short_description: VMware Guest inventory source
version_added: "2.6"
description:
- Get virtual machines as inventory hosts from VMware environment.
- Uses any file which ends with vmware.yml or vmware.yaml as a YAML configuration file.
- The inventory_hostname is always the 'Name' plus the UUID of the virtual machine. The UUID is appended because VMware allows multiple virtual machines with the same name.
extends_documentation_fragment:
- inventory_cache
requirements:
- "Python >= 2.7"
- "PyVmomi"
- "requests >= 2.3"
- "vSphere Automation SDK - For tag feature"
- "vCloud Suite SDK - For tag feature"
options:
hostname:
description: Name of vCenter or ESXi server.
required: True
env:
- name: VMWARE_SERVER
username:
description: Name of vSphere admin user.
required: True
env:
- name: VMWARE_USERNAME
password:
description: Password of vSphere admin user.
required: True
env:
- name: VMWARE_PASSWORD
port:
description: Port number used to connect to vCenter or ESXi Server.
default: 443
env:
- name: VMWARE_PORT
validate_certs:
description:
- Allows connection when SSL certificates are not valid. Set to C(false) when certificates are not trusted.
default: True
type: boolean
with_tags:
description:
- Include tags and associated virtual machines.
- Requires 'vSphere Automation SDK' and 'vCloud Suite SDK' libraries to be installed on the given controller machine.
- Please refer to the following URLs for installation steps
- 'https://code.vmware.com/web/sdk/65/vsphere-automation-python'
- 'https://code.vmware.com/web/sdk/60/vcloudsuite-python'
default: False
type: boolean
'''
EXAMPLES = '''
# Sample configuration file for VMware Guest dynamic inventory
plugin: vmware_vm_inventory
strict: False
hostname: 10.65.223.31
username: administrator@vsphere.local
password: Esxi@123$%
validate_certs: False
with_tags: True
'''
import ssl
import atexit
from ansible.errors import AnsibleError, AnsibleParserError
try:
# requests is required for exception handling of the ConnectionError
import requests
HAS_REQUESTS = True
except ImportError:
HAS_REQUESTS = False
try:
from pyVim import connect
from pyVmomi import vim, vmodl
HAS_PYVMOMI = True
except ImportError:
HAS_PYVMOMI = False
try:
from vmware.vapi.lib.connect import get_requests_connector
from vmware.vapi.security.session import create_session_security_context
from vmware.vapi.security.user_password import create_user_password_security_context
from com.vmware.cis_client import Session
from com.vmware.vapi.std_client import DynamicID
from com.vmware.cis.tagging_client import Tag, TagAssociation
HAS_VCLOUD = True
except ImportError:
HAS_VCLOUD = False
try:
from vmware.vapi.stdlib.client.factories import StubConfigurationFactory
HAS_VSPHERE = True
except ImportError:
HAS_VSPHERE = False
from ansible.plugins.inventory import BaseInventoryPlugin, Cacheable
class InventoryModule(BaseInventoryPlugin, Cacheable):
NAME = 'vmware_vm_inventory'
def _set_credentials(self):
"""
Set credentials
"""
self.hostname = self.get_option('hostname')
self.username = self.get_option('username')
self.password = self.get_option('password')
self.port = self.get_option('port')
self.with_tags = self.get_option('with_tags')
self.validate_certs = self.get_option('validate_certs')
if not HAS_VSPHERE and self.with_tags:
raise AnsibleError("Unable to find 'vSphere Automation SDK' Python library which is required."
" Please refer to this URL for installation steps"
" - https://code.vmware.com/web/sdk/65/vsphere-automation-python")
if not HAS_VCLOUD and self.with_tags:
raise AnsibleError("Unable to find 'vCloud Suite SDK' Python library which is required."
" Please refer to this URL for installation steps"
" - https://code.vmware.com/web/sdk/60/vcloudsuite-python")
if not all([self.hostname, self.username, self.password]):
raise AnsibleError("Missing one of the following: hostname, username, password. Please read "
"the documentation for more information.")
def _login_vapi(self):
"""
Login to vCenter API using REST call
Returns: connection object
"""
session = requests.Session()
session.verify = self.validate_certs
if not self.validate_certs:
# Disable warning shown at stdout
requests.packages.urllib3.disable_warnings()
vcenter_url = "https://%s/api" % self.hostname
# Get request connector
connector = get_requests_connector(session=session, url=vcenter_url)
# Create standard Configuration
stub_config = StubConfigurationFactory.new_std_configuration(connector)
# Use username and password in the security context to authenticate
security_context = create_user_password_security_context(self.username, self.password)
# Login
stub_config.connector.set_security_context(security_context)
# Create the stub for the session service and login by creating a session.
session_svc = Session(stub_config)
session_id = session_svc.create()
# After successful authentication, store the session identifier in the security
# context of the stub and use that for all subsequent remote requests
session_security_context = create_session_security_context(session_id)
stub_config.connector.set_security_context(session_security_context)
if stub_config is None:
raise AnsibleError("Failed to login to %s using %s" % (self.hostname, self.username))
return stub_config
def _login(self):
"""
Login to vCenter or ESXi server
Returns: connection object
"""
if self.validate_certs and not hasattr(ssl, 'SSLContext'):
raise AnsibleError('pyVim does not support changing verification mode with python < 2.7.9. Either update '
'python or set validate_certs to false in configuration YAML file.')
ssl_context = None
if not self.validate_certs and hasattr(ssl, 'SSLContext'):
ssl_context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
ssl_context.verify_mode = ssl.CERT_NONE
service_instance = None
try:
service_instance = connect.SmartConnect(host=self.hostname, user=self.username,
pwd=self.password, sslContext=ssl_context,
port=self.port)
except vim.fault.InvalidLogin as e:
raise AnsibleParserError("Unable to log on to vCenter or ESXi API at %s:%s as %s: %s" % (self.hostname, self.port, self.username, e.msg))
except vim.fault.NoPermission as e:
raise AnsibleParserError("User %s does not have the required permission"
" to log on to vCenter or ESXi API at %s:%s : %s" % (self.username, self.hostname, self.port, e.msg))
except (requests.ConnectionError, ssl.SSLError) as e:
raise AnsibleParserError("Unable to connect to vCenter or ESXi API at %s on TCP/%s: %s" % (self.hostname, self.port, e))
except vmodl.fault.InvalidRequest as e:
# Request is malformed
raise AnsibleParserError("Failed to get a response from server %s:%s as "
"request is malformed: %s" % (self.hostname, self.port, e.msg))
except Exception as e:
raise AnsibleParserError("Unknown error while connecting to vCenter or ESXi API at %s:%s : %s" % (self.hostname, self.port, e))
if service_instance is None:
raise AnsibleParserError("Unknown error while connecting to vCenter or ESXi API at %s:%s" % (self.hostname, self.port))
atexit.register(connect.Disconnect, service_instance)
return service_instance.RetrieveContent()
def verify_file(self, path):
"""
Verify plugin configuration file and mark this plugin active
Args:
path: Path of configuration YAML file
Returns: True if everything is correct, else False
"""
valid = False
if super(InventoryModule, self).verify_file(path):
if path.endswith(('vmware.yaml', 'vmware.yml')):
valid = True
if not HAS_REQUESTS:
raise AnsibleParserError('Please install "requests" Python module as this is required'
' for VMware Guest dynamic inventory plugin.')
elif not HAS_PYVMOMI:
raise AnsibleParserError('Please install "PyVmomi" Python module as this is required'
' for VMware Guest dynamic inventory plugin.')
if HAS_REQUESTS:
# Pyvmomi 5.5 and onwards requires requests 2.3
# https://github.com/vmware/pyvmomi/blob/master/requirements.txt
required_version = (2, 3)
requests_version = requests.__version__.split(".")[:2]
try:
requests_major_minor = tuple(map(int, requests_version))
except ValueError:
raise AnsibleParserError("Failed to parse 'requests' library version.")
if requests_major_minor < required_version:
raise AnsibleParserError("'requests' library version should"
" be >= %s, found: %s." % (".".join([str(w) for w in required_version]),
requests.__version__))
valid = True
return valid
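The requests version gate in `verify_file` above reduces to a major.minor tuple comparison against `(2, 3)`; sketched standalone (the function name is illustrative):

```python
# Compare the major.minor of an installed version string against a
# required minimum, mirroring the check in verify_file.
def meets_required(version_string, required=(2, 3)):
    parts = version_string.split('.')[:2]
    try:
        major_minor = tuple(map(int, parts))
    except ValueError:
        raise ValueError("Failed to parse version %r" % version_string)
    return major_minor >= required
```

Tuple comparison handles the multi-digit minor case correctly, where naive string comparison would rank "2.18" below "2.3".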
def parse(self, inventory, loader, path, cache=True):
"""
Parses the inventory file
"""
super(InventoryModule, self).parse(inventory, loader, path, cache=cache)
cache_key = self.get_cache_key(path)
config_data = self._read_config_data(path)
source_data = None
if cache:
cache = self.get_option('cache')
update_cache = False
if cache:
try:
source_data = self.cache.get(cache_key)
except KeyError:
update_cache = True
# set _options from config data
self._consume_options(config_data)
self._set_credentials()
self.content = self._login()
if self.with_tags:
self.rest_content = self._login_vapi()
using_current_cache = cache and not update_cache
cacheable_results = self._populate_from_source(source_data, using_current_cache)
if update_cache:
self.cache.set(cache_key, cacheable_results)
def _populate_from_cache(self, source_data):
"""
Populate inventory from cache
"""
hostvars = source_data.pop('_meta', {}).get('hostvars', {})
for group in source_data:
if group == 'all':
continue
else:
self.inventory.add_group(group)
self.inventory.add_child('all', group)
if not source_data:
for host in hostvars:
self.inventory.add_host(host)
def _populate_from_source(self, source_data, using_current_cache):
"""
Populate inventory data from direct source
"""
if using_current_cache:
self._populate_from_cache(source_data)
return source_data
cacheable_results = {}
hostvars = {}
objects = self._get_managed_objects_properties(vim_type=vim.VirtualMachine, properties=['name'])
if self.with_tags:
tag_svc = Tag(self.rest_content)
tag_association = TagAssociation(self.rest_content)
tags_info = dict()
tags = tag_svc.list()
for tag in tags:
tag_obj = tag_svc.get(tag)
tags_info[tag_obj.id] = tag_obj.name
if tag_obj.name not in cacheable_results:
cacheable_results[tag_obj.name] = {'hosts': []}
self.inventory.add_group(tag_obj.name)
for temp_vm_object in objects:
for temp_vm_object_property in temp_vm_object.propSet:
# VMware does not provide a way to uniquely identify a VM by its name
# i.e. there can be two virtual machines with the same name
# Appending "_" and the VMware UUID to make it unique
current_host = temp_vm_object_property.val + "_" + temp_vm_object.obj.config.uuid
if current_host not in hostvars:
hostvars[current_host] = {}
self.inventory.add_host(current_host)
# Only gather tag-related facts if the vCloud and vSphere SDKs are installed.
if HAS_VCLOUD and HAS_VSPHERE and self.with_tags:
# Add virtual machine to appropriate tag group
vm_mo_id = temp_vm_object.obj._GetMoId()
vm_dynamic_id = DynamicID(type='VirtualMachine', id=vm_mo_id)
attached_tags = tag_association.list_attached_tags(vm_dynamic_id)
for tag_id in attached_tags:
self.inventory.add_child(tags_info[tag_id], current_host)
cacheable_results[tags_info[tag_id]]['hosts'].append(current_host)
# Based on power state of virtual machine
vm_power = temp_vm_object.obj.summary.runtime.powerState
if vm_power not in cacheable_results:
cacheable_results[vm_power] = []
self.inventory.add_group(vm_power)
cacheable_results[vm_power].append(current_host)
self.inventory.add_child(vm_power, current_host)
# Based on guest id
vm_guest_id = temp_vm_object.obj.config.guestId
if vm_guest_id and vm_guest_id not in cacheable_results:
cacheable_results[vm_guest_id] = []
self.inventory.add_group(vm_guest_id)
cacheable_results[vm_guest_id].append(current_host)
self.inventory.add_child(vm_guest_id, current_host)
return cacheable_results
def _get_managed_objects_properties(self, vim_type, properties=None):
"""
Look up a Managed Object Reference in vCenter / ESXi Environment
:param vim_type: Type of vim object e.g, for datacenter - vim.Datacenter
:param properties: List of properties related to vim object e.g. Name
:return: local content object
"""
# Get Root Folder
root_folder = self.content.rootFolder
if properties is None:
properties = ['name']
# Create Container View with default root folder
mor = self.content.viewManager.CreateContainerView(root_folder, [vim_type], True)
# Create Traversal spec
traversal_spec = vmodl.query.PropertyCollector.TraversalSpec(
name="traversal_spec",
path='view',
skip=False,
type=vim.view.ContainerView
)
# Create Property Spec
property_spec = vmodl.query.PropertyCollector.PropertySpec(
type=vim_type,  # Type of object to be retrieved
all=False,
pathSet=properties
)
# Create Object Spec
object_spec = vmodl.query.PropertyCollector.ObjectSpec(
obj=mor,
skip=True,
selectSet=[traversal_spec]
)
# Create Filter Spec
filter_spec = vmodl.query.PropertyCollector.FilterSpec(
objectSet=[object_spec],
propSet=[property_spec],
reportMissingObjectsInResults=False
)
return self.content.propertyCollector.RetrieveContents([filter_spec])

View file

@@ -47,7 +47,8 @@ class TerminalModule(TerminalBase):
re.compile(br"syntax error"),
re.compile(br"unknown command"),
re.compile(br"user not present"),
re.compile(br"invalid (.+?)at '\^' marker", re.I)
re.compile(br"invalid (.+?)at '\^' marker", re.I),
re.compile(br"baud rate of console should be (\d*) to increase severity level", re.I)
]
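The terminal plugin treats any output matching one of these stderr patterns as a command error. A sketch of that classification using two of the patterns, including the new baud-rate entry:

```python
# Classify device output as an error if any known stderr pattern matches.
# Patterns are bytes regexes, as in the terminal plugin above.
import re

patterns = [
    re.compile(br"syntax error"),
    re.compile(br"baud rate of console should be (\d*) to increase severity level", re.I),
]

def is_error(output):
    return any(p.search(output) for p in patterns)
```

The `re.I` flag lets the baud-rate pattern match regardless of the device's capitalization.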
def on_become(self, passwd=None):

View file

@@ -70,6 +70,7 @@ options:
auth_source:
description:
- Controls the source of the credentials to use for authentication.
- If not specified, the ANSIBLE_AZURE_AUTH_SOURCE environment variable will be used, defaulting to C(auto) if the variable is not defined.
- C(auto) will follow the default precedence of module parameters -> environment variables -> default profile in credential file
C(~/.azure/credentials).
- When set to C(cli), the credentials will be sourced from the default Azure CLI profile.
@@ -84,7 +85,6 @@ options:
- credential_file
- env
- msi
default: auto
version_added: 2.5
api_profile:
description:

View file

@@ -24,6 +24,10 @@ class ModuleDocFragment(object):
options:
authorize:
description:
- B(Deprecated)
- "Starting with Ansible 2.7 we recommend using C(connection: network_cli) and C(become: yes)."
- For more information please see the L(IronWare Platform Options guide, ../network/user_guide/platform_ironware.html).
- HORIZONTALLINE
- Instructs the module to enter privileged mode on the remote device
before sending any commands. If not specified, the device will
attempt to execute all commands in non-privileged mode. If the value
@@ -33,6 +37,10 @@ options:
default: 'no'
provider:
description:
- B(Deprecated)
- "Starting with Ansible 2.7 we recommend using C(connection: network_cli) and C(become: yes)."
- For more information please see the L(IronWare Platform Options guide, ../network/user_guide/platform_ironware.html).
- HORIZONTALLINE
- A dict object containing connection details.
suboptions:
host:
@@ -85,4 +93,6 @@ options:
if the console freezes before continuing. For example when saving
configurations.
default: 10
notes:
- For more information on using Ansible to manage network devices see the :ref:`Ansible Network Guide <network_guide>`
"""

View file

@@ -252,6 +252,53 @@
- output.state.ip_configurations[0].public_ip_address == None
- output.state.network_security_group == None
- name: NIC with Accelerated networking enabled
azure_rm_networkinterface:
resource_group: "{{ resource_group }}"
name: "tn{{ rpfx }}an"
virtual_network: "{{ vn.state.id }}"
subnet: "tn{{ rpfx }}"
enable_accelerated_networking: True
register: output
- assert:
that:
- output.state.enable_accelerated_networking
- output.changed
- name: NIC with Accelerated networking enabled (check idempotent)
azure_rm_networkinterface:
resource_group: "{{ resource_group }}"
name: "tn{{ rpfx }}an"
virtual_network: "{{ vn.state.id }}"
subnet: "tn{{ rpfx }}"
enable_accelerated_networking: True
register: output
- assert:
that:
- output.state.enable_accelerated_networking
- not output.changed
- name: Disable (previously enabled) Accelerated networking
azure_rm_networkinterface:
resource_group: "{{ resource_group }}"
name: "tn{{ rpfx }}an"
virtual_network: "{{ vn.state.id }}"
subnet: "tn{{ rpfx }}"
enable_accelerated_networking: False
register: output
- assert:
that:
- not output.state.enable_accelerated_networking
- name: Delete AN NIC
azure_rm_networkinterface:
resource_group: "{{ resource_group }}"
name: "tn{{ rpfx }}an"
state: absent
- name: Delete the NIC (check mode)
azure_rm_networkinterface:
resource_group: "{{ resource_group }}"
@ -24,6 +24,60 @@
that:
- 'result.changed == false'
- name: Set up console logging with level 2 (edge case)
nxos_logging: &clog2
dest: console
dest_level: 2
provider: "{{ connection }}"
state: present
register: result
- assert:
that:
- 'result.changed == true'
- '"logging console 2" in result.commands'
- name: Set up console logging with level 2 (edge case) (idempotent)
nxos_logging: *clog2
register: result
- assert: *false
- name: Set Baud Rate to less than 38400
nxos_config:
lines:
- speed 9600
parents: line console
provider: "{{ connection }}"
- name: Enable console logging with level 3 (will fail)
nxos_logging: &con3
dest: console
dest_level: 3
register: result
provider: "{{ connection }}"
ignore_errors: yes
- assert:
that:
- 'result.failed == true'
- name: Set Baud Rate to 38400
nxos_config:
lines:
- speed 38400
parents: line console
provider: "{{ connection }}"
- name: Enable console logging with level 3 (will pass)
nxos_logging: *con3
register: result
- assert:
that:
- 'result.changed == true'
- '"logging console 3" in result.commands'
- name: Logfile logging with level
nxos_logging: &llog
dest: logfile
@ -80,6 +134,24 @@
- assert: *false
- name: Configure monitor with level 5 (edge case)
nxos_logging: &mlog5
dest: monitor
dest_level: 5
provider: "{{ connection }}"
register: result
- assert:
that:
- 'result.changed == true'
- '"logging monitor 5" in result.commands'
- name: Configure monitor with level 5 (edge case) (idempotent)
nxos_logging: *mlog5
register: result
- assert: *false
- name: Configure facility with level
nxos_logging: &flog
facility: daemon
@ -122,9 +194,9 @@
- name: remove logging as collection tearDown
nxos_logging: &agg
aggregate:
- { dest: console, dest_level: 0 }
- { dest: console, dest_level: 3 }
- { dest: module, dest_level: 2 }
- { dest: monitor, dest_level: 3 }
- { dest: monitor, dest_level: 5 }
- { dest: logfile, dest_level: 1, name: test }
- { facility: daemon, facility_level: 4 }
- { dest: server, remote_server: test-syslogserver.com, facility: auth, dest_level: 1 }
@ -0,0 +1 @@
windows-1252 Special Characters: €‚ƒ„…†‡ˆ‰Š‹ŒŽ‘’“”•–—˜™š›œžŸ ¡¢£¤¥¦§¨©ª«¬­®¯°±²³´µ¶·¸¹º»¼½¾¿ÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖ×ØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõö÷øùúûüýþÿ
@ -0,0 +1 @@
windows-1252 Special Characters: <20><><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD><EFBFBD>ウЖ<E382A6><D096><EFBFBD>渦慨偽係杭纂従神疎団兎波品北洋椀冫嘖孛忤掣桀毳烙痰邃繙艾蜉謖邇關髓齡<E9AB93><E9BDA1>巐鄕<E5B790><E98495>
@ -619,5 +619,28 @@
- 'template_results.mode == "0547"'
- 'stat_results.stat["mode"] == "0547"'
# Test output_encoding
- name: Prepare the list of encodings we want to check, including empty string for defaults
set_fact:
template_encoding_1252_encodings: ['', 'utf-8', 'windows-1252']
- name: Copy known good encoding_1252_*.expected into place
copy:
src: 'encoding_1252_{{ item | default("utf-8", true) }}.expected'
dest: '{{ output_dir }}/encoding_1252_{{ item }}.expected'
loop: '{{ template_encoding_1252_encodings }}'
- name: Generate the encoding_1252_* files from templates using various encoding combinations
template:
src: 'encoding_1252.j2'
dest: '{{ output_dir }}/encoding_1252_{{ item }}.txt'
output_encoding: '{{ item }}'
loop: '{{ template_encoding_1252_encodings }}'
- name: Compare the encoding_1252_* templated files to known good
command: diff -u {{ output_dir }}/encoding_1252_{{ item }}.expected {{ output_dir }}/encoding_1252_{{ item }}.txt
register: encoding_1252_diff_result
loop: '{{ template_encoding_1252_encodings }}'
# aliases file requires root for template tests so this should be safe
- include: backup_test.yml
@ -0,0 +1 @@
windows-1252 Special Characters: €‚ƒ„…†‡ˆ‰Š‹ŒŽ‘’“”•–—˜™š›œžŸ ¡¢£¤¥¦§¨©ª«¬­®¯°±²³´µ¶·¸¹º»¼½¾¿ÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖ×ØÙÚÛÜÝÞßàáâãäåæçèéêëìíîïðñòóôõö÷øùúûüýþÿ
@ -0,0 +1,149 @@
# Test code for the vmware_guest module.
# Copyright: (c) 2018, Abhijeet Kasurde <akasurde@redhat.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
- name: Wait for Flask controller to come up online
wait_for:
host: "{{ vcsim }}"
port: 5000
state: started
- name: kill vcsim
uri:
url: http://{{ vcsim }}:5000/killall
- name: start vcsim with no folders
uri:
url: http://{{ vcsim }}:5000/spawn?datacenter=1&cluster=1&folder=0
register: vcsim_instance
- name: Wait for Flask controller to come up online
wait_for:
host: "{{ vcsim }}"
port: 443
state: started
- name: get a list of VMs from vcsim
uri:
url: http://{{ vcsim }}:5000/govc_find?filter=VM
register: vmlist
- debug: var=vcsim_instance
- debug: var=vmlist
- name: create new VMs with boot_firmware as 'bios'
vmware_guest:
validate_certs: False
hostname: "{{ vcsim }}"
username: "{{ vcsim_instance['json']['username'] }}"
password: "{{ vcsim_instance['json']['password'] }}"
name: "{{ 'newvm_' + item|basename }}"
guest_id: centos64Guest
datacenter: "{{ (item|basename).split('_')[0] }}"
hardware:
num_cpus: 4
boot_firmware: "bios"
memory_mb: 512
disk:
- size: 1gb
type: thin
autoselect_datastore: True
state: poweredoff
folder: "{{ item|dirname }}"
with_items: "{{ vmlist['json'] }}"
register: clone_d1_c1_f0
- debug: var=clone_d1_c1_f0
- name: assert that changes were made
assert:
that:
- "clone_d1_c1_f0.results|map(attribute='changed')|unique|list == [true]"
# VCSIM does not recognize existing VMs' boot firmware
#- name: create new VMs again with boot_firmware as 'bios'
# vmware_guest:
# validate_certs: False
# hostname: "{{ vcsim }}"
# username: "{{ vcsim_instance['json']['username'] }}"
# password: "{{ vcsim_instance['json']['password'] }}"
# name: "{{ 'newvm_' + item|basename }}"
# guest_id: centos64Guest
# datacenter: "{{ (item|basename).split('_')[0] }}"
# hardware:
# num_cpus: 4
# boot_firmware: "bios"
# memory_mb: 512
# disk:
# - size: 1gb
# type: thin
# autoselect_datastore: True
# state: poweredoff
# folder: "{{ item|dirname }}"
# with_items: "{{ vmlist['json'] }}"
# register: clone_d1_c1_f0
#- debug: var=clone_d1_c1_f0
#- name: assert that changes were not made
# assert:
# that:
# - "clone_d1_c1_f0.results|map(attribute='changed')|unique|list == [false]"
- name: create new VMs with boot_firmware as 'efi'
vmware_guest:
validate_certs: False
hostname: "{{ vcsim }}"
username: "{{ vcsim_instance['json']['username'] }}"
password: "{{ vcsim_instance['json']['password'] }}"
name: "{{ 'newvm_efi_' + item|basename }}"
guest_id: centos64Guest
datacenter: "{{ (item|basename).split('_')[0] }}"
hardware:
num_cpus: 4
boot_firmware: "efi"
memory_mb: 512
disk:
- size: 1gb
type: thin
autoselect_datastore: True
state: poweredoff
folder: "{{ item|dirname }}"
with_items: "{{ vmlist['json'] }}"
register: clone_d1_c1_f0
- debug: var=clone_d1_c1_f0
- name: assert that changes were made
assert:
that:
- "clone_d1_c1_f0.results|map(attribute='changed')|unique|list == [true]"
# VCSIM does not recognize existing VMs' boot firmware
#- name: create new VMs again with boot_firmware as 'efi'
# vmware_guest:
# validate_certs: False
# hostname: "{{ vcsim }}"
# username: "{{ vcsim_instance['json']['username'] }}"
# password: "{{ vcsim_instance['json']['password'] }}"
# name: "{{ 'newvm_efi_' + item|basename }}"
# guest_id: centos64Guest
# datacenter: "{{ (item|basename).split('_')[0] }}"
# hardware:
# num_cpus: 4
# boot_firmware: "efi"
# memory_mb: 512
# disk:
# - size: 1gb
# type: thin
# autoselect_datastore: True
# state: poweredoff
# folder: "{{ item|dirname }}"
# with_items: "{{ vmlist['json'] }}"
# register: clone_d1_c1_f0
#- debug: var=clone_d1_c1_f0
#- name: assert that changes were not made
# assert:
# that:
# - "clone_d1_c1_f0.results|map(attribute='changed')|unique|list == [false]"
@ -30,4 +30,5 @@
- include: disk_size_d1_c1_f0.yml
- include: network_with_device.yml
- include: disk_mode_d1_c1_f0.yml
- include: linked_clone_d1_c1_f0.yml
- include: linked_clone_d1_c1_f0.yml
- include: boot_firmware_d1_c1_f0.yml
@ -0,0 +1,2 @@
cloud/vcenter
unsupported
@ -0,0 +1,157 @@
# Test code for the vmware_guest_custom_attribute_defs module.
# Copyright: (c) 2018, Abhijeet Kasurde <akasurde@redhat.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
- name: store the vcenter container ip
set_fact:
vcsim: "{{ lookup('env', 'vcenter_host') }}"
- debug: var=vcsim
- name: Wait for Flask controller to come up online
wait_for:
host: "{{ vcsim }}"
port: 5000
state: started
- name: kill vcsim
uri:
url: http://{{ vcsim }}:5000/killall
- name: start vcsim
uri:
url: http://{{ vcsim }}:5000/spawn?datacenter=1&cluster=1&folder=0
register: vcsim_instance
- name: Wait for vcsim server to come up online
wait_for:
host: "{{ vcsim }}"
port: 443
state: started
- name: list custom attributes
vmware_guest_custom_attribute_defs:
hostname: "{{ vcsim }}"
username: "{{ vcsim_instance['json']['username'] }}"
password: "{{ vcsim_instance['json']['password'] }}"
validate_certs: False
state: list
register: list_attrib_def
- debug: var=list_attrib_def
- assert:
that:
- "not list_attrib_def.changed"
- name: add custom attribute definition
vmware_guest_custom_attribute_defs:
validate_certs: False
hostname: "{{ vcsim }}"
username: "{{ vcsim_instance['json']['username'] }}"
password: "{{ vcsim_instance['json']['password'] }}"
state: present
attribute_key: sample_5
register: add_attrib_def
- debug: var=add_attrib_def
- assert:
that:
- "add_attrib_def.changed"
- "'sample_5' in add_attrib_def.instance"
- name: list custom attributes
vmware_guest_custom_attribute_defs:
validate_certs: False
hostname: "{{ vcsim }}"
username: "{{ vcsim_instance['json']['username'] }}"
password: "{{ vcsim_instance['json']['password'] }}"
state: list
register: list_attrib_def
- debug: var=list_attrib_def
- assert:
that:
- "not list_attrib_def.changed"
- name: add attribute definition again
vmware_guest_custom_attribute_defs:
validate_certs: False
hostname: "{{ vcsim }}"
username: "{{ vcsim_instance['json']['username'] }}"
password: "{{ vcsim_instance['json']['password'] }}"
state: present
attribute_key: sample_5
register: add_attrib_def
- debug: var=add_attrib_def
- assert:
that:
- "not add_attrib_def.changed"
- name: list attribute definition
vmware_guest_custom_attribute_defs:
validate_certs: False
hostname: "{{ vcsim }}"
username: "{{ vcsim_instance['json']['username'] }}"
password: "{{ vcsim_instance['json']['password'] }}"
state: list
register: list_attrib_def
- debug: var=list_attrib_def
- assert:
that:
- "not list_attrib_def.changed"
- name: remove attribute definition
vmware_guest_custom_attribute_defs:
validate_certs: False
hostname: "{{ vcsim }}"
username: "{{ vcsim_instance['json']['username'] }}"
password: "{{ vcsim_instance['json']['password'] }}"
state: absent
attribute_key: sample_5
register: remove_attrib_def
- debug: var=remove_attrib_def
- assert:
that:
- "remove_attrib_def.changed"
- "'sample_5' not in remove_attrib_def.instance"
- name: remove attribute definition again (idempotency)
vmware_guest_custom_attribute_defs:
validate_certs: False
hostname: "{{ vcsim }}"
username: "{{ vcsim_instance['json']['username'] }}"
password: "{{ vcsim_instance['json']['password'] }}"
state: absent
attribute_key: sample_5
register: remove_attrib_def
- debug: var=remove_attrib_def
- assert:
that:
- "not remove_attrib_def.changed"
- "'sample_5' not in remove_attrib_def.instance"
- name: list attribute definition
vmware_guest_custom_attribute_defs:
validate_certs: False
hostname: "{{ vcsim }}"
username: "{{ vcsim_instance['json']['username'] }}"
password: "{{ vcsim_instance['json']['password'] }}"
state: list
register: list_attrib_def
- debug: var=list_attrib_def
- assert:
that:
- "not list_attrib_def.changed"
@ -0,0 +1,2 @@
cloud/vcenter
unsupported
@ -0,0 +1,144 @@
# Test code for the vmware_guest_custom_attributes module.
# Copyright: (c) 2018, Abhijeet Kasurde <akasurde@redhat.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# TODO: Current pinned version of vcsim does not support custom fields
# commenting testcase below
- name: store the vcenter container ip
set_fact:
vcsim: "{{ lookup('env', 'vcenter_host') }}"
- debug: var=vcsim
- name: Wait for Flask controller to come up online
wait_for:
host: "{{ vcsim }}"
port: 5000
state: started
- name: kill vcsim
uri:
url: http://{{ vcsim }}:5000/killall
- name: start vcsim
uri:
url: http://{{ vcsim }}:5000/spawn?datacenter=1&cluster=1&folder=0
register: vcsim_instance
- name: Wait for vcsim server to come up online
wait_for:
host: "{{ vcsim }}"
port: 443
state: started
- name: get a list of Datacenters from vcsim
uri:
url: http://{{ vcsim }}:5000/govc_find?filter=DC
register: datacenters
- set_fact: dc1="{{ datacenters['json'][0] }}"
- name: get a list of virtual machines from vcsim
uri:
url: http://{{ vcsim }}:5000/govc_find?filter=VM
register: vms
- set_fact: vm1="{{ vms['json'][0] }}"
- name: Add custom attribute to the given virtual machine
vmware_guest_custom_attributes:
validate_certs: False
hostname: "{{ vcsim }}"
username: "{{ vcsim_instance['json']['username'] }}"
password: "{{ vcsim_instance['json']['password'] }}"
datacenter: "{{ dc1 | basename }}"
name: "{{ vm1 | basename }}"
folder: "{{ vm1 | dirname }}"
state: present
attributes:
- name: 'sample_1'
value: 'sample_1_value'
- name: 'sample_2'
value: 'sample_2_value'
- name: 'sample_3'
value: 'sample_3_value'
register: guest_facts_0001
- debug: msg="{{ guest_facts_0001 }}"
- assert:
that:
- "guest_facts_0001.changed"
- name: Add custom attribute to the given virtual machine again
vmware_guest_custom_attributes:
validate_certs: False
hostname: "{{ vcsim }}"
username: "{{ vcsim_instance['json']['username'] }}"
password: "{{ vcsim_instance['json']['password'] }}"
datacenter: "{{ dc1 | basename }}"
name: "{{ vm1 | basename }}"
folder: "{{ vm1 | dirname }}"
state: present
attributes:
- name: 'sample_1'
value: 'sample_1_value'
- name: 'sample_2'
value: 'sample_2_value'
- name: 'sample_3'
value: 'sample_3_value'
register: guest_facts_0002
- debug: msg="{{ guest_facts_0002 }}"
- assert:
that:
- "not guest_facts_0002.changed"
- name: Remove custom attribute from the given virtual machine
vmware_guest_custom_attributes:
validate_certs: False
hostname: "{{ vcsim }}"
username: "{{ vcsim_instance['json']['username'] }}"
password: "{{ vcsim_instance['json']['password'] }}"
datacenter: "{{ dc1 | basename }}"
name: "{{ vm1 | basename }}"
folder: "{{ vm1 | dirname }}"
state: absent
attributes:
- name: 'sample_1'
- name: 'sample_2'
- name: 'sample_3'
register: guest_facts_0004
- debug: msg="{{ guest_facts_0004 }}"
- assert:
that:
- "guest_facts_0004.changed"
# TODO: vcsim returns duplicate values so removing custom attributes
# results in change. vCenter show correct behavior. Commenting this
# till this is supported by vcsim.
#- name: Remove custom attribute from the given virtual machine again
# vmware_guest_custom_attributes:
# validate_certs: False
# hostname: "{{ vcsim }}"
# username: "{{ vcsim_instance['json']['username'] }}"
# password: "{{ vcsim_instance['json']['password'] }}"
# datacenter: "{{ dc1 | basename }}"
# name: "{{ vm1 | basename }}"
# folder: "{{ vm1 | dirname }}"
# state: absent
# attributes:
# - name: 'sample_1'
# - name: 'sample_2'
# - name: 'sample_3'
# register: guest_facts_0005
#- debug: msg="{{ guest_facts_0005 }}"
#- assert:
# that:
# - "not guest_facts_0005.changed"
@ -0,0 +1,2 @@
shippable/vcenter/group1
cloud/vcenter
@ -0,0 +1,78 @@
# Test code for the vmware_guest_move module
# Copyright: (c) 2018, Jose Angel Munoz <josea.munoz@gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
- name: store the vcenter container ip
set_fact:
vcsim: "{{ lookup('env', 'vcenter_host') }}"
- debug: var=vcsim
- name: Wait for Flask controller to come up online
wait_for:
host: "{{ vcsim }}"
port: 5000
state: started
- name: kill vcsim
uri:
url: http://{{ vcsim }}:5000/killall
- name: start vcsim
uri:
url: http://{{ vcsim }}:5000/spawn?folder=2&dc=2
register: vcsim_instance
- name: Wait for Flask controller to come up online
wait_for:
host: "{{ vcsim }}"
port: 443
state: started
- debug: var=vcsim_instance
- name: get a list of virtual machines from vcsim
uri:
url: http://{{ vcsim }}:5000/govc_find?filter=VM
register: vms
- set_fact: vm1="{{ vms['json'][0] }}"
# Testcase 0001: Move vm and get changed status
- name: Move VM (Changed)
vmware_guest_move:
validate_certs: false
hostname: "{{ vcsim }}"
username: "{{ vcsim_instance['json']['username'] }}"
password: "{{ vcsim_instance['json']['password'] }}"
datacenter: "{{ (vm1|basename).split('_')[0] }}"
name: "{{ vm1|basename }}"
dest_folder: F1/DC1/vm/F1
register: vm_facts_0001
# Testcase 0002: Move vm and get OK status (Already Moved)
- name: Move VM (OK)
vmware_guest_move:
validate_certs: false
hostname: "{{ vcsim }}"
username: "{{ vcsim_instance['json']['username'] }}"
password: "{{ vcsim_instance['json']['password'] }}"
datacenter: "{{ (vm1|basename).split('_')[0] }}"
name: "{{ vm1|basename }}"
dest_folder: F1/DC1/vm/F1
register: vm_facts_0002
- debug:
msg: "{{ vm_facts_0001 }}"
- debug:
msg: "{{ vm_facts_0002 }}"
- name: get all VMs
uri:
url: http://{{ vcsim }}:5000/govc_find?filter=VM
register: vms_diff
- name: Difference
debug:
var: vms_diff.json | difference(vms.json)
@ -1,3 +1,2 @@
shippable/vcenter/group1
cloud/vcenter
unsupported
@ -0,0 +1,3 @@
shippable/vcenter/group1
cloud/vcenter
destructive
@ -0,0 +1,5 @@
[defaults]
inventory = test-config.vmware.yaml
[inventory]
enable_plugins = vmware_vm_inventory
@ -0,0 +1,37 @@
#!/usr/bin/env bash
[[ -n "$DEBUG" || -n "$ANSIBLE_DEBUG" ]] && set -x
set -euo pipefail
export ANSIBLE_CONFIG=ansible.cfg
export vcenter_host="${vcenter_host:-0.0.0.0}"
export VMWARE_SERVER="${vcenter_host}"
export VMWARE_USERNAME="${VMWARE_USERNAME:-user}"
export VMWARE_PASSWORD="${VMWARE_PASSWORD:-pass}"
VMWARE_CONFIG=test-config.vmware.yaml
cat > "$VMWARE_CONFIG" <<VMWARE_YAML
plugin: vmware_vm_inventory
strict: False
validate_certs: False
with_tags: False
VMWARE_YAML
trap 'rm -f "${VMWARE_CONFIG}"' INT TERM EXIT
echo "DEBUG: Using ${vcenter_host} with username ${VMWARE_USERNAME} and password ${VMWARE_PASSWORD}"
echo "Kill all previous instances"
curl "http://${vcenter_host}:5000/killall" > /dev/null 2>&1
echo "Start new VCSIM server"
curl "http://${vcenter_host}:5000/spawn?datacenter=1&cluster=1&folder=0" > /dev/null 2>&1
echo "Debugging new instances"
curl "http://${vcenter_host}:5000/govc_find"
# Get inventory
ansible-inventory -i ${VMWARE_CONFIG} --list
# Test playbook with given inventory
ansible-playbook -i ${VMWARE_CONFIG} test_vmware_vm_inventory.yml --connection=local "$@"
Some files were not shown because too many files have changed in this diff.