Collections docs generation backport (#70515)

* Build documentation for Ansible-2.10 (formerly known as ACD).

Builds plugin docs from collections whose source is on Galaxy.

The new command downloads collections from Galaxy, then finds the
plugins inside them and extracts the documentation for those plugins.
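For reference, the docs/docsite/Makefile changes below end up driving it roughly like this (a sketch of the invocation only, not the full option set):

```
# "full" fetches collections from Galaxy and builds docs for their plugins;
# "base" builds docs only for the plugins shipped with ansible-base.
../../hacking/build-ansible.py docs-build full -o rst
../../hacking/build-ansible.py docs-build base -o rst
# Both forms accept "-l <comma-separated list>" to limit which plugins are
# documented (the Makefile passes this through the PLUGINS/MODULES variables).
```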

* Update the Python syntax checks.
  * Docs builds can now require Python 3.6+.

* Move plugin formatter code out to an external tool, antsibull-docs.
  Collection owners want to be able to extract docs for their own
  websites as well.
* The Jinja2 filters, tests, and other support code have moved to antsibull.
* Remove document_plugins, as that functionality is now integrated into antsibull-docs.

* Clean up and fix bugs in other build script code:
  * The Commands class needed to have its metaclass set for abstractmethod
    to work correctly
  * Fix lint issues in some command plugins

* Add docs/docsite/rst/collections to .gitignore, since everything in that
  directory is generated and we don't want any of it saved in the git
  repository.
* Gitignore the build dir and remove the edit-docs link on module pages.

* Add docs/rst/collections as a directory to remove on make clean
* Split the collections docs from the main docs

* Remove the version selector and "Edit on GitHub" link from collection pages.
* Remove the version banner for collections-only docs.
* Clarify that examples need the collections keyword defined.

* Remove references to plugin documentation locations that no longer exist.
  * Perhaps the pages in plugins/*.rst should be deprecated
    altogether and their content moved?
  * If not, perhaps we want to rephrase and link into the collection
    documentation?
  * Or perhaps we want to link to the plugins which are present in
    collections/ansible/builtin?

* Remove PYTHONPATH from the build-ansible calls
  One of the design goals of the build-ansible.py script was for it to
  automatically set its library path to include the checkout of ansible
  and the library of code to implement itself.  Because it automatically
  includes the checkout of ansible, we don't need to set PYTHONPATH in
  the Makefile any longer.
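Concretely, recipes such as the man-page generation in the top-level Makefile simply drop the prefix (the lines below are quoted from the diff that follows):

```
# Before: the recipe had to point PYTHONPATH at the checkout explicitly
PYTHONPATH=./lib $(GENERATE_CLI) --template-file=docs/templates/man.j2 --output-dir=docs/man/man1/ --output-format man lib/ansible/cli/*.py
# After: build-ansible.py sets up its own library path, so the prefix goes away
$(GENERATE_CLI) --template-file=docs/templates/man.j2 --output-dir=docs/man/man1/ --output-format man lib/ansible/cli/*.py
```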

* Create a command to only build ansible-base plugin docs
  * When building docs for devel, only build the ansible-base docs for
    now.  This is because antsibull needs support for building a "devel
    tree" of docs.  This can be changed once that support is implemented.
  * When building docs for the sanity tests, only build the ansible-base
    plugin docs for now.  Those are the docs that live in this repo, so
    that seems appropriate for now.
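In practice that means the docsite can now be built two ways; a sketch of the new base-only targets (from the docs/docsite/Makefile changes below):

```
# Full docs (downloads collections from Galaxy via antsibull-docs)
make -C docs/docsite htmldocs
# ansible-base plugin docs only -- what devel and the sanity tests use for now
make -C docs/docsite base_generate_rst      # generate the rst only
make -C docs/docsite base_singlehtmldocs    # rst plus a single-page HTML build
```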

* Docs: User guide overhaul, part 5 (#70307)

(cherry picked from commit db354c0300)

* Need to return any error code from running antsibull-docs (#70763)

This way we fail early if there's a problem.
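The actual fix lands in the docs-build command plugin, but the effect is the same as this illustrative shell sketch: capture antsibull-docs' exit status and pass it back up so the build stops on failure.

```
# Illustrative only -- propagate the exit status instead of ignoring it
antsibull-docs "$@"
rc=$?
[ "$rc" -eq 0 ] || exit "$rc"
```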

(cherry picked from commit 1e3989c9f7)

Co-authored-by: Alicia Cozine <879121+acozine@users.noreply.github.com>
Author: Toshio Kuratomi, 2020-07-20 14:28:35 -07:00 (committed by GitHub)
Parent: 3eba8b8792
Commit: 46b1a999c6
67 changed files with 781 additions and 2248 deletions

.gitignore

@@ -37,6 +37,8 @@ docs/docsite/rst/cli/ansible.rst
 docs/docsite/rst/dev_guide/collections_galaxy_meta.rst
 docs/docsite/rst/dev_guide/testing/sanity/index.rst.new
 docs/docsite/rst/modules/*.rst
+docs/docsite/rst/collections/*.rst
+docs/docsite/rst/collections/*/*.rst
 docs/docsite/rst/playbooks_directives.rst
 docs/docsite/rst/plugins_by_category.rst
 docs/docsite/rst/plugins/*/*.rst

@@ -275,7 +275,7 @@ linkcheckdocs:
 .PHONY: generate_rst
 generate_rst: lib/ansible/cli/*.py
 	mkdir -p ./docs/man/man1/ ; \
-	PYTHONPATH=./lib $(GENERATE_CLI) --template-file=docs/templates/man.j2 --output-dir=docs/man/man1/ --output-format man lib/ansible/cli/*.py
+	$(GENERATE_CLI) --template-file=docs/templates/man.j2 --output-dir=docs/man/man1/ --output-format man lib/ansible/cli/*.py

 docs: generate_rst

@@ -1,6 +1,6 @@
 OS := $(shell uname -s)
 SITELIB = $(shell python -c "from distutils.sysconfig import get_python_lib; print get_python_lib()"):
-PLUGIN_FORMATTER=../../hacking/build-ansible.py document-plugins
+PLUGIN_FORMATTER=../../hacking/build-ansible.py docs-build
 TESTING_FORMATTER=../bin/testing_formatter.sh
 KEYWORD_DUMPER=../../hacking/build-ansible.py document-keywords
 CONFIG_DUMPER=../../hacking/build-ansible.py document-config
@@ -12,23 +12,35 @@ else
 CPUS ?= $(shell nproc)
 endif

-# Sets the build output directory if it's not already specified
+# Sets the build output directory for the main docsite if it's not already specified
 ifndef BUILDDIR
 	BUILDDIR = _build
 endif

-MODULE_ARGS=
-ifdef MODULES
-	MODULE_ARGS = -l $(MODULES)
-endif
-
+# Backwards compat for separate VARS
 PLUGIN_ARGS=
+ifdef MODULES
+ifndef PLUGINS
+	PLUGIN_ARGS = -l $(MODULES)
+else
+	PLUGIN_ARGS = -l $(MODULES),$(PLUGINS)
+endif
+else
 ifdef PLUGINS
 	PLUGIN_ARGS = -l $(PLUGINS)
 endif
+endif

 DOC_PLUGINS ?= become cache callback cliconf connection httpapi inventory lookup netconf shell strategy vars

+PYTHON=python
+# fetch version from project release.py as single source-of-truth
+VERSION := $(shell $(PYTHON) ../../packaging/release/versionhelper/version_helper.py --raw || echo error)
+ifeq ($(findstring error,$(VERSION)), error)
+$(error "version_helper failed")
+endif
+
 assertrst:
 ifndef rst
 	$(error specify document or pattern with rst=somefile.rst)
@@ -38,7 +50,8 @@ all: docs
 docs: htmldocs

-generate_rst: collections_meta config cli keywords modules plugins testing
+generate_rst: collections_meta config cli keywords plugins testing
+base_generate_rst: collections_meta config cli keywords base_plugins testing

 htmldocs: generate_rst
 	CPUS=$(CPUS) $(MAKE) -f Makefile.sphinx html
@@ -46,6 +59,9 @@ htmldocs: generate_rst
 singlehtmldocs: generate_rst
 	CPUS=$(CPUS) $(MAKE) -f Makefile.sphinx singlehtml

+base_singlehtmldocs: base_generate_rst
+	CPUS=$(CPUS) $(MAKE) -f Makefile.sphinx singlehtml
+
 linkcheckdocs: generate_rst
 	CPUS=$(CPUS) $(MAKE) -f Makefile.sphinx linkcheck
@@ -58,7 +74,7 @@ clean:
 	-rm -rf $(BUILDDIR)/html
 	-rm -rf htmlout
 	-rm -rf module_docs
-	-rm -rf _build
+	-rm -rf $(BUILDDIR)
 	-rm -f .buildinfo
 	-rm -f objects.inv
 	-rm -rf *.doctrees
@@ -70,43 +86,44 @@ clean:
 	find . -type f \( -name "*~" -or -name "#*" \) -delete
 	find . -type f \( -name "*.swp" \) -delete
 	@echo "Cleaning up generated rst"
-	rm -f rst/modules/*_by_category.rst
-	rm -f rst/modules/list_of_*.rst
-	rm -f rst/modules/*_maintained.rst
-	rm -f rst/modules/*_module.rst
-	rm -f rst/modules/*_plugin.rst
 	rm -f rst/playbooks_directives.rst
-	rm -f rst/plugins/*/*.rst
 	rm -f rst/reference_appendices/config.rst
 	rm -f rst/reference_appendices/playbooks_keywords.rst
 	rm -f rst/dev_guide/collections_galaxy_meta.rst
 	rm -f rst/cli/*.rst
+	rm -rf rst/collections/*
+	@echo "Cleaning up legacy generated rst locations"
+	rm -rf rst/modules
+	rm -f rst/plugins/*/*.rst

 .PHONY: docs clean

 collections_meta: ../templates/collections_galaxy_meta.rst.j2
-	PYTHONPATH=../../lib $(COLLECTION_DUMPER) --template-file=../templates/collections_galaxy_meta.rst.j2 --output-dir=rst/dev_guide/ ../../lib/ansible/galaxy/data/collections_galaxy_meta.yml
+	$(COLLECTION_DUMPER) --template-file=../templates/collections_galaxy_meta.rst.j2 --output-dir=rst/dev_guide/ ../../lib/ansible/galaxy/data/collections_galaxy_meta.yml

 # TODO: make generate_man output dir cli option
 cli:
 	mkdir -p rst/cli
-	PYTHONPATH=../../lib $(GENERATE_CLI) --template-file=../templates/cli_rst.j2 --output-dir=rst/cli/ --output-format rst ../../lib/ansible/cli/*.py
+	$(GENERATE_CLI) --template-file=../templates/cli_rst.j2 --output-dir=rst/cli/ --output-format rst ../../lib/ansible/cli/*.py

 keywords: ../templates/playbooks_keywords.rst.j2
-	PYTHONPATH=../../lib $(KEYWORD_DUMPER) --template-dir=../templates --output-dir=rst/reference_appendices/ ./keyword_desc.yml
+	$(KEYWORD_DUMPER) --template-dir=../templates --output-dir=rst/reference_appendices/ ./keyword_desc.yml

 config: ../templates/config.rst.j2
-	PYTHONPATH=../../lib $(CONFIG_DUMPER) --template-file=../templates/config.rst.j2 --output-dir=rst/reference_appendices/ ../../lib/ansible/config/base.yml
+	$(CONFIG_DUMPER) --template-file=../templates/config.rst.j2 --output-dir=rst/reference_appendices/ ../../lib/ansible/config/base.yml

-modules: ../templates/plugin.rst.j2
-	PYTHONPATH=../../lib $(PLUGIN_FORMATTER) -t rst --template-dir=../templates --module-dir=../../lib/ansible/modules -o rst/modules/ $(MODULE_ARGS)
+# For now, if we're building on devel, just build base docs.  In the future we'll want to build docs that
+# are the latest versions on galaxy (using a different antsibull-docs subcommand)
+plugins:
+	if expr match "$(VERSION)" '.*[.]dev[0-9]\+$$' &> /dev/null; then \
+		$(PLUGIN_FORMATTER) base -o rst $(PLUGIN_ARGS);\
+	else \
+		$(PLUGIN_FORMATTER) full -o rst $(PLUGIN_ARGS);\
+	fi

-plugins: ../templates/plugin.rst.j2
-	@echo "looping over doc plugins"
-	for plugin in $(DOC_PLUGINS); \
-	do \
-	  PYTHONPATH=../../lib $(PLUGIN_FORMATTER) -t rst --plugin-type $$plugin --template-dir=../templates --module-dir=../../lib/ansible/plugins/$$plugin -o rst $(PLUGIN_ARGS); \
-	done
+# This only builds the plugin docs included with ansible-base
+base_plugins:
+	$(PLUGIN_FORMATTER) base -o rst $(PLUGIN_ARGS);\

 testing:
 	$(TESTING_FORMATTER)

@@ -10,26 +10,27 @@
         element.appendChild(para);
         document.write('</div>');
     }
+    {% if (not READTHEDOCS) and (available_versions is defined) %}
     // Create a banner if we're not the latest version
     current_url = window.location.href;
     if ((current_url.search("latest") > -1) || (current_url.search("/{{ latest_version }}/") > -1)) {
         // no banner for latest release
     } else if (current_url.search("devel") > -1) {
         document.write('<div id="banner_id" class="admonition caution">');
         para = document.createElement('p');
-        banner_text=document.createTextNode("You are reading the *devel* version of the Ansible documentation - most module documentation is currently missing as the modules have moved to collections. Until docs catches up to this change, use the version selection to the left if you want module documentation or the latest stable release version. The *devel* version is not guaranteed stable.");
+        banner_text=document.createTextNode("You are reading the *devel* version of the Ansible documentation - this version is not guaranteed stable. Use the version selection to the left if you want the latest stable released version.");
         para.appendChild(banner_text);
         element = document.getElementById('banner_id');
         element.appendChild(para);
         document.write('</div>');
     } else {
         document.write('<div id="banner_id" class="admonition caution">');
         para = document.createElement('p');
         banner_text=document.createTextNode("You are reading an older version of the Ansible documentation. Use the version selection to the left if you want the latest stable released version.");
         para.appendChild(banner_text);
         element = document.getElementById('banner_id');
         element.appendChild(para);
         document.write('</div>');
     }
+    {% endif %}
 </script>

@@ -1,7 +1,7 @@
 <!--- Based on https://github.com/rtfd/sphinx_rtd_theme/pull/438/files -->
 {# Creates dropdown version selection in the top-left navigation. #}
 <div class="version">
-  {% if not READTHEDOCS %}
+  {% if (not READTHEDOCS) and (available_versions is defined) %}
   <div class="version-dropdown">
     <select class="version-list" id="version-list" onchange="javascript:location.href = this.value;">
       <script> x = document.getElementById("version-list"); </script>

@@ -0,0 +1,17 @@
+# We also need an example of modules hosted in Automation Hub
+# We'll likely move to data hosted in botmeta instead of a standalone file but
+# we'll need all of these same details.
+module:
+  purefa_user:
+    source: 'https://galaxy.ansible.com/'
+    fqcn: 'purestorage.flasharray'
+  purefa_vg:
+    source: 'https://galaxy.ansible.com/'
+    fqcn: 'purestorage.flasharray'
+  gcp_compute_firewall_info:
+    source: 'https://galaxy.ansible.com/'
+    fqcn: 'google.cloud'
+module_utils:
+  purefa:
+    source: 'https://galaxy.ansible.com/'
+    fqcn: 'purestorage.flasharray'

@@ -6,3 +6,4 @@ sphinx==2.1.2
 sphinx-notfound-page
 Pygments >= 2.4.0
 straight.plugin # Needed for hacking/build-ansible.py which is the backend build script
+antsibull >= 0.15.0

@@ -277,6 +277,10 @@ autoclass_content = 'both'
 intersphinx_mapping = {'python': ('https://docs.python.org/2/', (None, '../python2.inv')),
                        'python3': ('https://docs.python.org/3/', (None, '../python3.inv')),
                        'jinja2': ('http://jinja.palletsprojects.com/', (None, '../jinja2.inv')),
+                       'collections': ('https://docs.ansible.com/collections/',
+                                       (None, '../collections.inv',
+                                        'http://docs.testing.ansible.com/collections/objects.inv',
+                                        '../_collections_build/html/objects.inv')),
                        'ansible_2_9': ('https://docs.ansible.com/ansible/2.9/', (None, '../ansible_2_9.inv')),
                        'ansible_2_8': ('https://docs.ansible.com/ansible/2.8/', (None, '../ansible_2_8.inv')),
                        'ansible_2_7': ('https://docs.ansible.com/ansible/2.7/', (None, '../ansible_2_7.inv')),

@@ -74,7 +74,7 @@ Ansible releases a new major release of Ansible approximately three to four time
    :maxdepth: 1
    :caption: Reference & Appendices

-   ../modules/modules_by_category
+   collections/index
    reference_appendices/playbooks_keywords
    reference_appendices/common_return_values
    reference_appendices/config

@@ -10,7 +10,7 @@ Ansible Network modules extend the benefits of simple, powerful, agentless autom
 If you're new to Ansible, or new to using Ansible for network management, start with :ref:`network_getting_started`. If you are already familiar with network automation with Ansible, see :ref:`network_advanced`.

-For documentation on using a particular network module, consult the :ref:`list of all network modules<network_modules>`. Some network modules are maintained by the Ansible community - here's a list of :ref:`network modules maintained by the Ansible Network Team<network_supported>`.
+For documentation on using a particular network module, consult the :ref:`list of all network modules<network_modules>`. Network modules for various hardware are supported by different teams including the hardware vendors themselves, volunteers from the Ansible community, and the Ansible Network Team.

 .. toctree::
    :maxdepth: 3

@@ -483,4 +483,4 @@ If you receive an connection error please double check the inventory and playboo
 * :ref:`network_guide`
 * :ref:`intro_inventory`
-* :ref:`Vault best practices <best_practices_for_variables_and_vaults>`
+* :ref:`Keeping vaulted variables visible <tip_for_variables_and_vaults>`

@@ -47,11 +47,6 @@ Plugin List
 You can use ``ansible-doc -t become -l`` to see the list of available plugins.
 Use ``ansible-doc -t become <plugin name>`` to see specific documentation and examples.

-.. toctree:: :maxdepth: 1
-    :glob:
-
-    become/*
-
 .. seealso::

    :ref:`about_playbooks`

@@ -118,11 +118,6 @@ Plugin List
 You can use ``ansible-doc -t cache -l`` to see the list of available plugins.
 Use ``ansible-doc -t cache <plugin name>`` to see specific documentation and examples.

-.. toctree:: :maxdepth: 1
-    :glob:
-
-    cache/*
-
 .. seealso::

    :ref:`action_plugins`

@@ -79,12 +79,6 @@ Plugin list
 You can use ``ansible-doc -t callback -l`` to see the list of available plugins.
 Use ``ansible-doc -t callback <plugin name>`` to see specific documents and examples.

-.. toctree:: :maxdepth: 1
-    :glob:
-
-    callback/*
-
 .. seealso::

    :ref:`action_plugins`

@@ -58,12 +58,6 @@ You can use ``ansible-doc -t connection -l`` to see the list of available plugin
 Use ``ansible-doc -t connection <plugin name>`` to see detailed documentation and examples.

-.. toctree:: :maxdepth: 1
-    :glob:
-
-    connection/*
-
 .. seealso::

    :ref:`Working with Playbooks<working_with_playbooks>`

@@ -162,11 +162,6 @@ Plugin List
 You can use ``ansible-doc -t inventory -l`` to see the list of available plugins.
 Use ``ansible-doc -t inventory <plugin name>`` to see plugin-specific documentation and examples.

-.. toctree:: :maxdepth: 1
-    :glob:
-
-    inventory/*
-
 .. seealso::

    :ref:`about_playbooks`

@@ -138,11 +138,6 @@ Plugin list
 You can use ``ansible-doc -t lookup -l`` to see the list of available plugins. Use ``ansible-doc -t lookup <plugin name>`` to see specific documents and examples.

-.. toctree:: :maxdepth: 1
-    :glob:
-
-    lookup/*
-
 .. seealso::

    :ref:`about_playbooks`

@@ -33,11 +33,6 @@ In this case, you will also want to update the :ref:`ansible_shell_executable <a
 You can further control the settings for each plugin via other configuration options
 detailed in the plugin themselves (linked below).

-.. toctree:: :maxdepth: 1
-    :glob:
-
-    shell/*
-
 .. seealso::

    :ref:`about_playbooks`

@@ -59,11 +59,6 @@ You can use ``ansible-doc -t strategy -l`` to see the list of available plugins.
 Use ``ansible-doc -t strategy <plugin name>`` to see plugin-specific specific documentation and examples.

-.. toctree:: :maxdepth: 1
-    :glob:
-
-    strategy/*
-
 .. seealso::

    :ref:`about_playbooks`

@@ -57,11 +57,6 @@ You can use ``ansible-doc -t vars -l`` to see the list of available plugins.
 Use ``ansible-doc -t vars <plugin name>`` to see specific plugin-specific documentation and examples.

-.. toctree:: :maxdepth: 1
-    :glob:
-
-    vars/*
-
 .. seealso::

    :ref:`action_plugins`

@@ -698,7 +698,7 @@ Please see the section below for a link to IRC and the Google Group, where you c
    :ref:`working_with_playbooks`
        An introduction to playbooks
    :ref:`playbooks_best_practices`
-       Best practices advice
+       Tips and tricks for playbooks
    `User Mailing List <https://groups.google.com/group/ansible-project>`_
        Have a question? Stop by the google group!
    `irc.freenode.net <http://irc.freenode.net>`_

@@ -494,7 +494,7 @@ when a term comes up on the mailing list.
    :ref:`working_with_playbooks`
        An introduction to playbooks
    :ref:`playbooks_best_practices`
-       Best practices advice
+       Tips and tricks for playbooks
    `User Mailing List <https://groups.google.com/group/ansible-devel>`_
        Have a question? Stop by the google group!
    `irc.freenode.net <http://irc.freenode.net>`_

@@ -234,7 +234,7 @@ For more information, see `this systemd issue
 Become and network automation
 =============================

-As of version 2.6, Ansible supports ``become`` for privilege escalation (entering ``enable`` mode or privileged EXEC mode) on all :ref:`Ansible-maintained platforms<network_supported>` that support ``enable`` mode. Using ``become`` replaces the ``authorize`` and ``auth_pass`` options in a ``provider`` dictionary.
+As of version 2.6, Ansible supports ``become`` for privilege escalation (entering ``enable`` mode or privileged EXEC mode) on all Ansible-maintained network platforms that support ``enable`` mode. Using ``become`` replaces the ``authorize`` and ``auth_pass`` options in a ``provider`` dictionary.

 You must set the connection type to either ``connection: network_cli`` or ``connection: httpapi`` to use ``become`` for privilege escalation on network devices. Check the :ref:`platform_options` and :ref:`network_modules` documentation for details.

@@ -5,7 +5,8 @@
 Using collections
 *****************

-Collections are a distribution format for Ansible content that can include playbooks, roles, modules, and plugins.
+Collections are a distribution format for Ansible content that can include playbooks, roles, modules, and plugins. As modules move from the core Ansible repository into collections, the module documentation will move to the `collections documentation page <https://docs.ansible.com/collections/>`_
 You can install and use collections through `Ansible Galaxy <https://galaxy.ansible.com>`_.

 * For details on how to *develop* collections see :ref:`developing_collections`.

@@ -543,7 +543,7 @@ ansible_port
 ansible_user
     The user name to use when connecting to the host
 ansible_password
-    The password to use to authenticate to the host (never store this variable in plain text; always use a vault. See :ref:`best_practices_for_variables_and_vaults`)
+    The password to use to authenticate to the host (never store this variable in plain text; always use a vault. See :ref:`tip_for_variables_and_vaults`)

 Specific to the SSH connection:
@@ -575,7 +575,7 @@ ansible_become_method
 ansible_become_user
     Equivalent to ``ansible_sudo_user`` or ``ansible_su_user``, allows to set the user you become through privilege escalation
 ansible_become_password
-    Equivalent to ``ansible_sudo_password`` or ``ansible_su_password``, allows you to set the privilege escalation password (never store this variable in plain text; always use a vault. See :ref:`best_practices_for_variables_and_vaults`)
+    Equivalent to ``ansible_sudo_password`` or ``ansible_su_password``, allows you to set the privilege escalation password (never store this variable in plain text; always use a vault. See :ref:`tip_for_variables_and_vaults`)
 ansible_become_exe
     Equivalent to ``ansible_sudo_exe`` or ``ansible_su_exe``, allows you to set the executable for the escalation method selected
 ansible_become_flags

@@ -7,9 +7,8 @@ Working With Modules
    :maxdepth: 1

    modules_intro
-   ../reference_appendices/common_return_values
    modules_support
-   ../modules/modules_by_category
+   ../reference_appendices/common_return_values

 Ansible ships with a number of modules (called the 'module library')

@@ -4,68 +4,66 @@
 Module Maintenance & Support
 ****************************

+If you are using a module and you discover a bug, you may want to know where to report that bug, who is responsible for fixing it, and how you can track changes to the module. If you are a Red Hat subscriber, you may want to know whether you can get support for the issue you are facing.
+
+Starting in Ansible 2.10, most modules live in collections. The distribution method for each collection reflects the maintenance and support for the modules in that collection.
+
 .. contents::
-   :depth: 2
    :local:

 Maintenance
 ===========

-To clarify who maintains each included module, adding features and fixing bugs, each included module now has associated metadata that provides information about maintenance.
-
-Core
-----
-
-:ref:`Core Maintained<core_supported>` modules are maintained by the Ansible Engineering Team.
-These modules are integral to the basic foundations of the Ansible distribution.
-
-Network
--------
-
-:ref:`Network Maintained<network_supported>` modules are are maintained by the Ansible Network Team. Please note there are additional networking modules that are categorized as Certified or Community not maintained by Ansible.
-
-Certified
----------
-
-`Certified <https://access.redhat.com/articles/3642632>`_ modules are maintained by Ansible Partners.
-
-Community
----------
-
-:ref:`Community Maintained<community_supported>` modules are submitted and maintained by the Ansible community. These modules are not maintained by Ansible, and are included as a convenience.
+.. table::
+   :class: documentation-table
+
+   ============================= ========================================== ==========================
+   Collection                    Code location                              Maintained by
+   ============================= ========================================== ==========================
+   ansible.builtin               `ansible/ansible repo`_ on GitHub          core team
+
+   distributed on Galaxy         various; follow ``repo`` link              community or partners
+
+   distributed on Automation Hub various; follow ``repo`` link              content team or partners
+   ============================= ========================================== ==========================
+
+.. _ansible/ansible repo: https://github.com/ansible/ansible/tree/devel/lib/ansible/modules

 Issue Reporting
 ===============

-If you believe you have found a bug in a module and are already running the latest stable or development version of Ansible, first look at the `issue tracker in the Ansible repo <https://github.com/ansible/ansible/issues>`_ to see if an issue has already been filed. If not, please file one.
-
-Should you have a question rather than a bug report, inquiries are welcome on the `ansible-project Google group <https://groups.google.com/forum/#%21forum/ansible-project>`_ or on Ansible's "#ansible" channel, located on irc.freenode.net.
-
-For development-oriented topics, use the `ansible-devel Google group <https://groups.google.com/forum/#%21forum/ansible-devel>`_ or Ansible's #ansible and #ansible-devel channels, located on irc.freenode.net. You should also read the :ref:`Community Guide <ansible_community_guide>`, :ref:`Testing Ansible <developing_testing>`, and the :ref:`Developer Guide <developer_guide>`.
-
-The modules are hosted on GitHub in a subdirectory of the `Ansible <https://github.com/ansible/ansible/tree/devel/lib/ansible/modules>`_ repo.
-
-NOTE: If you have a Red Hat Ansible Automation product subscription, please follow the standard issue reporting process via the `Red Hat Customer Portal <https://access.redhat.com/>`_.
+If you find a bug that affects a plugin in the main Ansible repo:
+
+#. Confirm that you are running the latest stable version of Ansible or the devel branch.
+#. Look at the `issue tracker in the Ansible repo <https://github.com/ansible/ansible/issues>`_ to see if an issue has already been filed.
+#. Create an issue if one does not already exist. Include as much detail as you can about the behavior you discovered.
+
+If you find a bug that affects a plugin in a Galaxy collection:
+
+#. Find the collection on Galaxy.
+#. Find the issue tracker for the collection.
+#. Look there to see if an issue has already been filed.
+#. Create an issue if one does not already exist. Include as much detail as you can about the behavior you discovered.
+
+Some partner collections may be hosted in private repositories.
+
+If you are not sure whether the behavior you see is a bug, if you have questions, if you want to discuss development-oriented topics, or if you just want to get in touch, use one of our Google groups or IRC channels to :ref:`communicate with Ansiblers <communication>`.
+
+If you find a bug that affects a module in an Automation Hub collection:
+
+#. If the collection offers an Issue Tracker link on Automation Hub, click there and open an issue on the collection repository. If it does not, follow the standard process for reporting issues on the `Red Hat Customer Portal <https://access.redhat.com/>`_. You must have a subscription to the Red Hat Ansible Automation Platform to create an issue on the portal.

 Support
 =======

-For more information on how included Ansible modules are supported by Red Hat,
-please refer to the following `knowledge base article <https://access.redhat.com/articles/3166901>`_ as well as other resources on the `Red Hat Customer Portal. <https://access.redhat.com/>`_
+All plugins that remain in ansible-base and all collections hosted in Automation Hub are supported by Red Hat. No other plugins or collections are supported by Red Hat. If you have a subscription to the Red Hat Ansible Automation Platform, you can find more information and resources on the `Red Hat Customer Portal. <https://access.redhat.com/>`_

 .. seealso::

-   :ref:`Module index<modules_by_category>`
-       A complete list of all available modules.
    :ref:`intro_adhoc`
        Examples of using modules in /usr/bin/ansible
    :ref:`working_with_playbooks`
        Examples of using modules with /usr/bin/ansible-playbook
-   :ref:`developing_modules`
-       How to write your own modules
-   `List of Ansible Certified Modules <https://access.redhat.com/articles/3642632>`_
-       High level list of Ansible certified modules from Partners
    `Mailing List <https://groups.google.com/group/ansible-project>`_
        Questions? Help? Ideas? Stop by the list on Google Groups
    `irc.freenode.net <http://irc.freenode.net>`_

@@ -4,20 +4,21 @@
 Search paths in Ansible
 ***********************

-Absolute paths are not an issue as they always have a known start, but relative paths ... well, they are relative.
+You can control the paths Ansible searches to find resources on your control node (including configuration, modules, roles, ssh keys, and more) as well as resources on the remote nodes you are managing. Use absolute paths to tell Ansible where to find resources whenever you can. However, absolute paths are not always practical. This page covers how Ansible interprets relative search paths, along with ways to troubleshoot when Ansible cannot find the resource you need.
+
+.. contents::
+   :local:

 Config paths
 ============

-By default these should be relative to the config file, some are specifically relative to the 'cwd' or the playbook and should have this noted in their description. Things like ssh keys are left to use 'cwd' because it mirrors how the underlying tools would use it.
+By default these should be relative to the config file, some are specifically relative to the current working directory or the playbook and should have this noted in their description. Things like ssh keys are left to use the current working directory because it mirrors how the underlying tools would use it.

 Task paths
 ==========

-Here things start getting complicated, there are 2 different scopes to consider, task evaluation (paths are all local, like in lookups) and task execution, which is normally on the remote, unless an action plugin is involved.
-
-Some tasks that require 'local' resources use action plugins (template and copy are examples of these), in which case the path is also local.
+Task paths include two different scopes: task evaluation and task execution. For task evaluation, all paths are local, like in lookups. For task execution, which usually happens on the remote nodes, local paths do not usually apply. However, if a task uses an action plugin, it uses a local path. The template and copy modules are examples of modules that use action plugins, and therefore use local paths.

 The magic of 'local' paths
 --------------------------
@@ -32,12 +33,10 @@ i.e ::

     play search path is playdir/{files|vars|templates}/, playdir/.

-The current working directory (cwd) is not searched. If you see it, it just happens to coincide with one of the paths above.
-
-If you `include` a task file from a role, it will NOT trigger role behavior, this only happens when running as a role, `include_role` will work.
-
-A new variable `ansible_search_path` var will have the search path used, in order (but without the appended subdirs). Using 5 "v"s (`-vvvvv`) should show the detail of the search as it happens.
+By default, Ansible does not search the current working directory unless it happens to coincide with one of the paths above. If you `include` a task file from a role, it will NOT trigger role behavior, this only happens when running as a role, `include_role` will work. A new variable `ansible_search_path` var will have the search path used, in order (but without the appended subdirs). Using 5 "v"s (`-vvvvv`) should show the detail of the search as it happens.

 As for includes, they try the path of the included file first and fall back to the play/role that includes them.

-.. note: The 'cwd' might vary depending on the connection plugin and if the action is local or remote. For the remote it is normally the directory on which the login shell puts the user. For local it is either the directory you executed ansible from or in some cases the playbook directory.
+.. note: The current working directory might vary depending on the connection plugin and if the action is local or remote. For the remote it is normally the directory on which the login shell puts the user. For local it is either the directory you executed ansible from or in some cases the playbook directory.

@@ -70,7 +70,7 @@ Separate production and staging inventory
 You can keep your production environment separate from development, test, and staging environments by using separate inventory files or directories for each environment. This way you pick with -i what you are targeting. Keeping all your environments in one file can lead to surprises!

-.. _best_practices_for_variables_and_vaults:
+.. _tip_for_variables_and_vaults:

 Keep vaulted variables safely visible
 -------------------------------------

@@ -464,7 +464,7 @@ Possible values (sample, not complete list)::
    :ref:`playbooks_reuse_roles`
        Playbook organization by roles
    :ref:`playbooks_best_practices`
-       Best practices in playbooks
+       Tips and tricks for playbooks
    :ref:`playbooks_variables`
        All about variables
    `User Mailing List <https://groups.google.com/group/ansible-devel>`_

@@ -1,9 +1,9 @@
 .. _playbooks_delegation:

-Delegation and local actions
-============================
+Controlling where tasks run: delegation and local actions
+=========================================================

-By default Ansible executes all tasks on the machines that match the ``hosts`` line of your playbook. If you want to run some tasks on a different machine, you can use delegation. For example, when updating webservers, you might want to retrieve information from your database servers. In this scenario, your play would target the webservers group and you would delegate the database tasks to your dbservers group. With delegation, you can perform a task on one host on behalf of another, or execute tasks locally on behalf of remote hosts.
+By default Ansible gathers facts and executes all tasks on the machines that match the ``hosts`` line of your playbook. This page shows you how to delegate tasks to a different machine or group, delegate facts to specific machines or groups, or run an entire playbook locally. Using these approaches, you can manage inter-related environments precisely and efficiently. For example, when updating your webservers, you might need to remove them from a load-balanced pool temporarily. You cannot perform this task on the webservers themselves. By delegating the task to localhost, you keep all the tasks within the same play.

 .. contents::
    :local:
@@ -99,52 +99,10 @@ Delegating Ansible tasks is like delegating tasks in the real world - your groce

 This task gathers facts for the machines in the dbservers group and assigns the facts to those machines, even though the play targets the app_servers group. This way you can lookup `hostvars['dbhost1']['ansible_default_ipv4']['address']` even though dbservers were not part of the play, or left out by using `--limit`.

-.. _run_once:
-
-Run once
---------
-
-If you want a task to run only on the first host in your batch of hosts, set ``run_once`` to true on that task::
-
-    ---
-    # ...
-
-      tasks:
-
-        # ...
-
-        - command: /opt/application/upgrade_db.py
-          run_once: true
-
-        # ...
-
-Ansible executes this task on the first host in the current batch and applies all results and facts to all the hosts in the same batch. This approach is similar to applying a conditional to a task such as::
-
-        - command: /opt/application/upgrade_db.py
-          when: inventory_hostname == webservers[0]
-
-However, with ``run_once``, the results are applied to all the hosts. To specify an individual host to execute on, delegate the task::
-
-        - command: /opt/application/upgrade_db.py
-          run_once: true
-          delegate_to: web01.example.org
-
-As always with delegation, the action will be executed on the delegated host, but the information is still that of the original host in the task.
-
-.. note::
-     When used together with "serial", tasks marked as "run_once" will be run on one host in *each* serial batch. If the task must run only once regardless of "serial" mode, use
-     :code:`when: inventory_hostname == ansible_play_hosts_all[0]` construct.
-
-.. note::
-    Any conditional (i.e `when:`) will use the variables of the 'first host' to decide if the task runs or not, no other hosts will be tested.
-
-.. note::
-    If you want to avoid the default behavior of setting the fact for all hosts, set `delegate_facts: True` for the specific task or block.
-
 .. _local_playbooks:

 Local playbooks
-```````````````
+---------------

 It may be useful to use a playbook locally on a remote host, rather than by connecting over SSH. This can be useful for assuring the configuration of a system by putting a playbook in a crontab. This may also be used
 to run a playbook inside an OS installer, such as an Anaconda kickstart.
@@ -165,11 +123,12 @@ use the default remote connection type::
    under {{ ansible_playbook_python }}. Be sure to set ansible_python_interpreter: "{{ ansible_playbook_python }}" in
    host_vars/localhost.yml, for example. You can avoid this issue by using ``local_action`` or ``delegate_to: localhost`` instead.

 .. seealso::

    :ref:`playbooks_intro`
        An introduction to playbooks
+   :ref:`playbooks_strategies`
+       More ways to control how and where Ansible executes
    `Ansible Examples on GitHub <https://github.com/ansible/ansible-examples>`_
        Many examples of full-stack deployments
    `User Mailing List <https://groups.google.com/group/ansible-devel>`_

@@ -231,7 +231,7 @@ You can also use blocks to define responses to task errors. This approach is sim
    :ref:`playbooks_intro`
        An introduction to playbooks
    :ref:`playbooks_best_practices`
-       Best practices in playbooks
+       Tips and tricks for playbooks
    :ref:`playbooks_conditionals`
        Conditional statements in playbooks
    :ref:`playbooks_variables`

@@ -1644,7 +1644,7 @@ This can then be used to reference hashes in Pod specifications::
    :ref:`playbooks_reuse_roles`
        Playbook organization by roles
    :ref:`playbooks_best_practices`
-       Best practices in playbooks
+       Tips and tricks for playbooks
    `User Mailing List <https://groups.google.com/group/ansible-devel>`_
        Have a question? Stop by the google group!
    `irc.freenode.net <http://irc.freenode.net>`_

@@ -737,7 +737,7 @@ the filter ``slaac()`` generates an IPv6 address for a given network and a MAC A
    :ref:`playbooks_reuse_roles`
        Playbook organization by roles
    :ref:`playbooks_best_practices`
-       Best practices in playbooks
+       Tips and tricks for playbooks
    `User Mailing List <https://groups.google.com/group/ansible-devel>`_
        Have a question? Stop by the google group!
    `irc.freenode.net <http://irc.freenode.net>`_

@@ -430,7 +430,7 @@ Migrating from with_X to loop
    :ref:`playbooks_reuse_roles`
        Playbook organization by roles
    :ref:`playbooks_best_practices`
-       Best practices in playbooks
+       Tips and tricks for playbooks
    :ref:`playbooks_conditionals`
        Conditional statements in playbooks
    :ref:`playbooks_variables`

@@ -188,7 +188,7 @@ Imports are processed before the play begins, so the name of the import no longe
    :ref:`playbooks_loops`
        Loops in playbooks
    :ref:`playbooks_best_practices`
-       Various tips about managing playbooks in the real world
+       Tips and tricks for playbooks
    :ref:`ansible_galaxy`
        How to share roles on galaxy, role management
    `GitHub Ansible examples <https://github.com/ansible/ansible-examples>`_

@@ -15,7 +15,7 @@ The content on this page has been moved to :ref:`playbooks_reuse`.
    :ref:`working_with_playbooks`
        Review the basic Playbook language features
    :ref:`playbooks_best_practices`
-       Various tips about managing playbooks in the real world
+       Tips and tricks for playbooks
    :ref:`playbooks_variables`
        All about variables in playbooks
    :ref:`playbooks_conditionals`

@@ -441,7 +441,7 @@ Read the `Ansible Galaxy documentation <https://galaxy.ansible.com/docs/>`_ page
    :ref:`working_with_playbooks`
        Review the basic Playbook language features
    :ref:`playbooks_best_practices`
-       Tips for managing playbooks in the real world
+       Tips and tricks for playbooks
    :ref:`playbooks_variables`
        Variables in playbooks
    :ref:`playbooks_conditionals`

@@ -19,7 +19,7 @@ As you write more playbooks and roles, you might have some special use cases. Fo
    ../plugins/plugins
    playbooks_prompts
    playbooks_tags
-   playbooks_vault
+   vault
    playbooks_startnstep
    ../reference_appendices/playbooks_keywords
    playbooks_lookups

@@ -36,7 +36,7 @@ or pass it on the command line: `ansible-playbook -f 30 my_playbook.yml`.
 Using keywords to control execution
 -----------------------------------

-In addition to strategies, several :ref:`keywords<playbook_keywords>` also affect play execution. You can set a number, a percentage, or a list of numbers of hosts you want to manage at a time with ``serial``. Ansible completes the play on the specified number or percentage of hosts before starting the next batch of hosts. You can restrict the number of workers allotted to a block or task with ``throttle``. You can control how Ansible selects the next host in a group to execute against with ``order``. These keywords are not strategies. They are directives or options applied to a play, block, or task.
+In addition to strategies, several :ref:`keywords<playbook_keywords>` also affect play execution. You can set a number, a percentage, or a list of numbers of hosts you want to manage at a time with ``serial``. Ansible completes the play on the specified number or percentage of hosts before starting the next batch of hosts. You can restrict the number of workers allotted to a block or task with ``throttle``. You can control how Ansible selects the next host in a group to execute against with ``order``. You can run a task on a single host with ``run_once``. These keywords are not strategies. They are directives or options applied to a play, block, or task.

 .. _rolling_update_batch_size:
@@ -160,10 +160,54 @@ shuffle:

 Other keywords that affect play execution include ``ignore_errors``, ``ignore_unreachable``, and ``any_errors_fatal``. These options are documented in :ref:`playbooks_error_handling`.

+.. _run_once:
+
+Running on a single machine with ``run_once``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+If you want a task to run only on the first host in your batch of hosts, set ``run_once`` to true on that task::
+
+    ---
+    # ...
+
+      tasks:
+
+        # ...
+
+        - command: /opt/application/upgrade_db.py
+          run_once: true
+
+        # ...
+
+Ansible executes this task on the first host in the current batch and applies all results and facts to all the hosts in the same batch. This approach is similar to applying a conditional to a task such as::
+
+        - command: /opt/application/upgrade_db.py
+          when: inventory_hostname == webservers[0]
+
+However, with ``run_once``, the results are applied to all the hosts. To run the task on a specific host, instead of the first host in the batch, delegate the task::
+
+        - command: /opt/application/upgrade_db.py
+          run_once: true
+          delegate_to: web01.example.org
+
+As always with :ref:`delegation <playbooks_delegation>`, the action will be executed on the delegated host, but the information is still that of the original host in the task.
+
+.. note::
+     When used together with ``serial``, tasks marked as ``run_once`` will be run on one host in *each* serial batch. If the task must run only once regardless of ``serial`` mode, use
+     :code:`when: inventory_hostname == ansible_play_hosts_all[0]` construct.
+
+.. note::
+    Any conditional (i.e `when:`) will use the variables of the 'first host' to decide if the task runs or not, no other hosts will be tested.
+
+.. note::
+    If you want to avoid the default behavior of setting the fact for all hosts, set ``delegate_facts: True`` for the specific task or block.
+
 .. seealso::

    :ref:`about_playbooks`
        An introduction to playbooks
+   :ref:`playbooks_delegation`
+       Running tasks on or assigning facts to specific machines
    :ref:`playbooks_reuse_roles`
        Playbook organization by roles
    `User Mailing List <https://groups.google.com/group/ansible-devel>`_

@@ -48,7 +48,7 @@ fmt
    :ref:`playbooks_reuse_roles`
        Playbook organization by roles
    :ref:`playbooks_best_practices`
-       Best practices in playbooks
+       Tips and tricks for playbooks
    `User Mailing List <https://groups.google.com/group/ansible-devel>`_
        Have a question? Stop by the google group!
    `irc.freenode.net <http://irc.freenode.net>`_

@@ -386,7 +386,7 @@ The following tasks are illustrative of the tests meant to check the status of t
    :ref:`playbooks_reuse_roles`
        Playbook organization by roles
    :ref:`playbooks_best_practices`
-       Best practices in playbooks
+       Tips and tricks for playbooks
    `User Mailing List <https://groups.google.com/group/ansible-devel>`_
        Have a question? Stop by the google group!
    `irc.freenode.net <http://irc.freenode.net>`_

@@ -437,7 +437,7 @@ For information about advanced YAML syntax used to declare variables and have mo
    :ref:`playbooks_reuse_roles`
        Playbook organization by roles
    :ref:`playbooks_best_practices`
-       Best practices in playbooks
+       Tips and tricks for playbooks
    :ref:`special_variables`
        List of special variables
    `User Mailing List <https://groups.google.com/group/ansible-devel>`_
@ -1,157 +1,6 @@

:orphan:

Using vault in playbooks
========================

The documentation regarding Ansible Vault has moved. The new location is here: :ref:`vault`. Please update any links you may have made directly to this page.
The "Vault" is a feature of Ansible that allows you to keep sensitive data such as passwords or keys protected at rest, rather than as plaintext in playbooks or roles. These vaults can then be distributed or placed in source control.
There are two types of vaulted content, and each has its own uses and limitations:
:Vaulted files:
* The full file is encrypted in the vault, this can contain Ansible variables or any other type of content.
* It will always be decrypted when loaded or referenced, Ansible cannot know if it needs the content unless it decrypts it.
* It can be used for inventory, anything that loads variables (i.e vars_files, group_vars, host_vars, include_vars, etc)
and some actions that deal with files (i.e M(copy), M(assemble), M(script), etc).
:Single encrypted variable:
* Only specific variables are encrypted inside a normal 'variable file'.
* Does not work for other content, only variables.
* Decrypted on demand, so you can have vaulted variables with different vault secrets and only provide those needed.
* You can mix vaulted and non vaulted variables in the same file, even inline in a play or role.
.. warning::
* Vault ONLY protects data 'at rest'. Once decrypted, play and plugin authors are responsible for avoiding any secret disclosure,
see :ref:`no_log <keep_secret_data>` for details on hiding output.
To enable this feature, a command line tool, :ref:`ansible-vault` is used to edit files, and a command line flag :option:`--ask-vault-pass <ansible-vault-create --ask-vault-pass>`, :option:`--vault-password-file <ansible-vault-create --vault-password-file>` or :option:`--vault-id <ansible-playbook --vault-id>` is used. You can also modify your ``ansible.cfg`` file to specify the location of a password file or configure Ansible to always prompt for the password. These options require no command line flag usage.
For best practices advice, refer to :ref:`best_practices_for_variables_and_vaults`.
Running a Playbook With Vault
`````````````````````````````
To run a playbook that contains vault-encrypted data files, you must provide the vault password.
To specify the vault-password interactively::
ansible-playbook site.yml --ask-vault-pass
This prompt will then be used to decrypt (in memory only) any vault encrypted files that are accessed.
Alternatively, passwords can be specified with a file or a script (the script version will require Ansible 1.7 or later). When using this flag, ensure permissions on the file are such that no one else can access your key and do not add your key to source control::
ansible-playbook site.yml --vault-password-file ~/.vault_pass.txt
ansible-playbook site.yml --vault-password-file ~/.vault_pass.py
The password should be a string stored as a single line in the file.
If you are using a script instead of a flat file, ensure that it is marked as executable, and that the password is printed to standard output. If your script needs to prompt for data, prompts can be sent to standard error.
.. note::
You can also set :envvar:`ANSIBLE_VAULT_PASSWORD_FILE` environment variable, e.g. ``ANSIBLE_VAULT_PASSWORD_FILE=~/.vault_pass.txt`` and Ansible will automatically search for the password in that file.
This is something you may wish to do if using Ansible from a continuous integration system like Jenkins.
The :option:`--vault-password-file <ansible-pull --vault-password-file>` option can also be used with the :ref:`ansible-pull` command if you wish, though this would require distributing the keys to your nodes, so understand the implications -- vault is more intended for push mode.
Multiple Vault Passwords
````````````````````````
Ansible 2.4 and later support the concept of multiple vaults that are encrypted with different passwords
Different vaults can be given a label to distinguish them (generally values like dev, prod etc.).
The :option:`--ask-vault-pass <ansible-playbook --ask-vault-pass>` and
:option:`--vault-password-file <ansible-playbook --vault-password-file>` options can be used as long as
only a single password is needed for any given run.
Alternatively the :option:`--vault-id <ansible-playbook --vault-id>` option can be used to provide the
password and indicate which vault label it's for. This can be clearer when multiple vaults are used within
a single inventory. For example:
To be prompted for the 'dev' password:
.. code-block:: bash
ansible-playbook site.yml --vault-id dev@prompt
To get the 'dev' password from a file or script:
.. code-block:: bash
ansible-playbook site.yml --vault-id dev@~/.vault_pass.txt
ansible-playbook site.yml --vault-id dev@~/.vault_pass.py
If multiple vault passwords are required for a single run, :option:`--vault-id <ansible-playbook --vault-id>` must
be used as it can be specified multiple times to provide the multiple passwords. For example:
To read the 'dev' password from a file and prompt for the 'prod' password:
.. code-block:: bash
ansible-playbook site.yml --vault-id dev@~/.vault_pass.txt --vault-id prod@prompt
The :option:`--ask-vault-pass <ansible-playbook --ask-vault-pass>` or
:option:`--vault-password-file <ansible-playbook --vault-password-file>` options can be used to specify one of
the passwords, but it's generally cleaner to avoid mixing these with :option:`--vault-id <ansible-playbook --vault-id>`.
.. note::
By default the vault label (dev, prod etc.) is just a hint. Ansible will try to decrypt each
vault with every provided password.
Setting the config option :ref:`DEFAULT_VAULT_ID_MATCH` will change this behavior so that each password
is only used to decrypt data that was encrypted with the same label. See :ref:`specifying_vault_ids`
for more details.
Vault Password Client Scripts
`````````````````````````````
Ansible 2.5 and later support using a single executable script to get different passwords depending on the
vault label. These client scripts must have a file name that ends with :file:`-client`. For example:
To get the dev password from the system keyring using the :file:`contrib/vault/vault-keyring-client.py` script:
.. code-block:: bash
ansible-playbook --vault-id dev@contrib/vault/vault-keyring-client.py
See :ref:`vault_password_client_scripts` for a complete explanation of this topic.
.. _single_encrypted_variable:
Single Encrypted Variable
`````````````````````````
As of version 2.3, Ansible can now use a vaulted variable that lives in an otherwise 'clear text' YAML file::
notsecret: myvalue
mysecret: !vault |
$ANSIBLE_VAULT;1.1;AES256
66386439653236336462626566653063336164663966303231363934653561363964363833313662
6431626536303530376336343832656537303632313433360a626438346336353331386135323734
62656361653630373231613662633962316233633936396165386439616533353965373339616234
3430613539666330390a313736323265656432366236633330313963326365653937323833366536
34623731376664623134383463316265643436343438623266623965636363326136
other_plain_text: othervalue
To create a vaulted variable, use the :ref:`ansible-vault encrypt_string <ansible_vault_encrypt_string>` command. See :ref:`encrypt_string` for details.
This vaulted variable will be decrypted with the supplied vault secret and used as a normal variable. The ``ansible-vault`` command line supports stdin and stdout for encrypting data on the fly, which can be used from your favorite editor to create these vaulted variables; you just have to be sure to add the ``!vault`` tag so both Ansible and YAML are aware of the need to decrypt. The ``|`` is also required, as vault encryption results in a multi-line string.
.. note::
Inline vaults ONLY work on variables; you cannot use them directly in a task's options.
.. _encrypt_string:
Using encrypt_string
````````````````````
This command will output a string in the above format ready to be included in a YAML file.
The string to encrypt can be provided via stdin, command line arguments, or via an interactive prompt.
See :ref:`encrypt_string_for_use_in_yaml`.
@ -1,12 +1,9 @@

.. _plugin_filtering_config:

Blacklisting modules
====================

If you want to avoid using certain modules, you can blacklist them to prevent Ansible from loading them. To blacklist plugins, create a yaml configuration file. The default location for this file is :file:`/etc/ansible/plugin_filters.yml`, or you can select a different path for the blacklist file using the :ref:`PLUGIN_FILTERS_CFG` setting in the ``defaults`` section of your ansible.cfg. Here is an example blacklist file:

.. code-block:: YAML

@ -20,12 +17,10 @@ in ``defaults`` section to change this configuration file path. The format of th

The file contains two fields:

* A file version so that you can update the format while keeping backwards compatibility in the future. The present version should be the string, ``"1.0"``
* A list of modules to blacklist. Any module in this list will not be loaded by Ansible when it searches for a module to invoke for a task.

.. note::
    You cannot blacklist the ``stat`` module, as it is required for Ansible to run.
@ -1,150 +1,329 @@

.. _vault:

*************************************
Encrypting content with Ansible Vault
*************************************

Ansible Vault encrypts variables and files so you can protect sensitive content such as passwords or keys rather than leaving it visible as plaintext in playbooks or roles. To use Ansible Vault you need one or more passwords to encrypt and decrypt content. If you store your vault passwords in a third-party tool such as a secret manager, you need a script to access them. Use the passwords with the :ref:`ansible-vault` command-line tool to create and view encrypted variables, create encrypted files, encrypt existing files, or edit, re-key, or decrypt files. You can then place encrypted content under source control and share it more safely.

.. warning::
    * Encryption with Ansible Vault ONLY protects 'data at rest'. Once the content is decrypted ('data in use'), play and plugin authors are responsible for avoiding any secret disclosure, see :ref:`no_log <keep_secret_data>` for details on hiding output.

You can use encrypted variables and files in ad-hoc commands and playbooks by supplying the passwords you used to encrypt them. You can modify your ``ansible.cfg`` file to specify the location of a password file or to always prompt for the password.

.. contents::
   :local:

Managing vault passwords
========================

Managing your encrypted content is easier if you develop a strategy for managing your vault passwords. A vault password can be any string you choose. There is no special command to create a vault password. However, you need to keep track of your vault passwords. Each time you encrypt a variable or file with Ansible Vault, you must provide a password. When you use an encrypted variable or file in a command or playbook, you must provide the same password that was used to encrypt it. To develop a strategy for managing vault passwords, start with two questions:

* Do you want to encrypt all your content with the same password, or use different passwords for different needs?
* Where do you want to store your password or passwords?

Choosing between a single password and multiple passwords
----------------------------------------------------------

If you have a small team or few sensitive values, you can use a single password for everything you encrypt with Ansible Vault. Store your vault password securely in a file or a secret manager as described below.
Ansible tasks, handlers, and so on are also data so these can be encrypted with vault as well. To hide the names of variables that you're using, you can encrypt the task files in their entirety.
Ansible Vault can also encrypt arbitrary files, even binary files. If a vault-encrypted file is
given as the ``src`` argument to the :ref:`copy <copy_module>`, :ref:`template <template_module>`,
:ref:`unarchive <unarchive_module>`, :ref:`script <script_module>` or :ref:`assemble
<assemble_module>` modules, the file will be placed at the destination on the target host decrypted
(assuming a valid vault password is supplied when running the play).
.. note::
The advantages of file-level encryption are that it is easy to use and that password rotation is straightforward with :ref:`rekeying <rekeying_files>`.
The drawback is that the contents of files are no longer easy to access and read. This may be problematic if it is a list of tasks (when encrypting a variables file, :ref:`best practice <best_practices_for_variables_and_vaults>` is to keep references to these variables in a non-encrypted file).
Variable-level encryption
^^^^^^^^^^^^^^^^^^^^^^^^^
Ansible also supports encrypting single values inside a YAML file, using the `!vault` tag to let YAML and Ansible know it uses special processing. This feature is covered in more detail :ref:`below <encrypt_string_for_use_in_yaml>`.
.. note::
The advantage of variable-level encryption is that files are still easily legible even if they mix plaintext and encrypted variables.
The drawback is that password rotation is not as simple as with file-level encryption: the :ref:`rekey <ansible_vault_rekey>` command does not work with this method.
If you have a larger team or many sensitive values, you can use multiple passwords. For example, you can use different passwords for different users or different levels of access. Depending on your needs, you might want a different password for each encrypted file, for each directory, or for each environment. For example, you might have a playbook that includes two vars files, one for the dev environment and one for the production environment, encrypted with two different passwords. When you run the playbook, select the correct vault password for the environment you are targeting, using a vault ID.
.. _vault_ids:

Managing multiple passwords with vault IDs
------------------------------------------

If you use multiple vault passwords, you can differentiate one password from another with vault IDs. You use the vault ID in three ways:

* Pass it with :option:`--vault-id <ansible-playbook --vault-id>` to the :ref:`ansible-vault` command when you create encrypted content
* Include it wherever you store the password for that vault ID (see :ref:`storing_vault_passwords`)
* Pass it with :option:`--vault-id <ansible-playbook --vault-id>` to the :ref:`ansible-playbook` command when you run a playbook that uses content you encrypted with that vault ID

When you pass a vault ID as an option to the :ref:`ansible-vault` command, you add a label (a hint or nickname) to the encrypted content. This label documents which password you used to encrypt it. The encrypted variable or file includes the vault ID label in plain text in the header. The vault ID is the last element before the encrypted content. For example::

my_encrypted_var: !vault |
$ANSIBLE_VAULT;1.2;AES256;dev
30613233633461343837653833666333643061636561303338373661313838333565653635353162
3263363434623733343538653462613064333634333464660a663633623939393439316636633863
61636237636537333938306331383339353265363239643939666639386530626330633337633833
6664656334373166630a363736393262666465663432613932613036303963343263623137386239
6330
In addition to the label, you must provide a source for the related password. The source can be a prompt, a file, or a script, depending on how you are storing your vault passwords. The pattern looks like this:
.. code-block:: bash

    --vault-id label@source

If your playbook uses multiple encrypted variables or files that you encrypted with different passwords, you must pass the vault IDs when you run that playbook. You can use :option:`--vault-id <ansible-playbook --vault-id>` by itself, with :option:`--vault-password-file <ansible-playbook --vault-password-file>`, or with :option:`--ask-vault-pass <ansible-playbook --ask-vault-pass>`. The pattern is the same as when you create encrypted content: include the label and the source for the matching password.

The :option:`--vault-id <ansible-playbook --vault-id>` option works with any Ansible command that interacts with vaults, including :ref:`ansible-vault`, :ref:`ansible-playbook`, and so on.

Limitations of vault IDs
^^^^^^^^^^^^^^^^^^^^^^^^

Ansible does not enforce using the same password every time you use a particular vault ID label. You can encrypt different variables or files with the same vault ID label but different passwords. This usually happens when you type the password at a prompt and make a mistake. It is possible to use different passwords with the same vault ID label on purpose. For example, you could use each label as a reference to a class of passwords, rather than a single password. In this scenario, you must always know which specific password or file to use in context. However, you are more likely to encrypt two files with the same vault ID label and different passwords by mistake. If you encrypt two files with the same label but different passwords by accident, you can :ref:`rekey <rekeying_files>` one file to fix the issue.
Enforcing vault ID matching
^^^^^^^^^^^^^^^^^^^^^^^^^^^
By default the vault ID label is only a hint to remind you which password you used to encrypt a variable or file. Ansible does not check that the vault ID in the header of the encrypted content matches the vault ID you provide when you use the content. Ansible decrypts all files and variables called by your command or playbook that are encrypted with the password you provide. To check the encrypted content and decrypt it only when the vault ID it contains matches the one you provide with ``--vault-id``, set the config option :ref:`DEFAULT_VAULT_ID_MATCH`. When you set :ref:`DEFAULT_VAULT_ID_MATCH`, each password is only used to decrypt data that was encrypted with the same label. This is efficient, predictable, and can reduce errors when different values are encrypted with different passwords.
.. note::
Even with the :ref:`DEFAULT_VAULT_ID_MATCH` setting enabled, Ansible does not enforce using the same password every time you use a particular vault ID label.
.. _storing_vault_passwords:
Storing and accessing vault passwords
-------------------------------------
You can memorize your vault password, or manually copy vault passwords from any source and paste them at a command-line prompt, but most users store them securely and access them as needed from within Ansible. You have two options for storing vault passwords that work from within Ansible: in files, or in a third-party tool such as the system keyring or a secret manager. If you store your passwords in a third-party tool, you need a vault password client script to retrieve them from within Ansible.
Storing passwords in files
^^^^^^^^^^^^^^^^^^^^^^^^^^
To store a vault password in a file, enter the password as a string on a single line in the file. Make sure the permissions on the file are appropriate. Do not add password files to source control. If you have multiple passwords, you can store them all in a single file, as long as they all have vault IDs. For each password, create a separate line and enter the vault ID, a space, then the password as a string. For example:
.. code-block:: text
dev my_dev_pass
test my_test_pass
prod my_prod_pass
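The paragraph above notes that file permissions matter; a minimal sketch of restricting read access to a password file (the file name is only an illustration):

.. code-block:: bash

    # allow only the owning user to read the vault password file
    chmod 600 ~/.vault_pass.txt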
.. _vault_password_client_scripts:
Storing passwords in third-party tools with vault password client scripts
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can store your vault passwords on the system keyring, in a database, or in a secret manager and retrieve them from within Ansible using a vault password client script. Enter the password as a string on a single line. If your password has a vault ID, store it in a way that works with your password storage tool.
To create a vault password client script:
* Create a file with a name ending in ``-client.py``
* Make the file executable
* Within the script itself:

  * Print the passwords to standard output
  * Accept a ``--vault-id`` option
  * If the script prompts for data (for example, a database password), send the prompts to standard error
When you run a playbook that uses vault passwords stored in a third-party tool, specify the script as the source within the ``--vault-id`` flag. For example:
.. code-block:: bash
ansible-playbook --vault-id dev@contrib/vault/vault-keyring-client.py
Ansible executes the client script with a ``--vault-id`` option so the script knows which vault ID label you specified. For example a script loading passwords from a secret manager can use the vault ID label to pick either the 'dev' or 'prod' password. The example command above results in the following execution of the client script:
.. code-block:: bash
contrib/vault/vault-keyring-client.py --vault-id dev
For an example of a client script that loads passwords from the system keyring, see :file:`contrib/vault/vault-keyring-client.py`.
Encrypting content with Ansible Vault
=====================================
Once you have a strategy for managing and storing vault passwords, you can start encrypting content. You can encrypt two types of content with Ansible Vault: variables and files. Encrypted content always includes the ``!vault`` tag, which tells Ansible and YAML that the content needs to be decrypted, and a ``|`` character, which allows multi-line strings. Encrypted content created with ``--vault-id`` also contains the vault ID label. For more details about the encryption process and the format of content encrypted with Ansible Vault, see :ref:`vault_format`. This table shows the main differences between encrypted variables and encrypted files:
.. table::
:class: documentation-table
====================== ================================= ====================================
.. Encrypted variables Encrypted files
====================== ================================= ====================================
How much is encrypted? Variables within a plaintext file The entire file
When is it decrypted? On demand, only when needed Whenever loaded or referenced [#f1]_
What can be encrypted? Only variables Any structured data file
====================== ================================= ====================================
.. [#f1] Ansible cannot know if it needs content from an encrypted file unless it decrypts the file, so it decrypts all encrypted files referenced in your playbooks and roles.
.. _encrypting_variables:
.. _single_encrypted_variable:
Encrypting individual variables with Ansible Vault
--------------------------------------------------
You can encrypt single values inside a YAML file using the :ref:`ansible-vault encrypt_string <ansible_vault_encrypt_string>` command. For one way to keep your vaulted variables safely visible, see :ref:`tip_for_variables_and_vaults`.
Advantages and disadvantages of encrypting variables
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
With variable-level encryption, your files are still easily legible. You can mix plaintext and encrypted variables, even inline in a play or role. However, password rotation is not as simple as with file-level encryption. You cannot :ref:`rekey <rekeying_files>` encrypted variables. Also, variable-level encryption only works on variables. If you want to encrypt tasks or other content, you must encrypt the entire file.
.. _encrypt_string_for_use_in_yaml:
Creating encrypted variables
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The :ref:`ansible-vault encrypt_string <ansible_vault_encrypt_string>` command encrypts and formats any string you type (or copy or generate) into a format that can be included in a playbook, role, or variables file. To create a basic encrypted variable, pass three options to the :ref:`ansible-vault encrypt_string <ansible_vault_encrypt_string>` command:
* a source for the vault password (prompt, file, or script, with or without a vault ID)
* the string to encrypt
* the string name (the name of the variable)
The pattern looks like this:
.. code-block:: bash
ansible-vault encrypt_string <password_source> '<string_to_encrypt>' --name '<string_name_of_variable>'
For example, to encrypt the string 'foobar' using the only password stored in 'a_password_file' and name the variable 'the_secret':
.. code-block:: bash
ansible-vault encrypt_string --vault-password-file a_password_file 'foobar' --name 'the_secret'
The command above creates this content::
the_secret: !vault |
$ANSIBLE_VAULT;1.1;AES256
62313365396662343061393464336163383764373764613633653634306231386433626436623361
6134333665353966363534333632666535333761666131620a663537646436643839616531643561
63396265333966386166373632626539326166353965363262633030333630313338646335303630
3438626666666137650a353638643435666633633964366338633066623234616432373231333331
6564
To encrypt the string 'foooodev', add the vault ID label 'dev' with the 'dev' vault password stored in 'a_password_file', and call the encrypted variable 'the_dev_secret':
.. code-block:: bash
ansible-vault encrypt_string --vault-id dev@a_password_file 'foooodev' --name 'the_dev_secret'
The command above creates this content::
the_dev_secret: !vault |
$ANSIBLE_VAULT;1.2;AES256;dev
30613233633461343837653833666333643061636561303338373661313838333565653635353162
3263363434623733343538653462613064333634333464660a663633623939393439316636633863
61636237636537333938306331383339353265363239643939666639386530626330633337633833
6664656334373166630a363736393262666465663432613932613036303963343263623137386239
6330
To encrypt the string 'letmein' read from stdin, add the vault ID 'test' using the 'test' vault password stored in `a_password_file`, and name the variable 'test_db_password':
.. code-block:: bash
echo -n 'letmein' | ansible-vault encrypt_string --vault-id test@a_password_file --stdin-name 'test_db_password'
.. warning::
Typing secret content directly at the command line (without a prompt) leaves the secret string in your shell history. Do not do this outside of testing.
The command above creates this output::
Reading plaintext input from stdin. (ctrl-d to end input)
db_password: !vault |
$ANSIBLE_VAULT;1.2;AES256;dev
61323931353866666336306139373937316366366138656131323863373866376666353364373761
3539633234313836346435323766306164626134376564330a373530313635343535343133316133
36643666306434616266376434363239346433643238336464643566386135356334303736353136
6565633133366366360a326566323363363936613664616364623437336130623133343530333739
3039
To be prompted for a string to encrypt, encrypt it with the 'dev' vault password from 'a_password_file', name the variable 'new_user_password' and give it the vault ID label 'dev':
.. code-block:: bash
ansible-vault encrypt_string --vault-id dev@a_password_file --stdin-name 'new_user_password'
The command above triggers this prompt:
.. code-block:: bash
Reading plaintext input from stdin. (ctrl-d to end input)
Type the string to encrypt (for example, 'hunter2'), hit ctrl-d, and wait.
.. warning::
Do not press ``Enter`` after supplying the string to encrypt. That will add a newline to the encrypted value.
The sequence above creates this output::
new_user_password: !vault |
$ANSIBLE_VAULT;1.2;AES256;dev
37636561366636643464376336303466613062633537323632306566653533383833366462366662
6565353063303065303831323539656138653863353230620a653638643639333133306331336365
62373737623337616130386137373461306535383538373162316263386165376131623631323434
3866363862363335620a376466656164383032633338306162326639643635663936623939666238
3161
You can add the output from any of the examples above to any playbook, variables file, or role for future use. Encrypted variables are larger than plain-text variables, but they protect your sensitive content while leaving the rest of the playbook, variables file, or role in plain text so you can easily read it.
Viewing encrypted variables
^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can view the original value of an encrypted variable using the debug module. You must pass the password that was used to encrypt the variable. For example, if you stored the variable created by the last example above in a file called 'vars.yml', you could view the unencrypted value of that variable like this:
.. code-block:: console
ansible localhost -m debug -a var="new_user_password" -e "@vars.yml" --vault-id dev@a_password_file
localhost | SUCCESS => {
"new_user_password": "hunter2"
}
Encrypting files with Ansible Vault
-----------------------------------
Ansible Vault can encrypt any structured data file used by Ansible, including:
* group variables files from inventory
* host variables files from inventory
* variables files passed to ansible-playbook with ``-e @file.yml`` or ``-e @file.json``
* variables files loaded by ``include_vars`` or ``vars_files``
* variables files in roles
* defaults files in roles
* tasks files
* handlers files
* binary files or other arbitrary files
The full file is encrypted in the vault.
Advantages and disadvantages of encrypting files
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File-level encryption is easy to use. Password rotation for encrypted files is straightforward with the :ref:`rekey <rekeying_files>` command. Encrypting files can hide not only sensitive values, but also the names of the variables you use. However, with file-level encryption the contents of files are no longer easy to access and read. This may be a problem with encrypted tasks files. When encrypting a variables file, see :ref:`tip_for_variables_and_vaults` for one way to keep references to these variables in a non-encrypted file. Ansible always decrypts the entire encrypted file whenever it is loaded or referenced, because Ansible cannot know if it needs the content unless it decrypts it.
.. _creating_files:

Creating encrypted files
^^^^^^^^^^^^^^^^^^^^^^^^

To create a new encrypted data file called 'foo.yml' with the 'test' vault password from 'multi_password_file':

.. code-block:: bash

    ansible-vault create --vault-id test@multi_password_file foo.yml

The tool launches an editor (whatever editor you have defined with $EDITOR, default editor is vi). Add the content. When you close the editor session, the file is saved as encrypted data. The file header reflects the vault ID used to create it:

.. code-block:: text

    $ANSIBLE_VAULT;1.2;AES256;test

To create a new encrypted data file with the vault ID 'my_new_password' assigned to it and be prompted for the password:

.. code-block:: bash

    ansible-vault create --vault-id my_new_password@prompt foo.yml

Again, add content to the file in the editor and save. Be sure to store the new password you created at the prompt, so you can find it when you want to decrypt that file.
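If you want to confirm later which vault ID label a file carries, you can read just its header; a quick sketch (file name from the example above, output illustrative):

.. code-block:: bash

    # print only the vault header line of the encrypted file
    head -n 1 foo.yml
    # expected output for the example above: $ANSIBLE_VAULT;1.2;AES256;my_new_password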
.. _editing_encrypted_files:
Editing Encrypted Files
```````````````````````
To edit an encrypted file in place, use the :ref:`ansible-vault edit <ansible_vault_edit>` command.
This command will decrypt the file to a temporary file and allow you to edit
the file, saving it back when done and removing the temporary file:
.. code-block:: bash
ansible-vault edit foo.yml
To edit a file encrypted with the 'vault2' password file and assigned the 'pass2' vault ID:
.. code-block:: bash
ansible-vault edit --vault-id pass2@vault2 foo.yml
.. _rekeying_files:
Rekeying Encrypted Files
````````````````````````
Should you wish to change your password on a vault-encrypted file or files, you can do so with the rekey command:
.. code-block:: bash
ansible-vault rekey foo.yml bar.yml baz.yml
This command can rekey multiple data files at once and will ask for the original
password and also the new password.
To rekey files encrypted with the 'preprod2' vault ID and the 'ppold' file and be prompted for the new password:
.. code-block:: bash
ansible-vault rekey --vault-id preprod2@ppold --new-vault-id preprod2@prompt foo.yml bar.yml baz.yml
A different ID could have been set for the rekeyed files by passing it to ``--new-vault-id``.
.. _encrypting_files:

Encrypting existing files
^^^^^^^^^^^^^^^^^^^^^^^^^

To encrypt an existing file, use the :ref:`ansible-vault encrypt <ansible_vault_encrypt>` command. This command can operate on multiple files at once. For example:

@ -156,190 +335,105 @@ To encrypt existing files with the 'project' ID and be prompted for the password

.. code-block:: bash

    ansible-vault encrypt --vault-id project@prompt foo.yml bar.yml baz.yml
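The basic invocation is elided by the diff hunk above; as a sketch, encrypting several existing files at once with a single default password (no vault ID) looks like this:

.. code-block:: bash

    ansible-vault encrypt foo.yml bar.yml baz.yml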
.. note::
It is technically possible to separately encrypt files or strings with the *same* vault ID but *different* passwords, if different password files or prompted passwords are provided each time.
This could be desirable if you use vault IDs as references to classes of passwords (rather than a single password) and you always know which specific password or file to use in context. However this may be an unnecessarily complex use-case.
If two files are encrypted with the same vault ID but different passwords by accident, you can use the :ref:`rekey <rekeying_files>` command to fix the issue.
.. _decrypting_files:
Decrypting Encrypted Files
``````````````````````````
If you have existing files that you no longer want to keep encrypted, you can permanently decrypt
them by running the :ref:`ansible-vault decrypt <ansible_vault_decrypt>` command. This command will save them unencrypted
to the disk, so be sure you do not want :ref:`ansible-vault edit <ansible_vault_edit>` instead:
.. code-block:: bash
ansible-vault decrypt foo.yml bar.yml baz.yml
.. _viewing_files:

Viewing encrypted files
^^^^^^^^^^^^^^^^^^^^^^^

To view the contents of an encrypted file without editing it, you can use the :ref:`ansible-vault view <ansible_vault_view>` command:

.. code-block:: bash

    ansible-vault view foo.yml bar.yml baz.yml

.. _editing_encrypted_files:

Editing encrypted files
^^^^^^^^^^^^^^^^^^^^^^^

To edit an encrypted file in place, use the :ref:`ansible-vault edit <ansible_vault_edit>` command. This command decrypts the file to a temporary file, allows you to edit the content, then saves and re-encrypts the content and removes the temporary file when you close the editor. For example:

.. code-block:: bash

    ansible-vault edit foo.yml

To edit a file encrypted with the ``vault2`` password file and assigned the vault ID ``pass2``:

.. code-block:: bash

    ansible-vault edit --vault-id pass2@vault2 foo.yml

.. _rekeying_files:

Changing the password and/or vault ID on encrypted files
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To change the password on an encrypted file or files, use the :ref:`rekey <ansible_vault_rekey>` command:

.. code-block:: bash

    ansible-vault rekey foo.yml bar.yml baz.yml

This command can rekey multiple data files at once and will ask for the original password and also the new password. To set a different ID for the rekeyed files, pass the new ID to ``--new-vault-id``. For example, to rekey a list of files encrypted with the 'preprod1' vault ID from the 'ppold' file to the 'preprod2' vault ID and be prompted for the new password:

.. code-block:: bash

    ansible-vault rekey --vault-id preprod1@ppold --new-vault-id preprod2@prompt foo.yml bar.yml baz.yml
.. _decrypting_files:
Decrypting encrypted files
^^^^^^^^^^^^^^^^^^^^^^^^^^
If you have an encrypted file that you no longer want to keep encrypted, you can permanently decrypt it by running the :ref:`ansible-vault decrypt <ansible_vault_decrypt>` command. This command will save the file unencrypted to the disk, so be sure you do not want to :ref:`edit <ansible_vault_edit>` it instead.
.. code-block:: bash
ansible-vault decrypt foo.yml bar.yml baz.yml
.. _playbooks_vault:
.. _providing_vault_passwords:

Using encrypted variables and files
===================================

When you run a task or playbook that uses encrypted variables or files, you must provide the passwords to decrypt the variables or files. You can do this at the command line or in the playbook itself.

Passing a single password
-------------------------

If all the encrypted variables and files your task or playbook needs use a single password, you can use the :option:`--ask-vault-pass <ansible-playbook --ask-vault-pass>` or :option:`--vault-password-file <ansible-playbook --vault-password-file>` cli options.

To prompt for the password:

.. code-block:: bash

    ansible-playbook --ask-vault-pass site.yml

To retrieve the password from the :file:`/path/to/my/vault-password-file` file:

.. code-block:: bash

    ansible-playbook --vault-password-file /path/to/my/vault-password-file site.yml

To get the password from the vault password client script :file:`my-vault-password-client.py`:

.. code-block:: bash

    ansible-playbook --vault-password-file my-vault-password-client.py

.. _specifying_vault_ids:

Passing vault IDs
-----------------

You can also use the :option:`--vault-id <ansible-playbook --vault-id>` option to pass a single password with its vault label. This approach is clearer when multiple vaults are used within a single inventory.

To prompt for the password for the 'dev' vault ID:
@ -347,20 +441,35 @@ To prompt for the password for the 'dev' vault ID:
    ansible-playbook --vault-id dev@prompt site.yml

To retrieve the password for the 'dev' vault ID from the :file:`dev-password` file:

.. code-block:: bash

    ansible-playbook --vault-id dev@dev-password site.yml

To get the password for the 'dev' vault ID from the vault password client script :file:`my-vault-password-client.py`:

.. code-block:: bash

    ansible-playbook --vault-id dev@my-vault-password-client.py

Passing multiple vault passwords
--------------------------------

If your task or playbook requires multiple encrypted variables or files that you encrypted with different vault IDs, you must use the :option:`--vault-id <ansible-playbook --vault-id>` option, passing multiple ``--vault-id`` options to specify the vault IDs ('dev', 'prod', 'cloud', 'db') and sources for the passwords (prompt, file, script). For example, to use a 'dev' password read from a file and to be prompted for the 'prod' password:
.. code-block:: bash
ansible-playbook --vault-id dev@dev-password --vault-id prod@prompt site.yml
By default the vault ID labels (dev, prod etc.) are only hints. Ansible attempts to decrypt vault content with each password. The password with the same label as the encrypted data will be tried first, after that each vault secret will be tried in the order they were provided on the command line.
Where the encrypted data has no label, or the label does not match any of the provided labels, the passwords will be tried in the order they are specified. In the example above, the 'dev' password will be tried first, then the 'prod' password for cases where Ansible doesn't know which vault ID is used to encrypt something.
Using ``--vault-id`` without a vault ID
---------------------------------------
The :option:`--vault-id <ansible-playbook --vault-id>` option can also be used without specifying a vault-id. This behavior is equivalent to :option:`--ask-vault-pass <ansible-playbook --ask-vault-pass>` or :option:`--vault-password-file <ansible-playbook --vault-password-file>` so is rarely used.
For example, to use a password file :file:`dev-password`:
@ -374,89 +483,35 @@ To prompt for the password:

    ansible-playbook --vault-id @prompt site.yml

To get the password from an executable script :file:`my-vault-password-client.py`:

.. code-block:: bash

    ansible-playbook --vault-id my-vault-password-client.py
.. note::
Prior to Ansible 2.4, the :option:`--vault-id <ansible-playbook --vault-id>` option is not supported
so :option:`--ask-vault-pass <ansible-playbook --ask-vault-pass>` or
:option:`--vault-password-file <ansible-playbook --vault-password-file>` must be used.
Configuring defaults for using encrypted content
=================================================

Setting a default vault ID
--------------------------

If you use one vault ID more frequently than any other, you can set the config option :ref:`DEFAULT_VAULT_IDENTITY_LIST` to specify a default vault ID and password source. Ansible will use the default vault ID and source any time you do not specify :option:`--vault-id <ansible-playbook --vault-id>`. You can set multiple values for this option. Setting multiple values is equivalent to passing multiple :option:`--vault-id <ansible-playbook --vault-id>` cli options.

Setting a default password source
---------------------------------

If you use one vault password file more frequently than any other, you can set the :ref:`DEFAULT_VAULT_PASSWORD_FILE` config option or the :envvar:`ANSIBLE_VAULT_PASSWORD_FILE` environment variable to specify that file. For example, if you set ``ANSIBLE_VAULT_PASSWORD_FILE=~/.vault_pass.txt``, Ansible will automatically search for the password in that file. This is useful if, for example, you use Ansible from a continuous integration system such as Jenkins.
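For example, to set the default password source through the environment for a single shell session (the file path is just an illustration):

.. code-block:: bash

    # point Ansible at a vault password file for every following command
    export ANSIBLE_VAULT_PASSWORD_FILE=~/.vault_pass.txt
    ansible-playbook site.yml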
By default the vault ID labels (dev, prod etc.) are only hints, Ansible will attempt to decrypt vault content with each password. The password with the same label as the encrypted data will be tried first, after that each vault secret will be tried in the order they were provided on the command line.

Where the encrypted data doesn't have a label, or the label doesn't match any of the provided labels, the passwords will be tried in the order they are specified.

In the above case, the 'dev' password will be tried first, then the 'prod' password for cases where Ansible doesn't know which vault ID is used to encrypt something.
To add a vault ID label to the encrypted data use the :option:`--vault-id <ansible-vault-create --vault-id>` option
with a label when encrypting the data.
The :ref:`DEFAULT_VAULT_ID_MATCH` config option can be set so that Ansible will only use the password with
the same label as the encrypted data. This is more efficient and may be more predictable when multiple
passwords are used.
The config option :ref:`DEFAULT_VAULT_IDENTITY_LIST` can have multiple values which is equivalent to multiple :option:`--vault-id <ansible-playbook --vault-id>` cli options.
The :option:`--vault-id <ansible-playbook --vault-id>` can be used in lieu of the :option:`--vault-password-file <ansible-playbook --vault-password-file>` or :option:`--ask-vault-pass <ansible-playbook --ask-vault-pass>` options,
or it can be used in combination with them.
When using :ref:`ansible-vault` commands that encrypt content (:ref:`ansible-vault encrypt <ansible_vault_encrypt>`, :ref:`ansible-vault encrypt_string <ansible_vault_encrypt_string>`, etc)
only one vault-id can be used.
.. _vault_password_client_scripts:
Vault Password Client Scripts
`````````````````````````````
When implementing a script to obtain a vault password it may be convenient to know which vault ID label was
requested. For example a script loading passwords from a secret manager may want to use the vault ID label to pick
either the 'dev' or 'prod' password.
Since Ansible 2.5 this is supported through the use of Client Scripts. A Client Script is an executable script
with a name ending in ``-client``. Client Scripts are used to obtain vault passwords in the same way as any other
executable script. For example:
.. code-block:: bash
ansible-playbook --vault-id dev@contrib/vault/vault-keyring-client.py
The difference is in the implementation of the script. Client Scripts are executed with a ``--vault-id`` option
so they know which vault ID label was requested. So the above Ansible execution results in the below execution
of the Client Script:
.. code-block:: bash
contrib/vault/vault-keyring-client.py --vault-id dev
:file:`contrib/vault/vault-keyring-client.py` is an example of a Client Script that loads passwords from the system keyring.

When are encrypted files made visible?
======================================
In general, content you encrypt with Ansible Vault remains encrypted after execution. However, there is one exception. If you pass an encrypted file as the ``src`` argument to the :ref:`copy <copy_module>`, :ref:`template <template_module>`, :ref:`unarchive <unarchive_module>`, :ref:`script <script_module>` or :ref:`assemble <assemble_module>` module, the file will not be encrypted on the target host (assuming you supply the correct vault password when you run the play). This behavior is intended and useful. You can encrypt a configuration file or template to avoid sharing the details of your configuration, but when you copy that configuration to servers in your environment, you want it to be decrypted so local users and processes can access it.
.. _speeding_up_vault:

Speeding up Ansible Vault
=========================

If you have many encrypted files, decrypting them at startup may cause a perceptible delay. To speed this up, install the cryptography package:
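The exact install command is elided by the diff hunk below; a minimal sketch, assuming a Python environment where ``pip`` installs packages for the interpreter running Ansible:

.. code-block:: bash

    pip install cryptography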
@ -467,14 +522,10 @@ If you have many encrypted files, decrypting them at startup may cause a percept
.. _vault_format: .. _vault_format:
Vault Format Format of files encrypted with Ansible Vault
```````````` ============================================
A vault encrypted file is a UTF-8 encoded txt file. Ansible Vault creates UTF-8 encoded txt files. The file format includes a newline terminated header. For example::
The file format includes a newline terminated header.
For example::
$ANSIBLE_VAULT;1.1;AES256 $ANSIBLE_VAULT;1.1;AES256
@ -482,25 +533,23 @@ or::
$ANSIBLE_VAULT;1.2;AES256;vault-id-label $ANSIBLE_VAULT;1.2;AES256;vault-id-label
The header contains the vault format id, the vault format version, the vault cipher, and a vault-id label (with format version 1.2), separated by semi-colons ';' The header contains up to four elements, separated by semi-colons (``;``).
The first field ``$ANSIBLE_VAULT`` is the format id. Currently ``$ANSIBLE_VAULT`` is the only valid file format id. This is used to identify files that are vault encrypted (via vault.is_encrypted_file()). 1. The format ID (``$ANSIBLE_VAULT``). Currently ``$ANSIBLE_VAULT`` is the only valid format ID. The format ID identifies content that is encrypted with Ansible Vault (via vault.is_encrypted_file()).
The second field (``1.X``) is the vault format version. All supported versions of ansible will currently default to '1.1' or '1.2' if a labeled vault-id is supplied. 2. The vault format version (``1.X``). All supported versions of Ansible will currently default to '1.1' or '1.2' if a labeled vault ID is supplied. The '1.0' format is supported for reading only (and will be converted automatically to the '1.1' format on write). The format version is currently used as an exact string compare only (version numbers are not currently 'compared').
The '1.0' format is supported for reading only (and will be converted automatically to the '1.1' format on write). The format version is currently used as an exact string compare only (version numbers are not currently 'compared'). 3. The cipher algorithm used to encrypt the data (``AES256``). Currently ``AES256`` is the only supported cipher algorithm. Vault format 1.0 used 'AES', but current code always uses 'AES256'.
The third field (``AES256``) identifies the cipher algorithm used to encrypt the data. Currently, the only supported cipher is 'AES256'. [vault format 1.0 used 'AES', but current code always uses 'AES256'] 4. The vault ID label used to encrypt the data (optional, ``vault-id-label``) For example, if you encrypt a file with ``--vault-id dev@prompt``, the vault-id-label is ``dev``.
The fourth field (``vault-id-label``) identifies the vault-id label used to encrypt the data. For example using a vault-id of ``dev@prompt`` results in a vault-id-label of 'dev' being used. Note: In the future, the header could change. Fields after the format ID and format version depend on the format version, and future vault format versions may add more cipher algorithm options and/or additional fields.
Note: In the future, the header could change. Anything after the vault id and version can be considered to depend on the vault format version. This includes the cipher id, and any additional fields that could be after that.
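Because the header is plain, newline-terminated text, you can inspect these fields without decrypting anything. For example (the file name is a placeholder):

.. code-block:: bash

    # print the header fields of a vault-encrypted file, one per line
    head -n 1 secret.yml | tr ';' '\n'
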
The rest of the content of the file is the 'vaulttext'. The vaulttext is a text armored version of the
encrypted ciphertext. Each line is 80 characters wide, except for the last line which may be shorter.

Ansible Vault payload format 1.1 - 1.2
--------------------------------------

The vaulttext is a concatenation of the ciphertext and a SHA256 digest with the result 'hexlifyied'.

@ -537,5 +586,3 @@ hexlify()'ed result of:

- the original plaintext
- padding up to the AES256 blocksize. (The data used for padding is based on `RFC5652 <https://tools.ietf.org/html/rfc5652#section-6.3>`_)
@ -496,7 +496,7 @@ Setup IIS Website

:ref:`playbooks_intro`
    An introduction to playbooks
:ref:`playbooks_best_practices`
    Tips and tricks for playbooks
:ref:`List of Windows Modules <windows_modules>`
    Windows specific module list, all implemented in PowerShell
`User Mailing List <https://groups.google.com/group/ansible-project>`_
@ -229,7 +229,7 @@ host.

:ref:`about_playbooks`
    An introduction to playbooks
:ref:`playbooks_best_practices`
    Tips and tricks for playbooks
`User Mailing List <https://groups.google.com/group/ansible-project>`_
    Have a question? Stop by the google group!
`irc.freenode.net <http://irc.freenode.net>`_
@ -564,7 +564,7 @@ Here are the known ones:

:ref:`about_playbooks`
    An introduction to playbooks
:ref:`playbooks_best_practices`
    Tips and tricks for playbooks
:ref:`List of Windows Modules <windows_modules>`
    Windows specific module list, all implemented in PowerShell
`User Mailing List <https://groups.google.com/group/ansible-project>`_
@ -504,7 +504,7 @@ guides for Windows modules differ substantially from those for standard standard

:ref:`playbooks_intro`
    An introduction to playbooks
:ref:`playbooks_best_practices`
    Tips and tricks for playbooks
:ref:`List of Windows Modules <windows_modules>`
    Windows specific module list, all implemented in PowerShell
`User Mailing List <https://groups.google.com/group/ansible-project>`_
@ -904,7 +904,7 @@ Some of these limitations can be mitigated by doing one of the following:

:ref:`playbooks_intro`
    An introduction to playbooks
:ref:`playbooks_best_practices`
    Tips and tricks for playbooks
:ref:`List of Windows Modules <windows_modules>`
    Windows specific module list, all implemented in PowerShell
`User Mailing List <https://groups.google.com/group/ansible-project>`_
@ -1,48 +0,0 @@
{# avoids rST "isn't included in any toctree" errors for module docs #}
:orphan:
{% if title %}
.. _@{ title.lower() + '_' + plugin_type + 's' }@:
{% else %}
.. _@{ plugin_type + 's' }@:
{% endif %}
{% if title %}
@{ title }@ @{ plugin_type + 's' }@
@{ '`' * title | length }@````````
{% else %}
@{ plugin_type + 's' }@
```````
{% endif %}
{% if blurb %}
@{ blurb }@
{% endif %}
{% if category['_modules'] %}
{% for module in category['_modules'] | sort %}
* :ref:`@{ module }@_@{ plugin_type }@`{% if module_info[module]['deprecated'] %} **(D)**{% endif%}
{% endfor %}
{% endif %}
{% for name, info in subcategories.items() | sort %}
.. _@{ name.lower() + '_' + title.lower() + '_' + plugin_type + 's' }@:
@{ name.title() }@
@{ '-' * name | length }@
{% for module in info['_modules'] | sort %}
* :ref:`@{ module }@_@{ plugin_type }@`{% if module_info[module]['deprecated'] %} **(D)**{% endif%}
{% endfor %}
{% endfor %}
.. note::
- **(D)**: This marks a module as deprecated, which means a module is kept for backwards compatibility but usage is discouraged.
The module documentation details page may explain more about this rationale.
@ -1,36 +0,0 @@
.. _@{ title.lower() + '_' + plugin_type + 's' }@:
@{ title }@ @{ plugin_type }@
@{ '`' * title | length }@````````
{% if blurb %}
@{ blurb }@
{% endif %}
.. toctree:: :maxdepth: 1
{% if category['_modules'] %}
{% for module in category['_modules'] | sort %}
@{ module }@{% if module_info[module]['deprecated'] %} **(D)**{% endif%}{% if module_info[module]['doc']['short_description'] %} -- @{ module_info[module]['doc']['short_description'] }@{% endif %} <plugins/@{ module_info[module]['primary_category'] }@/@{ module }@>
{% endfor %}
{% endif %}
{% for name, info in subcategories.items() | sort %}
.. _@{ name.lower() + '_' + title.lower() + '_' + plugin_type + 's' }@:
@{ name.title() }@
@{ '-' * name | length }@
.. toctree:: :maxdepth: 1
{% for module in info['_modules'] | sort %}
:ref:`@{ module }@_@{ plugin_type }@`{% if module_info[module]['deprecated'] %} **(D)**{% endif%} -- @{ module_info[module]['doc']['short_description'] }@
{% endfor %}
{% endfor %}
.. note::
- **(D)**: This marks a module as deprecated, which means a module is kept for backwards compatibility but usage is discouraged.
The module documentation details page may explain more about this rationale.
@ -1,45 +0,0 @@
.. _@{ slug }@:
{# avoids rST "isn't included in any toctree" errors for module index docs #}
:orphan:
**************************@{ '*' * maintainers | length }@
Modules Maintained by the @{ maintainers }@
**************************@{ '*' * maintainers | length }@
.. contents::
:local:
{% for category, data in subcategories.items() | sort %}
{% if category.lower() %}
.. _@{ category.lower() + '_' + slug.lower() + '_categories' }@:
{% else %}
.. _@{ slug.lower() + '_categories' }@:
{% endif %}
@{ category.title() }@
@{ '=' * category | length }@
{% for name, info in data.items() | sort %}
{% if name.lower() %}
.. _@{ name.lower() + '_' + category + '_' + slug.lower() + '_' + plugin_type + 's' }@:
{% else %}
.. _@{ slug.lower() + '_' + category }@:
{% endif %}
@{ name.title() }@
@{ '-' * name | length }@
{% for module in info['_modules'] | sort %}
* :ref:`@{ module }@_@{plugin_type}@`{% if module_info[module]['deprecated'] %} **(D)** {% endif%}
{% endfor %}
{% endfor %}
{% endfor %}
.. note::
- **(D)**: This marks a module as deprecated, which means a module is kept for backwards compatibility but usage is discouraged.
The module documentation details page may explain more about this rationale.
@ -1,442 +0,0 @@
:source: @{ source }@
{# avoids rST "isn't included in any toctree" errors for module docs #}
{% if plugin_type == 'module' %}
:orphan:
{% endif %}
.. _@{ module }@_@{ plugin_type }@:
{% for alias in aliases %}
.. _@{ alias }@_@{ plugin_type }@:
{% endfor %}
{% if short_description %}
{% set title = module + ' -- ' + short_description | rst_ify %}
{% else %}
{% set title = module %}
{% endif %}
@{ title }@
@{ '+' * title|length }@
{% if version_added is defined and version_added != '' -%}
.. versionadded:: @{ version_added | default('') }@
{% endif %}
.. contents::
:local:
:depth: 1
{# ------------------------------------------
#
# Please note: this looks like a core dump
# but it isn't one.
#
--------------------------------------------#}
{% if deprecated is defined -%}
DEPRECATED
----------
{# use unknown here? skip the fields? #}
:Removed in Ansible: version: @{ deprecated['removed_in'] | default('') | string | rst_ify }@
:Why: @{ deprecated['why'] | default('') | rst_ify }@
:Alternative: @{ deprecated['alternative'] | default('') | rst_ify }@
{% endif %}
Synopsis
--------
{% if description -%}
{% for desc in description %}
- @{ desc | rst_ify }@
{% endfor %}
{% endif %}
{% if aliases is defined -%}
Aliases: @{ ','.join(aliases) }@
{% endif %}
{% if requirements -%}
Requirements
------------
{% if plugin_type == 'module' %}
The below requirements are needed on the host that executes this @{ plugin_type }@.
{% else %}
The below requirements are needed on the local master node that executes this @{ plugin_type }@.
{% endif %}
{% for req in requirements %}
- @{ req | rst_ify }@
{% endfor %}
{% endif %}
{% if options -%}
Parameters
----------
.. raw:: html
<table border=0 cellpadding=0 class="documentation-table">
{# Pre-compute the nesting depth to allocate columns -#}
@{ to_kludge_ns('maxdepth', 1) -}@
{% for key, value in options|dictsort recursive -%}
@{ to_kludge_ns('maxdepth', [loop.depth, from_kludge_ns('maxdepth')] | max) -}@
{% if value.suboptions -%}
{% if value.suboptions.items -%}
@{ loop(value.suboptions.items()) -}@
{% elif value.suboptions[0].items -%}
@{ loop(value.suboptions[0].items()) -}@
{% endif -%}
{% endif -%}
{% endfor -%}
{# Header of the documentation -#}
<tr>
<th colspan="@{ from_kludge_ns('maxdepth') }@">Parameter</th>
<th>Choices/<font color="blue">Defaults</font></th>
{% if plugin_type != 'module' %}
<th>Configuration</th>
{% endif %}
<th width="100%">Comments</th>
</tr>
{% for key, value in options|dictsort recursive %}
<tr>
{# indentation based on nesting level #}
{% for i in range(1, loop.depth) %}
<td class="elbow-placeholder"></td>
{% endfor %}
{# parameter name with required and/or introduced label #}
<td colspan="@{ from_kludge_ns('maxdepth') - loop.depth0 }@">
<div class="ansibleOptionAnchor" id="parameter-{% for part in value.full_key %}@{ part }@{% if not loop.last %}/{% endif %}{% endfor %}"></div>
<b>@{ key }@</b>
<a class="ansibleOptionLink" href="#parameter-{% for part in value.full_key %}@{ part }@{% if not loop.last %}/{% endif %}{% endfor %}" title="Permalink to this option"></a>
<div style="font-size: small">
<span style="color: purple">@{ value.type | documented_type }@</span>
{% if value.get('elements') %} / <span style="color: purple">elements=@{ value.elements | documented_type }@</span>{% endif %}
{% if value.get('required', False) %} / <span style="color: red">required</span>{% endif %}
</div>
{% if value.version_added %}<div style="font-style: italic; font-size: small; color: darkgreen">added in @{value.version_added}@</div>{% endif %}
</td>
{# default / choices #}
<td>
{# Turn boolean values in 'yes' and 'no' values #}
{% if value.default is sameas true %}
{% set _x = value.update({'default': 'yes'}) %}
{% elif value.default is sameas false %}
{% set _x = value.update({'default': 'no'}) %}
{% endif %}
{% if value.type == 'bool' %}
{% set _x = value.update({'choices': ['no', 'yes']}) %}
{% endif %}
{# Show possible choices and highlight details #}
{% if value.choices %}
<ul style="margin: 0; padding: 0"><b>Choices:</b>
{% for choice in value.choices %}
{# Turn boolean values in 'yes' and 'no' values #}
{% if choice is sameas true %}
{% set choice = 'yes' %}
{% elif choice is sameas false %}
{% set choice = 'no' %}
{% endif %}
{% if (value.default is not list and value.default == choice) or (value.default is list and choice in value.default) %}
<li><div style="color: blue"><b>@{ choice | escape }@</b>&nbsp;&larr;</div></li>
{% else %}
<li>@{ choice | escape }@</li>
{% endif %}
{% endfor %}
</ul>
{% endif %}
{# Show default value, when multiple choice or no choices #}
{% if value.default is defined and value.default not in value.choices %}
<b>Default:</b><br/><div style="color: blue">@{ value.default | tojson | escape }@</div>
{% endif %}
</td>
{# configuration #}
{% if plugin_type != 'module' %}
<td>
{% if 'ini' in value %}
<div> ini entries:
{% for ini in value.ini %}
<p>[@{ ini.section }@]<br>@{ ini.key }@ = @{ value.default | default('VALUE') }@</p>
{% endfor %}
</div>
{% endif %}
{% if 'env' in value %}
{% for env in value.env %}
<div>env:@{ env.name }@</div>
{% endfor %}
{% endif %}
{% if 'vars' in value %}
{% for myvar in value.vars %}
<div>var: @{ myvar.name }@</div>
{% endfor %}
{% endif %}
</td>
{% endif %}
{# description #}
<td>
{% for desc in value.description %}
<div>@{ desc | replace('\n', '\n ') | html_ify }@</div>
{% endfor %}
{% if 'aliases' in value and value.aliases %}
<div style="font-size: small; color: darkgreen"><br/>aliases: @{ value.aliases|join(', ') }@</div>
{% endif %}
</td>
</tr>
{% if value.suboptions %}
{% if value.suboptions.items %}
@{ loop(value.suboptions|dictsort) }@
{% elif value.suboptions[0].items %}
@{ loop(value.suboptions[0]|dictsort) }@
{% endif %}
{% endif %}
{% endfor %}
</table>
<br/>
{% endif %}
{% if notes -%}
Notes
-----
.. note::
{% for note in notes %}
- @{ note | rst_ify }@
{% endfor %}
{% endif %}
{% if seealso -%}
See Also
--------
.. seealso::
{% for item in seealso %}
{% if item.module is defined and item.description is defined %}
:ref:`@{ item.module }@_module`
@{ item.description | rst_ify }@
{% elif item.module is defined %}
:ref:`@{ item.module }@_module`
The official documentation on the **@{ item.module }@** module.
{% elif item.name is defined and item.link is defined and item.description is defined %}
`@{ item.name }@ <@{ item.link }@>`_
@{ item.description | rst_ify }@
{% elif item.ref is defined and item.description is defined %}
:ref:`@{ item.ref }@`
@{ item.description | rst_ify }@
{% endif %}
{% endfor %}
{% endif %}
{% if examples or plainexamples -%}
Examples
--------
.. code-block:: yaml+jinja
{% for example in examples %}
{% if example['description'] %}@{ example['description'] | indent(4, True) }@{% endif %}
@{ example['code'] | escape | indent(4, True) }@
{% endfor %}
{% if plainexamples %}@{ plainexamples | indent(4, True) }@{% endif %}
{% endif %}
{% if not returnfacts and returndocs and returndocs.ansible_facts is defined %}
{% set returnfacts = returndocs.ansible_facts.contains %}
{% set _x = returndocs.pop('ansible_facts', None) %}
{% endif %}
{% if returnfacts -%}
Returned Facts
--------------
Facts returned by this module are added/updated in the ``hostvars`` host facts and can be referenced by name just like any other host fact. They do not need to be registered in order to use them.
.. raw:: html
<table border=0 cellpadding=0 class="documentation-table">
{# Pre-compute the nesting depth to allocate columns #}
@{ to_kludge_ns('maxdepth', 1) -}@
{% for key, value in returnfacts|dictsort recursive %}
@{ to_kludge_ns('maxdepth', [loop.depth, from_kludge_ns('maxdepth')] | max) -}@
{% if value.contains -%}
{% if value.contains.items -%}
@{ loop(value.contains.items()) -}@
{% elif value.contains[0].items -%}
@{ loop(value.contains[0].items()) -}@
{% endif -%}
{% endif -%}
{% endfor -%}
<tr>
<th colspan="@{ from_kludge_ns('maxdepth') }@">Fact</th>
<th>Returned</th>
<th width="100%">Description</th>
</tr>
{% for key, value in returnfacts|dictsort recursive %}
<tr>
{% for i in range(1, loop.depth) %}
<td class="elbow-placeholder"></td>
{% endfor %}
<td colspan="@{ from_kludge_ns('maxdepth') - loop.depth0 }@" colspan="@{ from_kludge_ns('maxdepth') - loop.depth0 }@">
<div class="ansibleOptionAnchor" id="return-{% for part in value.full_key %}@{ part }@{% if not loop.last %}/{% endif %}{% endfor %}"></div>
<b>@{ key }@</b>
<a class="ansibleOptionLink" href="#return-{% for part in value.full_key %}@{ part }@{% if not loop.last %}/{% endif %}{% endfor %}" title="Permalink to this fact"></a>
<div style="font-size: small">
<span style="color: purple">@{ value.type | documented_type }@</span>
{% if value.elements %} / <span style="color: purple">elements=@{ value.elements | documented_type }@</span>{% endif %}
</div>
{% if value.version_added %}<div style="font-style: italic; font-size: small; color: darkgreen">added in @{value.version_added}@</div>{% endif %}
</td>
<td>@{ value.returned | html_ify }@</td>
<td>
{% if value.description is string %}
<div>@{ value.description | html_ify }@
</div>
{% else %}
{% for desc in value.description %}
<div>@{ desc | html_ify }@
</div>
{% endfor %}
{% endif %}
<br/>
{% if value.sample is defined and value.sample %}
<div style="font-size: smaller"><b>Sample:</b></div>
{# TODO: The sample should be escaped, using | escape or | htmlify, but both mess things up beyond repair with dicts #}
<div style="font-size: smaller; color: blue; word-wrap: break-word; word-break: break-all;">@{ value.sample | replace('\n', '\n ') | html_ify }@</div>
{% endif %}
</td>
</tr>
{# ---------------------------------------------------------
# sadly we cannot blindly iterate through the child dicts,
# since in some documentations,
# lists are used instead of dicts. This handles both types
# ---------------------------------------------------------#}
{% if value.contains %}
{% if value.contains.items %}
@{ loop(value.contains|dictsort) }@
{% elif value.contains[0].items %}
@{ loop(value.contains[0]|dictsort) }@
{% endif %}
{% endif %}
{% endfor %}
</table>
<br/><br/>
{% endif %}
{% if returndocs -%}
Return Values
-------------
Common return values are documented :ref:`here <common_return_values>`, the following are the fields unique to this @{ plugin_type }@:
.. raw:: html
<table border=0 cellpadding=0 class="documentation-table">
@{ to_kludge_ns('maxdepth', 1) -}@
{% for key, value in returndocs|dictsort recursive -%}
@{ to_kludge_ns('maxdepth', [loop.depth, from_kludge_ns('maxdepth')] | max) -}@
{% if value.contains -%}
{% if value.contains.items -%}
@{ loop(value.contains.items()) -}@
{% elif value.contains[0].items -%}
@{ loop(value.contains[0].items()) -}@
{% endif -%}
{% endif -%}
{% endfor -%}
<tr>
<th colspan="@{ from_kludge_ns('maxdepth') }@">Key</th>
<th>Returned</th>
<th width="100%">Description</th>
</tr>
{% for key, value in returndocs|dictsort recursive %}
<tr>
{% for i in range(1, loop.depth) %}
<td class="elbow-placeholder">&nbsp;</td>
{% endfor %}
<td colspan="@{ from_kludge_ns('maxdepth') - loop.depth0 }@">
<div class="ansibleOptionAnchor" id="return-{% for part in value.full_key %}@{ part }@{% if not loop.last %}/{% endif %}{% endfor %}"></div>
<b>@{ key }@</b>
<a class="ansibleOptionLink" href="#return-{% for part in value.full_key %}@{ part }@{% if not loop.last %}/{% endif %}{% endfor %}" title="Permalink to this return value"></a>
<div style="font-size: small">
<span style="color: purple">@{ value.type | documented_type }@</span>
{% if value.elements %} / <span style="color: purple">elements=@{ value.elements | documented_type }@</span>{% endif %}
</div>
{% if value.version_added %}<div style="font-style: italic; font-size: small; color: darkgreen">added in @{value.version_added}@</div>{% endif %}
</td>
<td>@{ value.returned | html_ify }@</td>
<td>
{% if value.description is string %}
<div>@{ value.description | html_ify |indent(4) | trim}@</div>
{% else %}
{% for desc in value.description %}
<div>@{ desc | html_ify |indent(4) | trim}@</div>
{% endfor %}
{% endif %}
<br/>
{% if value.sample is defined and value.sample %}
<div style="font-size: smaller"><b>Sample:</b></div>
{# TODO: The sample should be escaped, using |escape or |htmlify, but both mess things up beyond repair with dicts #}
<div style="font-size: smaller; color: blue; word-wrap: break-word; word-break: break-all;">@{ value.sample | replace('\n', '\n ') | html_ify }@</div>
{% endif %}
</td>
</tr>
{# ---------------------------------------------------------
# sadly we cannot blindly iterate through the child dicts,
# since in some documentations,
# lists are used instead of dicts. This handles both types
# ---------------------------------------------------------#}
{% if value.contains %}
{% if value.contains.items %}
@{ loop(value.contains|dictsort) }@
{% elif value.contains[0].items %}
@{ loop(value.contains[0]|dictsort) }@
{% endif %}
{% endif %}
{% endfor %}
</table>
<br/><br/>
{% endif %}
Status
------
{% if deprecated %}
- This @{ plugin_type }@ will be removed in version @{ deprecated['removed_in'] | default('') | string | rst_ify }@. *[deprecated]*
- For more information see `DEPRECATED`_.
{% endif %}
{% if author is defined -%}
Authors
~~~~~~~
{% for author_name in author %}
- @{ author_name }@
{% endfor %}
{% endif %}
.. hint::
{% if plugin_type == 'module' %}
If you notice any issues in this documentation, you can `edit this document <https://github.com/ansible/ansible/edit/devel/lib/ansible/modules/@{ source }@?description=%23%23%23%23%23%20SUMMARY%0A%3C!---%20Your%20description%20here%20--%3E%0A%0A%0A%23%23%23%23%23%20ISSUE%20TYPE%0A-%20Docs%20Pull%20Request%0A%0A%2Blabel:%20docsite_pr>`_ to improve it.
{% else %}
If you notice any issues in this documentation, you can `edit this document <https://github.com/ansible/ansible/edit/devel/lib/ansible/plugins/@{ plugin_type }@/@{ source }@?description=%23%23%23%23%23%20SUMMARY%0A%3C!---%20Your%20description%20here%20--%3E%0A%0A%0A%23%23%23%23%23%20ISSUE%20TYPE%0A-%20Docs%20Pull%20Request%0A%0A%2Blabel:%20docsite_pr>`_ to improve it.
.. hint::
Configuration entries for each entry type have a low to high priority order. For example, a variable that is lower in the list will override a variable that is higher up.
{% endif %}
@ -1,18 +0,0 @@
:source: @{ source }@
{# avoids rST "isn't included in any toctree" errors for module docs #}
:orphan:
.. _@{ module }@_@{ plugin_type }@_alias_@{ alias }@:
{% if short_description %}
{% set title = alias + ' -- ' + short_description | rst_ify %}
{% else %}
{% set title = alias %}
{% endif %}
@{ title }@
@{ '+' * title|length }@
This is an alias for :ref:`@{ module }@ <@{ module }@_@{ plugin_type }@>`.
This name has been **deprecated**. Please update your tasks to use the new name ``@{ module }@`` instead.
@ -1,9 +0,0 @@
Plugin Index
============
.. toctree:: :maxdepth: 1
{% for name in categories %}
list_of_@{ name }@_plugins
{% endfor %}
@ -1,15 +0,0 @@
.. _@{ slug }@:
Plugins Maintained by the @{ maintainers }@
``````````````````````````@{ '`' * maintainers | length }@
.. toctree:: :maxdepth: 1
{% for module in modules | sort %}
@{ module }@{% if module_info[module]['deprecated'] %} **(D)**{% endif %} - @{ module_info[module]['doc']['short_description'] }@ <plugins/@{ module_info[module]['primary_category'] }@/@{ module }@>
{% endfor %}
.. note::
- **(D)**: This marks a plugin as deprecated, which means a plugin is kept for backwards compatibility but usage is discouraged.
The plugin documentation details page may explain more about this rationale.
@ -22,15 +22,25 @@ except ImportError:

def build_lib_path(this_script=__file__):
    """Return path to the common build library directory."""
    hacking_dir = os.path.dirname(this_script)
    libdir = os.path.abspath(os.path.join(hacking_dir, 'build_library'))
    return libdir


def ansible_lib_path(this_script=__file__):
    """Return path to the common build library directory."""
    hacking_dir = os.path.dirname(this_script)
    libdir = os.path.abspath(os.path.join(hacking_dir, '..', 'lib'))
    return libdir


sys.path.insert(0, ansible_lib_path())
sys.path.insert(0, build_lib_path())

from build_ansible import commands, errors

@ -47,14 +57,15 @@ def create_arg_parser(program_name):

def main():
    """
    Start our run.
    "It all starts here"
    """
    subcommands = load('build_ansible.command_plugins', subclasses=commands.Command)
    arg_parser = create_arg_parser(os.path.basename(sys.argv[0]))
    arg_parser.add_argument('--debug', dest='debug', required=False, default=False,
                            action='store_true',
                            help='Show tracebacks and other debugging information')
    subparsers = arg_parser.add_subparsers(title='Subcommands', dest='command',
                                           help='for help use build-ansible.py SUBCOMMANDS -h')
@ -11,14 +11,13 @@ import os.path

import pathlib

import yaml

from ansible.module_utils.six import string_types
from ansible.module_utils._text import to_bytes

from antsibull.jinja2.environment import doc_environment

# Pylint doesn't understand Python3 namespace modules.
from ..change_detection import update_file_if_different  # pylint: disable=relative-beyond-top-level
from ..commands import Command  # pylint: disable=relative-beyond-top-level

DEFAULT_TEMPLATE_FILE = 'collections_galaxy_meta.rst.j2'

@ -61,12 +60,7 @@ class DocumentCollectionMeta(Command):

        normalize_options(options)

        env = doc_environment(template_dir)
        template = env.get_template(template_file)

        output_name = os.path.join(output_dir, template_file.replace('.j2', ''))
@ -0,0 +1,164 @@
# coding: utf-8
# Copyright: (c) 2020, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import absolute_import, division, print_function
import glob
import os
import os.path
import pathlib
import shutil
from tempfile import TemporaryDirectory
import yaml
from ansible.release import __version__ as ansible_base__version__
# Pylint doesn't understand Python3 namespace modules.
# pylint: disable=relative-beyond-top-level
from ..commands import Command
# pylint: enable=relative-beyond-top-level
__metaclass__ = type
DEFAULT_TOP_DIR = pathlib.Path(__file__).parents[4]
DEFAULT_OUTPUT_DIR = pathlib.Path(__file__).parents[4] / 'docs/docsite'
#
# Subcommand base
#
def generate_base_docs(args):
"""Regenerate the documentation for all plugins listed in the plugin_to_collection_file."""
# imports here so that they don't cause unnecessary deps for all of the plugins
from antsibull.cli import antsibull_docs
with TemporaryDirectory() as tmp_dir:
#
# Construct a deps file with our version of ansible_base in it
#
modified_deps_file = os.path.join(tmp_dir, 'ansible.deps')
# The _acd_version doesn't matter
deps_file_contents = {'_acd_version': ansible_base__version__,
'_ansible_base_version': ansible_base__version__}
with open(modified_deps_file, 'w') as f:
f.write(yaml.dump(deps_file_contents))
# Generate the plugin rst
return antsibull_docs.run(['antsibull-docs', 'stable', '--deps-file', modified_deps_file,
'--ansible-base-cache', str(args.top_dir),
'--dest-dir', args.output_dir])
# If we make this more than just a driver for antsibull:
# Run other rst generation
# Run sphinx build
#
# Subcommand full
#
def generate_full_docs(args):
"""Regenerate the documentation for all plugins listed in the plugin_to_collection_file."""
# imports here so that they don't cause unnecessary deps for all of the plugins
import sh
from antsibull.cli import antsibull_docs
from packaging.version import Version
ansible_base_ver = Version(ansible_base__version__)
ansible_base_major_ver = '{0}.{1}'.format(ansible_base_ver.major, ansible_base_ver.minor)
with TemporaryDirectory() as tmp_dir:
sh.git(['clone', 'https://github.com/ansible-community/ansible-build-data'], _cwd=tmp_dir)
deps_files = glob.glob(os.path.join(tmp_dir, 'ansible-build-data',
ansible_base_major_ver, '*.deps'))
if not deps_files:
raise Exception('No deps files exist for version {0}'.format(ansible_base_major_ver))
# Find the latest version of the deps file for this version
latest = None
latest_ver = Version('0')
for filename in deps_files:
with open(filename, 'r') as f:
deps_data = yaml.safe_load(f.read())
new_version = Version(deps_data['_ansible_base_version'])
if new_version > latest_ver:
latest_ver = new_version
latest = filename
# Make a copy of the deps file so that we can set the ansible-base version to use
modified_deps_file = os.path.join(tmp_dir, 'ansible.deps')
shutil.copyfile(latest, modified_deps_file)
# Put our version of ansible-base into the deps file
with open(modified_deps_file, 'r') as f:
deps_data = yaml.safe_load(f.read())
deps_data['_ansible_base_version'] = ansible_base__version__
with open(modified_deps_file, 'w') as f:
f.write(yaml.dump(deps_data))
# Generate the plugin rst
return antsibull_docs.run(['antsibull-docs', 'stable', '--deps-file', modified_deps_file,
'--ansible-base-cache', str(args.top_dir),
'--dest-dir', args.output_dir])
# If we make this more than just a driver for antsibull:
# Run other rst generation
# Run sphinx build
class CollectionPluginDocs(Command):
name = 'docs-build'
_ACTION_HELP = """Action to perform.
full: Regenerate the rst for the full ansible website.
base: Regenerate the rst for plugins in ansible-base and then build the website.
named: Regenerate the rst for the named plugins and then build the website.
"""
@classmethod
def init_parser(cls, add_parser):
parser = add_parser(cls.name,
description='Generate documentation for plugins in collections.'
' Plugins in collections will have a stub file in the normal plugin'
' documentation location that says the module is in a collection and'
' point to generated plugin documentation under the collections/'
' hierarchy.')
parser.add_argument('action', action='store', choices=('full', 'base', 'named'),
default='full', help=cls._ACTION_HELP)
parser.add_argument("-o", "--output-dir", action="store", dest="output_dir",
default=DEFAULT_OUTPUT_DIR,
help="Output directory for generated doc files")
parser.add_argument("-t", "--top-dir", action="store", dest="top_dir",
default=DEFAULT_TOP_DIR,
help="Toplevel directory of this ansible-base checkout or expanded"
" tarball.")
parser.add_argument("-l", "--limit-to-modules", '--limit-to', action="store",
dest="limit_to", default=None,
help="Limit building module documentation to comma-separated list of"
" plugins. Specify non-existing plugin name for no plugins.")
@staticmethod
def main(args):
# normalize CLI args
if not args.output_dir:
args.output_dir = os.path.abspath(str(DEFAULT_OUTPUT_DIR))
if args.action == 'full':
return generate_full_docs(args)
if args.action == 'base':
return generate_base_docs(args)
# args.action == 'named' (Invalid actions are caught by argparse)
raise NotImplementedError('Building docs for specific files is not yet implemented')
# return 0
@ -1,807 +0,0 @@
# Copyright: (c) 2012, Jan-Piet Mens <jpmens () gmail.com>
# Copyright: (c) 2012-2014, Michael DeHaan <michael@ansible.com> and others
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
import datetime
import glob
import json
import os
import re
import sys
import warnings
from collections import defaultdict
from copy import deepcopy
from distutils.version import LooseVersion
from functools import partial
from pprint import PrettyPrinter
try:
from html import escape as html_escape
except ImportError:
# Python-3.2 or later
import cgi
def html_escape(text, quote=True):
return cgi.escape(text, quote)
import jinja2
import yaml
from jinja2 import Environment, FileSystemLoader
from ansible.errors import AnsibleError
from ansible.module_utils._text import to_bytes
from ansible.module_utils.common.collections import is_sequence
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.module_utils.six import iteritems, string_types
from ansible.plugins.loader import fragment_loader
from ansible.utils import plugin_docs
from ansible.utils.display import Display
# Pylint doesn't understand Python3 namespace modules.
from ..change_detection import update_file_if_different # pylint: disable=relative-beyond-top-level
from ..commands import Command # pylint: disable=relative-beyond-top-level
from ..jinja2.filters import do_max, documented_type, html_ify, rst_fmt, rst_ify, rst_xline # pylint: disable=relative-beyond-top-level
#####################################################################################
# constants and paths
# if a module is added in a version of Ansible older than this, don't print the version added information
# in the module documentation because everyone is assumed to be running something newer than this already.
TOO_OLD_TO_BE_NOTABLE = 2.4
# Get parent directory of the directory this script lives in
MODULEDIR = os.path.abspath(os.path.join(
os.path.dirname(os.path.realpath(__file__)), os.pardir, 'lib', 'ansible', 'modules'
))
# The name of the DOCUMENTATION template
EXAMPLE_YAML = os.path.abspath(os.path.join(
os.path.dirname(os.path.realpath(__file__)), os.pardir, 'examples', 'DOCUMENTATION.yml'
))
DEPRECATED = b" (D)"
pp = PrettyPrinter()
display = Display()
# kludge_ns gives us a kludgey way to set variables inside of loops that need to be visible outside
# the loop. We can get rid of this when we no longer need to build docs with less than Jinja-2.10
# http://jinja.pocoo.org/docs/2.10/templates/#assignments
# With Jinja-2.10 we can use jinja2's namespace feature, restoring the namespace template portion
# of: fa5c0282a4816c4dd48e80b983ffc1e14506a1f5
NS_MAP = {}
def to_kludge_ns(key, value):
NS_MAP[key] = value
return ""
def from_kludge_ns(key):
return NS_MAP[key]
test_list = partial(is_sequence, include_strings=False)
def normalize_options(value):
"""Normalize boolean option value."""
if value.get('type') == 'bool' and 'default' in value:
try:
value['default'] = boolean(value['default'], strict=True)
except TypeError:
pass
return value
def write_data(text, output_dir, outputname, module=None):
''' dumps module output to a file or the screen, as requested '''
if output_dir is not None:
if module:
outputname = outputname % module
if not os.path.exists(output_dir):
os.makedirs(output_dir)
fname = os.path.join(output_dir, outputname)
fname = fname.replace(".py", "")
try:
updated = update_file_if_different(fname, to_bytes(text))
except Exception as e:
display.display("while rendering %s, an error occured: %s" % (module, e))
raise
if updated:
display.display("rendering: %s" % module)
else:
print(text)
IS_STDOUT_TTY = sys.stdout.isatty()
def show_progress(progress):
'''Show a little process indicator.'''
if IS_STDOUT_TTY:
sys.stdout.write('\r%s\r' % ("-/|\\"[progress % 4]))
sys.stdout.flush()
def get_plugin_info(module_dir, limit_to=None, verbose=False):
'''
Returns information about plugins and the categories that they belong to
:arg module_dir: file system path to the top of the plugin directory
:kwarg limit_to: If given, this is a list of plugin names to
generate information for. All other plugins will be ignored.
:returns: Tuple of two dicts containing module_info, categories, and
aliases and a set listing deprecated modules:
:module_info: mapping of module names to information about them. The fields of the dict are:
:path: filesystem path to the module
:deprecated: boolean. True means the module is deprecated otherwise not.
:aliases: set of aliases to this module name
:metadata: The modules metadata (as recorded in the module)
:doc: The documentation structure for the module
:seealso: The list of dictionaries with references to related subjects
:examples: The module's examples
:returndocs: The module's returndocs
:categories: maps category names to a dict. The dict contains at
least one key, '_modules' which contains a list of module names in
that category. Any other keys in the dict are subcategories with
the same structure.
'''
categories = dict()
module_info = defaultdict(dict)
# * windows powershell modules have documentation stubs in python docstring
# format (they are not executed) so skip the ps1 format files
# * One glob level for every module level that we're going to traverse
files = (
glob.glob("%s/*.py" % module_dir) +
glob.glob("%s/*/*.py" % module_dir) +
glob.glob("%s/*/*/*.py" % module_dir) +
glob.glob("%s/*/*/*/*.py" % module_dir)
)
module_index = 0
for module_path in files:
# Do not list __init__.py files
if module_path.endswith('__init__.py'):
continue
# Do not list blacklisted modules
module = os.path.splitext(os.path.basename(module_path))[0]
if module in plugin_docs.BLACKLIST['MODULE'] or module == 'base':
continue
# If requested, limit module documentation building only to passed-in
# modules.
if limit_to is not None and module.lower() not in limit_to:
continue
deprecated = False
if module.startswith("_"):
if os.path.islink(module_path):
# Handle aliases
source = os.path.splitext(os.path.basename(os.path.realpath(module_path)))[0]
module = module.replace("_", "", 1)
if source.startswith("_"):
source = source.replace("_", "", 1)
aliases = module_info[source].get('aliases', set())
aliases.add(module)
aliases_deprecated = module_info[source].get('aliases_deprecated', set())
aliases_deprecated.add(module)
# In case we just created this via get()'s fallback
module_info[source]['aliases'] = aliases
module_info[source]['aliases_deprecated'] = aliases_deprecated
continue
else:
# Handle deprecations
module = module.replace("_", "", 1)
deprecated = True
#
# Regular module to process
#
module_index += 1
show_progress(module_index)
# use ansible core library to parse out doc metadata YAML and plaintext examples
doc, examples, returndocs, metadata = plugin_docs.get_docstring(
module_path, fragment_loader, verbose=verbose, collection_name='ansible.builtin')
if metadata and 'removed' in metadata.get('status', []):
continue
category = categories
# Start at the second directory because we don't want the "vendor"
mod_path_only = os.path.dirname(module_path[len(module_dir):])
# Find the subcategory for each module
relative_dir = mod_path_only.split('/')[1]
sub_category = mod_path_only[len(relative_dir) + 2:]
primary_category = ''
module_categories = []
# build up the categories that this module belongs to
for new_cat in mod_path_only.split('/')[1:]:
if new_cat not in category:
category[new_cat] = dict()
category[new_cat]['_modules'] = []
module_categories.append(new_cat)
category = category[new_cat]
category['_modules'].append(module)
# the category we will use in links (so list_of_all_plugins can point to plugins/action_plugins/*'
if module_categories:
primary_category = module_categories[0]
if not doc:
display.error("*** ERROR: DOCUMENTATION section missing for %s. ***" % module_path)
continue
if 'options' in doc and doc['options'] is None:
display.error("*** ERROR: DOCUMENTATION.options must be a dictionary/hash when used. ***")
pos = getattr(doc, "ansible_pos", None)
if pos is not None:
display.error("Module position: %s, %d, %d" % doc.ansible_pos)
doc['options'] = dict()
for key, opt in doc.get('options', {}).items():
doc['options'][key] = normalize_options(opt)
# save all the information
module_info[module] = {'path': module_path,
'source': os.path.relpath(module_path, module_dir),
'deprecated': deprecated,
'aliases': module_info[module].get('aliases', set()),
'aliases_deprecated': module_info[module].get('aliases_deprecated', set()),
'metadata': metadata,
'doc': doc,
'examples': examples,
'returndocs': returndocs,
'categories': module_categories,
'primary_category': primary_category,
'sub_category': sub_category,
}
# keep module tests out of becoming module docs
if 'test' in categories:
del categories['test']
return module_info, categories
def jinja2_environment(template_dir, typ, plugin_type):
env = Environment(loader=FileSystemLoader(template_dir),
variable_start_string="@{",
variable_end_string="}@",
trim_blocks=True)
env.globals['xline'] = rst_xline
# Can be removed (and template switched to use namespace) when we no longer need to build
# with <Jinja-2.10
env.globals['to_kludge_ns'] = to_kludge_ns
env.globals['from_kludge_ns'] = from_kludge_ns
if 'max' not in env.filters:
# Jinja < 2.10
env.filters['max'] = do_max
if 'tojson' not in env.filters:
# Jinja < 2.9
env.filters['tojson'] = json.dumps
templates = {}
if typ == 'rst':
env.filters['rst_ify'] = rst_ify
env.filters['html_ify'] = html_ify
env.filters['fmt'] = rst_fmt
env.filters['xline'] = rst_xline
env.filters['documented_type'] = documented_type
env.tests['list'] = test_list
templates['plugin'] = env.get_template('plugin.rst.j2')
templates['plugin_deprecation_stub'] = env.get_template('plugin_deprecation_stub.rst.j2')
if plugin_type == 'module':
name = 'modules'
else:
name = 'plugins'
templates['category_list'] = env.get_template('%s_by_category.rst.j2' % name)
templates['support_list'] = env.get_template('%s_by_support.rst.j2' % name)
templates['list_of_CATEGORY_modules'] = env.get_template('list_of_CATEGORY_%s.rst.j2' % name)
else:
raise Exception("Unsupported format type: %s" % typ)
return templates
def process_version_added(version_added):
if not isinstance(version_added, string_types):
return version_added
if ':' not in version_added:
return version_added
# Strip tag from version_added. It suffices to do this here since
# this is only used for ansible-base, and there the only valid tag
# is `ansible.builtin:`.
return version_added[version_added.index(':') + 1:]
def too_old(added):
if not added:
return False
try:
added_tokens = str(added).split(".")
readded = added_tokens[0] + "." + added_tokens[1]
added_float = float(readded)
except ValueError as e:
warnings.warn("Could not parse %s: %s" % (added, str(e)))
return False
return added_float < TOO_OLD_TO_BE_NOTABLE
def process_options(module, options, full_key=None):
option_names = []
if full_key is None:
full_key = []
if options:
for (k, v) in iteritems(options):
# Make sure that "full key" is contained
full_key_k = full_key + [k]
v['full_key'] = full_key_k
# Error out if there's no description
if 'description' not in v:
raise AnsibleError("Missing required description for parameter '%s' in '%s' " % (k, module))
# Make sure description is a list of lines for later formatting
if isinstance(v['description'], string_types):
v['description'] = [v['description']]
elif not isinstance(v['description'], (list, tuple)):
raise AnsibleError("Invalid type for options['%s']['description']."
" Must be string or list of strings. Got %s" %
(k, type(v['description'])))
# Error out if required isn't a boolean (people have been putting
# information on when something is required in here. Those need
# to go in the description instead).
required_value = v.get('required', False)
if not isinstance(required_value, bool):
raise AnsibleError("Invalid required value '%s' for parameter '%s' in '%s' (must be truthy)" % (required_value, k, module))
# Strip old version_added information for options
if 'version_added' in v:
v['version_added'] = process_version_added(v['version_added'])
if too_old(v['version_added']):
del v['version_added']
if 'suboptions' in v and v['suboptions']:
if isinstance(v['suboptions'], dict):
process_options(module, v['suboptions'], full_key=full_key_k)
elif isinstance(v['suboptions'][0], dict):
process_options(module, v['suboptions'][0], full_key=full_key_k)
option_names.append(k)
option_names.sort()
return option_names
def process_returndocs(returndocs, full_key=None):
if full_key is None:
full_key = []
if returndocs:
for (k, v) in iteritems(returndocs):
# Make sure that "full key" is contained
full_key_k = full_key + [k]
v['full_key'] = full_key_k
# Strip old version_added information for options
if 'version_added' in v:
v['version_added'] = process_version_added(v['version_added'])
if too_old(v['version_added']):
del v['version_added']
# Process suboptions
suboptions = v.get('contains')
if suboptions:
if isinstance(suboptions, dict):
process_returndocs(suboptions, full_key=full_key_k)
elif is_sequence(suboptions):
process_returndocs(suboptions[0], full_key=full_key_k)
def process_plugins(module_map, templates, outputname, output_dir, ansible_version, plugin_type):
for module_index, module in enumerate(module_map):
show_progress(module_index)
fname = module_map[module]['path']
display.vvvvv(pp.pformat(('process_plugins info: ', module_map[module])))
# crash if module is missing documentation and not explicitly hidden from docs index
if module_map[module]['doc'] is None:
display.error("%s MISSING DOCUMENTATION" % (fname,))
_doc = {plugin_type: module,
'version_added': '2.4',
'filename': fname}
module_map[module]['doc'] = _doc
# continue
# Going to reference this heavily so make a short name to reference it by
doc = module_map[module]['doc']
display.vvvvv(pp.pformat(('process_plugins doc: ', doc)))
# add some defaults for plugins that dont have most of the info
doc['module'] = doc.get('module', module)
doc['version_added'] = process_version_added(doc.get('version_added', 'historical'))
doc['plugin_type'] = plugin_type
if module_map[module]['deprecated'] and 'deprecated' not in doc:
display.warning("%s PLUGIN MISSING DEPRECATION DOCUMENTATION: %s" % (fname, 'deprecated'))
required_fields = ('short_description',)
for field in required_fields:
if field not in doc:
display.warning("%s PLUGIN MISSING field '%s'" % (fname, field))
not_nullable_fields = ('short_description',)
for field in not_nullable_fields:
if field in doc and doc[field] in (None, ''):
print("%s: WARNING: MODULE field '%s' DOCUMENTATION is null/empty value=%s" % (fname, field, doc[field]))
if 'description' in doc:
if isinstance(doc['description'], string_types):
doc['description'] = [doc['description']]
elif not isinstance(doc['description'], (list, tuple)):
raise AnsibleError("Description must be a string or list of strings. Got %s"
% type(doc['description']))
else:
doc['description'] = []
if 'version_added' not in doc:
# Will never happen, since it has been explicitly inserted above.
raise AnsibleError("*** ERROR: missing version_added in: %s ***\n" % module)
#
# The present template gets everything from doc so we spend most of this
# function moving data into doc for the template to reference
#
if module_map[module]['aliases']:
doc['aliases'] = module_map[module]['aliases']
# don't show version added information if it's too old to be called out
added = 0
if doc['version_added'] == 'historical':
del doc['version_added']
else:
added = doc['version_added']
# Strip old version_added for the module
if too_old(added):
del doc['version_added']
doc['option_keys'] = process_options(module, doc.get('options'))
doc['filename'] = fname
doc['source'] = module_map[module]['source']
doc['docuri'] = doc['module'].replace('_', '-')
doc['now_date'] = datetime.date.today().strftime('%Y-%m-%d')
doc['ansible_version'] = ansible_version
# check the 'deprecated' field in doc. We expect a dict potentially with 'why', 'version', and 'alternative' fields
# examples = module_map[module]['examples']
# print('\n\n%s: type of examples: %s\n' % (module, type(examples)))
# if examples and not isinstance(examples, (str, unicode, list)):
# raise TypeError('module %s examples is wrong type (%s): %s' % (module, type(examples), examples))
# use 'examples' for 'plainexamples' if 'examples' is a string
if isinstance(module_map[module]['examples'], string_types):
doc['plainexamples'] = module_map[module]['examples'] # plain text
else:
doc['plainexamples'] = ''
doc['metadata'] = module_map[module]['metadata']
display.vvvvv(pp.pformat(module_map[module]))
if module_map[module]['returndocs']:
doc['returndocs'] = module_map[module]['returndocs']
process_returndocs(doc['returndocs'])
else:
doc['returndocs'] = None
doc['author'] = doc.get('author', ['UNKNOWN'])
if isinstance(doc['author'], string_types):
doc['author'] = [doc['author']]
display.v('about to template %s' % module)
display.vvvvv(pp.pformat(doc))
try:
text = templates['plugin'].render(doc)
except Exception as e:
display.warning(msg="Could not parse %s due to %s" % (module, e))
continue
if LooseVersion(jinja2.__version__) < LooseVersion('2.10'):
# jinja2 < 2.10's indent filter indents blank lines. Cleanup
text = re.sub(' +\n', '\n', text)
write_data(text, output_dir, outputname, module)
# Create deprecation stub pages for deprecated aliases
if module_map[module]['aliases']:
for alias in module_map[module]['aliases']:
if alias in module_map[module]['aliases_deprecated']:
doc['alias'] = alias
display.v('about to template %s (deprecation alias %s)' % (module, alias))
display.vvvvv(pp.pformat(doc))
try:
text = templates['plugin_deprecation_stub'].render(doc)
except Exception as e:
display.warning(msg="Could not parse %s (deprecation alias %s) due to %s" % (module, alias, e))
continue
if LooseVersion(jinja2.__version__) < LooseVersion('2.10'):
# jinja2 < 2.10's indent filter indents blank lines. Cleanup
text = re.sub(' +\n', '\n', text)
write_data(text, output_dir, outputname, alias)
def process_categories(plugin_info, categories, templates, output_dir, output_name, plugin_type):
# For some reason, this line is changing plugin_info:
# text = templates['list_of_CATEGORY_modules'].render(template_data)
# To avoid that, make a deepcopy of the data.
# We should track that down and fix it at some point in the future.
plugin_info = deepcopy(plugin_info)
for category in sorted(categories.keys()):
module_map = categories[category]
category_filename = output_name % category
display.display("*** recording category %s in %s ***" % (category, category_filename))
# start a new category file
category_name = category.replace("_", " ")
category_title = category_name.title()
subcategories = dict((k, v) for k, v in module_map.items() if k != '_modules')
template_data = {'title': category_title,
'category_name': category_name,
'category': module_map,
'subcategories': subcategories,
'module_info': plugin_info,
'plugin_type': plugin_type
}
text = templates['list_of_CATEGORY_modules'].render(template_data)
write_data(text, output_dir, category_filename)
def process_support_levels(plugin_info, categories, templates, output_dir, plugin_type):
    supported_by = {'Ansible Core Team': {'slug': 'core_supported',
                                          'modules': [],
                                          'output': 'core_maintained.rst',
                                          'blurb': "These are :doc:`modules maintained by the"
                                          " Ansible Core Team<core_maintained>` and will always ship"
                                          " with Ansible itself."},
                    'Ansible Network Team': {'slug': 'network_supported',
                                             'modules': [],
                                             'output': 'network_maintained.rst',
                                             'blurb': "These are :doc:`modules maintained by the"
                                             " Ansible Network Team<network_maintained>` in"
                                             " a relationship similar to how the Ansible Core Team"
                                             " maintains the Core modules."},
                    'Ansible Partners': {'slug': 'certified_supported',
                                         'modules': [],
                                         'output': 'partner_maintained.rst',
                                         'blurb': """
Some examples of :doc:`Certified Modules<partner_maintained>` are those submitted by other
companies. Maintainers of these types of modules must watch for any issues reported or pull requests
raised against the module.
The Ansible Core Team will review all modules becoming certified. Core committers will review
proposed changes to existing Certified Modules once the community maintainers of the module have
approved the changes. Core committers will also ensure that any issues that arise due to Ansible
engine changes will be remediated. Also, it is strongly recommended (but not presently required)
for these types of modules to have unit tests.
These modules are currently shipped with Ansible, but might be shipped separately in the future.
"""},
                    'Ansible Community': {'slug': 'community_supported',
                                          'modules': [],
                                          'output': 'community_maintained.rst',
                                          'blurb': """
These are :doc:`modules maintained by the Ansible Community<community_maintained>`. They **are
not** supported by the Ansible Core Team or by companies/partners associated to the module.
They are still fully usable, but the response rate to issues is purely up to the community. Best
effort support will be provided but is not covered under any support contracts.
These modules are currently shipped with Ansible, but will most likely be shipped separately in the future.
"""},
                    }

    # only gen support pages for modules for now, need to split and namespace templates and generated docs
    if plugin_type == 'plugins':
        return

    # Separate the modules by support_level
    for module, info in plugin_info.items():
        if not info.get('metadata', None):
            display.warning('no metadata for %s' % module)
            continue
        if info['metadata']['supported_by'] == 'core':
            supported_by['Ansible Core Team']['modules'].append(module)
        elif info['metadata']['supported_by'] == 'network':
            supported_by['Ansible Network Team']['modules'].append(module)
        elif info['metadata']['supported_by'] == 'certified':
            supported_by['Ansible Partners']['modules'].append(module)
        elif info['metadata']['supported_by'] == 'community':
            supported_by['Ansible Community']['modules'].append(module)
        else:
            raise AnsibleError('Unknown supported_by value: %s' % info['metadata']['supported_by'])

    # Render the module lists based on category and subcategory
    for maintainers, data in supported_by.items():
        subcategories = {}
        subcategories[''] = {}
        for module in data['modules']:
            new_cat = plugin_info[module]['sub_category']
            category = plugin_info[module]['primary_category']
            if category not in subcategories:
                subcategories[category] = {}
                subcategories[category][''] = {}
                subcategories[category]['']['_modules'] = []
            if new_cat not in subcategories[category]:
                subcategories[category][new_cat] = {}
                subcategories[category][new_cat]['_modules'] = []
            subcategories[category][new_cat]['_modules'].append(module)

        template_data = {'maintainers': maintainers,
                         'subcategories': subcategories,
                         'modules': data['modules'],
                         'slug': data['slug'],
                         'module_info': plugin_info,
                         'plugin_type': plugin_type
                         }
        text = templates['support_list'].render(template_data)
        write_data(text, output_dir, data['output'])
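
As a rough sketch of the nested mapping that the loop above hands to the support_list template (module and category names here are hypothetical, not from a real run):

# Hypothetical shape of `subcategories` for one support level, assuming a module
# 'unarchive' whose primary_category is 'files' and sub_category is 'archive'.
subcategories = {
    '': {},
    'files': {
        '': {'_modules': []},                    # modules with no sub_category
        'archive': {'_modules': ['unarchive']},  # grouped by sub_category
    },
}
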
def validate_options(options):
    ''' validate option parser options '''

    if not options.module_dir:
        sys.exit("--module-dir is required")
    if not os.path.exists(options.module_dir):
        sys.exit("--module-dir does not exist: %s" % options.module_dir)
    if not options.template_dir:
        sys.exit("--template-dir must be specified")
class DocumentPlugins(Command):
    name = 'document-plugins'

    @classmethod
    def init_parser(cls, add_parser):
        parser = add_parser(cls.name, description='Generate module documentation from metadata')

        parser.add_argument("-A", "--ansible-version", action="store", dest="ansible_version",
                            default="unknown", help="Ansible version number")
        parser.add_argument("-M", "--module-dir", action="store", dest="module_dir",
                            default=MODULEDIR, help="Ansible library path")
        parser.add_argument("-P", "--plugin-type", action="store", dest="plugin_type",
                            default='module', help="The type of plugin (module, lookup, etc)")
        parser.add_argument("-T", "--template-dir", action="append", dest="template_dir",
                            help="directory containing Jinja2 templates")
        parser.add_argument("-t", "--type", action='store', dest='type', choices=['rst'],
                            default='rst', help="Document type")
        parser.add_argument("-o", "--output-dir", action="store", dest="output_dir", default=None,
                            help="Output directory for module files")
        parser.add_argument("-I", "--includes-file", action="store", dest="includes_file",
                            default=None, help="Create a file containing list of processed modules")
        parser.add_argument("-l", "--limit-to-modules", '--limit-to', action="store",
                            dest="limit_to", default=None, help="Limit building module documentation"
                            " to comma-separated list of plugins. Specify non-existing plugin name"
                            " for no plugins.")
        parser.add_argument('-V', action='version', help='Show version number and exit')
        parser.add_argument('-v', '--verbose', dest='verbosity', default=0, action="count",
                            help="verbose mode (increase number of 'v's for more)")
    @staticmethod
    def main(args):
        if not args.template_dir:
            args.template_dir = ["hacking/templates"]
        validate_options(args)
        display.verbosity = args.verbosity
        plugin_type = args.plugin_type

        display.display("Evaluating %s files..." % plugin_type)

        # prep templating
        templates = jinja2_environment(args.template_dir, args.type, plugin_type)

        # set file/directory structure
        if plugin_type == 'module':
            # trim trailing s off of plugin_type for plugin_type=='modules'. ie 'copy_module.rst'
            outputname = '%s_' + '%s.rst' % plugin_type
            output_dir = args.output_dir
        else:
            # for plugins, just use 'ssh.rst' vs 'ssh_module.rst'
            outputname = '%s.rst'
            output_dir = '%s/plugins/%s' % (args.output_dir, plugin_type)

        display.vv('output name: %s' % outputname)
        display.vv('output dir: %s' % output_dir)

        # Convert passed-in limit_to to None or list of modules.
        if args.limit_to is not None:
            args.limit_to = [s.lower() for s in args.limit_to.split(",")]

        plugin_info, categories = get_plugin_info(args.module_dir, limit_to=args.limit_to, verbose=(args.verbosity > 0))

        categories['all'] = {'_modules': plugin_info.keys()}

        if display.verbosity >= 3:
            display.vvv(pp.pformat(categories))
        if display.verbosity >= 5:
            display.vvvvv(pp.pformat(plugin_info))

        # Transform the data
        if args.type == 'rst':
            display.v('Generating rst')
            for key, record in plugin_info.items():
                display.vv(key)
                if display.verbosity >= 5:
                    display.vvvvv(pp.pformat(('record', record)))
                if record.get('doc', None):
                    # Guard against a missing short_description before stripping the
                    # trailing period; calling .rstrip() on None would raise.
                    short_desc = record['doc']['short_description']
                    if short_desc is None:
                        display.warning('short_description for %s is None' % key)
                        short_desc = ''
                    record['doc']['short_description'] = rst_ify(short_desc.rstrip('.'))
        if plugin_type == 'module':
            display.v('Generating Categories')
            # Write module master category list
            category_list_text = templates['category_list'].render(categories=sorted(categories.keys()))
            category_index_name = '%ss_by_category.rst' % plugin_type
            write_data(category_list_text, output_dir, category_index_name)

        # Render all the individual plugin pages
        display.v('Generating plugin pages')
        process_plugins(plugin_info, templates, outputname, output_dir, args.ansible_version, plugin_type)

        # Render all the categories for modules
        if plugin_type == 'module':
            display.v('Generating Category lists')
            category_list_name_template = 'list_of_%s_' + '%ss.rst' % plugin_type
            process_categories(plugin_info, categories, templates, output_dir, category_list_name_template, plugin_type)

            # Render all the categories for modules
            process_support_levels(plugin_info, categories, templates, output_dir, plugin_type)

        return 0
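
A small worked example of the output-name patterns chosen in main() above (the plugin names are illustrative):

plugin_type = 'module'
outputname = '%s_' + '%s.rst' % plugin_type   # %-formatting binds first -> '%s_module.rst'
print(outputname % 'copy')                    # -> 'copy_module.rst'

plugin_type = 'connection'
outputname = '%s.rst'                         # non-module plugins drop the type suffix
print(outputname % 'ssh')                     # -> 'ssh.rst', written under plugins/connection/
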


@ -1,100 +0,0 @@
# Copyright: (c) 2019, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import re

try:
    from html import escape as html_escape
except ImportError:
    # html.escape was added in Python 3.2; fall back to cgi.escape on older Pythons
    import cgi

    def html_escape(text, quote=True):
        return cgi.escape(text, quote)

from jinja2.runtime import Undefined

from ansible.errors import AnsibleError
from ansible.module_utils._text import to_text
from ansible.module_utils.six import string_types
_ITALIC = re.compile(r"I\(([^)]+)\)")
_BOLD = re.compile(r"B\(([^)]+)\)")
_MODULE = re.compile(r"M\(([^)]+)\)")
_URL = re.compile(r"U\(([^)]+)\)")
_LINK = re.compile(r"L\(([^)]+), *([^)]+)\)")
_CONST = re.compile(r"C\(([^)]+)\)")
_RULER = re.compile(r"HORIZONTALLINE")
def html_ify(text):
    ''' convert symbols like I(this is in italics) to valid HTML '''

    if not isinstance(text, string_types):
        text = to_text(text)

    t = html_escape(text)
    t = _ITALIC.sub(r"<em>\1</em>", t)
    t = _BOLD.sub(r"<b>\1</b>", t)
    t = _MODULE.sub(r"<span class='module'>\1</span>", t)
    t = _URL.sub(r"<a href='\1'>\1</a>", t)
    t = _LINK.sub(r"<a href='\2'>\1</a>", t)
    t = _CONST.sub(r"<code>\1</code>", t)
    t = _RULER.sub(r"<hr/>", t)

    return t.strip()


def documented_type(text):
    ''' Convert any python type to a type for documentation '''

    if isinstance(text, Undefined):
        return '-'
    if text == 'str':
        return 'string'
    if text == 'bool':
        return 'boolean'
    if text == 'int':
        return 'integer'
    if text == 'dict':
        return 'dictionary'
    return text


# The max filter was added in Jinja2-2.10. Until we can require that version, use this
def do_max(seq):
    return max(seq)


def rst_ify(text):
    ''' convert symbols like I(this is in italics) to valid restructured text '''

    try:
        t = _ITALIC.sub(r"*\1*", text)
        t = _BOLD.sub(r"**\1**", t)
        t = _MODULE.sub(r":ref:`\1 <\1_module>`", t)
        t = _LINK.sub(r"`\1 <\2>`_", t)
        t = _URL.sub(r"\1", t)
        t = _CONST.sub(r"``\1``", t)
        t = _RULER.sub(r"------------", t)
    except Exception as e:
        raise AnsibleError("Could not process (%s) : %s" % (text, e))

    return t


def rst_fmt(text, fmt):
    ''' helper for Jinja2 to do format strings '''

    return fmt % (text)


def rst_xline(width, char="="):
    ''' return a restructured text line of a given length '''

    return char * width
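
To make the markup conversion above concrete, here is a hedged sketch of what these filters produce. The sample string is invented, and registering the filters on a plain Jinja2 Environment only mirrors how such filters are typically wired up, not the project's exact jinja2_environment helper:

from jinja2 import Environment

env = Environment()
# Register the filters defined above under the names templates would use.
env.filters['rst_ify'] = rst_ify
env.filters['html_ify'] = html_ify
env.filters['documented_type'] = documented_type
env.filters['max'] = do_max   # backfill for Jinja2 < 2.10

sample = "Use M(copy) with C(src) set, see U(https://docs.ansible.com) and B(read this first)."
print(rst_ify(sample))
# Use :ref:`copy <copy_module>` with ``src`` set, see https://docs.ansible.com and **read this first**.
print(html_ify(sample))
# Use <span class='module'>copy</span> with <code>src</code> set,
# see <a href='https://docs.ansible.com'>https://docs.ansible.com</a> and <b>read this first</b>.
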


@ -11,7 +11,7 @@ import sys
 def main():
     base_dir = os.getcwd() + os.path.sep
     docs_dir = os.path.abspath('docs/docsite')
-    cmd = ['make', 'singlehtmldocs']
+    cmd = ['make', 'base_singlehtmldocs']
     sphinx = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=docs_dir)
     stdout, stderr = sphinx.communicate()


@ -3,3 +3,4 @@ pyyaml
 sphinx
 sphinx-notfound-page
 straight.plugin
+antsibull


@ -29,9 +29,6 @@ hacking/build_library/build_ansible/command_plugins/dump_keywords.py compile-3.5
 hacking/build_library/build_ansible/command_plugins/generate_man.py compile-2.6!skip # docs build only, 3.6+ required
 hacking/build_library/build_ansible/command_plugins/generate_man.py compile-2.7!skip # docs build only, 3.6+ required
 hacking/build_library/build_ansible/command_plugins/generate_man.py compile-3.5!skip # docs build only, 3.6+ required
-hacking/build_library/build_ansible/command_plugins/plugin_formatter.py compile-2.6!skip # docs build only, 3.6+ required
-hacking/build_library/build_ansible/command_plugins/plugin_formatter.py compile-2.7!skip # docs build only, 3.6+ required
-hacking/build_library/build_ansible/command_plugins/plugin_formatter.py compile-3.5!skip # docs build only, 3.6+ required
 hacking/build_library/build_ansible/command_plugins/porting_guide.py compile-2.6!skip # release process only, 3.6+ required
 hacking/build_library/build_ansible/command_plugins/porting_guide.py compile-2.7!skip # release process only, 3.6+ required
 hacking/build_library/build_ansible/command_plugins/porting_guide.py compile-3.5!skip # release process only, 3.6+ required