Docs: User guide overhaul, part 2 (#65474)

Alicia Cozine 2019-12-12 12:35:17 -06:00 committed by Sandra McCann
parent 7af98f9724
commit 860cacc54f
17 changed files with 1218 additions and 1551 deletions

View file

@ -10,7 +10,7 @@ Common options are:
* ``become`` and ``become_method`` as described in :ref:`privilege_escalation`.
* ``network_os`` - set to match the network platform you are communicating with. See the :ref:`platform-specific <platform_options>` pages.
* ``remote_user`` as described in :ref:`connection_set_user`.
* Timeout options - ``persistent_command_timeout``, ``persistent_connect_timeout``, and ``timeout``.
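As a quick sketch (not a definitive recipe), these options might be collected in a ``group_vars`` file for a group of network devices. The platform, user, and timeout values here are illustrative and depend on your environment:

.. code-block:: yaml

   # group_vars/eos_switches.yml -- illustrative values only
   ansible_connection: network_cli
   ansible_network_os: eos
   ansible_user: my_network_user
   ansible_become: yes
   ansible_become_method: enable
   ansible_command_timeout: 60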
.. _timeout_options:

View file

@ -63,6 +63,9 @@ To do something as the ``nobody`` user when the shell is nologin:
    become_user: nobody
    become_flags: '-s /bin/sh'
To specify a password for sudo, run ``ansible-playbook`` with ``--ask-become-pass`` (``-K`` for short).
If you run a playbook utilizing ``become`` and the playbook seems to hang, it is most likely stuck at the privilege escalation prompt. Stop it with ``CTRL-c``, then execute the playbook with ``-K`` and the appropriate password.
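For example, a minimal invocation (the playbook name is illustrative)::

    $ ansible-playbook my_playbook.yml -K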
Become connection variables
---------------------------

View file

@ -11,8 +11,44 @@ ControlPersist and paramiko
By default, Ansible uses native OpenSSH, because it supports ControlPersist (a performance feature), Kerberos, and options in ``~/.ssh/config`` such as Jump Host setup. If your control machine uses an older version of OpenSSH that does not support ControlPersist, Ansible will fall back to a Python SSH implementation called 'paramiko'.
SSH key setup
-------------
.. _connection_set_user:
Setting a remote user
---------------------
By default, Ansible connects to all remote devices with the user name you are using on the control node. If that user name does not exist on a remote device, you can set a different user name for the connection. If you just need to do some tasks as a different user, look at :ref:`become`. You can set the connection user in a playbook:
.. code-block:: yaml
   ---
   - name: update webservers
     hosts: webservers
     remote_user: admin

     tasks:
     - name: thing to do first in this playbook
       . . .
as a host variable in inventory:
.. code-block:: text
other1.example.com ansible_connection=ssh ansible_user=myuser
other2.example.com ansible_connection=ssh ansible_user=myotheruser
or as a group variable in inventory:
.. code-block:: yaml
   cloud:
     hosts:
       cloud1: my_backup.cloud.com
       cloud2: my_backup2.cloud.com
     vars:
       ansible_user: admin
Setting up SSH keys
-------------------
By default, Ansible assumes you are using SSH keys to connect to remote machines. SSH keys are encouraged, but you can use password authentication if needed with the ``--ask-pass`` option. If you need to provide a password for :ref:`privilege escalation <become>` (sudo, pbrun, etc.), use ``--ask-become-pass``.
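For example, a minimal sketch of password-based runs (the playbook name is illustrative)::

    $ ansible-playbook playbook.yml --ask-pass
    $ ansible-playbook playbook.yml --ask-become-pass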
@ -51,8 +87,8 @@ You can specify localhost explicitly by adding this to your inventory file:
.. _host_key_checking_on:
Managing host key checking
--------------------------
Ansible enables host key checking by default. Checking host keys guards against server spoofing and man-in-the-middle attacks, but it does require some maintenance.
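If you understand the implications and decide to disable this behavior, one way is a setting in your Ansible configuration file; a minimal sketch::

    # ansible.cfg
    [defaults]
    host_key_checking = False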

View file

@ -21,6 +21,7 @@ This guide covers how to work with Ansible, including using the command line, wo
playbooks
become
vault
sample_setup
modules
../plugins/plugins
intro_bsd

View file

@ -661,6 +661,8 @@ For a full list with available plugins and examples, see :ref:`connection_plugin
Inventory setup examples
========================
See also :ref:`sample_setup`, which shows inventory along with playbooks and other Ansible artifacts.
.. _inventory_setup-per_environment:
Example: One inventory per environment

View file

@ -11,30 +11,28 @@ The advanced YAML syntax examples on this page give you more control over the da
.. _unsafe_strings:
Unsafe or raw strings
=====================
When handling values returned by lookup plugins, Ansible uses a data type called ``unsafe`` to block templating. Marking data as unsafe prevents malicious users from abusing Jinja2 templates to execute arbitrary code on target machines. The Ansible implementation ensures that unsafe values are never templated. It is more comprehensive than escaping Jinja2 with ``{% raw %} ... {% endraw %}`` tags.
You can use the same ``unsafe`` data type in variables you define, to prevent templating errors and information disclosure. You can mark values supplied by :ref:`vars_prompts<unsafe_prompts>` as unsafe. You can also use ``unsafe`` in playbooks. The most common use cases include passwords that allow special characters like ``{`` or ``%``, and JSON arguments that look like templates but should not be templated. For example:
.. code-block:: yaml
   ---
   mypassword: !unsafe 234%234{435lkj{{lkjsdf
In a playbook::
    ---
    hosts: all
    vars:
        my_unsafe_variable: !unsafe 'unsafe % value'
    tasks:
        ...
For complex variables such as hashes or arrays, use ``!unsafe`` on the individual elements::
---
my_unsafe_array:

View file

@ -1,54 +1,43 @@
.. _playbooks_async:
Asynchronous actions and polling
================================
By default Ansible runs tasks synchronously, holding the connection to the remote node open until the action is completed. Within a playbook, this means each task blocks the next: subsequent tasks will not run until the current task completes. This behavior can create challenges. For example, a task may take longer to complete than the SSH session allows, causing a timeout. Or you may want a long-running process to run in the background while you perform other tasks concurrently. Asynchronous mode lets you control how long-running tasks execute.
.. contents::
   :local:
Asynchronous ad-hoc tasks
-------------------------
You can execute long-running operations in the background with :ref:`ad-hoc tasks <intro_adhoc>`. For example, to execute ``long_running_operation`` asynchronously in the background, with a timeout (``-B``) of 3600 seconds, and without polling (``-P``)::
$ ansible all -B 3600 -P 0 -a "/usr/bin/long_running_operation --do-stuff"
To check on the job status later, use the ``async_status`` module, passing it the job ID that was returned when you ran the original job in the background::
$ ansible web1.example.com -m async_status -a "jid=488359678239.2844"
Ansible can also check on the status of your long-running job automatically with polling. In most cases, Ansible will keep the connection to your remote node open between polls. To run for 30 minutes and poll for status every 60 seconds::
$ ansible all -B 1800 -P 60 -a "/usr/bin/long_running_operation --do-stuff"
Poll mode is smart so all jobs will be started before polling begins on any machine. Be sure to use a high enough ``--forks`` value if you want to get all of your jobs started very quickly. After the time limit (in seconds) runs out (``-B``), the process on the remote nodes will be terminated.
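For example, a sketch of starting background operations across many hosts with a larger forks value (the number is arbitrary)::

    $ ansible all -B 1800 -P 60 -f 30 -a "/usr/bin/long_running_operation --do-stuff"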
Asynchronous mode is best suited to long-running shell commands or software upgrades. Running the copy module asynchronously, for example, does not do a background file transfer.
Asynchronous playbook tasks
---------------------------
:ref:`Playbooks <working_with_playbooks>` also support asynchronous mode and polling, with a simplified syntax. You can use asynchronous mode in playbooks to avoid connection timeouts or to avoid blocking subsequent tasks. The behavior of asynchronous mode in a playbook depends on the value of `poll`.
Avoid connection timeouts: poll > 0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you want to set a longer timeout limit for a certain task in your playbook, use ``async`` with ``poll`` set to a positive value. Ansible will still block the next task in your playbook, waiting until the async task either completes, fails or times out. However, the task will only time out if it exceeds the timeout limit you set with the ``async`` parameter.
To avoid timeouts on a task, specify its maximum runtime and how frequently you would like to poll for status::
---
@ -63,6 +52,7 @@ poll value is set by the ``DEFAULT_POLL_INTERVAL`` setting if you do not specify
poll: 5
.. note::
   The default poll value is set by the :ref:`DEFAULT_POLL_INTERVAL` setting.
   There is no default for the async time limit. If you leave off the
   'async' keyword, the task runs synchronously, which is Ansible's
   default.
@ -72,21 +62,12 @@ poll value is set by the ``DEFAULT_POLL_INTERVAL`` setting if you do not specify
task when run in check mode. See :ref:`check_mode_dry` on how to
skip a task in check mode.
Run tasks concurrently: poll = 0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you want to run multiple tasks in a playbook concurrently, use ``async`` with ``poll`` set to 0. When you set ``poll: 0``, Ansible starts the task and immediately moves on to the next task without waiting for a result. Each async task runs until it either completes, fails or times out (runs longer than its ``async`` value). The playbook run ends without checking back on async tasks.
To run a playbook task asynchronously::
---
@ -101,19 +82,13 @@ You may run a task asynchronously by specifying a poll value of 0::
poll: 0
.. note::
   Do not specify a poll value of 0 with operations that require exclusive locks (such as yum transactions) if you expect to run other commands later in the playbook against those same resources.
.. note::
   Using a higher value for ``--forks`` will result in kicking off asynchronous tasks even faster. This also increases the efficiency of polling.
If you need a synchronization point with an async task, you can register it to obtain its job ID and use the :ref:`async_status <async_status_module>` module to observe it in a later task. For example::
    ---
    # Requires ansible 1.8+
    - name: 'YUM - async task'
      yum:
        name: docker-io
@ -134,8 +109,7 @@ following::
"check on it later" task to fail because the temporary status file that
the ``async_status:`` is looking for will not have been written or no longer exist
To run multiple asynchronous tasks while limiting the number of tasks running concurrently::
#####################
# main.yml
@ -176,6 +150,8 @@ of tasks running concurrently, you can do it this way::
.. seealso::
:ref:`playbooks_strategies`
Options for controlling playbook execution
:ref:`playbooks_intro`
An introduction to playbooks
`User Mailing List <https://groups.google.com/group/ansible-devel>`_

View file

@ -1,398 +1,113 @@
.. _playbooks_tips_and_tricks:
.. _playbooks_best_practices:
Tips and tricks
***************
These tips and tricks have helped us optimize our Ansible usage, and we offer them here as suggestions. We hope they will help you organize content, write playbooks, maintain inventory, and execute Ansible. Ultimately, though, you should use Ansible in the way that makes the most sense for your organization and your goals.
.. contents::
   :local:
General tips
============
These concepts apply to all Ansible activities and artifacts.
Keep it simple
--------------
Whenever you can, do things simply. Use advanced features only when necessary, and select the feature that best matches your use case. For example, you will probably not need ``vars``, ``vars_files``, ``vars_prompt`` and ``--extra-vars`` all at once, while also using an external inventory file. If something feels complicated, it probably is. Take the time to look for a simpler solution.
Use version control
-------------------
Keep your playbooks, roles, inventory, and variables files in git or another version control system and make commits to the repository when you make changes. Version control gives you an audit trail describing when and why you changed the rules that automate your infrastructure.
Playbook tips
=============
These tips help make playbooks and roles easier to read, maintain, and debug.
Use whitespace
--------------
Generous use of whitespace, for example, a blank line before each block or task, makes a playbook easy to scan.
Always name tasks
-----------------
Task names are optional, but extremely useful. In its output, Ansible shows you the name of each task it runs. Choose names that describe what each task does and why.
Always mention the state
------------------------
For many modules, the 'state' parameter is optional. Different modules have different default settings for 'state', and some modules support several 'state' settings. Explicitly setting 'state=present' or 'state=absent' makes playbooks and roles clearer.
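For example, a minimal sketch of a task with the state made explicit (the package is illustrative)::

    - name: ensure ntp is installed
      yum:
        name: ntp
        state: present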
Use comments
------------
Even with task names and explicit state, sometimes a part of a playbook or role (or inventory/variable file) needs more explanation. Adding a comment (any line starting with '#') helps others (and possibly yourself in future) understand what a play or task (or variable setting) does, how it does it, and why.
Inventory tips
==============
These tips help keep your inventory well organized.
Use dynamic inventory with clouds
---------------------------------
With cloud providers and other systems that maintain canonical lists of your infrastructure, use :ref:`dynamic inventory <intro_dynamic_inventory>` to retrieve those lists instead of manually updating static inventory files. With cloud resources, you can use tags to differentiate production and staging environments.
Group inventory by function
---------------------------
A system can be in multiple groups. See :ref:`intro_inventory` and :ref:`intro_patterns`. If you create groups named for the function of the nodes in the group, for example *webservers* or *dbservers*, your playbooks can target machines based on function. You can assign function-specific variables using the group variable system, and design Ansible roles to handle function-specific use cases. See :ref:`playbooks_reuse_roles`.
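For example, a sketch of function-based groups in an INI-format inventory (the hostnames are illustrative)::

    [webservers]
    www-1.example.com
    www-2.example.com

    [dbservers]
    db-1.example.com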
Separate production and staging inventory
-----------------------------------------
You can keep your production environment separate from development, test, and staging environments by using separate inventory files or directories for each environment. This way you choose, with the ``-i`` option, which environment you are targeting. Keeping all your environments in one file can lead to surprises!
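For example, assuming inventory files named ``staging`` and ``production``::

    $ ansible-playbook -i staging site.yml
    $ ansible-playbook -i production site.yml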
.. _best_practices_for_variables_and_vaults:
Keep vaulted variables safely visible
-------------------------------------
You should encrypt sensitive or secret variables with Ansible Vault. However, encrypting the variable names as well as the variable values makes it hard to find the source of the values. You can keep the names of your variables accessible (by ``grep``, for example) without exposing any secrets by adding a layer of indirection:
#. Create a ``group_vars/`` subdirectory named after the group.
#. Inside this subdirectory, create two files named ``vars`` and ``vault``.
#. In the ``vars`` file, define all of the variables needed, including any sensitive ones.
#. Copy all of the sensitive variables over to the ``vault`` file and prefix these variables with ``vault_``.
#. Adjust the variables in the ``vars`` file to point to the matching ``vault_`` variables using jinja2 syntax: ``db_password: {{ vault_db_password }}``.
#. Encrypt the ``vault`` file to protect its contents.
#. Use the variable name from the ``vars`` file in your playbooks.
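For example, following the steps above, the two files might look like this (the group name, variable names, and values are all illustrative)::

    # group_vars/dbservers/vars
    db_user: admin
    db_password: "{{ vault_db_password }}"

    # group_vars/dbservers/vault -- encrypt this file with ansible-vault
    vault_db_password: P@ssw0rd!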
When running a playbook, Ansible finds the variables in the unencrypted file, which pulls the sensitive variable values from the encrypted file. There is no limit to the number of variable and vault files or their names.
Execution tricks
================
These tips apply to using Ansible, rather than to Ansible artifacts.
Try it in staging first
-----------------------
Testing changes in a staging environment before rolling them out in production is always a great idea. Your environments need not be the same size and you can use group variables to control the differences between those environments.
Update in batches
-----------------
Use the 'serial' keyword to control how many machines you update at once in the batch. See :ref:`playbooks_delegation`.
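For example, a sketch of a play that updates three machines at a time (the group and value are illustrative)::

    - hosts: webservers
      serial: 3
      tasks:
        ...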
.. _os_variance:
Handling OS and distro differences
----------------------------------
Group variables files and the ``group_by`` module work together to help Ansible execute across a range of operating systems and distributions that require different settings, packages, and tools. The ``group_by`` module creates a dynamic group of hosts matching certain criteria. This group does not need to be defined in the inventory file. This approach lets you execute different tasks on different operating systems or distributions. For example::
---
@ -408,24 +123,22 @@ This makes a dynamic group of hosts matching certain criteria, even if that grou
    - hosts: os_CentOS
      gather_facts: False
      tasks:
        # tasks that only happen on CentOS go in this play
The first play categorizes all systems into dynamic groups based on the operating system name. Later plays can use these groups as patterns on the ``hosts`` line. You can also add group-specific settings in group vars files. All three names must match: the name created by the ``group_by`` task, the name of the pattern in subsequent plays, and the name of the group vars file. For example::
---
# file: group_vars/all
asdf: 10
---
# file: group_vars/os_CentOS.yml
asdf: 42
In this example, CentOS machines get the value of '42' for asdf, but other machines get '10'.
You can use the same setup with ``include_vars`` when you only need OS-specific variables, not tasks::
- hosts: all
tasks:
@ -434,66 +147,7 @@ Alternatively, if only variables are needed::
- debug:
var: asdf
This pulls in variables from the group_vars/os_CentOS.yml file.
.. seealso::

View file

@ -1,10 +1,18 @@
.. _playbooks_blocks:
******
Blocks
******
Blocks create logical groups of tasks. Blocks also offer ways to handle task errors, similar to exception handling in many programming languages.
.. contents::
   :local:
Grouping tasks with blocks
==========================
All tasks in a block inherit directives applied at the block level. Most of what you can apply to a single task (with the exception of loops) can be applied at the block level, so blocks make it much easier to set data or directives common to the tasks. The directive does not affect the block itself, it is only inherited by the tasks enclosed by a block. For example, a `when` statement is applied to the tasks within a block, not to the block itself.
.. code-block:: YAML
   :emphasize-lines: 3
@ -19,7 +27,6 @@ Blocks allow for logical grouping of tasks and in play error handling. Most of w
- httpd
- memcached
state: present
- name: apply the foo config template
template:
src: templates/src.j2
@ -34,19 +41,18 @@ Blocks allow for logical grouping of tasks and in play error handling. Most of w
become_user: root
ignore_errors: yes
In the example above, the 'when' condition will be evaluated before Ansible runs each of the three tasks in the block. All three tasks also inherit the privilege escalation directives, running as the root user. Finally, ``ignore_errors: yes`` ensures that Ansible continues to execute the playbook even if some of the tasks fail.
Names for tasks within blocks have been available since Ansible 2.3. We recommend using names in all tasks, within blocks or elsewhere, for better visibility into the tasks being executed when you run the playbook.
.. _block_error_handling:
Handling errors with blocks
===========================
You can control how Ansible responds to task errors using blocks with ``rescue`` and ``always`` sections.
Rescue blocks specify tasks to run when an earlier task in a block fails. This approach is similar to exception handling in many programming languages. Ansible only runs rescue blocks after a task returns a 'failed' state. Bad task definitions and unreachable hosts will not trigger the rescue block.
.. _block_rescue:
.. code-block:: YAML
@ -66,9 +72,7 @@ Blocks only deal with 'failed' status of a task. A bad task definition or an unr
- debug:
msg: 'I caught an error, can do stuff here to fix it, :-)'
You can also add an ``always`` section to a block. Tasks in the ``always`` section run no matter what the task status of the previous block is.
.. _block_always:
.. code-block:: YAML
@ -87,7 +91,7 @@ There is also an ``always`` section, that will run no matter what the task statu
- debug:
msg: "This always executes, :-)"
Together, these elements offer complex error handling.
.. code-block:: YAML
   :emphasize-lines: 2,9,16
@ -112,20 +116,16 @@ They can be added all together to do complex error handling.
- debug:
msg: "This always executes"
The tasks in the ``block`` execute normally. If any tasks in the block return ``failed``, the ``rescue`` section executes tasks to recover from the error. The ``always`` section runs regardless of the results of the ``block`` and ``rescue`` sections.
If an error occurs in the block and the rescue task succeeds, Ansible reverts the failed status of the original task for the run and continues to run the play as if the original task had succeeded. The rescued task is considered successful, and does not trigger ``max_fail_percentage`` or ``any_errors_fatal`` configurations. However, Ansible still reports a failure in the playbook statistics.
You can use blocks with ``flush_handlers`` in a rescue task to ensure that all handlers run even if an error occurs:
.. code-block:: YAML
   :emphasize-lines: 6,10
   :caption: Block run handlers in error handling
tasks:
- name: Attempt and graceful roll back demo
block:
@ -145,7 +145,7 @@ Another example is how to run handlers after an error occurred :
.. versionadded:: 2.1
Ansible provides a couple of variables for tasks in the ``rescue`` portion of a block:
ansible_failed_task
The task that returned 'failed' and triggered the rescue. For example, to get the name use ``ansible_failed_task.name``.
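For example, a sketch of reporting the failed task's name in a rescue section::

    rescue:
      - debug:
          msg: "The task that failed was {{ ansible_failed_task.name }}"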

View file

@ -1,35 +1,29 @@
.. _playbooks_error_handling:
***************************
Error handling in playbooks
***************************
When Ansible receives a non-zero return code from a command or a failure from a module, by default it stops executing on that host and continues on other hosts. However, in some circumstances you may want different behavior. Sometimes a non-zero return code indicates success. Sometimes you want a failure on one host to stop execution on all hosts. Ansible provides tools and settings to handle these situations and help you get the behavior, output, and reporting you want.
.. contents::
   :local:
.. _ignoring_failed_commands:
Ignoring failed commands
========================
By default Ansible stops executing tasks on a host when a task fails on that host. You can use ``ignore_errors`` to continue on in spite of the failure::
    - name: this will not count as a failure
      command: /bin/false
      ignore_errors: yes
The ``ignore_errors`` directive only works when the task is able to run and returns a value of 'failed'. It will not make Ansible ignore undefined variable errors, connection failures, execution issues (for example, missing packages), or syntax errors.
Ignoring unreachable host errors
================================
.. versionadded:: 2.7
@ -59,38 +53,34 @@ And at the playbook level::
.. _resetting_unreachable:
Resetting unreachable hosts
===========================
.. versionadded:: 2.2
If Ansible cannot connect to a host, it marks that host as 'UNREACHABLE' and removes it from the list of active hosts for the run. You can use `meta: clear_host_errors` to reactivate all hosts, so subsequent tasks can try to reach them again.
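A minimal sketch::

    - meta: clear_host_errors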
.. _handlers_and_failure:
Handlers and failure
====================
Ansible runs :ref:`handlers <handlers>` at the end of each play. If a task notifies a handler but
another task fails later in the play, by default the handler does *not* run on that host,
which may leave the host in an unexpected state. For example, a task could update
a configuration file and notify a handler to restart some service. If a
task later in the same play fails, the configuration file might be changed but
the service will not be restarted.
You can change this behavior with the ``--force-handlers`` command-line option,
by including ``force_handlers: True`` in a play, or by adding ``force_handlers = True``
to ansible.cfg. When handlers are forced, Ansible will run all notified handlers on
all hosts, even hosts with failed tasks. (Note that certain errors could still prevent
the handler from running, such as a host becoming unreachable.)
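For example, a sketch of forcing handlers at the play level (the hosts and tasks are illustrative)::

    - hosts: webservers
      force_handlers: True
      tasks:
        ...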
.. _controlling_what_defines_failure:
Defining failure
================
Ansible lets you define what "failure" means in each task using the ``failed_when`` conditional. As with all conditionals in Ansible, lists of multiple ``failed_when`` conditions are joined with an implicit ``and``, meaning the task only fails when *all* conditions are met. If you want to trigger a failure when any of the conditions is met, you must define the conditions in a string with an explicit ``or`` operator.
@ -108,18 +98,6 @@ or based on the return code::
register: diff_cmd
failed_when: diff_cmd.rc == 0 or diff_cmd.rc >= 2
You can also combine multiple conditions for failure. This task will fail if both conditions are true::
- name: Check if a file exists in temp and fail task if it does
@ -135,7 +113,6 @@ If you want the task to fail when only one condition is satisfied, change the ``
If you have too many conditions to fit neatly into one line, you can split it into a multi-line yaml value with ``>``::
- name: example of many failed_when conditions with OR
shell: "./myBinary"
register: ret
@ -146,16 +123,10 @@ If you have too many conditions to fit neatly into one line, you can split it in
.. _override_the_changed_result:
Defining "changed"
==================
Ansible lets you define when a particular task has "changed" a remote node using the ``changed_when`` conditional. This lets you determine, based on return codes or output, whether a change should be reported in Ansible statistics and whether a handler should be triggered or not. As with all conditionals in Ansible, lists of multiple ``changed_when`` conditions are joined with an implicit ``and``, meaning the task only reports a change when *all* conditions are met. If you want to report a change when any of the conditions is met, you must define the conditions in a string with an explicit ``or`` operator. For example::
tasks:
@ -176,12 +147,22 @@ You can also combine multiple conditions to override "changed" result::
- '"ERROR" in result.stderr'
- result.rc == 2
See :ref:`controlling_what_defines_failure` for more conditional syntax examples.
Ensuring success for command and shell
======================================
The :ref:`command <command_module>` and :ref:`shell <shell_module>` modules care about return codes, so if you have a command whose successful exit code is not zero, you may wish to do this::
    tasks:
      - name: run this command and ignore the result
        shell: /usr/bin/somecommand || /bin/true
Aborting a play on all hosts
============================
Sometimes you want a failure on a single host to abort the entire play on all hosts. If you set ``any_errors_fatal`` and a task returns an error, Ansible lets all hosts in the current batch finish the fatal task and then stops executing the play on all hosts. You can set ``any_errors_fatal`` at the play or block level::
- hosts: somehosts
any_errors_fatal: true
@ -194,30 +175,12 @@ The ``any_errors_fatal`` option will end the play and prevent any subsequent pla
- include_tasks: mytasks.yml
any_errors_fatal: true
For finer-grained control, you can use ``max_fail_percentage`` to abort the run after a given percentage of hosts has failed.
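For example, a sketch that aborts the play once more than 30% of a 10-host batch has failed (the values are illustrative)::

    - hosts: webservers
      max_fail_percentage: 30
      serial: 10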
Controlling errors in blocks
============================
You can also use blocks to define responses to task errors. This approach is similar to exception handling in many programming languages. See :ref:`block_error_handling` for details and examples.
.. seealso::

File diff suppressed because it is too large.

View file

@ -1,71 +1,110 @@
.. _about_playbooks:
.. _playbooks_intro:
******************
Intro to Playbooks
******************
Ansible Playbooks offer a repeatable, re-usable, simple configuration management and multi-machine deployment system, one that is well suited to deploying complex applications. If you need to execute a task with Ansible more than once, write a playbook and put it under source control. Then you can use the playbook to push out new configuration or confirm the configuration of remote systems. The playbooks in the `ansible-examples repository <https://github.com/ansible/ansible-examples>`_ illustrate many useful techniques. You may want to look at these in another tab as you read the documentation.
Playbooks can:
* declare configurations
* orchestrate steps of any manual ordered process, on multiple sets of machines, in a defined order
* launch tasks synchronously or :ref:`asynchronously <playbooks_async>`
.. contents::
   :local:
Playbook syntax
===============
Playbooks are expressed in YAML format with a minimum of syntax. If you are not familiar with YAML, look at our overview of :ref:`yaml_syntax` and consider installing an add-on for your text editor (see :ref:`other_tools_and_programs`) to help you write clean YAML syntax in your playbooks.
A playbook is composed of one or more 'plays' in an ordered list. The terms 'playbook' and 'play' are sports analogies. Each play executes part of the overall goal of the playbook, running one or more tasks. Each task calls an Ansible module.
Playbook execution
==================
A playbook runs in order from top to bottom. Within each play, tasks also run in order from top to bottom. Playbooks with multiple 'plays' can orchestrate multi-machine deployments, running one play on your webservers, then another play on your database servers, then a third play on your network infrastructure, and so on. At a minimum, each play defines two things:
* the managed nodes to target, using a :ref:`pattern <intro_patterns>`
* at least one task to execute
In this example, the first play targets the web servers; the second play targets the database servers::
---
- name: update web servers
hosts: webservers
remote_user: root
tasks:
- name: ensure apache is at the latest version
yum:
name: httpd
state: latest
- name: write the apache config file
template:
src: /srv/httpd.j2
dest: /etc/httpd.conf
- name: update db servers
hosts: databases
remote_user: root
tasks:
- name: ensure postgresql is at the latest version
yum:
name: postgresql
state: latest
- name: ensure that postgresql is started
service:
name: postgresql
state: started
Your playbook can include more than just a hosts line and tasks. For example, the playbook above sets a ``remote_user`` for each play. This is the user account for the SSH connection. You can add other :ref:`playbook_keywords` at the playbook, play, or task level to influence how Ansible behaves. Playbook keywords can control the :ref:`connection plugin <connection_plugins>`, whether to use :ref:`privilege escalation <become>`, how to handle errors, and more. To support a variety of environments, Ansible lets you set many of these parameters as command-line flags, in your Ansible configuration, or in your inventory. Learning the :ref:`precedence rules <general_precedence_rules>` for these sources of data will help you as you expand your Ansible ecosystem.
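For example, a minimal sketch (the host group and tasks are illustrative) that sets the ``become`` keyword at the play level and overrides it for a single task::

    ---
    - hosts: webservers
      remote_user: admin
      become: yes
      tasks:
      - name: runs with the play-level privilege escalation
        yum:
          name: httpd
          state: latest
      - name: overrides the play-level keyword
        command: /usr/bin/whoami
        become: no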
.. _tasks_list:
Task execution
--------------
By default, Ansible executes each task in order, one at a time, against all machines matched by the host pattern. Each task executes a module with specific arguments. When a task has executed on all target machines, Ansible moves on to the next task. You can use :ref:`strategies <playbooks_strategies>` to change this default behavior. Within each play, Ansible applies the same task directives to all hosts. If a task fails on a host, Ansible takes that host out of the rotation for the rest of the playbook.
When you run a playbook, Ansible returns information about connections, the ``name`` lines of all your plays and tasks, whether each task has succeeded or failed on each machine, and whether each task has made a change on each machine. At the bottom of the playbook execution, Ansible provides a summary of the nodes that were targeted and how they performed. General failures and fatal "unreachable" communication attempts are kept separate in the counts.
.. _idempotency:
Desired state and 'idempotency'
-------------------------------
Most Ansible modules check whether the desired final state has already been achieved, and exit without performing any actions if that state has been achieved, so that repeating the task does not change the final state. Modules that behave this way are often called 'idempotent.' Whether you run a playbook once, or multiple times, the outcome should be the same. However, not all playbooks and not all modules behave this way. If you are unsure, test your playbooks in a sandbox environment before running them multiple times in production.
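For example, the **command** module reruns its command on every run by default, but you can make such a task idempotent with the ``creates`` argument; a sketch (the script and path are illustrative)::

    tasks:
    - name: initialize the database only if it does not exist yet
      command: /usr/local/bin/make_database.sh
      args:
        creates: /var/lib/myapp/database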
.. _executing_a_playbook:
Running playbooks
-----------------
To run your playbook, use the :ref:`ansible-playbook` command::
ansible-playbook playbook.yml -f 10
Use the ``--verbose`` flag when running your playbook to see detailed output from successful modules as well as unsuccessful ones.
.. _handlers:
Handlers: running operations on change
======================================
Sometimes you want a task to run only when a change is made on a machine. For example, you may want to restart a service if a task updates the configuration of that service, but not if the configuration is unchanged. Ansible uses handlers to address this use case. Handlers are tasks that only run when notified. Each handler should have a globally unique name.
This playbook, ``verify-apache.yml``, contains a single play with variables, the remote user, and a handler::
---
- name: verify apache installation
hosts: webservers
vars:
http_port: 80
max_clients: 200
@ -91,291 +130,7 @@ For starters, here's a playbook, ``verify-apache.yml`` that contains just one pl
name: httpd
state: restarted
In the example above, the second task notifies the handler. A single task can notify more than one handler::
- name: template configuration file
template:
@ -384,18 +139,6 @@ change, but only if the file changes::
notify:
- restart memcached
- restart apache
Here's an example handlers section::
handlers:
- name: restart memcached
service:
@ -406,6 +149,23 @@ Here's an example handlers section::
name: apache
state: restarted
Controlling when handlers run
-----------------------------
By default, handlers run after all the tasks in a particular play have been completed. This approach is efficient, because the handler only runs once, regardless of how many tasks notify it. For example, if multiple tasks update a configuration file and notify a handler to restart Apache, Ansible only bounces Apache once to avoid unnecessary restarts.
If you need handlers to run before the end of the play, add a task to flush them using the :ref:`meta module <meta_module>`, which executes Ansible actions::
tasks:
- shell: some tasks go here
- meta: flush_handlers
- shell: some other tasks
The ``meta: flush_handlers`` task triggers any handlers that have been notified at that point in the play.
Using variables with handlers
-----------------------------
You may want your Ansible handlers to use variables. For example, if the name of a service varies slightly by distribution, you want your output to show the exact name of the restarted service for each target machine. Avoid placing variables in the name of the handler. Since handler names are templated early on, Ansible may not have a value available for a handler name like this::
handlers:
@ -428,8 +188,7 @@ Instead, place variables in the task parameters of your handler. You can load th
name: "{{ web_service_name | default('httpd') }}"
state: restarted
Handlers can also "listen" to generic topics, and tasks can notify those topics as follows::
handlers:
- name: restart memcached
@ -453,43 +212,23 @@ making it easier to share handlers among playbooks and roles (especially when us
a shared source like Galaxy).
.. note::
* Handlers always run in the order they are defined, not in the order listed in the notify-statement. This is also the case for handlers using `listen`.
* Handler names and `listen` topics live in a global namespace.
* Handler names are templatable and `listen` topics are not.
* Use unique handler names. If you trigger more than one handler with the same name, the first one(s) get overwritten. Only the last one defined will run.
* You can notify a handler defined inside a static include.
* You cannot notify a handler defined inside a dynamic include.
When using handlers within roles, note that:
* handlers notified within ``pre_tasks``, ``tasks``, and ``post_tasks`` sections are automatically flushed at the end of the section where they were notified.
* handlers notified within the ``roles`` section are automatically flushed at the end of the ``tasks`` section, but before any ``tasks`` handlers.
* handlers are play scoped and as such can be used outside of the role they are defined in.
.. _playbook_ansible-pull:
Ansible-Pull
============
If you want to invert the architecture of Ansible, so that nodes check in to a central location instead of the control node pushing configuration out to them, you can use ``ansible-pull``.
@ -503,14 +242,17 @@ Run ``ansible-pull --help`` for details.
There's also a `clever playbook <https://github.com/ansible/ansible-examples/blob/master/language_features/ansible_pull.yml>`_ available to configure ``ansible-pull`` via a crontab from push mode.
Verifying playbooks
===================
You may want to verify your playbooks to catch syntax errors and other problems before you run them. The :ref:`ansible-playbook` command offers several options for verification, including ``--check``, ``--diff``, ``--list-hosts``, ``--list-tasks``, and ``--syntax-check``. The :ref:`validate-playbook-tools` describes other tools for validating and testing playbooks.
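For example, to verify a playbook without executing it (the playbook name is illustrative)::

    ansible-playbook playbook.yml --syntax-check
    ansible-playbook playbook.yml --list-hosts
    ansible-playbook playbook.yml --list-tasks
    ansible-playbook playbook.yml --check --diff

Note that ``--check`` and ``--diff`` contact the managed hosts, but report what would change without changing anything.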
.. _linting_playbooks:
ansible-lint
------------
You can use `ansible-lint <https://docs.ansible.com/ansible-lint/index.html>`_ for detailed, Ansible-specific feedback on your playbooks before you execute them. For example, if you run ``ansible-lint`` on the playbook called ``verify-apache.yml`` near the top of this page, you should get the following results:
.. code-block:: bash
@ -521,25 +263,6 @@ For example, if you run ``ansible-lint`` on the :ref:`verify-apache.yml playbook
The `ansible-lint default rules <https://docs.ansible.com/ansible-lint/rules/default_rules.html>`_ page describes each error. For ``[403]``, the recommended fix is to change ``state: latest`` to ``state: present`` in the playbook.
.. seealso::
`ansible-lint <https://docs.ansible.com/ansible-lint/index.html>`_
@ -547,11 +270,11 @@ See :ref:`validate-playbook-tools` for a detailed list of tools you can use to v
:ref:`yaml_syntax`
Learn about YAML syntax
:ref:`playbooks_best_practices`
Tips for managing playbooks in the real world
:ref:`all_modules`
Learn about available modules
:ref:`developing_modules`
Learn to extend Ansible by writing your own modules
:ref:`intro_patterns`
Learn about how to select hosts
`GitHub examples directory <https://github.com/ansible/ansible-examples>`_

View file

@ -1,14 +1,13 @@
.. _playbooks_prompts:
**************************
Interactive input: prompts
**************************
If you want your playbook to prompt the user for certain input, add a 'vars_prompt' section. Prompting the user for variables lets you avoid recording sensitive data like passwords. In addition to security, prompts support flexibility. For example, if you use one playbook across multiple software releases, you could prompt for the particular release version.
.. contents::
:local:
Here is a most basic example::
@ -31,11 +30,9 @@ Here is a most basic example::
The user input is hidden by default but it can be made visible by setting ``private: no``.
.. note::
Prompts for individual ``vars_prompt`` variables will be skipped for any variable that is already defined through the command line ``--extra-vars`` option, or when running from a non-interactive session (such as cron or Ansible Tower). See :ref:`passing_variables_on_the_command_line`.
If you have a variable that changes infrequently, you can provide a default value that can be overridden::
vars_prompt:
@ -43,8 +40,10 @@ the default argument::
prompt: "Product release version"
default: "1.0"
Encrypting values supplied by ``vars_prompt``
---------------------------------------------
You can encrypt the entered value so you can use it, for instance, with the user module to define a password::
vars_prompt:
@ -55,7 +54,7 @@ entered value so you can use it, for instance, with the user module to define a
confirm: yes
salt_size: 7
If you have `Passlib <https://passlib.readthedocs.io/en/stable/>`_ installed, you can use any crypt scheme the library supports:
- *des_crypt* - DES Crypt
- *bsdi_crypt* - BSDi Crypt
@ -75,14 +74,13 @@ You can use any crypt scheme supported by 'Passlib':
- *scram* - SCRAM Hash
- *bsd_nthash* - FreeBSD's MCF-compatible nthash encoding
The only parameters accepted are 'salt' or 'salt_size'. You can use your own salt by defining
'salt', or have one generated automatically using 'salt_size'. By default Ansible generates a salt
of size 8.
.. versionadded:: 2.7
If you do not have Passlib installed, Ansible uses the `crypt <https://docs.python.org/2/library/crypt.html>`_ library as a fallback. Depending on your platform, at most the following four crypt schemes are supported:
- *bcrypt* - BCrypt
- *md5_crypt* - MD5 Crypt
@ -90,8 +88,12 @@ Depending on your platform at most the following crypt schemes are supported:
- *sha512_crypt* - SHA-512 Crypt
.. versionadded:: 2.8
.. _unsafe_prompts:
Allowing special characters in ``vars_prompt`` values
-----------------------------------------------------
Some special characters, such as ``{`` and ``%``, can create templating errors. If you need to accept special characters, use the ``unsafe`` option::
vars_prompt:
- name: "my_password_with_weird_chars"

View file

@ -38,12 +38,25 @@ Using keywords to control execution
-----------------------------------
Several play-level :ref:`keyword<playbook_keywords>` also affect play execution. The most common one is ``serial``, which sets a number, a percentage, or a list of numbers of hosts you want to manage at a time. Setting ``serial`` with any strategy directs Ansible to 'batch' the hosts, completing the play on the specified number or percentage of hosts before starting the next 'batch'. This is especially useful for :ref:`rolling updates<rolling_update_batch_size>`.
The ``throttle`` keyword also affects execution and can be set at the block and task level. This keyword limits the number of workers up to the maximum set with the forks setting or ``serial``. Use ``throttle`` to restrict tasks that may be CPU-intensive or interact with a rate-limiting API::
tasks:
- command: /path/to/cpu_intensive_command
throttle: 1
The ``order`` keyword controls the order in which hosts are run. Possible values for order are:
inventory:
(default) The order provided in the inventory
reverse_inventory:
The reverse of the order provided by the inventory
sorted:
Sorted alphabetically by name
reverse_sorted:
Sorted by name in reverse alphabetical order
shuffle:
Randomly ordered on each run
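A minimal sketch using the ``order`` keyword (the task simply prints each hostname)::

    - hosts: all
      order: sorted
      gather_facts: no
      tasks:
      - debug:
          var: inventory_hostname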
Other keywords that affect play execution include ``ignore_errors``, ``ignore_unreachable``, and ``any_errors_fatal``. Please note that these keywords are not strategies. They are play-level directives or options.
.. seealso::

View file

@ -1,14 +1,15 @@
.. _playbooks_templating:
*******************
Templating (Jinja2)
*******************
Ansible uses Jinja2 templating to enable dynamic expressions and access to variables. Ansible includes a lot of specialized filters and tests for templating. You can use all the standard filters and tests included in Jinja2 as well. Ansible also offers a new plugin type: :ref:`lookup_plugins`.
All templating happens on the Ansible controller **before** the task is sent and executed on the target machine. This approach minimizes the package requirements on the target (jinja2 is only required on the controller). It also limits the amount of data Ansible passes to the target machine. Ansible parses templates on the controller and passes only the information needed for each task to the target machine, instead of passing all the data on the controller and parsing it on the target.
.. contents::
:local:
.. toctree::
:maxdepth: 2
@ -21,20 +22,19 @@ Please note that all templating happens on the Ansible controller before the tas
.. _templating_now:
Get the current time
====================
.. versionadded:: 2.8
The ``now()`` Jinja2 function retrieves a Python datetime object or a string representation for the current time.
The ``now()`` function supports two arguments:
utc
Specify ``True`` to get the current time in UTC. Defaults to ``False``.
fmt
Accepts a `strftime <https://docs.python.org/3/library/datetime.html#strftime-strptime-behavior>`_ string that returns a formatted date time string.
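For example, a small sketch using both arguments in a task (the message text is illustrative)::

    - debug:
        msg: "The play started at {{ now(utc=True, fmt='%Y-%m-%d %H:%M:%S') }} UTC"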
.. seealso::

View file

@ -1,13 +1,10 @@
.. _playbooks_tests:
*****
Tests
*****
`Tests <http://jinja.pocoo.org/docs/dev/templates/#tests>`_ in Jinja are a way of evaluating template expressions and returning True or False. Jinja ships with many of these. See `builtin tests`_ in the official Jinja template documentation.
The main difference between tests and filters is that Jinja tests are used for comparisons, whereas filters are used for data manipulation; they have different applications in Jinja. Tests can also be used in list processing filters, like ``map()`` and ``select()``, to choose items in the list.
@ -15,10 +12,13 @@ Like all templating, tests always execute on the Ansible controller, **not** on
In addition to those Jinja2 tests, Ansible supplies a few more and users can easily create their own.
.. contents::
:local:
.. _test_syntax:
Test syntax
===========
`Test syntax <http://jinja.pocoo.org/docs/dev/templates/#tests>`_ varies from `filter syntax <http://jinja.pocoo.org/docs/dev/templates/#filters>`_ (``variable | filter``). Historically Ansible has registered tests as both jinja tests and jinja filters, allowing for them to be referenced using filter syntax.
@ -35,7 +35,7 @@ Such as::
.. _testing_strings:
Testing strings
===============
To match strings against a substring or a regular expression, use the ``match``, ``search`` or ``regex`` filters::
@ -63,8 +63,8 @@ To match strings against a substring or a regular expression, use the ``match``,
.. _testing_truthiness:
Testing truthiness
==================
.. versionadded:: 2.10
@ -103,8 +103,8 @@ to convert boolean indicators to actual booleans.
.. _testing_versions:
Comparing versions
==================
.. versionadded:: 1.6
@ -140,7 +140,7 @@ When using ``version`` in a playbook or role, don't use ``{{ }}`` as described i
.. _math_tests:
Set theory tests
================
.. versionadded:: 2.1
@ -162,8 +162,8 @@ To see if a list includes or is included by another list, you can use 'subset' a
.. _contains_test:
Testing if a list contains a value
==================================
.. versionadded:: 2.8
@ -196,10 +196,11 @@ The ``contains`` test is designed to work with the ``select``, ``reject``, ``sel
- debug:
msg: "{{ (lacp_groups|selectattr('interfaces', 'contains', 'em1')|first).master }}"
.. versionadded:: 2.4
Testing if a list value is True
===============================
You can use `any` and `all` to check if any or all elements in a list are true or not::
vars:
@ -220,9 +221,10 @@ You can use `any` and `all` to check if any or all elements in a list are true o
msg: "at least one is true"
when: myotherlist is any
.. _path_tests:
Testing paths
=============
.. note:: In 2.5 the following tests were renamed to remove the ``is_`` prefix
@ -256,10 +258,62 @@ The following tests can provide information about a path on the controller::
when: mypath is mount
Testing size formats
====================
The ``human_readable`` and ``human_to_bytes`` functions let you test your
playbooks to make sure you are using the right size format in your tasks, and that
you provide Byte format to computers and human-readable format to people.
Human readable
--------------
Asserts whether the given string is human readable or not.
For example::
- name: "Human Readable"
assert:
that:
- '"1.00 Bytes" == 1|human_readable'
- '"1.00 bits" == 1|human_readable(isbits=True)'
- '"10.00 KB" == 10240|human_readable'
- '"97.66 MB" == 102400000|human_readable'
- '"0.10 GB" == 102400000|human_readable(unit="G")'
- '"0.10 Gb" == 102400000|human_readable(isbits=True, unit="G")'
This would result in::
{ "changed": false, "msg": "All assertions passed" }
Human to bytes
--------------
Returns the given string in the Bytes format.
For example::
- name: "Human to Bytes"
assert:
that:
- "{{'0'|human_to_bytes}} == 0"
- "{{'0.1'|human_to_bytes}} == 0"
- "{{'0.9'|human_to_bytes}} == 1"
- "{{'1'|human_to_bytes}} == 1"
- "{{'10.00 KB'|human_to_bytes}} == 10240"
- "{{ '11 MB'|human_to_bytes}} == 11534336"
- "{{ '1.1 GB'|human_to_bytes}} == 1181116006"
- "{{'10.00 Kb'|human_to_bytes(isbits=True)}} == 10240"
This would result in::
{ "changed": false, "msg": "All assertions passed" }
.. _test_task_results:
Testing task results
====================
The following tasks are illustrative of the tests meant to check the status of tasks::
@ -293,8 +347,7 @@ The following tasks are illustrative of the tests meant to check the status of t
.. note:: From 2.1, you can also use success, failure, change, and skip so that the grammar matches, for those who need to be strict about it.
.. _builtin tests: http://jinja.palletsprojects.com/templates/#builtin-tests
.. seealso::

View file

@ -0,0 +1,285 @@
.. _sample_setup:
********************
Sample Ansible setup
********************
You have learned about playbooks, inventory, roles, and variables. This section pulls all those elements together, outlining a sample setup for automating a web service. You can find more example playbooks illustrating these patterns in our `ansible-examples repository <https://github.com/ansible/ansible-examples>`_. (Note: these examples may not use all of the features in the latest release, but they are still an excellent reference.)
The sample setup organizes playbooks, roles, inventory, and variables files by function, with tags at the play and task level for greater granularity and control. This is a powerful and flexible approach, but there are other ways to organize Ansible content. Your usage of Ansible should fit your needs, not ours, so feel free to modify this approach and organize your content as you see fit.
.. contents::
:local:
Sample directory layout
-----------------------
This layout organizes most tasks in roles, with a single inventory file for each environment and a few playbooks in the top-level directory::
production # inventory file for production servers
staging # inventory file for staging environment
group_vars/
group1.yml # here we assign variables to particular groups
group2.yml
host_vars/
hostname1.yml # here we assign variables to particular systems
hostname2.yml
library/ # if any custom modules, put them here (optional)
module_utils/ # if any custom module_utils to support modules, put them here (optional)
filter_plugins/ # if any custom filter plugins, put them here (optional)
site.yml # master playbook
webservers.yml # playbook for webserver tier
dbservers.yml # playbook for dbserver tier
tasks/ # task files included from playbooks
webservers-extra.yml # <-- avoids confusing playbook with task files
roles/
common/ # this hierarchy represents a "role"
tasks/ #
main.yml # <-- tasks file can include smaller files if warranted
handlers/ #
main.yml # <-- handlers file
templates/ # <-- files for use with the template resource
ntp.conf.j2 # <------- templates end in .j2
files/ #
bar.txt # <-- files for use with the copy resource
foo.sh # <-- script files for use with the script resource
vars/ #
main.yml # <-- variables associated with this role
defaults/ #
main.yml # <-- default lower priority variables for this role
meta/ #
main.yml # <-- role dependencies
library/ # roles can also include custom modules
module_utils/ # roles can also include custom module_utils
lookup_plugins/ # or other types of plugins, like lookup in this case
webtier/ # same kind of structure as "common" was above, done for the webtier role
monitoring/ # ""
fooapp/ # ""
.. note:: By default, Ansible assumes your playbooks are stored in one directory with roles stored in a sub-directory called ``roles/``. As you use Ansible to automate more tasks, you may want to move your playbooks into a sub-directory called ``playbooks/``. If you do this, you must configure the path to your ``roles/`` directory using the ``roles_path`` setting in ansible.cfg.
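A sketch of that setting in ``ansible.cfg`` (the path is illustrative)::

    [defaults]
    roles_path = ./roles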
Alternative directory layout
----------------------------
Alternatively you can put each inventory file with its ``group_vars``/``host_vars`` in a separate directory. This is particularly useful if your ``group_vars``/``host_vars`` don't have that much in common in different environments. The layout could look something like this::
inventories/
production/
hosts # inventory file for production servers
group_vars/
group1.yml # here we assign variables to particular groups
group2.yml
host_vars/
hostname1.yml # here we assign variables to particular systems
hostname2.yml
staging/
hosts # inventory file for staging environment
group_vars/
group1.yml # here we assign variables to particular groups
group2.yml
host_vars/
stagehost1.yml # here we assign variables to particular systems
stagehost2.yml
library/
module_utils/
filter_plugins/
site.yml
webservers.yml
dbservers.yml
roles/
common/
webtier/
monitoring/
fooapp/
This layout gives you more flexibility for larger environments, as well as a total separation of inventory variables between different environments. However, this approach is harder to maintain, because there are more files. For more information on organizing group and host variables, see :ref:`splitting_out_vars`.
.. _groups_and_hosts:
Sample group and host variables
-------------------------------
These sample group and host variables files record the variable values that apply to each machine or group of machines. For instance, the data center in Atlanta has its own NTP servers, so when setting up ntp.conf, we should use them::
---
# file: group_vars/atlanta
ntp: ntp-atlanta.example.com
backup: backup-atlanta.example.com
Similarly, the webservers have some configuration that does not apply to the database servers::
---
# file: group_vars/webservers
apacheMaxRequestsPerChild: 3000
apacheMaxClients: 900
Default values, or values that are universally true, belong in a file called group_vars/all::
---
# file: group_vars/all
ntp: ntp-boston.example.com
backup: backup-boston.example.com
If necessary, you can define specific hardware variance in systems in a host_vars file::
---
# file: host_vars/db-bos-1.example.com
foo_agent_port: 86
bar_agent_port: 99
Again, if you are using :ref:`dynamic inventory <dynamic_inventory>`, Ansible creates many dynamic groups automatically. So a tag like "class:webserver" would load in variables from the file "group_vars/ec2_tag_class_webserver" automatically.
.. _split_by_role:
Sample playbooks organized by function
--------------------------------------
With this setup, a single playbook can define all the infrastructure. The site.yml playbook imports two other playbooks, one for the webservers and one for the database servers::
---
# file: site.yml
- import_playbook: webservers.yml
- import_playbook: dbservers.yml
The webservers.yml file, also at the top level, maps the configuration of the webservers group to the roles related to the webservers group::
---
# file: webservers.yml
- hosts: webservers
roles:
- common
- webtier
With this setup, you can configure your whole infrastructure by "running" site.yml, or run a subset by running webservers.yml. This is analogous to the Ansible "--limit" parameter but a little more explicit::
ansible-playbook site.yml --limit webservers
ansible-playbook webservers.yml
.. _role_organization:
Sample task and handler files in a function-based role
------------------------------------------------------
Ansible loads any file called ``main.yml`` in a role sub-directory. This sample ``tasks/main.yml`` file is simple - it sets up NTP, but it could do more if we wanted::
---
# file: roles/common/tasks/main.yml
- name: be sure ntp is installed
yum:
name: ntp
state: present
tags: ntp
- name: be sure ntp is configured
template:
src: ntp.conf.j2
dest: /etc/ntp.conf
notify:
- restart ntpd
tags: ntp
- name: be sure ntpd is running and enabled
service:
name: ntpd
state: started
enabled: yes
tags: ntp
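The template task above references ``ntp.conf.j2`` in the role's ``templates/`` directory. A minimal sketch of that template, assuming the ``ntp`` group variable shown earlier (the ``driftfile`` line is illustrative)::

    # {{ ansible_managed }}
    driftfile /var/lib/ntp/drift
    server {{ ntp }}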
Here is an example handlers file. As a review, handlers are only fired when certain tasks report changes, and are run at the end
of each play::
---
# file: roles/common/handlers/main.yml
- name: restart ntpd
service:
name: ntpd
state: restarted
See :ref:`playbooks_reuse_roles` for more information.
.. _organization_examples:
What the sample setup enables
-----------------------------
The basic organizational structure described above enables a lot of different automation options. To reconfigure your entire infrastructure::
ansible-playbook -i production site.yml
To reconfigure NTP on everything::
ansible-playbook -i production site.yml --tags ntp
To reconfigure only the webservers::
ansible-playbook -i production webservers.yml
To reconfigure only the webservers in Boston::
ansible-playbook -i production webservers.yml --limit boston
To reconfigure only the first 10 webservers in Boston, and then the next 10::
ansible-playbook -i production webservers.yml --limit boston[0:9]
ansible-playbook -i production webservers.yml --limit boston[10:19]
The sample setup also supports basic ad-hoc commands::
ansible boston -i production -m ping
ansible boston -i production -m command -a '/sbin/reboot'
To discover what tasks would run or what hostnames would be affected by a particular Ansible command::
# confirm what task names would be run if I ran this command and said "just ntp tasks"
ansible-playbook -i production webservers.yml --tags ntp --list-tasks
# confirm what hostnames might be communicated with if I said "limit to boston"
ansible-playbook -i production webservers.yml --limit boston --list-hosts
.. _dep_vs_config:
Organizing for deployment or configuration
------------------------------------------
The sample setup models a typical configuration topology. When doing multi-tier deployments, there are going
to be some additional playbooks that hop between tiers to roll out an application. In this case, 'site.yml'
may be augmented by playbooks like 'deploy_exampledotcom.yml' but the general concepts still apply. Ansible allows you to deploy and configure using the same tool, so you would likely reuse groups and keep the OS configuration in separate playbooks or roles from the app deployment.
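For example, ``site.yml`` might import the deployment playbook alongside the configuration playbooks (a sketch, using the file name mentioned above)::

    ---
    # file: site.yml
    - import_playbook: webservers.yml
    - import_playbook: dbservers.yml
    - import_playbook: deploy_exampledotcom.yml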
Consider "playbooks" as a sports metaphor -- you can have one set of plays to use against all your infrastructure and situational plays that you use at different times and for different purposes.
.. _ship_modules_with_playbooks:
Using local Ansible modules
---------------------------
If a playbook has a :file:`./library` directory relative to its YAML file, this directory can be used to add Ansible modules that will
automatically be in the Ansible module path. This is a great way to keep modules that go with a playbook together. This is shown
in the directory structure example at the start of this section.
.. seealso::
:ref:`yaml_syntax`
Learn about YAML syntax
:ref:`working_with_playbooks`
Review the basic playbook features
:ref:`all_modules`
Learn about available modules
:ref:`developing_modules`
Learn how to extend Ansible by writing your own modules
:ref:`intro_patterns`
Learn about how to select hosts
`GitHub examples directory <https://github.com/ansible/ansible-examples>`_
Complete playbook files from the github project source
`Mailing List <https://groups.google.com/group/ansible-project>`_
Questions? Help? Ideas? Stop by the list on Google Groups