.. _playbooks_delegation:

Delegation, Rolling Updates, and Local Actions
==============================================

.. contents:: Topics

Being designed for multi-tier deployments since the beginning, Ansible is great at doing things on one host on behalf of another, or doing local steps with reference to some remote hosts.

This is particularly applicable when setting up continuous deployment infrastructure or zero-downtime rolling updates, where you might be talking with load balancers or monitoring systems.

Additional features allow for tuning the order in which things complete, and assigning a batch window size for how many machines to process at once during a rolling update.

This section covers all of these features. For examples of these items in use, `please see the ansible-examples repository <https://github.com/ansible/ansible-examples/>`_. There are quite a few examples of zero-downtime update procedures for different kinds of applications.

You should also consult the :doc:`modules` section; various modules like 'ec2_elb', 'nagios', 'bigip_pool', and 'netscaler' dovetail neatly with the concepts mentioned here.

You'll also want to read up on :doc:`playbooks_reuse_roles`, as the 'pre_tasks' and 'post_tasks' concepts are the places where you would typically call these modules.
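
As a rough sketch, calling out to a load balancer from ``pre_tasks`` and ``post_tasks`` around a role might look like the following (the ``acme_web_stack`` role name is a placeholder; the pool commands and 'webservers' group are borrowed from the delegation example later in this section)::

    - hosts: webservers
      serial: 5

      pre_tasks:
        - name: take the host out of the load balancer pool
          command: /usr/bin/take_out_of_pool {{ inventory_hostname }}
          delegate_to: 127.0.0.1

      roles:
        - acme_web_stack   # hypothetical role that does the actual upgrade work

      post_tasks:
        - name: put the host back into the load balancer pool
          command: /usr/bin/add_back_to_pool {{ inventory_hostname }}
          delegate_to: 127.0.0.1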

Be aware that certain tasks cannot be delegated, e.g. `include`, `add_host`, and `debug`, as they always execute on the controller.

.. _rolling_update_batch_size:

Rolling Update Batch Size
`````````````````````````

By default, Ansible will try to manage all of the machines referenced in a play in parallel. For a rolling update use case, you can define how many hosts Ansible should manage at a single time by using the ``serial`` keyword::

    - name: test play
      hosts: webservers
      serial: 3

In the above example, if we had 100 hosts, 3 hosts in the group 'webservers'
would complete the play entirely before moving on to the next 3 hosts.

The ``serial`` keyword can also be specified as a percentage, which will be applied to the total number of hosts in a
play, in order to determine the number of hosts per pass::

    - name: test play
      hosts: webservers
      serial: "30%"

If the number of hosts does not divide equally into the number of passes, the final pass will contain the remainder. For example, 100 hosts with ``serial: "30%"`` would be processed in three passes of 30 hosts each, followed by a final pass of the remaining 10.

As of Ansible 2.2, the batch sizes can be specified as a list, as follows::

    - name: test play
      hosts: webservers
      serial:
        - 1
        - 5
        - 10

In the above example, the first batch would contain a single host, the next would contain 5 hosts, and (if there are any hosts left),
every following batch would contain 10 hosts until all available hosts are used.

It is also possible to list multiple batch sizes as percentages::

    - name: test play
      hosts: webservers
      serial:
        - "10%"
        - "20%"
        - "100%"

You can also mix and match the values::

    - name: test play
      hosts: webservers
      serial:
        - 1
        - 5
        - "20%"

.. note::
    No matter how small the percentage, the number of hosts per pass will always be 1 or greater.

.. _maximum_failure_percentage:

Maximum Failure Percentage
``````````````````````````

By default, Ansible will continue executing actions as long as there are hosts in the batch that have not yet failed. The batch size for a play is determined by the ``serial`` parameter. If ``serial`` is not set, then the batch size is all the hosts specified in the ``hosts:`` field.

In some situations, such as with the rolling updates described above, it may be desirable to abort the play when a
certain threshold of failures has been reached. To achieve this, you can set a maximum failure
percentage on a play as follows::

    - hosts: webservers
      max_fail_percentage: 30
      serial: 10

In the above example, if more than 3 of the 10 servers in the group were to fail, the rest of the play would be aborted.

.. note::
    The percentage set must be exceeded, not equaled. For example, if serial were set to 4 and you wanted the task to abort
    when 2 of the systems failed, the percentage should be set at 49 rather than 50.
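
To make that concrete, a play that should abort as soon as 2 of the 4 hosts in a batch fail could be written roughly like this (reusing the 'webservers' group from the examples above)::

    - hosts: webservers
      serial: 4
      # 2 failed hosts out of 4 is 50%, which exceeds 49%, so the play aborts;
      # with max_fail_percentage: 50 it would keep going at exactly 2 failures.
      max_fail_percentage: 49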

.. _delegation:

Delegation
``````````

This isn't actually rolling-update specific but comes up frequently in those cases.

If you want to perform a task on one host with reference to other hosts, use the 'delegate_to' keyword on a task.
This is ideal for placing nodes in a load balanced pool, or removing them. It is also very useful for controlling outage windows.
Be aware that it does not make sense to delegate all tasks; debug, add_host, include, etc. always get executed on the controller.
Using this with the 'serial' keyword to control the number of hosts executing at one time is also a good idea::

    ---

    - hosts: webservers
      serial: 5

      tasks:

        - name: take out of load balancer pool
          command: /usr/bin/take_out_of_pool {{ inventory_hostname }}
          delegate_to: 127.0.0.1

        - name: actual steps would go here
          yum:
            name: acme-web-stack
            state: latest

        - name: add back to load balancer pool
          command: /usr/bin/add_back_to_pool {{ inventory_hostname }}
          delegate_to: 127.0.0.1

These commands will run on 127.0.0.1, which is the machine running Ansible. There is also a shorthand syntax that you can use on a per-task basis: 'local_action'. Here is the same playbook as above, but using the shorthand syntax for delegating to 127.0.0.1::

    ---
    # ...

    tasks:

      - name: take out of load balancer pool
        local_action: command /usr/bin/take_out_of_pool {{ inventory_hostname }}

      # ...

      - name: add back to load balancer pool
        local_action: command /usr/bin/add_back_to_pool {{ inventory_hostname }}

A common pattern is to use a local action to call 'rsync' to recursively copy files to the managed servers.
Here is an example::

    ---
    # ...

    tasks:

      - name: recursively copy files from management server to target
        local_action: command rsync -a /path/to/files {{ inventory_hostname }}:/path/to/target/

Note that you must have passphrase-less SSH keys or an ssh-agent configured for this to work, otherwise rsync
will need to ask for a passphrase.

If you need to specify more arguments, you can use the following syntax::

    ---
    # ...

    tasks:

      - name: Send summary mail
        local_action:
          module: mail
          subject: "Summary Mail"
          to: "{{ mail_recipient }}"
          body: "{{ mail_body }}"
        run_once: True

The `ansible_host` variable (`ansible_ssh_host` in 1.x or specific to ssh/paramiko plugins) reflects the host a task is delegated to.
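
A quick way to see this (purely illustrative; the 'dbservers' group is borrowed from the delegated-facts example below) is to print the variable from inside a delegated task::

    - name: show the host this task was delegated to
      debug:
        msg: "inventory_hostname is still {{ inventory_hostname }}, but ansible_host is {{ ansible_host }}"
      delegate_to: "{{ groups['dbservers'][0] }}"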

.. _delegate_facts:

Delegated facts
```````````````

By default, any facts gathered by a delegated task are assigned to the `inventory_hostname` (the current host) instead of the host which actually produced the facts (the delegated-to host).

The directive `delegate_facts` may be set to `True` to assign the task's gathered facts to the delegated host instead of the current one::

    - hosts: app_servers
      tasks:
        - name: gather facts from db servers
          setup:
          delegate_to: "{{item}}"
          delegate_facts: True
          loop: "{{groups['dbservers']}}"

The above will gather facts for the machines in the dbservers group and assign the facts to those machines and not to app_servers.

This way you can look up `hostvars['dbhost1']['ansible_default_ipv4']['address']` even though dbservers were not part of the play, or were left out by using `--limit`.
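
As a small follow-on sketch (assuming 'dbhost1' is one of the hosts in the dbservers group, as in the sentence above), a later task in the same play could then consume those delegated facts::

    - name: show the db server's address gathered by the delegated setup task
      debug:
        msg: "db address is {{ hostvars['dbhost1']['ansible_default_ipv4']['address'] }}"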

.. _run_once:

Run Once
````````

In some cases there may be a need to only run a task one time and only on one host. This can be achieved
by configuring "run_once" on a task::

    ---
    # ...

    tasks:

      # ...

      - command: /opt/application/upgrade_db.py
        run_once: true

      # ...

This can be optionally paired with "delegate_to" to specify an individual host to execute on::

    - command: /opt/application/upgrade_db.py
      run_once: true
      delegate_to: web01.example.org

When "run_once" is not used with "delegate_to" it will execute on the first host, as defined by inventory,
in the group(s) of hosts targeted by the play - e.g. webservers[0] if the play targeted "hosts: webservers".

This approach is similar to applying a conditional to a task such as::

    - command: /opt/application/upgrade_db.py
      when: inventory_hostname == groups['webservers'][0]

.. note::
    When used together with "serial", tasks marked as "run_once" will be run on one host in *each* serial batch.
    If it's crucial that the task is run only once regardless of "serial" mode, use
    the :code:`when: inventory_hostname == ansible_play_hosts[0]` construct.

.. _local_playbooks:

Local Playbooks
```````````````

It may be useful to use a playbook locally, rather than by connecting over SSH. This can be handy for
assuring the configuration of a system by putting a playbook in a crontab, or for running a playbook
inside an OS installer, such as an Anaconda kickstart.

To run an entire playbook locally, just set the "hosts:" line to "hosts: 127.0.0.1" and then run the playbook like so::

    ansible-playbook playbook.yml --connection=local

Alternatively, a local connection can be used in a single playbook play, even if other plays in the playbook
use the default remote connection type::

    - hosts: 127.0.0.1
      connection: local

.. note::
    If you set the connection to local and there is no ansible_python_interpreter set, modules will run under /usr/bin/python and not
    under {{ ansible_playbook_python }}. Be sure to set ansible_python_interpreter: "{{ ansible_playbook_python }}" in
    host_vars/localhost.yml, for example. You can avoid this issue by using ``local_action`` or ``delegate_to: localhost`` instead.
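
As a minimal sketch of what that looks like, the variable from the note above can simply live in the localhost host_vars file::

    # host_vars/localhost.yml
    ansible_python_interpreter: "{{ ansible_playbook_python }}"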

.. _interrupt_execution_on_any_error:

Interrupt execution on any error
````````````````````````````````

With the ``any_errors_fatal`` option, any failure on any host in a multi-host play will be treated as fatal and Ansible will exit immediately without waiting for the other hosts.

Sometimes ``serial`` execution is unsuitable; the number of hosts is unpredictable (because of dynamic inventory) and speed is crucial (simultaneous execution is required), but all tasks must be 100% successful to continue playbook execution.

For example, consider a service located in many datacenters with some load balancers to pass traffic from users to the service. There is a deploy playbook to upgrade service deb-packages. The playbook has the following stages:

- disable traffic on load balancers (must be turned off simultaneously)
- gracefully stop the service
- upgrade software (this step includes tests and starting the service)
- enable traffic on the load balancers (which should be turned on simultaneously)

The service can't be stopped with "alive" load balancers; they must be disabled first. Because of this, the second stage can't be played if any server failed in the first stage.

For datacenter "A", the playbook can be written this way::

    ---
    - hosts: load_balancers_dc_a
      any_errors_fatal: True
      tasks:
        - name: 'shutting down datacenter [ A ]'
          command: /usr/bin/disable-dc

    - hosts: frontends_dc_a
      tasks:
        - name: 'stopping service'
          command: /usr/bin/stop-software
        - name: 'updating software'
          command: /usr/bin/upgrade-software

    - hosts: load_balancers_dc_a
      tasks:
        - name: 'Starting datacenter [ A ]'
          command: /usr/bin/enable-dc

In this example Ansible will start the software upgrade on the front ends only if all of the load balancers are successfully disabled.

.. seealso::

   :doc:`playbooks`
       An introduction to playbooks
   `Ansible Examples on GitHub <https://github.com/ansible/ansible-examples>`_
       Many examples of full-stack deployments
   `User Mailing List <http://groups.google.com/group/ansible-devel>`_
       Have a question? Stop by the google group!
   `irc.freenode.net <http://irc.freenode.net>`_
       #ansible IRC chat channel