Update indentation used in the code examples, unify empty lines (#65874)
@@ -27,16 +27,17 @@ Rolling Update Batch Size

By default, Ansible will try to manage all of the machines referenced in a play in parallel. For a rolling update use case, you can define how many hosts Ansible should manage at a single time by using the ``serial`` keyword::

    ---
    - name: test play
      hosts: webservers
      serial: 2
      gather_facts: False

      tasks:
        - name: task one
          command: hostname

        - name: task two
          command: hostname

In the above example, if we had 4 hosts in the group 'webservers', 2
would complete the play completely before moving on to the next 2 hosts::
@@ -72,6 +73,7 @@ would complete the play completely before moving on to the next 2 hosts::

The ``serial`` keyword can also be specified as a percentage, which will be applied to the total number of hosts in a
play, in order to determine the number of hosts per pass::

    ---
    - name: test play
      hosts: webservers
      serial: "30%"
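      # Our gloss, not part of the original example: with 10 hosts in
      # 'webservers', "30%" yields batches of 3, 3, 3 and a final batch
      # of the single remaining host.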
@@ -80,33 +82,36 @@ If the number of hosts does not divide equally into the number of passes, the fi

As of Ansible 2.2, the batch sizes can be specified as a list, as follows::

    ---
    - name: test play
      hosts: webservers
      serial:
        - 1
        - 5
        - 10

In the above example, the first batch would contain a single host, the next would contain 5 hosts, and (if there are any hosts left),
every following batch would contain 10 hosts until all available hosts are used.
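For instance (our arithmetic, not from the original): with 26 hosts in 'webservers', the list above would produce batches of 1, 5, 10 and 10 hosts.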
It is also possible to list multiple batch sizes as percentages::

    ---
    - name: test play
      hosts: webservers
      serial:
        - "10%"
        - "20%"
        - "100%"

You can also mix and match the values::

    ---
    - name: test play
      hosts: webservers
      serial:
        - 1
        - 5
        - "20%"

.. note::
    No matter how small the percentage, the number of hosts per pass will always be 1 or greater.
@@ -122,6 +127,7 @@ In some situations, such as with the rolling updates described above, it may be
certain threshold of failures has been reached. To achieve this, you can set a maximum failure
percentage on a play as follows::

    ---
    - hosts: webservers
      max_fail_percentage: 30
      serial: 10
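    # Our gloss, not part of the original hunk: with serial: 10 and
    # max_fail_percentage: 30, the play is aborted once more than 3 of the
    # 10 hosts in a batch have failed (the percentage must be exceeded,
    # not merely equaled).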
@@ -147,51 +153,47 @@ Be aware that it does not make sense to delegate all tasks, debug, add_host, inc

Using this with the 'serial' keyword to control the number of hosts executing at one time is also a good idea::

    ---
    - hosts: webservers
      serial: 5

      tasks:
        - name: take out of load balancer pool
          command: /usr/bin/take_out_of_pool {{ inventory_hostname }}
          delegate_to: 127.0.0.1

        - name: actual steps would go here
          yum:
            name: acme-web-stack
            state: latest

        - name: add back to load balancer pool
          command: /usr/bin/add_back_to_pool {{ inventory_hostname }}
          delegate_to: 127.0.0.1

These commands will run on 127.0.0.1, which is the machine running Ansible. There is also a shorthand syntax that you can use on a per-task basis: 'local_action'. Here is the same playbook as above, but using the shorthand syntax for delegating to 127.0.0.1::

    ---
    # ...

      tasks:
        - name: take out of load balancer pool
          local_action: command /usr/bin/take_out_of_pool {{ inventory_hostname }}

    # ...

        - name: add back to load balancer pool
          local_action: command /usr/bin/add_back_to_pool {{ inventory_hostname }}

A common pattern is to use a local action to call 'rsync' to recursively copy files to the managed servers.
Here is an example::

    ---
    # ...

      tasks:
        - name: recursively copy files from management server to target
          local_action: command rsync -a /path/to/files {{ inventory_hostname }}:/path/to/target/

Note that you must have passphrase-less SSH keys or an ssh-agent configured for this to work, otherwise rsync
will need to ask for a passphrase.
@@ -200,15 +202,15 @@ In case you have to specify more arguments you can use the following syntax::

    ---
    # ...

      tasks:
        - name: Send summary mail
          local_action:
            module: mail
            subject: "Summary Mail"
            to: "{{ mail_recipient }}"
            body: "{{ mail_body }}"
          run_once: True
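          # Our note: run_once sends the summary a single time for the whole
          # play instead of once for every host.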

The `ansible_host` variable (`ansible_ssh_host` in 1.x or specific to ssh/paramiko plugins) reflects the host a task is delegated to.
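
As a minimal sketch (ours, not from the original) of how to observe this, a debug task delegated to localhost reports the delegated address rather than the inventory host's::

    - name: show which host the task is delegated to
      debug:
        msg: "{{ ansible_host }}"
      delegate_to: 127.0.0.1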
@@ -220,8 +222,9 @@ Delegated facts

By default, any facts gathered by a delegated task are assigned to the `inventory_hostname` (the current host) instead of the host which actually produced the facts (the delegated-to host).
The directive `delegate_facts` may be set to `True` to assign the task's gathered facts to the delegated host instead of the current one::

    ---
    - hosts: app_servers

      tasks:
        - name: gather facts from db servers
          setup:
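          # The diff cuts the example off here; a plausible continuation (our
          # sketch, assuming a 'dbservers' inventory group) would delegate the
          # task and store the gathered facts on each db server:
          delegate_to: "{{ item }}"
          delegate_facts: True
          loop: "{{ groups['dbservers'] }}"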
@@ -297,6 +300,7 @@ To run an entire playbook locally, just set the "hosts:" line to "hosts: 127.0.0

Alternatively, a local connection can be used in a single playbook play, even if other plays in the playbook
use the default remote connection type::

    ---
    - hosts: 127.0.0.1
      connection: local
@@ -330,21 +334,24 @@ For datacenter "A", the playbook can be written this way::

    ---
    - hosts: load_balancers_dc_a
      any_errors_fatal: True
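      # Our gloss: with any_errors_fatal, a failure on any load balancer
      # aborts the run, so the upgrade plays below are never started.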

      tasks:
        - name: 'shutting down datacenter [ A ]'
          command: /usr/bin/disable-dc

    - hosts: frontends_dc_a

      tasks:
        - name: 'stopping service'
          command: /usr/bin/stop-software

        - name: 'updating software'
          command: /usr/bin/upgrade-software

    - hosts: load_balancers_dc_a

      tasks:
        - name: 'Starting datacenter [ A ]'
          command: /usr/bin/enable-dc

In this example, Ansible will start the software upgrade on the front ends only if all of the load balancers are successfully disabled.