Merge remote-tracking branch 'upstream/devel' into ec2_util_boto3

Jonathan Davila 2016-01-25 17:35:39 -05:00
commit f95652e7db
280 changed files with 11609 additions and 2471 deletions

View file

@@ -24,6 +24,7 @@ script:
   - ./test/code-smell/replace-urlopen.sh .
   - ./test/code-smell/use-compat-six.sh lib
   - ./test/code-smell/boilerplate.sh
+  - ./test/code-smell/required-and-default-attributes.sh
   - if test x"$TOXENV" != x'py24' ; then tox ; fi
   - if test x"$TOXENV" = x'py24' ; then python2.4 -V && python2.4 -m compileall -fq -x 'module_utils/(a10|rax|openstack|ec2|gce).py' lib/ansible/module_utils ; fi
 #- make -C docsite all

View file

@@ -1,7 +1,16 @@
 Ansible Changes By Release
 ==========================
-## 2.0 "Over the Hills and Far Away" - ACTIVE DEVELOPMENT
+## 2.1 TBD - ACTIVE DEVELOPMENT
+
+####New Modules:
+* aws: ec2_vpc_net_facts
+* cloudstack: cs_volume
+
+####New Filters:
+* extract
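For readers new to the `extract` filter, a minimal sketch of how it composes with `map` (the data here is invented for illustration):

```
# Expected output: ["a", "c"]
- debug:
    msg: "{{ [0, 2] | map('extract', ['a', 'b', 'c', 'd']) | list }}"
```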
## 2.0 "Over the Hills and Far Away"
###Major Changes: ###Major Changes:
@@ -24,10 +33,13 @@ Ansible Changes By Release
   by setting the `ANSIBLE_NULL_REPRESENTATION` environment variable.
 * Added `meta: refresh_inventory` to force rereading the inventory in a play.
   This re-executes inventory scripts, but does not force them to ignore any cache they might use.
-* Now when you delegate an action that returns ansible_facts, these facts will be applied to the delegated host, unlike before when they were applied to the current host.
+* New delegate_facts directive, a boolean that allows you to apply facts to the delegated host (true/yes) instead of the inventory_hostname (no/false), which is the default and previous behaviour.
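A minimal sketch of the new directive in task form (hostname invented for illustration):

```
- hosts: app_servers
  tasks:
    # gather facts on the delegated host and store them under its hostvars,
    # not under the current inventory_hostname
    - setup:
      delegate_to: db1.example.com
      delegate_facts: true
```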
+* local connections now work with 'su' as a privilege escalation method
+* Ansible 2.0 has deprecated the "ssh" in ansible_ssh_user, ansible_ssh_host, and ansible_ssh_port; these become ansible_user, ansible_host, and ansible_port.
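A small sketch of the rename in a vars file (path and values invented for illustration):

```
# group_vars/web.yml
# deprecated spellings: ansible_ssh_host, ansible_ssh_user, ansible_ssh_port
ansible_host: 10.0.0.5
ansible_user: deploy
ansible_port: 2222
```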
 * New ssh configuration variables (`ansible_ssh_common_args`, `ansible_ssh_extra_args`) can be used to configure a
   per-group or per-host ssh ProxyCommand or set any other ssh options.
   `ansible_ssh_extra_args` is used to set options that are accepted only by ssh (not sftp or scp, which have their own analogous settings).
+* ansible-pull can now verify the code it runs when using git as a source repository, using git's code signing and verification features.
 * Backslashes used when specifying parameters in jinja2 expressions in YAML dicts sometimes needed to be escaped twice.
   This has been fixed so that escaping once works. Here's an example of how playbooks need to be modified:
@@ -71,9 +83,31 @@ newline being stripped you can change your playbook like this:
         "msg": "Testing some things"
     ```
+* When specifying complex args as a variable, the variable must use the full jinja2
+  variable syntax ('{{var_name}}') - bare variable names there are no longer accepted.
+  In fact, even specifying args with variables has been deprecated, and will not be
+  allowed in future versions:
+
+  ```
+  ---
+  - hosts: localhost
+    connection: local
+    gather_facts: false
+    vars:
+      my_dirs:
+        - { path: /tmp/3a, state: directory, mode: 0755 }
+        - { path: /tmp/3b, state: directory, mode: 0700 }
+    tasks:
+      - file:
+        args: "{{item}}"
+        with_items: my_dirs
+  ```
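As a hedged sketch of the form the deprecation points toward, the same play can pass the loop variable's fields explicitly instead of through `args`:

```
  tasks:
    - file:
        path: "{{ item.path }}"
        state: "{{ item.state }}"
        mode: "{{ item.mode }}"
      with_items: "{{ my_dirs }}"
```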
 ###Plugins
 * Rewritten dnf module that should be faster and less prone to encountering bugs in corner cases
+* WinRM connection plugin passes all vars named `ansible_winrm_*` to the underlying pywinrm client. This allows, for instance, `ansible_winrm_server_cert_validation=ignore` to be used with newer versions of pywinrm to disable certificate validation on Python 2.7.9+.
+* WinRM connection plugin put_file is significantly faster and no longer has file size limitations.
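A minimal host_vars sketch (hostname and values invented) showing how such a variable reaches pywinrm:

```
# host_vars/winhost.example.com.yml
ansible_connection: winrm
ansible_user: Administrator
ansible_port: 5986
# any ansible_winrm_* variable is handed straight to the pywinrm client
ansible_winrm_server_cert_validation: ignore
```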
 ####Deprecated Modules (new ones in parens):
@@ -94,23 +128,31 @@ newline being stripped you can change your playbook like this:
 * amazon: ec2_eni
 * amazon: ec2_eni_facts
 * amazon: ec2_remote_facts
+* amazon: ec2_vpc_igw
 * amazon: ec2_vpc_net
+* amazon: ec2_vpc_net_facts
 * amazon: ec2_vpc_route_table
 * amazon: ec2_vpc_route_table_facts
 * amazon: ec2_vpc_subnet
+* amazon: ec2_vpc_subnet_facts
 * amazon: ec2_win_password
 * amazon: ecs_cluster
 * amazon: ecs_task
 * amazon: ecs_taskdefinition
-* amazon: elasticache_subnet_group
+* amazon: elasticache_subnet_group_facts
 * amazon: iam
+* amazon: iam_cert
 * amazon: iam_policy
-* amazon: route53_zone
+* amazon: route53_facts
 * amazon: route53_health_check
+* amazon: route53_zone
-* amazon: sts_assume_role
 * amazon: s3_bucket
 * amazon: s3_lifecycle
 * amazon: s3_logging
+* amazon: sqs_queue
+* amazon: sns_topic
+* amazon: sts_assume_role
 * apk
 * bigip_gtm_wide_ip
 * bundler
@@ -151,29 +193,35 @@ newline being stripped you can change your playbook like this:
 * cloudstack: cs_template
 * cloudstack: cs_user
 * cloudstack: cs_vmsnapshot
+* cronvar
 * datadog_monitor
 * deploy_helper
+* docker: docker_login
 * dpkg_selections
 * elasticsearch_plugin
 * expect
 * find
+* google: gce_tag
 * hall
 * ipify_facts
 * iptables
 * libvirt: virt_net
 * libvirt: virt_pool
 * maven_artifact
-* openstack: os_ironic
-* openstack: os_ironic_node
+* openstack: os_auth
 * openstack: os_client_config
+* openstack: os_floating_ip
 * openstack: os_image
 * openstack: os_image_facts
-* openstack: os_floating_ip
+* openstack: os_ironic
+* openstack: os_ironic_node
+* openstack: os_keypair
 * openstack: os_network
 * openstack: os_network_facts
 * openstack: os_nova_flavor
 * openstack: os_object
 * openstack: os_port
+* openstack: os_project
 * openstack: os_router
 * openstack: os_security_group
 * openstack: os_security_group_rule
@@ -183,6 +231,7 @@ newline being stripped you can change your playbook like this:
 * openstack: os_server_volume
 * openstack: os_subnet
 * openstack: os_subnet_facts
+* openstack: os_user
 * openstack: os_user_group
 * openstack: os_volume
 * openvswitch_db.
@@ -193,14 +242,15 @@ newline being stripped you can change your playbook like this:
 * profitbricks: profitbricks
 * profitbricks: profitbricks_datacenter
 * profitbricks: profitbricks_nic
-* profitbricks: profitbricks_snapshot
 * profitbricks: profitbricks_volume
 * profitbricks: profitbricks_volume_attachments
-* proxmox
-* proxmox_template
+* profitbricks: profitbricks_snapshot
+* proxmox: proxmox
+* proxmox: proxmox_template
 * puppet
 * pushover
 * pushbullet
+* rax: rax_clb_ssl
 * rax: rax_mon_alarm
 * rax: rax_mon_check
 * rax: rax_mon_entity
@@ -210,6 +260,7 @@ newline being stripped you can change your playbook like this:
 * rabbitmq_exchange
 * rabbitmq_queue
 * selinux_permissive
+* sendgrid
 * sensu_check
 * sensu_subscription
 * seport
@@ -221,21 +272,24 @@ newline being stripped you can change your playbook like this:
 * vertica_role
 * vertica_schema
 * vertica_user
-* vmware: vmware_datacenter
+* vmware: vca_fw
+* vmware: vca_nat
 * vmware: vmware_cluster
+* vmware: vmware_datacenter
 * vmware: vmware_dns_config
 * vmware: vmware_dvs_host
 * vmware: vmware_dvs_portgroup
 * vmware: vmware_dvswitch
 * vmware: vmware_host
-* vmware: vmware_vmkernel_ip_config
+* vmware: vmware_migrate_vmk
 * vmware: vmware_portgroup
+* vmware: vmware_target_canonical_facts
 * vmware: vmware_vm_facts
+* vmware: vmware_vm_vss_dvs_migrate
 * vmware: vmware_vmkernel
+* vmware: vmware_vmkernel_ip_config
 * vmware: vmware_vsan_cluster
 * vmware: vmware_vswitch
-* vmware: vca_fw
-* vmware: vca_nat
 * vmware: vsphere_copy
 * webfaction_app
 * webfaction_db
@@ -243,17 +297,22 @@ newline being stripped you can change your playbook like this:
 * webfaction_mailbox
 * webfaction_site
 * win_acl
+* win_dotnet_ngen
 * win_environment
 * win_firewall_rule
-* win_package
-* win_scheduled_task
 * win_iis_virtualdirectory
 * win_iis_webapplication
 * win_iis_webapppool
 * win_iis_webbinding
 * win_iis_website
+* win_lineinfile
+* win_nssm
+* win_package
 * win_regedit
+* win_scheduled_task
 * win_unzip
+* win_updates
+* win_webpicmd
 * xenserver_facts
 * zabbix_host
 * zabbix_hostmacro
@@ -266,6 +325,7 @@ newline being stripped you can change your playbook like this:
 * fleetctl
 * openvz
 * nagios_ndo
+* nsot
 * proxmox
 * rudder
 * serf
@@ -285,6 +345,11 @@ newline being stripped you can change your playbook like this:
 * docker: for talking to docker containers on the ansible controller machine without using ssh.
+
+####New Callbacks:
+* logentries: plugin to send play data to logentries service
+* skippy: same as default but does not display skip messages

 ###Minor changes:
 * Many more tests. The new API makes things more testable and we took advantage of it.
@@ -311,9 +376,16 @@ newline being stripped you can change your playbook like this:
 * Lookup, vars and action plugin pathing has been normalized, all now follow the same sequence to find relative files.
 * We do not ignore the explicitly set login user for ssh when it matches the 'current user' anymore, this allows overriding .ssh/config when it is set
   explicitly. Leaving it unset will still use the same user and respect .ssh/config. This also means ansible_ssh_user can now return a None value.
-* Handling of undefined variables has changed. In most places they will now raise an error instead of silently injecting an empty string. Use the default filter if you want to approximate the old behaviour::
+* environment variables passed to remote shells now default to 'controller' settings, with fallback to en_us.UTF8 which was the previous default.
+* add_hosts is much stricter about host name and will prevent invalid names from being added.
+* ansible-pull now defaults to doing shallow checkouts with git, use `--full` to return to previous behaviour.
+* random cows are more random
+* when: now gets the registered var after the first iteration, making it possible to break out of item loops
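Because `when:` now sees the registered variable on later iterations, a loop can short-circuit once one item succeeds. A hedged sketch (command and items invented; uses the era's `|failed` filter):

```
- command: /usr/local/bin/try-mirror {{ item }}
  register: fetch_attempt
  ignore_errors: true
  # the first iteration always runs; later ones are skipped once a try succeeds
  when: fetch_attempt is not defined or fetch_attempt|failed
  with_items:
    - mirror1.example.com
    - mirror2.example.com
    - mirror3.example.com
```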
+* Handling of undefined variables has changed. In most places they will now raise an error instead of silently injecting an empty string. Use the default filter if you want to approximate the old behaviour:
+
+  ```
   - debug: msg="The error message was: {{error_code |default('') }}"
+  ```

 ## 1.9.4 "Dancing In the Street" - Oct 9, 2015

View file

@@ -4,12 +4,14 @@ prune ticket_stubs
 prune packaging
 prune test
 prune hacking
-include README.md packaging/rpm/ansible.spec COPYING
+include README.md COPYING
 include examples/hosts
 include examples/ansible.cfg
 include lib/ansible/module_utils/powershell.ps1
 recursive-include lib/ansible/modules *
+recursive-include lib/ansible/galaxy/data *
 recursive-include docs *
+recursive-include packaging *
 include Makefile
 include VERSION
 include MANIFEST.in

View file

@@ -44,7 +44,7 @@ GIT_HASH := $(shell git log -n 1 --format="%h")
 GIT_BRANCH := $(shell git rev-parse --abbrev-ref HEAD | sed 's/[-_.\/]//g')
 GITINFO = .$(GIT_HASH).$(GIT_BRANCH)
 else
-GITINFO = ''
+GITINFO = ""
 endif

 ifeq ($(shell echo $(OS) | egrep -c 'Darwin|FreeBSD|OpenBSD'),1)
@@ -167,6 +167,9 @@ install:
 sdist: clean docs
	$(PYTHON) setup.py sdist

+sdist_upload: clean docs
+	$(PYTHON) setup.py sdist upload 2>&1 |tee upload.log
+
 rpmcommon: $(MANPAGES) sdist
	@mkdir -p rpm-build
	@cp dist/*.gz rpm-build/

View file

@@ -55,3 +55,4 @@ Ansible was created by [Michael DeHaan](https://github.com/mpdehaan) (michael.de
 Ansible is sponsored by [Ansible, Inc](http://ansible.com)

View file

@@ -4,7 +4,7 @@ Ansible Releases at a Glance
 Active Development
 ++++++++++++++++++
-2.0 "TBD" - in progress
+2.0 "Over the Hills and Far Away" - in progress

 Released
 ++++++++

View file

@@ -1 +1 @@
-2.0.0 0.5.beta3
+2.1.0

ansible-core-sitemap.xml (new file, 2716 lines)

File diff suppressed because it is too large

View file

@@ -60,6 +60,7 @@ if __name__ == '__main__':
     try:
         display = Display()
+        display.debug("starting run")
+
         sub = None
         try:

View file

@@ -27,11 +27,11 @@ result['all'] = {}
 pipe = Popen(['virsh', '-q', '-c', 'lxc:///', 'list', '--name', '--all'], stdout=PIPE, universal_newlines=True)
 result['all']['hosts'] = [x[:-1] for x in pipe.stdout.readlines()]
 result['all']['vars'] = {}
-result['all']['vars']['ansible_connection'] = 'lxc'
+result['all']['vars']['ansible_connection'] = 'libvirt_lxc'

 if len(sys.argv) == 2 and sys.argv[1] == '--list':
     print(json.dumps(result))
 elif len(sys.argv) == 3 and sys.argv[1] == '--host':
-    print(json.dumps({'ansible_connection': 'lxc'}))
+    print(json.dumps({'ansible_connection': 'libvirt_lxc'}))
 else:
     print("Need an argument, either --list or --host <host>")

contrib/inventory/nsot.py (new file, 341 lines)
View file

@@ -0,0 +1,341 @@
#!/bin/env python

'''
nsot
====

Ansible Dynamic Inventory to pull hosts from NSoT, a flexible CMDB by Dropbox

Features
--------

* Define host groups in form of NSoT device attribute criteria
* All parameters defined by the spec as of 2015-09-05 are supported.

  + ``--list``: Returns JSON hash of host groups -> hosts and top-level
    ``_meta`` -> ``hostvars`` which correspond to all device attributes.

    Group vars can be specified in the YAML configuration, noted below.

  + ``--host <hostname>``: Returns JSON hash where every item is a device
    attribute.

* In addition to all attributes assigned to the resource being returned, the
  script will also append ``site_id`` and ``id`` as facts to utilize.

Configuration
-------------

Since it'd be annoying and failure-prone to guess where your configuration
file is, use ``NSOT_INVENTORY_CONFIG`` to specify the path to it.

This file should adhere to the YAML spec. Every top-level variable must be a
desired Ansible group name hashed with a single 'query' item to define the
NSoT attribute query.

Queries follow the normal NSoT query syntax, `shown here`_

.. _shown here: https://github.com/dropbox/pynsot#set-queries

.. code:: yaml

   routers:
     query: 'deviceType=ROUTER'
     vars:
       a: b
       c: d

   juniper_fw:
     query: 'deviceType=FIREWALL manufacturer=JUNIPER'

   not_f10:
     query: '-manufacturer=FORCE10'

The inventory will automatically use your ``.pynsotrc`` like normal pynsot from
cli would, so make sure that's configured appropriately.

.. note::

   Attributes I'm showing above are influenced from ones that the Trigger
   project likes. As is the spirit of NSoT, use whichever attributes work best
   for your workflow.

If the config file is blank or absent, the following default groups will be
created:

* ``routers``: deviceType=ROUTER
* ``switches``: deviceType=SWITCH
* ``firewalls``: deviceType=FIREWALL

These are likely not useful for everyone so please use the configuration. :)

.. note::

   By default, resources will only be returned for what your default
   site is set for in your ``~/.pynsotrc``.

   If you want to specify, add an extra key under the group for ``site: n``.

Output Examples
---------------

Here are some examples shown from just calling the command directly::

   $ NSOT_INVENTORY_CONFIG=$PWD/test.yaml ansible_nsot --list | jq '.'
   {
     "routers": {
       "hosts": [
         "test1.example.com"
       ],
       "vars": {
         "cool_level": "very",
         "group": "routers"
       }
     },
     "firewalls": {
       "hosts": [
         "test2.example.com"
       ],
       "vars": {
         "cool_level": "enough",
         "group": "firewalls"
       }
     },
     "_meta": {
       "hostvars": {
         "test2.example.com": {
           "make": "SRX",
           "site_id": 1,
           "id": 108
         },
         "test1.example.com": {
           "make": "MX80",
           "site_id": 1,
           "id": 107
         }
       }
     },
     "rtr_and_fw": {
       "hosts": [
         "test1.example.com",
         "test2.example.com"
       ],
       "vars": {}
     }
   }

   $ NSOT_INVENTORY_CONFIG=$PWD/test.yaml ansible_nsot --host test1 | jq '.'
   {
     "make": "MX80",
     "site_id": 1,
     "id": 107
   }

'''

from __future__ import print_function
import sys
import os
import pkg_resources
import argparse
import json
import yaml
from textwrap import dedent
from pynsot.client import get_api_client
from pynsot.app import HttpServerError
from click.exceptions import UsageError


def warning(*objs):
    print("WARNING: ", *objs, file=sys.stderr)


class NSoTInventory(object):
    '''NSoT Client object for gathering inventory'''

    def __init__(self):
        self.config = dict()
        config_env = os.environ.get('NSOT_INVENTORY_CONFIG')
        if config_env:
            try:
                config_file = os.path.abspath(config_env)
            except IOError:  # If file non-existent, use default config
                self._config_default()
            except Exception as e:
                sys.exit('%s\n' % e)

            with open(config_file) as f:
                try:
                    self.config.update(yaml.safe_load(f))
                except TypeError:  # If empty file, use default config
                    warning('Empty config file')
                    self._config_default()
                except Exception as e:
                    sys.exit('%s\n' % e)
        else:  # Use defaults if env var missing
            self._config_default()
        self.groups = self.config.keys()
        self.client = get_api_client()
        self._meta = {'hostvars': dict()}

    def _config_default(self):
        default_yaml = '''
        ---
        routers:
          query: deviceType=ROUTER
        switches:
          query: deviceType=SWITCH
        firewalls:
          query: deviceType=FIREWALL
        '''
        self.config = yaml.safe_load(dedent(default_yaml))

    def do_list(self):
        '''Direct callback for when ``--list`` is provided

        Relies on the configuration generated from init to run
        _inventory_group()
        '''
        inventory = dict()
        for group, contents in self.config.iteritems():
            group_response = self._inventory_group(group, contents)
            inventory.update(group_response)
        inventory.update({'_meta': self._meta})
        return json.dumps(inventory)

    def do_host(self, host):
        return json.dumps(self._hostvars(host))

    def _hostvars(self, host):
        '''Return dictionary of all device attributes

        Depending on number of devices in NSoT, could be rather slow since this
        has to request every device resource to filter through
        '''
        device = [i for i in self.client.devices.get()['data']['devices']
                  if host in i['hostname']][0]
        attributes = device['attributes']
        attributes.update({'site_id': device['site_id'], 'id': device['id']})
        return attributes

    def _inventory_group(self, group, contents):
        '''Takes a group and returns inventory for it as dict

        :param group: Group name
        :type group: str
        :param contents: The contents of the group's YAML config
        :type contents: dict

        contents param should look like::

            {
              'query': 'xx',
              'vars':
                  'a': 'b'
            }

        Will return something like::

            { group: {
                hosts: [],
                vars: {},
            }
        '''
        query = contents.get('query')
        hostvars = contents.get('vars', dict())
        site = contents.get('site', dict())
        obj = {group: dict()}
        obj[group]['hosts'] = []
        obj[group]['vars'] = hostvars
        try:
            assert isinstance(query, basestring)
        except:
            sys.exit('ERR: Group queries must be a single string\n'
                     '  Group: %s\n'
                     '  Query: %s\n' % (group, query)
                     )
        try:
            if site:
                site = self.client.sites(site)
                devices = site.devices.query.get(query=query)
            else:
                devices = self.client.devices.query.get(query=query)
        except HttpServerError as e:
            if '500' in str(e.response):
                _site = 'Correct site id?'
                _attr = 'Queried attributes actually exist?'
                questions = _site + '\n' + _attr
                sys.exit('ERR: 500 from server.\n%s' % questions)
            else:
                raise
        except UsageError:
            sys.exit('ERR: Could not connect to server. Running?')

        # Would do a list comprehension here, but would like to save code/time
        # and also acquire attributes in this step
        for host in devices['data']['devices']:
            # Iterate through each device that matches query, assign hostname
            # to the group's hosts array and then use this single iteration as
            # a chance to update self._meta which will be used in the final
            # return
            hostname = host['hostname']
            obj[group]['hosts'].append(hostname)
            attributes = host['attributes']
            attributes.update({'site_id': host['site_id'], 'id': host['id']})
            self._meta['hostvars'].update({hostname: attributes})

        return obj


def parse_args():
    desc = __doc__.splitlines()[4]  # Just to avoid being redundant

    # Establish parser with options and error out if no action provided
    parser = argparse.ArgumentParser(
        description=desc,
        conflict_handler='resolve',
    )

    # Arguments
    #
    # Currently accepting (--list | -l) and (--host | -h)
    # These must not be allowed together
    parser.add_argument(
        '--list', '-l',
        help='Print JSON object containing hosts to STDOUT',
        action='store_true',
        dest='list_',  # Avoiding syntax highlighting for list
    )

    parser.add_argument(
        '--host', '-h',
        help='Print JSON object containing hostvars for <host>',
        action='store',
    )
    args = parser.parse_args()

    if not args.list_ and not args.host:  # Require at least one option
        parser.exit(status=1, message='No action requested')

    if args.list_ and args.host:  # Do not allow multiple options
        parser.exit(status=1, message='Too many actions requested')

    return args


def main():
    '''Set up argument handling and callback routing'''
    args = parse_args()
    client = NSoTInventory()

    # Callback condition
    if args.list_:
        print(client.do_list())
    elif args.host:
        print(client.do_host(args.host))


if __name__ == '__main__':
    main()

View file

@@ -0,0 +1,22 @@
---
juniper_routers:
  query: 'deviceType=ROUTER manufacturer=JUNIPER'
  vars:
    group: juniper_routers
    netconf: true
    os: junos

cisco_asa:
  query: 'manufacturer=CISCO deviceType=FIREWALL'
  vars:
    group: cisco_asa
    routed_vpn: false
    stateful: true

old_cisco_asa:
  query: 'manufacturer=CISCO deviceType=FIREWALL -softwareVersion=8.3+'
  vars:
    old_nat: true

not_f10:
  query: '-manufacturer=FORCE10'

View file

@@ -32,6 +32,13 @@
 # all of them and present them as one contiguous inventory.
 #
 # See the adjacent openstack.yml file for an example config file
+# There are two ansible inventory specific options that can be set in
+# the inventory section.
+# expand_hostvars controls whether or not the inventory will make extra API
+#                 calls to fill out additional information about each server
+# use_hostnames changes the behavior from registering every host with its UUID
+#               and making a group of its hostname to only doing this if the
+#               hostname in question has more than one server

 import argparse
 import collections
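A sketch of the inventory section those comments describe, using the same keys the openstack.yml hunk later in this commit adds (values illustrative):

```
# openstack.yml, inventory-specific section
ansible:
  use_hostnames: True
  expand_hostvars: False
```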
@@ -51,7 +58,7 @@ import shade.inventory
 CONFIG_FILES = ['/etc/ansible/openstack.yaml']

-def get_groups_from_server(server_vars):
+def get_groups_from_server(server_vars, namegroup=True):
     groups = []

     region = server_vars['region']
@@ -76,7 +83,8 @@ def get_groups_from_server(server_vars):
         groups.append(extra_group)

     groups.append('instance-%s' % server_vars['id'])
-    groups.append(server_vars['name'])
+    if namegroup:
+        groups.append(server_vars['name'])

     for key in ('flavor', 'image'):
         if 'name' in server_vars[key]:
@@ -94,9 +102,9 @@ def get_groups_from_server(server_vars):
     return groups

-def get_host_groups(inventory):
+def get_host_groups(inventory, refresh=False):
     (cache_file, cache_expiration_time) = get_cache_settings()
-    if is_cache_stale(cache_file, cache_expiration_time):
+    if is_cache_stale(cache_file, cache_expiration_time, refresh=refresh):
         groups = to_json(get_host_groups_from_cloud(inventory))
         open(cache_file, 'w').write(groups)
     else:
@@ -106,23 +114,44 @@ def get_host_groups(inventory):

 def get_host_groups_from_cloud(inventory):
     groups = collections.defaultdict(list)
+    firstpass = collections.defaultdict(list)
     hostvars = {}
-    for server in inventory.list_hosts():
+    list_args = {}
+    if hasattr(inventory, 'extra_config'):
+        use_hostnames = inventory.extra_config['use_hostnames']
+        list_args['expand'] = inventory.extra_config['expand_hostvars']
+    else:
+        use_hostnames = False
+
+    for server in inventory.list_hosts(**list_args):
         if 'interface_ip' not in server:
             continue
-        for group in get_groups_from_server(server):
-            groups[group].append(server['id'])
-        hostvars[server['id']] = dict(
-            ansible_ssh_host=server['interface_ip'],
-            openstack=server,
-        )
+        firstpass[server['name']].append(server)
+    for name, servers in firstpass.items():
+        if len(servers) == 1 and use_hostnames:
+            server = servers[0]
+            hostvars[name] = dict(
+                ansible_ssh_host=server['interface_ip'],
+                openstack=server)
+            for group in get_groups_from_server(server, namegroup=False):
+                groups[group].append(server['name'])
+        else:
+            for server in servers:
+                server_id = server['id']
+                hostvars[server_id] = dict(
+                    ansible_ssh_host=server['interface_ip'],
+                    openstack=server)
+                for group in get_groups_from_server(server, namegroup=True):
+                    groups[group].append(server_id)
     groups['_meta'] = {'hostvars': hostvars}
     return groups

-def is_cache_stale(cache_file, cache_expiration_time):
+def is_cache_stale(cache_file, cache_expiration_time, refresh=False):
     ''' Determines if cache file has expired, or if it is still valid '''
+    if refresh:
+        return True
     if os.path.isfile(cache_file):
         mod_time = os.path.getmtime(cache_file)
         current_time = time.time()
@@ -169,14 +198,24 @@ def main():
     try:
         config_files = os_client_config.config.CONFIG_FILES + CONFIG_FILES
         shade.simple_logging(debug=args.debug)
-        inventory = shade.inventory.OpenStackInventory(
+        inventory_args = dict(
             refresh=args.refresh,
             config_files=config_files,
             private=args.private,
         )
+        if hasattr(shade.inventory.OpenStackInventory, 'extra_config'):
+            inventory_args.update(dict(
+                config_key='ansible',
+                config_defaults={
+                    'use_hostnames': False,
+                    'expand_hostvars': True,
+                }
+            ))
+
+        inventory = shade.inventory.OpenStackInventory(**inventory_args)
+
         if args.list:
-            output = get_host_groups(inventory)
+            output = get_host_groups(inventory, refresh=args.refresh)
         elif args.host:
             output = to_json(inventory.get_host(args.host))
         print(output)

View file

@@ -26,3 +26,6 @@ clouds:
       username: stack
       password: stack
       project_name: stack
+ansible:
+  use_hostnames: True
+  expand_hostvars: False

View file

@@ -55,3 +55,12 @@
 # will be ignored, and 4 will be used. Accepts a comma separated list,
 # the first found wins.
 # access_ip_version = 4
+
+# Environment Variable: RAX_CACHE_MAX_AGE
+# Default: 600
+#
+# A configuration that changes the behavior of the inventory cache.
+# An inventory listing performed within this many seconds of the last
+# full request will be returned from the cache instead of making a full
+# request for all inventory. Setting this value to 0 will force a full
+# request.
+# cache_max_age = 600

View file

@@ -355,9 +355,12 @@ def get_cache_file_path(regions):

 def _list(regions, refresh_cache=True):
+    cache_max_age = int(get_config(p, 'rax', 'cache_max_age',
+                                   'RAX_CACHE_MAX_AGE', 600))
+
     if (not os.path.exists(get_cache_file_path(regions)) or
             refresh_cache or
-            (time() - os.stat(get_cache_file_path(regions))[-1]) > 600):
+            (time() - os.stat(get_cache_file_path(regions))[-1]) > cache_max_age):
         # Cache file doesn't exist or older than 10m or refresh cache requested
         _list_into_cache(regions)

View file

@@ -12,7 +12,7 @@ ansible-galaxy - manage roles using galaxy.ansible.com

 SYNOPSIS
 --------
-ansible-galaxy [init|info|install|list|remove] [--help] [options] ...
+ansible-galaxy [delete|import|info|init|install|list|login|remove|search|setup] [--help] [options] ...

 DESCRIPTION

@@ -20,7 +20,7 @@ DESCRIPTION
 *Ansible Galaxy* is a shared repository for Ansible roles.
 The ansible-galaxy command can be used to manage these roles,
-or by creating a skeleton framework for roles you'd like to upload to Galaxy.
+or for creating a skeleton framework for roles you'd like to upload to Galaxy.

 COMMON OPTIONS
 --------------

@@ -29,7 +29,6 @@ COMMON OPTIONS
 Show a help message related to the given sub-command.

 INSTALL
 -------
@@ -145,6 +144,204 @@ The path to the directory containing your roles. The default is the *roles_path*
 configured in your *ansible.cfg* file (/etc/ansible/roles if not configured)
SEARCH
------
The *search* sub-command returns a filtered list of roles found on the remote
server.
USAGE
~~~~~
$ ansible-galaxy search [options] [searchterm1 searchterm2]
OPTIONS
~~~~~~~
*--galaxy-tags*::
Provide a comma separated list of Galaxy Tags on which to filter.
*--platforms*::
Provide a comma separated list of Platforms on which to filter.
*--author*::
Specify the username of a Galaxy contributor on which to filter.
*-c*, *--ignore-certs*::
Ignore TLS certificate errors.
*-s*, *--server*::
Override the default server https://galaxy.ansible.com.
INFO
----
The *info* sub-command shows detailed information for a specific role.
Details returned about the role include information from the local copy
as well as information from galaxy.ansible.com.
USAGE
~~~~~
$ ansible-galaxy info [options] role_name[, version]
OPTIONS
~~~~~~~
*-p* 'ROLES_PATH', *--roles-path=*'ROLES_PATH'::
The path to the directory containing your roles. The default is the *roles_path*
configured in your *ansible.cfg* file (/etc/ansible/roles if not configured)
*-c*, *--ignore-certs*::
Ignore TLS certificate errors.
*-s*, *--server*::
Override the default server https://galaxy.ansible.com.
LOGIN
-----
The *login* sub-command is used to authenticate with galaxy.ansible.com.
Authentication is required to use the import, delete and setup commands.
It will authenticate the user, retrieve a token from Galaxy, and store it
in the user's home directory.
USAGE
~~~~~
$ ansible-galaxy login [options]
The *login* sub-command prompts for a *GitHub* username and password. It does
NOT send your password to Galaxy. It actually authenticates with GitHub and
creates a personal access token. It then sends the personal access token to
Galaxy, which in turn verifies your identity and returns a Galaxy access
token. After authentication completes, the *GitHub* personal access token is
destroyed.
If you do not wish to use your GitHub password, or if you have two-factor
authentication enabled with GitHub, use the *--github-token* option to pass a
personal access token that you create. Log into GitHub, go to Settings and
click on Personal Access Token to create a token.
OPTIONS
~~~~~~~
*-c*, *--ignore-certs*::
Ignore TLS certificate errors.
*-s*, *--server*::
Override the default server https://galaxy.ansible.com.
*--github-token*::
Authenticate using a *GitHub* personal access token rather than a password.
IMPORT
------
Import a role from *GitHub* to galaxy.ansible.com. Requires the user first
authenticate with galaxy.ansible.com using the *login* subcommand.
USAGE
~~~~~
$ ansible-galaxy import [options] github_user github_repo
OPTIONS
~~~~~~~
*-c*, *--ignore-certs*::
Ignore TLS certificate errors.
*-s*, *--server*::
Override the default server https://galaxy.ansible.com.
*--branch*::
Provide a specific branch to import. When a branch is not specified the
branch found in meta/main.yml is used. If no branch is specified in
meta/main.yml, the repo's default branch (usually master) is used.
DELETE
------
The *delete* sub-command will delete a role from galaxy.ansible.com. Requires
the user first authenticate with galaxy.ansible.com using the *login* subcommand.
USAGE
~~~~~
$ ansible-galaxy delete [options] github_user github_repo
OPTIONS
~~~~~~~
*-c*, *--ignore-certs*::
Ignore TLS certificate errors.
*-s*, *--server*::
Override the default server https://galaxy.ansible.com.
SETUP
-----
The *setup* sub-command creates an integration point for *Travis CI*, enabling
galaxy.ansible.com to receive notifications from *Travis* on build completion.
Requires the user first authenticate with galaxy.ansible.com using the *login*
subcommand.
USAGE
~~~~~
$ ansible-galaxy setup [options] source github_user github_repo secret
* Use *travis* as the source value. In the future additional source values may
be added.
* Provide your *Travis* user token as the secret. The token is not stored by
galaxy.ansible.com. A hash is created using github_user, github_repo
and your token. The hash value is what actually gets stored.
OPTIONS
~~~~~~~
*-c*, *--ignore-certs*::
Ignore TLS certificate errors.
*-s*, *--server*::
Override the default server https://galaxy.ansible.com.
--list::
Show your configured integrations. Provides the ID of each integration
which can be used with the remove option.
--remove::
Remove a specific integration. Provide the ID of the integration to
be removed.
AUTHOR AUTHOR
------ ------

View file

@@ -96,7 +96,7 @@ Show help page and exit

 *-i* 'PATH', *--inventory=*'PATH'::

 The 'PATH' to the inventory, which defaults to '/etc/ansible/hosts'.
-Alternatively you can use a comma separated list of hosts or single host with traling comma 'host,'.
+Alternatively, you can use a comma-separated list of hosts or a single host with a trailing comma 'host,'.

 *-l* 'SUBSET', *--limit=*'SUBSET'::

View file

@@ -95,6 +95,10 @@ Force running of playbook even if unable to update playbook repository. This
 can be useful, for example, to enforce run-time state when a network
 connection may not always be up or possible.

+*--full*::
+
+Do a full clone of the repository. By default ansible-pull will do a shallow clone based on the last revision.
+
 *-h*, *--help*::

 Show the help message and exit.

View file

@@ -20,6 +20,8 @@ viewdocs: clean staticmin
 htmldocs: staticmin
	./build-site.py rst

+webdocs: htmldocs
+
 clean:
	-rm -rf htmlout
	-rm -f .buildinfo
@@ -43,4 +45,4 @@ modules: $(FORMATTER) ../hacking/templates/rst.j2
	PYTHONPATH=../lib $(FORMATTER) -t rst --template-dir=../hacking/templates --module-dir=../lib/ansible/modules -o rst/

 staticmin:
-	cat _themes/srtd/static/css/theme.css | sed -e 's/^[ \t]*//g; s/[ \t]*$$//g; s/\([:{;,]\) /\1/g; s/ {/{/g; s/\/\*.*\*\///g; /^$$/d' | sed -e :a -e '$$!N; s/\n\(.\)/\1/; ta' > _themes/srtd/static/css/theme.min.css
+	cat _themes/srtd/static/css/theme.css | sed -e 's/^[ ]*//g; s/[ ]*$$//g; s/\([:{;,]\) /\1/g; s/ {/{/g; s/\/\*.*\*\///g; /^$$/d' | sed -e :a -e '$$!N; s/\n\(.\)/\1/; ta' > _themes/srtd/static/css/theme.min.css

View file

@@ -12,8 +12,17 @@
 <hr/>

+<script type="text/javascript">
+  (function(w,d,t,u,n,s,e){w['SwiftypeObject']=n;w[n]=w[n]||function(){
+  (w[n].q=w[n].q||[]).push(arguments);};s=d.createElement(t);
+  e=d.getElementsByTagName(t)[0];s.async=1;s.src=u;e.parentNode.insertBefore(s,e);
+  })(window,document,'script','//s.swiftypecdn.com/install/v2/st.js','_st');
+
+  _st('install','yABGvz2N8PwcwBxyfzUc','2.0.0');
+</script>
+
 <p>
-&copy; Copyright 2015 <a href="http://ansible.com">Ansible, Inc.</a>.
+&copy; Copyright 2016 <a href="http://ansible.com">Ansible, Inc.</a>.
 {%- if last_updated %}
   {% trans last_updated=last_updated|e %}Last updated on {{ last_updated }}.{% endtrans %}

View file

@@ -150,11 +150,6 @@
       </a>
     </div>

-    <div class="wy-side-nav-search" style="background-color:#5bbdbf;height=80px;margin:'auto auto auto auto'">
-      <!-- <a href="{{ pathto(master_doc) }}" class="icon icon-home"> {{ project }}</a> -->
-      {% include "searchbox.html" %}
-    </div>
-
     <div id="menu-id" class="wy-menu wy-menu-vertical" data-spy="affix">
       {% set toctree = toctree(maxdepth=2, collapse=False) %}
       {% if toctree %}
@@ -166,16 +161,9 @@
     <!-- changeable widget -->
     <center>
       <br/>
-      <span class="hs-cta-wrapper" id="hs-cta-wrapper-71d47584-8ef5-4b06-87ae-8d25bc2a837e">
-        <span class="hs-cta-node hs-cta-71d47584-8ef5-4b06-87ae-8d25bc2a837e" id="hs-cta-71d47584-8ef5-4b06-87ae-8d25bc2a837e">
-          <!--[if lte IE 8]><div id="hs-cta-ie-element"></div><![endif]-->
-          <a href="http://cta-redirect.hubspot.com/cta/redirect/330046/71d47584-8ef5-4b06-87ae-8d25bc2a837e"><img class="hs-cta-img" id="hs-cta-img-71d47584-8ef5-4b06-87ae-8d25bc2a837e" style="border-width:0px;" src="https://no-cache.hubspot.com/cta/default/330046/71d47584-8ef5-4b06-87ae-8d25bc2a837e.png" /></a>
-        </span>
-        <script charset="utf-8" src="https://js.hscta.net/cta/current.js"></script>
-        <script type="text/javascript">
-          hbspt.cta.load(330046, '71d47584-8ef5-4b06-87ae-8d25bc2a837e');
-        </script>
-      </span>
+      <a href="http://www.ansible.com/docs-left?utm_source=docs">
+        <img style="border-width:0px;" src="https://cdn2.hubspot.net/hubfs/330046/docs-graphics/ASB-docs-left-rail.png" />
+      </a>
     </center>

@@ -196,15 +184,17 @@
   <div class="wy-nav-content">
     <div class="rst-content">
-      <!-- Tower ads -->
-      <a class="DocSiteBanner" href="http://www.ansible.com/tower?utm_source=docs">
-        <div class="DocSiteBanner-imgWrapper">
-          <img src="{{ pathto('_static/', 1) }}images/banner_ad_1.png">
-        </div>
-        <div class="DocSiteBanner-imgWrapper">
-          <img src="{{ pathto('_static/', 1) }}images/banner_ad_2.png">
-        </div>
-      </a>
+      <!-- Banner ads -->
+      <div class="DocSiteBanner">
+        <a class="DocSiteBanner-imgWrapper"
+           href="http://www.ansible.com/docs-top?utm_source=docs">
+          <img src="https://cdn2.hubspot.net/hubfs/330046/docs-graphics/ASB-docs-top-left.png">
+        </a>
+        <a class="DocSiteBanner-imgWrapper"
+           href="http://www.ansible.com/docs-top?utm_source=docs">
+          <img src="https://cdn2.hubspot.net/hubfs/330046/docs-graphics/ASB-docs-top-right.png">
+        </a>
+      </div>

       {% include "breadcrumbs.html" %}
       <div id="page-content">

View file

@@ -1,205 +0,0 @@
{#
basic/layout.html
~~~~~~~~~~~~~~~~~
Master layout template for Sphinx themes.
:copyright: Copyright 2007-2013 by the Sphinx team, see AUTHORS.
:license: BSD, see LICENSE for details.
#}
{%- block doctype -%}
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
{%- endblock %}
{%- set reldelim1 = reldelim1 is not defined and ' &raquo;' or reldelim1 %}
{%- set reldelim2 = reldelim2 is not defined and ' |' or reldelim2 %}
{%- set render_sidebar = (not embedded) and (not theme_nosidebar|tobool) and
(sidebars != []) %}
{%- set url_root = pathto('', 1) %}
{# XXX necessary? #}
{%- if url_root == '#' %}{% set url_root = '' %}{% endif %}
{%- if not embedded and docstitle %}
{%- set titlesuffix = " &mdash; "|safe + docstitle|e %}
{%- else %}
{%- set titlesuffix = "" %}
{%- endif %}
{%- macro relbar() %}
<div class="related">
<h3>{{ _('Navigation') }}</h3>
<ul>
{%- for rellink in rellinks %}
<li class="right" {% if loop.first %}style="margin-right: 10px"{% endif %}>
<a href="{{ pathto(rellink[0]) }}" title="{{ rellink[1]|striptags|e }}"
{{ accesskey(rellink[2]) }}>{{ rellink[3] }}</a>
{%- if not loop.first %}{{ reldelim2 }}{% endif %}</li>
{%- endfor %}
{%- block rootrellink %}
<li><a href="{{ pathto(master_doc) }}">{{ shorttitle|e }}</a>{{ reldelim1 }}</li>
{%- endblock %}
{%- for parent in parents %}
<li><a href="{{ parent.link|e }}" {% if loop.last %}{{ accesskey("U") }}{% endif %}>{{ parent.title }}</a>{{ reldelim1 }}</li>
{%- endfor %}
{%- block relbaritems %} {% endblock %}
</ul>
</div>
{%- endmacro %}
{%- macro sidebar() %}
{%- if render_sidebar %}
<div class="sphinxsidebar">
<div class="sphinxsidebarwrapper">
{%- block sidebarlogo %}
{%- if logo %}
<p class="logo"><a href="{{ pathto(master_doc) }}">
<img class="logo" src="{{ pathto('_static/' + logo, 1) }}" alt="Logo"/>
</a></p>
{%- endif %}
{%- endblock %}
{%- if sidebars != None %}
{#- new style sidebar: explicitly include/exclude templates #}
{%- for sidebartemplate in sidebars %}
{%- include sidebartemplate %}
{%- endfor %}
{%- else %}
{#- old style sidebars: using blocks -- should be deprecated #}
{%- block sidebartoc %}
{%- include "localtoc.html" %}
{%- endblock %}
{%- block sidebarrel %}
{%- include "relations.html" %}
{%- endblock %}
{%- block sidebarsourcelink %}
{%- include "sourcelink.html" %}
{%- endblock %}
{%- if customsidebar %}
{%- include customsidebar %}
{%- endif %}
{%- block sidebarsearch %}
{%- include "searchbox.html" %}
{%- endblock %}
{%- endif %}
</div>
</div>
{%- endif %}
{%- endmacro %}
{%- macro script() %}
<script type="text/javascript">
var DOCUMENTATION_OPTIONS = {
URL_ROOT: '{{ url_root }}',
VERSION: '{{ release|e }}',
COLLAPSE_INDEX: false,
FILE_SUFFIX: '{{ '' if no_search_suffix else file_suffix }}',
HAS_SOURCE: {{ has_source|lower }}
};
</script>
{%- for scriptfile in script_files %}
<script type="text/javascript" src="{{ pathto(scriptfile, 1) }}"></script>
{%- endfor %}
{%- endmacro %}
{%- macro css() %}
<link rel="stylesheet" href="{{ pathto('_static/' + style, 1) }}" type="text/css" />
<link rel="stylesheet" href="{{ pathto('_static/pygments.css', 1) }}" type="text/css" />
{%- for cssfile in css_files %}
<link rel="stylesheet" href="{{ pathto(cssfile, 1) }}" type="text/css" />
{%- endfor %}
{%- endmacro %}
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset={{ encoding }}" />
{{ metatags }}
{%- block htmltitle %}
<title>{{ title|striptags|e }}{{ titlesuffix }}</title>
{%- endblock %}
{{ css() }}
{%- if not embedded %}
{{ script() }}
{%- if use_opensearch %}
<link rel="search" type="application/opensearchdescription+xml"
title="{% trans docstitle=docstitle|e %}Search within {{ docstitle }}{% endtrans %}"
href="{{ pathto('_static/opensearch.xml', 1) }}"/>
{%- endif %}
{%- if favicon %}
<link rel="shortcut icon" href="{{ pathto('_static/' + favicon, 1) }}"/>
{%- endif %}
{%- endif %}
{%- block linktags %}
{%- if hasdoc('about') %}
<link rel="author" title="{{ _('About these documents') }}" href="{{ pathto('about') }}" />
{%- endif %}
{%- if hasdoc('genindex') %}
<link rel="index" title="{{ _('Index') }}" href="{{ pathto('genindex') }}" />
{%- endif %}
{%- if hasdoc('search') %}
<link rel="search" title="{{ _('Search') }}" href="{{ pathto('search') }}" />
{%- endif %}
{%- if hasdoc('copyright') %}
<link rel="copyright" title="{{ _('Copyright') }}" href="{{ pathto('copyright') }}" />
{%- endif %}
<link rel="top" title="{{ docstitle|e }}" href="{{ pathto('index') }}" />
{%- if parents %}
<link rel="up" title="{{ parents[-1].title|striptags|e }}" href="{{ parents[-1].link|e }}" />
{%- endif %}
{%- if next %}
<link rel="next" title="{{ next.title|striptags|e }}" href="{{ next.link|e }}" />
{%- endif %}
{%- if prev %}
<link rel="prev" title="{{ prev.title|striptags|e }}" href="{{ prev.link|e }}" />
{%- endif %}
{%- endblock %}
{%- block extrahead %} {% endblock %}
</head>
<body>
{%- block header %}{% endblock %}
{%- block relbar1 %}{{ relbar() }}{% endblock %}
{%- block content %}
{%- block sidebar1 %} {# possible location for sidebar #} {% endblock %}
<div class="document">
{%- block document %}
<div class="documentwrapper">
{%- if render_sidebar %}
<div class="bodywrapper">
{%- endif %}
<div class="body">
{% block body %} {% endblock %}
</div>
{%- if render_sidebar %}
</div>
{%- endif %}
</div>
{%- endblock %}
{%- block sidebar2 %}{{ sidebar() }}{% endblock %}
<div class="clearer"></div>
</div>
{%- endblock %}
{%- block relbar2 %}{{ relbar() }}{% endblock %}
{%- block footer %}
<div class="footer">
{%- if show_copyright %}
{%- if hasdoc('copyright') %}
{% trans path=pathto('copyright'), copyright=copyright|e %}&copy; <a href="{{ path }}">Copyright</a> {{ copyright }}.{% endtrans %}
{%- else %}
{% trans copyright=copyright|e %}&copy; Copyright {{ copyright }}.{% endtrans %}
{%- endif %}
{%- endif %}
{%- if last_updated %}
{% trans last_updated=last_updated|e %}Last updated on {{ last_updated }}.{% endtrans %}
{%- endif %}
{%- if show_sphinx %}
{% trans sphinx_version=sphinx_version|e %}Created using <a href="http://sphinx-doc.org/">Sphinx</a> {{ sphinx_version }}.{% endtrans %}
{%- endif %}
</div>
<p>asdf asdf asdf asdf 22</p>
{%- endblock %}
</body>
</html>

View file

@@ -1,61 +0,0 @@
<!-- <form class="wy-form" action="{{ pathto('search') }}" method="get">
<input type="text" name="q" placeholder="Search docs" />
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
</form> -->
<script>
(function() {
var cx = '006019874985968165468:eu5pbnxp4po';
var gcse = document.createElement('script');
gcse.type = 'text/javascript';
gcse.async = true;
gcse.src = (document.location.protocol == 'https:' ? 'https:' : 'http:') +
'//www.google.com/cse/cse.js?cx=' + cx;
var s = document.getElementsByTagName('script')[0];
s.parentNode.insertBefore(gcse, s);
})();
</script>
<form id="search-form-id" action="">
<input type="text" name="query" id="search-box-id" />
<a class="search-reset-start" id="search-reset"><i class="fa fa-times"></i></a>
<a class="search-reset-start" id="search-start"><i class="fa fa-search"></i></a>
</form>
<script type="text/javascript" src="http://www.google.com/cse/brand?form=search-form-id&inputbox=search-box-id"></script>
<script>
function executeQuery() {
var input = document.getElementById('search-box-id');
var element = google.search.cse.element.getElement('searchresults-only0');
element.resultsUrl = '/htmlout/search.html'
if (input.value == '') {
element.clearAllResults();
$('#page-content, .rst-footer-buttons, #search-start').show();
$('#search-results, #search-reset').hide();
} else {
$('#page-content, .rst-footer-buttons, #search-start').hide();
$('#search-results, #search-reset').show();
element.execute(input.value);
}
return false;
}
$('#search-reset').hide();
$('#search-box-id').css('background-position', '1em center');
$('#search-box-id').on('blur', function() {
$('#search-box-id').css('background-position', '1em center');
});
$('#search-start').click(function(e) { executeQuery(); });
$('#search-reset').click(function(e) { $('#search-box-id').val(''); executeQuery(); });
$('#search-form-id').submit(function(e) {
console.log('submitting!');
executeQuery();
e.preventDefault();
});
</script>

View file

@@ -4723,33 +4723,16 @@ span[id*='MathJax-Span'] {
   padding: 0.4045em 1.618em;
 }

 .DocSiteBanner {
-  width: 100%;
   display: flex;
   display: -webkit-flex;
-  justify-content: center;
-  -webkit-justify-content: center;
   flex-wrap: wrap;
   -webkit-flex-wrap: wrap;
+  justify-content: space-between;
+  -webkit-justify-content: space-between;
+  background-color: #ff5850;
   margin-bottom: 25px;
 }

 .DocSiteBanner-imgWrapper {
   max-width: 100%;
 }
-
-@media screen and (max-width: 1403px) {
-  .DocSiteBanner {
-    width: 100%;
-    display: flex;
-    display: -webkit-flex;
-    flex-wrap: wrap;
-    -webkit-flex-wrap: wrap;
-    justify-content: center;
-    -webkit-justify-content: center;
-    background-color: #fff;
-    margin-bottom: 25px;
-  }
-}

Binary file not shown (deleted; 4.4 KiB before).

Binary file not shown (deleted; 4.8 KiB before).

View file

@@ -15,6 +15,7 @@
 #
 # You should have received a copy of the GNU General Public License
 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
+from __future__ import print_function

 __docformat__ = 'restructuredtext'

@@ -24,9 +25,9 @@ import traceback
 try:
     from sphinx.application import Sphinx
 except ImportError:
-    print "#################################"
-    print "Dependency missing: Python Sphinx"
-    print "#################################"
+    print("#################################")
+    print("Dependency missing: Python Sphinx")
+    print("#################################")
     sys.exit(1)
 import os

@@ -40,7 +41,7 @@ class SphinxBuilder(object):
     """
     Run the DocCommand.
     """
-    print "Creating html documentation ..."
+    print("Creating html documentation ...")

     try:
         buildername = 'html'

@@ -69,10 +70,10 @@ class SphinxBuilder(object):
         app.builder.build_all()

-    except ImportError, ie:
+    except ImportError:
         traceback.print_exc()
-    except Exception, ex:
-        print >> sys.stderr, "FAIL! exiting ... (%s)" % ex
+    except Exception as ex:
+        print("FAIL! exiting ... (%s)" % ex, file=sys.stderr)

     def build_docs(self):
         self.app.builder.build_all()

@@ -83,9 +84,9 @@ def build_rst_docs():
 if __name__ == '__main__':
     if '-h' in sys.argv or '--help' in sys.argv:
-        print "This script builds the html documentation from rst/asciidoc sources.\n"
-        print "    Run 'make docs' to build everything."
-        print "    Run 'make viewdocs' to build and then preview in a web browser."
+        print("This script builds the html documentation from rst/asciidoc sources.\n")
+        print("    Run 'make docs' to build everything.")
+        print("    Run 'make viewdocs' to build and then preview in a web browser.")
         sys.exit(0)

     build_rst_docs()

@@ -93,4 +94,4 @@ if __name__ == '__main__':
     if "view" in sys.argv:
         import webbrowser
         if not webbrowser.open('htmlout/index.html'):
-            print >> sys.stderr, "Could not open on your webbrowser."
+            print("Could not open on your webbrowser.", file=sys.stderr)

View file

@ -20,52 +20,52 @@ Each item in the list is a list of key/value pairs, commonly
called a "hash" or a "dictionary". So, we need to know how called a "hash" or a "dictionary". So, we need to know how
to write lists and dictionaries in YAML. to write lists and dictionaries in YAML.
There's another small quirk to YAML. All YAML files (regardless of their association with There's another small quirk to YAML. All YAML files (regardless of their association with Ansible or not) can optionally
Ansible or not) should begin with ``---``. This is part of the YAML begin with ``---`` and end with ``...``. This is part of the YAML format and indicates the start and end of a document.
format and indicates the start of a document.
All members of a list are lines beginning at the same indentation level starting All members of a list are lines beginning at the same indentation level starting with a ``"- "`` (a dash and a space)::
with a ``"- "`` (a dash and a space)::
--- ---
# A list of tasty fruits # A list of tasty fruits
- Apple fruits:
- Orange - Apple
- Strawberry - Orange
- Mango - Strawberry
- Mango
...
A dictionary is represented in a simple ``key: value`` form (the colon must be followed by a space):: A dictionary is represented in a simple ``key: value`` form (the colon must be followed by a space)::
---
# An employee record # An employee record
name: Example Developer - martin:
job: Developer name: Martin D'vloper
skill: Elite job: Developer
skill: Elite
Dictionaries can also be represented in an abbreviated form if you really want to:: Dictionaries and lists can also be represented in an abbreviated form if you really want to::
--- ---
# An employee record employees:
{name: Example Developer, job: Developer, skill: Elite} - martin: {name: Martin D'vloper, job: Developer, skill: Elite}
fruits: ['Apple', 'Orange', 'Strawberry', 'Mango']
.. _truthiness: .. _truthiness:
Ansible doesn't really use these too much, but you can also specify a Ansible doesn't really use these too much, but you can also specify a boolean value (true/false) in several forms::
boolean value (true/false) in several forms::
---
create_key: yes create_key: yes
needs_agent: no needs_agent: no
knows_oop: True knows_oop: True
likes_emacs: TRUE likes_emacs: TRUE
uses_cvs: false uses_cvs: false
Let's combine what we learned so far in an arbitrary YAML example. This really
has nothing to do with Ansible, but will give you a feel for the format:: Let's combine what we learned so far in an arbitrary YAML example.
This really has nothing to do with Ansible, but will give you a feel for the format::
--- ---
# An employee record # An employee record
name: Example Developer name: Martin D'vloper
job: Developer job: Developer
skill: Elite skill: Elite
employed: True employed: True
@ -79,8 +79,7 @@ has nothing to do with Ansible, but will give you a feel for the format::
python: Elite python: Elite
dotnet: Lame dotnet: Lame
That's all you really need to know about YAML to start writing That's all you really need to know about YAML to start writing `Ansible` playbooks.
`Ansible` playbooks.
Gotchas Gotchas
------- -------
@ -100,6 +99,14 @@ with a "{", YAML will think it is a dictionary, so you must quote it, like so::
foo: "{{ variable }}" foo: "{{ variable }}"
The same applies for strings that start with or contain any YAML special characters `` [] {} : > | ``.
Boolean conversion is helpful, but this can be a problem when you want a literal `yes` or another boolean value as a string.
In these cases just use quotes::
non_boolean: "yes"
other_string: "False"
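To see this conversion in action outside Ansible (a hedged aside, not part of the original docs), you can feed the same strings to PyYAML, the parser Ansible itself uses::

    # Illustration only: unquoted booleans are converted, quoted strings are preserved
    import yaml  # PyYAML, assumed available since Ansible itself depends on it

    print(yaml.safe_load("create_key: yes"))     # {'create_key': True}
    print(yaml.safe_load('non_boolean: "yes"'))  # {'non_boolean': 'yes'}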
.. seealso:: .. seealso::
View file
@ -1,5 +1,5 @@
Ansible Privilege Escalation Become (Privilege Escalation)
++++++++++++++++++++++++++++ +++++++++++++++++++++++++++++
Ansible can use existing privilege escalation systems to allow a user to execute tasks as another. Ansible can use existing privilege escalation systems to allow a user to execute tasks as another.
@ -7,17 +7,17 @@ Ansible can use existing privilege escalation systems to allow a user to execute
Become Become
`````` ``````
Before 1.9 Ansible mostly allowed the use of sudo and a limited use of su to allow a login/remote user to become a different user Before 1.9 Ansible mostly allowed the use of `sudo` and a limited use of `su` to allow a login/remote user to become a different user
and execute tasks, create resources with the 2nd user's permissions. As of 1.9 'become' supersedes the old sudo/su, while still and execute tasks, create resources with the 2nd user's permissions. As of 1.9 `become` supersedes the old sudo/su, while still
being backwards compatible. This new system also makes it easier to add other privilege escalation tools like pbrun (Powerbroker), being backwards compatible. This new system also makes it easier to add other privilege escalation tools like `pbrun` (Powerbroker),
pfexec and others. `pfexec` and others.
New directives New directives
-------------- --------------
become become
equivalent to adding 'sudo:' or 'su:' to a play or task, set to 'true'/'yes' to activate privilege escalation equivalent to adding `sudo:` or `su:` to a play or task, set to 'true'/'yes' to activate privilege escalation
become_user become_user
equivalent to adding 'sudo_user:' or 'su_user:' to a play or task, set to user with desired privileges equivalent to adding 'sudo_user:' or 'su_user:' to a play or task, set to user with desired privileges
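For API users, a hedged aside: these directives correspond to fields on the ``Options`` namedtuple used by the 2.0 Python API example elsewhere in these docs, e.g.::

    # Illustrative only: the same escalation knobs via the 2.0 Python API
    options = options._replace(become=True, become_method='sudo', become_user='root')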
View file
@ -11,6 +11,7 @@ Learn how to build modules of your own in any language, and also how to extend A
developing_modules developing_modules
developing_plugins developing_plugins
developing_test_pr developing_test_pr
developing_releases
Developers will also likely be interested in the fully-discoverable API in :doc:`tower`. It's great for embedding Ansible in all manner of applications. Developers will also likely be interested in the fully-discoverable API in :doc:`tower`. It's great for embedding Ansible in all manner of applications.
View file
@ -6,7 +6,7 @@ Python API
There are several interesting ways to use Ansible from an API perspective. You can use There are several interesting ways to use Ansible from an API perspective. You can use
the Ansible python API to control nodes, you can extend Ansible to respond to various python events, you can the Ansible python API to control nodes, you can extend Ansible to respond to various python events, you can
write various plugins, and you can plug in inventory data from external data sources. This document write various plugins, and you can plug in inventory data from external data sources. This document
covers the Runner and Playbook API at a basic level. covers the execution and Playbook API at a basic level.
If you are looking to use Ansible programmatically from something other than Python, trigger events asynchronously, If you are looking to use Ansible programmatically from something other than Python, trigger events asynchronously,
or have access control and logging demands, take a look at :doc:`tower` or have access control and logging demands, take a look at :doc:`tower`
@ -17,11 +17,69 @@ This chapter discusses the Python API.
.. _python_api: .. _python_api:
Python API The Python API is very powerful, and is how all the ansible CLI tools are implemented.
---------- In version 2.0 much of the core ansible code was rewritten, and the API along with it.
The Python API is very powerful, and is how the ansible CLI and ansible-playbook .. note:: Ansible relies on forking processes, as such the API is not thread safe.
are implemented.
.. _python_api_20:
Python API 2.0
--------------
In 2.0 things get a bit more complicated to start, but you end up with much more discrete and readable classes::
#!/usr/bin/python2
from collections import namedtuple
from ansible.parsing.dataloader import DataLoader
from ansible.vars import VariableManager
from ansible.inventory import Inventory
from ansible.playbook.play import Play
from ansible.executor.task_queue_manager import TaskQueueManager
Options = namedtuple('Options', ['connection','module_path', 'forks', 'remote_user', 'private_key_file', 'ssh_common_args', 'ssh_extra_args', 'sftp_extra_args', 'scp_extra_args', 'become', 'become_method', 'become_user', 'verbosity', 'check'])
# initialize needed objects
variable_manager = VariableManager()
loader = DataLoader()
options = Options(connection='local', module_path='/path/to/mymodules', forks=100, remote_user=None, private_key_file=None, ssh_common_args=None, ssh_extra_args=None, sftp_extra_args=None, scp_extra_args=None, become=None, become_method=None, become_user=None, verbosity=None, check=False)
passwords = dict(vault_pass='secret')
# create inventory and pass to var manager
inventory = Inventory(loader=loader, variable_manager=variable_manager, host_list='localhost')
variable_manager.set_inventory(inventory)
# create play with tasks
play_source = dict(
name = "Ansible Play",
hosts = 'localhost',
gather_facts = 'no',
tasks = [ dict(action=dict(module='debug', args=dict(msg='Hello Galaxy!'))) ]
)
play = Play().load(play_source, variable_manager=variable_manager, loader=loader)
# actually run it
tqm = None
try:
tqm = TaskQueueManager(
inventory=inventory,
variable_manager=variable_manager,
loader=loader,
options=options,
passwords=passwords,
stdout_callback='default',
)
result = tqm.run(play)
finally:
if tqm is not None:
tqm.cleanup()
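``tqm.run()`` hands back an integer result code (in 2.0 at least), so a minimal hedged check of the ``result`` captured above might look like::

    # 0 means the play completed cleanly; non-zero indicates failed or
    # unreachable hosts (see the TaskQueueManager source for the exact codes)
    if result != 0:
        print("Play did not complete successfully (return code %s)" % result)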
.. _python_api_old:
Python API pre 2.0
------------------
It's pretty simple:: It's pretty simple::
@ -51,7 +109,7 @@ expressed in the :doc:`modules` documentation.::
A module can return any type of JSON data it wants, so Ansible can A module can return any type of JSON data it wants, so Ansible can
be used as a framework to rapidly build powerful applications and scripts. be used as a framework to rapidly build powerful applications and scripts.
.. _detailed_api_example: .. _detailed_api_old_example:
Detailed API Example Detailed API Example
```````````````````` ````````````````````
@ -87,9 +145,9 @@ The following script prints out the uptime information for all hosts::
for (hostname, result) in results['dark'].items(): for (hostname, result) in results['dark'].items():
print "%s >>> %s" % (hostname, result) print "%s >>> %s" % (hostname, result)
Advanced programmers may also wish to read the source to ansible itself, for Advanced programmers may also wish to read the source to ansible itself,
it uses the Runner() API (with all available options) to implement the for it uses the API (with all available options) to implement the ``ansible``
command line tools ``ansible`` and ``ansible-playbook``. command line tools (``lib/ansible/cli/``).
.. seealso:: .. seealso::
View file
@ -191,7 +191,7 @@ a lot shorter than this::
Let's test that module:: Let's test that module::
ansible/hacking/test-module -m ./time -a "time=\"March 14 12:23\"" ansible/hacking/test-module -m ./timetest.py -a "time=\"March 14 12:23\""
This should return something like:: This should return something like::
@ -219,7 +219,7 @@ this, just have the module return a `ansible_facts` key, like so, along with oth
} }
These 'facts' will be available to all statements called after that module (but not before) in the playbook. These 'facts' will be available to all statements called after that module (but not before) in the playbook.
A good idea might be make a module called 'site_facts' and always call it at the top of each playbook, though A good idea might be to make a module called 'site_facts' and always call it at the top of each playbook, though
we're always open to improving the selection of core facts in Ansible as well. we're always open to improving the selection of core facts in Ansible as well.
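As a hedged sketch of the module-code side (the ``module`` object is the ``AnsibleModule`` instance described in the boilerplate section below; fact names here are purely illustrative)::

    # Inside a hypothetical 'site_facts' module: hand facts back to the play
    module.exit_json(changed=False, ansible_facts=dict(
        datacenter='dc1',         # illustrative fact name, not a real Ansible fact
        owning_team='webteam',
    ))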
.. _common_module_boilerplate: .. _common_module_boilerplate:
@ -247,7 +247,7 @@ And instantiating the module class like::
argument_spec = dict( argument_spec = dict(
state = dict(default='present', choices=['present', 'absent']), state = dict(default='present', choices=['present', 'absent']),
name = dict(required=True), name = dict(required=True),
enabled = dict(required=True, choices=BOOLEANS), enabled = dict(required=True, type='bool'),
something = dict(aliases=['whatever']) something = dict(aliases=['whatever'])
) )
) )
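Once instantiated, ``module.params`` holds the parsed, type-cast arguments. A minimal hedged sketch of consuming the spec above (the branching is hypothetical)::

    # With type='bool', 'enabled' arrives as a real Python boolean
    state = module.params['state']      # 'present' or 'absent'
    enabled = module.params['enabled']

    if state == 'present':
        module.exit_json(changed=True, name=module.params['name'], enabled=enabled)
    module.exit_json(changed=False)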
@ -335,7 +335,7 @@ and guidelines:
* If you have a company module that returns facts specific to your installations, a good name for this module is `site_facts`. * If you have a company module that returns facts specific to your installations, a good name for this module is `site_facts`.
* Modules accepting boolean status should generally accept 'yes', 'no', 'true', 'false', or anything else a user may likely throw at them. The AnsibleModule common code supports this with "choices=BOOLEANS" and a module.boolean(value) casting function. * Modules accepting boolean status should generally accept 'yes', 'no', 'true', 'false', or anything else a user may likely throw at them. The AnsibleModule common code supports this with "type='bool'".
* Include a minimum of dependencies if possible. If there are dependencies, document them at the top of the module file, and have the module raise JSON error messages when the import fails. * Include a minimum of dependencies if possible. If there are dependencies, document them at the top of the module file, and have the module raise JSON error messages when the import fails.
@ -347,7 +347,7 @@ and guidelines:
* In the event of failure, a key of 'failed' should be included, along with a string explanation in 'msg'. Modules that raise tracebacks (stacktraces) are generally considered 'poor' modules, though Ansible can deal with these returns and will automatically convert anything unparseable into a failed result. If you are using the AnsibleModule common Python code, the 'failed' element will be included for you automatically when you call 'fail_json'. * In the event of failure, a key of 'failed' should be included, along with a string explanation in 'msg'. Modules that raise tracebacks (stacktraces) are generally considered 'poor' modules, though Ansible can deal with these returns and will automatically convert anything unparseable into a failed result. If you are using the AnsibleModule common Python code, the 'failed' element will be included for you automatically when you call 'fail_json'.
* Return codes from modules are not actually not significant, but continue on with 0=success and non-zero=failure for reasons of future proofing. * Return codes from modules are actually not significant, but continue on with 0=success and non-zero=failure for reasons of future proofing.
* As results from many hosts will be aggregated at once, modules should return only relevant output. Returning the entire contents of a log file is generally bad form. * As results from many hosts will be aggregated at once, modules should return only relevant output. Returning the entire contents of a log file is generally bad form.
@ -479,9 +479,10 @@ Module checklist
```````````````` ````````````````
* The shebang should always be #!/usr/bin/python, this allows ansible_python_interpreter to work * The shebang should always be #!/usr/bin/python, this allows ansible_python_interpreter to work
* Modules must be written to support Python 2.4. If this is not possible, the required minimum Python version and the rationale should be explained in the requirements section in DOCUMENTATION.
* Documentation: Make sure it exists * Documentation: Make sure it exists
* `required` should always be present, be it true or false * `required` should always be present, be it true or false
* If `required` is false you need to document `default`, even if the default is 'None' (which is the default if no parameter is supplied). Make sure default parameter in docs matches default parameter in code. * If `required` is false you need to document `default`, even if the default is 'null' (which is the default if no parameter is supplied). Make sure default parameter in docs matches default parameter in code.
* `default` is not needed for `required: true` * `default` is not needed for `required: true`
* Remove unnecessary doc like `aliases: []` or `choices: []` * Remove unnecessary doc like `aliases: []` or `choices: []`
* The version is not a float number and its value should be the current development version
@ -538,24 +539,34 @@ Windows modules checklist
#!powershell #!powershell
then:: then::
<GPL header> <GPL header>
then::
then::
# WANT_JSON # WANT_JSON
# POWERSHELL_COMMON # POWERSHELL_COMMON
then, to parse all arguments into a variable modules generally use:: then, to parse all arguments into a variable modules generally use::
$params = Parse-Args $args $params = Parse-Args $args
* Arguments: * Arguments:
* Try and use state present and state absent like other modules * Try and use state present and state absent like other modules
* You need to check that all your mandatory args are present. You can do this using the builtin Get-AnsibleParam function. * You need to check that all your mandatory args are present. You can do this using the builtin Get-AnsibleParam function.
* Required arguments:: * Required arguments::
$package = Get-AnsibleParam -obj $params -name name -failifempty $true $package = Get-AnsibleParam -obj $params -name name -failifempty $true
* Required arguments with name validation:: * Required arguments with name validation::
$state = Get-AnsibleParam -obj $params -name "State" -ValidateSet "Present","Absent" -resultobj $resultobj -failifempty $true $state = Get-AnsibleParam -obj $params -name "State" -ValidateSet "Present","Absent" -resultobj $resultobj -failifempty $true
* Optional arguments with name validation:: * Optional arguments with name validation::
$state = Get-AnsibleParam -obj $params -name "State" -default "Present" -ValidateSet "Present","Absent" $state = Get-AnsibleParam -obj $params -name "State" -default "Present" -ValidateSet "Present","Absent"
* If "FailIfEmpty" is true, the resultobj parameter is used to specify the object returned to fail-json. You can also override the default message * If "FailIfEmpty" is true, the resultobj parameter is used to specify the object returned to fail-json. You can also override the default message
using $emptyattributefailmessage (for missing required attributes) and $ValidateSetErrorMessage (for attribute validation errors) using $emptyattributefailmessage (for missing required attributes) and $ValidateSetErrorMessage (for attribute validation errors)
* Look at existing modules for more examples of argument checking. * Look at existing modules for more examples of argument checking.
@ -586,7 +597,7 @@ Starting in 1.8 you can deprecate modules by renaming them with a preceding _, i
_old_cloud.py, This will keep the module available but hide it from the primary docs and listing. _old_cloud.py, This will keep the module available but hide it from the primary docs and listing.
You can also rename modules and keep an alias to the old name by using a symlink that starts with _. You can also rename modules and keep an alias to the old name by using a symlink that starts with _.
This example allows the stat module to be called with fileinfo, making the following examples equivalent This example allows the stat module to be called with fileinfo, making the following examples equivalent::
EXAMPLES = ''' EXAMPLES = '''
ln -s stat.py _fileinfo.py ln -s stat.py _fileinfo.py
View file
@ -0,0 +1,48 @@
Releases
========
.. contents:: Topics
:local:
.. _schedule:
Release Schedule
````````````````
Ansible is on a 'flexible' 4 month release schedule; sometimes this can be extended if there is a major change that requires a longer cycle (e.g. the 2.0 core rewrite).
Currently modules get released at the same time as the main Ansible repo, even though they are separated into ansible-modules-core and ansible-modules-extras.
The major features and bugs fixed in a release should be reflected in the CHANGELOG.md; minor ones will be in the commit history (FIXME: add git example to list).
When a fix/feature gets added to the `devel` branch it will be part of the next release; some bugfixes can be backported to previous releases and might be part of a minor point release if deemed necessary.
Sometimes an RC can be extended by a few days if a bugfix makes a change that can have far-reaching consequences, so users have enough time to find any new issues that may stem from it.
.. _methods:
Release methods
````````````````
Ansible normally goes through a 'release candidate' process, issuing an RC1 for a release; if no major bugs are discovered in it after 5 business days we'll make the final release.
Otherwise fixes will be applied and an RC2 will be provided for testing; if no bugs appear after 2 days, the final release will be made, iterating this last step and incrementing the candidate number as we find major bugs.
.. _freezing:
Release feature freeze
``````````````````````
During the release candidate process, the focus will be on bugfixes that affect the RC; new features will be delayed while we try to produce a final version. Some bugfixes that are minor or don't affect the RC will also be postponed until after the release is finalized.
.. seealso::
:doc:`developing_api`
Python API to Playbooks and Ad Hoc Task Execution
:doc:`developing_modules`
How to develop modules
:doc:`developing_plugins`
How to develop plugins
`Ansible Tower <http://ansible.com/ansible-tower>`_
REST API endpoint and GUI for Ansible, syncs with dynamic inventory
`Development Mailing List <http://groups.google.com/group/ansible-devel>`_
Mailing list for development topics
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel
View file
@ -81,27 +81,34 @@ and destination repositories. It will look something like this::
Someuser wants to merge 1 commit into ansible:devel from someuser:feature_branch_name Someuser wants to merge 1 commit into ansible:devel from someuser:feature_branch_name
.. note:: .. note::
It is important that the PR request target be ansible:devel, as we do not accept pull requests into any other branch. It is important that the PR request target be ansible:devel, as we do not accept pull requests into any other branch. Dot releases are cherry-picked manually by ansible staff.
Dot releases are cherry-picked manually by ansible staff.
The username and branch at the end are the important parts, which will be turned into git commands as follows:: The username and branch at the end are the important parts, which will be turned into git commands as follows::
git checkout -b testing_PRXXXX devel git checkout -b testing_PRXXXX devel
git pull https://github.com/someuser/ansible.git feature_branch_name git pull https://github.com/someuser/ansible.git feature_branch_name
The first command creates and switches to a new branch named testing_PRXXXX, where the XXXX is the actual issue number associated The first command creates and switches to a new branch named testing_PRXXXX, where the XXXX is the actual issue number associated with the pull request (for example, 1234). This branch is based on the devel branch. The second command pulls the new code from the user's feature branch into the newly created branch.
with the pull request (for example, 1234). This branch is based on the devel branch. The second command pulls the new code from the
user's feature branch into the newly created branch.
.. note:: .. note::
If the GitHub user interface shows that the pull request will not merge cleanly, we do not recommend proceeding if you If the GitHub user interface shows that the pull request will not merge cleanly, we do not recommend proceeding if you are not somewhat familiar with git and coding, as you will have to resolve a merge conflict. This is the responsibility of the original pull request contributor.
are not somewhat familiar with git and coding, as you will have to resolve a merge conflict. This is the responsibility of
the original pull request contributor.
.. note:: .. note::
Some users do not create feature branches, which can cause problems when they have multiple, un-related commits in Some users do not create feature branches, which can cause problems when they have multiple, un-related commits in their version of `devel`. If the source looks like `someuser:devel`, make sure there is only one commit listed on the pull request.
their version of `devel`. If the source looks like `someuser:devel`, make sure there is only one commit listed on
the pull request. Finding a Pull Request for Ansible Modules
++++++++++++++++++++++++++++++++++++++++++
Ansible modules are in separate repositories, which are managed as Git submodules. Here's a step by step process for checking out a PR for an Ansible extras module, for instance:
1. git clone https://github.com/ansible/ansible.git
2. cd ansible
3. git submodule init
4. git submodule update --recursive [ fetches the submodules ]
5. cd lib/ansible/modules/extras
6. git fetch origin pull/1234/head:pr/1234 [ fetches the specific PR ]
7. git checkout pr/1234 [ do your testing here ]
8. cd /path/to/ansible/clone
9. git submodule update --recursive
For Those About To Test, We Salute You For Those About To Test, We Salute You
++++++++++++++++++++++++++++++++++++++ ++++++++++++++++++++++++++++++++++++++
View file
@ -38,7 +38,7 @@ You can also dictate the connection type to be used, if you want::
foo.example.com foo.example.com
bar.example.com bar.example.com
You may also wish to keep these in group variables instead, or file in them in a group_vars/<groupname> file. You may also wish to keep these in group variables instead, or file them in a group_vars/<groupname> file.
See the rest of the documentation for more information about how to organize variables. See the rest of the documentation for more information about how to organize variables.
.. _use_ssh: .. _use_ssh:
View file
@ -1,55 +1,60 @@
Ansible Galaxy Ansible Galaxy
++++++++++++++ ++++++++++++++
"Ansible Galaxy" can either refer to a website for sharing and downloading Ansible roles, or a command line tool that helps work with roles. "Ansible Galaxy" can either refer to a website for sharing and downloading Ansible roles, or a command line tool for managing and creating roles.
.. contents:: Topics .. contents:: Topics
The Website The Website
``````````` ```````````
The website `Ansible Galaxy <https://galaxy.ansible.com>`_ is a free site for finding, downloading, rating, and reviewing all kinds of community developed Ansible roles and can be a great way to get a jumpstart on your automation projects. The website `Ansible Galaxy <https://galaxy.ansible.com>`_ is a free site for finding, downloading, and sharing community developed Ansible roles. Downloading roles from Galaxy is a great way to jumpstart your automation projects.
You can sign up with social auth and use the download client 'ansible-galaxy' which is included in Ansible 1.4.2 and later. Access the Galaxy web site using GitHub OAuth, and to install roles use the 'ansible-galaxy' command line tool included in Ansible 1.4.2 and later.
Read the "About" page on the Galaxy site for more information. Read the "About" page on the Galaxy site for more information.
The ansible-galaxy command line tool The ansible-galaxy command line tool
```````````````````````````````````` ````````````````````````````````````
The command line ansible-galaxy has many different subcommands. The ansible-galaxy command has many different sub-commands for managing roles both locally and at `galaxy.ansible.com <https://galaxy.ansible.com>`_.
.. note::
The search, login, import, delete, and setup commands in the Ansible 2.0 version of ansible-galaxy require access to the
2.0 Beta release of the Galaxy web site available at `https://galaxy-qa.ansible.com <https://galaxy-qa.ansible.com>`_.
Use the ``--server`` option to access the beta site. For example::
$ ansible-galaxy search --server https://galaxy-qa.ansible.com mysql --author geerlingguy
Additionally, you can define a server in ansible.cfg::
[galaxy]
server=https://galaxy-qa.ansible.com
Installing Roles Installing Roles
---------------- ----------------
The most obvious is downloading roles from the Ansible Galaxy website:: The most obvious use of the ansible-galaxy command is downloading roles from `the Ansible Galaxy website <https://galaxy.ansible.com>`_::
ansible-galaxy install username.rolename $ ansible-galaxy install username.rolename
.. _galaxy_cli_roles_path:
roles_path roles_path
=============== ==========
You can specify a particular directory where you want the downloaded roles to be placed:: You can specify a particular directory where you want the downloaded roles to be placed::
ansible-galaxy install username.role -p ~/Code/ansible_roles/ $ ansible-galaxy install username.role -p ~/Code/ansible_roles/
This can be useful if you have a master folder that contains ansible galaxy roles shared across several projects. The default is the roles_path configured in your ansible.cfg file (/etc/ansible/roles if not configured). This can be useful if you have a master folder that contains ansible galaxy roles shared across several projects. The default is the roles_path configured in your ansible.cfg file (/etc/ansible/roles if not configured).
Building out Role Scaffolding
-----------------------------
It can also be used to initialize the base structure of a new role, saving time on creating the various directories and main.yml files a role requires::
ansible-galaxy init rolename
Installing Multiple Roles From A File Installing Multiple Roles From A File
------------------------------------- =====================================
To install multiple roles, the ansible-galaxy CLI can be fed a requirements file. All versions of ansible allow the following syntax for installing roles from the Ansible Galaxy website:: To install multiple roles, the ansible-galaxy CLI can be fed a requirements file. All versions of ansible allow the following syntax for installing roles from the Ansible Galaxy website::
ansible-galaxy install -r requirements.txt $ ansible-galaxy install -r requirements.txt
Where the requirements.txt looks like:: Where the requirements.txt looks like::
@ -64,7 +69,7 @@ To request specific versions (tags) of a role, use this syntax in the roles file
Available versions will be listed on the Ansible Galaxy webpage for that role. Available versions will be listed on the Ansible Galaxy webpage for that role.
Advanced Control over Role Requirements Files Advanced Control over Role Requirements Files
--------------------------------------------- =============================================
For more advanced control over where to download roles from, including support for remote repositories, Ansible 1.8 and later support a new YAML format for the role requirements file, which must end in a 'yml' extension. It works like this:: For more advanced control over where to download roles from, including support for remote repositories, Ansible 1.8 and later support a new YAML format for the role requirements file, which must end in a 'yml' extension. It works like this::
@ -77,14 +82,10 @@ And here's an example showing some specific version downloads from multiple sour
# from galaxy # from galaxy
- src: yatesr.timezone - src: yatesr.timezone
# from github # from GitHub
- src: https://github.com/bennojoy/nginx - src: https://github.com/bennojoy/nginx
# from github installing to a relative path # from GitHub, overriding the name and specifying a specific tag
- src: https://github.com/bennojoy/nginx
path: vagrant/roles/
# from github, overriding the name and specifying a specific tag
- src: https://github.com/bennojoy/nginx - src: https://github.com/bennojoy/nginx
version: master version: master
name: nginx_role name: nginx_role
@ -93,19 +94,18 @@ And here's an example showing some specific version downloads from multiple sour
- src: https://some.webserver.example.com/files/master.tar.gz - src: https://some.webserver.example.com/files/master.tar.gz
name: http-role name: http-role
# from bitbucket, if bitbucket happens to be operational right now :) # from Bitbucket
- src: git+http://bitbucket.org/willthames/git-ansible-galaxy - src: git+http://bitbucket.org/willthames/git-ansible-galaxy
version: v1.4 version: v1.4
# from bitbucket, alternative syntax and caveats # from Bitbucket, alternative syntax and caveats
- src: http://bitbucket.org/willthames/hg-ansible-galaxy - src: http://bitbucket.org/willthames/hg-ansible-galaxy
scm: hg scm: hg
# from gitlab or other git-based scm # from GitLab or other git-based scm
- src: git@gitlab.company.com:mygroup/ansible-base.git - src: git@gitlab.company.com:mygroup/ansible-base.git
scm: git scm: git
version: 0.1.0 version: 0.1.0
path: roles/
As you can see in the above, there are a large number of controls available As you can see in the above, there are a large number of controls available
to customize where roles can be pulled from, and what to save roles as. to customize where roles can be pulled from, and what to save roles as.
@ -121,3 +121,283 @@ Roles pulled from galaxy work as with other SCM sourced roles above. To download
`irc.freenode.net <http://irc.freenode.net>`_ `irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel #ansible IRC chat channel
Building Role Scaffolding
-------------------------
Use the init command to initialize the base structure of a new role, saving time on creating the various directories and main.yml files a role requires::
$ ansible-galaxy init rolename
The above will create the following directory structure in the current working directory:
::
README.md
.travis.yml
defaults/
main.yml
files/
handlers/
main.yml
meta/
main.yml
templates/
tests/
inventory
test.yml
vars/
main.yml
.. note::
.travis.yml and tests/ are new in Ansible 2.0
If a directory matching the name of the role already exists in the current working directory, the init command will result in an error. To ignore the error use the --force option. Force will create the above subdirectories and files, replacing anything that matches.
Search for Roles
----------------
The search command provides for querying the Galaxy database, allowing for searching by tags, platforms, author and multiple keywords. For example:
::
$ ansible-galaxy search elasticsearch --author geerlingguy
The search command will return a list of the first 1000 results matching your search:
::
Found 2 roles matching your search:
Name Description
---- -----------
geerlingguy.elasticsearch Elasticsearch for Linux.
geerlingguy.elasticsearch-curator Elasticsearch curator for Linux.
.. note::
The format of results pictured here is new in Ansible 2.0.
Get More Information About a Role
---------------------------------
Use the info command to view more detail about a specific role:
::
$ ansible-galaxy info username.role_name
This returns everything found in Galaxy for the role:
::
Role: username.rolename
description: Installs and configures a thing, a distributed, highly available NoSQL thing.
active: True
commit: c01947b7bc89ebc0b8a2e298b87ab416aed9dd57
commit_message: Adding travis
commit_url: https://github.com/username/repo_name/commit/c01947b7bc89ebc0b8a2e298b87ab
company: My Company, Inc.
created: 2015-12-08T14:17:52.773Z
download_count: 1
forks_count: 0
github_branch:
github_repo: repo_name
github_user: username
id: 6381
is_valid: True
issue_tracker_url:
license: Apache
min_ansible_version: 1.4
modified: 2015-12-08T18:43:49.085Z
namespace: username
open_issues_count: 0
path: /Users/username/projects/roles
scm: None
src: username.repo_name
stargazers_count: 0
travis_status_url: https://travis-ci.org/username/repo_name.svg?branch=master
version:
watchers_count: 1
List Installed Roles
--------------------
The list command shows the name and version of each role installed in roles_path.
::
$ ansible-galaxy list
- chouseknecht.role-install_mongod, master
- chouseknecht.test-role-1, v1.0.2
- chrismeyersfsu.role-iptables, master
- chrismeyersfsu.role-required_vars, master
Remove an Installed Role
------------------------
The remove command will delete a role from roles_path:
::
$ ansible-galaxy remove username.rolename
Authenticate with Galaxy
------------------------
To use the import, delete and setup commands, authentication with Galaxy is required. The login command will authenticate the user, retrieve a token from Galaxy, and store it in the user's home directory.
::
$ ansible-galaxy login
We need your Github login to identify you.
This information will not be sent to Galaxy, only to api.github.com.
The password will not be displayed.
Use --github-token if you do not want to enter your password.
Github Username: dsmith
Password for dsmith:
Succesfully logged into Galaxy as dsmith
As depicted above, the login command prompts for a GitHub username and password. It does NOT send your password to Galaxy. It actually authenticates with GitHub and creates a personal access token. It then sends the personal access token to Galaxy, which in turn verifies that you are you and returns a Galaxy access token. After authentication completes, the GitHub personal access token is destroyed.
If you do not wish to use your GitHub password, or if you have two-factor authentication enabled with GitHub, use the --github-token option to pass a personal access token that you create. Log into GitHub, go to Settings and click on Personal Access Token to create a token.
.. note::
The login command in Ansible 2.0 requires using the Galaxy 2.0 Beta site. Use the ``--server`` option to access
`https://galaxy-qa.ansible.com <https://galaxy-qa.ansible.com>`_. You can also add a *server* definition in the [galaxy]
section of your ansible.cfg file.
Import a Role
-------------
Roles can be imported using ansible-galaxy. The import command expects that the user previously authenticated with Galaxy using the login command.
Import any GitHub repo you have access to:
::
$ ansible-galaxy import github_user github_repo
By default the command will wait for the role to be imported by Galaxy, displaying the results as the import progresses:
::
Successfully submitted import request 41
Starting import 41: role_name=myrole repo=githubuser/ansible-role-repo ref=
Retrieving Github repo githubuser/ansible-role-repo
Accessing branch: master
Parsing and validating meta/main.yml
Parsing galaxy_tags
Parsing platforms
Adding dependencies
Parsing and validating README.md
Adding repo tags as role versions
Import completed
Status SUCCESS : warnings=0 errors=0
Use the --branch option to import a specific branch. If not specified, the default branch for the repo will be used.
If the --no-wait option is present, the command will not wait for results. Results of the most recent import for any of your roles is available on the Galaxy web site under My Imports.
.. note::
The import command in Ansible 2.0 requires using the Galaxy 2.0 Beta site. Use the ``--server`` option to access
`https://galaxy-qa.ansible.com <https://galaxy-qa.ansible.com>`_. You can also add a *server* definition in the [galaxy]
section of your ansible.cfg file.
Delete a Role
-------------
Remove a role from the Galaxy web site using the delete command. You can delete any role that you have access to in GitHub. The delete command expects that the user previously authenticated with Galaxy using the login command.
::
$ ansible-galaxy delete github_user github_repo
This only removes the role from Galaxy. It does not impact the actual GitHub repo.
.. note::
The delete command in Ansible 2.0 requires using the Galaxy 2.0 Beta site. Use the ``--server`` option to access
`https://galaxy-qa.ansible.com <https://galaxy-qa.ansible.com>`_. You can also add a *server* definition in the [galaxy]
section of your ansible.cfg file.
Setup Travis Integrations
--------------------------
Using the setup command you can enable notifications from `travis <http://travis-ci.org>`_. The setup command expects that the user previously authenticated with Galaxy using the login command.
::
$ ansible-galaxy setup travis github_user github_repo xxxtravistokenxxx
Added integration for travis github_user/github_repo
The setup command requires your Travis token. The Travis token is not stored in Galaxy. It is used along with the GitHub username and repo to create a hash as described in `the Travis documentation <https://docs.travis-ci.com/user/notifications/>`_. The calculated hash is stored in Galaxy and used to verify notifications received from Travis.
The setup command enables Galaxy to respond to notifications. Follow the `Travis getting started guide <https://docs.travis-ci.com/user/getting-started/>`_ to enable the Travis build process for the role repository.
When you create your .travis.yml file, add the following to cause Travis to notify Galaxy when a build completes:
::
notifications:
webhooks: https://galaxy.ansible.com/api/v1/notifications/
.. note::
The setup command in Ansible 2.0 requires using the Galaxy 2.0 Beta site. Use the ``--server`` option to access
`https://galaxy-qa.ansible.com <https://galaxy-qa.ansible.com>`_. You can also add a *server* definition in the [galaxy]
section of your ansible.cfg file.
List Travis Integrations
========================
Use the --list option to display your Travis integrations:
::
$ ansible-galaxy setup --list
ID Source Repo
---------- ---------- ----------
2 travis github_user/github_repo
1 travis github_user/github_repo
Remove Travis Integrations
==========================
Use the --remove option to disable and remove a Travis integration:
::
$ ansible-galaxy setup --remove ID
Provide the ID of the integration you want disabled. Use the --list option to get the ID.
View file
@ -178,8 +178,8 @@ Now to the fun part. We create a playbook to create our infrastructure we call i
- name: ensure firewall ports opened - name: ensure firewall ports opened
cs_firewall: cs_firewall:
ip_address: {{ public_ip }} ip_address: "{{ public_ip }}"
port: {{ item.port }} port: "{{ item.port }}"
cidr: "{{ item.cidr | default('0.0.0.0/0') }}" cidr: "{{ item.cidr | default('0.0.0.0/0') }}"
with_items: cs_firewall with_items: cs_firewall
when: public_ip is defined when: public_ip is defined
View file
@ -6,12 +6,13 @@ Using Vagrant and Ansible
Introduction Introduction
```````````` ````````````
Vagrant is a tool to manage virtual machine environments, and allows you to `Vagrant <http://vagrantup.com/>`_ is a tool to manage virtual machine
configure and use reproducible work environments on top of various environments, and allows you to configure and use reproducible work
virtualization and cloud platforms. It also has integration with Ansible as a environments on top of various virtualization and cloud platforms.
provisioner for these virtual machines, and the two tools work together well. It also has integration with Ansible as a provisioner for these virtual
machines, and the two tools work together well.
This guide will describe how to use Vagrant and Ansible together. This guide will describe how to use Vagrant 1.7+ and Ansible together.
If you're not familiar with Vagrant, you should visit `the documentation If you're not familiar with Vagrant, you should visit `the documentation
<http://docs.vagrantup.com/v2/>`_. <http://docs.vagrantup.com/v2/>`_.
@ -27,54 +28,48 @@ Vagrant Setup
The first step once you've installed Vagrant is to create a ``Vagrantfile`` The first step once you've installed Vagrant is to create a ``Vagrantfile``
and customize it to suit your needs. This is covered in detail in the Vagrant and customize it to suit your needs. This is covered in detail in the Vagrant
documentation, but here is a quick example: documentation, but here is a quick example that includes a section to use the
Ansible provisioner to manage a single machine:
.. code-block:: bash
$ mkdir vagrant-test
$ cd vagrant-test
$ vagrant init precise32 http://files.vagrantup.com/precise32.box
This will create a file called Vagrantfile that you can edit to suit your
needs. The default Vagrantfile has a lot of comments. Here is a simplified
example that includes a section to use the Ansible provisioner:
.. code-block:: ruby .. code-block:: ruby
# Vagrantfile API/syntax version. Don't touch unless you know what you're doing! # This guide is optimized for Vagrant 1.7 and above.
VAGRANTFILE_API_VERSION = "2" # Although versions 1.6.x should behave very similarly, it is recommended
# to upgrade instead of disabling the requirement below.
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| Vagrant.require_version ">= 1.7.0"
config.vm.box = "precise32"
config.vm.box_url = "http://files.vagrantup.com/precise32.box"
config.vm.network :public_network
config.vm.provision "ansible" do |ansible| Vagrant.configure(2) do |config|
ansible.playbook = "playbook.yml"
end config.vm.box = "ubuntu/trusty64"
# Disable the new default behavior introduced in Vagrant 1.7, to
# ensure that all Vagrant machines will use the same SSH key pair.
# See https://github.com/mitchellh/vagrant/issues/5005
config.ssh.insert_key = false
config.vm.provision "ansible" do |ansible|
ansible.verbose = "v"
ansible.playbook = "playbook.yml"
end end
end
The Vagrantfile has a lot of options, but these are the most important ones.
Notice the ``config.vm.provision`` section that refers to an Ansible playbook Notice the ``config.vm.provision`` section that refers to an Ansible playbook
called ``playbook.yml`` in the same directory as the Vagrantfile. Vagrant runs called ``playbook.yml`` in the same directory as the ``Vagrantfile``. Vagrant
the provisioner once the virtual machine has booted and is ready for SSH runs the provisioner once the virtual machine has booted and is ready for SSH
access. access.
There are a lot of Ansible options you can configure in your ``Vagrantfile``.
Visit the `Ansible Provisioner documentation
<http://docs.vagrantup.com/v2/provisioning/ansible.html>`_ for more
information.
.. code-block:: bash .. code-block:: bash
$ vagrant up $ vagrant up
This will start the VM and run the provisioning playbook. This will start the VM, and run the provisioning playbook (on the first VM
startup).
There are a lot of Ansible options you can configure in your Vagrantfile. Some
particularly useful options are ``ansible.extra_vars``, ``ansible.sudo`` and
``ansible.sudo_user``, and ``ansible.host_key_checking`` which you can disable
to avoid SSH connection problems to new virtual machines.
Visit the `Ansible Provisioner documentation
<http://docs.vagrantup.com/v2/provisioning/ansible.html>`_ for more
information.
To re-run a playbook on an existing VM, just run: To re-run a playbook on an existing VM, just run:
@ -82,7 +77,19 @@ To re-run a playbook on an existing VM, just run:
$ vagrant provision $ vagrant provision
This will re-run the playbook. This will re-run the playbook against the existing VM.
Note that having the ``ansible.verbose`` option enabled will instruct Vagrant
to show the full ``ansible-playbook`` command used behind the scene, as
illustrated by this example:
.. code-block:: bash
$ PYTHONUNBUFFERED=1 ANSIBLE_FORCE_COLOR=true ANSIBLE_HOST_KEY_CHECKING=false ANSIBLE_SSH_ARGS='-o UserKnownHostsFile=/dev/null -o ControlMaster=auto -o ControlPersist=60s' ansible-playbook --private-key=/home/someone/.vagrant.d/insecure_private_key --user=vagrant --connection=ssh --limit='machine1' --inventory-file=/home/someone/coding-in-a-project/.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory playbook.yml
This information can be quite useful to debug integration issues and can also
be used to manually execute Ansible from a shell, as explained in the next
section.
.. _running_ansible: .. _running_ansible:
@ -90,44 +97,58 @@ Running Ansible Manually
```````````````````````` ````````````````````````
Sometimes you may want to run Ansible manually against the machines. This is Sometimes you may want to run Ansible manually against the machines. This is
pretty easy to do. faster than kicking ``vagrant provision`` and pretty easy to do.
Vagrant automatically creates an inventory file for each Vagrant machine in With our ``Vagrantfile`` example, Vagrant automatically creates an Ansible
the same directory located under ``.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory``. inventory file in ``.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory``.
It configures the inventory file according to the SSH tunnel that Vagrant This inventory is configured according to the SSH tunnel that Vagrant
automatically creates, and executes ``ansible-playbook`` with the correct automatically creates. A typical automatically-created inventory file for a
username and SSH key options to allow access. A typical automatically-created single machine environment may look something like this:
inventory file may look something like this:
.. code-block:: none .. code-block:: none
# Generated by Vagrant # Generated by Vagrant
machine ansible_host=127.0.0.1 ansible_port=2222 default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222
.. include:: ansible_ssh_changes_note.rst
If you want to run Ansible manually, you will want to make sure to pass If you want to run Ansible manually, you will want to make sure to pass
``ansible`` or ``ansible-playbook`` commands the correct arguments for the ``ansible`` or ``ansible-playbook`` commands the correct arguments, at least
username (usually ``vagrant``) and the SSH key (since Vagrant 1.7.0, this will be something like for the *username*, the *SSH private key* and the *inventory*.
``.vagrant/machines/[machine name]/[provider]/private_key``), and the autogenerated inventory file.
Here is an example: Here is an example using the Vagrant global insecure key (``config.ssh.insert_key``
must be set to ``false`` in your ``Vagrantfile``):
.. code-block:: bash .. code-block:: bash
$ ansible-playbook -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory --private-key=.vagrant/machines/default/virtualbox/private_key -u vagrant playbook.yml
Note: Vagrant versions prior to 1.7.0 will use the private key located at ``~/.vagrant.d/insecure_private_key.`` $ ansible-playbook --private-key=~/.vagrant.d/insecure_private_key -u vagrant -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory playbook.yml
Here is a second example using the random private key that Vagrant 1.7+
automatically configures for each new VM (each key is stored in a path like
``.vagrant/machines/[machine name]/[provider]/private_key``):
.. code-block:: bash
$ ansible-playbook --private-key=.vagrant/machines/default/virtualbox/private_key -u vagrant -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory playbook.yml
Advanced Usages
```````````````
The "Tips and Tricks" chapter of the `Ansible Provisioner documentation
<http://docs.vagrantup.com/v2/provisioning/ansible.html>`_ provides detailed information about more advanced Ansible features like:
- how to execute a playbook in parallel in a multi-machine environment
- how to integrate a local ``ansible.cfg`` configuration file
.. seealso:: .. seealso::
`Vagrant Home <http://www.vagrantup.com/>`_ `Vagrant Home <http://www.vagrantup.com/>`_
The Vagrant homepage with downloads The Vagrant homepage with downloads
`Vagrant Documentation <http://docs.vagrantup.com/v2/>`_ `Vagrant Documentation <http://docs.vagrantup.com/v2/>`_
Vagrant Documentation Vagrant Documentation
`Ansible Provisioner <http://docs.vagrantup.com/v2/provisioning/ansible.html>`_ `Ansible Provisioner <http://docs.vagrantup.com/v2/provisioning/ansible.html>`_
The Vagrant documentation for the Ansible provisioner The Vagrant documentation for the Ansible provisioner
:doc:`playbooks` `Vagrant Issue Tracker <https://github.com/mitchellh/vagrant/issues?q=is%3Aopen+is%3Aissue+label%3Aprovisioners%2Fansible>`_
An introduction to playbooks The open issues for the Ansible provisioner in the Vagrant project
:doc:`playbooks`
An introduction to playbooks
View file
@ -40,4 +40,5 @@ Ansible, Inc. releases a new major release of Ansible approximately every two mo
faq faq
glossary glossary
YAMLSyntax YAMLSyntax
porting_guide_2.0
View file
@ -88,7 +88,7 @@ The ``-f 10`` in the above specifies the usage of 10 simultaneous
processes to use. You can also set this in :doc:`intro_configuration` to avoid setting it again. The default is actually 5, which processes to use. You can also set this in :doc:`intro_configuration` to avoid setting it again. The default is actually 5, which
is really small and conservative. You are probably going to want to talk to a lot more simultaneous hosts so feel free is really small and conservative. You are probably going to want to talk to a lot more simultaneous hosts so feel free
to crank this up. If you have more hosts than the value set for the fork count, Ansible will talk to them, but it will to crank this up. If you have more hosts than the value set for the fork count, Ansible will talk to them, but it will
take a little longer. Feel free to push this value as high as your system can handle it! take a little longer. Feel free to push this value as high as your system can handle!
You can also select what Ansible "module" you want to run. Normally commands also take a ``-m`` for module name, but You can also select what Ansible "module" you want to run. Normally commands also take a ``-m`` for module name, but
the default module name is 'command', so we didn't need to the default module name is 'command', so we didn't need to
@ -112,7 +112,7 @@ For example, using double rather than single quotes in the above example would
evaluate the variable on the box you were on. evaluate the variable on the box you were on.
So far we've been demoing simple command execution, but most Ansible modules usually do not work like So far we've been demoing simple command execution, but most Ansible modules usually do not work like
simple scripts. They make the remote system look like you state, and run the commands necessary to simple scripts. They make the remote system look like a state, and run the commands necessary to
get it there. This is commonly referred to as 'idempotence', and is a core design goal of Ansible. get it there. This is commonly referred to as 'idempotence', and is a core design goal of Ansible.
However, we also recognize that running arbitrary commands is equally important, so Ansible easily supports both. However, we also recognize that running arbitrary commands is equally important, so Ansible easily supports both.
@ -170,7 +170,7 @@ Ensure a package is not installed::
Ansible has modules for managing packages under many platforms. If your package manager Ansible has modules for managing packages under many platforms. If your package manager
does not have a module available for it, you can install does not have a module available for it, you can install
for other packages using the command module or (better!) contribute a module packages using the command module or (better!) contribute a module
for other package managers. Stop by the mailing list for info/details. for other package managers. Stop by the mailing list for info/details.
.. _users_and_groups: .. _users_and_groups:
@ -249,7 +249,7 @@ very quickly. After the time limit (in seconds) runs out (``-B``), the process o
the remote nodes will be terminated. the remote nodes will be terminated.
Typically you'll only be backgrounding long-running Typically you'll only be backgrounding long-running
shell commands or software upgrades only. Backgrounding the copy module does not do a background file transfer. :doc:`Playbooks <playbooks>` also support polling, and have a simplified syntax for this. shell commands or software upgrades. Backgrounding the copy module does not do a background file transfer. :doc:`Playbooks <playbooks>` also support polling, and have a simplified syntax for this.
.. _checking_facts: .. _checking_facts:
View file
@ -30,7 +30,7 @@ Bootstrapping BSD
For Ansible to effectively manage your machine, we need to install Python along with a json library; in this case we are using Python 2.7, which already has json included. For Ansible to effectively manage your machine, we need to install Python along with a json library; in this case we are using Python 2.7, which already has json included.
On your control machine you can simply execute the following for most versions of FreeBSD:: On your control machine you can simply execute the following for most versions of FreeBSD::
ansible -m raw -a "pkg_add -r python27" mybsdhost1 ansible -m raw -a "pkg install -y python27" mybsdhost1
Once this is done you can now use other Ansible modules aside from the ``raw`` module. Once this is done you can now use other Ansible modules aside from the ``raw`` module.
View file
@ -587,11 +587,12 @@ the sudo implementation is matching CLI flags with the standard sudo::
sudo_flags sudo_flags
========== ==========
Additional flags to pass to sudo when engaging sudo support. The default is '-H' which preserves the $HOME environment variable Additional flags to pass to sudo when engaging sudo support. The default is '-H -S -n' which sets the HOME environment
of the original user. In some situations you may wish to add or remove flags, but in general most users variable, prompts for passwords via STDIN, and avoids prompting the user for input of any kind. Note that '-n' will conflict
will not need to change this setting:: with using password-less sudo auth, such as pam_ssh_agent_auth. In some situations you may wish to add or remove flags, but
in general most users will not need to change this setting::
sudo_flags=-H sudo_flags=-H -S -n
.. _sudo_user: .. _sudo_user:
@ -897,3 +898,19 @@ The normal behaviour is for operations to copy the existing context or use the u
The default list is: nfs,vboxsf,fuse,ramfs:: The default list is: nfs,vboxsf,fuse,ramfs::
special_context_filesystems = nfs,vboxsf,fuse,ramfs,myspecialfs special_context_filesystems = nfs,vboxsf,fuse,ramfs,myspecialfs
Galaxy Settings
---------------
The following options can be set in the [galaxy] section of ansible.cfg:
server
======
Override the default Galaxy server value of https://galaxy.ansible.com. Useful if you have a hosted version of the Galaxy web app or want to point to the testing site https://galaxy-qa.ansible.com. It does not work against private, hosted repos, which Galaxy can use for fetching and installing roles.
ignore_certs
============
If set to *yes*, ansible-galaxy will not validate TLS certificates. Handy for testing against a server with a self-signed certificate.
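Putting the two options together, a minimal ``[galaxy]`` section in ansible.cfg might look like the following sketch (the server URL shown is the QA site mentioned above)::

    [galaxy]
    server = https://galaxy-qa.ansible.com
    ignore_certs = yes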
@ -111,9 +111,8 @@ If you use boto profiles to manage multiple AWS accounts, you can pass ``--profi
    aws_access_key_id = <prod access key>
    aws_secret_access_key = <prod secret key>

You can then run ``ec2.py --profile prod`` to get the inventory for the prod account; this option is not supported by ``ansible-playbook``, though.
But you can use the ``AWS_PROFILE`` variable - e.g. ``AWS_PROFILE=prod ansible-playbook -i ec2.py myplaybook.yml``

Since each region requires its own API call, if you are only using a small set of regions, feel free to edit ``ec2.ini`` and list only the regions you are interested in. There are other config options in ``ec2.ini``, including cache control and destination variables.
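For instance, limiting the inventory to two regions could look like this in ``ec2.ini`` (a sketch; it assumes the stock ``regions`` key of the ``[ec2]`` section, and the region names are examples)::

    [ec2]
    regions = us-east-1,us-west-2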
@ -207,6 +206,77 @@ explicitly clear the cache, you can run the ec2.py script with the ``--refresh-c
    # ./ec2.py --refresh-cache
.. _openstack_example:
Example: OpenStack External Inventory Script
````````````````````````````````````````````
If you use an OpenStack based cloud, instead of manually maintaining your own inventory file, you can use the openstack.py dynamic inventory to pull information about your compute instances directly from OpenStack.
You can download the latest version of the OpenStack inventory script at: https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/openstack.py
You can use the inventory script explicitly (by passing the `-i openstack.py` argument to Ansible) or implicitly (by placing the script at `/etc/ansible/hosts`).
Explicit use of inventory script
++++++++++++++++++++++++++++++++
Download the latest version of the OpenStack dynamic inventory script and make it executable::
wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/openstack.py
chmod +x openstack.py
Source an OpenStack RC file::
source openstack.rc
.. note::
An OpenStack RC file contains the environment variables required by the client tools to establish a connection with the cloud provider, such as the authentication URL, user name, password and region name. For more information on how to download, create or source an OpenStack RC file, please refer to http://docs.openstack.org/cli-reference/content/cli_openrc.html.
You can confirm the file has been successfully sourced by running a simple command, such as `nova list`, and ensuring it returns no errors.
.. note::
The OpenStack command line clients are required to run the `nova list` command. For more information on how to install them, please refer to http://docs.openstack.org/cli-reference/content/install_clients.html.
You can test the OpenStack dynamic inventory script manually to confirm it is working as expected::
./openstack.py --list
After a few moments you should see some JSON output with information about your compute instances.
Once you confirm the dynamic inventory script is working as expected, you can tell Ansible to use the `openstack.py` script as an inventory file, as illustrated below::
ansible -i openstack.py all -m ping
Implicit use of inventory script
++++++++++++++++++++++++++++++++
Download the latest version of the OpenStack dynamic inventory script, make it executable and copy it to `/etc/ansible/hosts`::
wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/openstack.py
chmod +x openstack.py
sudo cp openstack.py /etc/ansible/hosts
Download the sample configuration file, modify it to suit your needs and copy it to `/etc/ansible/openstack.yml`::
wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/openstack.yml
vi openstack.yml
sudo cp openstack.yml /etc/ansible/
You can test the OpenStack dynamic inventory script manually to confirm it is working as expected::
/etc/ansible/hosts --list
After a few moments you should see some JSON output with information about your compute instances.
Refresh the cache
+++++++++++++++++
Note that the OpenStack dynamic inventory script will cache results to avoid repeated API calls. To explicitly clear the cache, you can run the openstack.py (or hosts) script with the ``--refresh`` parameter::
./openstack.py --refresh
.. _other_inventory_scripts:

Other inventory scripts
@ -33,7 +33,7 @@ In releases up to and including Ansible 1.2, the default was strictly paramiko.
Occasionally you'll encounter a device that doesn't support SFTP. This is rare, but should it occur, you can switch to SCP mode in :doc:`intro_configuration`.

When speaking with remote machines, Ansible by default assumes you are using SSH keys. SSH keys are encouraged, but password authentication can also be used where needed by supplying the option ``--ask-pass``. If using sudo features and when sudo requires a password, also supply ``--ask-become-pass`` (previously ``--ask-sudo-pass``, which has been deprecated).

While it may be common sense, it is worth sharing: Any management system benefits from being run near the machines being managed. If you are running Ansible in a cloud, consider running it from a machine inside that cloud. In most cases this will work better than on the open Internet.
@ -27,12 +27,11 @@ What Version To Pick?
`````````````````````

Because it runs so easily from source and does not require any installation of software on remote
machines, many users will actually track the development version.

Ansible's release cycles are usually about four months long. Due to this short release cycle,
minor bugs will generally be fixed in the next release versus maintaining backports on the stable branch.
Major bugs will still have maintenance releases when needed, though these are infrequent.

If you wish to run the latest released version of Ansible and you are running Red Hat Enterprise Linux (TM), CentOS, Fedora, Debian, or Ubuntu, we recommend using the OS package manager.
@ -52,8 +51,8 @@ This includes Red Hat, Debian, CentOS, OS X, any of the BSDs, and so on.
.. note::

    As of 2.0, Ansible uses a few more file handles to manage its forks. OS X has a very low default, so if you want to use 15 or more forks
    you'll need to raise the ulimit, like so: ``sudo launchctl limit maxfiles 1024 unlimited``. Do this any time you see a "Too many open files" error.

.. _managed_node_requirements:
@ -31,7 +31,7 @@ It is also possible to address a specific host or set of hosts by name::
    192.168.1.50
    192.168.1.*

The following patterns address one or more groups. Groups separated by a colon indicate an "OR" configuration.
This means the host may be in either one group or the other::

    webservers
@ -26,12 +26,12 @@ Installing on the Control Machine
On a Linux control machine::

   pip install "pywinrm>=0.1.1"
Active Directory Support
++++++++++++++++++++++++

If you wish to connect to domain accounts published through Active Directory (as opposed to local accounts created on the remote host), you will need to install the "python-kerberos" module on the Ansible control host (and the MIT krb5 libraries it depends on). The Ansible control host also requires a properly configured computer account in Active Directory.
Installing python-kerberos dependencies
---------------------------------------
@ -131,7 +131,9 @@ To test this, ping the windows host you want to control by name then use the ip
If you get different hostnames back than the name you originally pinged, speak to your active directory administrator and get them to check that DNS Scavenging is enabled and that DNS and DHCP are updating each other.

Ensure that the Ansible controller has a properly configured computer account in the domain.

Check that your Ansible controller's clock is synchronised with your domain controller. Kerberos is time sensitive, and a little clock drift can cause tickets not to be granted.

Check that you are using the real fully qualified domain name for the domain. Sometimes domains are commonly known to users by aliases. To check this run:
@ -165,6 +167,8 @@ In group_vars/windows.yml, define the following inventory variables::
    ansible_password: SecretPasswordGoesHere
    ansible_port: 5986
    ansible_connection: winrm
# The following is necessary for Python 2.7.9+ when using default WinRM self-signed certificates:
ansible_winrm_server_cert_validation: ignore
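With those variables in place, connectivity can be smoke-tested from the control machine with the win_ping module (a minimal sketch; 'windows' is the inventory group defined above)::

    ansible windows -m win_ping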
Although Ansible is mostly an SSH-oriented system, Windows management will not happen over SSH (`yet <http://blogs.msdn.com/b/powershell/archive/2015/06/03/looking-forward-microsoft-support-for-secure-shell-ssh.aspx>`_).
@ -189,6 +193,7 @@ Since 2.0, the following custom inventory variables are also supported for addit
* ``ansible_winrm_path``: Specify an alternate path to the WinRM endpoint. Ansible uses ``/wsman`` by default.
* ``ansible_winrm_realm``: Specify the realm to use for Kerberos authentication. If the username contains ``@``, Ansible will use the part of the username after ``@`` by default.
* ``ansible_winrm_transport``: Specify one or more transports as a comma-separated list. By default, Ansible will use ``kerberos,plaintext`` if the ``kerberos`` module is installed and a realm is defined, otherwise ``plaintext``.
* ``ansible_winrm_server_cert_validation``: Specify the server certificate validation mode (``ignore`` or ``validate``). Ansible defaults to ``validate`` on Python 2.7.9 and higher, which will result in certificate validation errors against the Windows self-signed certificates. Unless verifiable certificates have been configured on the WinRM listeners, this should be set to ``ignore``.
* ``ansible_winrm_*``: Any additional keyword arguments supported by ``winrm.Protocol`` may be provided.

.. _windows_system_prep:
@ -221,7 +226,7 @@ Getting to PowerShell 3.0 or higher
PowerShell 3.0 or higher is needed for most provided Ansible modules for Windows, and is also required to run the above setup script. Note that PowerShell 3.0 is only supported on Windows 7 SP1, Windows Server 2008 SP1, and later releases of Windows.

Looking at an Ansible checkout, copy the `examples/scripts/upgrade_to_ps3.ps1 <https://github.com/cchurch/ansible/blob/devel/examples/scripts/upgrade_to_ps3.ps1>`_ script onto the remote host and run a PowerShell console as an administrator. You will now be running PowerShell 3 and can try connectivity again using the win_ping technique referenced above.

.. _what_windows_modules_are_available:
@ -248,13 +253,10 @@ Note there are a few other Ansible modules that don't start with "win" that also
Developers: Supported modules and how it works
``````````````````````````````````````````````

Developing Ansible modules is covered in a `later section of the documentation <http://docs.ansible.com/developing_modules.html>`_, with a focus on Linux/Unix.
What if you want to write Windows modules for Ansible though?

For Windows, Ansible modules are implemented in PowerShell. Skim those Linux/Unix module development chapters before proceeding. Windows modules in the core and extras repo live in a "windows/" subdir. Custom modules can go directly into the Ansible "library/" directories or those added in ansible.cfg. Documentation lives in a `.py` file with the same name. For example, if a module is named "win_ping", there will be embedded documentation in the "win_ping.py" file, and the actual PowerShell code will live in a "win_ping.ps1" file. Take a look at the sources and this will make more sense.

Modules (ps1 files) should start as follows::
@ -317,6 +319,14 @@ Running individual commands uses the 'raw' module, as opposed to the shell or co
      register: ipconfig

    - debug: var=ipconfig
Running common DOS commands like 'del', 'move', or 'copy' is unlikely to work on a remote Windows Server using PowerShell, but they can work by prefacing the commands with ``CMD /C`` and enclosing the command in double quotes, as in this example::
- name: another raw module example
hosts: windows
tasks:
- name: Move file on remote Windows Server from one location to another
raw: CMD /C "MOVE /Y C:\teststuff\myfile.conf C:\builds\smtp.conf"
And for a final example, here's how to use the win_stat module to test for file existence. Note that the data returned by the win_stat module is slightly different than what is provided by the Linux equivalent::

    - name: test stat module
@ -351,7 +361,7 @@ form of new modules, tweaks to existing modules, documentation, or something els
   :doc:`developing_modules`
       How to write modules
   :doc:`playbooks`
       Learning Ansible's configuration management language
   `List of Windows Modules <http://docs.ansible.com/list_of_windows_modules.html>`_
       Windows specific module list, all implemented in PowerShell
   `Mailing List <http://groups.google.com/group/ansible-project>`_
@ -8,6 +8,6 @@ The source of these modules is hosted on GitHub in the `ansible-modules-core <ht
If you believe you have found a bug in a core module and are already running the latest stable or development version of Ansible, first look in the `issue tracker at github.com/ansible/ansible-modules-core <http://github.com/ansible/ansible-modules-core>`_ to see if a bug has already been filed. If not, we would be grateful if you would file one.

Should you have a question rather than a bug report, inquiries are welcome on the `ansible-project google group <https://groups.google.com/forum/#!forum/ansible-project>`_ or on Ansible's "#ansible" channel, located on irc.freenode.net. Development oriented topics should instead use the similar `ansible-devel google group <https://groups.google.com/forum/#!forum/ansible-devel>`_.

Documentation updates for these modules can also be edited directly in the module itself and by submitting a pull request to the module source code, just look for the "DOCUMENTATION" block in the source tree.
@ -254,8 +254,8 @@ What about just my webservers in Boston?::
What about just the first 10, and then the next 10?::

    ansible-playbook -i production webservers.yml --limit boston[1-10]
    ansible-playbook -i production webservers.yml --limit boston[11-20]

And of course just basic ad-hoc stuff is also possible::
@ -47,7 +47,7 @@ decide to do something conditionally based on success or failure::
    - command: /bin/something
      when: result|failed
    - command: /bin/something_else
      when: result|succeeded
    - command: /bin/still/something_else
      when: result|skipped
@ -130,6 +130,29 @@ Here is an example::
Note that you must have passphrase-less SSH keys or an ssh-agent configured for this to work, otherwise rsync
will need to ask for a passphrase.
.. _delegate_facts:
Delegated facts
```````````````
.. versionadded:: 2.0
By default, any facts gathered by a delegated task are assigned to the `inventory_hostname` (the current host) instead of the host which actually produced the facts (the delegated-to host).
In 2.0, the directive `delegate_facts` may be set to `True` to assign the task's gathered facts to the delegated host instead of the current one::
- hosts: app_servers
tasks:
- name: gather facts from db servers
setup:
delegate_to: "{{item}}"
delegate_facts: True
        with_items: "{{groups['dbservers']}}"
The above will gather facts for the machines in the dbservers group and assign the facts to those machines and not to app_servers.
This way you can look up `hostvars['dbhost1']['default_ipv4_addresses'][0]` even though dbservers were not part of the play, or were left out by using `--limit`.
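For example, a later task in the same play could read one of those delegated facts back (a sketch reusing the fact reference above; the host name is illustrative)::

    - debug: msg="db host address is {{ hostvars['dbhost1']['default_ipv4_addresses'][0] }}"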
.. _run_once:

Run Once
@ -159,13 +182,18 @@ This can be optionally paired with "delegate_to" to specify an individual host t
      delegate_to: web01.example.org

When "run_once" is not used with "delegate_to" it will execute on the first host, as defined by inventory,
in the group(s) of hosts targeted by the play - e.g. webservers[0] if the play targeted "hosts: webservers".

This approach is similar to applying a conditional to a task such as::
    - command: /opt/application/upgrade_db.py
      when: inventory_hostname == webservers[0]
.. note::
    When used together with "serial", tasks marked as "run_once" will be run on one host in *each* serial batch.
    If it's crucial that the task is run only once regardless of "serial" mode, use
    the :code:`inventory_hostname == my_group_name[0]` construct.
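As a concrete illustration of that construct, spelled out with the ``groups`` dictionary (a sketch; the group name is illustrative)::

    - command: /opt/application/upgrade_db.py
      when: inventory_hostname == groups['webservers'][0]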
.. _local_playbooks:

Local Playbooks
@ -31,7 +31,7 @@ The environment can also be stored in a variable, and accessed like so::
    tasks:

      - apt: name=cobbler state=installed
        environment: "{{proxy_env}}"

You can also use it at a playbook level::
@ -58,12 +58,17 @@ The following tasks are illustrative of how filters can be used with conditional
    - debug: msg="it changed"
      when: result|changed

    - debug: msg="it succeeded in Ansible >= 2.1"
      when: result|succeeded

    - debug: msg="it succeeded"
      when: result|success

    - debug: msg="it was skipped"
      when: result|skipped
.. note:: From 2.1 you can also use ``success``, ``failure``, ``change``, and ``skip`` so the grammar matches, for those who want to be strict about it.
.. _forcing_variables_to_be_defined:

Forcing Variables To Be Defined
@ -352,6 +357,39 @@ override those in `b`, and so on.
This behaviour does not depend on the value of the `hash_behaviour`
setting in `ansible.cfg`.
.. _extract_filter:
Extracting values from containers
---------------------------------
.. versionadded:: 2.1
The `extract` filter is used to map from a list of indices to a list of
values from a container (hash or array)::
{{ [0,2]|map('extract', ['x','y','z'])|list }}
{{ ['x','y']|map('extract', {'x': 42, 'y': 31})|list }}
The results of the above expressions would be::
['x', 'z']
[42, 31]
The filter can take another argument::
{{ groups['x']|map('extract', hostvars, 'ec2_ip_address')|list }}
This takes the list of hosts in group 'x', looks them up in `hostvars`,
and then looks up the `ec2_ip_address` of the result. The final result
is a list of IP addresses for the hosts in group 'x'.
The third argument to the filter can also be a list, for a recursive
lookup inside the container::
{{ ['a']|map('extract', b, ['x','y'])|list }}
This would return a list containing the value of `b['a']['x']['y']`.
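As a quick worked example of the recursive form, with an inline container (the values are made up)::

    # with b = {'a': {'x': {'y': 42}}}
    {{ ['a']|map('extract', {'a': {'x': {'y': 42}}}, ['x','y'])|list }}
    # => [42]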
.. _comment_filter:

Comment Filter
@ -514,20 +552,25 @@ To match strings against a regex, use the "match" or "search" filter::
To replace text in a string with regex, use the "regex_replace" filter::

    # convert "ansible" to "able"
    {{ 'ansible' | regex_replace('^a.*i(.*)$', 'a\\1') }}

    # convert "foobar" to "bar"
    {{ 'foobar' | regex_replace('^f.*o(.*)$', '\\1') }}
# convert "localhost:80" to "localhost, 80" using named groups
{{ 'localhost:80' | regex_replace('^(?P<host>.+):(?P<port>\\d+)$', '\\g<host>, \\g<port>') }}
.. note:: Prior to ansible 2.0, if "regex_replace" filter was used with variables inside YAML arguments (as opposed to simpler 'key=value' arguments),
   then you needed to escape backreferences (e.g. ``\\1``) with 4 backslashes (``\\\\``) instead of 2 (``\\``).
.. versionadded:: 2.0
To escape special characters within a regex, use the "regex_escape" filter::

    # convert '^f.*o(.*)$' to '\^f\.\*o\(\.\*\)\$'
    {{ '^f.*o(.*)$' | regex_escape() }}

To make use of one attribute from each item in a list of complex variables, use the "map" filter (see the `Jinja2 map() docs`_ for more)::

    # get a comma-separated list of the mount points (e.g. "/,/mnt/stuff") on a host
@ -41,7 +41,7 @@ Each playbook is composed of one or more 'plays' in a list.
The goal of a play is to map a group of hosts to some well defined roles, represented by
things ansible calls tasks. At a basic level, a task is nothing more than a call
to an ansible module (see :doc:`Modules`).

By composing a playbook of multiple 'plays', it is possible to
orchestrate multi-machine deployments, running certain steps on all
@ -386,6 +386,7 @@ won't need them for much else.
* Handler names live in a global namespace.
* If two handler tasks have the same name, only one will run.
  `* <https://github.com/ansible/ansible/issues/4943>`_
* You cannot notify a handler that is defined inside of an include
Roles are described later on, but it's worthwhile to point out that:
@ -240,6 +240,112 @@ If you're not using 2.0 yet, you can do something similar with the credstash too
debug: msg="Poor man's credstash lookup! {{ lookup('pipe', 'credstash -r us-west-1 get my-other-password') }}" debug: msg="Poor man's credstash lookup! {{ lookup('pipe', 'credstash -r us-west-1 get my-other-password') }}"
.. _dns_lookup:
The DNS Lookup (dig)
````````````````````
.. versionadded:: 1.9.0
.. warning:: This lookup depends on the `dnspython <http://www.dnspython.org/>`_
library.
The ``dig`` lookup runs queries against DNS servers to retrieve DNS records for
a specific name (*FQDN* - fully qualified domain name). It is possible to look up any DNS record in this manner.
There are a couple of different syntaxes that can be used to specify what record
should be retrieved, and for which name. It is also possible to explicitly
specify the DNS server(s) to use for lookups.
In its simplest form, the ``dig`` lookup plugin can be used to retrieve an IPv4
address (DNS ``A`` record) associated with an *FQDN*:
.. note:: If you need to obtain the ``AAAA`` record (IPv6 address), you must
specify the record type explicitly. Syntax for specifying the record
type is described below.
.. note:: The trailing dot in most of the examples listed is purely optional,
   but is specified for the sake of completeness/correctness.
::
- debug: msg="The IPv4 address for example.com. is {{ lookup('dig', 'example.com.')}}"
In addition to the (default) ``A`` record, it is also possible to specify a different
record type that should be queried. This can be done by either passing in an
additional parameter of the form ``qtype=TYPE`` to the ``dig`` lookup, or by
appending ``/TYPE`` to the *FQDN* being queried. For example::
- debug: msg="The TXT record for gmail.com. is {{ lookup('dig', 'gmail.com.', 'qtype=TXT') }}"
- debug: msg="The TXT record for gmail.com. is {{ lookup('dig', 'gmail.com./TXT') }}"
If multiple values are associated with the requested record, the results will be
returned as a comma-separated list. In such cases you may want to pass the option
``wantlist=True`` to the plugin, which will result in the record values being
returned as a list over which you can iterate later on::
- debug: msg="One of the MX records for gmail.com. is {{ item }}"
with_items: "{{ lookup('dig', 'gmail.com./MX', wantlist=True) }}"
In the case of reverse DNS lookups (``PTR`` records), you can also use a convenience
syntax of format ``IP_ADDRESS/PTR``. The following three lines would produce the
same output::
- debug: msg="Reverse DNS for 8.8.8.8 is {{ lookup('dig', '8.8.8.8/PTR') }}"
- debug: msg="Reverse DNS for 8.8.8.8 is {{ lookup('dig', '8.8.8.8.in-addr.arpa./PTR') }}"
- debug: msg="Reverse DNS for 8.8.8.8 is {{ lookup('dig', '8.8.8.8.in-addr.arpa.', 'qtype=PTR') }}"
By default, the lookup will rely on system-wide configured DNS servers for
performing the query. It is also possible to explicitly specify DNS servers to
query using the ``@DNS_SERVER_1,DNS_SERVER_2,...,DNS_SERVER_N`` notation. This
needs to be passed-in as an additional parameter to the lookup. For example::
- debug: msg="Querying 8.8.8.8 for IPv4 address for example.com. produces {{ lookup('dig', 'example.com', '@8.8.8.8') }}"
In some cases the DNS records may hold a more complex data structure, or it may
be useful to obtain the results in the form of a dictionary for future
processing. The ``dig`` lookup supports parsing of a number of such records,
with the result being returned as a dictionary. This way it is possible to
easily access such nested data. This return format can be requested by
passing in the ``flat=0`` option to the lookup. For example::
- debug: msg="XMPP service for gmail.com. is available at {{ item.target }} on port {{ item.port }}"
with_items: "{{ lookup('dig', '_xmpp-server._tcp.gmail.com./SRV', 'flat=0', wantlist=True) }}"
Take note that due to the way Ansible lookups work, you must pass the
``wantlist=True`` argument to the lookup, otherwise Ansible will report errors.
Currently the dictionary results are supported for the following records:
.. note:: *ALL* is not a record per se; rather, the listed fields are available
   for any record results you retrieve in the form of a dictionary.
========== =============================================================================
Record Fields
---------- -----------------------------------------------------------------------------
*ALL* owner, ttl, type
A address
AAAA address
CNAME target
DNAME target
DLV algorithm, digest_type, key_tag, digest
DNSKEY flags, algorithm, protocol, key
DS algorithm, digest_type, key_tag, digest
HINFO cpu, os
LOC latitude, longitude, altitude, size, horizontal_precision, vertical_precision
MX preference, exchange
NAPTR order, preference, flags, service, regexp, replacement
NS target
NSEC3PARAM algorithm, flags, iterations, salt
PTR target
RP mbox, txt
SOA mname, rname, serial, refresh, retry, expire, minimum
SPF strings
SRV priority, weight, port, target
SSHFP algorithm, fp_type, fingerprint
TLSA usage, selector, mtype, cert
TXT strings
========== =============================================================================
.. _more_lookups:

More Lookups
@ -96,7 +96,7 @@ And you want to print every user's name and phone number. You can loop through
Looping over Files
``````````````````

``with_file`` iterates over the content of a list of files; `item` will be set to the content of each file in sequence. It can be used like this::
    ---
    - hosts: all
@ -516,10 +516,37 @@ Subsequent loops over the registered variable to inspect the results may look li
.. _looping_over_the_inventory:
Looping over the inventory
``````````````````````````
If you wish to loop over the inventory, or just a subset of it, there are multiple ways.
One can use a regular ``with_items`` with the ``play_hosts`` or ``groups`` variables, like this::
# show all the hosts in the inventory
- debug: msg={{ item }}
with_items: "{{groups['all']}}"
# show all the hosts in the current play
- debug: msg={{ item }}
      with_items: "{{play_hosts}}"
There is also a specific lookup plugin ``inventory_hostname`` that can be used like this::
# show all the hosts in the inventory
- debug: msg={{ item }}
with_inventory_hostname: all
    # show all the hosts matching the pattern, i.e. all but the group www
- debug: msg={{ item }}
with_inventory_hostname: all:!www
More information on the patterns can be found in :doc:`intro_patterns`.
.. _loops_and_includes:

Loops and Includes
``````````````````
In 2.0 you are able to use `with_` loops and task includes (but not playbook includes); this adds the ability to loop over the set of tasks in one shot.
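A minimal sketch of that pattern (the file name and items are illustrative; tasks inside the included file see each ``item`` in turn)::

    - include: do_thing.yml
      with_items:
        - apple
        - banana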
@ -132,7 +132,7 @@ Note that you cannot do variable substitution when including one playbook
inside another.

.. note::
    You can not conditionally pass the location to an include file,
    like you can with 'vars_files'. If you find yourself needing to do
    this, consider how you can restructure your playbook to be more
    class/role oriented. This is to say you cannot use a 'fact' to
@ -191,11 +191,8 @@ This designates the following behaviors, for each role 'x':
- If roles/x/handlers/main.yml exists, handlers listed therein will be added to the play
- If roles/x/vars/main.yml exists, variables listed therein will be added to the play
- If roles/x/meta/main.yml exists, any role dependencies listed therein will be added to the list of roles (1.3 and later)
- Any copy, script, template or include tasks (in the role) can reference files in roles/x/{files,templates,tasks}/ (dir depends on task) without having to path them relatively or absolutely

In Ansible 1.4 and later you can configure a roles_path to search for roles. Use this to check all of your common roles out to one location, and share
them easily between multiple playbook projects. See :doc:`intro_configuration` for details about how to set this up in ansible.cfg.
@ -216,8 +213,8 @@ Also, should you wish to parameterize roles, by adding variables, you can do so,
    - hosts: webservers
      roles:
        - common
        - { role: foo_app_instance, dir: '/opt/a',  app_port: 5000 }
        - { role: foo_app_instance, dir: '/opt/b',  app_port: 5001 }

While it's probably not something you should do often, you can also conditionally apply roles like so::
@ -287,7 +284,7 @@ a list of roles and parameters to insert before the specified role, such as the
    ---
    dependencies:
      - { role: common, some_parameter: 3 }
      - { role: apache, apache_port: 80 }
      - { role: postgres, dbname: blarg, other_parameter: 12 }

Role dependencies can also be specified as a full path, just like top level roles::
@ -793,10 +793,10 @@ Basically, anything that goes into "role defaults" (the defaults folder inside t
.. rubric:: Footnotes

.. [1] Tasks in each role will see their own role's defaults. Tasks defined outside of a role will see the last role's defaults.
.. [2] Variables defined in inventory file or provided by dynamic inventory.

.. note:: Within any section, redefining a var will overwrite the previous instance.
   If multiple groups have the same variable, the last one loaded wins.
   If you define a variable twice in a play's vars: section, the 2nd one wins.
.. note:: the previous describes the default config `hash_behaviour=replace`, switch to 'merge' to only partially overwrite.
@ -0,0 +1,183 @@
Porting Guide
=============
Playbook
--------
* backslash escapes: When specifying parameters in jinja2 expressions in YAML
dicts, backslashes sometimes needed to be escaped twice. This has been fixed
in 2.0.x so that escaping once works. The following example shows how
playbooks must be modified::
# Syntax in 1.9.x
- debug:
msg: "{{ 'test1_junk 1\\\\3' | regex_replace('(.*)_junk (.*)', '\\\\1 \\\\2') }}"
# Syntax in 2.0.x
- debug:
msg: "{{ 'test1_junk 1\\3' | regex_replace('(.*)_junk (.*)', '\\1 \\2') }}"
# Output:
"msg": "test1 1\\3"
To make an escaped string that will work on all versions you have two options::
- debug: msg="{{ 'test1_junk 1\\3' | regex_replace('(.*)_junk (.*)', '\\1 \\2') }}"
uses key=value escaping which has not changed. The other option is to check for the ansible version::
"{{ (ansible_version|version_compare('ge', '2.0'))|ternary( 'test1_junk 1\\3' | regex_replace('(.*)_junk (.*)', '\\1 \\2') , 'test1_junk 1\\\\3' | regex_replace('(.*)_junk (.*)', '\\\\1 \\\\2') ) }}"
* trailing newline: When a string with a trailing newline was specified in the
playbook via yaml dict format, the trailing newline was stripped. When
specified in key=value format, the trailing newlines were kept. In v2, both
methods of specifying the string will keep the trailing newlines. If you
relied on the trailing newline being stripped, you can change your playbook
using the following as an example::
# Syntax in 1.9.x
vars:
message: >
Testing
some things
tasks:
- debug:
msg: "{{ message }}"
# Syntax in 2.0.x
vars:
old_message: >
Testing
some things
      message: "{{ old_message[:-1] }}"
- debug:
msg: "{{ message }}"
# Output
"msg": "Testing some things"
* When specifying complex args as a variable, the variable must use the full jinja2
  variable syntax (``{{var_name}}``) - bare variable names there are no longer accepted.
In fact, even specifying args with variables has been deprecated, and will not be
allowed in future versions::
---
- hosts: localhost
connection: local
gather_facts: false
vars:
my_dirs:
- { path: /tmp/3a, state: directory, mode: 0755 }
- { path: /tmp/3b, state: directory, mode: 0700 }
tasks:
- file:
args: "{{item}}" # <- args here uses the full variable syntax
with_items: my_dirs
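To stay ahead of that deprecation, the same loop can pass each parameter explicitly instead of handing the whole dict to ``args`` (a sketch of the equivalent)::

      tasks:
          - file:
              path: "{{item.path}}"
              state: "{{item.state}}"
              mode: "{{item.mode}}"
            with_items: "{{my_dirs}}"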
* porting task includes
* More dynamic. Corner-case formats that were not supposed to work now do not, as expected.
* variables defined in the yaml dict format https://github.com/ansible/ansible/issues/13324
* templating (variables in playbooks and template lookups) has improved with regard to keeping the original instead of turning everything into a string.
If you need the old behavior, quote the value to pass it around as a string.
* Empty variables and variables set to null in yaml are no longer converted to empty strings. They will retain the value of `None`.
You can override the `null_representation` setting to an empty string in your config file by setting the `ANSIBLE_NULL_REPRESENTATION` environment variable.
* Extras callbacks must be whitelisted in ansible.cfg. Copying is no longer necessary, but whitelisting in ansible.cfg must be completed (see the sketch after this list).
* dnf module has been rewritten. Some minor changes in behavior may be observed.
* win_updates has been rewritten and works as expected now.
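As an illustration of the callback whitelisting noted in the list above, the ansible.cfg entry might look like this (a sketch; the callback names are examples)::

    [defaults]
    callback_whitelist = timer,mail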
Deprecated
----------
While all items listed here will show a deprecation warning message, they still work as they did in 1.9.x. Please note that they will be removed in 2.2 (Ansible always waits two major releases to remove a deprecated feature).
* Bare variables in `with_` loops should instead use the “{{var}}” syntax, which helps eliminate ambiguity.
* The ansible-galaxy text format requirements file. Users should use the YAML format for requirements instead.
* Undefined variables within a `with_` loop's list currently do not interrupt the loop, but they do issue a warning; in the future, they will issue an error.
* Using dictionary variables to set all task parameters is unsafe and will be removed in a future version. For example::
- hosts: localhost
gather_facts: no
vars:
debug_params:
msg: "hello there"
tasks:
# These are both deprecated:
- debug: "{{debug_params}}"
- debug:
args: "{{debug_params}}"
# Use this instead:
- debug:
msg: "{{debug_params['msg']}}"
* Host patterns should use a comma (,) or colon (:) instead of a semicolon (;) to separate hosts/groups in the pattern.
* Ranges specified in host patterns should use the [x:y] syntax, instead of [x-y].
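  For example (the group name is illustrative)::

    # deprecated range syntax
    webservers[0-25]
    # preferred range syntax
    webservers[0:25]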
* Playbooks using privilege escalation should always use “become*” options rather than the old su*/sudo* options.
* The “short form” for vars_prompt is no longer supported.
For example::
vars_prompt:
variable_name: "Prompt string"
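  The supported long form spells each prompt out as a list entry, for example::

    vars_prompt:
      - name: variable_name
        prompt: "Prompt string"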
* Specifying variables at the top level of a task include statement is no longer supported. For example::
- include: foo.yml
a: 1
Should now be::
- include: foo.yml
args:
a: 1
* Setting any_errors_fatal on a task is no longer supported. This should be set at the play level only.
* Bare variables in the `environment` dictionary (for plays/tasks/etc.) are no longer supported. Variables specified there should use the full variable syntax: {{foo}}.
* Tags should no longer be specified with other parameters in a task include. Instead, they should be specified as an option on the task.
For example::
- include: foo.yml tags=a,b,c
Should be::
- include: foo.yml
tags: [a, b, c]
* The first_available_file option on tasks has been deprecated. Users should use the with_first_found option or lookup (first_found, …) plugin.
Porting plugins
===============
In ansible-1.9.x, you would generally copy an existing plugin to create a new one. Simply implementing the methods and attributes that the caller of the plugin expected made it a plugin of that type. In ansible-2.0, most plugins are implemented by subclassing a base class for each plugin type. This way the custom plugin does not need to contain methods which are not customized.
Lookup plugins
--------------
* lookup plugins ; import version
Connection plugins
------------------
* connection plugins
Action plugins
--------------
* action plugins
Callback plugins
----------------
* callback plugins
Porting custom scripts
======================
Custom scripts that used the ``ansible.runner.Runner`` API in 1.x have to be ported for 2.x. Please refer to:
https://github.com/ansible/ansible/blob/devel/docsite/rst/developing_api.rst
@ -14,7 +14,6 @@
#inventory = /etc/ansible/hosts
#library = /usr/share/my_modules/
#remote_tmp = $HOME/.ansible/tmp
#forks = 5
#poll_interval = 15
#sudo_user = root
@ -182,7 +181,7 @@
#no_log = False

# prevents logging of tasks, but only on the targets, data is still logged on the master/controller
#no_target_syslog = False
# controls the compression level of variables sent to
# worker processes. At the default of 0, no compression
@ -263,3 +262,14 @@
# the default behaviour that copies the existing context or uses the user default
# needs to be changed to use the file system dependent context.
#special_context_filesystems=nfs,vboxsf,fuse,ramfs
[colors]
#verbose = blue
#warn = bright purple
#error = red
#debug = dark gray
#deprecate = purple
#skip = cyan
#unreachable = red
#ok = green
#changed = yellow
@ -10,35 +10,35 @@
# Ex 1: Ungrouped hosts, specify before any group headers.

## green.example.com
## blue.example.com
## 192.168.100.1
## 192.168.100.10

# Ex 2: A collection of hosts belonging to the 'webservers' group

## [webservers]
## alpha.example.org
## beta.example.org
## 192.168.1.100
## 192.168.1.110

# If you have multiple hosts following a pattern you can specify
# them like this:

## www[001:006].example.com

# Ex 3: A collection of database servers in the 'dbservers' group

## [dbservers]
##
## db01.intranet.mydomain.net
## db02.intranet.mydomain.net
## 10.25.1.56
## 10.25.1.57

# Here's another example of host ranges, this time there are no
# leading 0s:

## db-[99:101]-node.example.com
@ -57,10 +57,10 @@ fi
cd "$ANSIBLE_HOME" cd "$ANSIBLE_HOME"
if [ "$verbosity" = silent ] ; then if [ "$verbosity" = silent ] ; then
gen_egg_info > /dev/null 2>&1 gen_egg_info > /dev/null 2>&1
find . -type f -name "*.pyc" -delete > /dev/null 2>&1 find . -type f -name "*.pyc" -exec rm {} \; > /dev/null 2>&1
else else
gen_egg_info gen_egg_info
find . -type f -name "*.pyc" -delete find . -type f -name "*.pyc" -exec rm {} \;
fi fi
cd "$current_dir" cd "$current_dir"
) )
@ -140,7 +140,7 @@ def list_modules(module_dir, depth=0):
        if os.path.isdir(d):

            res = list_modules(d, depth + 1)
            for key in list(res.keys()):
                if key in categories:
                    categories[key] = merge_hash(categories[key], res[key])
                    res.pop(key, None)
@ -451,7 +451,7 @@ def main():
    categories = list_modules(options.module_dir)
    last_category = None
    category_names = list(categories.keys())
    category_names.sort()

    category_list_path = os.path.join(options.output_dir, "modules_by_category.rst")
@ -19,5 +19,5 @@
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

__version__ = '2.1.0'
__author__ = 'Ansible, Inc.'
@ -32,7 +32,7 @@ import subprocess
from ansible import __version__
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.utils.unicode import to_bytes, to_unicode
try:
    from __main__ import display
@ -66,7 +66,7 @@ class CLI(object):
    LESS_OPTS = 'FRSX'  # -F (quit-if-one-screen) -R (allow raw ansi control chars)
                        # -S (chop long lines) -X (disable termcap init and de-init)

    def __init__(self, args, callback=None):
        """
        Base init method for all command line programs
        """
@ -75,6 +75,7 @@ class CLI(object):
        self.options = None
        self.parser = None
        self.action = None
        self.callback = callback
    def set_action(self):
        """
@ -104,9 +105,9 @@ class CLI(object):
        if self.options.verbosity > 0:
            if C.CONFIG_FILE:
                display.display(u"Using %s as config file" % to_unicode(C.CONFIG_FILE))
            else:
                display.display(u"No config file found; using defaults")
    @staticmethod
    def ask_vault_passwords(ask_new_vault_pass=False, rekey=False):
@ -191,12 +192,9 @@ class CLI(object):
        if runas_opts:
            # Check for privilege escalation conflicts
            if (op.su or op.su_user) and (op.sudo or op.sudo_user) or \
               (op.su or op.su_user) and (op.become or op.become_user) or \
               (op.sudo or op.sudo_user) and (op.become or op.become_user):

                self.parser.error("Sudo arguments ('--sudo', '--sudo-user', and '--ask-sudo-pass') "
                                  "and su arguments ('-su', '--su-user', and '--ask-su-pass') "
@ -213,7 +211,7 @@ class CLI(object):
    @staticmethod
    def base_parser(usage="", output_opts=False, runas_opts=False, meta_opts=False, runtask_opts=False, vault_opts=False, module_opts=False,
                    async_opts=False, connect_opts=False, subset_opts=False, check_opts=False, inventory_opts=False, epilog=None, fork_opts=False, runas_prompt_opts=False):
        ''' create an options parser for most ansible scripts '''

        # TODO: implement epilog parsing
@ -246,14 +244,15 @@ class CLI(object):
help="specify number of parallel processes to use (default=%s)" % C.DEFAULT_FORKS) help="specify number of parallel processes to use (default=%s)" % C.DEFAULT_FORKS)
if vault_opts: if vault_opts:
parser.add_option('--ask-vault-pass', default=False, dest='ask_vault_pass', action='store_true', parser.add_option('--ask-vault-pass', default=C.DEFAULT_ASK_VAULT_PASS, dest='ask_vault_pass', action='store_true',
help='ask for vault password') help='ask for vault password')
parser.add_option('--vault-password-file', default=C.DEFAULT_VAULT_PASSWORD_FILE, dest='vault_password_file', parser.add_option('--vault-password-file', default=C.DEFAULT_VAULT_PASSWORD_FILE, dest='vault_password_file',
help="vault password file", action="callback", callback=CLI.expand_tilde, type=str) help="vault password file", action="callback", callback=CLI.expand_tilde, type=str)
parser.add_option('--new-vault-password-file', dest='new_vault_password_file', parser.add_option('--new-vault-password-file', dest='new_vault_password_file',
help="new vault password file for rekey", action="callback", callback=CLI.expand_tilde, type=str) help="new vault password file for rekey", action="callback", callback=CLI.expand_tilde, type=str)
parser.add_option('--output', default=None, dest='output_file', parser.add_option('--output', default=None, dest='output_file',
help='output file name for encrypt or decrypt; use - for stdout') help='output file name for encrypt or decrypt; use - for stdout',
action="callback", callback=CLI.expand_tilde, type=str)
if subset_opts: if subset_opts:
parser.add_option('-t', '--tags', dest='tags', default='all', parser.add_option('-t', '--tags', dest='tags', default='all',
@ -269,10 +268,6 @@ class CLI(object):
if runas_opts: if runas_opts:
# priv user defaults to root later on to enable detecting when this option was given here # priv user defaults to root later on to enable detecting when this option was given here
parser.add_option('-K', '--ask-sudo-pass', default=C.DEFAULT_ASK_SUDO_PASS, dest='ask_sudo_pass', action='store_true',
help='ask for sudo password (deprecated, use become)')
parser.add_option('--ask-su-pass', default=C.DEFAULT_ASK_SU_PASS, dest='ask_su_pass', action='store_true',
help='ask for su password (deprecated, use become)')
parser.add_option("-s", "--sudo", default=C.DEFAULT_SUDO, action="store_true", dest='sudo', parser.add_option("-s", "--sudo", default=C.DEFAULT_SUDO, action="store_true", dest='sudo',
help="run operations with sudo (nopasswd) (deprecated, use become)") help="run operations with sudo (nopasswd) (deprecated, use become)")
parser.add_option('-U', '--sudo-user', dest='sudo_user', default=None, parser.add_option('-U', '--sudo-user', dest='sudo_user', default=None,
@ -289,6 +284,12 @@ class CLI(object):
help="privilege escalation method to use (default=%s), valid choices: [ %s ]" % (C.DEFAULT_BECOME_METHOD, ' | '.join(C.BECOME_METHODS))) help="privilege escalation method to use (default=%s), valid choices: [ %s ]" % (C.DEFAULT_BECOME_METHOD, ' | '.join(C.BECOME_METHODS)))
parser.add_option('--become-user', default=None, dest='become_user', type='string', parser.add_option('--become-user', default=None, dest='become_user', type='string',
help='run operations as this user (default=%s)' % C.DEFAULT_BECOME_USER) help='run operations as this user (default=%s)' % C.DEFAULT_BECOME_USER)
if runas_opts or runas_prompt_opts:
parser.add_option('-K', '--ask-sudo-pass', default=C.DEFAULT_ASK_SUDO_PASS, dest='ask_sudo_pass', action='store_true',
help='ask for sudo password (deprecated, use become)')
parser.add_option('--ask-su-pass', default=C.DEFAULT_ASK_SU_PASS, dest='ask_su_pass', action='store_true',
help='ask for su password (deprecated, use become)')
parser.add_option('--ask-become-pass', default=False, dest='become_ask_pass', action='store_true', parser.add_option('--ask-become-pass', default=False, dest='become_ask_pass', action='store_true',
help='ask for privilege escalation password') help='ask for privilege escalation password')
View file
@ -70,7 +70,7 @@ class AdHocCLI(CLI):
help="module name to execute (default=%s)" % C.DEFAULT_MODULE_NAME, help="module name to execute (default=%s)" % C.DEFAULT_MODULE_NAME,
default=C.DEFAULT_MODULE_NAME) default=C.DEFAULT_MODULE_NAME)
self.options, self.args = self.parser.parse_args() self.options, self.args = self.parser.parse_args(self.args[1:])
if len(self.args) != 1: if len(self.args) != 1:
raise AnsibleOptionsError("Missing target hosts") raise AnsibleOptionsError("Missing target hosts")
@ -124,17 +124,13 @@ class AdHocCLI(CLI):
inventory = Inventory(loader=loader, variable_manager=variable_manager, host_list=self.options.inventory) inventory = Inventory(loader=loader, variable_manager=variable_manager, host_list=self.options.inventory)
variable_manager.set_inventory(inventory) variable_manager.set_inventory(inventory)
hosts = inventory.list_hosts(pattern)
no_hosts = False
if len(hosts) == 0:
display.warning("provided hosts list is empty, only localhost is available")
no_hosts = True
if self.options.subset: if self.options.subset:
inventory.subset(self.options.subset) inventory.subset(self.options.subset)
if len(inventory.list_hosts(pattern)) == 0 and not no_hosts:
# Invalid limit hosts = inventory.list_hosts(pattern)
raise AnsibleError("Specified --limit does not match any hosts") if len(hosts) == 0:
raise AnsibleError("Specified hosts options do not match any hosts")
if self.options.listhosts: if self.options.listhosts:
display.display(' hosts (%d):' % len(hosts)) display.display(' hosts (%d):' % len(hosts))
@ -158,14 +154,18 @@ class AdHocCLI(CLI):
play_ds = self._play_ds(pattern, self.options.seconds, self.options.poll_interval) play_ds = self._play_ds(pattern, self.options.seconds, self.options.poll_interval)
play = Play().load(play_ds, variable_manager=variable_manager, loader=loader) play = Play().load(play_ds, variable_manager=variable_manager, loader=loader)
if self.options.one_line: if self.callback:
cb = self.callback
elif self.options.one_line:
cb = 'oneline' cb = 'oneline'
else: else:
cb = 'minimal' cb = 'minimal'
run_tree=False
if self.options.tree: if self.options.tree:
C.DEFAULT_CALLBACK_WHITELIST.append('tree') C.DEFAULT_CALLBACK_WHITELIST.append('tree')
C.TREE_DIR = self.options.tree C.TREE_DIR = self.options.tree
run_tree=True
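With the new `callback` parameter on `CLI.__init__`, stdout callback selection now has a clear precedence; a condensed sketch (function name illustrative):

```python
def pick_stdout_callback(callback, one_line):
    # an explicitly injected callback (CLI(args, callback=...)) wins,
    # then --one-line, then the minimal default
    if callback:
        return callback
    return 'oneline' if one_line else 'minimal'
```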
# now create a task queue manager to execute the play # now create a task queue manager to execute the play
self._tqm = None self._tqm = None
@ -177,6 +177,8 @@ class AdHocCLI(CLI):
options=self.options, options=self.options,
passwords=passwords, passwords=passwords,
stdout_callback=cb, stdout_callback=cb,
run_additional_callbacks=C.DEFAULT_LOAD_CALLBACK_PLUGINS,
run_tree=run_tree,
) )
result = self._tqm.run(play) result = self._tqm.run(play)
finally: finally:
View file
@ -62,7 +62,7 @@ class DocCLI(CLI):
self.parser.add_option("-s", "--snippet", action="store_true", default=False, dest='show_snippet', self.parser.add_option("-s", "--snippet", action="store_true", default=False, dest='show_snippet',
help='Show playbook snippet for specified module(s)') help='Show playbook snippet for specified module(s)')
self.options, self.args = self.parser.parse_args() self.options, self.args = self.parser.parse_args(self.args[1:])
display.verbosity = self.options.verbosity display.verbosity = self.options.verbosity
def run(self): def run(self):
@ -90,7 +90,8 @@ class DocCLI(CLI):
for module in self.args: for module in self.args:
try: try:
filename = module_loader.find_plugin(module) # if the module lives in a non-python file (e.g., win_X.ps1), require the corresponding python file for docs
filename = module_loader.find_plugin(module, mod_type='.py')
if filename is None: if filename is None:
display.warning("module %s not found in %s\n" % (module, DocCLI.print_paths(module_loader))) display.warning("module %s not found in %s\n" % (module, DocCLI.print_paths(module_loader)))
continue continue
@ -167,7 +168,8 @@ class DocCLI(CLI):
if module in module_docs.BLACKLIST_MODULES: if module in module_docs.BLACKLIST_MODULES:
continue continue
filename = module_loader.find_plugin(module) # if the module lives in a non-python file (e.g., win_X.ps1), require the corresponding python file for docs
filename = module_loader.find_plugin(module, mod_type='.py')
if filename is None: if filename is None:
continue continue
View file
@ -22,10 +22,10 @@
from __future__ import (absolute_import, division, print_function) from __future__ import (absolute_import, division, print_function)
__metaclass__ = type __metaclass__ = type
import os
import os.path import os.path
import sys import sys
import yaml import yaml
import time
from collections import defaultdict from collections import defaultdict
from jinja2 import Environment from jinja2 import Environment
@ -36,6 +36,8 @@ from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.galaxy import Galaxy from ansible.galaxy import Galaxy
from ansible.galaxy.api import GalaxyAPI from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.role import GalaxyRole from ansible.galaxy.role import GalaxyRole
from ansible.galaxy.login import GalaxyLogin
from ansible.galaxy.token import GalaxyToken
from ansible.playbook.role.requirement import RoleRequirement from ansible.playbook.role.requirement import RoleRequirement
try: try:
@ -44,14 +46,12 @@ except ImportError:
from ansible.utils.display import Display from ansible.utils.display import Display
display = Display() display = Display()
class GalaxyCLI(CLI): class GalaxyCLI(CLI):
VALID_ACTIONS = ("init", "info", "install", "list", "remove", "search")
SKIP_INFO_KEYS = ("name", "description", "readme_html", "related", "summary_fields", "average_aw_composite", "average_aw_score", "url" ) SKIP_INFO_KEYS = ("name", "description", "readme_html", "related", "summary_fields", "average_aw_composite", "average_aw_score", "url" )
VALID_ACTIONS = ("delete", "import", "info", "init", "install", "list", "login", "remove", "search", "setup")
def __init__(self, args): def __init__(self, args):
self.api = None self.api = None
self.galaxy = None self.galaxy = None
super(GalaxyCLI, self).__init__(args) super(GalaxyCLI, self).__init__(args)
@ -67,7 +67,17 @@ class GalaxyCLI(CLI):
self.set_action() self.set_action()
# options specific to actions # options specific to actions
if self.action == "info": if self.action == "delete":
self.parser.set_usage("usage: %prog delete [options] github_user github_repo")
elif self.action == "import":
self.parser.set_usage("usage: %prog import [options] github_user github_repo")
self.parser.add_option('--no-wait', dest='wait', action='store_false', default=True,
help='Don\'t wait for import results.')
self.parser.add_option('--branch', dest='reference',
help='The name of a branch to import. Defaults to the repository\'s default branch (usually master)')
self.parser.add_option('--status', dest='check_status', action='store_true', default=False,
help='Check the status of the most recent import request for given github_user/github_repo.')
elif self.action == "info":
self.parser.set_usage("usage: %prog info [options] role_name[,version]") self.parser.set_usage("usage: %prog info [options] role_name[,version]")
elif self.action == "init": elif self.action == "init":
self.parser.set_usage("usage: %prog init [options] role_name") self.parser.set_usage("usage: %prog init [options] role_name")
@ -88,31 +98,42 @@ class GalaxyCLI(CLI):
self.parser.set_usage("usage: %prog remove role1 role2 ...") self.parser.set_usage("usage: %prog remove role1 role2 ...")
elif self.action == "list": elif self.action == "list":
self.parser.set_usage("usage: %prog list [role_name]") self.parser.set_usage("usage: %prog list [role_name]")
elif self.action == "login":
self.parser.set_usage("usage: %prog login [options]")
self.parser.add_option('--github-token', dest='token', default=None,
help='Identify with GitHub token rather than username and password.')
elif self.action == "search": elif self.action == "search":
self.parser.add_option('--platforms', dest='platforms', self.parser.add_option('--platforms', dest='platforms',
help='list of OS platforms to filter by') help='list of OS platforms to filter by')
self.parser.add_option('--galaxy-tags', dest='tags', self.parser.add_option('--galaxy-tags', dest='tags',
help='list of galaxy tags to filter by') help='list of galaxy tags to filter by')
self.parser.set_usage("usage: %prog search [<search_term>] [--galaxy-tags <galaxy_tag1,galaxy_tag2>] [--platforms platform]") self.parser.add_option('--author', dest='author',
help='GitHub username')
self.parser.set_usage("usage: %prog search [searchterm1 searchterm2] [--galaxy-tags galaxy_tag1,galaxy_tag2] [--platforms platform1,platform2] [--author username]")
elif self.action == "setup":
self.parser.set_usage("usage: %prog setup [options] source github_user github_repo secret")
self.parser.add_option('--remove', dest='remove_id', default=None,
help='Remove the integration matching the provided ID value. Use --list to see ID values.')
self.parser.add_option('--list', dest="setup_list", action='store_true', default=False,
help='List all of your integrations.')
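Each entry in VALID_ACTIONS corresponds to an `execute_<action>` method defined later in this file; a sketch of the dispatch, assuming the base CLI resolves the handler by name:

```python
def execute(self):
    # e.g. action "import" dispatches to GalaxyCLI.execute_import
    fn = getattr(self, "execute_%s" % self.action)
    return fn()
```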
# options that apply to more than one action # options that apply to more than one action
if self.action != "init": if not self.action in ("delete","import","init","login","setup"):
self.parser.add_option('-p', '--roles-path', dest='roles_path', default=C.DEFAULT_ROLES_PATH, self.parser.add_option('-p', '--roles-path', dest='roles_path', default=C.DEFAULT_ROLES_PATH,
help='The path to the directory containing your roles. ' help='The path to the directory containing your roles. '
'The default is the roles_path configured in your ' 'The default is the roles_path configured in your '
'ansible.cfg file (/etc/ansible/roles if not configured)') 'ansible.cfg file (/etc/ansible/roles if not configured)')
if self.action in ("info","init","install","search"): if self.action in ("import","info","init","install","login","search","setup","delete"):
self.parser.add_option('-s', '--server', dest='api_server', default="https://galaxy.ansible.com", self.parser.add_option('-s', '--server', dest='api_server', default=C.GALAXY_SERVER,
help='The API server destination') help='The API server destination')
self.parser.add_option('-c', '--ignore-certs', action='store_false', dest='validate_certs', default=True, self.parser.add_option('-c', '--ignore-certs', action='store_true', dest='ignore_certs', default=False,
help='Ignore SSL certificate validation errors.') help='Ignore SSL certificate validation errors.')
if self.action in ("init","install"): if self.action in ("init","install"):
self.parser.add_option('-f', '--force', dest='force', action='store_true', default=False, self.parser.add_option('-f', '--force', dest='force', action='store_true', default=False,
help='Force overwriting an existing role') help='Force overwriting an existing role')
# get options, args and galaxy object
self.options, self.args =self.parser.parse_args() self.options, self.args =self.parser.parse_args()
display.verbosity = self.options.verbosity display.verbosity = self.options.verbosity
self.galaxy = Galaxy(self.options) self.galaxy = Galaxy(self.options)
@ -120,15 +141,13 @@ class GalaxyCLI(CLI):
return True return True
def run(self): def run(self):
super(GalaxyCLI, self).run() super(GalaxyCLI, self).run()
# if not offline, connect to the galaxy API # if not offline, connect to the galaxy API
if self.action in ("info","install", "search") or (self.action == 'init' and not self.options.offline): if self.action in ("import","info","install","search","login","setup","delete") or \
api_server = self.options.api_server (self.action == 'init' and not self.options.offline):
self.api = GalaxyAPI(self.galaxy, api_server) self.api = GalaxyAPI(self.galaxy)
if not self.api:
raise AnsibleError("The API server (%s) is not responding, please try again later." % api_server)
self.execute() self.execute()
@ -188,7 +207,7 @@ class GalaxyCLI(CLI):
"however it will reset any main.yml files that may have\n" "however it will reset any main.yml files that may have\n"
"been modified there already." % role_path) "been modified there already." % role_path)
# create the default README.md # create default README.md
if not os.path.exists(role_path): if not os.path.exists(role_path):
os.makedirs(role_path) os.makedirs(role_path)
readme_path = os.path.join(role_path, "README.md") readme_path = os.path.join(role_path, "README.md")
@ -196,9 +215,16 @@ class GalaxyCLI(CLI):
f.write(self.galaxy.default_readme) f.write(self.galaxy.default_readme)
f.close() f.close()
# create default .travis.yml
travis = Environment().from_string(self.galaxy.default_travis).render()
f = open(os.path.join(role_path, '.travis.yml'), 'w')
f.write(travis)
f.close()
for dir in GalaxyRole.ROLE_DIRS: for dir in GalaxyRole.ROLE_DIRS:
dir_path = os.path.join(init_path, role_name, dir) dir_path = os.path.join(init_path, role_name, dir)
main_yml_path = os.path.join(dir_path, 'main.yml') main_yml_path = os.path.join(dir_path, 'main.yml')
# create the directory if it doesn't exist already # create the directory if it doesn't exist already
if not os.path.exists(dir_path): if not os.path.exists(dir_path):
os.makedirs(dir_path) os.makedirs(dir_path)
@ -234,6 +260,20 @@ class GalaxyCLI(CLI):
f.write(rendered_meta) f.write(rendered_meta)
f.close() f.close()
pass pass
elif dir == "tests":
# create tests/test.yml
inject = dict(
role_name = role_name
)
playbook = Environment().from_string(self.galaxy.default_test).render(inject)
f = open(os.path.join(dir_path, 'test.yml'), 'w')
f.write(playbook)
f.close()
# create tests/inventory
f = open(os.path.join(dir_path, 'inventory'), 'w')
f.write('localhost')
f.close()
elif dir not in ('files','templates'): elif dir not in ('files','templates'):
# just write a (mostly) empty YAML file for main.yml # just write a (mostly) empty YAML file for main.yml
f = open(main_yml_path, 'w') f = open(main_yml_path, 'w')
@ -325,7 +365,7 @@ class GalaxyCLI(CLI):
for role in required_roles: for role in required_roles:
role = RoleRequirement.role_yaml_parse(role) role = RoleRequirement.role_yaml_parse(role)
display.debug('found role %s in yaml file' % str(role)) display.vvv('found role %s in yaml file' % str(role))
if 'name' not in role and 'scm' not in role: if 'name' not in role and 'scm' not in role:
raise AnsibleError("Must specify name or src for role") raise AnsibleError("Must specify name or src for role")
roles_left.append(GalaxyRole(self.galaxy, **role)) roles_left.append(GalaxyRole(self.galaxy, **role))
@ -348,7 +388,7 @@ class GalaxyCLI(CLI):
roles_left.append(GalaxyRole(self.galaxy, rname.strip())) roles_left.append(GalaxyRole(self.galaxy, rname.strip()))
for role in roles_left: for role in roles_left:
display.debug('Installing role %s ' % role.name) display.vvv('Installing role %s ' % role.name)
# query the galaxy API for the role data # query the galaxy API for the role data
if role.install_info is not None and not force: if role.install_info is not None and not force:
@ -458,21 +498,187 @@ class GalaxyCLI(CLI):
return 0 return 0
def execute_search(self): def execute_search(self):
page_size = 1000
search = None search = None
if len(self.args) > 1:
raise AnsibleOptionsError("At most a single search term is allowed.")
elif len(self.args) == 1:
search = self.args.pop()
response = self.api.search_roles(search, self.options.platforms, self.options.tags) if len(self.args):
terms = []
for i in range(len(self.args)):
terms.append(self.args.pop())
search = '+'.join(terms[::-1])
if 'count' in response: if not search and not self.options.platforms and not self.options.tags and not self.options.author:
display.display("Found %d roles matching your search:\n" % response['count']) raise AnsibleError("Invalid query. At least one search term, platform, galaxy tag or author must be provided.")
response = self.api.search_roles(search, platforms=self.options.platforms,
tags=self.options.tags, author=self.options.author, page_size=page_size)
if response['count'] == 0:
display.display("No roles match your search.", color=C.COLOR_ERROR)
return True
data = '' data = ''
if 'results' in response:
for role in response['results']: if response['count'] > page_size:
data += self._display_role_info(role) data += ("\nFound %d roles matching your search. Showing first %s.\n" % (response['count'], page_size))
else:
data += ("\nFound %d roles matching your search:\n" % response['count'])
max_len = []
for role in response['results']:
max_len.append(len(role['username'] + '.' + role['name']))
name_len = max(max_len)
format_str = " %%-%ds %%s\n" % name_len
data +='\n'
data += (format_str % ("Name", "Description"))
data += (format_str % ("----", "-----------"))
for role in response['results']:
data += (format_str % (role['username'] + '.' + role['name'],role['description']))
self.pager(data) self.pager(data)
return True
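The column layout above is worth unpacking: the `%%-%ds` double-escape builds a second format string whose Name column is as wide as the longest `username.name` pair. A standalone sketch with made-up results:

```python
results = [{'username': 'alice', 'name': 'nginx',      'description': 'Installs nginx'},
           {'username': 'bob',   'name': 'postgresql', 'description': 'Manages PostgreSQL'}]

name_len = max(len('%s.%s' % (r['username'], r['name'])) for r in results)
format_str = " %%-%ds %%s\n" % name_len      # becomes " %-14s %s\n" here
out = format_str % ("Name", "Description")
out += format_str % ("----", "-----------")
for r in results:
    out += format_str % ('%s.%s' % (r['username'], r['name']), r['description'])
print(out)
```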
def execute_login(self):
"""
Verify user's identity via GitHub and retrieve an auth token from Galaxy.
"""
# Authenticate with GitHub and retrieve a token
if self.options.token is None:
login = GalaxyLogin(self.galaxy)
github_token = login.create_github_token()
else:
github_token = self.options.token
galaxy_response = self.api.authenticate(github_token)
if self.options.token is None:
# Remove the token we created
login.remove_github_token()
# Store the Galaxy token
token = GalaxyToken()
token.set(galaxy_response['token'])
display.display("Succesfully logged into Galaxy as %s" % galaxy_response['username'])
return 0
def execute_import(self):
"""
Import a role into Galaxy
"""
colors = {
'INFO': 'normal',
'WARNING': C.COLOR_WARN,
'ERROR': C.COLOR_ERROR,
'SUCCESS': C.COLOR_OK,
'FAILED': C.COLOR_ERROR,
}
if len(self.args) < 2:
raise AnsibleError("Expected a github_username and github_repository. Use --help.")
github_repo = self.args.pop()
github_user = self.args.pop()
if self.options.check_status:
task = self.api.get_import_task(github_user=github_user, github_repo=github_repo)
else:
# Submit an import request
task = self.api.create_import_task(github_user, github_repo, reference=self.options.reference)
if len(task) > 1:
# found multiple roles associated with github_user/github_repo
display.display("WARNING: More than one Galaxy role associated with Github repo %s/%s." % (github_user,github_repo),
color='yellow')
display.display("The following Galaxy roles are being updated:" + u'\n', color=C.COLOR_CHANGED)
for t in task:
display.display('%s.%s' % (t['summary_fields']['role']['namespace'],t['summary_fields']['role']['name']), color=C.COLOR_CHANGED)
display.display(u'\n' + "To properly namespace this role, remove each of the above and re-import %s/%s from scratch" % (github_user,github_repo), color=C.COLOR_CHANGED)
return 0
# found a single role as expected
display.display("Successfully submitted import request %d" % task[0]['id'])
if not self.options.wait:
display.display("Role name: %s" % task[0]['summary_fields']['role']['name'])
display.display("Repo: %s/%s" % (task[0]['github_user'],task[0]['github_repo']))
if self.options.check_status or self.options.wait:
# Get the status of the import
msg_list = []
finished = False
while not finished:
task = self.api.get_import_task(task_id=task[0]['id'])
for msg in task[0]['summary_fields']['task_messages']:
if msg['id'] not in msg_list:
display.display(msg['message_text'], color=colors[msg['message_type']])
msg_list.append(msg['id'])
if task[0]['state'] in ['SUCCESS', 'FAILED']:
finished = True
else:
time.sleep(10)
return 0
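The wait loop above boils down to: poll the import task every ten seconds, print each task message exactly once (deduplicated by id), and stop on a terminal state. Isolated, with `fetch_task` standing in for `self.api.get_import_task(task_id=...)`:

```python
import time

def wait_for_import(fetch_task, task_id, poll=10):
    seen = set()
    while True:
        task = fetch_task(task_id)[0]
        for msg in task['summary_fields']['task_messages']:
            if msg['id'] not in seen:
                print(msg['message_text'])
                seen.add(msg['id'])
        if task['state'] in ('SUCCESS', 'FAILED'):
            return task['state']
        time.sleep(poll)          # re-poll until the import finishes
```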
def execute_setup(self):
"""
Set up an integration from GitHub or Travis
"""
if self.options.setup_list:
# List existing integration secrets
secrets = self.api.list_secrets()
if len(secrets) == 0:
# None found
display.display("No integrations found.")
return 0
display.display(u'\n' + "ID Source Repo", color=C.COLOR_OK)
display.display("---------- ---------- ----------", color=C.COLOR_OK)
for secret in secrets:
display.display("%-10s %-10s %s/%s" % (secret['id'], secret['source'], secret['github_user'],
secret['github_repo']),color=C.COLOR_OK)
return 0
if self.options.remove_id:
# Remove a secret
self.api.remove_secret(self.options.remove_id)
display.display("Secret removed. Integrations using this secret will not longer work.", color=C.COLOR_OK)
return 0
if len(self.args) < 4:
raise AnsibleError("Missing one or more arguments. Expecting: source github_user github_repo secret")
secret = self.args.pop()
github_repo = self.args.pop()
github_user = self.args.pop()
source = self.args.pop()
resp = self.api.add_secret(source, github_user, github_repo, secret)
display.display("Added integration for %s %s/%s" % (resp['source'], resp['github_user'], resp['github_repo']))
return 0
def execute_delete(self):
"""
Delete a role from galaxy.ansible.com
"""
if len(self.args) < 2:
raise AnsibleError("Missing one or more arguments. Expected: github_user github_repo")
github_repo = self.args.pop()
github_user = self.args.pop()
resp = self.api.delete_role(github_user, github_repo)
if len(resp['deleted_roles']) > 1:
display.display("Deleted the following roles:")
display.display("ID User Name")
display.display("------ --------------- ----------")
for role in resp['deleted_roles']:
display.display("%-8s %-15s %s" % (role.id,role.namespace,role.name))
display.display(resp['status'])
return True
View file
@ -30,6 +30,7 @@ from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.executor.playbook_executor import PlaybookExecutor from ansible.executor.playbook_executor import PlaybookExecutor
from ansible.inventory import Inventory from ansible.inventory import Inventory
from ansible.parsing.dataloader import DataLoader from ansible.parsing.dataloader import DataLoader
from ansible.playbook.play_context import PlayContext
from ansible.utils.vars import load_extra_vars from ansible.utils.vars import load_extra_vars
from ansible.vars import VariableManager from ansible.vars import VariableManager
@ -72,7 +73,7 @@ class PlaybookCLI(CLI):
parser.add_option('--start-at-task', dest='start_at_task', parser.add_option('--start-at-task', dest='start_at_task',
help="start the playbook at the task matching this name") help="start the playbook at the task matching this name")
self.options, self.args = parser.parse_args() self.options, self.args = parser.parse_args(self.args[1:])
self.parser = parser self.parser = parser
@ -152,18 +153,10 @@ class PlaybookCLI(CLI):
for p in results: for p in results:
display.display('\nplaybook: %s' % p['playbook']) display.display('\nplaybook: %s' % p['playbook'])
i = 1 for idx, play in enumerate(p['plays']):
for play in p['plays']: msg = "\n play #%d (%s): %s" % (idx + 1, ','.join(play.hosts), play.name)
if play.name: mytags = set(play.tags)
playname = play.name msg += '\tTAGS: [%s]' % (','.join(mytags))
else:
playname = '#' + str(i)
msg = "\n PLAY: %s" % (playname)
mytags = set()
if self.options.listtags and play.tags:
mytags = mytags.union(set(play.tags))
msg += ' TAGS: [%s]' % (','.join(mytags))
if self.options.listhosts: if self.options.listhosts:
playhosts = set(inventory.get_hosts(play.hosts)) playhosts = set(inventory.get_hosts(play.hosts))
@ -173,23 +166,40 @@ class PlaybookCLI(CLI):
display.display(msg) display.display(msg)
all_tags = set()
if self.options.listtags or self.options.listtasks: if self.options.listtags or self.options.listtasks:
taskmsg = ' tasks:' taskmsg = ''
if self.options.listtasks:
taskmsg = ' tasks:\n'
all_vars = variable_manager.get_vars(loader=loader, play=play)
play_context = PlayContext(play=play, options=self.options)
for block in play.compile(): for block in play.compile():
block = block.filter_tagged_tasks(play_context, all_vars)
if not block.has_tasks(): if not block.has_tasks():
continue continue
j = 1
for task in block.block: for task in block.block:
taskmsg += "\n %s" % task if task.action == 'meta':
if self.options.listtags and task.tags: continue
taskmsg += " TAGS: [%s]" % ','.join(mytags.union(set(task.tags)))
j = j + 1 all_tags.update(task.tags)
if self.options.listtasks:
cur_tags = list(mytags.union(set(task.tags)))
cur_tags.sort()
if task.name:
taskmsg += " %s" % task.get_name()
else:
taskmsg += " %s" % task.action
taskmsg += "\tTAGS: [%s]\n" % ', '.join(cur_tags)
if self.options.listtags:
cur_tags = list(mytags.union(all_tags))
cur_tags.sort()
taskmsg += " TASK TAGS: [%s]\n" % ', '.join(cur_tags)
display.display(taskmsg) display.display(taskmsg)
i = i + 1
return 0 return 0
else: else:
return results return results
View file
@ -64,18 +64,24 @@ class PullCLI(CLI):
subset_opts=True, subset_opts=True,
inventory_opts=True, inventory_opts=True,
module_opts=True, module_opts=True,
runas_prompt_opts=True,
) )
# options unique to pull # options unique to pull
self.parser.add_option('--purge', default=False, action='store_true', help='purge checkout after playbook run') self.parser.add_option('--purge', default=False, action='store_true',
help='purge checkout after playbook run')
self.parser.add_option('-o', '--only-if-changed', dest='ifchanged', default=False, action='store_true', self.parser.add_option('-o', '--only-if-changed', dest='ifchanged', default=False, action='store_true',
help='only run the playbook if the repository has been updated') help='only run the playbook if the repository has been updated')
self.parser.add_option('-s', '--sleep', dest='sleep', default=None, self.parser.add_option('-s', '--sleep', dest='sleep', default=None,
help='sleep for random interval (between 0 and n number of seconds) before starting. This is a useful way to disperse git requests') help='sleep for random interval (between 0 and n number of seconds) before starting. This is a useful way to disperse git requests')
self.parser.add_option('-f', '--force', dest='force', default=False, action='store_true', self.parser.add_option('-f', '--force', dest='force', default=False, action='store_true',
help='run the playbook even if the repository could not be updated') help='run the playbook even if the repository could not be updated')
self.parser.add_option('-d', '--directory', dest='dest', default='~/.ansible/pull', help='directory to checkout repository to') self.parser.add_option('-d', '--directory', dest='dest', default=None,
self.parser.add_option('-U', '--url', dest='url', default=None, help='URL of the playbook repository') help='directory to checkout repository to')
self.parser.add_option('-U', '--url', dest='url', default=None,
help='URL of the playbook repository')
self.parser.add_option('--full', dest='fullclone', action='store_true',
help='Do a full clone, instead of a shallow one.')
self.parser.add_option('-C', '--checkout', dest='checkout', self.parser.add_option('-C', '--checkout', dest='checkout',
help='branch/tag/commit to checkout. ' 'Defaults to behavior of repository module.') help='branch/tag/commit to checkout. ' 'Defaults to behavior of repository module.')
self.parser.add_option('--accept-host-key', default=False, dest='accept_host_key', action='store_true', self.parser.add_option('--accept-host-key', default=False, dest='accept_host_key', action='store_true',
@ -86,7 +92,13 @@ class PullCLI(CLI):
help='verify GPG signature of checked out commit, if it fails abort running the playbook.' help='verify GPG signature of checked out commit, if it fails abort running the playbook.'
' This needs the corresponding VCS module to support such an operation') ' This needs the corresponding VCS module to support such an operation')
self.options, self.args = self.parser.parse_args() self.options, self.args = self.parser.parse_args(self.args[1:])
if not self.options.dest:
hostname = socket.getfqdn()
# use a hostname-dependent directory, in case $HOME is on NFS
self.options.dest = os.path.join('~/.ansible/pull', hostname)
self.options.dest = os.path.expandvars(os.path.expanduser(self.options.dest))
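The new default checkout path is hostname-dependent so that machines sharing an NFS-mounted $HOME do not fight over one checkout. In isolation:

```python
import os
import socket

dest = os.path.join('~/.ansible/pull', socket.getfqdn())
dest = os.path.expandvars(os.path.expanduser(dest))
# e.g. /home/user/.ansible/pull/web1.example.com
```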
if self.options.sleep: if self.options.sleep:
try: try:
@ -119,7 +131,7 @@ class PullCLI(CLI):
node = platform.node() node = platform.node()
host = socket.getfqdn() host = socket.getfqdn()
limit_opts = 'localhost,%s,127.0.0.1' % ','.join(set([host, node, host.split('.')[0], node.split('.')[0]])) limit_opts = 'localhost,%s,127.0.0.1' % ','.join(set([host, node, host.split('.')[0], node.split('.')[0]]))
base_opts = '-c local "%s"' % limit_opts base_opts = '-c local '
if self.options.verbosity > 0: if self.options.verbosity > 0:
base_opts += ' -%s' % ''.join([ "v" for x in range(0, self.options.verbosity) ]) base_opts += ' -%s' % ''.join([ "v" for x in range(0, self.options.verbosity) ])
@ -130,7 +142,7 @@ class PullCLI(CLI):
else: else:
inv_opts = self.options.inventory inv_opts = self.options.inventory
#TODO: enable more repo modules hg/svn? #FIXME: enable more repo modules hg/svn?
if self.options.module_name == 'git': if self.options.module_name == 'git':
repo_opts = "name=%s dest=%s" % (self.options.url, self.options.dest) repo_opts = "name=%s dest=%s" % (self.options.url, self.options.dest)
if self.options.checkout: if self.options.checkout:
@ -145,13 +157,17 @@ class PullCLI(CLI):
if self.options.verify: if self.options.verify:
repo_opts += ' verify_commit=yes' repo_opts += ' verify_commit=yes'
if not self.options.fullclone:
repo_opts += ' depth=1'
path = module_loader.find_plugin(self.options.module_name) path = module_loader.find_plugin(self.options.module_name)
if path is None: if path is None:
raise AnsibleOptionsError(("module '%s' not found.\n" % self.options.module_name)) raise AnsibleOptionsError(("module '%s' not found.\n" % self.options.module_name))
bin_path = os.path.dirname(os.path.abspath(sys.argv[0])) bin_path = os.path.dirname(os.path.abspath(sys.argv[0]))
cmd = '%s/ansible -i "%s" %s -m %s -a "%s"' % ( cmd = '%s/ansible -i "%s" %s -m %s -a "%s" "%s"' % (
bin_path, inv_opts, base_opts, self.options.module_name, repo_opts bin_path, inv_opts, base_opts, self.options.module_name, repo_opts, limit_opts
) )
for ev in self.options.extra_vars: for ev in self.options.extra_vars:
@ -163,6 +179,8 @@ class PullCLI(CLI):
time.sleep(self.options.sleep) time.sleep(self.options.sleep)
# RUN the Checkout command # RUN the Checkout command
display.debug("running ansible with VCS module to checkout repo")
display.vvvv('EXEC: %s' % cmd)
rc, out, err = run_cmd(cmd, live=True) rc, out, err = run_cmd(cmd, live=True)
if rc != 0: if rc != 0:
@ -174,8 +192,7 @@ class PullCLI(CLI):
display.display("Repository has not changed, quitting.") display.display("Repository has not changed, quitting.")
return 0 return 0
playbook = self.select_playbook(path) playbook = self.select_playbook(self.options.dest)
if playbook is None: if playbook is None:
raise AnsibleOptionsError("Could not find a playbook to run.") raise AnsibleOptionsError("Could not find a playbook to run.")
@ -187,16 +204,18 @@ class PullCLI(CLI):
cmd += ' -i "%s"' % self.options.inventory cmd += ' -i "%s"' % self.options.inventory
for ev in self.options.extra_vars: for ev in self.options.extra_vars:
cmd += ' -e "%s"' % ev cmd += ' -e "%s"' % ev
if self.options.ask_sudo_pass: if self.options.ask_sudo_pass or self.options.ask_su_pass or self.options.become_ask_pass:
cmd += ' -K' cmd += ' --ask-become-pass'
if self.options.tags: if self.options.tags:
cmd += ' -t "%s"' % self.options.tags cmd += ' -t "%s"' % self.options.tags
if self.options.limit: if self.options.subset:
cmd += ' -l "%s"' % self.options.limit cmd += ' -l "%s"' % self.options.subset
os.chdir(self.options.dest) os.chdir(self.options.dest)
# RUN THE PLAYBOOK COMMAND # RUN THE PLAYBOOK COMMAND
display.debug("running ansible-playbook to do actual work")
display.debug('EXEC: %s' % cmd)
rc, out, err = run_cmd(cmd, live=True) rc, out, err = run_cmd(cmd, live=True)
if self.options.purge: if self.options.purge:
View file
@ -69,7 +69,7 @@ class VaultCLI(CLI):
elif self.action == "rekey": elif self.action == "rekey":
self.parser.set_usage("usage: %prog rekey [options] file_name") self.parser.set_usage("usage: %prog rekey [options] file_name")
self.options, self.args = self.parser.parse_args() self.options, self.args = self.parser.parse_args(self.args[1:])
display.verbosity = self.options.verbosity display.verbosity = self.options.verbosity
can_output = ['encrypt', 'decrypt'] can_output = ['encrypt', 'decrypt']
View file
@ -120,19 +120,23 @@ DEFAULT_COW_WHITELIST = ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'd
# sections in config file # sections in config file
DEFAULTS='defaults' DEFAULTS='defaults'
# FIXME: add deprecation warning when these get set
#### DEPRECATED VARS ####
# use the more sanely named 'inventory' instead
DEPRECATED_HOST_LIST = get_config(p, DEFAULTS, 'hostfile', 'ANSIBLE_HOSTS', '/etc/ansible/hosts', ispath=True) DEPRECATED_HOST_LIST = get_config(p, DEFAULTS, 'hostfile', 'ANSIBLE_HOSTS', '/etc/ansible/hosts', ispath=True)
# this has not been used since 0.5, but people might still have it in their config
DEFAULT_PATTERN = get_config(p, DEFAULTS, 'pattern', None, None)
# generally configurable things #### GENERALLY CONFIGURABLE THINGS ####
DEFAULT_DEBUG = get_config(p, DEFAULTS, 'debug', 'ANSIBLE_DEBUG', False, boolean=True) DEFAULT_DEBUG = get_config(p, DEFAULTS, 'debug', 'ANSIBLE_DEBUG', False, boolean=True)
DEFAULT_HOST_LIST = get_config(p, DEFAULTS,'inventory', 'ANSIBLE_INVENTORY', DEPRECATED_HOST_LIST, ispath=True) DEFAULT_HOST_LIST = get_config(p, DEFAULTS,'inventory', 'ANSIBLE_INVENTORY', DEPRECATED_HOST_LIST, ispath=True)
DEFAULT_MODULE_PATH = get_config(p, DEFAULTS, 'library', 'ANSIBLE_LIBRARY', None, ispath=True) DEFAULT_MODULE_PATH = get_config(p, DEFAULTS, 'library', 'ANSIBLE_LIBRARY', None, ispath=True)
DEFAULT_ROLES_PATH = get_config(p, DEFAULTS, 'roles_path', 'ANSIBLE_ROLES_PATH', '/etc/ansible/roles', ispath=True) DEFAULT_ROLES_PATH = get_config(p, DEFAULTS, 'roles_path', 'ANSIBLE_ROLES_PATH', '/etc/ansible/roles', ispath=True)
DEFAULT_REMOTE_TMP = get_config(p, DEFAULTS, 'remote_tmp', 'ANSIBLE_REMOTE_TEMP', '$HOME/.ansible/tmp') DEFAULT_REMOTE_TMP = get_config(p, DEFAULTS, 'remote_tmp', 'ANSIBLE_REMOTE_TEMP', '$HOME/.ansible/tmp')
DEFAULT_MODULE_NAME = get_config(p, DEFAULTS, 'module_name', None, 'command') DEFAULT_MODULE_NAME = get_config(p, DEFAULTS, 'module_name', None, 'command')
DEFAULT_PATTERN = get_config(p, DEFAULTS, 'pattern', None, '*')
DEFAULT_FORKS = get_config(p, DEFAULTS, 'forks', 'ANSIBLE_FORKS', 5, integer=True) DEFAULT_FORKS = get_config(p, DEFAULTS, 'forks', 'ANSIBLE_FORKS', 5, integer=True)
DEFAULT_MODULE_ARGS = get_config(p, DEFAULTS, 'module_args', 'ANSIBLE_MODULE_ARGS', '') DEFAULT_MODULE_ARGS = get_config(p, DEFAULTS, 'module_args', 'ANSIBLE_MODULE_ARGS', '')
DEFAULT_MODULE_LANG = get_config(p, DEFAULTS, 'module_lang', 'ANSIBLE_MODULE_LANG', 'en_US.UTF-8') DEFAULT_MODULE_LANG = get_config(p, DEFAULTS, 'module_lang', 'ANSIBLE_MODULE_LANG', os.getenv('LANG', 'en_US.UTF-8'))
DEFAULT_TIMEOUT = get_config(p, DEFAULTS, 'timeout', 'ANSIBLE_TIMEOUT', 10, integer=True) DEFAULT_TIMEOUT = get_config(p, DEFAULTS, 'timeout', 'ANSIBLE_TIMEOUT', 10, integer=True)
DEFAULT_POLL_INTERVAL = get_config(p, DEFAULTS, 'poll_interval', 'ANSIBLE_POLL_INTERVAL', 15, integer=True) DEFAULT_POLL_INTERVAL = get_config(p, DEFAULTS, 'poll_interval', 'ANSIBLE_POLL_INTERVAL', 15, integer=True)
DEFAULT_REMOTE_USER = get_config(p, DEFAULTS, 'remote_user', 'ANSIBLE_REMOTE_USER', None) DEFAULT_REMOTE_USER = get_config(p, DEFAULTS, 'remote_user', 'ANSIBLE_REMOTE_USER', None)
@ -159,7 +163,7 @@ DEFAULT_VAR_COMPRESSION_LEVEL = get_config(p, DEFAULTS, 'var_compression_level',
# disclosure # disclosure
DEFAULT_NO_LOG = get_config(p, DEFAULTS, 'no_log', 'ANSIBLE_NO_LOG', False, boolean=True) DEFAULT_NO_LOG = get_config(p, DEFAULTS, 'no_log', 'ANSIBLE_NO_LOG', False, boolean=True)
DEFAULT_NO_TARGET_SYSLOG = get_config(p, DEFAULTS, 'no_target_syslog', 'ANSIBLE_NO_TARGET_SYSLOG', True, boolean=True) DEFAULT_NO_TARGET_SYSLOG = get_config(p, DEFAULTS, 'no_target_syslog', 'ANSIBLE_NO_TARGET_SYSLOG', False, boolean=True)
# selinux # selinux
DEFAULT_SELINUX_SPECIAL_FS = get_config(p, 'selinux', 'special_context_filesystems', None, 'fuse, nfs, vboxsf, ramfs', islist=True) DEFAULT_SELINUX_SPECIAL_FS = get_config(p, 'selinux', 'special_context_filesystems', None, 'fuse, nfs, vboxsf, ramfs', islist=True)
@ -197,7 +201,7 @@ DEFAULT_BECOME_ASK_PASS = get_config(p, 'privilege_escalation', 'become_ask_pa
# the module takes both, bad things could happen. # the module takes both, bad things could happen.
# In the future we should probably generalize this even further # In the future we should probably generalize this even further
# (mapping of param: squash field) # (mapping of param: squash field)
DEFAULT_SQUASH_ACTIONS = get_config(p, DEFAULTS, 'squash_actions', 'ANSIBLE_SQUASH_ACTIONS', "apt, yum, pkgng, zypper, dnf", islist=True) DEFAULT_SQUASH_ACTIONS = get_config(p, DEFAULTS, 'squash_actions', 'ANSIBLE_SQUASH_ACTIONS', "apt, dnf, package, pkgng, yum, zypper", islist=True)
# paths # paths
DEFAULT_ACTION_PLUGIN_PATH = get_config(p, DEFAULTS, 'action_plugins', 'ANSIBLE_ACTION_PLUGINS', '~/.ansible/plugins/action:/usr/share/ansible/plugins/action', ispath=True) DEFAULT_ACTION_PLUGIN_PATH = get_config(p, DEFAULTS, 'action_plugins', 'ANSIBLE_ACTION_PLUGINS', '~/.ansible/plugins/action:/usr/share/ansible/plugins/action', ispath=True)
DEFAULT_CACHE_PLUGIN_PATH = get_config(p, DEFAULTS, 'cache_plugins', 'ANSIBLE_CACHE_PLUGINS', '~/.ansible/plugins/cache:/usr/share/ansible/plugins/cache', ispath=True) DEFAULT_CACHE_PLUGIN_PATH = get_config(p, DEFAULTS, 'cache_plugins', 'ANSIBLE_CACHE_PLUGINS', '~/.ansible/plugins/cache:/usr/share/ansible/plugins/cache', ispath=True)
@ -255,12 +259,25 @@ ACCELERATE_MULTI_KEY = get_config(p, 'accelerate', 'accelerate_multi_k
PARAMIKO_PTY = get_config(p, 'paramiko_connection', 'pty', 'ANSIBLE_PARAMIKO_PTY', True, boolean=True) PARAMIKO_PTY = get_config(p, 'paramiko_connection', 'pty', 'ANSIBLE_PARAMIKO_PTY', True, boolean=True)
# galaxy related # galaxy related
DEFAULT_GALAXY_URI = get_config(p, 'galaxy', 'server_uri', 'ANSIBLE_GALAXY_SERVER_URI', 'https://galaxy.ansible.com') GALAXY_SERVER = get_config(p, 'galaxy', 'server', 'ANSIBLE_GALAXY_SERVER', 'https://galaxy.ansible.com')
GALAXY_IGNORE_CERTS = get_config(p, 'galaxy', 'ignore_certs', 'ANSIBLE_GALAXY_IGNORE', False, boolean=True)
# this can be configured to blacklist SCMS but cannot add new ones unless the code is also updated # this can be configured to blacklist SCMS but cannot add new ones unless the code is also updated
GALAXY_SCMS = get_config(p, 'galaxy', 'scms', 'ANSIBLE_GALAXY_SCMS', 'git, hg', islist=True) GALAXY_SCMS = get_config(p, 'galaxy', 'scms', 'ANSIBLE_GALAXY_SCMS', 'git, hg', islist=True)
# characters included in auto-generated passwords # characters included in auto-generated passwords
DEFAULT_PASSWORD_CHARS = ascii_letters + digits + ".,:-_" DEFAULT_PASSWORD_CHARS = ascii_letters + digits + ".,:-_"
STRING_TYPE_FILTERS = get_config(p, 'jinja2', 'dont_type_filters', 'ANSIBLE_STRING_TYPE_FILTERS', ['string', 'to_json', 'to_nice_json', 'to_yaml', 'ppretty', 'json'], islist=True )
# colors
COLOR_VERBOSE = get_config(p, 'colors', 'verbose', 'ANSIBLE_COLOR_VERBOSE', 'blue')
COLOR_WARN = get_config(p, 'colors', 'warn', 'ANSIBLE_COLOR_WARN', 'bright purple')
COLOR_ERROR = get_config(p, 'colors', 'error', 'ANSIBLE_COLOR_ERROR', 'red')
COLOR_DEBUG = get_config(p, 'colors', 'debug', 'ANSIBLE_COLOR_DEBUG', 'dark gray')
COLOR_DEPRECATE = get_config(p, 'colors', 'deprecate', 'ANSIBLE_COLOR_DEPRECATE', 'purple')
COLOR_SKIP = get_config(p, 'colors', 'skip', 'ANSIBLE_COLOR_SKIP', 'cyan')
COLOR_UNREACHABLE = get_config(p, 'colors', 'unreachable', 'ANSIBLE_COLOR_UNREACHABLE', 'bright red')
COLOR_OK = get_config(p, 'colors', 'ok', 'ANSIBLE_COLOR_OK', 'green')
COLOR_CHANGED = get_config(p, 'colors', 'changed', 'ANSIBLE_COLOR_CHANGED', 'yellow')
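All of these constants, colors included, resolve through get_config; a simplified reading of its precedence (ini file first, then the environment variable, then the hard-coded default), with names per this file:

```python
import os

def get_config_sketch(parser, section, key, env_var, default):
    # simplified: the real get_config also handles boolean/integer/list/path coercion
    if parser is not None and parser.has_section(section) and parser.has_option(section, key):
        return parser.get(section, key)
    if env_var and env_var in os.environ:
        return os.environ[env_var]
    return default
```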
# non-configurable things # non-configurable things
MODULE_REQUIRE_ARGS = ['command', 'shell', 'raw', 'script'] MODULE_REQUIRE_ARGS = ['command', 'shell', 'raw', 'script']
View file
@ -44,7 +44,7 @@ class AnsibleError(Exception):
which should be returned by the DataLoader() class. which should be returned by the DataLoader() class.
''' '''
def __init__(self, message, obj=None, show_content=True): def __init__(self, message="", obj=None, show_content=True):
# we import this here to prevent an import loop problem, # we import this here to prevent an import loop problem,
# since the objects code also imports ansible.errors # since the objects code also imports ansible.errors
from ansible.parsing.yaml.objects import AnsibleBaseYAMLObject from ansible.parsing.yaml.objects import AnsibleBaseYAMLObject
@ -54,9 +54,9 @@ class AnsibleError(Exception):
if obj and isinstance(obj, AnsibleBaseYAMLObject): if obj and isinstance(obj, AnsibleBaseYAMLObject):
extended_error = self._get_extended_error() extended_error = self._get_extended_error()
if extended_error: if extended_error:
self.message = 'ERROR! %s\n\n%s' % (message, to_str(extended_error)) self.message = '%s\n\n%s' % (to_str(message), to_str(extended_error))
else: else:
self.message = 'ERROR! %s' % message self.message = '%s' % to_str(message)
def __str__(self): def __str__(self):
return self.message return self.message
View file
@ -39,6 +39,7 @@ REPLACER_WINDOWS = "# POWERSHELL_COMMON"
REPLACER_WINARGS = "<<INCLUDE_ANSIBLE_MODULE_WINDOWS_ARGS>>" REPLACER_WINARGS = "<<INCLUDE_ANSIBLE_MODULE_WINDOWS_ARGS>>"
REPLACER_JSONARGS = "<<INCLUDE_ANSIBLE_MODULE_JSON_ARGS>>" REPLACER_JSONARGS = "<<INCLUDE_ANSIBLE_MODULE_JSON_ARGS>>"
REPLACER_VERSION = "\"<<ANSIBLE_VERSION>>\"" REPLACER_VERSION = "\"<<ANSIBLE_VERSION>>\""
REPLACER_SELINUX = "<<SELINUX_SPECIAL_FILESYSTEMS>>"
# We could end up writing out parameters with unicode characters so we need to # We could end up writing out parameters with unicode characters so we need to
# specify an encoding for the python source file # specify an encoding for the python source file
@ -172,6 +173,7 @@ def modify_module(module_path, module_args, task_vars=dict(), strip_comments=Fal
module_data = module_data.replace(REPLACER_COMPLEX, python_repred_args) module_data = module_data.replace(REPLACER_COMPLEX, python_repred_args)
module_data = module_data.replace(REPLACER_WINARGS, module_args_json) module_data = module_data.replace(REPLACER_WINARGS, module_args_json)
module_data = module_data.replace(REPLACER_JSONARGS, module_args_json) module_data = module_data.replace(REPLACER_JSONARGS, module_args_json)
module_data = module_data.replace(REPLACER_SELINUX, ','.join(C.DEFAULT_SELINUX_SPECIAL_FS))
if module_style == 'new': if module_style == 'new':
facility = C.DEFAULT_SYSLOG_FACILITY facility = C.DEFAULT_SYSLOG_FACILITY
@ -200,4 +202,3 @@ def modify_module(module_path, module_args, task_vars=dict(), strip_comments=Fal
module_data = b"\n".join(lines) module_data = b"\n".join(lines)
return (module_data, module_style, shebang) return (module_data, module_style, shebang)
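The new REPLACER_SELINUX works like the other placeholders: the raw module source ships with a marker string that modify_module swaps for the configured filesystem list before the module is sent to the target. A toy round-trip (the module-side line is illustrative):

```python
special_fs = ['fuse', 'nfs', 'vboxsf', 'ramfs']   # C.DEFAULT_SELINUX_SPECIAL_FS
module_src = 'SELINUX_SPECIAL_FS = "<<SELINUX_SPECIAL_FILESYSTEMS>>".split(",")'
module_src = module_src.replace('<<SELINUX_SPECIAL_FILESYSTEMS>>', ','.join(special_fs))
# -> SELINUX_SPECIAL_FS = "fuse,nfs,vboxsf,ramfs".split(",")
```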
View file
@ -49,6 +49,7 @@ class HostState:
self.cur_rescue_task = 0 self.cur_rescue_task = 0
self.cur_always_task = 0 self.cur_always_task = 0
self.cur_role = None self.cur_role = None
self.cur_dep_chain = None
self.run_state = PlayIterator.ITERATING_SETUP self.run_state = PlayIterator.ITERATING_SETUP
self.fail_state = PlayIterator.FAILED_NONE self.fail_state = PlayIterator.FAILED_NONE
self.pending_setup = False self.pending_setup = False
@ -57,14 +58,32 @@ class HostState:
self.always_child_state = None self.always_child_state = None
def __repr__(self): def __repr__(self):
return "HOST STATE: block=%d, task=%d, rescue=%d, always=%d, role=%s, run_state=%d, fail_state=%d, pending_setup=%s, tasks child state? %s, rescue child state? %s, always child state? %s" % ( def _run_state_to_string(n):
states = ["ITERATING_SETUP", "ITERATING_TASKS", "ITERATING_RESCUE", "ITERATING_ALWAYS", "ITERATING_COMPLETE"]
try:
return states[n]
except IndexError:
return "UNKNOWN STATE"
def _failed_state_to_string(n):
states = {1:"FAILED_SETUP", 2:"FAILED_TASKS", 4:"FAILED_RESCUE", 8:"FAILED_ALWAYS"}
if n == 0:
return "FAILED_NONE"
else:
ret = []
for i in (1, 2, 4, 8):
if n & i:
ret.append(states[i])
return "|".join(ret)
return "HOST STATE: block=%d, task=%d, rescue=%d, always=%d, role=%s, run_state=%s, fail_state=%s, pending_setup=%s, tasks child state? %s, rescue child state? %s, always child state? %s" % (
self.cur_block, self.cur_block,
self.cur_regular_task, self.cur_regular_task,
self.cur_rescue_task, self.cur_rescue_task,
self.cur_always_task, self.cur_always_task,
self.cur_role, self.cur_role,
self.run_state, _run_state_to_string(self.run_state),
self.fail_state, _failed_state_to_string(self.fail_state),
self.pending_setup, self.pending_setup,
self.tasks_child_state, self.tasks_child_state,
self.rescue_child_state, self.rescue_child_state,
@ -84,6 +103,8 @@ class HostState:
new_state.run_state = self.run_state new_state.run_state = self.run_state
new_state.fail_state = self.fail_state new_state.fail_state = self.fail_state
new_state.pending_setup = self.pending_setup new_state.pending_setup = self.pending_setup
if self.cur_dep_chain is not None:
new_state.cur_dep_chain = self.cur_dep_chain[:]
if self.tasks_child_state is not None: if self.tasks_child_state is not None:
new_state.tasks_child_state = self.tasks_child_state.copy() new_state.tasks_child_state = self.tasks_child_state.copy()
if self.rescue_child_state is not None: if self.rescue_child_state is not None:
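fail_state is a bitmask, which is why the new repr helper above decodes it bit by bit; the same decoding as a standalone function:

```python
FAILED_STATES = {1: "FAILED_SETUP", 2: "FAILED_TASKS", 4: "FAILED_RESCUE", 8: "FAILED_ALWAYS"}

def failed_state_to_string(n):
    if n == 0:
        return "FAILED_NONE"
    # collect the name of every failure bit that is set, lowest bit first
    return "|".join(name for bit, name in sorted(FAILED_STATES.items()) if n & bit)

assert failed_state_to_string(6) == "FAILED_TASKS|FAILED_RESCUE"
```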
@ -119,30 +140,35 @@ class PlayIterator:
self._blocks.append(new_block) self._blocks.append(new_block)
self._host_states = {} self._host_states = {}
start_at_matched = False
for host in inventory.get_hosts(self._play.hosts): for host in inventory.get_hosts(self._play.hosts):
self._host_states[host.name] = HostState(blocks=self._blocks) self._host_states[host.name] = HostState(blocks=self._blocks)
# if the host's name is in the variable manager's fact cache, then set # if the host's name is in the variable manager's fact cache, then set
# its _gathered_facts flag to true for smart gathering tests later # its _gathered_facts flag to true for smart gathering tests later
if host.name in variable_manager._fact_cache: if host.name in variable_manager._fact_cache:
host._gathered_facts = True host._gathered_facts = True
# if we're looking to start at a specific task, iterate through # if we're looking to start at a specific task, iterate through
# the tasks for this host until we find the specified task # the tasks for this host until we find the specified task
if play_context.start_at_task is not None and not start_at_done: if play_context.start_at_task is not None and not start_at_done:
while True: while True:
(s, task) = self.get_next_task_for_host(host, peek=True) (s, task) = self.get_next_task_for_host(host, peek=True)
if s.run_state == self.ITERATING_COMPLETE: if s.run_state == self.ITERATING_COMPLETE:
break break
if task.name == play_context.start_at_task or fnmatch.fnmatch(task.name, play_context.start_at_task) or \ if task.name == play_context.start_at_task or fnmatch.fnmatch(task.name, play_context.start_at_task) or \
task.get_name() == play_context.start_at_task or fnmatch.fnmatch(task.get_name(), play_context.start_at_task): task.get_name() == play_context.start_at_task or fnmatch.fnmatch(task.get_name(), play_context.start_at_task):
# we have our match, so clear the start_at_task field on the start_at_matched = True
# play context to flag that we've started at a task (and future break
# plays won't try to advance) else:
play_context.start_at_task = None self.get_next_task_for_host(host)
break
else: # finally, reset the host's state to ITERATING_SETUP
self.get_next_task_for_host(host) self._host_states[host.name].run_state = self.ITERATING_SETUP
# finally, reset the host's state to ITERATING_SETUP
self._host_states[host.name].run_state = self.ITERATING_SETUP if start_at_matched:
# we have our match, so clear the start_at_task field on the
# play context to flag that we've started at a task (and future
# plays won't try to advance)
play_context.start_at_task = None
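--start-at-task matches either the bare task name or its fully qualified get_name() form, literally or as an fnmatch glob; the match in isolation:

```python
import fnmatch

def matches_start_at(task_name, qualified_name, wanted):
    # literal or glob match on either the short or the role-qualified name
    return (task_name == wanted or fnmatch.fnmatch(task_name, wanted) or
            qualified_name == wanted or fnmatch.fnmatch(qualified_name, wanted))

assert matches_start_at("install nginx", "web : install nginx", "install *")
```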
# Extend the play handlers list to include the handlers defined in roles # Extend the play handlers list to include the handlers defined in roles
self._play.handlers.extend(play.compile_roles_handlers()) self._play.handlers.extend(play.compile_roles_handlers())
@ -189,13 +215,21 @@ class PlayIterator:
s.pending_setup = False s.pending_setup = False
if not task: if not task:
old_s = s
(s, task) = self._get_next_task_from_state(s, peek=peek) (s, task) = self._get_next_task_from_state(s, peek=peek)
def _roles_are_different(ra, rb):
if ra != rb:
return True
else:
return old_s.cur_dep_chain != task._block._dep_chain
if task and task._role: if task and task._role:
# if we had a current role, mark that role as completed # if we had a current role, mark that role as completed
if s.cur_role and task._role != s.cur_role and host.name in s.cur_role._had_task_run and not peek: if s.cur_role and _roles_are_different(task._role, s.cur_role) and host.name in s.cur_role._had_task_run and not peek:
s.cur_role._completed[host.name] = True s.cur_role._completed[host.name] = True
s.cur_role = task._role s.cur_role = task._role
s.cur_dep_chain = task._block._dep_chain
if not peek: if not peek:
self._host_states[host.name] = s self._host_states[host.name] = s
@ -324,13 +358,21 @@ class PlayIterator:
state.tasks_child_state = self._set_failed_state(state.tasks_child_state) state.tasks_child_state = self._set_failed_state(state.tasks_child_state)
else: else:
state.fail_state |= self.FAILED_TASKS state.fail_state |= self.FAILED_TASKS
state.run_state = self.ITERATING_RESCUE if state._blocks[state.cur_block].rescue:
state.run_state = self.ITERATING_RESCUE
elif state._blocks[state.cur_block].always:
state.run_state = self.ITERATING_ALWAYS
else:
state.run_state = self.ITERATING_COMPLETE
elif state.run_state == self.ITERATING_RESCUE: elif state.run_state == self.ITERATING_RESCUE:
if state.rescue_child_state is not None: if state.rescue_child_state is not None:
state.rescue_child_state = self._set_failed_state(state.rescue_child_state) state.rescue_child_state = self._set_failed_state(state.rescue_child_state)
else: else:
state.fail_state |= self.FAILED_RESCUE state.fail_state |= self.FAILED_RESCUE
state.run_state = self.ITERATING_ALWAYS if state._blocks[state.cur_block].always:
state.run_state = self.ITERATING_ALWAYS
else:
state.run_state = self.ITERATING_COMPLETE
elif state.run_state == self.ITERATING_ALWAYS: elif state.run_state == self.ITERATING_ALWAYS:
if state.always_child_state is not None: if state.always_child_state is not None:
state.always_child_state = self._set_failed_state(state.always_child_state) state.always_child_state = self._set_failed_state(state.always_child_state)
@ -347,6 +389,28 @@ class PlayIterator:
def get_failed_hosts(self): def get_failed_hosts(self):
return dict((host, True) for (host, state) in iteritems(self._host_states) if state.run_state == self.ITERATING_COMPLETE and state.fail_state != self.FAILED_NONE) return dict((host, True) for (host, state) in iteritems(self._host_states) if state.run_state == self.ITERATING_COMPLETE and state.fail_state != self.FAILED_NONE)
def _check_failed_state(self, state):
if state is None:
return False
elif state.run_state == self.ITERATING_TASKS and self._check_failed_state(state.tasks_child_state):
return True
elif state.run_state == self.ITERATING_RESCUE and self._check_failed_state(state.rescue_child_state):
return True
elif state.run_state == self.ITERATING_ALWAYS and self._check_failed_state(state.always_child_state):
return True
elif state.run_state == self.ITERATING_COMPLETE and state.fail_state != self.FAILED_NONE:
if state.run_state == self.ITERATING_RESCUE and state.fail_state&self.FAILED_RESCUE == 0:
return False
elif state.run_state == self.ITERATING_ALWAYS and state.fail_state&self.FAILED_ALWAYS == 0:
return False
else:
return True
return False
def is_failed(self, host):
s = self.get_host_state(host)
return self._check_failed_state(s)
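_check_failed_state condensed: recurse into whichever child state is active for the current phase; otherwise a host counts as failed once it has iterated to completion with a failure bit set. (The inner rescue/always exemptions above cannot fire once run_state is already ITERATING_COMPLETE, so this sketch drops them.)

```python
from types import SimpleNamespace as NS

ITERATING_TASKS, ITERATING_RESCUE, ITERATING_ALWAYS, ITERATING_COMPLETE = 1, 2, 3, 4
FAILED_NONE, FAILED_TASKS = 0, 2

def check_failed(state):
    if state is None:
        return False
    child = {ITERATING_TASKS: state.tasks_child_state,
             ITERATING_RESCUE: state.rescue_child_state,
             ITERATING_ALWAYS: state.always_child_state}.get(state.run_state)
    if child is not None and check_failed(child):
        return True
    return state.run_state == ITERATING_COMPLETE and state.fail_state != FAILED_NONE

# a failure currently being handled by a rescue section does not count as failed yet
s = NS(run_state=ITERATING_RESCUE, fail_state=FAILED_TASKS,
       tasks_child_state=None, rescue_child_state=None, always_child_state=None)
assert check_failed(s) is False
```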
def get_original_task(self, host, task): def get_original_task(self, host, task):
''' '''
Finds the task in the task list which matches the UUID of the given task. Finds the task in the task list which matches the UUID of the given task.
@ -396,7 +460,8 @@ class PlayIterator:
return None return None
def _insert_tasks_into_state(self, state, task_list): def _insert_tasks_into_state(self, state, task_list):
if state.fail_state != self.FAILED_NONE: # if we've failed at all, or if the task list is empty, just return the current state
if state.fail_state != self.FAILED_NONE and state.run_state not in (self.ITERATING_RESCUE, self.ITERATING_ALWAYS) or not task_list:
return state return state
if state.run_state == self.ITERATING_TASKS: if state.run_state == self.ITERATING_TASKS:
View file
@@ -31,8 +31,6 @@ from ansible.executor.task_queue_manager import TaskQueueManager
 from ansible.playbook import Playbook
 from ansible.template import Templar
-from ansible.utils.color import colorize, hostcolor
-from ansible.utils.encrypt import do_encrypt
 from ansible.utils.unicode import to_unicode

 try:
@@ -83,6 +81,10 @@ class PlaybookExecutor:
                 if self._tqm is None: # we are doing a listing
                     entry = {'playbook': playbook_path}
                     entry['plays'] = []
+                else:
+                    # make sure the tqm has callbacks loaded
+                    self._tqm.load_callbacks()
+                    self._tqm.send_callback('v2_playbook_on_start', pb)

                 i = 1
                 plays = pb.get_plays()
@@ -108,10 +110,12 @@ class PlaybookExecutor:
                             salt_size = var.get("salt_size", None)
                             salt = var.get("salt", None)

-                            if vname not in play.vars:
+                            if vname not in self._variable_manager.extra_vars:
                                 if self._tqm:
                                     self._tqm.send_callback('v2_playbook_on_vars_prompt', vname, private, prompt, encrypt, confirm, salt_size, salt, default)
-                                play.vars[vname] = self._do_var_prompt(vname, private, prompt, encrypt, confirm, salt_size, salt, default)
+                                    play.vars[vname] = display.do_var_prompt(vname, private, prompt, encrypt, confirm, salt_size, salt, default)
+                                else: # we are either in --list-<option> or syntax check
+                                    play.vars[vname] = default

                         # Create a temporary copy of the play here, so we can run post_validate
                         # on it without the templating changes affecting the original object.
@@ -128,8 +132,6 @@ class PlaybookExecutor:
                         entry['plays'].append(new_play)

                     else:
-                        # make sure the tqm has callbacks loaded
-                        self._tqm.load_callbacks()
                         self._tqm._unreachable_hosts.update(self._unreachable_hosts)

                         # we are actually running plays
@@ -149,9 +151,7 @@ class PlaybookExecutor:
                             # conditions are met, we break out, otherwise we only break out if the entire
                             # batch failed
                             failed_hosts_count = len(self._tqm._failed_hosts) + len(self._tqm._unreachable_hosts)
-                            if new_play.any_errors_fatal and failed_hosts_count > 0:
-                                break
-                            elif new_play.max_fail_percentage is not None and \
+                            if new_play.max_fail_percentage is not None and \
                                int((new_play.max_fail_percentage)/100.0 * len(batch)) > int((len(batch) - failed_hosts_count) / len(batch) * 100.0):
                                 break
                             elif len(batch) == failed_hosts_count:
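The `any_errors_fatal` short-circuit is dropped here, leaving only the `max_fail_percentage` test to end the batch early. A sketch of the intent behind that test (not the committed expression itself), with illustrative numbers:

    # stop the play once the share of failed hosts in the current serial
    # batch exceeds the allowed percentage (values here are made up)
    def batch_exceeded(batch_size, failed_hosts_count, max_fail_percentage):
        failed_pct = failed_hosts_count / float(batch_size) * 100.0
        return failed_pct > max_fail_percentage

    print(batch_exceeded(10, 3, 20))  # True: 30% of the batch failed, only 20% allowed
    print(batch_exceeded(10, 1, 20))  # False: 10% failed is within budget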
@@ -171,6 +171,10 @@ class PlaybookExecutor:
                 if entry:
                     entrylist.append(entry) # per playbook

+                # send the stats callback for this playbook
+                if self._tqm is not None:
+                    self._tqm.send_callback('v2_playbook_on_stats', self._tqm._stats)
+
                 # if the last result wasn't zero, break out of the playbook file name loop
                 if result != 0:
                     break
@@ -186,35 +190,6 @@ class PlaybookExecutor:
                     display.display("No issues encountered")
                 return result

-            # TODO: this stat summary stuff should be cleaned up and moved
-            # to a new method, if it even belongs here...
-            display.banner("PLAY RECAP")
-
-            hosts = sorted(self._tqm._stats.processed.keys())
-            for h in hosts:
-                t = self._tqm._stats.summarize(h)
-
-                display.display(u"%s : %s %s %s %s" % (
-                    hostcolor(h, t),
-                    colorize(u'ok', t['ok'], 'green'),
-                    colorize(u'changed', t['changed'], 'yellow'),
-                    colorize(u'unreachable', t['unreachable'], 'red'),
-                    colorize(u'failed', t['failures'], 'red')),
-                    screen_only=True
-                )
-
-                display.display(u"%s : %s %s %s %s" % (
-                    hostcolor(h, t, False),
-                    colorize(u'ok', t['ok'], None),
-                    colorize(u'changed', t['changed'], None),
-                    colorize(u'unreachable', t['unreachable'], None),
-                    colorize(u'failed', t['failures'], None)),
-                    log_only=True
-                )
-
-                display.display("", screen_only=True)
-
-            # END STATS STUFF
-
         return result

     def _cleanup(self, signum=None, framenum=None):
@@ -258,48 +233,3 @@ class PlaybookExecutor:

         return serialized_batches

-    def _do_var_prompt(self, varname, private=True, prompt=None, encrypt=None, confirm=False, salt_size=None, salt=None, default=None):
-
-        if sys.__stdin__.isatty():
-            if prompt and default is not None:
-                msg = "%s [%s]: " % (prompt, default)
-            elif prompt:
-                msg = "%s: " % prompt
-            else:
-                msg = 'input for %s: ' % varname
-
-            def do_prompt(prompt, private):
-                if sys.stdout.encoding:
-                    msg = prompt.encode(sys.stdout.encoding)
-                else:
-                    # when piping the output, or at other times when stdout
-                    # may not be the standard file descriptor, the stdout
-                    # encoding may not be set, so default to something sane
-                    msg = prompt.encode(locale.getpreferredencoding())
-                if private:
-                    return getpass.getpass(msg)
-                return raw_input(msg)
-
-            if confirm:
-                while True:
-                    result = do_prompt(msg, private)
-                    second = do_prompt("confirm " + msg, private)
-                    if result == second:
-                        break
-                    display.display("***** VALUES ENTERED DO NOT MATCH ****")
-            else:
-                result = do_prompt(msg, private)
-        else:
-            result = None
-            display.warning("Not prompting as we are not in interactive mode")
-
-        # if result is false and default is not None
-        if not result and default is not None:
-            result = default
-
-        if encrypt:
-            result = do_encrypt(result, encrypt, salt_size, salt)
-
-        # handle utf-8 chars
-        result = to_unicode(result, errors='strict')
-        return result
@@ -58,7 +58,7 @@ class ResultProcess(multiprocessing.Process):

     def _send_result(self, result):
         debug(u"sending result: %s" % ([text_type(x) for x in result],))
-        self._final_q.put(result, block=False)
+        self._final_q.put(result)
         debug("done sending result")

     def _read_worker_result(self):
@@ -73,7 +73,7 @@ class ResultProcess(multiprocessing.Process):
             try:
                 if not rslt_q.empty():
                     debug("worker %d has data to read" % self._cur_worker)
-                    result = rslt_q.get(block=False)
+                    result = rslt_q.get()
                     debug("got a result from worker %d: %s" % (self._cur_worker, result))
                     break
             except queue.Empty:
@@ -101,7 +101,7 @@ class ResultProcess(multiprocessing.Process):
             try:
                 result = self._read_worker_result()
                 if result is None:
-                    time.sleep(0.01)
+                    time.sleep(0.0001)
                     continue

                 clean_copy = strip_internal_keys(result._result)
@@ -110,7 +110,7 @@ class ResultProcess(multiprocessing.Process):

                 # if this task is registering a result, do it now
                 if result._task.register:
-                    self._send_result(('register_host_var', result._host, result._task.register, clean_copy))
+                    self._send_result(('register_host_var', result._host, result._task, clean_copy))

                 # send callbacks, execute other options based on the result status
                 # TODO: this should all be cleaned up and probably moved to a sub-function.
@@ -142,8 +142,6 @@ class ResultProcess(multiprocessing.Process):
                         # notifies all other threads
                         for notify in result_item['_ansible_notify']:
                             self._send_result(('notify_handler', result, notify))
-                        # now remove the notify field from the results, as its no longer needed
-                        result_item.pop('_ansible_notify')

                 if 'add_host' in result_item:
                     # this task added a new host (add_host module)
@@ -59,12 +59,18 @@ class WorkerProcess(multiprocessing.Process):
     for reading later.
     '''

-    def __init__(self, tqm, main_q, rslt_q, loader):
+    def __init__(self, rslt_q, task_vars, host, task, play_context, loader, variable_manager, shared_loader_obj):
+
+        super(WorkerProcess, self).__init__()
         # takes a task queue manager as the sole param:
-        self._main_q = main_q
-        self._rslt_q = rslt_q
-        self._loader = loader
+        self._rslt_q = rslt_q
+        self._task_vars = task_vars
+        self._host = host
+        self._task = task
+        self._play_context = play_context
+        self._loader = loader
+        self._variable_manager = variable_manager
+        self._shared_loader_obj = shared_loader_obj

         # dupe stdin, if we have one
         self._new_stdin = sys.stdin
@@ -82,8 +88,6 @@ class WorkerProcess(multiprocessing.Process):
             # couldn't get stdin's fileno, so we just carry on
             pass

-        super(WorkerProcess, self).__init__()
-
     def run(self):
         '''
         Called when the process is started, and loops indefinitely
@@ -97,72 +101,45 @@ class WorkerProcess(multiprocessing.Process):
         if HAS_ATFORK:
             atfork()

-        while True:
-            task = None
-            try:
-                debug("waiting for a message...")
-                (host, task, basedir, zip_vars, hostvars, compressed_vars, play_context, shared_loader_obj) = self._main_q.get()
-
-                if compressed_vars:
-                    job_vars = json.loads(zlib.decompress(zip_vars))
-                else:
-                    job_vars = zip_vars
-
-                job_vars['hostvars'] = hostvars
-
-                debug("there's work to be done! got a task/handler to work on: %s" % task)
-
-                # because the task queue manager starts workers (forks) before the
-                # playbook is loaded, set the basedir of the loader inherted by
-                # this fork now so that we can find files correctly
-                self._loader.set_basedir(basedir)
-
-                # Serializing/deserializing tasks does not preserve the loader attribute,
-                # since it is passed to the worker during the forking of the process and
-                # would be wasteful to serialize. So we set it here on the task now, and
-                # the task handles updating parent/child objects as needed.
-                task.set_loader(self._loader)
-
-                # execute the task and build a TaskResult from the result
-                debug("running TaskExecutor() for %s/%s" % (host, task))
-                executor_result = TaskExecutor(
-                    host,
-                    task,
-                    job_vars,
-                    play_context,
-                    self._new_stdin,
-                    self._loader,
-                    shared_loader_obj,
-                ).run()
-                debug("done running TaskExecutor() for %s/%s" % (host, task))
-
-                task_result = TaskResult(host, task, executor_result)
-
-                # put the result on the result queue
-                debug("sending task result")
-                self._rslt_q.put(task_result)
-                debug("done sending task result")
-
-            except queue.Empty:
-                pass
-            except AnsibleConnectionFailure:
-                try:
-                    if task:
-                        task_result = TaskResult(host, task, dict(unreachable=True))
-                        self._rslt_q.put(task_result, block=False)
-                except:
-                    break
-            except Exception as e:
-                if isinstance(e, (IOError, EOFError, KeyboardInterrupt)) and not isinstance(e, TemplateNotFound):
-                    break
-                else:
-                    try:
-                        if task:
-                            task_result = TaskResult(host, task, dict(failed=True, exception=traceback.format_exc(), stdout=''))
-                            self._rslt_q.put(task_result, block=False)
-                    except:
-                        debug("WORKER EXCEPTION: %s" % e)
-                        debug("WORKER EXCEPTION: %s" % traceback.format_exc())
-                    break
+        try:
+            # execute the task and build a TaskResult from the result
+            debug("running TaskExecutor() for %s/%s" % (self._host, self._task))
+            executor_result = TaskExecutor(
+                self._host,
+                self._task,
+                self._task_vars,
+                self._play_context,
+                self._new_stdin,
+                self._loader,
+                self._shared_loader_obj,
+            ).run()
+            debug("done running TaskExecutor() for %s/%s" % (self._host, self._task))
+
+            self._host.vars = dict()
+            self._host.groups = []
+            task_result = TaskResult(self._host, self._task, executor_result)
+
+            # put the result on the result queue
+            debug("sending task result")
+            self._rslt_q.put(task_result)
+            debug("done sending task result")
+
+        except AnsibleConnectionFailure:
+            self._host.vars = dict()
+            self._host.groups = []
+            task_result = TaskResult(self._host, self._task, dict(unreachable=True))
+            self._rslt_q.put(task_result, block=False)
+        except Exception as e:
+            if not isinstance(e, (IOError, EOFError, KeyboardInterrupt)) or isinstance(e, TemplateNotFound):
+                try:
+                    self._host.vars = dict()
+                    self._host.groups = []
+                    task_result = TaskResult(self._host, self._task, dict(failed=True, exception=traceback.format_exc(), stdout=''))
+                    self._rslt_q.put(task_result, block=False)
+                except:
+                    debug("WORKER EXCEPTION: %s" % e)
+                    debug("WORKER EXCEPTION: %s" % traceback.format_exc())

         debug("WORKER PROCESS EXITING")
@@ -35,7 +35,7 @@ from ansible.template import Templar
 from ansible.utils.encrypt import key_for_hostname
 from ansible.utils.listify import listify_lookup_plugin_terms
 from ansible.utils.unicode import to_unicode
-from ansible.vars.unsafe_proxy import UnsafeProxy
+from ansible.vars.unsafe_proxy import UnsafeProxy, wrap_var

 try:
     from __main__ import display
@@ -67,6 +67,7 @@ class TaskExecutor:
         self._new_stdin = new_stdin
         self._loader = loader
         self._shared_loader_obj = shared_loader_obj
+        self._connection = None

     def run(self):
         '''
@@ -145,7 +146,7 @@ class TaskExecutor:
             except AttributeError:
                 pass
             except Exception as e:
-                display.debug("error closing connection: %s" % to_unicode(e))
+                display.debug(u"error closing connection: %s" % to_unicode(e))

     def _get_loop_items(self):
         '''
@@ -153,16 +154,19 @@ class TaskExecutor:
         and returns the items result.
         '''

-        # create a copy of the job vars here so that we can modify
-        # them temporarily without changing them too early for other
-        # parts of the code that might still need a pristine version
-        #vars_copy = self._job_vars.copy()
-        vars_copy = self._job_vars
-
-        # now we update them with the play context vars
-        self._play_context.update_vars(vars_copy)
+        # save the play context variables to a temporary dictionary,
+        # so that we can modify the job vars without doing a full copy
+        # and later restore them to avoid modifying things too early
+        play_context_vars = dict()
+        self._play_context.update_vars(play_context_vars)
+
+        old_vars = dict()
+        for k in play_context_vars.keys():
+            if k in self._job_vars:
+                old_vars[k] = self._job_vars[k]
+            self._job_vars[k] = play_context_vars[k]

-        templar = Templar(loader=self._loader, shared_loader_obj=self._shared_loader_obj, variables=vars_copy)
+        templar = Templar(loader=self._loader, shared_loader_obj=self._shared_loader_obj, variables=self._job_vars)
         items = None
         if self._task.loop:
             if self._task.loop in self._shared_loader_obj.lookup_loader:
@@ -179,16 +183,25 @@ class TaskExecutor:
                     loop_terms = listify_lookup_plugin_terms(terms=self._task.loop_args, templar=templar,
                                                              loader=self._loader, fail_on_undefined=True, convert_bare=True)
                 except AnsibleUndefinedVariable as e:
-                    if 'has no attribute' in str(e):
+                    if u'has no attribute' in to_unicode(e):
                         loop_terms = []
                         display.deprecated("Skipping task due to undefined attribute, in the future this will be a fatal error.")
                     else:
                         raise
                 items = self._shared_loader_obj.lookup_loader.get(self._task.loop, loader=self._loader,
-                                                                  templar=templar).run(terms=loop_terms, variables=vars_copy)
+                                                                  templar=templar).run(terms=loop_terms, variables=self._job_vars)
             else:
                 raise AnsibleError("Unexpected failure in finding the lookup named '%s' in the available lookup plugins" % self._task.loop)

+        # now we restore any old job variables that may have been modified,
+        # and delete them if they were in the play context vars but not in
+        # the old variables dictionary
+        for k in play_context_vars.keys():
+            if k in old_vars:
+                self._job_vars[k] = old_vars[k]
+            else:
+                del self._job_vars[k]
+
         if items:
             from ansible.vars.unsafe_proxy import UnsafeProxy
             for idx, item in enumerate(items):
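The save/restore bookkeeping above avoids copying the whole job-vars dict for every loop evaluation. The same idea in isolation, as a generic sketch (the helper names here are made up):

    # overlay some keys onto a dict, then undo the overlay afterwards
    def overlay(target, overrides):
        saved = dict((k, target[k]) for k in overrides if k in target)
        target.update(overrides)
        return saved

    def restore(target, overrides, saved):
        for k in overrides:
            if k in saved:
                target[k] = saved[k]   # put the original value back
            else:
                del target[k]          # the key only existed in the overlay

    job_vars = {'ansible_user': 'deploy'}
    overrides = {'ansible_user': 'root', 'ansible_port': 2222}
    saved = overlay(job_vars, overrides)
    restore(job_vars, overrides, saved)
    print(job_vars)  # {'ansible_user': 'deploy'}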
@@ -218,7 +231,7 @@ class TaskExecutor:
                 tmp_task = self._task.copy()
                 tmp_play_context = self._play_context.copy()
             except AnsibleParserError as e:
-                results.append(dict(failed=True, msg=str(e)))
+                results.append(dict(failed=True, msg=to_unicode(e)))
                 continue

             # now we swap the internal task and play context with their copies,
@@ -232,6 +245,7 @@ class TaskExecutor:
             # now update the result with the item info, and append the result
             # to the list of results
             res['item'] = item
+            #TODO: send item results to callback here, instead of all at the end
             results.append(res)

         return results
@@ -302,6 +316,11 @@ class TaskExecutor:
         # do the same kind of post validation step on it here before we use it.
         self._play_context.post_validate(templar=templar)

+        # now that the play context is finalized, if the remote_addr is not set
+        # default to using the host's address field as the remote address
+        if not self._play_context.remote_addr:
+            self._play_context.remote_addr = self._host.address
+
         # We also add "magic" variables back into the variables dict to make sure
         # a certain subset of variables exist.
         self._play_context.update_vars(variables)
@@ -348,8 +367,13 @@ class TaskExecutor:
             self._task.args = variable_params

         # get the connection and the handler for this execution
-        self._connection = self._get_connection(variables=variables, templar=templar)
-        self._connection.set_host_overrides(host=self._host)
+        if not self._connection or not getattr(self._connection, 'connected', False) or self._play_context.remote_addr != self._connection._play_context.remote_addr:
+            self._connection = self._get_connection(variables=variables, templar=templar)
+            self._connection.set_host_overrides(host=self._host)
+        else:
+            # if connection is reused, its _play_context is no longer valid and needs
+            # to be replaced with the one templated above, in case other data changed
+            self._connection._play_context = self._play_context

         self._handler = self._get_action_handler(connection=self._connection, templar=templar)
@@ -372,30 +396,36 @@ class TaskExecutor:

         # make a copy of the job vars here, in case we need to update them
         # with the registered variable value later on when testing conditions
-        #vars_copy = variables.copy()
         vars_copy = variables.copy()

         display.debug("starting attempt loop")
         result = None
         for attempt in range(retries):
             if attempt > 0:
-                display.display("FAILED - RETRYING: %s (%d retries left). Result was: %s" % (self._task, retries-attempt, result), color="dark gray")
+                display.display("FAILED - RETRYING: %s (%d retries left). Result was: %s" % (self._task, retries-attempt, result), color=C.COLOR_DEBUG)
                 result['attempts'] = attempt + 1

             display.debug("running the handler")
             try:
                 result = self._handler.run(task_vars=variables)
             except AnsibleConnectionFailure as e:
-                return dict(unreachable=True, msg=str(e))
+                return dict(unreachable=True, msg=to_unicode(e))
             display.debug("handler run complete")

+            # update the local copy of vars with the registered value, if specified,
+            # or any facts which may have been generated by the module execution
+            if self._task.register:
+                vars_copy[self._task.register] = wrap_var(result.copy())
+
             if self._task.async > 0:
                 # the async_wrapper module returns dumped JSON via its stdout
                 # response, so we parse it here and replace the result
                 try:
+                    if 'skipped' in result and result['skipped'] or 'failed' in result and result['failed']:
+                        return result
                     result = json.loads(result.get('stdout'))
                 except (TypeError, ValueError) as e:
-                    return dict(failed=True, msg="The async task did not return valid JSON: %s" % str(e))
+                    return dict(failed=True, msg=u"The async task did not return valid JSON: %s" % to_unicode(e))

                 if self._task.poll > 0:
                     result = self._poll_async_result(result=result, templar=templar)
@@ -416,11 +446,6 @@ class TaskExecutor:
                         return failed_when_result
                 return False

-            # update the local copy of vars with the registered value, if specified,
-            # or any facts which may have been generated by the module execution
-            if self._task.register:
-                vars_copy[self._task.register] = result
-
             if 'ansible_facts' in result:
                 vars_copy.update(result['ansible_facts'])
@@ -437,7 +462,7 @@ class TaskExecutor:
             if attempt < retries - 1:
                 cond = Conditional(loader=self._loader)
-                cond.when = self._task.until
+                cond.when = [ self._task.until ]
                 if cond.evaluate_conditional(templar, vars_copy):
                     break
@@ -450,7 +475,7 @@ class TaskExecutor:
         # do the final update of the local variables here, for both registered
         # values and any facts which may have been created
         if self._task.register:
-            variables[self._task.register] = result
+            variables[self._task.register] = wrap_var(result)

         if 'ansible_facts' in result:
             variables.update(result['ansible_facts'])
@@ -528,9 +553,6 @@ class TaskExecutor:
         correct connection object from the list of connection plugins
         '''

-        if not self._play_context.remote_addr:
-            self._play_context.remote_addr = self._host.address
-
         if self._task.delegate_to is not None:
             # since we're delegating, we don't want to use interpreter values
             # which would have been set for the original target host
@@ -19,6 +19,7 @@
 from __future__ import (absolute_import, division, print_function)
 __metaclass__ = type

+from multiprocessing.managers import SyncManager, DictProxy
 import multiprocessing
 import os
 import tempfile
@@ -32,6 +33,8 @@ from ansible.executor.stats import AggregateStats
 from ansible.playbook.play_context import PlayContext
 from ansible.plugins import callback_loader, strategy_loader, module_loader
 from ansible.template import Templar
+from ansible.vars.hostvars import HostVars
+from ansible.plugins.callback import CallbackBase

 try:
     from __main__ import display
@@ -54,7 +57,7 @@ class TaskQueueManager:
     which dispatches the Play's tasks to hosts.
     '''

-    def __init__(self, inventory, variable_manager, loader, options, passwords, stdout_callback=None):
+    def __init__(self, inventory, variable_manager, loader, options, passwords, stdout_callback=None, run_additional_callbacks=True, run_tree=False):

         self._inventory = inventory
         self._variable_manager = variable_manager
@@ -63,6 +66,8 @@ class TaskQueueManager:
         self._stats = AggregateStats()
         self.passwords = passwords
         self._stdout_callback = stdout_callback
+        self._run_additional_callbacks = run_additional_callbacks
+        self._run_tree = run_tree

         self._callbacks_loaded = False
         self._callback_plugins = []
@@ -94,14 +99,10 @@ class TaskQueueManager:
     def _initialize_processes(self, num):
         self._workers = []

-        for i in xrange(num):
+        for i in range(num):
             main_q = multiprocessing.Queue()
             rslt_q = multiprocessing.Queue()
-
-            prc = WorkerProcess(self, main_q, rslt_q, self._loader)
-            prc.start()
-
-            self._workers.append((prc, main_q, rslt_q))
+            self._workers.append([None, main_q, rslt_q])

         self._result_prc = ResultProcess(self._final_q, self._workers)
         self._result_prc.start()
@@ -142,8 +143,16 @@ class TaskQueueManager:
         if self._stdout_callback is None:
             self._stdout_callback = C.DEFAULT_STDOUT_CALLBACK

-        if self._stdout_callback not in callback_loader:
-            raise AnsibleError("Invalid callback for stdout specified: %s" % self._stdout_callback)
+        if isinstance(self._stdout_callback, CallbackBase):
+            stdout_callback_loaded = True
+        elif isinstance(self._stdout_callback, basestring):
+            if self._stdout_callback not in callback_loader:
+                raise AnsibleError("Invalid callback for stdout specified: %s" % self._stdout_callback)
+            else:
+                self._stdout_callback = callback_loader.get(self._stdout_callback)
+                stdout_callback_loaded = True
+        else:
+            raise AnsibleError("callback must be an instance of CallbackBase or the name of a callback plugin")

         for callback_plugin in callback_loader.all(class_only=True):
             if hasattr(callback_plugin, 'CALLBACK_VERSION') and callback_plugin.CALLBACK_VERSION >= 2.0:
@@ -157,7 +166,9 @@ class TaskQueueManager:
                     if callback_name != self._stdout_callback or stdout_callback_loaded:
                         continue
                     stdout_callback_loaded = True
-                elif callback_needs_whitelist and (C.DEFAULT_CALLBACK_WHITELIST is None or callback_name not in C.DEFAULT_CALLBACK_WHITELIST):
+                elif callback_name == 'tree' and self._run_tree:
+                    pass
+                elif not self._run_additional_callbacks or (callback_needs_whitelist and (C.DEFAULT_CALLBACK_WHITELIST is None or callback_name not in C.DEFAULT_CALLBACK_WHITELIST)):
                     continue

             self._callback_plugins.append(callback_plugin())
@@ -173,11 +184,6 @@ class TaskQueueManager:
         are done with the current task).
         '''

-        # Fork # of forks, # of hosts or serial, whichever is lowest
-        contenders = [self._options.forks, play.serial, len(self._inventory.get_hosts(play.hosts))]
-        contenders = [ v for v in contenders if v is not None and v > 0 ]
-        self._initialize_processes(min(contenders))
-
         if not self._callbacks_loaded:
             self.load_callbacks()
@@ -187,6 +193,17 @@ class TaskQueueManager:
         new_play = play.copy()
         new_play.post_validate(templar)

+        self.hostvars = HostVars(
+            inventory=self._inventory,
+            variable_manager=self._variable_manager,
+            loader=self._loader,
+        )
+
+        # Fork # of forks, # of hosts or serial, whichever is lowest
+        contenders = [self._options.forks, play.serial, len(self._inventory.get_hosts(new_play.hosts))]
+        contenders = [ v for v in contenders if v is not None and v > 0 ]
+        self._initialize_processes(min(contenders))
+
         play_context = PlayContext(new_play, self._options, self.passwords, self._connection_lockfile.fileno())
         for callback_plugin in self._callback_plugins:
             if hasattr(callback_plugin, 'set_play_context'):
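The fork count computed above is the smallest positive value among the --forks option, the play's serial setting, and the number of targeted hosts; None and non-positive entries are filtered out first. A quick worked example of that min(contenders) arithmetic, with illustrative values:

    forks, serial, num_hosts = 5, None, 3   # e.g. --forks=5, no serial, 3 hosts
    contenders = [forks, serial, num_hosts]
    contenders = [v for v in contenders if v is not None and v > 0]
    print(min(contenders))  # 3 -- no point starting 5 workers for 3 hosts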
@@ -236,7 +253,8 @@ class TaskQueueManager:
         for (worker_prc, main_q, rslt_q) in self._workers:
             rslt_q.close()
             main_q.close()
-            worker_prc.terminate()
+            if worker_prc and worker_prc.is_alive():
+                worker_prc.terminate()

     def clear_failed_hosts(self):
         self._failed_hosts = dict()
@@ -260,7 +278,7 @@ class TaskQueueManager:
         self._terminated = True

     def send_callback(self, method_name, *args, **kwargs):
-        for callback_plugin in self._callback_plugins:
+        for callback_plugin in [self._stdout_callback] + self._callback_plugins:
             # a plugin that set self.disabled to True will not be called
             # see osx_say.py example for such a plugin
             if getattr(callback_plugin, 'disabled', False):
@@ -272,10 +290,28 @@ class TaskQueueManager:
             for method in methods:
                 if method is not None:
                     try:
-                        method(*args, **kwargs)
+                        # temporary hack, required due to a change in the callback API, so
+                        # we don't break backwards compatibility with callbacks which were
+                        # designed to use the original API
+                        # FIXME: target for removal and revert to the original code here
+                        #        after a year (2017-01-14)
+                        if method_name == 'v2_playbook_on_start':
+                            import inspect
+                            (f_args, f_varargs, f_keywords, f_defaults) = inspect.getargspec(method)
+                            if 'playbook' in f_args:
+                                method(*args, **kwargs)
+                            else:
+                                method()
+                        else:
+                            method(*args, **kwargs)
                     except Exception as e:
+                        import traceback
+                        orig_tb = traceback.format_exc()
                         try:
                             v1_method = method.replace('v2_','')
                             v1_method(*args, **kwargs)
                         except Exception:
-                            display.warning('Error when using %s: %s' % (method, str(e)))
+                            if display.verbosity >= 3:
+                                display.warning(orig_tb, formatted=True)
+                            else:
+                                display.warning('Error when using %s: %s' % (method, str(e)))
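The shim distinguishes old- and new-style v2_playbook_on_start callbacks by inspecting the method signature for a 'playbook' argument. A small demonstration of that inspect.getargspec test (the two functions here are hypothetical stand-ins for callback methods):

    import inspect

    def old_style(self):            # pre-change signature, takes no playbook
        pass

    def new_style(self, playbook):  # signature expected by the new API
        pass

    for fn in (old_style, new_style):
        f_args = inspect.getargspec(fn)[0]
        print('playbook' in f_args)  # False, then True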
@@ -49,9 +49,34 @@ class Galaxy(object):
         this_dir, this_filename = os.path.split(__file__)
         self.DATA_PATH = os.path.join(this_dir, "data")

-        #TODO: move to getter for lazy loading
-        self.default_readme = self._str_from_data_file('readme')
-        self.default_meta = self._str_from_data_file('metadata_template.j2')
+        self._default_readme = None
+        self._default_meta = None
+        self._default_test = None
+        self._default_travis = None
+
+    @property
+    def default_readme(self):
+        if self._default_readme is None:
+            self._default_readme = self._str_from_data_file('readme')
+        return self._default_readme
+
+    @property
+    def default_meta(self):
+        if self._default_meta is None:
+            self._default_meta = self._str_from_data_file('metadata_template.j2')
+        return self._default_meta
+
+    @property
+    def default_test(self):
+        if self._default_test is None:
+            self._default_test = self._str_from_data_file('test_playbook.j2')
+        return self._default_test
+
+    @property
+    def default_travis(self):
+        if self._default_travis is None:
+            self._default_travis = self._str_from_data_file('travis.j2')
+        return self._default_travis

     def add_role(self, role):
         self.roles[role.name] = role
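Each template is now read from disk only on first access and cached on the instance, instead of eagerly in __init__. The lazy-property pattern in isolation, as a generic sketch:

    class Lazy(object):
        def __init__(self):
            self._data = None

        @property
        def data(self):
            if self._data is None:
                print('loading once')   # stand-in for _str_from_data_file(...)
                self._data = 'contents'
            return self._data

    obj = Lazy()
    print(obj.data)  # triggers the load
    print(obj.data)  # served from the cache, no second load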
@@ -25,11 +25,15 @@ from __future__ import (absolute_import, division, print_function)
 __metaclass__ = type

 import json
+import urllib

 from urllib2 import quote as urlquote, HTTPError
 from urlparse import urlparse

+import ansible.constants as C
 from ansible.errors import AnsibleError
 from ansible.module_utils.urls import open_url
+from ansible.galaxy.token import GalaxyToken

 try:
     from __main__ import display
@@ -43,45 +47,111 @@ class GalaxyAPI(object):

     SUPPORTED_VERSIONS = ['v1']

-    def __init__(self, galaxy, api_server):
+    def __init__(self, galaxy):
         self.galaxy = galaxy
+        self.token = GalaxyToken()
+        self._api_server = C.GALAXY_SERVER
+        self._validate_certs = not C.GALAXY_IGNORE_CERTS

-        try:
-            urlparse(api_server, scheme='https')
-        except:
-            raise AnsibleError("Invalid server API url passed: %s" % api_server)
+        # set validate_certs
+        if galaxy.options.ignore_certs:
+            self._validate_certs = False
+        display.vvv('Validate TLS certificates: %s' % self._validate_certs)

-        server_version = self.get_server_api_version('%s/api/' % (api_server))
-        if not server_version:
-            raise AnsibleError("Could not retrieve server API version: %s" % api_server)
+        # set the API server
+        if galaxy.options.api_server != C.GALAXY_SERVER:
+            self._api_server = galaxy.options.api_server
+        display.vvv("Connecting to galaxy_server: %s" % self._api_server)

-        if server_version in self.SUPPORTED_VERSIONS:
-            self.baseurl = '%s/api/%s' % (api_server, server_version)
-            self.version = server_version # for future use
-            display.vvvvv("Base API: %s" % self.baseurl)
-        else:
+        server_version = self.get_server_api_version()
+        if not server_version in self.SUPPORTED_VERSIONS:
             raise AnsibleError("Unsupported Galaxy server API version: %s" % server_version)

-    def get_server_api_version(self, api_server):
+        self.baseurl = '%s/api/%s' % (self._api_server, server_version)
+        self.version = server_version # for future use
+        display.vvv("Base API: %s" % self.baseurl)
+
+    def __auth_header(self):
+        token = self.token.get()
+        if token is None:
+            raise AnsibleError("No access token. You must first use login to authenticate and obtain an access token.")
+        return {'Authorization': 'Token ' + token}
+
+    def __call_galaxy(self, url, args=None, headers=None, method=None):
+        if args and not headers:
+            headers = self.__auth_header()
+        try:
+            display.vvv(url)
+            resp = open_url(url, data=args, validate_certs=self._validate_certs, headers=headers, method=method)
+            data = json.load(resp)
+        except HTTPError as e:
+            res = json.load(e)
+            raise AnsibleError(res['detail'])
+        return data
+
+    @property
+    def api_server(self):
+        return self._api_server
+
+    @property
+    def validate_certs(self):
+        return self._validate_certs
+
+    def get_server_api_version(self):
         """
         Fetches the Galaxy API current version to ensure
         the API server is up and reachable.
         """
-        #TODO: fix galaxy server which returns current_version path (/api/v1) vs actual version (v1)
-        # also should set baseurl using supported_versions which has path
-        return 'v1'
-
         try:
-            data = json.load(open_url(api_server, validate_certs=self.galaxy.options.validate_certs))
-            return data.get("current_version", 'v1')
-        except Exception:
-            # TODO: report error
-            return None
+            url = '%s/api/' % self._api_server
+            data = json.load(open_url(url, validate_certs=self._validate_certs))
+            return data['current_version']
+        except Exception as e:
+            raise AnsibleError("The API server (%s) is not responding, please try again later." % url)
+
+    def authenticate(self, github_token):
+        """
+        Retrieve an authentication token
+        """
+        url = '%s/tokens/' % self.baseurl
+        args = urllib.urlencode({"github_token": github_token})
+        resp = open_url(url, data=args, validate_certs=self._validate_certs, method="POST")
+        data = json.load(resp)
+        return data
+
+    def create_import_task(self, github_user, github_repo, reference=None):
+        """
+        Post an import request
+        """
+        url = '%s/imports/' % self.baseurl
+        args = urllib.urlencode({
+            "github_user": github_user,
+            "github_repo": github_repo,
+            "github_reference": reference if reference else ""
+        })
+        data = self.__call_galaxy(url, args=args)
+        if data.get('results', None):
+            return data['results']
+        return data
+
+    def get_import_task(self, task_id=None, github_user=None, github_repo=None):
+        """
+        Check the status of an import task.
+        """
+        url = '%s/imports/' % self.baseurl
+        if not task_id is None:
+            url = "%s?id=%d" % (url,task_id)
+        elif not github_user is None and not github_repo is None:
+            url = "%s?github_user=%s&github_repo=%s" % (url,github_user,github_repo)
+        else:
+            raise AnsibleError("Expected task_id or github_user and github_repo")
+
+        data = self.__call_galaxy(url)
+        return data['results']

     def lookup_role_by_name(self, role_name, notify=True):
         """
-        Find a role by name
+        Find a role by name.
         """
         role_name = urlquote(role_name)
@@ -92,18 +162,12 @@ class GalaxyAPI(object):
             if notify:
                 display.display("- downloading role '%s', owned by %s" % (role_name, user_name))
         except:
-            raise AnsibleError("- invalid role name (%s). Specify role as format: username.rolename" % role_name)
+            raise AnsibleError("Invalid role name (%s). Specify role as format: username.rolename" % role_name)

         url = '%s/roles/?owner__username=%s&name=%s' % (self.baseurl, user_name, role_name)
-        display.vvvv("- %s" % (url))
-        try:
-            data = json.load(open_url(url, validate_certs=self.galaxy.options.validate_certs))
-            if len(data["results"]) != 0:
-                return data["results"][0]
-        except:
-            # TODO: report on connection/availability errors
-            pass
-
+        data = self.__call_galaxy(url)
+        if len(data["results"]) != 0:
+            return data["results"][0]
         return None

     def fetch_role_related(self, related, role_id):
@@ -114,13 +178,12 @@ class GalaxyAPI(object):
         try:
             url = '%s/roles/%d/%s/?page_size=50' % (self.baseurl, int(role_id), related)
-            data = json.load(open_url(url, validate_certs=self.galaxy.options.validate_certs))
+            data = self.__call_galaxy(url)
             results = data['results']
             done = (data.get('next', None) is None)
             while not done:
                 url = '%s%s' % (self.baseurl, data['next'])
-                display.display(url)
-                data = json.load(open_url(url, validate_certs=self.galaxy.options.validate_certs))
+                data = self.__call_galaxy(url)
                 results += data['results']
                 done = (data.get('next', None) is None)
             return results
@@ -131,10 +194,9 @@ class GalaxyAPI(object):
         """
         Fetch the list of items specified.
         """
         try:
             url = '%s/%s/?page_size' % (self.baseurl, what)
-            data = json.load(open_url(url, validate_certs=self.galaxy.options.validate_certs))
+            data = self.__call_galaxy(url)
             if "results" in data:
                 results = data['results']
             else:
@@ -144,41 +206,64 @@ class GalaxyAPI(object):
             done = (data.get('next', None) is None)
             while not done:
                 url = '%s%s' % (self.baseurl, data['next'])
-                display.display(url)
-                data = json.load(open_url(url, validate_certs=self.galaxy.options.validate_certs))
+                data = self.__call_galaxy(url)
                 results += data['results']
                 done = (data.get('next', None) is None)
             return results
         except Exception as error:
             raise AnsibleError("Failed to download the %s list: %s" % (what, str(error)))

-    def search_roles(self, search, platforms=None, tags=None):
+    def search_roles(self, search, **kwargs):

-        search_url = self.baseurl + '/roles/?page=1'
+        search_url = self.baseurl + '/search/roles/?'

         if search:
-            search_url += '&search=' + urlquote(search)
+            search_url += '&autocomplete=' + urlquote(search)

-        if tags is None:
-            tags = []
-        elif isinstance(tags, basestring):
+        tags = kwargs.get('tags',None)
+        platforms = kwargs.get('platforms', None)
+        page_size = kwargs.get('page_size', None)
+        author = kwargs.get('author', None)
+
+        if tags and isinstance(tags, basestring):
             tags = tags.split(',')
+            search_url += '&tags_autocomplete=' + '+'.join(tags)

-        for tag in tags:
-            search_url += '&chain__tags__name=' + urlquote(tag)
-
-        if platforms is None:
-            platforms = []
-        elif isinstance(platforms, basestring):
+        if platforms and isinstance(platforms, basestring):
             platforms = platforms.split(',')
+            search_url += '&platforms_autocomplete=' + '+'.join(platforms)

-        for plat in platforms:
-            search_url += '&chain__platforms__name=' + urlquote(plat)
-
-        display.debug("Executing query: %s" % search_url)
-        try:
-            data = json.load(open_url(search_url, validate_certs=self.galaxy.options.validate_certs))
-        except HTTPError as e:
-            raise AnsibleError("Unsuccessful request to server: %s" % str(e))
+        if page_size:
+            search_url += '&page_size=%s' % page_size
+
+        if author:
+            search_url += '&username_autocomplete=%s' % author
+
+        data = self.__call_galaxy(search_url)
+        return data
+
+    def add_secret(self, source, github_user, github_repo, secret):
+        url = "%s/notification_secrets/" % self.baseurl
+        args = urllib.urlencode({
+            "source": source,
+            "github_user": github_user,
+            "github_repo": github_repo,
+            "secret": secret
+        })
+        data = self.__call_galaxy(url, args=args)
+        return data
+
+    def list_secrets(self):
+        url = "%s/notification_secrets" % self.baseurl
+        data = self.__call_galaxy(url, headers=self.__auth_header())
+        return data
+
+    def remove_secret(self, secret_id):
+        url = "%s/notification_secrets/%s/" % (self.baseurl, secret_id)
+        data = self.__call_galaxy(url, headers=self.__auth_header(), method='DELETE')
+        return data
+
+    def delete_role(self, github_user, github_repo):
+        url = "%s/removerole/?github_user=%s&github_repo=%s" % (self.baseurl,github_user,github_repo)
+        data = self.__call_galaxy(url, headers=self.__auth_header(), method='DELETE')
         return data
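For reference, the kind of URL search_roles() now assembles against the /search/roles/ endpoint. A sketch with illustrative values, assuming the default Galaxy server for baseurl:

    from urllib2 import quote as urlquote   # urllib.parse.quote on Python 3

    baseurl = 'https://galaxy.ansible.com/api/v1'
    search_url = baseurl + '/search/roles/?'
    search_url += '&autocomplete=' + urlquote('nginx')
    search_url += '&tags_autocomplete=' + '+'.join(['web', 'proxy'])
    search_url += '&page_size=%s' % 10
    print(search_url)
    # https://galaxy.ansible.com/api/v1/search/roles/?&autocomplete=nginx&tags_autocomplete=web+proxy&page_size=10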
@@ -2,9 +2,11 @@ galaxy_info:
   author: {{ author }}
   description: {{description}}
   company: {{ company }}
+
   # If the issue tracker for your role is not on github, uncomment the
   # next line and provide a value
   # issue_tracker_url: {{ issue_tracker_url }}
+
   # Some suggested licenses:
   # - BSD (default)
   # - MIT
@@ -13,7 +15,17 @@ galaxy_info:
   # - Apache
   # - CC-BY
   license: {{ license }}
+
   min_ansible_version: {{ min_ansible_version }}
+
+  # Optionally specify the branch Galaxy will use when accessing the GitHub
+  # repo for this role. During role install, if no tags are available,
+  # Galaxy will use this branch. During import Galaxy will access files on
+  # this branch. If travis integration is cofigured, only notification for this
+  # branch will be accepted. Otherwise, in all cases, the repo's default branch
+  # (usually master) will be used.
+  #github_branch:
+
   #
   # Below are all platforms currently available. Just uncomment
   # the ones that apply to your role. If you don't see your
@@ -28,6 +40,7 @@ galaxy_info:
   #   - {{ version }}
   {%- endfor %}
   {%- endfor %}
+
   galaxy_tags: []
     # List tags for your role here, one per line. A tag is
     # a keyword that describes and categorizes the role.
@@ -36,6 +49,7 @@ galaxy_info:
     #
     # NOTE: A tag is limited to a single word comprised of
     # alphanumeric characters. Maximum 20 tags per role.
+
 dependencies: []
   # List your role dependencies here, one per line.
   # Be sure to remove the '[]' above if you add dependencies
lib/ansible/galaxy/data/test_playbook.j2 (new file)

@@ -0,0 +1,5 @@
---
- hosts: localhost
  remote_user: root
  roles:
    - {{ role_name }}
lib/ansible/galaxy/data/travis.j2 (new file)

@@ -0,0 +1,29 @@
---
language: python
python: "2.7"

# Use the new container infrastructure
sudo: false

# Install ansible
addons:
  apt:
    packages:
    - python-pip

install:
  # Install ansible
  - pip install ansible

  # Check ansible version
  - ansible --version

  # Create ansible.cfg with correct roles_path
  - printf '[defaults]\nroles_path=../' >ansible.cfg

script:
  # Basic role syntax check
  - ansible-playbook tests/test.yml -i tests/inventory --syntax-check

notifications:
  webhooks: https://galaxy.ansible.com/api/v1/notifications/
lib/ansible/galaxy/login.py (new file, 113 lines)
@@ -0,0 +1,113 @@
#!/usr/bin/env python
########################################################################
#
# (C) 2015, Chris Houseknecht <chouse@ansible.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
########################################################################
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import getpass
import json
import urllib
from urllib2 import quote as urlquote, HTTPError
from urlparse import urlparse
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.module_utils.urls import open_url
from ansible.utils.color import stringc
try:
from __main__ import display
except ImportError:
from ansible.utils.display import Display
display = Display()
class GalaxyLogin(object):
''' Class to handle authenticating user with Galaxy API prior to performing CUD operations '''
GITHUB_AUTH = 'https://api.github.com/authorizations'
def __init__(self, galaxy, github_token=None):
self.galaxy = galaxy
self.github_username = None
self.github_password = None
if github_token == None:
self.get_credentials()
def get_credentials(self):
display.display(u'\n\n' + "We need your " + stringc("Github login",'bright cyan') +
" to identify you.", screen_only=True)
display.display("This information will " + stringc("not be sent to Galaxy",'bright cyan') +
", only to " + stringc("api.github.com.","yellow"), screen_only=True)
display.display("The password will not be displayed." + u'\n\n', screen_only=True)
display.display("Use " + stringc("--github-token",'yellow') +
" if you do not want to enter your password." + u'\n\n', screen_only=True)
try:
self.github_username = raw_input("Github Username: ")
except:
pass
try:
self.github_password = getpass.getpass("Password for %s: " % self.github_username)
except:
pass
if not self.github_username or not self.github_password:
raise AnsibleError("Invalid Github credentials. Username and password are required.")
def remove_github_token(self):
'''
If for some reason an ansible-galaxy token was left from a prior login, remove it. We cannot
retrieve the token after creation, so we are forced to create a new one.
'''
try:
tokens = json.load(open_url(self.GITHUB_AUTH, url_username=self.github_username,
url_password=self.github_password, force_basic_auth=True,))
except HTTPError as e:
res = json.load(e)
raise AnsibleError(res['message'])
for token in tokens:
if token['note'] == 'ansible-galaxy login':
display.vvvvv('removing token: %s' % token['token_last_eight'])
try:
open_url('https://api.github.com/authorizations/%d' % token['id'], url_username=self.github_username,
url_password=self.github_password, method='DELETE', force_basic_auth=True,)
except HTTPError as e:
res = json.load(e)
raise AnsibleError(res['message'])
def create_github_token(self):
'''
Create a personal authorization token with a note of 'ansible-galaxy login'
'''
self.remove_github_token()
args = json.dumps({"scopes":["public_repo"], "note":"ansible-galaxy login"})
try:
data = json.load(open_url(self.GITHUB_AUTH, url_username=self.github_username,
url_password=self.github_password, force_basic_auth=True, data=args))
except HTTPError as e:
res = json.load(e)
raise AnsibleError(res['message'])
return data['token']
@@ -46,7 +46,7 @@ class GalaxyRole(object):
     SUPPORTED_SCMS = set(['git', 'hg'])
     META_MAIN = os.path.join('meta', 'main.yml')
     META_INSTALL = os.path.join('meta', '.galaxy_install_info')
-    ROLE_DIRS = ('defaults','files','handlers','meta','tasks','templates','vars')
+    ROLE_DIRS = ('defaults','files','handlers','meta','tasks','templates','vars','tests')

     def __init__(self, galaxy, name, src=None, version=None, scm=None, path=None):
@@ -130,13 +130,11 @@ class GalaxyRole(object):
             install_date=datetime.datetime.utcnow().strftime("%c"),
         )
         info_path = os.path.join(self.path, self.META_INSTALL)
-        try:
-            f = open(info_path, 'w+')
-            self._install_info = yaml.safe_dump(info, f)
-        except:
-            return False
-        finally:
-            f.close()
+        with open(info_path, 'w+') as f:
+            try:
+                self._install_info = yaml.safe_dump(info, f)
+            except:
+                return False

         return True
@@ -198,10 +196,10 @@ class GalaxyRole(object):
             role_data = self.src
             tmp_file = self.fetch(role_data)
         else:
-            api = GalaxyAPI(self.galaxy, self.options.api_server)
+            api = GalaxyAPI(self.galaxy)
             role_data = api.lookup_role_by_name(self.src)
             if not role_data:
-                raise AnsibleError("- sorry, %s was not found on %s." % (self.src, self.options.api_server))
+                raise AnsibleError("- sorry, %s was not found on %s." % (self.src, api.api_server))

             role_versions = api.fetch_role_related('versions', role_data['id'])
             if not self.version:
@@ -213,8 +211,10 @@ class GalaxyRole(object):
                     loose_versions = [LooseVersion(a.get('name',None)) for a in role_versions]
                     loose_versions.sort()
                     self.version = str(loose_versions[-1])
+                elif role_data.get('github_branch', None):
+                    self.version = role_data['github_branch']
                 else:
                     self.version = 'master'
             elif self.version != 'master':
                 if role_versions and self.version not in [a.get('name', None) for a in role_versions]:
                     raise AnsibleError("- the specified version (%s) of %s was not found in the list of available versions (%s)." % (self.version, self.name, role_versions))
lib/ansible/galaxy/token.py (new file)
@@ -0,0 +1,67 @@
#!/usr/bin/env python
########################################################################
#
# (C) 2015, Chris Houseknecht <chouse@ansible.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
########################################################################
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import yaml
from stat import *
try:
from __main__ import display
except ImportError:
from ansible.utils.display import Display
display = Display()
class GalaxyToken(object):
''' Class to storing and retrieving token in ~/.ansible_galaxy '''
def __init__(self):
self.file = os.path.expanduser("~") + '/.ansible_galaxy'
self.config = yaml.safe_load(self.__open_config_for_read())
if not self.config:
self.config = {}
def __open_config_for_read(self):
if os.path.isfile(self.file):
display.vvv('Opened %s' % self.file)
return open(self.file, 'r')
# config.yml not found, create and chomd u+rw
f = open(self.file,'w')
f.close()
os.chmod(self.file,S_IRUSR|S_IWUSR) # owner has +rw
display.vvv('Created %s' % self.file)
return open(self.file, 'r')
def set(self, token):
self.config['token'] = token
self.save()
def get(self):
return self.config.get('token', None)
def save(self):
with open(self.file,'w') as f:
yaml.safe_dump(self.config,f,default_flow_style=False)
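A hedged usage sketch of the class above; note that it reads and writes the real ~/.ansible_galaxy, so point HOME at a scratch directory if you try it:

    token = GalaxyToken()
    token.set('abc123')          # persists {token: abc123} to ~/.ansible_galaxy
    print(token.get())           # abc123
    print(GalaxyToken().get())   # a fresh instance re-reads the same file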
@@ -78,6 +78,10 @@ class Inventory(object):
         self._restriction = None
         self._subset = None

+        # clear the cache here, which is only useful if more than
+        # one Inventory objects are created when using the API directly
+        self.clear_pattern_cache()
+
         self.parse_inventory(host_list)

     def serialize(self):
@@ -109,7 +113,12 @@ class Inventory(object):
                 pass
             elif isinstance(host_list, list):
                 for h in host_list:
-                    (host, port) = parse_address(h, allow_ranges=False)
+                    try:
+                        (host, port) = parse_address(h, allow_ranges=False)
+                    except AnsibleError as e:
+                        display.vvv("Unable to parse address from hostname, leaving unchanged: %s" % to_unicode(e))
+                        host = h
+                        port = None
                     all.add_host(Host(host, port))
             elif self._loader.path_exists(host_list):
                 #TODO: switch this to a plugin loader and a 'condition' per plugin on which it should be tried, restoring 'inventory pllugins'
@@ -178,25 +187,26 @@
        if self._restriction:
            pattern_hash += u":%s" % to_unicode(self._restriction)

-       if pattern_hash in HOSTS_PATTERNS_CACHE:
-           return HOSTS_PATTERNS_CACHE[pattern_hash][:]
+       if pattern_hash not in HOSTS_PATTERNS_CACHE:

            patterns = Inventory.split_host_pattern(pattern)
            hosts = self._evaluate_patterns(patterns)

            # mainly useful for hostvars[host] access
            if not ignore_limits_and_restrictions:
                # exclude hosts not in a subset, if defined
                if self._subset:
                    subset = self._evaluate_patterns(self._subset)
                    hosts = [ h for h in hosts if h in subset ]

                # exclude hosts mentioned in any restriction (ex: failed hosts)
                if self._restriction is not None:
                    hosts = [ h for h in hosts if h in self._restriction ]

-           HOSTS_PATTERNS_CACHE[pattern_hash] = hosts[:]
-           return hosts
+           seen = set()
+           HOSTS_PATTERNS_CACHE[pattern_hash] = [x for x in hosts if x not in seen and not seen.add(x)]
+
+       return HOSTS_PATTERNS_CACHE[pattern_hash][:]
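(Editorial aside, not part of the diff: the new caching line uses a compact order-preserving de-duplication idiom -- set.add() returns None, so `not seen.add(x)` records x and stays truthy. Standalone:)

    hosts = ['web1', 'db1', 'web1', 'web2', 'db1']
    seen = set()
    unique = [x for x in hosts if x not in seen and not seen.add(x)]
    print(unique)   # ['web1', 'db1', 'web2'] -- duplicates dropped, first-seen order kept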
    @classmethod
    def split_host_pattern(cls, pattern):

@@ -227,15 +237,13 @@
        # If it doesn't, it could still be a single pattern. This accounts for
        # non-separator uses of colons: IPv6 addresses and [x:y] host ranges.
        else:
-           (base, port) = parse_address(pattern, allow_ranges=True)
-           if base:
-               patterns = [pattern]
+           try:
+               (base, port) = parse_address(pattern, allow_ranges=True)
+               patterns = [pattern]
+           except:
                # The only other case we accept is a ':'-separated list of patterns.
                # This mishandles IPv6 addresses, and is retained only for backwards
                # compatibility.
-           else:
                patterns = re.findall(
                    r'''(?: # We want to match something comprising:
                            [^\s:\[\]] # (anything other than whitespace or ':[]'
@@ -388,7 +396,7 @@
            end = -1
        subscript = (int(start), int(end))
        if sep == '-':
-           display.deprecated("Use [x:y] inclusive subscripts instead of [x-y]", version=2.0, removed=True)
+           display.warning("Use [x:y] inclusive subscripts instead of [x-y] which has been removed")

        return (pattern, subscript)
@@ -455,6 +463,8 @@
    def clear_pattern_cache(self):
        ''' called exclusively by the add_host plugin to allow patterns to be recalculated '''
+       global HOSTS_PATTERNS_CACHE
+       HOSTS_PATTERNS_CACHE = {}
        self._pattern_cache = {}

    def groups_for_host(self, host):
@@ -729,12 +739,12 @@
        if group and host is None:
            # load vars in dir/group_vars/name_of_group
-           base_path = os.path.realpath(os.path.join(basedir, "group_vars/%s" % group.name))
+           base_path = os.path.realpath(os.path.join(to_unicode(basedir, errors='strict'), "group_vars/%s" % group.name))
-           results = self._variable_manager.add_group_vars_file(base_path, self._loader)
+           results = combine_vars(results, self._variable_manager.add_group_vars_file(base_path, self._loader))
        elif host and group is None:
            # same for hostvars in dir/host_vars/name_of_host
-           base_path = os.path.realpath(os.path.join(basedir, "host_vars/%s" % host.name))
+           base_path = os.path.realpath(os.path.join(to_unicode(basedir, errors='strict'), "host_vars/%s" % host.name))
-           results = self._variable_manager.add_host_vars_file(base_path, self._loader)
+           results = combine_vars(results, self._variable_manager.add_host_vars_file(base_path, self._loader))

        # all done, results is a dictionary of variables for this particular host.
        return results
@@ -192,6 +192,8 @@ class InventoryDirectory(object):
            if group.name not in self.groups:
                # it's brand new, add him!
                self.groups[group.name] = group
+           # the Group class does not (yet) implement __eq__/__ne__,
+           # so unlike Host we do a regular comparison here
            if self.groups[group.name] != group:
                # different object, merge
                self._merge_groups(self.groups[group.name], group)

@@ -200,6 +202,9 @@
            if host.name not in self.hosts:
                # Papa's got a brand new host
                self.hosts[host.name] = host
+           # because the __eq__/__ne__ methods in Host() compare the
+           # name fields rather than references, we use id() here to
+           # do the object comparison for merges
            if self.hosts[host.name] != host:
                # different object, merge
                self._merge_hosts(self.hosts[host.name], host)
@@ -19,6 +19,8 @@
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

+import uuid

from ansible.inventory.group import Group
from ansible.utils.vars import combine_vars

@@ -38,7 +40,7 @@ class Host:
    def __eq__(self, other):
        if not isinstance(other, Host):
            return False
-       return self.name == other.name
+       return self._uuid == other._uuid

    def __ne__(self, other):
        return not self.__eq__(other)

@@ -55,6 +57,7 @@ class Host:
            name=self.name,
            vars=self.vars.copy(),
            address=self.address,
+           uuid=self._uuid,
            gathered_facts=self._gathered_facts,
            groups=groups,
        )

@@ -65,6 +68,7 @@ class Host:
        self.name = data.get('name')
        self.vars = data.get('vars', dict())
        self.address = data.get('address', '')
+       self._uuid = data.get('uuid', uuid.uuid4())

        groups = data.get('groups', [])
        for group_data in groups:

@@ -84,6 +88,7 @@ class Host:
        self.set_variable('ansible_port', int(port))

        self._gathered_facts = False
+       self._uuid = uuid.uuid4()

    def __repr__(self):
        return self.get_name()
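(Editorial aside, not part of the diff: with the new _uuid field, equality is per-object rather than per-name, so two Host objects that merely share a name no longer compare equal:)

    a = Host('web1')
    b = Host('web1')
    print(a == b)   # False after this change (distinct uuids); was True before (same name)
    print(a == a)   # True either way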
@@ -124,6 +124,9 @@ class InventoryParser(object):
                    del pending_declarations[groupname]

                continue

+           elif line.startswith('['):
+               self._raise_error("Invalid section entry: '%s'. Please make sure that there are no spaces " % line + \
+                   "in the section entry, and that there are no other invalid characters")

            # It's not a section, so the current state tells us what kind of
            # definition it must be. The individual parsers will raise an

@@ -264,9 +267,12 @@
        # Can the given hostpattern be parsed as a host with an optional port
        # specification?

-       (pattern, port) = parse_address(hostpattern, allow_ranges=True)
-       if not pattern:
-           self._raise_error("Can't parse '%s' as host[:port]" % hostpattern)
+       try:
+           (pattern, port) = parse_address(hostpattern, allow_ranges=True)
+       except:
+           # not a recognizable host pattern
+           pattern = hostpattern
+           port = None

        # Once we have separated the pattern, we expand it into list of one or
        # more hostnames, depending on whether it contains any [x:y] ranges.
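(Editorial aside, not part of the diff: parse_address now signals unparseable input by raising, which is why callers grew try/except blocks. A sketch, assuming the 2.0-era import path:)

    from ansible.parsing.utils.addresses import parse_address

    try:
        host, port = parse_address('badhost:name[', allow_ranges=True)
    except Exception:   # the parser raises rather than returning an empty result
        host, port = 'badhost:name[', None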
@@ -31,6 +31,7 @@ from ansible.errors import AnsibleError
from ansible.inventory.host import Host
from ansible.inventory.group import Group
from ansible.module_utils.basic import json_dict_bytes_to_unicode
+from ansible.utils.unicode import to_str, to_unicode

class InventoryScript:

@@ -57,12 +58,17 @@
        if sp.returncode != 0:
            raise AnsibleError("Inventory script (%s) had an execution error: %s " % (filename,stderr))

-       self.data = stdout
+       # make sure script output is unicode so that json loader will output
+       # unicode strings itself
+       try:
+           self.data = to_unicode(stdout, errors="strict")
+       except Exception as e:
+           raise AnsibleError("inventory data from {0} contained characters that cannot be interpreted as UTF-8: {1}".format(to_str(self.filename), to_str(e)))

        # see comment about _meta below
        self.host_vars_from_top = None
        self._parse(stderr)

    def _parse(self, err):

        all_hosts = {}

@@ -72,13 +78,11 @@
            self.raw = self._loader.load(self.data)
        except Exception as e:
            sys.stderr.write(err + "\n")
-           raise AnsibleError("failed to parse executable inventory script results from {0}: {1}".format(self.filename, str(e)))
+           raise AnsibleError("failed to parse executable inventory script results from {0}: {1}".format(to_str(self.filename), to_str(e)))

        if not isinstance(self.raw, Mapping):
            sys.stderr.write(err + "\n")
-           raise AnsibleError("failed to parse executable inventory script results from {0}: data needs to be formatted as a json dict".format(self.filename))
+           raise AnsibleError("failed to parse executable inventory script results from {0}: data needs to be formatted as a json dict".format(to_str(self.filename)))

-       self.raw = json_dict_bytes_to_unicode(self.raw)

        group = None
        for (group_name, data) in self.raw.items():

@@ -103,7 +107,7 @@
            if not isinstance(data, dict):
                data = {'hosts': data}
            # is not those subkeys, then simplified syntax, host with vars
-           elif not any(k in data for k in ('hosts','vars')):
+           elif not any(k in data for k in ('hosts','vars','children')):
                data = {'hosts': [group_name], 'vars': data}

            if 'hosts' in data:

@@ -112,7 +116,7 @@
                        "data for the host list:\n %s" % (group_name, data))

                for hostname in data['hosts']:
-                   if not hostname in all_hosts:
+                   if hostname not in all_hosts:
                        all_hosts[hostname] = Host(hostname)
                    host = all_hosts[hostname]
                    group.add_host(host)

@@ -145,10 +149,12 @@
    def get_host_variables(self, host):
        """ Runs <script> --host <hostname> to determine additional host variables """
        if self.host_vars_from_top is not None:
-           got = self.host_vars_from_top.get(host.name, {})
+           try:
+               got = self.host_vars_from_top.get(host.name, {})
+           except AttributeError as e:
+               raise AnsibleError("Improperly formatted host information for %s: %s" % (host.name, to_str(e)))
            return got

        cmd = [self.filename, "--host", host.name]
        try:
            sp = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

@@ -161,4 +167,3 @@
            return json_dict_bytes_to_unicode(self._loader.load(out))
        except ValueError:
            raise AnsibleError("could not parse post variable response: %s, %s" % (cmd, out))
@@ -34,8 +34,8 @@ ANSIBLE_VERSION = "<<ANSIBLE_VERSION>>"
MODULE_ARGS = "<<INCLUDE_ANSIBLE_MODULE_ARGS>>"
MODULE_COMPLEX_ARGS = "<<INCLUDE_ANSIBLE_MODULE_COMPLEX_ARGS>>"

-BOOLEANS_TRUE = ['yes', 'on', '1', 'true', 1]
-BOOLEANS_FALSE = ['no', 'off', '0', 'false', 0]
+BOOLEANS_TRUE = ['yes', 'on', '1', 'true', 1, True]
+BOOLEANS_FALSE = ['no', 'off', '0', 'false', 0, False]
BOOLEANS = BOOLEANS_TRUE + BOOLEANS_FALSE

SELINUX_SPECIAL_FS="<<SELINUX_SPECIAL_FILESYSTEMS>>"
@@ -213,7 +213,7 @@ except ImportError:
        elif isinstance(node, ast.List):
            return list(map(_convert, node.nodes))
        elif isinstance(node, ast.Dict):
-           return dict((_convert(k), _convert(v)) for k, v in node.items)
+           return dict((_convert(k), _convert(v)) for k, v in node.items())
        elif isinstance(node, ast.Name):
            if node.name in _safe_names:
                return _safe_names[node.name]
@@ -369,7 +369,12 @@ def return_values(obj):
    sensitive values pre-jsonification."""
    if isinstance(obj, basestring):
        if obj:
-           yield obj
+           if isinstance(obj, bytes):
+               yield obj
+           else:
+               # Unicode objects should all convert to utf-8
+               # (still must deal with surrogateescape on python3)
+               yield obj.encode('utf-8')
        return
    elif isinstance(obj, Sequence):
        for element in obj:
@@ -391,10 +396,22 @@ def remove_values(value, no_log_strings):
    """ Remove strings in no_log_strings from value. If value is a container
    type, then remove a lot more"""
    if isinstance(value, basestring):
-       if value in no_log_strings:
+       if isinstance(value, unicode):
+           # This should work everywhere on python2. Need to check
+           # surrogateescape on python3
+           bytes_value = value.encode('utf-8')
+           value_is_unicode = True
+       else:
+           bytes_value = value
+           value_is_unicode = False
+       if bytes_value in no_log_strings:
            return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
        for omit_me in no_log_strings:
-           value = value.replace(omit_me, '*' * 8)
+           bytes_value = bytes_value.replace(omit_me, '*' * 8)
+       if value_is_unicode:
+           value = unicode(bytes_value, 'utf-8', errors='replace')
+       else:
+           value = bytes_value
    elif isinstance(value, Sequence):
        return [remove_values(elem, no_log_strings) for elem in value]
    elif isinstance(value, Mapping):
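(Editorial aside, not part of the diff: the rework normalizes both the value and the secrets to bytes before matching, so a unicode parameter still matches a byte-string secret. The same idea as a standalone Python 3 sketch:)

    def mask(value, secrets):
        # normalize to bytes so str and bytes secrets compare consistently
        raw = value.encode('utf-8') if isinstance(value, str) else value
        if raw in secrets:
            return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
        for secret in secrets:
            raw = raw.replace(secret, b'*' * 8)
        return raw.decode('utf-8', errors='replace') if isinstance(value, str) else raw

    print(mask(u'token=hunter2', {b'hunter2'}))   # token=********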
@@ -497,8 +514,11 @@ class AnsibleModule(object):
        self.no_log = no_log
        self.cleanup_files = []
        self._debug = False
+       self._diff = False
+       self._verbosity = 0

        self.aliases = {}
+       self._legal_inputs = ['_ansible_check_mode', '_ansible_no_log', '_ansible_debug', '_ansible_diff', '_ansible_verbosity']

        if add_file_common_args:
            for k, v in FILE_COMMON_ARGUMENTS.items():

@@ -507,6 +527,15 @@
        self.params = self._load_params()

+       # append to legal_inputs and then possibly check against them
+       try:
+           self.aliases = self._handle_aliases()
+       except Exception:
+           e = get_exception()
+           # use exceptions here because it's not safe to call fail_json until no_log is processed
+           print('{"failed": true, "msg": "Module alias error: %s"}' % str(e))
+           sys.exit(1)

        # Save parameter values that should never be logged
        self.no_log_values = set()
        # Use the argspec to determine which args are no_log
@@ -517,15 +546,10 @@
            if no_log_object:
                self.no_log_values.update(return_values(no_log_object))

-       # check the locale as set by the current environment, and
-       # reset to LANG=C if it's an invalid/unavailable locale
+       # check the locale as set by the current environment, and reset to
+       # a known valid (LANG=C) if it's an invalid/unavailable locale
        self._check_locale()

-       self._legal_inputs = ['_ansible_check_mode', '_ansible_no_log', '_ansible_debug']
-       # append to legal_inputs and then possibly check against them
-       self.aliases = self._handle_aliases()

        self._check_arguments(check_invalid_arguments)

        # check exclusive early

@@ -554,7 +578,7 @@
        self._set_defaults(pre=False)

-       if not self.no_log:
+       if not self.no_log and self._verbosity >= 3:
            self._log_invocation()

        # finally, make sure we're in a sane working dir
@@ -728,7 +752,7 @@
            context = self.selinux_default_context(path)
        return self.set_context_if_different(path, context, False)

-   def set_context_if_different(self, path, context, changed):
+   def set_context_if_different(self, path, context, changed, diff=None):

        if not HAVE_SELINUX or not self.selinux_enabled():
            return changed

@@ -749,6 +773,14 @@
                        new_context[i] = cur_context[i]

        if cur_context != new_context:
+           if diff is not None:
+               if 'before' not in diff:
+                   diff['before'] = {}
+               diff['before']['secontext'] = cur_context
+               if 'after' not in diff:
+                   diff['after'] = {}
+               diff['after']['secontext'] = new_context
            try:
                if self.check_mode:
                    return True

@@ -762,7 +794,7 @@
            changed = True
        return changed
-   def set_owner_if_different(self, path, owner, changed):
+   def set_owner_if_different(self, path, owner, changed, diff=None):
        path = os.path.expanduser(path)
        if owner is None:
            return changed

@@ -775,6 +807,15 @@
            except KeyError:
                self.fail_json(path=path, msg='chown failed: failed to look up user %s' % owner)

        if orig_uid != uid:
+           if diff is not None:
+               if 'before' not in diff:
+                   diff['before'] = {}
+               diff['before']['owner'] = orig_uid
+               if 'after' not in diff:
+                   diff['after'] = {}
+               diff['after']['owner'] = uid

            if self.check_mode:
                return True
            try:

@@ -784,7 +825,7 @@
            changed = True
        return changed
-   def set_group_if_different(self, path, group, changed):
+   def set_group_if_different(self, path, group, changed, diff=None):
        path = os.path.expanduser(path)
        if group is None:
            return changed

@@ -797,6 +838,15 @@
            except KeyError:
                self.fail_json(path=path, msg='chgrp failed: failed to look up group %s' % group)

        if orig_gid != gid:
+           if diff is not None:
+               if 'before' not in diff:
+                   diff['before'] = {}
+               diff['before']['group'] = orig_gid
+               if 'after' not in diff:
+                   diff['after'] = {}
+               diff['after']['group'] = gid

            if self.check_mode:
                return True
            try:

@@ -806,7 +856,7 @@
            changed = True
        return changed
-   def set_mode_if_different(self, path, mode, changed):
+   def set_mode_if_different(self, path, mode, changed, diff=None):
        path = os.path.expanduser(path)
        path_stat = os.lstat(path)

@@ -828,6 +878,15 @@
        prev_mode = stat.S_IMODE(path_stat.st_mode)

        if prev_mode != mode:
+           if diff is not None:
+               if 'before' not in diff:
+                   diff['before'] = {}
+               diff['before']['mode'] = oct(prev_mode)
+               if 'after' not in diff:
+                   diff['after'] = {}
+               diff['after']['mode'] = oct(mode)

            if self.check_mode:
                return True
            # FIXME: comparison against string above will cause this to be executed
@@ -961,27 +1020,27 @@
        or_reduce = lambda mode, perm: mode | user_perms_to_modes[user][perm]
        return reduce(or_reduce, perms, 0)

-   def set_fs_attributes_if_different(self, file_args, changed):
+   def set_fs_attributes_if_different(self, file_args, changed, diff=None):
        # set modes owners and context as needed
        changed = self.set_context_if_different(
-           file_args['path'], file_args['secontext'], changed
+           file_args['path'], file_args['secontext'], changed, diff
        )
        changed = self.set_owner_if_different(
-           file_args['path'], file_args['owner'], changed
+           file_args['path'], file_args['owner'], changed, diff
        )
        changed = self.set_group_if_different(
-           file_args['path'], file_args['group'], changed
+           file_args['path'], file_args['group'], changed, diff
        )
        changed = self.set_mode_if_different(
-           file_args['path'], file_args['mode'], changed
+           file_args['path'], file_args['mode'], changed, diff
        )
        return changed

-   def set_directory_attributes_if_different(self, file_args, changed):
-       return self.set_fs_attributes_if_different(file_args, changed)
+   def set_directory_attributes_if_different(self, file_args, changed, diff=None):
+       return self.set_fs_attributes_if_different(file_args, changed, diff)

-   def set_file_attributes_if_different(self, file_args, changed):
-       return self.set_fs_attributes_if_different(file_args, changed)
+   def set_file_attributes_if_different(self, file_args, changed, diff=None):
+       return self.set_fs_attributes_if_different(file_args, changed, diff)
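(Editorial aside, not part of the diff: the diff=None parameter threaded through these helpers follows one pattern -- the caller passes a shared dict and each helper records the attribute it changed under 'before'/'after'. A hypothetical driver, assuming the patched methods above:)

    diff = {}
    file_args = module.load_file_common_arguments(module.params)
    changed = module.set_fs_attributes_if_different(file_args, False, diff=diff)
    module.exit_json(changed=changed, diff=diff)   # e.g. diff == {'before': {'mode': '0644'}, 'after': {'mode': '0600'}}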
    def add_path_info(self, kwargs):
        '''

@@ -1034,7 +1093,6 @@
            # as it would be returned by locale.getdefaultlocale()
            locale.setlocale(locale.LC_ALL, '')
        except locale.Error:
-           e = get_exception()
            # fallback to the 'C' locale, which may cause unicode
            # issues but is preferable to simply failing because
            # of an unknown locale

@@ -1047,6 +1105,7 @@
            self.fail_json(msg="An unknown error was encountered while attempting to validate the locale: %s" % e)

    def _handle_aliases(self):
+       # this uses exceptions as it happens before we can safely call fail_json
        aliases_results = {} #alias:canon
        for (k,v) in self.argument_spec.items():
            self._legal_inputs.append(k)
@@ -1055,11 +1114,11 @@
            required = v.get('required', False)
            if default is not None and required:
                # not alias specific but this is a good place to check this
-               self.fail_json(msg="internal error: required and default are mutually exclusive for %s" % k)
+               raise Exception("internal error: required and default are mutually exclusive for %s" % k)
            if aliases is None:
                continue
            if type(aliases) != list:
-               self.fail_json(msg='internal error: aliases must be a list')
+               raise Exception('internal error: aliases must be a list')
            for alias in aliases:
                self._legal_inputs.append(alias)
                aliases_results[alias] = k
@@ -1082,6 +1141,12 @@
            elif k == '_ansible_debug':
                self._debug = self.boolean(v)

+           elif k == '_ansible_diff':
+               self._diff = self.boolean(v)

+           elif k == '_ansible_verbosity':
+               self._verbosity = v

            elif check_invalid_arguments and k not in self._legal_inputs:
                self.fail_json(msg="unsupported parameter for module: %s" % k)
@@ -1257,7 +1322,7 @@
        if isinstance(value, bool):
            return value

-       if isinstance(value, basestring):
+       if isinstance(value, basestring) or isinstance(value, int):
            return self.boolean(value)

        raise TypeError('%s cannot be converted to a bool' % type(value))
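(Editorial aside, not part of the diff: accepting int matters because JSON/YAML module args can legitimately arrive as 0/1; the type check now forwards them to boolean() instead of raising. Roughly, for an AnsibleModule instance:)

    module.boolean('yes')   # True  -- strings were already coerced
    module.boolean(1)       # True  -- 1 is in BOOLEANS_TRUE, so ints coerce too
    module.boolean(0)       # False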
@@ -1414,7 +1479,6 @@
        self.log(msg, log_args=log_args)

    def _set_cwd(self):
        try:
            cwd = os.getcwd()
@@ -1507,6 +1571,8 @@
        self.add_path_info(kwargs)
        if not 'changed' in kwargs:
            kwargs['changed'] = False
+       if 'invocation' not in kwargs:
+           kwargs['invocation'] = {'module_args': self.params}
        kwargs = remove_values(kwargs, self.no_log_values)
        self.do_cleanup_files()
        print(self.jsonify(kwargs))

@@ -1517,6 +1583,8 @@
        self.add_path_info(kwargs)
        assert 'msg' in kwargs, "implementation error -- msg to explain the error is required"
        kwargs['failed'] = True
+       if 'invocation' not in kwargs:
+           kwargs['invocation'] = {'module_args': self.params}
        kwargs = remove_values(kwargs, self.no_log_values)
        self.do_cleanup_files()
        print(self.jsonify(kwargs))
@@ -1687,25 +1755,29 @@
            # rename might not preserve context
            self.set_context_if_different(dest, context, False)

-   def run_command(self, args, check_rc=False, close_fds=True, executable=None, data=None, binary_data=False, path_prefix=None, cwd=None, use_unsafe_shell=False, prompt_regex=None):
+   def run_command(self, args, check_rc=False, close_fds=True, executable=None, data=None, binary_data=False, path_prefix=None, cwd=None, use_unsafe_shell=False, prompt_regex=None, environ_update=None):
        '''
        Execute a command, returns rc, stdout, and stderr.
-       args is the command to run
-       If args is a list, the command will be run with shell=False.
-       If args is a string and use_unsafe_shell=False it will split args to a list and run with shell=False
-       If args is a string and use_unsafe_shell=True it run with shell=True.
-       Other arguments:
-       - check_rc (boolean)    Whether to call fail_json in case of
-                               non zero RC.  Default is False.
-       - close_fds (boolean)   See documentation for subprocess.Popen().
-                               Default is True.
-       - executable (string)   See documentation for subprocess.Popen().
-                               Default is None.
-       - prompt_regex (string) A regex string (not a compiled regex) which
-                               can be used to detect prompts in the stdout
-                               which would otherwise cause the execution
-                               to hang (especially if no input data is
-                               specified)
+
+       :arg args: is the command to run
+           * If args is a list, the command will be run with shell=False.
+           * If args is a string and use_unsafe_shell=False it will split args to a list and run with shell=False
+           * If args is a string and use_unsafe_shell=True it runs with shell=True.
+       :kw check_rc: Whether to call fail_json in case of non zero RC.
+           Default False
+       :kw close_fds: See documentation for subprocess.Popen(). Default True
+       :kw executable: See documentation for subprocess.Popen(). Default None
+       :kw data: If given, information to write to the stdin of the command
+       :kw binary_data: If False, append a newline to the data. Default False
+       :kw path_prefix: If given, additional path to find the command in.
+           This adds to the PATH environment variable so helper commands in
+           the same directory can also be found
+       :kw cwd: If given, working directory to run the command inside
+       :kw use_unsafe_shell: See `args` parameter. Default False
+       :kw prompt_regex: Regex string (not a compiled regex) which can be
+           used to detect prompts in the stdout which would otherwise cause
+           the execution to hang (especially if no input data is specified)
+       :kwarg environ_update: dictionary to *update* os.environ with
        '''

        shell = False
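(Editorial aside, not part of the diff: a usage sketch for the new keyword, with a hypothetical command and path:)

    rc, out, err = module.run_command(
        ['git', 'status'],
        cwd='/tmp/checkout',              # hypothetical working directory
        environ_update={'LC_ALL': 'C'},   # applied for this call only, then restored
        check_rc=True,
    )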
@@ -1736,10 +1808,15 @@
        msg = None
        st_in = None

-       # Set a temporary env path if a prefix is passed
-       env=os.environ
+       # Manipulate the environ we'll send to the new process
+       old_env_vals = {}
+       if environ_update:
+           for key, val in environ_update.items():
+               old_env_vals[key] = os.environ.get(key, None)
+               os.environ[key] = val
        if path_prefix:
-           env['PATH']="%s:%s" % (path_prefix, env['PATH'])
+           old_env_vals['PATH'] = os.environ['PATH']
+           os.environ['PATH'] = "%s:%s" % (path_prefix, os.environ['PATH'])
        # create a printable version of the command for use
        # in reporting later, which strips out things like

@@ -1781,11 +1858,10 @@
            close_fds=close_fds,
            stdin=st_in,
            stdout=subprocess.PIPE,
-           stderr=subprocess.PIPE
+           stderr=subprocess.PIPE,
+           env=os.environ,
        )

-       if path_prefix:
-           kwargs['env'] = env
        if cwd and os.path.isdir(cwd):
            kwargs['cwd'] = cwd
@@ -1864,6 +1940,13 @@
        except:
            self.fail_json(rc=257, msg=traceback.format_exc(), cmd=clean_args)

+       # Restore env settings
+       for key, val in old_env_vals.items():
+           if val is None:
+               del os.environ[key]
+           else:
+               os.environ[key] = val

        if rc != 0 and check_rc:
            msg = heuristic_log_sanitize(stderr.rstrip(), self.no_log_values)
            self.fail_json(cmd=clean_args, rc=rc, stdout=stdout, stderr=stderr, msg=msg)
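(Editorial aside, not part of the diff: the restore loop above is the classic 'remember the old value, where None means the key was absent' idiom. The same idea as a standalone helper:)

    import os
    from contextlib import contextmanager

    @contextmanager
    def temp_environ(**overrides):
        saved = {key: os.environ.get(key) for key in overrides}
        os.environ.update(overrides)
        try:
            yield
        finally:
            for key, val in saved.items():
                if val is None:
                    os.environ.pop(key, None)   # key did not exist before
                else:
                    os.environ[key] = val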
@@ -78,6 +78,10 @@ class AnsibleCloudStack(object):
        self.returns = {}
        # these values will be casted to int
        self.returns_to_int = {}
+       # these keys will be compared case sensitive in self.has_changed()
+       self.case_sensitive_keys = [
+           'id',
+       ]

        self.module = module
        self._connect()

@@ -138,16 +142,14 @@
                continue

            if key in current_dict:
-               # API returns string for int in some cases, just to make sure
-               if isinstance(value, int):
-                   current_dict[key] = int(current_dict[key])
-               elif isinstance(value, str):
-                   current_dict[key] = str(current_dict[key])
-               # Only need to detect a single change, not every item
-               if value != current_dict[key]:
+               if self.case_sensitive_keys and key in self.case_sensitive_keys:
+                   if str(value) != str(current_dict[key]):
+                       return True
+               # Test for diff in case insensitive way
+               elif str(value).lower() != str(current_dict[key]).lower():
                    return True
-           else:
-               return True
        return False
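(Editorial aside, not part of the diff: in isolation, the new comparison behaves like this for a case-sensitive 'id' key versus a case-insensitive display field:)

    wanted = {'displaytext': 'Web VM', 'id': 'A1-b2'}
    current = {'displaytext': 'web vm', 'id': 'a1-b2'}

    print(str(wanted['displaytext']).lower() != str(current['displaytext']).lower())   # False -> no change
    print(str(wanted['id']) != str(current['id']))                                     # True  -> change detected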
@@ -218,7 +220,7 @@
        vms = self.cs.listVirtualMachines(**args)
        if vms:
            for v in vms['virtualmachine']:
-               if vm in [ v['name'], v['displayname'], v['id'] ]:
+               if vm.lower() in [ v['name'].lower(), v['displayname'].lower(), v['id'] ]:
                    self.vm = v
                    return self._get_by_key(key, self.vm)
        self.module.fail_json(msg="Virtual machine '%s' not found" % vm)

@@ -238,7 +240,7 @@
        if zones:
            for z in zones['zone']:
-               if zone in [ z['name'], z['id'] ]:
+               if zone.lower() in [ z['name'].lower(), z['id'] ]:
                    self.zone = z
                    return self._get_by_key(key, self.zone)
        self.module.fail_json(msg="zone '%s' not found" % zone)
@@ -1,155 +0,0 @@
#
# (c) 2015 Peter Sprygada, <psprygada@ansible.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
"""
This module adds shared support for Arista EOS devices using eAPI over
HTTP/S transport. It is built on module_utils/urls.py which is required
for proper operation.
In order to use this module, include it as part of a custom
module as shown below.
** Note: The order of the import statements does matter. **
from ansible.module_utils.basic import *
from ansible.module_utils.urls import *
from ansible.module_utils.eapi import *
The eapi module provides the following common argument spec:
* host (str) - [Required] The IPv4 address or FQDN of the network device
* port (str) - Overrides the default port to use for the HTTP/S
connection. The default values are 80 for HTTP and
443 for HTTPS
* url_username (str) - [Required] The username to use to authenticate
the HTTP/S connection. Aliases: username
* url_password (str) - [Required] The password to use to authenticate
the HTTP/S connection. Aliases: password
* use_ssl (bool) - Specifies whether or not to use an encrypted (HTTPS)
connection or not. The default value is False.
* enable_mode (bool) - Specifies whether or not to enter `enable` mode
prior to executing the command list. The default value is True
* enable_password (str) - The password for entering `enable` mode
on the switch if configured.
In order to communicate with Arista EOS devices, the eAPI feature
must be enabled and configured on the device.
"""
def eapi_argument_spec(spec=None):
    """Creates an argument spec for working with eAPI
    """
    arg_spec = url_argument_spec()
    arg_spec.update(dict(
        host=dict(required=True),
        port=dict(),
        url_username=dict(required=True, aliases=['username']),
        url_password=dict(required=True, aliases=['password']),
        use_ssl=dict(default=True, type='bool'),
        enable_mode=dict(default=True, type='bool'),
        enable_password=dict()
    ))
    if spec:
        arg_spec.update(spec)
    return arg_spec

def eapi_url(module):
    """Construct a valid Arista eAPI URL
    """
    if module.params['use_ssl']:
        proto = 'https'
    else:
        proto = 'http'
    host = module.params['host']
    url = '{}://{}'.format(proto, host)
    if module.params['port']:
        url = '{}:{}'.format(url, module.params['port'])
    return '{}/command-api'.format(url)

def to_list(arg):
    """Convert the argument to a list object
    """
    if isinstance(arg, (list, tuple)):
        return list(arg)
    elif arg is not None:
        return [arg]
    else:
        return []

def eapi_body(commands, encoding, reqid=None):
    """Create a valid eAPI JSON-RPC request message
    """
    params = dict(version=1, cmds=to_list(commands), format=encoding)
    return dict(jsonrpc='2.0', id=reqid, method='runCmds', params=params)

def eapi_enable_mode(module):
    """Build commands for entering `enable` mode on the switch
    """
    if module.params['enable_mode']:
        passwd = module.params['enable_password']
        if passwd:
            return dict(cmd='enable', input=passwd)
        else:
            return 'enable'

def eapi_command(module, commands, encoding='json'):
    """Send an ordered list of commands to the device over eAPI
    """
    commands = to_list(commands)
    url = eapi_url(module)

    enable = eapi_enable_mode(module)
    if enable:
        commands.insert(0, enable)

    data = eapi_body(commands, encoding)
    data = module.jsonify(data)

    headers = {'Content-Type': 'application/json-rpc'}

    response, headers = fetch_url(module, url, data=data, headers=headers,
                                  method='POST')

    if headers['status'] != 200:
        module.fail_json(**headers)

    response = module.from_json(response.read())
    if 'error' in response:
        err = response['error']
        module.fail_json(msg='json-rpc error', **err)

    if enable:
        response['result'].pop(0)

    return response['result'], headers

def eapi_configure(module, commands):
    """Send configuration commands to the device over eAPI
    """
    commands.insert(0, 'configure')
    response, headers = eapi_command(module, commands)
    response.pop(0)
    return response, headers
@@ -41,21 +41,30 @@ except:
    HAS_LOOSE_VERSION = False

+class AnsibleAWSError(Exception):
+   pass

def boto3_conn(module, conn_type=None, resource=None, region=None, endpoint=None, **params):
+   profile = params.pop('profile_name', None)
+   params['aws_session_token'] = params.pop('security_token', None)
+   params['verify'] = params.pop('validate_certs', None)

    if conn_type not in ['both', 'resource', 'client']:
        module.fail_json(msg='There is an issue in the code of the module. You must specify either both, resource or client to the conn_type parameter in the boto3_conn function call')

    if conn_type == 'resource':
-       resource = boto3.session.Session().resource(resource, region_name=region, endpoint_url=endpoint, **params)
+       resource = boto3.session.Session(profile_name=profile).resource(resource, region_name=region, endpoint_url=endpoint, **params)
        return resource
    elif conn_type == 'client':
-       client = boto3.session.Session().client(resource, region_name=region, endpoint_url=endpoint, **params)
+       client = boto3.session.Session(profile_name=profile).client(resource, region_name=region, endpoint_url=endpoint, **params)
        return client
    else:
-       resource = boto3.session.Session().resource(resource, region_name=region, endpoint_url=endpoint, **params)
-       client = boto3.session.Session().client(resource, region_name=region, endpoint_url=endpoint, **params)
+       resource = boto3.session.Session(profile_name=profile).resource(resource, region_name=region, endpoint_url=endpoint, **params)
+       client = boto3.session.Session(profile_name=profile).client(resource, region_name=region, endpoint_url=endpoint, **params)
        return client, resource
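(Editorial aside, not part of the diff: callers now get profile/token/cert handling by passing the dict from get_aws_connection_info straight through. A hypothetical module snippet using the helpers in this file:)

    region, ec2_url, aws_connect_kwargs = get_aws_connection_info(module, boto3=True)
    client = boto3_conn(module, conn_type='client', resource='ec2',
                        region=region, endpoint=ec2_url, **aws_connect_kwargs)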
def aws_common_argument_spec():
    return dict(
        ec2_url=dict(),

@@ -158,13 +167,12 @@
        if profile_name:
            boto_params['profile_name'] = profile_name

    else:
        boto_params = dict(aws_access_key_id=access_key,
                           aws_secret_access_key=secret_key,
                           security_token=security_token)

    # profile_name only works as a key in boto >= 2.24
    # so only set profile_name if passed as an argument
    if profile_name:
        if not boto_supports_profile_name():

@@ -174,6 +182,10 @@
    if validate_certs and HAS_LOOSE_VERSION and LooseVersion(boto.Version) >= LooseVersion("2.6.0"):
        boto_params['validate_certs'] = validate_certs

+   for param, value in boto_params.items():
+       if isinstance(value, str):
+           boto_params[param] = unicode(value, 'utf-8', 'strict')

    return region, ec2_url, boto_params
@@ -196,9 +208,9 @@
    conn = aws_module.connect_to_region(region, **params)
    if not conn:
        if region not in [aws_module_region.name for aws_module_region in aws_module.regions()]:
-           raise StandardError("Region %s does not seem to be available for aws module %s. If the region definitely exists, you may need to upgrade boto or extend with endpoints_path" % (region, aws_module.__name__))
+           raise AnsibleAWSError("Region %s does not seem to be available for aws module %s. If the region definitely exists, you may need to upgrade boto or extend with endpoints_path" % (region, aws_module.__name__))
        else:
-           raise StandardError("Unknown problem connecting to region %s for aws module %s." % (region, aws_module.__name__))
+           raise AnsibleAWSError("Unknown problem connecting to region %s for aws module %s." % (region, aws_module.__name__))
    if params.get('profile_name'):
        conn = boto_fix_security_token_in_profile(conn, params['profile_name'])
    return conn

@@ -214,13 +226,13 @@
    if region:
        try:
            ec2 = connect_to_aws(boto.ec2, region, **boto_params)
-       except (boto.exception.NoAuthHandlerFound, StandardError), e:
+       except (boto.exception.NoAuthHandlerFound, AnsibleAWSError), e:
            module.fail_json(msg=str(e))
    # Otherwise, no region so we fallback to the old connection method
    elif ec2_url:
        try:
            ec2 = boto.connect_ec2_endpoint(ec2_url, **boto_params)
-       except (boto.exception.NoAuthHandlerFound, StandardError), e:
+       except (boto.exception.NoAuthHandlerFound, AnsibleAWSError), e:
            module.fail_json(msg=str(e))
    else:
        module.fail_json(msg="Either region or ec2_url must be specified")
@@ -0,0 +1,227 @@
#
# (c) 2015 Peter Sprygada, <psprygada@ansible.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
NET_PASSWD_RE = re.compile(r"[\r\n]?password: $", re.I)
NET_COMMON_ARGS = dict(
    host=dict(required=True),
    port=dict(type='int'),
    username=dict(required=True),
    password=dict(no_log=True),
    authorize=dict(default=False, type='bool'),
    auth_pass=dict(no_log=True),
    transport=dict(choices=['cli', 'eapi']),
    use_ssl=dict(default=True, type='bool'),
    provider=dict()
)

def to_list(val):
    if isinstance(val, (list, tuple)):
        return list(val)
    elif val is not None:
        return [val]
    else:
        return list()

class Eapi(object):

    def __init__(self, module):
        self.module = module

        # sets the module_utils/urls.py req parameters
        self.module.params['url_username'] = module.params['username']
        self.module.params['url_password'] = module.params['password']

        self.url = None
        self.enable = None

    def _get_body(self, commands, encoding, reqid=None):
        """Create a valid eAPI JSON-RPC request message
        """
        params = dict(version=1, cmds=commands, format=encoding)
        return dict(jsonrpc='2.0', id=reqid, method='runCmds', params=params)

    def connect(self):
        host = self.module.params['host']
        port = self.module.params['port']

        if self.module.params['use_ssl']:
            proto = 'https'
            if not port:
                port = 443
        else:
            proto = 'http'
            if not port:
                port = 80

        self.url = '%s://%s:%s/command-api' % (proto, host, port)

    def authorize(self):
        if self.module.params['auth_pass']:
            passwd = self.module.params['auth_pass']
            self.enable = dict(cmd='enable', input=passwd)
        else:
            self.enable = 'enable'

    def send(self, commands, encoding='json'):
        """Send commands to the device.
        """
        clist = to_list(commands)

        if self.enable is not None:
            clist.insert(0, self.enable)

        data = self._get_body(clist, encoding)
        data = self.module.jsonify(data)

        headers = {'Content-Type': 'application/json-rpc'}

        response, headers = fetch_url(self.module, self.url, data=data,
                                      headers=headers, method='POST')

        if headers['status'] != 200:
            self.module.fail_json(**headers)

        response = self.module.from_json(response.read())
        if 'error' in response:
            err = response['error']
            self.module.fail_json(msg='json-rpc error', **err)

        if self.enable:
            response['result'].pop(0)

        return response['result']

class Cli(object):

    def __init__(self, module):
        self.module = module
        self.shell = None

    def connect(self, **kwargs):
        host = self.module.params['host']
        port = self.module.params['port'] or 22

        username = self.module.params['username']
        password = self.module.params['password']

        self.shell = Shell()
        self.shell.open(host, port=port, username=username, password=password)

    def authorize(self):
        passwd = self.module.params['auth_pass']
        self.send(Command('enable', prompt=NET_PASSWD_RE, response=passwd))

    def send(self, commands, encoding='text'):
        return self.shell.send(commands)

class NetworkModule(AnsibleModule):

    def __init__(self, *args, **kwargs):
        super(NetworkModule, self).__init__(*args, **kwargs)
        self.connection = None
        self._config = None

    @property
    def config(self):
        if not self._config:
            self._config = self.get_config()
        return self._config

    def _load_params(self):
        params = super(NetworkModule, self)._load_params()
        provider = params.get('provider') or dict()
        for key, value in provider.items():
            if key in NET_COMMON_ARGS.keys():
                params[key] = value
        return params

    def connect(self):
        if self.params['transport'] == 'eapi':
            self.connection = Eapi(self)
        else:
            self.connection = Cli(self)

        try:
            self.connection.connect()
            self.execute('terminal length 0')

            if self.params['authorize']:
                self.connection.authorize()
        except Exception, exc:
            self.fail_json(msg=exc.message)

    def configure(self, commands):
        commands = to_list(commands)
        commands.insert(0, 'configure terminal')
        responses = self.execute(commands)
        responses.pop(0)
        return responses

    def config_replace(self, commands):
        if self.params['transport'] == 'cli':
            self.fail_json(msg='config replace only supported over eapi')

        cmd = 'configure replace terminal:'
        commands = '\n'.join(to_list(commands))
        command = dict(cmd=cmd, input=commands)
        self.execute(command)

    def execute(self, commands, **kwargs):
        try:
            return self.connection.send(commands, **kwargs)
        except Exception, exc:
            self.fail_json(msg=exc.message, commands=commands)

    def disconnect(self):
        self.connection.close()

    def parse_config(self, cfg):
        return parse(cfg, indent=3)

    def get_config(self):
        cmd = 'show running-config'
        if self.params.get('include_defaults'):
            cmd += ' all'
        if self.params['transport'] == 'cli':
            return self.execute(cmd)[0]
        else:
            resp = self.execute(cmd, encoding='text')
            return resp[0]

def get_module(**kwargs):
    """Return instance of NetworkModule
    """
    argument_spec = NET_COMMON_ARGS.copy()
    if kwargs.get('argument_spec'):
        argument_spec.update(kwargs['argument_spec'])
    kwargs['argument_spec'] = argument_spec

    module = NetworkModule(**kwargs)

    # HAS_PARAMIKO is set by module_utils/shell.py
    if module.params['transport'] == 'cli' and not HAS_PARAMIKO:
        module.fail_json(msg='paramiko is required but does not appear to be installed')

    module.connect()
    return module
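(Editorial aside, not part of the diff: the overridden _load_params flattens a provider dict into top-level args, so playbooks can group connection details. The effect in isolation:)

    provider = {'host': 'eos01', 'username': 'admin', 'not_common': 'ignored'}
    params = {'provider': provider}
    for key, value in provider.items():
        if key in NET_COMMON_ARGS:   # only recognized connection args are promoted
            params[key] = value
    print(params['host'])            # 'eos01'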
@@ -51,19 +51,35 @@ def f5_argument_spec():
def f5_parse_arguments(module):
    if not bigsuds_found:
        module.fail_json(msg="the python bigsuds module is required")
-   if not module.params['validate_certs']:
-       disable_ssl_cert_validation()
+
+   if module.params['validate_certs']:
+       import ssl
+       if not hasattr(ssl, 'SSLContext'):
+           module.fail_json(msg='bigsuds does not support verifying certificates with python < 2.7.9. Either update python or set validate_certs=False on the task')
+
    return (module.params['server'],module.params['user'],module.params['password'],module.params['state'],module.params['partition'],module.params['validate_certs'])

-def bigip_api(bigip, user, password):
-   api = bigsuds.BIGIP(hostname=bigip, username=user, password=password)
-   return api
+def bigip_api(bigip, user, password, validate_certs):
+   try:
+       # bigsuds >= 1.0.3
+       api = bigsuds.BIGIP(hostname=bigip, username=user, password=password, verify=validate_certs)
+   except TypeError:
+       # bigsuds < 1.0.3, no verify param
+       if validate_certs:
+           # Note: verified we have SSLContext when we parsed params
+           api = bigsuds.BIGIP(hostname=bigip, username=user, password=password)
+       else:
+           import ssl
+           if hasattr(ssl, 'SSLContext'):
+               # Really, you should never do this. It disables certificate
+               # verification *globally*. But since older bigip libraries
+               # don't give us a way to toggle verification we need to
+               # disable it at the global level.
+               # From https://www.python.org/dev/peps/pep-0476/#id29
+               ssl._create_default_https_context = ssl._create_unverified_context
+           api = bigsuds.BIGIP(hostname=bigip, username=user, password=password)
+
+   return api

-def disable_ssl_cert_validation():
-   # You probably only want to do this for testing and never in production.
-   # From https://www.python.org/dev/peps/pep-0476/#id29
-   import ssl
-   ssl._create_default_https_context = ssl._create_unverified_context

# Fully Qualified name (with the partition)
def fq_name(partition,name):
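(Editorial aside, not part of the diff: callers must now thread the validate_certs flag through, e.g.:)

    server, user, password, state, partition, validate_certs = f5_parse_arguments(module)
    api = bigip_api(server, user, password, validate_certs)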
Some files were not shown because too many files have changed in this diff.