Merge branch 'devel' into unevaluated-vars

This commit is contained in:
Lorin Hochstein 2013-06-03 10:09:43 -04:00
commit c947802805
53 changed files with 790 additions and 472 deletions

View file

@ -23,6 +23,12 @@ Core Features:
* added a log file for ansible/ansible-playbook, set 'log_path' in the configuration file or ANSIBLE_LOG_PATH in environment
* debug mode always outputs debug in playbooks, without needing to specify -v
* external inventory script added for Spacewalk / Red Hat Satellite servers
* It is now possible to feed JSON structures to --extra-vars. Pass in a JSON dictionary/hash to feed in complex data.
* group_vars/ and host_vars/ directories can now be kept alongside the playbook as well as inventory (or both!)
* more filters: ability to say {{ foo|success }} and {{ foo|failed }} and when: foo|success and when: foo|failed
* more filters: {{ path|basename }} and {{ path|dirname }}
* lookup plugins now use the basedir of the file they have been included from, avoiding the need for ../../../ in places and
making it easier to reorganize things.
Modules added:
@ -62,7 +68,6 @@ Modules removed
* vagrant -- can't be compatible with both versions at once, just run things though the vagrant provisioner in vagrant core
Bugfixes and Misc Changes:
* service module happier if only enabled=yes|no specified and no state
@ -124,6 +129,7 @@ the variable is still registered for the host, with the attribute skipped: True.
* fix for some unicode encoding errors in outputting some data in verbose mode
* improved FreeBSD, NetBSD and Solaris facts
* debug module always outputs data without having to specify -v
* fix for sysctl module creating new keys (must specify checks=none)
1.1 "Mean Street" -- 4/2/2013

View file

@ -98,7 +98,10 @@ def main(args):
if options.sudo_user or options.ask_sudo_pass:
options.sudo = True
options.sudo_user = options.sudo_user or C.DEFAULT_SUDO_USER
extra_vars = utils.parse_kv(options.extra_vars)
if options.extra_vars and options.extra_vars[0] in '[{':
extra_vars = utils.json_loads(options.extra_vars)
else:
extra_vars = utils.parse_kv(options.extra_vars)
only_tags = options.tags.split(",")
for playbook in args:
@ -110,6 +113,9 @@ def main(args):
# run all playbooks specified on the command line
for playbook in args:
# let inventory know which playbooks we are using so it can know the basedirs
inventory.set_playbook_basedir(os.path.dirname(playbook))
stats = callbacks.AggregateStats()
playbook_cb = callbacks.PlaybookCallbacks(verbose=utils.VERBOSITY)
if options.step:
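The --extra-vars handling above boils down to a simple dispatch: a value that starts with '{' or '[' is parsed as JSON, anything else goes through key=value parsing. A minimal standalone sketch, with a simplified stand-in for utils.parse_kv (names are illustrative only):

import json

def parse_extra_vars(raw):
    # mirror of the branch above: a leading '{' or '[' means JSON,
    # otherwise treat the string as space-separated key=value pairs
    if raw and raw[0] in '[{':
        return json.loads(raw)
    # crude stand-in for utils.parse_kv, illustration only
    return dict(pair.split('=', 1) for pair in raw.split())

print(parse_extra_vars('hosts=vipers user=starbuck'))
print(parse_extra_vars('{"pacman": "mrs", "ghosts": ["inky", "pinky"]}'))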

View file

@ -53,7 +53,8 @@ environment variable.
*-e* 'VARS', *--extra-vars=*'VARS'::
Extra variables to inject into a playbook, in key=value key=value format.
Extra variables to inject into a playbook, in key=value key=value format or
as quoted JSON (hashes and arrays).
*-f* 'NUM', *--forks=*'NUM'::

View file

@ -75,7 +75,7 @@ Host Inventory
Once your nodes are spun up, you'll probably want to talk to them again. The best way to handle this is to use the ec2 inventory plugin.
Even for larger environments, you might have nodes spun up from Cloud Formations or other tooling. You don't have to use Ansible to spin up guests. Once these are created and you wish to configure them, the EC2 API can be used to return system grouping with the help of the EC2 inventory script. This script can be used to group resources by their security group or tags. Tagging is highly recommended in EC2 and can provide an easy way to sort between host groups and roles. The inventory script is documented `here <http://ansible.cc/docs/api.html#external-inventory-scripts>`_.
Even for larger environments, you might have nodes spun up from Cloud Formations or other tooling. You don't have to use Ansible to spin up guests. Once these are created and you wish to configure them, the EC2 API can be used to return system grouping with the help of the EC2 inventory script. This script can be used to group resources by their security group or tags. Tagging is highly recommended in EC2 and can provide an easy way to sort between host groups and roles. The inventory script is documented `in the API chapter <http://ansible.cc/docs/api.html#external-inventory-scripts>`_.
You may wish to schedule a regular refresh of the inventory cache to accommodate frequent changes in resources:

View file

@ -24,7 +24,8 @@ though a few may remain outside of core depending on use cases and implementatio
- `additional provisioning-related modules <https://github.com/ansible-provisioning>`_ - jhoekx and dagwieers
- `dynamic dns updates <https://github.com/jpmens/ansible-m-dnsupdate>`_ - jp\_mens
All python modules should use the common "AnsibleModule" class to dramatically reduce the amount of boilerplate code required.
All python modules (especially all submitted to core) should use the common "AnsibleModule" class to dramatically reduce the amount of boilerplate code required.
Not all modules above may take advantage of this feature. See the official documentation for more details.
Selected Playbooks
@ -32,9 +33,19 @@ Selected Playbooks
`Playbooks <http://ansible.cc/docs/playbooks.html>`_ are Ansible's
configuration management language. It should be easy to write your own
from scratch for most applications, but it's always helpful to look at
what others have done for reference.
from scratch for most applications (we keep the language simple for EXACTLY that reason), but it can
be helpful to look at what others have done for reference and see what is possible.
The ansible-examples repo on github contains some examples of best-practices Ansible content deploying some
full stack workloads:
- `Ansible-Examples <http://github.com/ansible/ansible-examples>`_
And here are some other community-developed playbooks. Feel free to submit a pull request to the docs
to add your own.
- `edX Online <https://github.com/edx/configuration>`_ - `edX Online <http://edx.org>`_
- `Fedora Infrastructure <http://infrastructure.fedoraproject.org/cgit/ansible.git/tree/>`_ - `Fedora <http://fedoraproject.org>`_
- `Hadoop <https://github.com/jkleint/ansible-hadoop>`_ - jkleint
- `LAMP <https://github.com/fourkitchens/server-playbooks>`_ - `Four Kitchens <http://fourkitchens.com>`_
- `LEMP <https://github.com/francisbesset/ansible-playbooks>`_ - francisbesset
@ -42,7 +53,6 @@ what others have done for reference.
- `Nginx <http://www.capsunlock.net/2012/04/ansible-nginx-playbook.html>`_ - cocoy
- `OpenStack <http://github.com/lorin/openstack-ansible>`_ - lorin
- `Systems Configuration <https://github.com/cegeddin/ansible-contrib>`_ - cegeddin
- `Fedora Infrastructure <http://infrastructure.fedoraproject.org/cgit/ansible.git/tree/>`_ - `Fedora <http://fedoraproject.org>`_
Callbacks and Plugins
`````````````````````
@ -53,6 +63,7 @@ storage. Talk to Cobbler and EC2, tweak the way things are logged, or
even add sound effects.
- `Ansible-Plugins <https://github.com/ansible/ansible/tree/devel/plugins>`_
- `Various modules, plugins, and scripts <https://github.com/ginsys/ansible-plugins>`_ - sergevanginderachter
Scripts And Misc
````````````````

View file

@ -226,6 +226,10 @@ the 'raleigh' group might look like::
It is OK if these files do not exist; this is an optional feature.
Tip: In Ansible 1.2 or later the group_vars/ and host_vars/ directories can exist in either
the playbook directory OR the inventory directory. If both paths exist, variables in the playbook
directory will be loaded second and therefore override those loaded from the inventory directory.
Tip: Keeping your inventory file and variables in a git repo (or other version control)
is an excellent way to track changes to your inventory and host variables.

View file

@ -54,7 +54,7 @@ your webservers in "webservers.yml" and all your database servers in
"dbservers.yml". You can create a "site.yml" that would reconfigure
all of your systems like this::
----
---
- include: playbooks/webservers.yml
- include: playbooks/dbservers.yml
@ -250,7 +250,7 @@ This is useful, for, among other things, setting the hosts group or the user for
Example::
-----
---
- user: '{{ user }}'
hosts: '{{ hosts }}'
tasks:
@ -258,6 +258,13 @@ Example::
ansible-playbook release.yml --extra-vars "hosts=vipers user=starbuck"
As of Ansible 1.2, you can also pass in extra vars as quoted JSON, like so::
--extra-vars "{'pacman':'mrs','ghosts':['inky','pinky','clyde','sue']}"
The key=value form is obviously simpler, but it's there if you need it!
Conditional Execution
`````````````````````
@ -277,6 +284,20 @@ Don't panic -- it's actually pretty simple::
action: command /sbin/shutdown -t now
when: ansible_os_family == "Debian"
A number of Jinja2 "filters" can also be used in when statements, some of which are unique
and provided by ansible. Suppose we want to ignore the error of one statement and then
decide to do something conditionally based on success or failure::
tasks:
- action: command /bin/false
register: result
ignore_errors: True
- action: command /bin/something
when: result|failed
- action: command /bin/something_else
when: result|success
As a reminder, to see what derived variables are available, you can do::
ansible hostname.example.com -m setup
@ -432,7 +453,7 @@ can accept more than one parameter.
``with_fileglob`` matches all files in a single directory, non-recursively, that match a pattern. It can
be used like this::
----
---
- hosts: all
tasks:
@ -534,7 +555,7 @@ This length can be changed by passing an extra parameter::
# create a mysql user with a random password:
- mysql_user: name={{ client }}
password="{{ lookup('password', 'credentials/' + client + '/' + tier + '/' + role + '/mysqlpassword') }}"
password="{{ lookup('password', 'credentials/' + client + '/' + tier + '/' + role + '/mysqlpassword length=15') }}"
priv={{ client }}_{{ tier }}_{{ role }}.*:ALL
(...)
@ -592,7 +613,7 @@ The environment can also be stored in a variable, and accessed like so::
While just proxy settings were shown above, any number of settings can be supplied. The most logical place
to define an environment hash might be a group_vars file, like so::
----
---
# file: group_vars/boston
ntp_server: ntp.bos.example.com

View file

@ -35,7 +35,11 @@
<td>@{ k }@</td>
<td>{% if v.get('required', False) %}yes{% else %}no{% endif %}</td>
<td>{% if v['default'] %}@{ v['default'] }@{% endif %}</td>
{% if v.get('type', 'not_bool') == 'bool' %}
<td><ul><li>yes</li><li>no</li></ul></td>
{% else %}
<td><ul>{% for choice in v.get('choices',[]) -%}<li>@{ choice }@</li>{% endfor -%}</ul></td>
{% endif %}
<td>{% for desc in v.description -%}@{ desc | html_ify }@{% endfor -%}{% if v['version_added'] %} (added in Ansible @{v['version_added']}@){% endif %}</td>
</tr>
{% endfor %}

View file

@ -204,27 +204,37 @@ def regular_generic_msg(hostname, result, oneline, caption):
return "%s | %s >> %s\n" % (hostname, caption, utils.jsonify(result))
def banner(msg):
def banner_cowsay(msg):
if msg.find(": [") != -1:
msg = msg.replace("[","")
if msg.endswith("]"):
msg = msg[:-1]
runcmd = [cowsay,"-W", "60"]
if noncow:
runcmd.append('-f')
runcmd.append(noncow)
runcmd.append(msg)
cmd = subprocess.Popen(runcmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = cmd.communicate()
return "%s\n" % out
def banner_normal(msg):
width = 78 - len(msg)
if width < 3:
width = 3
filler = "*" * width
return "\n%s %s " % (msg, filler)
def banner(msg):
if cowsay:
if msg.find(": [") != -1:
msg = msg.replace("[","")
if msg.endswith("]"):
msg = msg[:-1]
runcmd = [cowsay,"-W", "60"]
if noncow:
runcmd.append('-f')
runcmd.append(noncow)
runcmd.append(msg)
cmd = subprocess.Popen(runcmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = cmd.communicate()
return "%s\n" % out
else:
width = 78 - len(msg)
if width < 3:
width = 3
filler = "*" * width
return "\n%s %s " % (msg, filler)
try:
return banner_cowsay(msg)
except OSError:
# somebody cleverly deleted cowsay or something during the PB run. heh.
return banner_normal(msg)
return banner_normal(msg)
def command_generic_msg(hostname, result, oneline, caption):
''' output the result of a command run '''
@ -552,7 +562,8 @@ class PlaybookCallbacks(object):
if hasattr(self, 'start_at'): # we still have start_at so skip the task
self.skip_task = True
elif hasattr(self, 'step') and self.step:
resp = raw_input('Perform task: %s (y/n/c): ' % name)
msg = ('Perform task: %s (y/n/c): ' % name).encode(sys.stdout.encoding)
resp = raw_input(msg)
if resp.lower() in ['y','yes']:
self.skip_task = False
display(banner(msg))

View file

@ -38,7 +38,7 @@ class Inventory(object):
__slots__ = [ 'host_list', 'groups', '_restriction', '_also_restriction', '_subset',
'parser', '_vars_per_host', '_vars_per_group', '_hosts_cache', '_groups_list',
'_vars_plugins']
'_vars_plugins', '_playbook_basedir']
def __init__(self, host_list=C.DEFAULT_HOST_LIST):
@ -54,6 +54,9 @@ class Inventory(object):
self._hosts_cache = {}
self._groups_list = {}
# to be set by calling set_playbook_basedir by ansible-playbook
self._playbook_basedir = None
# the inventory object holds a list of groups
self.groups = []
@ -371,4 +374,21 @@ class Inventory(object):
""" if inventory came from a file, what's the directory? """
if not self.is_file():
return None
return os.path.dirname(self.host_list)
dname = os.path.dirname(self.host_list)
if dname is None or dname == '':
cwd = os.getcwd()
return cwd
return dname
def playbook_basedir(self):
""" returns the directory of the current playbook """
return self._playbook_basedir
def set_playbook_basedir(self, dir):
"""
sets the base directory of the playbook so inventory plugins can use it to find
variable files and other things.
"""
self._playbook_basedir = dir
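A trimmed-down sketch of how the new playbook basedir plumbing is meant to be used: /usr/bin/ansible-playbook records each playbook's directory on the inventory, and vars plugins read it back via playbook_basedir(). The class below is an illustration, not the full Inventory object, and the path is made up:

import os

class Inventory(object):
    # reduced illustration of the two new methods above
    def __init__(self):
        self._playbook_basedir = None

    def set_playbook_basedir(self, dir):
        self._playbook_basedir = dir

    def playbook_basedir(self):
        return self._playbook_basedir

inv = Inventory()
inv.set_playbook_basedir(os.path.dirname('/srv/plays/site.yml'))
print(inv.playbook_basedir())   # /srv/plays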

View file

@ -65,7 +65,7 @@ class InventoryParser(object):
for line in self.lines:
if line.startswith("["):
active_group_name = line.split("#")[0].replace("[","").replace("]","").strip()
active_group_name = line.split(" #")[0].replace("[","").replace("]","").strip()
if line.find(":vars") != -1 or line.find(":children") != -1:
active_group_name = active_group_name.rsplit(":", 1)[0]
if active_group_name not in self.groups:
@ -78,7 +78,7 @@ class InventoryParser(object):
elif line.startswith("#") or line == '':
pass
elif active_group_name:
tokens = shlex.split(line.split("#")[0])
tokens = shlex.split(line.split(" #")[0])
if len(tokens) == 0:
continue
hostname = tokens[0]
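Splitting on " #" instead of "#" means an inline comment must be preceded by a space, so values that contain a literal '#' are no longer truncated. A quick hypothetical illustration (the hostname and password are made up):

# an inventory line whose value contains a literal '#' but no ' #'
line = "web1.example.com ansible_ssh_pass=se#cret # primary web host"

old_style = line.split("#")[0]    # truncates the value at 'se'
new_style = line.split(" #")[0]   # keeps the value, still drops the comment

print(old_style)
print(new_style)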

View file

@ -76,8 +76,12 @@ class InventoryScript(object):
if 'vars' in data:
for k, v in data['vars'].iteritems():
group.set_variable(k, v)
all.add_child_group(group)
if group.name == all.name:
all.set_variable(k, v)
else:
group.set_variable(k, v)
if group.name != all.name:
all.add_child_group(group)
# Separate loop to ensure all groups are defined
for (group_name, data) in self.raw.items():

View file

@ -1,4 +1,4 @@
# (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
# (c) 2012-2013, Michael DeHaan <michael.dehaan@gmail.com>
#
# This file is part of Ansible
#
@ -23,45 +23,90 @@ import ansible.constants as C
class VarsModule(object):
"""
Loads variables from group_vars/<groupname> and host_vars/<hostname> in directories parallel
to the inventory base directory or in the same directory as the playbook. Variables in the playbook
dir will win over the inventory dir if files are in both.
"""
def __init__(self, inventory):
""" constructor """
self.inventory = inventory
def run(self, host):
# return the inventory variables for the host
""" main body of the plugin, does actual loading """
inventory = self.inventory
#hostrec = inventory.get_host(host)
self.pb_basedir = inventory.playbook_basedir()
# sort groups by depth so deepest groups can override the less deep ones
groupz = sorted(inventory.groups_for_host(host.name), key=lambda g: g.depth)
groups = [ g.name for g in groupz ]
basedir = inventory.basedir()
if basedir is None:
# could happen when inventory is passed in via the API
return
inventory_basedir = inventory.basedir()
results = {}
scan_pass = 0
# load vars in inventory_dir/group_vars/name_of_group
for x in groups:
p = os.path.join(basedir, "group_vars/%s" % x)
# look in both the inventory base directory and the playbook base directory
for basedir in [ inventory_basedir, self.pb_basedir ]:
# this can happen from particular API usages, particularly if not run
# from /usr/bin/ansible-playbook
if basedir is None:
continue
scan_pass = scan_pass + 1
# it's not an error if the directory does not exist, keep moving
if not os.path.exists(basedir):
continue
# save work of second scan if the directories are the same
if inventory_basedir == self.pb_basedir and scan_pass != 1:
continue
# load vars in dir/group_vars/name_of_group
for x in groups:
p = os.path.join(basedir, "group_vars/%s" % x)
# the file can be <groupname> or end in .yml or .yaml
# currently ALL will be loaded, even if more than one
paths = [p, '.'.join([p, 'yml']), '.'.join([p, 'yaml'])]
for path in paths:
if os.path.exists(path) and not os.path.isdir(path):
data = utils.parse_yaml_from_file(path)
if type(data) != dict:
raise errors.AnsibleError("%s must be stored as a dictionary/hash" % path)
# combine_vars overrides by default, but can be configured in settings
# to do a hash merge instead
results = utils.combine_vars(results, data)
# group vars have been loaded
# load vars in dir/host_vars/name_of_host
# these have greater precedence than group variables
p = os.path.join(basedir, "host_vars/%s" % host.name)
# again allow the file to be named filename or end in .yml or .yaml
paths = [p, '.'.join([p, 'yml']), '.'.join([p, 'yaml'])]
for path in paths:
if os.path.exists(path) and not os.path.isdir(path):
data = utils.parse_yaml_from_file(path)
if type(data) != dict:
raise errors.AnsibleError("%s must be stored as a dictionary/hash" % path)
results = utils.combine_vars(results, data)
# load vars in inventory_dir/hosts_vars/name_of_host
p = os.path.join(basedir, "host_vars/%s" % host.name)
paths = [p, '.'.join([p, 'yml']), '.'.join([p, 'yaml'])]
for path in paths:
if os.path.exists(path) and not os.path.isdir(path):
data = utils.parse_yaml_from_file(path)
if type(data) != dict:
raise errors.AnsibleError("%s must be stored as a dictionary/hash" % path)
results = utils.combine_vars(results, data)
# all done, results is a dictionary of variables for this particular host.
return results
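Because the loop above scans the inventory base directory first and the playbook base directory second, the later results win when utils.combine_vars is left in its default (non-hash-merge) mode. A minimal sketch of that precedence, assuming combine_vars behaves like a plain dictionary update by default; the variable names and values are illustrative:

def combine_vars(a, b):
    # stand-in for utils.combine_vars in its default 'replace' mode
    merged = dict(a)
    merged.update(b)      # values from the later source win
    return merged

inventory_vars = {'ntp_server': 'ntp.inv.example.com', 'tz': 'UTC'}
playbook_vars  = {'ntp_server': 'ntp.pb.example.com'}

# inventory dir is scanned first, playbook dir second -> playbook wins
print(combine_vars(inventory_vars, playbook_vars))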

View file

@ -395,9 +395,16 @@ class Runner(object):
if items_plugin is not None and items_plugin in utils.plugins.lookup_loader:
basedir = self.basedir
if '_original_file' in inject:
basedir = os.path.dirname(inject['_original_file'])
filesdir = os.path.join(basedir, '..', 'files')
if os.path.exists(filesdir):
basedir = filesdir
items_terms = self.module_vars.get('items_lookup_terms', '')
items_terms = template.template(self.basedir, items_terms, inject)
items = utils.plugins.lookup_loader.get(items_plugin, runner=self, basedir=self.basedir).run(items_terms, inject=inject)
items_terms = template.template(basedir, items_terms, inject)
items = utils.plugins.lookup_loader.get(items_plugin, runner=self, basedir=basedir).run(items_terms, inject=inject)
if type(items) != list:
raise errors.AnsibleError("lookup plugins have to return a list: %r" % items)
@ -436,7 +443,6 @@ class Runner(object):
complex_args = utils.safe_eval(complex_args)
if type(complex_args) != dict:
raise errors.AnsibleError("args must be a dictionary, received %s" % complex_args)
result = self._executor_internal_inner(
host,
self.module_name,
@ -481,9 +487,8 @@ class Runner(object):
new_args = new_args + "%s='%s' " % (k,v)
module_args = new_args
# module_name may be dynamic (but cannot contain {{ ansible_ssh_user }})
module_name = template.template(self.basedir, module_name, inject)
module_args = template.template(self.basedir, module_args, inject)
complex_args = template.template(self.basedir, complex_args, inject)
if module_name in utils.plugins.action_loader:
if self.background != 0:
@ -539,9 +544,13 @@ class Runner(object):
actual_host = delegate_to
actual_port = port
# user/pass may still contain variables at this stage
actual_user = template.template(self.basedir, actual_user, inject)
actual_pass = template.template(self.basedir, actual_pass, inject)
# make actual_user available as __magic__ ansible_ssh_user variable
inject['ansible_ssh_user'] = actual_user
try:
if actual_port is not None:
actual_port = int(actual_port)
@ -564,6 +573,10 @@ class Runner(object):
if getattr(handler, 'NEEDS_TMPPATH', True):
tmp = self._make_tmp_path(conn)
# render module_args and complex_args templates
module_args = template.template(self.basedir, module_args, inject)
complex_args = template.template(self.basedir, complex_args, inject)
result = handler.run(conn, tmp, module_name, module_args, inject, complex_args)
conn.close()
@ -845,7 +858,7 @@ class Runner(object):
print ie.errno
if ie.errno == 32:
# broken pipe from Ctrl+C
raise errors.AnsibleError("interupted")
raise errors.AnsibleError("interrupted")
raise
else:
results = [ self._executor(h) for h in hosts ]

View file

@ -19,6 +19,7 @@ import base64
import json
import os.path
import yaml
from ansible import errors
def to_nice_yaml(*a, **kw):
'''Make verbose, human readable yaml'''
@ -28,6 +29,20 @@ def to_nice_json(*a, **kw):
'''Make verbose, human readable JSON'''
return json.dumps(*a, indent=4, sort_keys=True, **kw)
def failed(*a, **kw):
item = a[0]
if type(item) != dict:
raise errors.AnsibleError("|failed expects a dictionary")
rc = item.get('rc',0)
failed = item.get('failed',False)
if rc != 0 or failed:
return True
else:
return False
def success(*a, **kw):
return not failed(*a, **kw)
class FilterModule(object):
''' Ansible core jinja2 filters '''
@ -50,5 +65,10 @@ class FilterModule(object):
# path
'basename': os.path.basename,
'dirname': os.path.dirname,
# failure testing
'failed' : failed,
'success' : success,
}
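As a quick standalone check of what the new |failed and |success filters report, here are the same tests applied to a couple of fabricated registered results (the dictionaries below are illustrative, not real module output):

def failed(item):
    # same test as the filter above: non-dict input is an error
    # (AnsibleError in the real filter), and a non-zero rc or an
    # explicit failed=True counts as failure
    if not isinstance(item, dict):
        raise TypeError("|failed expects a dictionary")
    return item.get('rc', 0) != 0 or item.get('failed', False)

def success(item):
    return not failed(item)

print(failed({'rc': 2, 'stdout': ''}))        # True
print(success({'rc': 0, 'changed': True}))    # True
print(failed({'rc': 0, 'failed': True}))      # True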

View file

@ -25,6 +25,7 @@ class LookupModule(object):
self.basedir = basedir
def run(self, terms, inject=None, **kwargs):
terms = utils.listify_lookup_plugin_terms(terms, self.basedir, inject)
ret = []

View file

@ -81,6 +81,7 @@ def lookup(name, *args, **kwargs):
from ansible import utils
instance = utils.plugins.lookup_loader.get(name.lower(), basedir=kwargs.get('basedir',None))
vars = kwargs.get('vars', None)
if instance is not None:
ran = instance.run(*args, inject=vars, **kwargs)
return ",".join(ran)
@ -470,6 +471,12 @@ def template_from_string(basedir, data, vars):
environment.filters.update(_get_filters())
environment.template_class = J2Template
if '_original_file' in vars:
basedir = os.path.dirname(vars['_original_file'])
filesdir = os.path.abspath(os.path.join(basedir, '..', 'files'))
if os.path.exists(filesdir):
basedir = filesdir
# TODO: may need some way of using lookup plugins here seeing we aren't calling
# the legacy engine, lookup() as a function, perhaps?

View file

@ -181,7 +181,7 @@ local_action:
wait: yes
wait_timeout: 500
count: 5
instance_tags: '{"db":"postgres"}' monitoring=true'
instance_tags: '{"db":"postgres"}' monitoring=yes'
# VPC example
local_action:

View file

@ -32,12 +32,12 @@ options:
description:
- Password of login user
required: true
default: True
default: 'yes'
login_tenant_name:
description:
- The tenant name of the login user
required: true
default: True
default: 'yes'
auth_url:
description:
- The keystone url for authentication
@ -87,7 +87,7 @@ options:
description:
- Whether the image can be accessed publicly
required: false
default: yes
default: 'yes'
copy_from:
description:
- A URL from where the image can be downloaded, mutually exclusive with the file parameter

View file

@ -19,7 +19,7 @@ options:
description:
- Password of login user
required: false
default: True
default: 'yes'
token:
description:
- The token to be uses in case the password is not specified

View file

@ -38,12 +38,12 @@ options:
description:
- Password of login user
required: true
default: True
default: 'yes'
login_tenant_name:
description:
- The tenant name of the login user
required: true
default: True
default: 'yes'
auth_url:
description:
- The keystone url for authentication
@ -126,7 +126,7 @@ requirements: ["novaclient"]
def _delete_server(module, nova):
name = None
try:
server = nova.servers.list({'name': module.params['name']}).pop()
server = nova.servers.list(True, {'name': module.params['name']}).pop()
nova.servers.delete(server)
except Exception as e:
module.fail_json( msg = "Error in deleting vm: %s" % e.message)
@ -134,7 +134,7 @@ def _delete_server(module, nova):
module.exit_json(changed = True, result = "deleted")
expire = time.time() + module.params['wait_for']
while time.time() < expire:
name = nova.servers.list({'name': module.params['name']})
name = nova.servers.list(True, {'name': module.params['name']})
if not name:
module.exit_json(changed = True, result = "deleted")
time.sleep(5)
@ -182,7 +182,7 @@ def _create_server(module, nova):
def _get_server_state(module, nova):
server = None
try:
servers = nova.servers.list({'name': module.params['name']})
servers = nova.servers.list(True, {'name': module.params['name']})
if servers:
server = servers.pop()
except Exception as e:

View file

@ -38,12 +38,12 @@ options:
description:
- Password of login user
required: true
default: True
default: 'yes'
login_tenant_name:
description:
- The tenant name of the login user
required: true
default: True
default: 'yes'
auth_url:
description:
- The keystone url for authentication

View file

@ -40,12 +40,12 @@ options:
description:
- Password of login user
required: true
default: True
default: 'yes'
login_tenant_name:
description:
- The tenant name of the login user
required: true
default: True
default: 'yes'
auth_url:
description:
- The keystone url for authentication
@ -160,12 +160,12 @@ def _create_floating_ip(quantum, module, port_id, net_id):
try:
result = quantum.create_floatingip({'floatingip': kwargs})
except Exception as e:
module.fail_json( msg = "There was an error in updating the floating ip address: %s" % e.message)
module.exit_json( changed = True, result = result, public_ip=result['floatingip']['floating_ip_address'] )
module.fail_json(msg="There was an error in updating the floating ip address: %s" % e.message)
module.exit_json(changed=True, result=result, public_ip=result['floatingip']['floating_ip_address'])
def _get_net_id(quantum, module):
kwargs = {
'name': module.params['network_name'],
'name': module.params['network_name'],
}
try:
networks = quantum.list_networks(**kwargs)
@ -177,44 +177,47 @@ def _get_net_id(quantum, module):
def _update_floating_ip(quantum, module, port_id, floating_ip_id):
kwargs = {
'port_id': port_id
'port_id': port_id
}
try:
result = quantum.update_floatingip(floating_ip_id, {'floatingip': kwargs})
result = quantum.update_floatingip(floating_ip_id, {'floatingip': kwargs})
except Exception as e:
module.fail_json( msg = "There was an error in updating the floating ip address: %s" % e.message)
module.exit_json( changed = True, result = result)
module.fail_json(msg="There was an error in updating the floating ip address: %s" % e.message)
module.exit_json(changed=True, result=result)
def main():
module = AnsibleModule(
argument_spec = dict(
login_username = dict(default='admin'),
login_password = dict(required=True),
login_tenant_name = dict(required='True'),
auth_url = dict(default='http://127.0.0.1:35357/v2.0/'),
region_name = dict(default=None),
network_name = dict(required=True),
instance_name = dict(required=True),
state = dict(default='present', choices=['absent', 'present'])
argument_spec = dict(
login_username = dict(default='admin'),
login_password = dict(required=True),
login_tenant_name = dict(required='True'),
auth_url = dict(default='http://127.0.0.1:35357/v2.0/'),
region_name = dict(default=None),
network_name = dict(required=True),
instance_name = dict(required=True),
state = dict(default='present', choices=['absent', 'present'])
),
)
try:
nova = nova_client.Client(module.params['login_username'], module.params['login_password'],
module.params['login_tenant_name'], module.params['auth_url'], service_type='compute')
module.params['login_tenant_name'], module.params['auth_url'], service_type='compute')
quantum = _get_quantum_client(module, module.params)
except Exception as e:
module.fail_json( msg = " Error in authenticating to nova: %s" % e.message)
module.fail_json(msg="Error in authenticating to nova: %s" % e.message)
server_info, server_obj = _get_server_state(module, nova)
if not server_info:
module.fail_json( msg = " The instance name provided cannot be found")
module.fail_json(msg="The instance name provided cannot be found")
fixed_ip, port_id = _get_port_info(quantum, module, server_info['id'])
if not port_id:
module.fail_json(msg = "Cannot find a port for this instance, maybe fixed ip is not assigned")
module.fail_json(msg="Cannot find a port for this instance, maybe fixed ip is not assigned")
floating_id, floating_ip = _get_floating_ip(module, quantum, fixed_ip)
if module.params['state'] == 'present':
if floating_ip:
module.exit_json(changed = False, public_ip=floating_ip)
@ -227,6 +230,7 @@ def main():
if floating_ip:
_update_floating_ip(quantum, module, None, floating_id)
module.exit_json(changed=False)
# this is magic, see lib/ansible/module_common.py
#<<INCLUDE_ANSIBLE_MODULE_COMMON>>
main()

View file

@ -40,7 +40,7 @@ options:
description:
- password of login user
required: true
default: True
default: 'yes'
login_tenant_name:
description:
- the tenant name of the login user
@ -163,21 +163,21 @@ def _update_floating_ip(quantum, module, port_id, floating_ip_id):
try:
result = quantum.update_floatingip(floating_ip_id, {'floatingip': kwargs})
except Exception as e:
module.fail_json( msg = "There was an error in updating the floating ip address: %s" % e.message)
module.exit_json( changed = True, result = result, public_ip=module.params['ip_address'])
module.fail_json(msg = "There was an error in updating the floating ip address: %s" % e.message)
module.exit_json(changed = True, result = result, public_ip=module.params['ip_address'])
def main():
module = AnsibleModule(
argument_spec = dict(
login_username = dict(default='admin'),
login_password = dict(required=True),
login_tenant_name = dict(required='True'),
auth_url = dict(default='http://127.0.0.1:35357/v2.0/'),
region_name = dict(default=None),
ip_address = dict(required=True),
instance_name = dict(required=True),
state = dict(default='present', choices=['absent', 'present'])
login_username = dict(default='admin'),
login_password = dict(required=True),
login_tenant_name = dict(required='True'),
auth_url = dict(default='http://127.0.0.1:35357/v2.0/'),
region_name = dict(default=None),
ip_address = dict(required=True),
instance_name = dict(required=True),
state = dict(default='present', choices=['absent', 'present'])
),
)
@ -192,7 +192,7 @@ def main():
module.exit_json(changed = False, result = 'attached', public_ip=module.params['ip_address'])
server_info, server_obj = _get_server_state(module, nova)
if not server_info:
module.fail_json( msg = " The instance name provided cannot be found")
module.fail_json(msg = " The instance name provided cannot be found")
port_id = _get_port_id(quantum, module, server_info['id'])
if not port_id:
module.fail_json(msg = "Cannot find a port for this instance, maybe fixed ip is not assigned")
@ -203,7 +203,8 @@ def main():
module.exit_json(changed = False, result = 'detached')
if state == 'attached':
_update_floating_ip(quantum, module, None, floating_ip_id)
module.exit_json( changed = True, result = "detached")
module.exit_json(changed = True, result = "detached")
# this is magic, see lib/ansible/module_common.py
#<<INCLUDE_ANSIBLE_MODULE_COMMON>>
main()

View file

@ -38,12 +38,12 @@ options:
description:
- Password of login user
required: true
default: True
default: 'yes'
login_tenant_name:
description:
- The tenant name of the login user
required: true
default: True
default: 'yes'
auth_url:
description:
- The keystone url for authentication
@ -81,7 +81,7 @@ options:
default: None
router_external:
description:
- A value of true specifies that the virtual network is a external network (public).
- If 'yes', specifies that the virtual network is an external network (public).
required: false
default: false
shared:
@ -98,11 +98,11 @@ examples:
- code: "quantum_network: state=present login_username=admin login_password=admin
provider_network_type=gre login_tenant_name=admin
provider_segmentation_id=1 tenant_name=tenant1 name=t1network"
description: "Create's a GRE nework with tunnel id of 1 for tenant 1"
description: "Createss a GRE nework with tunnel id of 1 for tenant 1"
- code: "quantum_network: state=present login_username=admin login_password=admin
provider_network_type=local login_tenant_name=admin
provider_segmentation_id=1 router_external=true name=external_network"
description: "Create's an external,public network"
provider_segmentation_id=1 router_external=yes name=external_network"
description: "Creates an external,public network"
requirements: ["quantumclient", "keystoneclient"]
'''
@ -173,22 +173,27 @@ def _get_net_id(quantum, module):
return networks['networks'][0]['id']
def _create_network(module, quantum):
quantum.format = 'json'
network = {
'name': module.params.get('name'),
'tenant_id': _os_tenant_id,
'provider:network_type': module.params.get('provider_network_type'),
'provider:physical_network': module.params.get('provider_physical_network'),
'provider:segmentation_id': module.params.get('provider_segmentation_id'),
'router:external': module.params.get('router_external'),
'shared': module.params.get('shared'),
'admin_state_up': module.params.get('admin_state_up'),
'name': module.params.get('name'),
'tenant_id': _os_tenant_id,
'provider:network_type': module.params.get('provider_network_type'),
'provider:physical_network': module.params.get('provider_physical_network'),
'provider:segmentation_id': module.params.get('provider_segmentation_id'),
'router:external': module.params.get('router_external'),
'shared': module.params.get('shared'),
'admin_state_up': module.params.get('admin_state_up'),
}
if module.params['provider_network_type'] == 'local':
network.pop('provider:physical_network', None)
network.pop('provider:segmentation_id', None)
if module.params['provider_network_type'] == 'flat':
network.pop('provider:segmentation_id', None)
if module.params['provider_network_type'] == 'gre':
network.pop('provider:physical_network', None)
@ -199,6 +204,7 @@ def _create_network(module, quantum):
return net['network']['id']
def _delete_network(module, net_id, quantum):
try:
id = quantum.delete_network(net_id)
except Exception as e:
@ -209,30 +215,35 @@ def main():
module = AnsibleModule(
argument_spec = dict(
login_username = dict(default='admin'),
login_password = dict(required=True),
login_tenant_name = dict(required='True'),
auth_url = dict(default='http://127.0.0.1:35357/v2.0/'),
region_name = dict(default=None),
name = dict(required=True),
tenant_name = dict(default=None),
provider_network_type = dict(default='local', choices=['local', 'vlan', 'flat', 'gre']),
provider_physical_network = dict(default=None),
provider_segmentation_id = dict(default=None),
router_external = dict(default='false', choices=BOOLEANS),
shared = dict(default='false', choices=BOOLEANS),
admin_state_up = dict(default='true', choices=BOOLEANS),
state = dict(default='present', choices=['absent', 'present'])
login_username = dict(default='admin'),
login_password = dict(required=True),
login_tenant_name = dict(required='True'),
auth_url = dict(default='http://127.0.0.1:35357/v2.0/'),
region_name = dict(default=None),
name = dict(required=True),
tenant_name = dict(default=None),
provider_network_type = dict(default='local', choices=['local', 'vlan', 'flat', 'gre']),
provider_physical_network = dict(default=None),
provider_segmentation_id = dict(default=None),
router_external = dict(default=False, type='bool'),
shared = dict(default=False, type='bool'),
admin_state_up = dict(default=True, type='bool'),
state = dict(default='present', choices=['absent', 'present'])
),
)
if module.params['provider_network_type'] in ['vlan' , 'flat']:
if not module.params['provider_physical_network']:
module.fail_json(msg = " for vlan and flat networks, variable provider_physical_network should be set.")
if module.params['provider_network_type'] in ['vlan', 'gre']:
if not module.params['provider_segmentation_id']:
module.fail_json(msg = " for vlan & gre networks, variable provider_segmentation_id should be set.")
quantum = _get_quantum_client(module, module.params)
_set_tenant_id(module)
if module.params['state'] == 'present':
network_id = _get_net_id(quantum, module)
if not network_id:
@ -240,6 +251,7 @@ def main():
module.exit_json(changed = True, result = "Created", id = network_id)
else:
module.exit_json(changed = False, result = "Success", id = network_id)
if module.params['state'] == 'absent':
network_id = _get_net_id(quantum, module)
if not network_id:
@ -248,9 +260,6 @@ def main():
_delete_network(module, network_id, quantum)
module.exit_json(changed = True, result = "Deleted")
# this is magic, see lib/ansible/module_common.py
#<<INCLUDE_ANSIBLE_MODULE_COMMON>>
main()

View file

@ -38,12 +38,12 @@ options:
description:
- Password of login user
required: true
default: True
default: 'yes'
login_tenant_name:
description:
- The tenant name of the login user
required: true
default: True
default: 'yes'
auth_url:
description:
- The keystone url for authentication
@ -74,12 +74,14 @@ options:
- desired admin state of the created router.
required: false
default: true
examples:
- code: "quantum_router: state=present login_username=admin login_password=admin login_tenant_name=admin name=router1"
description: "Creates a router for tenant admin"
requirements: ["quantumclient", "keystoneclient"]
'''
EXAMPLES = '''
# Creates a router for tenant admin
quantum_router: state=present login_username=admin login_password=admin login_tenant_name=admin name=router1
'''
_os_keystone = None
_os_tenant_id = None
@ -175,26 +177,28 @@ def main():
name = dict(required=True),
tenant_name = dict(default=None),
state = dict(default='present', choices=['absent', 'present']),
admin_state_up = dict(default='true', choices=BOOLEANS),
admin_state_up = dict(type='bool', default=True),
),
)
quantum = _get_quantum_client(module, module.params)
_set_tenant_id(module)
if module.params['state'] == 'present':
router_id = _get_router_id(module, quantum)
if not router_id:
router_id = _create_router(module, quantum)
module.exit_json(changed = True, result = "Created" , id = router_id)
module.exit_json(changed=True, result="Created", id=router_id)
else:
module.exit_json(changed = False, result = "success" , id = router_id)
module.exit_json(changed=False, result="success" , id=router_id)
else:
router_id = _get_router_id(module, quantum)
if not router_id:
module.exit_json(changed = False, result = "success")
module.exit_json(changed=False, result="success")
else:
_delete_router(module, quantum, router_id)
module.exit_json(changed = True, result = "deleted")
module.exit_json(changed=True, result="deleted")
# this is magic, see lib/ansible/module_common.py
#<<INCLUDE_ANSIBLE_MODULE_COMMON>>

View file

@ -37,12 +37,12 @@ options:
description:
- Password of login user
required: true
default: True
default: 'yes'
login_tenant_name:
description:
- The tenant name of the login user
required: true
default: True
default: 'yes'
auth_url:
description:
- The keystone url for authentication
@ -124,8 +124,8 @@ def _get_router_id(module, quantum):
def _get_net_id(quantum, module):
kwargs = {
'name': module.params['network_name'],
'router:external': True
'name': module.params['network_name'],
'router:external': True
}
try:
networks = quantum.list_networks(**kwargs)
@ -135,11 +135,10 @@ def _get_net_id(quantum, module):
return None
return networks['networks'][0]['id']
def _get_port_id(quantum, module, router_id, network_id):
kwargs = {
'device_id': router_id,
'network_id': network_id,
'device_id': router_id,
'network_id': network_id,
}
try:
ports = quantum.list_ports(**kwargs)
@ -151,7 +150,7 @@ def _get_port_id(quantum, module, router_id, network_id):
def _add_gateway_router(quantum, module, router_id, network_id):
kwargs = {
'network_id': network_id
'network_id': network_id
}
try:
quantum.add_gateway_router(router_id, kwargs)
@ -163,46 +162,47 @@ def _remove_gateway_router(quantum, module, router_id):
try:
quantum.remove_gateway_router(router_id)
except Exception as e:
module.fail_json(msg = "Error in removing gateway to router: %s" % e.message)
module.fail_json(msg = "Error in removing gateway to router: %s" % e.message)
return True
def main():
module = AnsibleModule(
argument_spec = dict(
login_username = dict(default='admin'),
login_password = dict(required=True),
login_tenant_name = dict(required='True'),
auth_url = dict(default='http://127.0.0.1:35357/v2.0/'),
region_name = dict(default=None),
router_name = dict(required=True),
network_name = dict(required=True),
state = dict(default='present', choices=['absent', 'present']),
argument_spec = dict(
login_username = dict(default='admin'),
login_password = dict(required=True),
login_tenant_name = dict(required='True'),
auth_url = dict(default='http://127.0.0.1:35357/v2.0/'),
region_name = dict(default=None),
router_name = dict(required=True),
network_name = dict(required=True),
state = dict(default='present', choices=['absent', 'present']),
),
)
quantum = _get_quantum_client(module, module.params)
router_id = _get_router_id(module, quantum)
if not router_id:
module.fail_json(msg = "failed to get the router id, please check the router name")
module.fail_json(msg="failed to get the router id, please check the router name")
network_id = _get_net_id(quantum, module)
if not network_id:
module.fail_json(msg = "failed to get the network id, please check the network name and make sure it is external")
module.fail_json(msg="failed to get the network id, please check the network name and make sure it is external")
if module.params['state'] == 'present':
port_id = _get_port_id(quantum, module, router_id, network_id)
if not port_id:
_add_gateway_router(quantum, module, router_id, network_id)
module.exit_json(changed = True, result = "created")
module.exit_json(changed = False, result = "success")
_add_gateway_router(quantum, module, router_id, network_id)
module.exit_json(changed=True, result="created")
module.exit_json(changed=False, result="success")
if module.params['state'] == 'absent':
port_id = _get_port_id(quantum, module, router_id, network_id)
if not port_id:
module.exit_json(changed = False, result = "Success")
module.exit_json(changed=False, result="Success")
_remove_gateway_router(quantum, module, router_id)
module.exit_json(changed = True, result = "Deleted")
module.exit_json(changed=True, result="Deleted")
# this is magic, see lib/ansible/module_common.py
#<<INCLUDE_ANSIBLE_MODULE_COMMON>>

View file

@ -37,12 +37,12 @@ options:
description:
- Password of login user
required: true
default: True
default: 'yes'
login_tenant_name:
description:
- The tenant name of the login user
required: true
default: True
default: 'yes'
auth_url:
description:
- The keystone url for authentication
@ -193,48 +193,49 @@ def _remove_interface_router(quantum, module, router_id, subnet_id):
try:
quantum.remove_interface_router(router_id, kwargs)
except Exception as e:
module.fail_json(msg = "Error in removing interface from router: %s" % e.message)
module.fail_json(msg="Error in removing interface from router: %s" % e.message)
return True
def main():
module = AnsibleModule(
argument_spec = dict(
login_username = dict(default='admin'),
login_password = dict(required=True),
login_tenant_name = dict(required='True'),
auth_url = dict(default='http://127.0.0.1:35357/v2.0/'),
region_name = dict(default=None),
router_name = dict(required=True),
subnet_name = dict(required=True),
tenant_name = dict(default=None),
state = dict(default='present', choices=['absent', 'present']),
login_username = dict(default='admin'),
login_password = dict(required=True),
login_tenant_name = dict(required='True'),
auth_url = dict(default='http://127.0.0.1:35357/v2.0/'),
region_name = dict(default=None),
router_name = dict(required=True),
subnet_name = dict(required=True),
tenant_name = dict(default=None),
state = dict(default='present', choices=['absent', 'present']),
),
)
quantum = _get_quantum_client(module, module.params)
_set_tenant_id(module)
router_id = _get_router_id(module, quantum)
if not router_id:
module.fail_json(msg = "failed to get the router id, please check the router name")
module.fail_json(msg="failed to get the router id, please check the router name")
subnet_id = _get_subnet_id(module, quantum)
if not subnet_id:
module.fail_json(msg = "failed to get the subnet id, please check the subnet name")
module.fail_json(msg="failed to get the subnet id, please check the subnet name")
if module.params['state'] == 'present':
port_id = _get_port_id(quantum, module, router_id, subnet_id)
if not port_id:
_add_interface_router(quantum, module, router_id, subnet_id)
module.exit_json(changed = True, result = "created", id = port_id)
module.exit_json(changed = False, result = "success", id = port_id)
module.exit_json(changed=True, result="created", id=port_id)
module.exit_json(changed=False, result="success", id=port_id)
if module.params['state'] == 'absent':
port_id = _get_port_id(quantum, module, router_id, subnet_id)
if not port_id:
module.exit_json(changed = False, result = "Sucess")
_remove_interface_router(quantum, module, router_id, subnet_id)
module.exit_json(changed = True, result = "Deleted")
module.exit_json(changed=True, result="Deleted")
# this is magic, see lib/ansible/module_common.py
#<<INCLUDE_ANSIBLE_MODULE_COMMON>>
main()

View file

@ -24,7 +24,7 @@ except ImportError:
DOCUMENTATION = '''
---
module: quantum_floating_ip
module: quantum_subnet
short_description: Add/Remove floating ip from an instance
description:
- Add or Remove a floating ip to an instance

View file

@ -53,7 +53,7 @@ options:
aliases: []
overwrite:
description:
- force overwrite if a file with the same name already exists, values true/false/yes/no. Does not support files uploaded to s3 with multipart upload.
- force overwrite if a file with the same name already exists. Does not support files uploaded to s3 with multipart upload.
required: false
default: false
version_added: "1.2"
@ -99,15 +99,15 @@ def upload_s3file(module, s3, bucket, key_name, path, expiry):
def main():
module = AnsibleModule(
argument_spec = dict(
bucket = dict(),
path = dict(),
dest = dict(),
state = dict(choices=['present', 'absent']),
expiry = dict(default=600),
s3_url = dict(aliases=['S3_URL']),
bucket = dict(),
path = dict(),
dest = dict(),
state = dict(choices=['present', 'absent']),
expiry = dict(default=600, aliases=['expiration']),
s3_url = dict(aliases=['S3_URL']),
ec2_secret_key = dict(aliases=['EC2_SECRET_KEY']),
ec2_access_key = dict(aliases=['EC2_ACCESS_KEY']),
overwrite = dict(default="false", choices=BOOLEANS),
overwrite = dict(default=False, type='bool'),
),
required_together=[ ['bucket', 'path', 'state'] ],
)
@ -120,7 +120,7 @@ def main():
s3_url = module.params.get('s3_url')
ec2_secret_key = module.params.get('ec2_secret_key')
ec2_access_key = module.params.get('ec2_access_key')
overwrite = module.boolean( module.params.get('overwrite') )
overwrite = module.params.get('overwrite')
# allow eucarc environment variables to be used if ansible vars aren't set
if not s3_url and 'S3_URL' in os.environ:
@ -173,7 +173,8 @@ def main():
else:
key_name = dest
# Check to see if the key already exists
# Check to see if the key already exists
key_exists = False
if bucket_exists is True:
try:
key_check = bucket.get_key(key_name)
@ -184,7 +185,7 @@ def main():
except s3.provider.storage_response_error, e:
module.fail_json(msg= str(e))
if key_exists is True and overwrite is True:
if key_exists is True and overwrite:
# Retrieve MD5 Checksums.
md5_remote = key_check.etag[1:-1] # Strip Quotation marks from etag: https://code.google.com/p/boto/issues/detail?id=391
etag_multipart = md5_remote.find('-')!=-1 # Find out if this is a multipart upload -> etag is not md5: https://forums.aws.amazon.com/message.jspa?messageID=222158

View file

@ -275,13 +275,18 @@ class Virt(object):
}
return info
def list_vms(self):
def list_vms(self, state=None):
self.conn = self.__get_conn()
vms = self.conn.find_vm(-1)
results = []
for x in vms:
try:
results.append(x.name())
if state:
vmstate = self.conn.get_status2(x)
if vmstate == state:
results.append(x.name())
else:
results.append(x.name())
except:
pass
return results
@ -395,6 +400,11 @@ def core(module):
v = Virt(uri)
res = {}
if state and command=='list_vms':
res = v.list_vms(state=state)
if type(res) != dict:
res = { command: res }
return VIRT_SUCCESS, res
if state:
if not guest:
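A small illustration of the state filter added to list_vms above, using made-up guest names and states in place of a real libvirt connection:

def list_vms(guests, state=None):
    # guests: list of (name, state) pairs standing in for libvirt domains
    results = []
    for name, vmstate in guests:
        if state:
            if vmstate == state:
                results.append(name)
        else:
            results.append(name)
    return results

guests = [('web1', 'running'), ('db1', 'shutdown'), ('web2', 'running')]
print(list_vms(guests))                    # every defined guest
print(list_vms(guests, state='running'))   # only the running ones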

View file

@ -86,7 +86,7 @@ options:
- I(grant_option) only has an effect if I(state) is C(present).
- 'Alias: I(admin_option)'
required: no
choices: [yes, no]
choices: ['yes', 'no']
host:
description:
- Database host address. If unspecified, connect via Unix socket.

View file

@ -55,7 +55,7 @@ options:
description:
- if C(yes), fail when user can't be removed. Otherwise just log and continue
required: false
default: yes
default: 'yes'
choices: [ "yes", "no" ]
login_user:
description:

View file

@ -76,7 +76,7 @@ options:
examples:
- code: "riak: command=join target_node=riak@10.1.1.1"
description: "Join's a Riak node to another node"
- code: "riak: wait_for_handoffs=true"
- code: "riak: wait_for_handoffs=yes"
description: "Wait for handoffs to finish. Use with async and poll."
- code: "riak: wait_for_service=kv"
description: "Wait for riak_kv service to startup"
@ -144,7 +144,7 @@ def main():
# here we attempt to get stats from the http stats interface for 120 seconds.
timeout = time.time() + 120
while True:
if time.time () > timeout:
if time.time() > timeout:
module.fail_json(msg='Timeout, could not fetch Riak stats.')
try:
stats_raw = urllib2.urlopen(
@ -230,7 +230,7 @@ def main():
result['handoffs'] = 'No transfers active.'
break
time.sleep(10)
if time.time () > timeout:
if time.time() > timeout:
module.fail_json(msg='Timeout waiting for handoffs.')
# this could take a while, recommend to run in async mode
@ -247,7 +247,7 @@ def main():
break
time.sleep(10)
wait += 10
if time.time () > timeout:
if time.time() > timeout:
module.fail_json(msg='Timeout waiting for nodes to agree on ring.')
result['ring_ready'] = ring_check()

View file

@ -67,8 +67,8 @@ options:
- if C(no), it will not use a proxy, even if one is defined in
an environment variable on the target hosts.
required: false
default: yes
choices: [yes, no]
default: 'yes'
choices: ['yes', 'no']
others:
description:
- all arguments accepted by the M(file) module also work here

View file

@ -44,8 +44,8 @@ options:
description:
- notify or not (change the tab color, play a sound, etc)
required: false
default: true
choices: [ "true", "false" ]
default: 'yes'
choices: [ "yes", "no" ]
# informational: requirements for nodes
requirements: [ urllib, urllib2 ]
@ -116,7 +116,7 @@ def main():
color=dict(default="yellow", choices=["yellow", "red", "green",
"purple", "gray", "random"]),
msg_format=dict(default="text", choices=["text", "html"]),
notify=dict(default=True, choices=BOOLEANS),
notify=dict(default=True, type='bool'),
),
supports_check_mode=True
)

View file

@ -46,7 +46,7 @@ options:
force:
required: false
default: "yes"
choices: [ yes, no ]
choices: [ 'yes', 'no' ]
description:
- If C(yes), any modified files in the working
tree will be discarded.

View file

@ -85,15 +85,15 @@ examples:
import re
import tempfile
def get_version(dest):
def get_version(git_path, dest):
''' samples the version of the git repo '''
os.chdir(dest)
cmd = "git show --abbrev-commit"
cmd = "%s show --abbrev-commit" % (git_path,)
sha = os.popen(cmd).read().split("\n")
sha = sha[0].split()[1]
return sha
def clone(module, repo, dest, remote, depth):
def clone(git_path, module, repo, dest, remote, depth):
''' makes a new git repo if it does not already exist '''
dest_dirname = os.path.dirname(dest)
try:
@ -101,39 +101,40 @@ def clone(module, repo, dest, remote, depth):
except:
pass
os.chdir(dest_dirname)
cmd = [ module.get_bin_path('git', True), 'clone', '-o', remote ]
cmd = [ git_path, 'clone', '-o', remote ]
if depth:
cmd.extend([ '--depth', str(depth) ])
cmd.extend([ repo, dest ])
return module.run_command(cmd, check_rc=True)
def has_local_mods(dest):
def has_local_mods(git_path, dest):
os.chdir(dest)
cmd = "git status -s"
cmd = "%s status -s" % (git_path,)
lines = os.popen(cmd).read().splitlines()
lines = filter(lambda c: not re.search('^\\?\\?.*$', c), lines)
return len(lines) > 0
def reset(module, dest, force):
def reset(git_path, module, dest, force):
'''
Resets the index and working tree to HEAD.
Discards any changes to tracked files in working
tree since that commit.
'''
os.chdir(dest)
if not force and has_local_mods(dest):
if not force and has_local_mods(git_path, dest):
module.fail_json(msg="Local modifications exist in repository (force=no).")
return module.run_command("git reset --hard HEAD", check_rc=True)
cmd = "%s reset --hard HEAD" % (git_path,)
return module.run_command(cmd, check_rc=True)
def get_remote_head(module, dest, version, remote):
def get_remote_head(git_path, module, dest, version, remote):
cmd = ''
os.chdir(dest)
if version == 'HEAD':
version = get_head_branch(module, dest, remote)
if is_remote_branch(module, dest, remote, version):
cmd = 'git ls-remote %s -h refs/heads/%s' % (remote, version)
elif is_remote_tag(module, dest, remote, version):
cmd = 'git ls-remote %s -t refs/tags/%s' % (remote, version)
version = get_head_branch(git_path, module, dest, remote)
if is_remote_branch(git_path, module, dest, remote, version):
cmd = '%s ls-remote %s -h refs/heads/%s' % (git_path, remote, version)
elif is_remote_tag(git_path, module, dest, remote, version):
cmd = '%s ls-remote %s -t refs/tags/%s' % (git_path, remote, version)
else:
# appears to be a sha1. return as-is since it appears
# cannot check for a specific sha1 on remote
@ -144,35 +145,36 @@ def get_remote_head(module, dest, version, remote):
rev = out.split()[0]
return rev
def is_remote_tag(module, dest, remote, version):
def is_remote_tag(git_path, module, dest, remote, version):
os.chdir(dest)
cmd = 'git ls-remote %s -t refs/tags/%s' % (remote, version)
cmd = '%s ls-remote %s -t refs/tags/%s' % (git_path, remote, version)
(rc, out, err) = module.run_command(cmd)
if version in out:
return True
else:
return False
def get_branches(module, dest):
def get_branches(git_path, module, dest):
os.chdir(dest)
branches = []
(rc, out, err) = module.run_command("git branch -a")
cmd = '%s branch -a' % (git_path,)
(rc, out, err) = module.run_command(cmd)
if rc != 0:
module.fail_json(msg="Could not determine branch data - received %s" % out)
for line in out.split('\n'):
branches.append(line.strip())
return branches
def is_remote_branch(module, dest, remote, branch):
branches = get_branches(module, dest)
def is_remote_branch(git_path, module, dest, remote, branch):
branches = get_branches(git_path, module, dest)
rbranch = 'remotes/%s/%s' % (remote, branch)
if rbranch in branches:
return True
else:
return False
def is_local_branch(module, dest, branch):
branches = get_branches(module, dest)
def is_local_branch(git_path, module, dest, branch):
branches = get_branches(git_path, module, dest)
lbranch = '%s' % branch
if lbranch in branches:
return True
@ -181,24 +183,14 @@ def is_local_branch(module, dest, branch):
else:
return False
def is_current_branch(module, dest, branch):
branches = get_branches(module, dest)
for b in branches:
if b.startswith('* '):
cur_branch = b
if branch == cur_branch or '* %s' % branch == cur_branch:
return True
else:
return True
def is_not_a_branch(module, dest):
branches = get_branches(module, dest)
def is_not_a_branch(git_path, module, dest):
branches = get_branches(git_path, module, dest)
for b in branches:
if b.startswith('* ') and 'no branch' in b:
return True
return False
def get_head_branch(module, dest, remote):
def get_head_branch(git_path, module, dest, remote):
'''
Determine what branch HEAD is associated with. This is partly
taken from lib/ansible/utils/__init__.py. It finds the correct
@ -222,46 +214,46 @@ def get_head_branch(module, dest, remote):
# If we're in a detached HEAD state, look up the branch associated with
# the remote HEAD in .git/refs/remotes/<remote>/HEAD
f = open(os.path.join(repo_path, "HEAD"))
if is_not_a_branch(module, dest):
if is_not_a_branch(git_path, module, dest):
f.close()
f = open(os.path.join(repo_path, 'refs', 'remotes', remote, 'HEAD'))
branch = f.readline().split('/')[-1].rstrip("\n")
f.close()
return branch
def fetch(module, repo, dest, version, remote):
def fetch(git_path, module, repo, dest, version, remote):
''' updates repo from remote sources '''
os.chdir(dest)
(rc, out1, err1) = module.run_command("git fetch %s" % remote)
(rc, out1, err1) = module.run_command("%s fetch %s" % (git_path, remote))
if rc != 0:
module.fail_json(msg="Failed to download remote objects and refs")
(rc, out2, err2) = module.run_command("git fetch --tags %s" % remote)
(rc, out2, err2) = module.run_command("%s fetch --tags %s" % (git_path, remote))
if rc != 0:
module.fail_json(msg="Failed to download remote objects and refs")
return (rc, out1 + out2, err1 + err2)
def switch_version(module, dest, remote, version):
def switch_version(git_path, module, dest, remote, version):
''' once pulled, switch to a particular SHA, tag, or branch '''
os.chdir(dest)
cmd = ''
if version != 'HEAD':
if is_remote_branch(module, dest, remote, version):
if not is_local_branch(module, dest, version):
cmd = "git checkout --track -b %s %s/%s" % (version, remote, version)
if is_remote_branch(git_path, module, dest, remote, version):
if not is_local_branch(git_path, module, dest, version):
cmd = "%s checkout --track -b %s %s/%s" % (git_path, version, remote, version)
else:
(rc, out, err) = module.run_command("git checkout --force %s" % version)
(rc, out, err) = module.run_command("%s checkout --force %s" % (git_path, version))
if rc != 0:
module.fail_json(msg="Failed to checkout branch %s" % version)
cmd = "git reset --hard %s/%s" % (remote, version)
cmd = "%s reset --hard %s/%s" % (git_path, remote, version)
else:
cmd = "git checkout --force %s" % version
cmd = "%s checkout --force %s" % (git_path, version)
else:
branch = get_head_branch(module, dest, remote)
(rc, out, err) = module.run_command("git checkout --force %s" % branch)
branch = get_head_branch(git_path, module, dest, remote)
(rc, out, err) = module.run_command("%s checkout --force %s" % (git_path, branch))
if rc != 0:
module.fail_json(msg="Failed to checkout branch %s" % branch)
cmd = "git reset --hard %s" % remote
cmd = "%s reset --hard %s" % (git_path, remote)
return module.run_command(cmd, check_rc=True)
# ===========================================
@ -288,6 +280,7 @@ def main():
depth = module.params['depth']
update = module.params['update']
git_path = module.get_bin_path('git', True)
gitconfig = os.path.join(dest, '.git', 'config')
rc, out, err, status = (0, None, None, None)
@ -299,31 +292,31 @@ def main():
if not os.path.exists(gitconfig):
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = clone(module, repo, dest, remote, depth)
(rc, out, err) = clone(git_path, module, repo, dest, remote, depth)
elif not update:
# Just return having found a repo already in the dest path
# this does no checking that the repo is the actual repo
# requested.
before = get_version(dest)
before = get_version(git_path, dest)
module.exit_json(changed=False, before=before, after=before)
else:
# else do a pull
local_mods = has_local_mods(dest)
before = get_version(dest)
local_mods = has_local_mods(git_path, dest)
before = get_version(git_path, dest)
# if force, do a reset
if local_mods and module.check_mode:
module.exit_json(changed=True, msg='Local modifications exist')
(rc, out, err) = reset(module,dest,force)
(rc, out, err) = reset(git_path, module, dest, force)
if rc != 0:
module.fail_json(msg=err)
# check or get changes from remote
remote_head = get_remote_head(module, dest, version, remote)
remote_head = get_remote_head(git_path, module, dest, version, remote)
if module.check_mode:
changed = False
if remote_head == version:
# get_remote_head returned version as-is
# were given a sha1 object, see if it is present
(rc, out, err) = module.run_command("git show %s" % version)
(rc, out, err) = module.run_command("%s show %s" % (git_path, version))
if version in out:
changed = False
else:
@ -335,16 +328,16 @@ def main():
else:
changed = False
module.exit_json(changed=changed, before=before, after=remote_head)
(rc, out, err) = fetch(module, repo, dest, version, remote)
(rc, out, err) = fetch(git_path, module, repo, dest, version, remote)
if rc != 0:
module.fail_json(msg=err)
# switch to version specified regardless of whether
# we cloned or pulled
(rc, out, err) = switch_version(module, dest, remote, version)
(rc, out, err) = switch_version(git_path, module, dest, remote, version)
# determine if we changed anything
after = get_version(dest)
after = get_version(git_path, dest)
changed = False
if before != after or local_mods:
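The change above threads a resolved git_path (from module.get_bin_path) through every helper instead of relying on a bare 'git' being found on PATH at command time. A rough, self-contained sketch of that lookup pattern follows; the helper name find_executable is illustrative only and is not the AnsibleModule implementation.

import os

def find_executable(name, opt_dirs=None):
    # walk optional extra directories plus PATH, return first executable match
    paths = (opt_dirs or []) + os.environ.get('PATH', '').split(os.pathsep)
    for directory in paths:
        candidate = os.path.join(directory, name)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None

git_path = find_executable('git')
if git_path is not None:
    # commands are then built against the absolute path, as in the module above
    print('%s ls-remote origin -h refs/heads/master' % git_path)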

View file

@ -59,7 +59,7 @@ options:
required: false
choices: [ "present", "absent" ]
default: "present"
description:
description:
- "adds or removes authorized keys for particular user accounts"
author: Brad Olson
'''
@ -215,7 +215,7 @@ def main():
user = dict(required=True, type='str'),
key = dict(required=True, type='str'),
path = dict(required=False, type='str'),
manage_dir = dict(required=False, type='bool'),
manage_dir = dict(required=False, type='bool', default=True),
state = dict(default='present', choices=['absent','present'])
)
)
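With manage_dir now defaulting to True, the module creates and fixes up the key directory unless told otherwise. A hypothetical task (user name and key value are placeholders, not from this changeset):

authorized_key: user=deploy key="ssh-rsa AAAA...placeholder deploy@example" manage_dir=yes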

View file

@ -234,6 +234,7 @@ def main():
)
)
changed = False
rc = 0
args = {
@ -245,6 +246,8 @@ def main():
args['passno'] = module.params['passno']
if module.params['opts'] is not None:
args['opts'] = module.params['opts']
if ' ' in args['opts']:
module.fail_json(msg="unexpected space in 'opts' parameter")
if module.params['dump'] is not None:
args['dump'] = module.params['dump']
if module.params['fstab'] is not None:
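The new guard rejects an 'opts' value containing spaces, since mount options must be a single comma-separated token as they appear in fstab. A hypothetical task showing the expected format (device and mount point are placeholders):

mount: name=/mnt/data src=/dev/sdb1 fstype=ext4 opts=noatime,nodev state=mounted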

View file

@ -41,10 +41,10 @@ options:
- Desired boolean value
required: true
default: null
choices: [ true, false ]
choices: [ 'yes', 'no' ]
examples:
- code: "seboolean: name=httpd_can_network_connect state=true persistent=yes"
description: Set I(httpd_can_network_connect) SELinux flag to I(true) and I(persistent)
- code: "seboolean: name=httpd_can_network_connect state=yes persistent=yes"
description: Set I(httpd_can_network_connect) flag on and keep it persistent across reboots
notes:
- Not tested on any debian based system
requirements: [ ]

View file

@ -390,7 +390,6 @@ class LinuxService(Service):
break
# Locate a tool for runtime service management (start, stop etc.)
self.svc_cmd = ''
if location.get('service', None) and os.path.exists("/etc/init.d/%s" % self.name):
# SysV init script
self.svc_cmd = location['service']
@ -405,12 +404,12 @@ class LinuxService(Service):
self.svc_initscript = initscript
# couldn't find anything yet, assume systemd
if self.svc_initscript is None:
if self.svc_cmd is None and self.svc_initscript is None:
if location.get('systemctl'):
self.svc_cmd = location['systemctl']
if self.svc_cmd is None and not self.svc_initscript:
self.module.fail_json(msg='cannot find \'service\' binary or init script for service, aborting')
self.module.fail_json(msg='cannot find \'service\' binary or init script for service; possible typo in service name? aborting')
if location.get('initctl', None):
self.svc_initctl = location['initctl']

View file

@ -29,12 +29,7 @@ import socket
import struct
import datetime
import getpass
if not os.path.exists('/sys/devices/virtual/dmi/id/product_name'):
try:
import dmidecode
except ImportError:
import subprocess
import subprocess
DOCUMENTATION = '''
---
@ -149,10 +144,20 @@ class Facts(object):
self.facts['fqdn'] = socket.getfqdn()
self.facts['hostname'] = platform.node().split('.')[0]
self.facts['domain'] = '.'.join(self.facts['fqdn'].split('.')[1:])
arch_bits = platform.architecture()[0]
self.facts['userspace_bits'] = arch_bits.replace('bit', '')
if self.facts['machine'] == 'x86_64':
self.facts['architecture'] = self.facts['machine']
if self.facts['userspace_bits'] == '64':
self.facts['userspace_architecture'] = 'x86_64'
elif self.facts['userspace_bits'] == '32':
self.facts['userspace_architecture'] = 'i386'
elif Facts._I386RE.search(self.facts['machine']):
self.facts['architecture'] = 'i386'
if self.facts['userspace_bits'] == '64':
self.facts['userspace_architecture'] = 'x86_64'
elif self.facts['userspace_bits'] == '32':
self.facts['userspace_architecture'] = 'i386'
else:
self.facts['architecture'] = self.facts['machine']
if self.facts['system'] == 'Linux':
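The architecture facts above are derived from platform.machine() (hardware) and platform.architecture() (userspace pointer size), so a 32-bit userland running on x86_64 hardware reports userspace_architecture as i386. A standalone illustration; output naturally varies per host:

import platform

machine = platform.machine()                  # e.g. 'x86_64'
userspace_bits = platform.architecture()[0]   # e.g. '64bit' or '32bit'
print(machine, userspace_bits.replace('bit', ''))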
@ -463,13 +468,10 @@ class LinuxHardware(Hardware):
self.facts['processor_cores'] = 'NA'
def get_dmi_facts(self):
''' learn dmi facts from system
def execute(cmd):
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
(out, err) = p.communicate()
if p.returncode or err:
return None
return out.rstrip()
Try /sys first for dmi related facts.
If that is not available, fall back to the dmidecode executable '''
if os.path.exists('/sys/devices/virtual/dmi/id/product_name'):
# Use kernel DMI info, if available
@ -484,16 +486,16 @@ class LinuxHardware(Hardware):
"Rack Mount Chassis", "Sealed-case PC", "Multi-system",
"CompactPCI", "AdvancedTCA", "Blade" ]
DMI_DICT = dict(
bios_date = '/sys/devices/virtual/dmi/id/bios_date',
bios_version = '/sys/devices/virtual/dmi/id/bios_version',
form_factor = '/sys/devices/virtual/dmi/id/chassis_type',
product_name = '/sys/devices/virtual/dmi/id/product_name',
product_serial = '/sys/devices/virtual/dmi/id/product_serial',
product_uuid = '/sys/devices/virtual/dmi/id/product_uuid',
product_version = '/sys/devices/virtual/dmi/id/product_version',
system_vendor = '/sys/devices/virtual/dmi/id/sys_vendor',
)
DMI_DICT = {
'bios_date': '/sys/devices/virtual/dmi/id/bios_date',
'bios_version': '/sys/devices/virtual/dmi/id/bios_version',
'form_factor': '/sys/devices/virtual/dmi/id/chassis_type',
'product_name': '/sys/devices/virtual/dmi/id/product_name',
'product_serial': '/sys/devices/virtual/dmi/id/product_serial',
'product_uuid': '/sys/devices/virtual/dmi/id/product_uuid',
'product_version': '/sys/devices/virtual/dmi/id/product_version',
'system_vendor': '/sys/devices/virtual/dmi/id/sys_vendor'
}
for (key,path) in DMI_DICT.items():
data = get_file_content(path)
@ -508,46 +510,28 @@ class LinuxHardware(Hardware):
else:
self.facts[key] = 'NA'
elif 'dmidecode' in sys.modules.keys():
# Use python dmidecode, if available
DMI_DICT = dict(
bios_date = '/dmidecode/BIOSinfo/ReleaseDate',
bios_version = '/dmidecode/BIOSinfo/BIOSrevision',
form_factor = '/dmidecode/ChassisInfo/ChassisType',
product_name = '/dmidecode/SystemInfo/ProductName',
product_serial = '/dmidecode/SystemInfo/SerialNumber',
product_uuid = '/dmidecode/SystemInfo/SystemUUID',
product_version = '/dmidecode/SystemInfo/Version',
system_vendor = '/dmidecode/SystemInfo/Manufacturer',
)
dmixml = dmidecode.dmidecodeXML()
dmixml.SetResultType(dmidecode.DMIXML_DOC)
xmldoc = dmixml.QuerySection('all')
dmixp = xmldoc.xpathNewContext()
for (key,path) in DMI_DICT.items():
try:
data = dmixp.xpathEval(path)
if len(data) > 0:
self.facts[key] = data[0].get_content()
else:
self.facts[key] = 'Error'
except:
self.facts[key] = 'NA'
else:
# Fall back to using dmidecode, if available
self.facts['bios_date'] = execute('dmidecode -s bios-release-date') or 'NA'
self.facts['bios_version'] = execute('dmidecode -s bios-version') or 'NA'
self.facts['form_factor'] = execute('dmidecode -s chassis-type') or 'NA'
self.facts['product_name'] = execute('dmidecode -s system-product-name') or 'NA'
self.facts['product_serial'] = execute('dmidecode -s system-serial-number') or 'NA'
self.facts['product_uuid'] = execute('dmidecode -s system-uuid') or 'NA'
self.facts['product_version'] = execute('dmidecode -s system-version') or 'NA'
self.facts['system_vendor'] = execute('dmidecode -s system-manufacturer') or 'NA'
dmi_bin = module.get_bin_path('dmidecode')
DMI_DICT = {
'bios_date': 'bios-release-date',
'bios_version': 'bios-version',
'form_factor': 'chassis-type',
'product_name': 'system-product-name',
'product_serial': 'system-serial-number',
'product_uuid': 'system-uuid',
'product_version': 'system-version',
'system_vendor': 'system-manufacturer'
}
for (k, v) in DMI_DICT.items():
if dmi_bin is not None:
(rc, out, err) = module.run_command('%s -s %s' % (dmi_bin, v))
if rc == 0:
self.facts[k] = out.rstrip()
else:
self.facts[k] = 'NA'
else:
self.facts[k] = 'NA'
def get_mount_facts(self):
self.facts['mounts'] = []
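Both strategies above boil down to reading one small text value per fact: a file under /sys/devices/virtual/dmi/id, or the output of 'dmidecode -s <keyword>' when the sysfs entries are absent. A minimal standalone sketch under those assumptions (dmi_value is a made-up helper; dmidecode typically needs root):

import os
import subprocess

def dmi_value(sys_path, dmidecode_keyword):
    if os.path.exists(sys_path):
        return open(sys_path).read().strip()
    try:
        out = subprocess.check_output(['dmidecode', '-s', dmidecode_keyword])
        return out.decode().strip()
    except (OSError, subprocess.CalledProcessError):
        return 'NA'

print(dmi_value('/sys/devices/virtual/dmi/id/product_name', 'system-product-name'))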
@ -555,7 +539,16 @@ class LinuxHardware(Hardware):
for line in mtab.split('\n'):
if line.startswith('/'):
fields = line.rstrip('\n').split()
self.facts['mounts'].append({'mount': fields[1], 'device':fields[0], 'fstype': fields[2], 'options': fields[3]})
statvfs_result = os.statvfs(fields[1])
self.facts['mounts'].append(
{'mount': fields[1],
'device':fields[0],
'fstype': fields[2],
'options': fields[3],
# statvfs data
'size_total': statvfs_result.f_bsize * statvfs_result.f_blocks,
'size_available': statvfs_result.f_bsize * (statvfs_result.f_bavail),
})
def get_device_facts(self):
self.facts['devices'] = {}
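The size_total/size_available figures added to each mount come straight from os.statvfs: block size times total blocks, and block size times the blocks available to unprivileged users. For example:

import os

st = os.statvfs('/')
size_total = st.f_bsize * st.f_blocks      # total filesystem size in bytes
size_available = st.f_bsize * st.f_bavail  # bytes available to unprivileged users
print(size_total, size_available)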
@ -668,15 +661,47 @@ class SunOSHardware(Hardware):
return self.facts
def get_cpu_facts(self):
rc, out, err = module.run_command("/usr/sbin/psrinfo -v")
physid = 0
sockets = {}
rc, out, err = module.run_command("/usr/bin/kstat cpu_info")
self.facts['processor'] = []
for line in out.split('\n'):
if 'processor operates' in line:
if len(line) < 1:
continue
data = line.split(None, 1)
key = data[0].strip()
# "brand" works on Solaris 10 & 11. "implementation" for Solaris 9.
if key == 'module:':
brand = ''
elif key == 'brand':
brand = data[1].strip()
elif key == 'clock_MHz':
clock_mhz = data[1].strip()
elif key == 'implementation':
processor = brand or data[1].strip()
# Add clock speed to description for SPARC CPU
if self.facts['machine'] != 'i86pc':
processor += " @ " + clock_mhz + "MHz"
if 'processor' not in self.facts:
self.facts['processor'] = []
self.facts['processor'].append(line.strip())
self.facts['processor_cores'] = 'NA'
self.facts['processor_count'] = len(self.facts['processor'])
self.facts['processor'].append(processor)
elif key == 'chip_id':
physid = data[1].strip()
if physid not in sockets:
sockets[physid] = 1
else:
sockets[physid] += 1
# Counting cores on Solaris can be complicated.
# https://blogs.oracle.com/mandalika/entry/solaris_show_me_the_cpu
# Treat 'processor_count' as physical sockets and 'processor_cores' as
# virtual CPUs visible to Solaris. Not a true count of cores for modern SPARC as
# these processors have: sockets -> cores -> threads/virtual CPU.
if len(sockets) > 0:
self.facts['processor_count'] = len(sockets)
self.facts['processor_cores'] = reduce(lambda x, y: x + y, sockets.values())
else:
self.facts['processor_cores'] = 'NA'
self.facts['processor_count'] = len(self.facts['processor'])
def get_memory_facts(self):
rc, out, err = module.run_command(["/usr/sbin/prtconf"])
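The socket/core tally above groups 'kstat cpu_info' records by chip_id, so processor_count ends up as the number of distinct physical chips and processor_cores as the total number of virtual CPUs seen. A toy illustration with made-up chip_id values:

chip_ids = ['0', '0', '1', '1']   # four virtual CPUs spread over two physical chips

sockets = {}
for physid in chip_ids:
    sockets[physid] = sockets.get(physid, 0) + 1

processor_count = len(sockets)           # 2 physical sockets
processor_cores = sum(sockets.values())  # 4 virtual CPUs visible to the OS
print(processor_count, processor_cores)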
@ -1347,7 +1372,10 @@ class GenericBsdIfconfigNetwork(Network):
all_ipv4_addresses = [],
all_ipv6_addresses = [],
)
rc, out, err = module.run_command([ifconfig_path])
# FreeBSD, DragonflyBSD, NetBSD, OpenBSD and OS X all implicitly add '-a'
# when running the command 'ifconfig'.
# Solaris must explicitly run the command 'ifconfig -a'.
rc, out, err = module.run_command([ifconfig_path, '-a'])
for line in out.split('\n'):
@ -1416,6 +1444,8 @@ class GenericBsdIfconfigNetwork(Network):
def parse_inet_line(self, words, current_if, ips):
address = {'address': words[1]}
# deal with hex netmask
if re.match('([0-9a-f]){8}', words[3]) and len(words[3]) == 8:
words[3] = '0x' + words[3]
if words[3].startswith('0x'):
address['netmask'] = socket.inet_ntoa(struct.pack('!L', int(words[3], base=16)))
else:
@ -1441,7 +1471,8 @@ class GenericBsdIfconfigNetwork(Network):
address['prefix'] = words[3]
if (len(words) >= 6) and (words[4] == 'scopeid'):
address['scope'] = words[5]
if not address['address'] == '::1' and not address['address'] == 'fe80::1%lo0':
localhost6 = ['::1', '::1/128', 'fe80::1%lo0']
if address['address'] not in localhost6:
ips['all_ipv6_addresses'].append(address['address'])
current_if['ipv6'].append(address)
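The hex-netmask branch a few lines above turns a value such as ffffff00 (Solaris prints it without the 0x prefix) into dotted-quad form by packing the integer as a 32-bit big-endian value. In isolation:

import socket
import struct

netmask = 'ffffff00'              # hex netmask as printed by Solaris ifconfig
if not netmask.startswith('0x'):
    netmask = '0x' + netmask
print(socket.inet_ntoa(struct.pack('!L', int(netmask, base=16))))   # 255.255.255.0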
@ -1492,7 +1523,7 @@ class DarwinNetwork(GenericBsdIfconfigNetwork, Network):
class FreeBSDNetwork(GenericBsdIfconfigNetwork, Network):
"""
This is the FreeBSD Network Class.
It uses the GenericBsdIfconfigNetwork unchanged
It uses the GenericBsdIfconfigNetwork unchanged.
"""
platform = 'FreeBSD'
@ -1507,6 +1538,93 @@ class OpenBSDNetwork(GenericBsdIfconfigNetwork, Network):
def parse_lladdr_line(self, words, current_if, ips):
current_if['macaddress'] = words[1]
class SunOSNetwork(GenericBsdIfconfigNetwork, Network):
"""
This is the SunOS Network Class.
It uses the GenericBsdIfconfigNetwork.
Solaris can have different FLAGS and MTU for IPv4 and IPv6 on the same interface
so these facts have been moved inside the 'ipv4' and 'ipv6' lists.
"""
platform = 'SunOS'
# Solaris 'ifconfig -a' will print interfaces twice, once for IPv4 and again for IPv6.
# MTU and FLAGS also may differ between IPv4 and IPv6 on the same interface.
# 'parse_interface_line()' checks for previously seen interfaces before defining
# 'current_if' so that IPv6 facts don't clobber IPv4 facts (or vice versa).
def get_interfaces_info(self, ifconfig_path):
interfaces = {}
current_if = {}
ips = dict(
all_ipv4_addresses = [],
all_ipv6_addresses = [],
)
rc, out, err = module.run_command([ifconfig_path, '-a'])
for line in out.split('\n'):
if line:
words = line.split()
if re.match('^\S', line) and len(words) > 3:
current_if = self.parse_interface_line(words, current_if, interfaces)
interfaces[ current_if['device'] ] = current_if
elif words[0].startswith('options='):
self.parse_options_line(words, current_if, ips)
elif words[0] == 'nd6':
self.parse_nd6_line(words, current_if, ips)
elif words[0] == 'ether':
self.parse_ether_line(words, current_if, ips)
elif words[0] == 'media:':
self.parse_media_line(words, current_if, ips)
elif words[0] == 'status:':
self.parse_status_line(words, current_if, ips)
elif words[0] == 'lladdr':
self.parse_lladdr_line(words, current_if, ips)
elif words[0] == 'inet':
self.parse_inet_line(words, current_if, ips)
elif words[0] == 'inet6':
self.parse_inet6_line(words, current_if, ips)
else:
self.parse_unknown_line(words, current_if, ips)
# 'parse_interface_line' and 'parse_inet*_line' leave two dicts in the
# ipv4/ipv6 lists which is ugly and hard to read.
# This quick hack merges the dictionaries. Purely cosmetic.
for iface in interfaces:
for v in 'ipv4', 'ipv6':
combined_facts = {}
for facts in interfaces[iface][v]:
combined_facts.update(facts)
if len(combined_facts.keys()) > 0:
interfaces[iface][v] = [combined_facts]
return interfaces, ips
def parse_interface_line(self, words, current_if, interfaces):
device = words[0][0:-1]
if device not in interfaces.keys():
current_if = {'device': device, 'ipv4': [], 'ipv6': [], 'type': 'unknown'}
else:
current_if = interfaces[device]
flags = self.get_options(words[1])
if 'IPv4' in flags:
v = 'ipv4'
if 'IPv6' in flags:
v = 'ipv6'
current_if[v].append({'flags': flags, 'mtu': words[3]})
current_if['macaddress'] = 'unknown' # will be overwritten later
return current_if
# Solaris displays single digit octets in MAC addresses e.g. 0:1:2:d:e:f
# Add leading zero to each octet where needed.
def parse_ether_line(self, words, current_if, ips):
macaddress = ''
for octet in words[1].split(':'):
octet = ('0' + octet)[-2:None]
macaddress += (octet + ':')
current_if['macaddress'] = macaddress[0:-1]
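The octet padding above normalizes Solaris MAC output such as 0:3:ba:2:9:1 into the usual zero-padded form. In isolation:

raw = '0:3:ba:2:9:1'
macaddress = ''
for octet in raw.split(':'):
    macaddress += ('0' + octet)[-2:] + ':'   # keep the last two characters, zero-padded
print(macaddress[0:-1])                      # 00:03:ba:02:09:01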
class Virtual(Facts):
"""
This is a generic Virtual subclass of Facts. This should be further

View file

@ -48,7 +48,7 @@ options:
description:
- if C(checks)=I(none) no smart/facultative checks will be made
- if C(checks)=I(before) some checks are performed before any update (i.e. is the sysctl key writable?)
- if C(checks)=I(after) some checks performed after an update (ie. does kernel give back the setted value ?)
- if C(checks)=I(after) some checks are performed after an update (i.e. does the kernel give back the set value?)
- if C(checks)=I(both) all the smart checks I(before and after) are performed
choices: [ "none", "before", "after", "both" ]
default: both
@ -139,6 +139,15 @@ def sysctl_args_collapse(**sysctl_args):
# ==============================================================
def sysctl_check(current_step, **sysctl_args):
# no smart checks at this step ?
if sysctl_args['checks'] == 'none':
return 0, ''
if current_step == 'before' and sysctl_args['checks'] not in ['before', 'both']:
return 0, ''
if current_step == 'after' and sysctl_args['checks'] not in ['after', 'both']:
return 0, ''
# checking coherence
if sysctl_args['state'] == 'absent' and sysctl_args['value'] is not None:
return 1, 'value=x must not be supplied when state=absent'
@ -154,14 +163,6 @@ def sysctl_check(current_step, **sysctl_args):
return 1, 'key_path is not an existing file, key seems invalid'
if not os.access(sysctl_args['key_path'], os.R_OK):
return 1, 'key_path is not a readable file, key seems to be uncheckable'
# no smart checks at this step ?
if sysctl_args['checks'] == 'none':
return 0, ''
if current_step == 'before' and sysctl_args['checks'] not in ['before', 'both']:
return 0, ''
if current_step == 'after' and sysctl_args['checks'] not in ['after', 'both']:
return 0, ''
# checks before
if current_step == 'before' and sysctl_args['checks'] in ['before', 'both']:
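With the guards reordered as above, checks=none (or a step mismatch) returns before the key_path existence test, so a key that does not yet exist on disk no longer trips those checks. A hypothetical task relying on that behaviour (the key name and value are placeholders):

sysctl: name=net.hypothetical.new_key value=1 state=present checks=none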

View file

@ -179,8 +179,8 @@ except:
class User(object):
"""
This is a generic User manipulation class that is subclassed
based on platform.
based on platform.
A subclass may wish to override the following action methods:-
- create_user()
- remove_user()
@ -229,7 +229,7 @@ class User(object):
# select whether we dump additional debug info through syslog
self.syslogging = False
def execute_command(self,cmd):
def execute_command(self, cmd):
if self.syslogging:
syslog.openlog('ansible-%s' % os.path.basename(__file__))
syslog.syslog(syslog.LOG_NOTICE, 'Command %s' % '|'.join(cmd))
@ -263,12 +263,9 @@ class User(object):
cmd.append(self.group)
if self.groups is not None:
if self.groups != '':
for g in self.groups.split(','):
if not self.group_exists(g):
self.module.fail_json(msg="Group %s does not exist" % (g))
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(self.groups)
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
@ -326,12 +323,8 @@ class User(object):
if current_groups and not self.append:
groups_need_mod = True
else:
groups = self.groups.split(',')
for g in groups:
if not self.group_exists(g):
self.module.fail_json(msg="Group %s does not exist" % (g))
group_diff = set(sorted(current_groups)).symmetric_difference(set(sorted(groups)))
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
if group_diff:
if self.append:
@ -391,11 +384,23 @@ class User(object):
else:
return list(grp.getgrnam(group))
def get_groups_set(self):
if self.groups is None:
return None
info = self.user_info()
groups = set(self.groups.split(','))
for g in set(groups):
if not self.group_exists(g):
self.module.fail_json(msg="Group %s does not exist" % (g))
if info and self.group_info(g)[2] == info[3]:
groups.remove(g)
return groups
def user_group_membership(self):
groups = []
info = self.get_pwd_info()
for group in grp.getgrall():
if self.name in group.gr_mem and info[3] == group.gr_gid:
if self.name in group.gr_mem and not info[3] == group.gr_gid:
groups.append(group[0])
return groups
@ -574,11 +579,9 @@ class FreeBsdUser(User):
cmd.append(self.group)
if self.groups is not None:
for g in self.groups.split(','):
if not self.group_exists(g):
self.module.fail_json(msg="Group %s does not exist" % (g))
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(self.groups)
cmd.append(','.join(groups))
if self.createhome:
cmd.append('-m')
@ -641,15 +644,12 @@ class FreeBsdUser(User):
if self.shell is not None and info[6] != self.shell:
cmd.append('-s')
cmd.append(self.shell)
if self.groups is not None:
current_groups = self.user_group_membership()
groups = self.groups.split(',')
for g in groups:
if not self.group_exists(g):
self.module.fail_json(msg="Group %s does not exist" % (g))
groups = self.get_groups_set()
group_diff = set(sorted(current_groups)).symmetric_difference(set(sorted(groups)))
group_diff = set(current_groups).symmetric_difference(groups)
groups_need_mod = False
if group_diff:
@ -665,7 +665,7 @@ class FreeBsdUser(User):
cmd.append('-G')
new_groups = groups
if self.append:
new_groups.extend(current_groups)
new_groups.extend(current_groups)
cmd.append(','.join(new_groups))
# modify the user if cmd will do anything
@ -696,7 +696,7 @@ class SunOS(User):
this class and the generic user class is that Solaris-type distros
don't support the concept of a "system" account and we need to
edit the /etc/shadow file manually to set a password. (Ugh)
This overrides the following methods from the generic class:-
- create_user()
- remove_user()
@ -732,11 +732,9 @@ class SunOS(User):
cmd.append(self.group)
if self.groups is not None:
for g in self.groups.split(','):
if not self.group_exists(g):
self.module.fail_json(msg="Group %s does not exist" % (g))
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(self.groups)
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
@ -771,7 +769,7 @@ class SunOS(User):
fields[1] = self.password
line = ':'.join(fields)
lines.append('%s\n' % line)
open(self.SHADOWFILE, 'w+').writelines(lines)
open(self.SHADOWFILE, 'w+').writelines(lines)
except Exception, err:
self.module.fail_json(msg="failed to update users password: %s" % str(err))
@ -799,12 +797,8 @@ class SunOS(User):
if self.groups is not None:
current_groups = self.user_group_membership()
groups = self.groups.split(',')
for g in groups:
if not self.group_exists(g):
self.module.fail_json(msg="Group %s does not exist" % (g))
group_diff = set(sorted(current_groups)).symmetric_difference(set(sorted(groups)))
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
groups_need_mod = False
if group_diff:
@ -820,7 +814,7 @@ class SunOS(User):
cmd.append('-G')
new_groups = groups
if self.append:
new_groups.extend(current_groups)
new_groups.extend(current_groups)
cmd.append(','.join(new_groups))
if self.comment is not None and info[4] != self.comment:
@ -856,7 +850,7 @@ class SunOS(User):
fields[1] = self.password
line = ':'.join(fields)
lines.append('%s\n' % line)
open(self.SHADOWFILE, 'w+').writelines(lines)
open(self.SHADOWFILE, 'w+').writelines(lines)
rc = 0
except Exception, err:
self.module.fail_json(msg="failed to update users password: %s" % str(err))
@ -901,11 +895,9 @@ class AIX(User):
cmd.append(self.group)
if self.groups is not None:
for g in self.groups.split(','):
if not self.group_exists(g):
self.module.fail_json(msg="Group %s does not exist" % (g))
groups = self.get_groups_set()
cmd.append('-G')
cmd.append(self.groups)
cmd.append(','.join(groups))
if self.comment is not None:
cmd.append('-c')
@ -954,12 +946,8 @@ class AIX(User):
if self.groups is not None:
current_groups = self.user_group_membership()
groups = self.groups.split(',')
for g in groups:
if not self.group_exists(g):
self.module.fail_json(msg="Group %s does not exist" % (g))
group_diff = set(sorted(current_groups)).symmetric_difference(set(sorted(groups)))
groups = self.get_groups_set()
group_diff = set(current_groups).symmetric_difference(groups)
groups_need_mod = False
if group_diff:
@ -1113,7 +1101,6 @@ def main():
result['comment'] = info[4]
result['home'] = info[5]
result['shell'] = info[6]
groups = user.user_group_membership()
result['uid'] = info[2]
if user.groups is not None:
result['groups'] = user.groups
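get_groups_set() centralizes the comma-splitting, group-existence check, and dropping of the user's primary group; the callers then compare it against current membership with a symmetric difference, so a change in either direction triggers a modification. A toy example of that comparison with made-up group names:

current_groups = set(['wheel', 'docker'])
wanted_groups = set(['wheel', 'developers'])

group_diff = current_groups.symmetric_difference(wanted_groups)
print(sorted(group_diff))   # ['developers', 'docker'] -> membership needs modification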

View file

@ -50,12 +50,12 @@ options:
description:
- The atime property.
required: False
choices: [on,off]
choices: ['on','off']
canmount:
description:
- The canmount property.
required: False
choices: [on,off,noauto]
choices: ['on','off','noauto']
casesensitivity:
description:
- The casesensitivity property.
@ -65,12 +65,12 @@ options:
description:
- The checksum property.
required: False
choices: [on,off,fletcher2,fletcher4,sha256]
choices: ['on','off',fletcher2,fletcher4,sha256]
compression:
description:
- The compression property.
required: False
choices: [on,off,lzjb,gzip,gzip-1,gzip-2,gzip-3,gzip-4,gzip-5,gzip-6,gzip-7,gzip-8,gzip-9]
choices: ['on','off',lzjb,gzip,gzip-1,gzip-2,gzip-3,gzip-4,gzip-5,gzip-6,gzip-7,gzip-8,gzip-9]
copies:
description:
- The copies property.
@ -80,22 +80,22 @@ options:
description:
- The dedup property.
required: False
choices: [on,off]
choices: ['on','off']
devices:
description:
- The devices property.
required: False
choices: [on,off]
choices: ['on','off']
exec:
description:
- The exec property.
required: False
choices: [on,off]
choices: ['on','off']
jailed:
description:
- The jailed property.
required: False
choices: [on,off]
choices: ['on','off']
logbias:
description:
- The logbias property.
@ -109,7 +109,7 @@ options:
description:
- The nbmand property.
required: False
choices: [on,off]
choices: ['on','off']
normalization:
description:
- The normalization property.
@ -128,7 +128,7 @@ options:
description:
- The readonly property.
required: False
choices: [on,off]
choices: ['on','off']
recordsize:
description:
- The recordsize property.
@ -154,12 +154,12 @@ options:
description:
- The setuid property.
required: False
choices: [on,off]
choices: ['on','off']
shareiscsi:
description:
- The shareiscsi property.
required: False
choices: [on,off]
choices: ['on','off']
sharenfs:
description:
- The sharenfs property.
@ -177,12 +177,12 @@ options:
description:
- The sync property.
required: False
choices: [on,off]
choices: ['on','off']
utf8only:
description:
- The utf8only property.
required: False
choices: [on,off]
choices: ['on','off']
volsize:
description:
- The volsize property.
@ -195,17 +195,17 @@ options:
description:
- The vscan property.
required: False
choices: [on,off]
choices: ['on','off']
xattr:
description:
- The xattr property.
required: False
choices: [on,off]
choices: ['on','off']
zoned:
description:
- The zoned property.
required: False
choices: [on,off]
choices: ['on','off']
examples:
- code: zfs name=rpool/myfs state=present
description: Create a new file system called myfs in pool rpool
@ -321,6 +321,8 @@ class Zfs(object):
return module.run_command(cmd)
def main():
# FIXME: should use dict() constructor like other modules, required=False is default
module = AnsibleModule(
argument_spec = {
'name': {'required': True},
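Quoting 'on' and 'off' in the choices lists avoids YAML's boolean coercion: bare on/off parse as True/False, so the documentation would show booleans instead of the literal strings the property expects. A quick demonstration, assuming PyYAML is available:

import yaml

print(yaml.safe_load('atime: on'))      # {'atime': True}  -- bare on/off become booleans
print(yaml.safe_load("atime: 'on'"))    # {'atime': 'on'}  -- quoting keeps the literal string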

View file

@ -42,26 +42,29 @@ options:
this number of minutes before turning itself off.
required: false
default: 30
# WARNING: very careful when moving space around, below
examples:
- code: |
- hosts: devservers
gather_facts: false
connection: ssh
sudo: yes
tasks:
- action: fireball
- hosts: devservers
connection: fireball
tasks:
- command: /usr/bin/anything
description: "This example playbook has two plays: the first launches I(fireball) mode on all hosts via SSH, and the second actually starts using I(fireball) node for subsequent management over the fireball interface"
notes:
- See the advanced playbooks chapter for more about using fireball mode.
requirements: [ "zmq", "keyczar" ]
author: Michael DeHaan
'''
EXAMPLES = '''
# This example playbook has two plays: the first launches 'fireball' mode on all hosts via SSH, and
# the second actually starts using it for subsequent management over the fireball connection
- hosts: devservers
gather_facts: false
connection: ssh
sudo: yes
tasks:
- action: fireball
- hosts: devservers
connection: fireball
tasks:
- command: /usr/bin/anything
'''
import os
import sys
import shutil

View file

@ -36,16 +36,16 @@ options:
required: true
default: null
version_added: "1.2"
examples:
- description: "Example setting host facts using key=value pairs"
code: |
action: set_fact one_fact="something" other_fact="{{ local_var * 2 }}"'
- description: "Example setting host facts using complex arguments"
code: |
action: set_fact
args:
one_fact: something
other_fact: "{{ local_var * 2 }}"
notes:
- You can set play variables using the C(set_var) module.
'''
EXAMPLES = '''
# Example setting host facts using key=value pairs
set_fact: one_fact="something" other_fact="{{ local_var * 2 }}"
# Example setting host facts using complex arguments
set_fact:
one_fact: something
other_fact: "{{ local_var * 2 }}"
'''

View file

@ -60,14 +60,16 @@ options:
poll for the port being open or closed.
choices: [ "started", "stopped" ]
default: "started"
examples:
- code: "wait_for: port=8000 delay=10"
description: "Example from Ansible Playbooks"
notes: []
requirements: []
author: Jeroen Hoekx
'''
EXAMPLES = '''
# wait 300 seconds for port 8000 to become open on the host, don't start checking for 10 seconds
wait_for: port=8000 delay=10
'''
def main():
module = AnsibleModule(

View file

@ -38,13 +38,15 @@ options:
required: true
default: null
choices: [ "present", "started", "stopped", "restarted" ]
examples:
- code: "supervisorctl: name=my_app state=started"
description: Manage the state of program I(my_app) to be in I(started) state.
requirements: [ ]
author: Matt Wright
'''
EXAMPLES = '''
# Manage the state of program to be in 'started' state.
supervisorctl: name=my_app state=started
'''
def main():
arg_spec = dict(
name=dict(required=True),

View file

@ -12,6 +12,10 @@ url="http://ansible.cc"
license=('GPL3')
depends=('python2' 'python2-paramiko' 'python2-jinja' 'python2-yaml')
makedepends=('git' 'asciidoc' 'fakeroot')
optdepends=('python2-pyzmq: needed for fireball mode'
'python2-pyasn1: needed for fireball mode'
'python2-crypto: needed for fireball mode'
'python2-keyczar: needed for fireball mode')
conflicts=('ansible')
source=("$pkgname::git://github.com/ansible/ansible.git"
"python-binary.diff")

View file

@ -302,4 +302,4 @@ class TestInventory(unittest.TestCase):
assert vars == {'inventory_hostname': 'zeus',
'inventory_hostname_short': 'zeus',
'group_names': ['greek', 'major-god', 'ungrouped'],
'var_a': '1'}
'var_a': '1#2'}

View file

@ -1,5 +1,5 @@
[major-god] # group with inline comments
zeus var_a=1 # host with inline comments
zeus var_a="1#2" # host with inline comments and "#" in the var string
# A comment
thor