Fix inventory cache interface (#50446)
* Replace InventoryFileCacheModule with a better developer-facing interface:
  - Use the new interface for inventory plugins, with backwards compatibility
  - Auto-update the backing cache plugin if the cache has changed after parsing the inventory
* Update CacheModules to use the config system, and add a deprecation warning if they are imported directly rather than via cache_loader
* Fix foreman inventory caching
* Add tests
* Add an integration test to check that fact caching works normally with cache plugins using ansible.constants, and that inventory caching provides a helpful error for non-compatible cache plugins
* Add developer documentation for inventory and cache plugins
* Add user documentation for inventory caching
* Add deprecation docs
* Apply suggestions from docs review
* Add changelog
This commit is contained in:
parent
831f068f98
commit
9687879840
24 changed files with 831 additions and 86 deletions
9
changelogs/fragments/restructure_inventory_cache.yaml
Normal file
@ -0,0 +1,9 @@
minor_changes:
  - inventory plugins - Inventory plugins that support caching can now use any cache plugin
    shipped with Ansible.
deprecated_features:
  - inventory plugins - The use of self.cache by inventory plugins is deprecated and will be removed
    in 2.12. Inventory plugins should use self._cache as a dictionary to store results.
  - cache plugins - Importing cache plugins directly is deprecated and will be removed in 2.12.
    Cache plugins should use the cache_loader instead so cache options can be reconciled via
    the configuration system rather than constants.
|
@ -189,9 +189,90 @@ To facilitate this there are a few of helper functions used in the example below
The specifics will vary depending on the API and the structure returned. But one thing to keep in mind: if the inventory source is invalid or any other issue crops up, you should ``raise AnsibleParserError`` to let Ansible know that the source was invalid or the process failed.

For examples on how to implement an inventory plugin, see the source code here:
`lib/ansible/plugins/inventory <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/inventory>`_.
.. _inventory_plugin_caching:

inventory cache
^^^^^^^^^^^^^^^

Extend the inventory plugin documentation with the inventory_cache documentation fragment and use the Cacheable base class to have the caching system at your disposal.

.. code-block:: yaml

    extends_documentation_fragment:
      - inventory_cache

.. code-block:: python

    class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):

        NAME = 'myplugin'

Next, load the cache plugin specified by the user to read from and update the cache. If your inventory plugin uses YAML-based configuration files and the ``_read_config_data`` method, the cache plugin is loaded within that method. If your inventory plugin does not use ``_read_config_data``, you must load the cache explicitly with ``load_cache_plugin``.
.. code-block:: python

    NAME = 'myplugin'

    def parse(self, inventory, loader, path, cache=True):
        super(InventoryModule, self).parse(inventory, loader, path)

        self.load_cache_plugin()

Before using the cache, retrieve a unique cache key using the ``get_cache_key`` method. All inventory modules that use the cache need to do this, so that you don't read or overwrite other parts of the cache.
.. code-block:: python

    def parse(self, inventory, loader, path, cache=True):
        super(InventoryModule, self).parse(inventory, loader, path)

        self.load_cache_plugin()
        cache_key = self.get_cache_key(path)

Now that you've enabled caching, loaded the correct plugin, and retrieved a unique cache key, you can set up the flow of data between the cache and your inventory using the ``cache`` parameter of the ``parse`` method. This value comes from the inventory manager and indicates whether the inventory is being refreshed (such as by ``--flush-cache`` or the meta task ``refresh_inventory``). Although the cache shouldn't be used to populate the inventory when it is being refreshed, the cache should still be updated with the new inventory if the user has enabled caching. You can use ``self._cache`` like a dictionary. The following pattern allows refreshing the inventory to work in conjunction with caching.
.. code-block:: python

    def parse(self, inventory, loader, path, cache=True):
        super(InventoryModule, self).parse(inventory, loader, path)

        self.load_cache_plugin()
        cache_key = self.get_cache_key(path)

        # cache may be True or False at this point to indicate if the inventory is being refreshed
        # get the user's cache option too to see if we should save the cache if it is changing
        user_cache_setting = self.get_option('cache')

        # read if the user has caching enabled and the cache isn't being refreshed
        attempt_to_read_cache = user_cache_setting and cache
        # update if the user has caching enabled and the cache is being refreshed; update this value to True if the cache has expired below
        cache_needs_update = user_cache_setting and not cache

        # attempt to read the cache if inventory isn't being refreshed and the user has caching enabled
        if attempt_to_read_cache:
            try:
                results = self._cache[cache_key]
            except KeyError:
                # This occurs if the cache_key is not in the cache or if the cache_key expired, so the cache needs to be updated
                cache_needs_update = True

        if cache_needs_update:
            results = self.get_inventory()

            # set the cache
            self._cache[cache_key] = results

        self.populate(results)
After the ``parse`` method is complete, the contents of ``self._cache`` are used to set the cache plugin if the contents of the cache have changed.

You have three other cache methods available:
- ``set_cache_plugin`` forces the cache plugin to be set with the contents of ``self._cache`` before the ``parse`` method completes
- ``update_cache_if_changed`` sets the cache plugin only if ``self._cache`` has been modified before the ``parse`` method completes
- ``clear_cache`` deletes the keys in ``self._cache`` from your cache plugin
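As a toy model of the change-tracking that ``update_cache_if_changed`` performs (``ToyCache``, ``load``, and the plain dict backing store are invented for this example; they are not Ansible APIs), the cache is only written back when the working dictionary no longer matches what was read:

```python
# Toy model of "write back only if changed": a snapshot of what was read from
# the backing store is compared with the working dict before writing back.
class ToyCache:
    def __init__(self, backing):
        self._plugin = backing     # stands in for the real cache plugin
        self._cache = {}           # working copy, used like a dictionary
        self._retrieved = {}       # snapshot of what was read from the backing store

    def load(self, key):
        self._cache[key] = self._plugin[key]
        self._retrieved[key] = self._cache[key]

    def update_cache_if_changed(self):
        # write back only when the working copy differs from the snapshot
        if self._retrieved != self._cache:
            for key, value in self._cache.items():
                self._plugin[key] = value
            self._retrieved = dict(self._cache)

backing = {'inv_key': {'hosts': ['a']}}
cache = ToyCache(backing)
cache.load('inv_key')
cache.update_cache_if_changed()                  # unchanged: backing store untouched
cache._cache['inv_key'] = {'hosts': ['a', 'b']}
cache.update_cache_if_changed()                  # changed: backing store updated
```

This mirrors why the real helper is cheap to call unconditionally: it is a no-op when nothing changed.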
.. _inventory_source_common_format:

Inventory source common format
|
||||
|
|
|
@ -70,6 +70,7 @@ To define configurable options for your plugin, describe them in the ``DOCUMENTA
To access the configuration settings in your plugin, use ``self.get_option(<option_name>)``. For most plugin types, the controller pre-populates the settings. If you need to populate settings explicitly, use a ``self.set_options()`` call.
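As a minimal stand-in for that relationship (``FakePlugin`` is invented for this example and is not the real ``AnsiblePlugin`` base class), options populated by ``set_options()`` are what ``get_option()`` reads:

```python
# Toy illustration: set_options() fills the option store that get_option()
# reads. The controller normally performs the set_options() step for you.
class FakePlugin:
    def __init__(self):
        self._options = {}

    def set_options(self, direct=None):
        # direct settings win; the controller usually pre-populates these
        self._options.update(direct or {})

    def get_option(self, option_name):
        return self._options[option_name]

plugin = FakePlugin()
plugin.set_options(direct={'cache': True, 'cache_timeout': 3600})
timeout = plugin.get_option('cache_timeout')
```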
Plugins that support embedded documentation (see :ref:`ansible-doc` for the list) must include well-formed doc strings to be considered for merge into the Ansible repo. If you inherit from a plugin, you must document the options it takes, either via a documentation fragment or as a copy. See :ref:`module_documenting` for more information on correct documentation. Thorough documentation is a good idea even if you're developing a plugin for local use.

Developing particular plugin types
|
||||
|
@ -144,6 +145,46 @@ the local time, returning the time delta in days, seconds and microseconds.
For practical examples of action plugins,
see the source code for the `action plugins included with Ansible Core <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/action>`_.

.. _developing_cache_plugins:

Cache plugins
-------------

Cache plugins store gathered facts and data retrieved by inventory plugins.

Import cache plugins using the cache_loader so you can use ``self.set_options()`` and ``self.get_option(<option_name>)``. If you import a cache plugin directly in the code base, you can only access options via ``ansible.constants``, and you break the cache plugin's ability to be used by an inventory plugin.

.. code-block:: python

    from ansible.plugins.loader import cache_loader
    [...]
    plugin = cache_loader.get('custom_cache', **cache_kwargs)

There are two base classes for cache plugins: ``BaseCacheModule`` for database-backed caches, and ``BaseFileCacheModule`` for file-backed caches.

To create a cache plugin, start by creating a new ``CacheModule`` class with the appropriate base class. If you're creating a plugin using an ``__init__`` method, you should initialize the base class with any provided args and kwargs to be compatible with inventory plugin cache options. The base class calls ``self.set_options(direct=kwargs)``. After the base class ``__init__`` method is called, use ``self.get_option(<option_name>)`` to access cache options.

New cache plugins should take the options ``_uri``, ``_prefix``, and ``_timeout`` to be consistent with existing cache plugins.

.. code-block:: python

    from ansible.plugins.cache import BaseCacheModule

    class CacheModule(BaseCacheModule):
        def __init__(self, *args, **kwargs):
            super(CacheModule, self).__init__(*args, **kwargs)
            self._connection = self.get_option('_uri')
            self._prefix = self.get_option('_prefix')
            self._timeout = self.get_option('_timeout')

If you use the ``BaseCacheModule``, you must implement the methods ``get``, ``contains``, ``keys``, ``set``, ``delete``, ``flush``, and ``copy``. The ``contains`` method should return a boolean that indicates whether the key exists and has not expired. Unlike file-based caches, the ``get`` method does not raise a KeyError if the cache has expired.
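A minimal sketch of that required interface, runnable outside Ansible (a real plugin would subclass ``ansible.plugins.cache.BaseCacheModule``; the expiry bookkeeping below is one plausible way to satisfy the ``contains`` contract, not the code of any shipped plugin):

```python
import copy
import time

# Standalone sketch of the seven required methods. Each entry stores its
# write time so contains()/keys() can report expiry, while get() itself does
# not enforce expiry, matching the behavior described above.
class CacheModule(object):
    def __init__(self, timeout=3600):
        self._timeout = timeout
        self._db = {}              # key -> (stored_at, value)

    def _expired(self, key):
        stored_at, _value = self._db[key]
        return self._timeout > 0 and (time.time() - stored_at) > self._timeout

    def get(self, key):
        # raises KeyError only for unknown keys, not expired ones
        return self._db[key][1]

    def set(self, key, value):
        self._db[key] = (time.time(), value)

    def contains(self, key):
        # True only if the key exists and has not expired
        return key in self._db and not self._expired(key)

    def keys(self):
        return [k for k in self._db if not self._expired(k)]

    def delete(self, key):
        del self._db[key]

    def flush(self):
        self._db = {}

    def copy(self):
        return copy.deepcopy(dict((k, v[1]) for k, v in self._db.items()))

cache = CacheModule(timeout=3600)
cache.set('host1', {'os': 'linux'})
```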
If you use the ``BaseFileCacheModule``, you must implement ``_load`` and ``_dump`` methods that will be called from the base class methods ``get`` and ``set``.

If your cache plugin stores JSON, use ``AnsibleJSONEncoder`` in the ``_dump`` or ``set`` method and ``AnsibleJSONDecoder`` in the ``_load`` or ``get`` method.
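For file-backed caches, here is a standalone sketch of the ``_load``/``_dump`` pair (a real plugin subclasses ``BaseFileCacheModule``, whose ``get`` and ``set`` call these, and uses ``AnsibleJSONEncoder``/``AnsibleJSONDecoder``; plain ``json`` stands in so the example runs by itself):

```python
import json
import os
import tempfile

# Sketch of the two methods a file-backed cache supplies: serialize a value
# to a file path, and read it back.
class JsonFileCache(object):
    def _load(self, filepath):
        with open(filepath) as f:
            return json.load(f)

    def _dump(self, value, filepath):
        with open(filepath, 'w') as f:
            json.dump(value, f, sort_keys=True, indent=4)

cache = JsonFileCache()
path = os.path.join(tempfile.mkdtemp(), 'ansible_inventory_demo')
cache._dump({'hosts': ['web1', 'web2']}, path)
roundtrip = cache._load(path)
```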
For example cache plugins, see the source code for the `cache plugins included with Ansible Core <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/cache>`_.

.. _developing_callbacks:

Callback plugins
|
||||
|
|
|
@ -110,6 +110,31 @@ Now the output of ``ansible-inventory -i demo.aws_ec2.yml --graph``:
If a host does not have the variables in the configuration above (that is, ``tags.Name``, ``tags``, ``private_ip_address``), the host will not be added to groups other than those that the inventory plugin creates, and the ``ansible_host`` host variable will not be modified.

If an inventory plugin supports caching, you can enable and set caching options for an individual YAML configuration source or for multiple inventory sources using environment variables or Ansible configuration files. If you enable caching for an inventory plugin without providing inventory-specific caching options, the inventory plugin will use fact-caching options. Here is an example of enabling caching for an individual YAML configuration file:

.. code-block:: yaml

    # demo.aws_ec2.yml
    plugin: aws_ec2
    cache: yes
    cache_plugin: jsonfile
    cache_timeout: 7200
    cache_connection: /tmp/aws_inventory
    cache_prefix: aws_ec2
Here is an example of setting inventory caching in an ``ansible.cfg`` file, with the cache plugin and timeout falling back to the fact-caching defaults:

.. code-block:: ini

    [defaults]
    fact_caching = jsonfile
    fact_caching_connection = /tmp/ansible_facts
    fact_caching_timeout = 3600

    [inventory]
    cache = yes
    cache_connection = /tmp/ansible_inventory
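The same settings can instead be supplied through environment variables. The variable names below come from the inventory_cache documentation fragment; ``ANSIBLE_INVENTORY_CACHE`` as the on/off toggle is assumed from that fragment rather than shown here:

```shell
# Enable and configure inventory caching via environment variables,
# equivalent to the [inventory] section settings in ansible.cfg.
export ANSIBLE_INVENTORY_CACHE=True
export ANSIBLE_INVENTORY_CACHE_PLUGIN=jsonfile
export ANSIBLE_INVENTORY_CACHE_TIMEOUT=3600
export ANSIBLE_INVENTORY_CACHE_CONNECTION=/tmp/ansible_inventory
```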
.. _inventory_plugin_list:

Plugin List
|
||||
|
|
|
@ -193,6 +193,19 @@ Deprecated
|
|||
removed in 2.12. If you need the old behaviour, switch to ``FactCache.first_order_merge()`` instead.

* Supporting file-backed caching via self.cache is deprecated and will be removed in Ansible 2.12. If you maintain an inventory plugin, update it to use ``self._cache`` as a dictionary. For implementation details, see the :ref:`developer guide on inventory plugins<inventory_plugin_caching>`.

* Importing cache plugins directly is deprecated and will be removed in Ansible 2.12. Use the cache_loader so direct options, environment variables, and other means of configuration can be reconciled using the config system rather than constants.

.. code-block:: python

    from ansible.plugins.loader import cache_loader
    cache = cache_loader.get('redis', **kwargs)

Modules
=======
|
||||
|
||||
|
@ -336,6 +349,8 @@ Plugins
* ``osx_say`` callback plugin was renamed into :ref:`say <say_callback>`.

* Inventory plugins now support caching via cache plugins. To start using a cache plugin with your inventory, see the section on caching in the :ref:`inventory guide<using_inventory>`. To port a custom cache plugin to be compatible with inventory, see the :ref:`developer guide on cache plugins<developing_cache_plugins>`.

Porting custom scripts
======================
|
||||
|
||||
|
|
|
@ -270,6 +270,8 @@ class InventoryManager(object):
|
|||
try:
|
||||
# FIXME in case plugin fails 1/2 way we have partial inventory
|
||||
plugin.parse(self._inventory, self._loader, source, cache=cache)
|
||||
if getattr(plugin, '_cache', None):
|
||||
plugin.update_cache_if_changed()
|
||||
parsed = True
|
||||
display.vvv('Parsed %s inventory source with %s plugin' % (source, plugin_name))
|
||||
break
|
||||
|
|
145
lib/ansible/plugins/cache/__init__.py
vendored
|
@ -18,6 +18,7 @@
|
|||
from __future__ import (absolute_import, division, print_function)
|
||||
__metaclass__ = type
|
||||
|
||||
import copy
|
||||
import os
|
||||
import time
|
||||
import errno
|
||||
|
@ -26,8 +27,9 @@ from abc import ABCMeta, abstractmethod
|
|||
from ansible import constants as C
|
||||
from ansible.errors import AnsibleError
|
||||
from ansible.module_utils.six import with_metaclass
|
||||
from ansible.module_utils._text import to_bytes
|
||||
from ansible.module_utils._text import to_bytes, to_text
|
||||
from ansible.module_utils.common._collections_compat import MutableMapping
|
||||
from ansible.plugins import AnsiblePlugin
|
||||
from ansible.plugins.loader import cache_loader
|
||||
from ansible.utils.display import Display
|
||||
from ansible.vars.fact_cache import FactCache as RealFactCache
|
||||
|
@ -51,11 +53,16 @@ class FactCache(RealFactCache):
|
|||
super(FactCache, self).__init__(*args, **kwargs)
|
||||
|
||||
|
||||
class BaseCacheModule(with_metaclass(ABCMeta, object)):
|
||||
class BaseCacheModule(AnsiblePlugin):
|
||||
|
||||
# Backwards compat only. Just import the global display instead
|
||||
_display = display
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
self._load_name = self.__module__.split('.')[-1]
|
||||
super(BaseCacheModule, self).__init__()
|
||||
self.set_options(var_options=args, direct=kwargs)
|
||||
|
||||
@abstractmethod
|
||||
def get(self, key):
|
||||
pass
|
||||
|
@ -91,11 +98,15 @@ class BaseFileCacheModule(BaseCacheModule):
|
|||
"""
|
||||
def __init__(self, *args, **kwargs):
|
||||
|
||||
try:
|
||||
super(BaseFileCacheModule, self).__init__(*args, **kwargs)
|
||||
self._cache_dir = self._get_cache_connection(self.get_option('_uri'))
|
||||
self._timeout = float(self.get_option('_timeout'))
|
||||
except KeyError:
|
||||
self._cache_dir = self._get_cache_connection(C.CACHE_PLUGIN_CONNECTION)
|
||||
self._timeout = float(C.CACHE_PLUGIN_TIMEOUT)
|
||||
self.plugin_name = self.__module__.split('.')[-1]
|
||||
self._timeout = float(C.CACHE_PLUGIN_TIMEOUT)
|
||||
self._cache = {}
|
||||
self._cache_dir = self._get_cache_connection(C.CACHE_PLUGIN_CONNECTION)
|
||||
self._set_inventory_cache_override(**kwargs)
|
||||
self.validate_cache_connection()
|
||||
|
||||
def _get_cache_connection(self, source):
|
||||
|
@ -105,12 +116,6 @@ class BaseFileCacheModule(BaseCacheModule):
|
|||
except TypeError:
|
||||
pass
|
||||
|
||||
def _set_inventory_cache_override(self, **kwargs):
|
||||
if kwargs.get('cache_timeout'):
|
||||
self._timeout = kwargs.get('cache_timeout')
|
||||
if kwargs.get('cache_connection'):
|
||||
self._cache_dir = self._get_cache_connection(kwargs.get('cache_connection'))
|
||||
|
||||
def validate_cache_connection(self):
|
||||
if not self._cache_dir:
|
||||
raise AnsibleError("error, '%s' cache plugin requires the 'fact_caching_connection' config option "
|
||||
|
@ -262,50 +267,96 @@ class BaseFileCacheModule(BaseCacheModule):
|
|||
pass
|
||||
|
||||
|
||||
class InventoryFileCacheModule(BaseFileCacheModule):
|
||||
class CachePluginAdjudicator(MutableMapping):
|
||||
"""
|
||||
A caching module backed by file based storage.
|
||||
Intermediary between a cache dictionary and a CacheModule
|
||||
"""
|
||||
def __init__(self, plugin_name, timeout, cache_dir):
|
||||
|
||||
self.plugin_name = plugin_name
|
||||
self._timeout = timeout
|
||||
def __init__(self, plugin_name='memory', **kwargs):
|
||||
self._cache = {}
|
||||
self._cache_dir = self._get_cache_connection(cache_dir)
|
||||
self.validate_cache_connection()
|
||||
self._plugin = self.get_plugin(plugin_name)
|
||||
self._retrieved = {}
|
||||
|
||||
def validate_cache_connection(self):
|
||||
try:
|
||||
super(InventoryFileCacheModule, self).validate_cache_connection()
|
||||
except AnsibleError:
|
||||
cache_connection_set = False
|
||||
else:
|
||||
cache_connection_set = True
|
||||
self._plugin = cache_loader.get(plugin_name, **kwargs)
|
||||
if not self._plugin:
|
||||
raise AnsibleError('Unable to load the cache plugin (%s).' % plugin_name)
|
||||
|
||||
if not cache_connection_set:
|
||||
raise AnsibleError("error, '%s' inventory cache plugin requires the one of the following to be set:\n"
|
||||
"ansible.cfg:\n[default]: fact_caching_connection,\n[inventory]: cache_connection;\n"
|
||||
"Environment:\nANSIBLE_INVENTORY_CACHE_CONNECTION,\nANSIBLE_CACHE_PLUGIN_CONNECTION."
|
||||
"to be set to a writeable directory path" % self.plugin_name)
|
||||
self._plugin_name = plugin_name
|
||||
|
||||
def get(self, cache_key):
|
||||
def update_cache_if_changed(self):
|
||||
if self._retrieved != self._cache:
|
||||
self.set_cache()
|
||||
|
||||
if not self.contains(cache_key):
|
||||
# Check if cache file exists
|
||||
raise KeyError
|
||||
def set_cache(self):
|
||||
for top_level_cache_key in self._cache.keys():
|
||||
self._plugin.set(top_level_cache_key, self._cache[top_level_cache_key])
|
||||
self._retrieved = copy.deepcopy(self._cache)
|
||||
|
||||
return super(InventoryFileCacheModule, self).get(cache_key)
|
||||
def load_whole_cache(self):
|
||||
for key in self._plugin.keys():
|
||||
self._cache[key] = self._plugin.get(key)
|
||||
|
||||
def get_plugin(self, plugin_name):
|
||||
plugin = cache_loader.get(plugin_name, cache_connection=self._cache_dir, cache_timeout=self._timeout)
|
||||
if not plugin:
|
||||
raise AnsibleError('Unable to load the facts cache plugin (%s).' % (plugin_name))
|
||||
def __repr__(self):
|
||||
return to_text(self._cache)
|
||||
|
||||
def __iter__(self):
|
||||
return iter(self.keys())
|
||||
|
||||
def __len__(self):
|
||||
return len(self.keys())
|
||||
|
||||
def _do_load_key(self, key):
|
||||
load = False
|
||||
if key not in self._cache and key not in self._retrieved and self._plugin_name != 'memory':
|
||||
if isinstance(self._plugin, BaseFileCacheModule):
|
||||
load = True
|
||||
elif not isinstance(self._plugin, BaseFileCacheModule) and self._plugin.contains(key):
|
||||
# Database-backed caches don't raise KeyError for expired keys, so only load if the key is valid by checking contains()
|
||||
load = True
|
||||
return load
|
||||
|
||||
def __getitem__(self, key):
|
||||
if self._do_load_key(key):
|
||||
try:
|
||||
self._cache[key] = self._plugin.get(key)
|
||||
except KeyError:
|
||||
pass
|
||||
else:
|
||||
self._retrieved[key] = self._cache[key]
|
||||
return self._cache[key]
|
||||
|
||||
def get(self, key, default=None):
|
||||
if self._do_load_key(key):
|
||||
try:
|
||||
self._cache[key] = self._plugin.get(key)
|
||||
except KeyError as e:
|
||||
pass
|
||||
else:
|
||||
self._retrieved[key] = self._cache[key]
|
||||
return self._cache.get(key, default)
|
||||
|
||||
def items(self):
|
||||
return self._cache.items()
|
||||
|
||||
def values(self):
|
||||
return self._cache.values()
|
||||
|
||||
def keys(self):
|
||||
return self._cache.keys()
|
||||
|
||||
def pop(self, key, *args):
|
||||
if args:
|
||||
return self._cache.pop(key, args[0])
|
||||
return self._cache.pop(key)
|
||||
|
||||
def __delitem__(self, key):
|
||||
del self._cache[key]
|
||||
|
||||
def __setitem__(self, key, value):
|
||||
self._cache[key] = value
|
||||
|
||||
def flush(self):
|
||||
for key in self._cache.keys():
|
||||
self._plugin.delete(key)
|
||||
self._cache = {}
|
||||
return plugin
|
||||
|
||||
def _load(self, path):
|
||||
return self._plugin._load(path)
|
||||
|
||||
def _dump(self, value, path):
|
||||
return self._plugin._dump(value, path)
|
||||
def update(self, value):
|
||||
self._cache.update(value)
|
||||
|
|
25
lib/ansible/plugins/cache/memcached.py
vendored
|
@ -26,6 +26,7 @@ DOCUMENTATION = '''
|
|||
section: defaults
|
||||
_prefix:
|
||||
description: User defined prefix to use when creating the DB entries
|
||||
default: ansible_facts
|
||||
env:
|
||||
- name: ANSIBLE_CACHE_PLUGIN_PREFIX
|
||||
ini:
|
||||
|
@ -52,12 +53,15 @@ from ansible import constants as C
|
|||
from ansible.errors import AnsibleError
|
||||
from ansible.module_utils.common._collections_compat import MutableSet
|
||||
from ansible.plugins.cache import BaseCacheModule
|
||||
from ansible.utils.display import Display
|
||||
|
||||
try:
|
||||
import memcache
|
||||
except ImportError:
|
||||
raise AnsibleError("python-memcached is required for the memcached fact cache")
|
||||
|
||||
display = Display()
|
||||
|
||||
|
||||
class ProxyClientPool(object):
|
||||
"""
|
||||
|
@ -166,13 +170,22 @@ class CacheModuleKeys(MutableSet):
|
|||
class CacheModule(BaseCacheModule):
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
if C.CACHE_PLUGIN_CONNECTION:
|
||||
connection = C.CACHE_PLUGIN_CONNECTION.split(',')
|
||||
else:
|
||||
connection = ['127.0.0.1:11211']
|
||||
connection = ['127.0.0.1:11211']
|
||||
|
||||
try:
|
||||
super(CacheModule, self).__init__(*args, **kwargs)
|
||||
if self.get_option('_uri'):
|
||||
connection = self.get_option('_uri')
|
||||
self._timeout = self.get_option('_timeout')
|
||||
self._prefix = self.get_option('_prefix')
|
||||
except KeyError:
|
||||
display.deprecated('Rather than importing CacheModules directly, '
|
||||
'use ansible.plugins.loader.cache_loader', version='2.12')
|
||||
if C.CACHE_PLUGIN_CONNECTION:
|
||||
connection = C.CACHE_PLUGIN_CONNECTION.split(',')
|
||||
self._timeout = C.CACHE_PLUGIN_TIMEOUT
|
||||
self._prefix = C.CACHE_PLUGIN_PREFIX
|
||||
|
||||
self._timeout = C.CACHE_PLUGIN_TIMEOUT
|
||||
self._prefix = C.CACHE_PLUGIN_PREFIX
|
||||
self._cache = {}
|
||||
self._db = ProxyClientPool(connection, debug=0)
|
||||
self._keys = CacheModuleKeys(self._db, self._db.get(CacheModuleKeys.PREFIX) or [])
|
||||
|
|
20
lib/ansible/plugins/cache/mongodb.py
vendored
|
@ -27,6 +27,7 @@ DOCUMENTATION = '''
|
|||
section: defaults
|
||||
_prefix:
|
||||
description: User defined prefix to use when creating the DB entries
|
||||
default: ansible_facts
|
||||
env:
|
||||
- name: ANSIBLE_CACHE_PLUGIN_PREFIX
|
||||
ini:
|
||||
|
@ -50,20 +51,33 @@ from contextlib import contextmanager
|
|||
from ansible import constants as C
|
||||
from ansible.errors import AnsibleError
|
||||
from ansible.plugins.cache import BaseCacheModule
|
||||
from ansible.utils.display import Display
|
||||
|
||||
try:
|
||||
import pymongo
|
||||
except ImportError:
|
||||
raise AnsibleError("The 'pymongo' python module is required for the mongodb fact cache, 'pip install pymongo>=3.0'")
|
||||
|
||||
display = Display()
|
||||
|
||||
|
||||
class CacheModule(BaseCacheModule):
|
||||
"""
|
||||
A caching module backed by mongodb.
|
||||
"""
|
||||
def __init__(self, *args, **kwargs):
|
||||
self._timeout = int(C.CACHE_PLUGIN_TIMEOUT)
|
||||
self._prefix = C.CACHE_PLUGIN_PREFIX
|
||||
try:
|
||||
super(CacheModule, self).__init__(*args, **kwargs)
|
||||
self._connection = self.get_option('_uri')
|
||||
self._timeout = int(self.get_option('_timeout'))
|
||||
self._prefix = self.get_option('_prefix')
|
||||
except KeyError:
|
||||
display.deprecated('Rather than importing CacheModules directly, '
|
||||
'use ansible.plugins.loader.cache_loader', version='2.12')
|
||||
self._connection = C.CACHE_PLUGIN_CONNECTION
|
||||
self._timeout = int(C.CACHE_PLUGIN_TIMEOUT)
|
||||
self._prefix = C.CACHE_PLUGIN_PREFIX
|
||||
|
||||
self._cache = {}
|
||||
self._managed_indexes = False
|
||||
|
||||
|
@ -94,7 +108,7 @@ class CacheModule(BaseCacheModule):
|
|||
This is a context manager for opening and closing mongo connections as needed. This exists as to not create a global
|
||||
connection, due to pymongo not being fork safe (http://api.mongodb.com/python/current/faq.html#is-pymongo-fork-safe)
|
||||
'''
|
||||
mongo = pymongo.MongoClient(C.CACHE_PLUGIN_CONNECTION)
|
||||
mongo = pymongo.MongoClient(self._connection)
|
||||
try:
|
||||
db = mongo.get_default_database()
|
||||
except pymongo.errors.ConfigurationError:
|
||||
|
|
30
lib/ansible/plugins/cache/redis.py
vendored
|
@ -24,6 +24,7 @@ DOCUMENTATION = '''
|
|||
section: defaults
|
||||
_prefix:
|
||||
description: User defined prefix to use when creating the DB entries
|
||||
default: ansible_facts
|
||||
env:
|
||||
- name: ANSIBLE_CACHE_PLUGIN_PREFIX
|
||||
ini:
|
||||
|
@ -45,13 +46,17 @@ import json
|
|||
|
||||
from ansible import constants as C
|
||||
from ansible.errors import AnsibleError
|
||||
from ansible.parsing.ajson import AnsibleJSONEncoder, AnsibleJSONDecoder
|
||||
from ansible.plugins.cache import BaseCacheModule
|
||||
from ansible.utils.display import Display
|
||||
|
||||
try:
|
||||
from redis import StrictRedis, VERSION
|
||||
except ImportError:
|
||||
raise AnsibleError("The 'redis' python module (version 2.4.5 or newer) is required for the redis fact cache, 'pip install redis'")
|
||||
|
||||
display = Display()
|
||||
|
||||
|
||||
class CacheModule(BaseCacheModule):
|
||||
"""
|
||||
|
@ -63,13 +68,22 @@ class CacheModule(BaseCacheModule):
|
|||
performance.
|
||||
"""
|
||||
def __init__(self, *args, **kwargs):
|
||||
if C.CACHE_PLUGIN_CONNECTION:
|
||||
connection = C.CACHE_PLUGIN_CONNECTION.split(':')
|
||||
else:
|
||||
connection = []
|
||||
connection = []
|
||||
|
||||
try:
|
||||
super(CacheModule, self).__init__(*args, **kwargs)
|
||||
if self.get_option('_uri'):
|
||||
connection = self.get_option('_uri').split(':')
|
||||
self._timeout = float(self.get_option('_timeout'))
|
||||
self._prefix = self.get_option('_prefix')
|
||||
except KeyError:
|
||||
display.deprecated('Rather than importing CacheModules directly, '
|
||||
'use ansible.plugins.loader.cache_loader', version='2.12')
|
||||
if C.CACHE_PLUGIN_CONNECTION:
|
||||
connection = C.CACHE_PLUGIN_CONNECTION.split(':')
|
||||
self._timeout = float(C.CACHE_PLUGIN_TIMEOUT)
|
||||
self._prefix = C.CACHE_PLUGIN_PREFIX
|
||||
|
||||
self._timeout = float(C.CACHE_PLUGIN_TIMEOUT)
|
||||
self._prefix = C.CACHE_PLUGIN_PREFIX
|
||||
self._cache = {}
|
||||
self._db = StrictRedis(*connection)
|
||||
self._keys_set = 'ansible_cache_keys'
|
||||
|
@ -87,13 +101,13 @@ class CacheModule(BaseCacheModule):
|
|||
if value is None:
|
||||
self.delete(key)
|
||||
raise KeyError
|
||||
self._cache[key] = json.loads(value)
|
||||
self._cache[key] = json.loads(value, cls=AnsibleJSONDecoder)
|
||||
|
||||
return self._cache.get(key)
|
||||
|
||||
def set(self, key, value):
|
||||
|
||||
value2 = json.dumps(value)
|
||||
value2 = json.dumps(value, cls=AnsibleJSONEncoder, sort_keys=True, indent=4)
|
||||
if self._timeout > 0: # a timeout of 0 is handled as meaning 'never expire'
|
||||
self._db.setex(self._make_key(key), int(self._timeout), value2)
|
||||
else:
|
||||
|
|
|
@ -23,9 +23,13 @@ options:
|
|||
description:
|
||||
- Cache plugin to use for the inventory's source data.
|
||||
type: str
|
||||
default: memory
|
||||
env:
|
||||
- name: ANSIBLE_CACHE_PLUGIN
|
||||
- name: ANSIBLE_INVENTORY_CACHE_PLUGIN
|
||||
ini:
|
||||
- section: defaults
|
||||
key: fact_caching
|
||||
- section: inventory
|
||||
key: cache_plugin
|
||||
cache_timeout:
|
||||
|
@ -34,8 +38,11 @@ options:
|
|||
default: 3600
|
||||
type: int
|
||||
env:
|
||||
- name: ANSIBLE_CACHE_PLUGIN_TIMEOUT
|
||||
- name: ANSIBLE_INVENTORY_CACHE_TIMEOUT
|
||||
ini:
|
||||
- section: defaults
|
||||
key: fact_caching_timeout
|
||||
- section: inventory
|
||||
key: cache_timeout
|
||||
cache_connection:
|
||||
|
@ -43,8 +50,23 @@ options:
|
|||
- Cache connection data or path, read cache plugin documentation for specifics.
|
||||
type: str
|
||||
env:
|
||||
- name: ANSIBLE_CACHE_PLUGIN_CONNECTION
|
||||
- name: ANSIBLE_INVENTORY_CACHE_CONNECTION
|
||||
ini:
|
||||
- section: defaults
|
||||
key: fact_caching_connection
|
||||
- section: inventory
|
||||
key: cache_connection
|
||||
cache_prefix:
|
||||
description:
|
||||
- Prefix to use for cache plugin files/tables
|
||||
default: ansible_inventory_
|
||||
env:
|
||||
- name: ANSIBLE_CACHE_PLUGIN_PREFIX
|
||||
- name: ANSIBLE_INVENTORY_CACHE_PLUGIN_PREFIX
|
||||
ini:
|
||||
- section: default
|
||||
key: fact_caching_prefix
|
||||
- section: inventory
|
||||
key: cache_prefix
|
||||
'''
|
||||
|
|
|
@ -27,7 +27,7 @@ from ansible.errors import AnsibleError, AnsibleParserError
|
|||
from ansible.inventory.group import to_safe_group_name as original_safe
|
||||
from ansible.parsing.utils.addresses import parse_address
|
||||
from ansible.plugins import AnsiblePlugin
|
||||
from ansible.plugins.cache import InventoryFileCacheModule
|
||||
from ansible.plugins.cache import CachePluginAdjudicator as CacheObject
|
||||
from ansible.module_utils._text import to_bytes, to_native
|
||||
from ansible.module_utils.common._collections_compat import Mapping
|
||||
from ansible.module_utils.parsing.convert_bool import boolean
|
||||
|
@ -126,6 +126,25 @@ def expand_hostname_range(line=None):
|
|||
return all_hosts
|
||||
|
||||
|
||||
def get_cache_plugin(plugin_name, **kwargs):
|
||||
try:
|
||||
cache = CacheObject(plugin_name, **kwargs)
|
||||
except AnsibleError as e:
|
||||
if 'fact_caching_connection' in to_native(e):
|
||||
raise AnsibleError("error, '%s' inventory cache plugin requires one of the following to be set "
|
||||
"to a writeable directory path:\nansible.cfg:\n[default]: fact_caching_connection,\n"
|
||||
"[inventory]: cache_connection;\nEnvironment:\nANSIBLE_INVENTORY_CACHE_CONNECTION,\n"
|
||||
"ANSIBLE_CACHE_PLUGIN_CONNECTION." % plugin_name)
|
||||
else:
|
||||
raise e
|
||||
|
||||
if plugin_name != 'memory' and kwargs and not getattr(cache._plugin, '_options', None):
|
||||
raise AnsibleError('Unable to use cache plugin {0} for inventory. Cache options were provided but may not reconcile '
|
||||
'correctly unless set via set_options. Refer to the porting guide if the plugin derives user settings '
|
||||
'from ansible.constants.'.format(plugin_name))
|
||||
return cache
|
||||
|
||||
|
||||
class BaseInventoryPlugin(AnsiblePlugin):
|
||||
""" Parses an Inventory Source"""
|
||||
|
||||
|
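The `CachePluginAdjudicator` imported above lets inventory plugins treat the cache as a plain dictionary while writes to the backing cache plugin are deferred until explicitly requested. A minimal, self-contained sketch of that idea (hypothetical class names, not the actual Ansible implementation):

```python
class DictCacheBackend:
    """Stand-in for a real cache plugin (e.g. jsonfile, redis)."""
    def __init__(self):
        self.store = {}

    def get(self, key):
        return self.store[key]  # raises KeyError on a miss, like a cache plugin

    def set(self, key, value):
        self.store[key] = value


class CacheAdjudicator:
    """Dict-like front for a cache plugin; flushes only changed entries."""
    def __init__(self, backend):
        self._backend = backend
        self._cache = {}       # working copy used like a plain dict
        self._retrieved = {}   # what the backend held when each key was loaded

    def __getitem__(self, key):
        if key not in self._cache:
            value = self._backend.get(key)
            self._cache[key] = value
            self._retrieved[key] = value
        return self._cache[key]

    def __setitem__(self, key, value):
        self._cache[key] = value

    def update_cache_if_changed(self):
        # push only entries that differ from what the backend returned
        for key, value in self._cache.items():
            if self._retrieved.get(key) != value:
                self._backend.set(key, value)
                self._retrieved[key] = value
```

Plugin code mutates `adjudicator[key]` freely during `parse()`; a single `update_cache_if_changed()` afterwards persists the delta.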
@@ -138,7 +157,6 @@ class BaseInventoryPlugin(AnsiblePlugin):
         self._options = {}
         self.inventory = None
         self.display = display
-        self.cache = None

     def parse(self, inventory, loader, path, cache=True):
         ''' Populates inventory from the given data. Raises an error on any parse failure
@@ -207,16 +225,13 @@ class BaseInventoryPlugin(AnsiblePlugin):
             raise AnsibleParserError('inventory source has invalid structure, it should be a dictionary, got: %s' % type(config))

         self.set_options(direct=config)
-        if self._options.get('cache'):
-            self._set_cache_options(self._options)
+        if 'cache' in self._options and self.get_option('cache'):
+            cache_option_keys = [('_uri', 'cache_connection'), ('_timeout', 'cache_timeout'), ('_prefix', 'cache_prefix')]
+            cache_options = dict((opt[0], self.get_option(opt[1])) for opt in cache_option_keys if self.get_option(opt[1]))
+            self._cache = get_cache_plugin(self.get_option('cache_plugin'), **cache_options)

         return config

-    def _set_cache_options(self, options):
-        self.cache = InventoryFileCacheModule(plugin_name=options.get('cache_plugin'),
-                                              timeout=options.get('cache_timeout'),
-                                              cache_dir=options.get('cache_connection'))
-
     def _consume_options(self, data):
         ''' update existing options from alternate configuration sources not normally used by Ansible.
         Many API libraries already have existing configuration sources, this allows plugin author to leverage them.
@@ -252,9 +267,6 @@ class BaseInventoryPlugin(AnsiblePlugin):

         return (hostnames, port)

-    def clear_cache(self):
-        pass
-

 class BaseFileInventoryPlugin(BaseInventoryPlugin):
     """ Parses a File based Inventory Source"""
@@ -266,9 +278,43 @@ class BaseFileInventoryPlugin(BaseInventoryPlugin):
         super(BaseFileInventoryPlugin, self).__init__()


+class DeprecatedCache(object):
+    def __init__(self, real_cacheable):
+        self.real_cacheable = real_cacheable
+
+    def get(self, key):
+        display.deprecated('InventoryModule should utilize self._cache as a dict instead of self.cache. '
+                           'When expecting a KeyError, use self._cache[key] instead of using self.cache.get(key). '
+                           'self._cache is a dictionary and will return a default value instead of raising a KeyError '
+                           'when the key does not exist', version='2.12')
+        return self.real_cacheable._cache[key]
+
+    def set(self, key, value):
+        display.deprecated('InventoryModule should utilize self._cache as a dict instead of self.cache. '
+                           'To set the self._cache dictionary, use self._cache[key] = value instead of self.cache.set(key, value). '
+                           'To force update the underlying cache plugin with the contents of self._cache before parse() is complete, '
+                           'call self.set_cache_plugin and it will use the self._cache dictionary to update the cache plugin', version='2.12')
+        self.real_cacheable._cache[key] = value
+        self.real_cacheable.set_cache_plugin()
+
+    def __getattr__(self, name):
+        display.deprecated('InventoryModule should utilize self._cache instead of self.cache', version='2.12')
+        return self.real_cacheable._cache.__getattribute__(name)
+
+
 class Cacheable(object):

-    _cache = {}
+    _cache = CacheObject()
+
+    @property
+    def cache(self):
+        return DeprecatedCache(self)
+
+    def load_cache_plugin(self):
+        plugin_name = self.get_option('cache_plugin')
+        cache_option_keys = [('_uri', 'cache_connection'), ('_timeout', 'cache_timeout'), ('_prefix', 'cache_prefix')]
+        cache_options = dict((opt[0], self.get_option(opt[1])) for opt in cache_option_keys if self.get_option(opt[1]))
+        self._cache = get_cache_plugin(plugin_name, **cache_options)

     def get_cache_key(self, path):
         return "{0}_{1}".format(self.NAME, self._get_cache_prefix(path))

@@ -287,7 +333,13 @@ class Cacheable(object):
         return 's_'.join([d1[:5], d2[:5]])

     def clear_cache(self):
-        self._cache = {}
+        self._cache.flush()
+
+    def update_cache_if_changed(self):
+        self._cache.update_cache_if_changed()
+
+    def set_cache_plugin(self):
+        self._cache.set_cache()


 class Constructable(object):
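The `DeprecatedCache` wrapper above keeps the old `self.cache` spelling working while steering authors toward `self._cache`. The general shim pattern, sketched in plain Python with the standard `warnings` module standing in for Ansible's `display.deprecated` (names here are illustrative, not Ansible API):

```python
import warnings


class LegacyCacheShim:
    """Delegates old-style get/set calls to the owner's new _cache dict."""
    def __init__(self, owner):
        self._owner = owner

    def get(self, key):
        warnings.warn('use self._cache[key] instead of self.cache.get(key)',
                      DeprecationWarning)
        return self._owner._cache[key]

    def set(self, key, value):
        warnings.warn('use self._cache[key] = value instead of self.cache.set(key, value)',
                      DeprecationWarning)
        self._owner._cache[key] = value


class Plugin:
    def __init__(self):
        self._cache = {}  # the new, preferred interface: a plain dict

    @property
    def cache(self):
        # the old spelling still works, but routes through the warning shim
        return LegacyCacheShim(self)
```

Because `cache` is a read-only property returning a fresh shim, old call sites keep functioning for the deprecation window without the class carrying duplicate state.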
@@ -53,3 +53,5 @@ class InventoryModule(BaseInventoryPlugin):
             raise AnsibleParserError("inventory config '{0}' could not be verified by plugin '{1}'".format(path, plugin_name))

         plugin.parse(inventory, loader, path, cache=cache)
+        if getattr(plugin, '_cache', None):
+            plugin.update_cache_if_changed()
@@ -116,7 +116,7 @@ class InventoryModule(BaseInventoryPlugin, Cacheable):
         if not self.use_cache or url not in self._cache.get(self.cache_key, {}):

             if self.cache_key not in self._cache:
-                self._cache[self.cache_key] = {'url': ''}
+                self._cache[self.cache_key] = {url: ''}

             results = []
             s = self._get_session()

@@ -153,7 +153,9 @@ class InventoryModule(BaseInventoryPlugin, Cacheable):
                 # get next page
                 params['page'] += 1

-            self._cache[self.cache_key][url] = results
+            # Set the cache if it is enabled or if the cache was refreshed
+            if self.use_cache or self.get_option('cache'):
+                self._cache[self.cache_key][url] = results

         return self._cache[self.cache_key][url]
31 test/integration/targets/inventory_foreman/inspect_cache.yml Normal file

@@ -0,0 +1,31 @@
---
- hosts: localhost
  vars:
    foreman_stub_host: "{{ lookup('env', 'FOREMAN_HOST') }}"
    foreman_stub_port: "{{ lookup('env', 'FOREMAN_PORT') }}"
    foreman_stub_api_path: /api/v2
    cached_hosts_key: "http://{{ foreman_stub_host }}:{{ foreman_stub_port }}{{ foreman_stub_api_path }}/hosts"
  tasks:
    - name: verify a cache file was created
      find:
        path:
          - ./foreman_cache
      register: matching_files

    - assert:
        that:
          - matching_files.matched == 1

    - name: read the cached inventory
      set_fact:
        contents: "{{ lookup('file', matching_files.files.0.path) }}"

    - name: extract all the host names
      set_fact:
        cached_hosts: "{{ contents[cached_hosts_key] | json_query('[*].name') }}"

    - assert:
        that:
          "'{{ item }}' in cached_hosts"
      loop:
        - "v6.example-780.com"
        - "c4.j1.y5.example-487.com"
@@ -9,6 +9,11 @@ export FOREMAN_HOST="${FOREMAN_HOST:-localhost}"
 export FOREMAN_PORT="${FOREMAN_PORT:-8080}"
 FOREMAN_CONFIG=test-config.foreman.yaml

+# Set inventory caching environment variables to populate a jsonfile cache
+export ANSIBLE_INVENTORY_CACHE=True
+export ANSIBLE_INVENTORY_CACHE_PLUGIN=jsonfile
+export ANSIBLE_INVENTORY_CACHE_CONNECTION=./foreman_cache
+
 # flag for checking whether cleanup has already fired
 _is_clean=

@@ -33,3 +38,7 @@ validate_certs: False
 FOREMAN_YAML

 ansible-playbook test_foreman_inventory.yml --connection=local "$@"
+ansible-playbook inspect_cache.yml --connection=local "$@"
+
+# remove inventory cache
+rm -r ./foreman_cache
2 test/integration/targets/old_style_cache_plugins/aliases Normal file

@@ -0,0 +1,2 @@
shippable/posix/group3
skip/osx

@@ -0,0 +1 @@
# inventory config file for consistent source

141 test/integration/targets/old_style_cache_plugins/plugins/cache/redis.py vendored Normal file

@@ -0,0 +1,141 @@
# (c) 2014, Brian Coca, Josh Drake, et al
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

DOCUMENTATION = '''
    cache: redis
    short_description: Use Redis DB for cache
    description:
        - This cache uses JSON formatted, per host records saved in Redis.
    version_added: "1.9"
    requirements:
      - redis>=2.4.5 (python lib)
    options:
      _uri:
        description:
          - A colon separated string of connection information for Redis.
        required: True
        env:
          - name: ANSIBLE_CACHE_PLUGIN_CONNECTION
        ini:
          - key: fact_caching_connection
            section: defaults
      _prefix:
        description: User defined prefix to use when creating the DB entries
        env:
          - name: ANSIBLE_CACHE_PLUGIN_PREFIX
        ini:
          - key: fact_caching_prefix
            section: defaults
      _timeout:
        default: 86400
        description: Expiration timeout for the cache plugin data
        env:
          - name: ANSIBLE_CACHE_PLUGIN_TIMEOUT
        ini:
          - key: fact_caching_timeout
            section: defaults
        type: integer
'''

import time
import json

from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.plugins.cache import BaseCacheModule

try:
    from redis import StrictRedis, VERSION
except ImportError:
    raise AnsibleError("The 'redis' python module (version 2.4.5 or newer) is required for the redis fact cache, 'pip install redis'")


class CacheModule(BaseCacheModule):
    """
    A caching module backed by redis.
    Keys are maintained in a zset with their score being the timestamp
    when they are inserted. This allows for the usage of 'zremrangebyscore'
    to expire keys. This mechanism is used instead of a pattern matched 'scan'
    for performance.
    """
    def __init__(self, *args, **kwargs):
        if C.CACHE_PLUGIN_CONNECTION:
            connection = C.CACHE_PLUGIN_CONNECTION.split(':')
        else:
            connection = []

        self._timeout = float(C.CACHE_PLUGIN_TIMEOUT)
        self._prefix = C.CACHE_PLUGIN_PREFIX
        self._cache = {}
        self._db = StrictRedis(*connection)
        self._keys_set = 'ansible_cache_keys'

    def _make_key(self, key):
        return self._prefix + key

    def get(self, key):
        if key not in self._cache:
            value = self._db.get(self._make_key(key))
            # guard against the key not being removed from the zset;
            # this could happen in cases where the timeout value is changed
            # between invocations
            if value is None:
                self.delete(key)
                raise KeyError
            self._cache[key] = json.loads(value)

        return self._cache.get(key)

    def set(self, key, value):
        value2 = json.dumps(value)
        if self._timeout > 0:  # a timeout of 0 is handled as meaning 'never expire'
            self._db.setex(self._make_key(key), int(self._timeout), value2)
        else:
            self._db.set(self._make_key(key), value2)

        if VERSION[0] == 2:
            self._db.zadd(self._keys_set, time.time(), key)
        else:
            self._db.zadd(self._keys_set, {key: time.time()})
        self._cache[key] = value

    def _expire_keys(self):
        if self._timeout > 0:
            expiry_age = time.time() - self._timeout
            self._db.zremrangebyscore(self._keys_set, 0, expiry_age)

    def keys(self):
        self._expire_keys()
        return self._db.zrange(self._keys_set, 0, -1)

    def contains(self, key):
        self._expire_keys()
        return (self._db.zrank(self._keys_set, key) is not None)

    def delete(self, key):
        if key in self._cache:
            del self._cache[key]
        self._db.delete(self._make_key(key))
        self._db.zrem(self._keys_set, key)

    def flush(self):
        for key in self.keys():
            self.delete(key)

    def copy(self):
        # TODO: there is probably a better way to do this in redis
        ret = dict()
        for key in self.keys():
            ret[key] = self.get(key)
        return ret

    def __getstate__(self):
        return dict()

    def __setstate__(self, data):
        self.__init__()
@@ -0,0 +1,59 @@
# Copyright (c) 2019 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

DOCUMENTATION = '''
    name: test
    plugin_type: inventory
    short_description: test inventory source
    extends_documentation_fragment:
        - inventory_cache
'''

from ansible.plugins.inventory import BaseInventoryPlugin, Cacheable


class InventoryModule(BaseInventoryPlugin, Cacheable):

    NAME = 'test'

    def populate(self, hosts):
        for host in list(hosts.keys()):
            self.inventory.add_host(host, group='all')
            for hostvar, hostval in hosts[host].items():
                self.inventory.set_variable(host, hostvar, hostval)

    def get_hosts(self):
        return {'host1': {'one': 'two'}, 'host2': {'three': 'four'}}

    def parse(self, inventory, loader, path, cache=True):
        super(InventoryModule, self).parse(inventory, loader, path)

        self.load_cache_plugin()

        cache_key = self.get_cache_key(path)

        # cache may be True or False at this point to indicate if the inventory is being refreshed
        # get the user's cache option
        cache_setting = self.get_option('cache')

        attempt_to_read_cache = cache_setting and cache
        cache_needs_update = cache_setting and not cache

        # attempt to read the cache if inventory isn't being refreshed and the user has caching enabled
        if attempt_to_read_cache:
            try:
                results = self._cache[cache_key]
            except KeyError:
                # This occurs if the cache_key is not in the cache or if the cache_key expired, so the cache needs to be updated
                cache_needs_update = True

        if cache_needs_update:
            results = self.get_hosts()

            # set the cache
            self._cache[cache_key] = results

        self.populate(results)
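The `parse()` flow in the test plugin reduces to a small decision: given the user's cache option, the `cache` flag passed to `parse()` (False means the inventory is being refreshed), and whether the key is present, decide whether to read from the cache and whether to rewrite it. A standalone sketch of that logic (function and parameter names are illustrative, not Ansible API):

```python
def plan_cache_use(cache_enabled, refresh_requested, key_in_cache):
    """Return (read_from_cache, update_cache) for one parse() invocation.

    cache_enabled     -- the user's `cache` option on the inventory plugin
    refresh_requested -- True when parse() was called with cache=False
    key_in_cache      -- whether the cache key resolves to a live entry
    """
    attempt_read = cache_enabled and not refresh_requested
    needs_update = cache_enabled and refresh_requested
    if attempt_read and not key_in_cache:
        # miss or expired entry: fall through to a fresh fetch and re-cache it
        attempt_read = False
        needs_update = True
    return attempt_read, needs_update
```

This is why the test plugin checks `self._cache[cache_key]` inside a `try`/`except KeyError`: a miss flips `cache_needs_update` to True even when no refresh was requested.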
91 test/integration/targets/old_style_cache_plugins/runme.sh Executable file

@@ -0,0 +1,91 @@
#!/usr/bin/env bash

# We don't set -u here, due to pypa/virtualenv#150
set -ex

MYTMPDIR=$(mktemp -d 2>/dev/null || mktemp -d -t 'mytmpdir')

trap 'rm -rf "${MYTMPDIR}"' EXIT

# This is needed for the ubuntu1604py3 tests
# Ubuntu patches virtualenv to make the default python2
# but for the python3 tests we need virtualenv to use python3
PYTHON=${ANSIBLE_TEST_PYTHON_INTERPRETER:-python}

virtualenv --system-site-packages --python "${PYTHON}" "${MYTMPDIR}/redis-cache"
source "${MYTMPDIR}/redis-cache/bin/activate"

# Run test if dependencies are installed
failed_dep_1=$(ansible localhost -m pip -a "name=redis>=2.4.5 state=present" "$@" | tee out.txt | grep -c 'FAILED!' || true)
cat out.txt

installed_redis=$(ansible localhost -m package -a "name=redis-server state=present" --become "$@" | tee out.txt | grep -c '"changed": true' || true)
failed_dep_2=$(grep out.txt -ce 'FAILED!' || true)
cat out.txt

started_redis=$(ansible localhost -m service -a "name=redis-server state=started" --become "$@" | tee out.txt | grep -c '"changed": true' || true)
failed_dep_3=$(grep out.txt -ce 'FAILED!' || true)
cat out.txt

CLEANUP_REDIS () { if [ "${installed_redis}" -eq 1 ] ; then ansible localhost -m package -a "name=redis-server state=absent" --become ; fi }
STOP_REDIS () { if [ "${installed_redis}" -ne 1 ] && [ "${started_redis}" -eq 1 ] ; then ansible localhost -m service -a "name=redis-server state=stopped" --become ; fi }

if [ "${failed_dep_1}" -eq 1 ] || [ "${failed_dep_2}" -eq 1 ] || [ "${failed_dep_3}" -eq 1 ] ; then
    STOP_REDIS
    CLEANUP_REDIS
    exit 0
fi

export ANSIBLE_CACHE_PLUGIN=redis
export ANSIBLE_CACHE_PLUGIN_CONNECTION=localhost:6379:0
export ANSIBLE_CACHE_PLUGINS=./plugins/cache

# Use old redis for fact caching
count=$(ansible-playbook test_fact_gathering.yml -vvv 2>&1 "$@" | tee out.txt | grep -c 'Gathering Facts' || true)
failed_dep_version=$(grep out.txt -ce "'redis' python module (version 2.4.5 or newer) is required" || true)
cat out.txt
if [ "${failed_dep_version}" -eq 1 ] ; then
    STOP_REDIS
    CLEANUP_REDIS
    exit 0
fi
if [ "${count}" -ne 1 ] ; then
    STOP_REDIS
    CLEANUP_REDIS
    exit 1
fi

# Attempt to use old redis for inventory caching; should not work
export ANSIBLE_INVENTORY_CACHE=True
export ANSIBLE_INVENTORY_CACHE_PLUGIN=redis
export ANSIBLE_INVENTORY_ENABLED=test
export ANSIBLE_INVENTORY_PLUGINS=./plugins/inventory

ansible-inventory -i inventory_config --graph 2>&1 "$@" | tee out.txt | grep 'Cache options were provided but may not reconcile correctly unless set via set_options'
res=$?
cat out.txt
if [ "${res}" -eq 1 ] ; then
    STOP_REDIS
    CLEANUP_REDIS
    exit 1
fi

# Use new style redis for fact caching
unset ANSIBLE_CACHE_PLUGINS
count=$(ansible-playbook test_fact_gathering.yml -vvv "$@" | tee out.txt | grep -c 'Gathering Facts' || true)
cat out.txt
if [ "${count}" -ne 1 ] ; then
    STOP_REDIS
    CLEANUP_REDIS
    exit 1
fi

# Use new redis for inventory caching
ansible-inventory -i inventory_config --graph "$@" 2>&1 | tee out.txt | grep 'host2'
res=$?
cat out.txt

STOP_REDIS
CLEANUP_REDIS

exit $res
@@ -0,0 +1,8 @@
---
- hosts: localhost
  connection: local
  gather_facts: no

- hosts: localhost
  connection: local
  gather_facts: yes

66 test/units/plugins/cache/test_cache.py vendored
@@ -21,9 +21,10 @@ __metaclass__ = type

 from units.compat import unittest, mock
 from ansible.errors import AnsibleError
-from ansible.plugins.cache import FactCache
+from ansible.plugins.cache import FactCache, CachePluginAdjudicator
 from ansible.plugins.cache.base import BaseCacheModule
 from ansible.plugins.cache.memory import CacheModule as MemoryCache
+from ansible.plugins.loader import cache_loader

 HAVE_MEMCACHED = True
 try:
@@ -43,6 +44,57 @@ except ImportError:
 else:
     from ansible.plugins.cache.redis import CacheModule as RedisCache

+import pytest
+
+
+class TestCachePluginAdjudicator:
+    # memory plugin cache
+    cache = CachePluginAdjudicator()
+    cache['cache_key'] = {'key1': 'value1', 'key2': 'value2'}
+    cache['cache_key_2'] = {'key': 'value'}
+
+    def test___setitem__(self):
+        self.cache['new_cache_key'] = {'new_key1': ['new_value1', 'new_value2']}
+        assert self.cache['new_cache_key'] == {'new_key1': ['new_value1', 'new_value2']}
+
+    def test_inner___setitem__(self):
+        self.cache['new_cache_key'] = {'new_key1': ['new_value1', 'new_value2']}
+        self.cache['new_cache_key']['new_key1'][0] = 'updated_value1'
+        assert self.cache['new_cache_key'] == {'new_key1': ['updated_value1', 'new_value2']}
+
+    def test___contains__(self):
+        assert 'cache_key' in self.cache
+        assert 'not_cache_key' not in self.cache
+
+    def test_get(self):
+        assert self.cache.get('cache_key') == {'key1': 'value1', 'key2': 'value2'}
+
+    def test_get_with_default(self):
+        assert self.cache.get('foo', 'bar') == 'bar'
+
+    def test_get_without_default(self):
+        assert self.cache.get('foo') is None
+
+    def test___getitem__(self):
+        with pytest.raises(KeyError):
+            self.cache['foo']
+
+    def test_pop_with_default(self):
+        assert self.cache.pop('foo', 'bar') == 'bar'
+
+    def test_pop_without_default(self):
+        with pytest.raises(KeyError):
+            self.cache.pop('foo')
+
+    def test_pop(self):
+        v = self.cache.pop('cache_key_2')
+        assert v == {'key': 'value'}
+        assert 'cache_key_2' not in self.cache
+
+    def test_update(self):
+        self.cache.update({'cache_key': {'key2': 'updatedvalue'}})
+        assert self.cache['cache_key']['key2'] == 'updatedvalue'
+
+
 class TestFactCache(unittest.TestCase):
@@ -116,9 +168,21 @@ class TestAbstractClass(unittest.TestCase):
     def test_memcached_cachemodule(self):
         self.assertIsInstance(MemcachedCache(), MemcachedCache)

+    @unittest.skipUnless(HAVE_MEMCACHED, 'python-memcached module not installed')
+    def test_memcached_cachemodule_with_loader(self):
+        self.assertIsInstance(cache_loader.get('memcached'), MemcachedCache)
+
     def test_memory_cachemodule(self):
         self.assertIsInstance(MemoryCache(), MemoryCache)

+    def test_memory_cachemodule_with_loader(self):
+        self.assertIsInstance(cache_loader.get('memory'), MemoryCache)
+
     @unittest.skipUnless(HAVE_REDIS, 'Redis python module not installed')
     def test_redis_cachemodule(self):
         self.assertIsInstance(RedisCache(), RedisCache)
+
+    @unittest.skipUnless(HAVE_REDIS, 'Redis python module not installed')
+    def test_redis_cachemodule_with_loader(self):
+        # The _uri option is required for the redis plugin
+        self.assertIsInstance(cache_loader.get('redis', **{'_uri': '127.0.0.1:6379:1'}), RedisCache)
@@ -21,8 +21,6 @@ __metaclass__ = type

 import os

-from collections import defaultdict
-
 from units.compat import unittest
 from units.compat.mock import MagicMock, patch
 from ansible.inventory.manager import InventoryManager

@@ -199,7 +197,6 @@ class TestVariableManager(unittest.TestCase):

         inv1 = InventoryManager(loader=fake_loader, sources=['/etc/ansible/inventory1'])
         v = VariableManager(inventory=mock_inventory, loader=fake_loader)
-        v._fact_cache = defaultdict(dict)

         play1 = Play.load(dict(
             hosts=['all'],

@@ -297,7 +294,6 @@ class TestVariableManager(unittest.TestCase):
         })

         v = VariableManager(loader=fake_loader, inventory=mock_inventory)
-        v._fact_cache = defaultdict(dict)

         play1 = Play.load(dict(
             hosts=['all'],