* Adding support for Amazon ECR
This patch adds a new module named ecr, which can create, update or
destroy Amazon EC2 Container Registries. It also handles the management
of ECR policies.
* ecs_ecr: addressed review feedback
* Renaming ecr to ecs_ecr
* Fixed docs
* Removed bad doc about empty string handling
* Added example of `delete_policy`
* Removed `policy_text` option; switched policy to `json` type so
it can accept string or dict
* Added support for specifying registry_id
* Added explicit else after returned if clauses
* Added `force_set_policy` option
* Improved `set_repository_policy` error handling
* Fixed policy comparisons when AWS doesn't keep the ordering stable
* Moved `boto_exception` into the module
* Conditional include on ansible_network_os
* copy & paste error
* More tests
* More tests
* junos tests (based on vyos)
* remove excessive whitespace
* Pass in ansible_network_os
* net_command for ios
* consistent debug
* wrap line
* ansible-test changes made in another PR
* ansible-test changes made in another PR
map + extract is the usual way to use it, but map isn't available on
older versions of jinja2 that we still work with. Test extract even on
those versions.
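A rough illustration of both spellings under plain Jinja2; the `extract` filter below is a minimal hand-registered stand-in for Ansible's filter, just so the snippet is self-contained:
```
import jinja2

# Minimal stand-in for Ansible's 'extract' filter so the example runs on its own.
def extract(key, container):
    return container[key]

env = jinja2.Environment()
env.filters['extract'] = extract

data = {'hosts': {'a': 1, 'b': 2}, 'names': ['a', 'b']}

# The usual spelling; the 'map' filter needs a newer Jinja2.
print(env.from_string("{{ names | map('extract', hosts) | list }}").render(**data))

# 'extract' by itself still works on older Jinja2 releases without 'map'.
print(env.from_string("{{ names[0] | extract(hosts) }}").render(**data))
```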
* Only add Content-Type if not specified in headers. Fixes #20046
* Update documentation to indicate body_format will not override Content-Type if specified in headers
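A minimal sketch of the intended precedence (the function name and logic here are illustrative, not the uri module's actual code): a Content-Type given in headers wins over the one implied by body_format.
```
def merge_headers(headers, body_format):
    """Only add the Content-Type implied by body_format when none was given."""
    merged = dict(headers or {})
    implied = {'json': 'application/json'}.get(body_format)
    if implied and not any(k.lower() == 'content-type' for k in merged):
        merged['Content-Type'] = implied
    return merged

print(merge_headers({'Content-Type': 'text/xml'}, 'json'))  # keeps text/xml
print(merge_headers({}, 'json'))                            # adds application/json
```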
- Replace nose usage with pytest.
- Remove legacy Shippable integration.sh.
- Update Makefile to use pytest and ansible-test.
- Convert most yield unit tests to pytest parametrize.
* Fix a test failure on Python 3.6
tox -e py36 failed with
======================================================================
ERROR: test_action_base__execute_module (units.plugins.action.test_action.TestActionBase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/mg/src/ansible/test/units/plugins/action/test_action.py", line 507, in test_action_base__execute_module
self.assertEqual(action_base._execute_module(), dict(_ansible_parsed=True, rc=0, stdout="ok", stdout_lines=['ok']))
File "/home/mg/src/ansible/lib/ansible/plugins/action/__init__.py", line 596, in _execute_module
remote_module_path = self._connection._shell.join_path(tmp, remote_module_filename)
File "/home/mg/opt/python36/lib/python3.6/unittest/mock.py", line 939, in __call__
return _mock_self._mock_call(*args, **kwargs)
File "/home/mg/opt/python36/lib/python3.6/unittest/mock.py", line 1005, in _mock_call
ret_val = effect(*args, **kwargs)
File "/home/mg/src/ansible/.tox/py36/lib/python3.6/posixpath.py", line 92, in join
genericpath._check_arg_types('join', a, *p)
File "/home/mg/src/ansible/.tox/py36/lib/python3.6/genericpath.py", line 149, in _check_arg_types
(funcname, s.__class__.__name__)) from None
TypeError: join() argument must be str or bytes, not 'MagicMock'
because os.path.join() now checks argument types since Python 3.6 (due
to pathlib support, I expect).
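The failure is easy to reproduce outside the test suite (the exact wording of the message varies a little between Python versions):
```
import os
from unittest.mock import MagicMock

try:
    os.path.join('/tmp', MagicMock())
except TypeError as exc:
    # Python 3.6: join() argument must be str or bytes, not 'MagicMock'
    print(exc)
```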
* Use a more realistic module name in test
When using AWS we have to use the full domain name rather than the
short name in the inventory file. This change avoids that ending up
being set in the tests.
The behavior now matches GNU diff.
Fixes #14094.
Example of output before this change:
TASK [healthchecks.io : hourly healthchecks.io ping] ***************************
changed: [ranka]
--- before: /etc/cron.hourly/mg-healthchecks-dot-io
+++ after: /tmp/tmpOTvXTw
@@ -1,2 +1,2 @@
#!/bin/sh
-curl -sS https://hchk.io/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx > /dev/null+curl -sS https://hchk.io/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx > /dev/null
after this change:
TASK [healthchecks.io : hourly healthchecks.io ping] ***************************
changed: [ranka]
--- before: /etc/cron.hourly/mg-healthchecks-dot-io
+++ after: /tmp/tmpOTvXTw
@@ -1,2 +1,2 @@
#!/bin/sh
-curl -sS https://hchk.io/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx > /dev/null
\ No newline at end of file
+curl -sS https://hchk.io/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx > /dev/null
The added unit tests contain more examples.
This commit also takes care to avoid "no newline at EOF" warnings when
no_log is in effect, and also when modules return dicts rather than
strings. (It also removes trailing whitespace from using json
serialization when diffing dicts, because I hate trailing whitespace in
Python source files, even if they're test files.)
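A rough sketch of the idea (not the callback's actual code): whenever a compared line lacks a trailing newline, emit GNU diff's marker right after it.
```
import difflib

def unified(before, after):
    out = []
    for line in difflib.unified_diff(before.splitlines(True),
                                     after.splitlines(True),
                                     'before', 'after'):
        out.append(line)
        if not line.endswith('\n'):
            out.append('\n\\ No newline at end of file\n')
    return ''.join(out)

print(unified('#!/bin/sh\nline', '#!/bin/sh\nline\n'))
```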
* updates the ios_config module to use the network_cli plugin
* updates the local action plugin to derive from network
* add unit test cases for ios_config
* updates the deprecated ios_template module to use network_cli
* adds unit test cases for ios_template
* adds check for provider argument and displays warning message
* Cleanup git tests
* Split git tests into separate files
* Remove use of repo_depth_url
* Use native yaml
* Remove unnecessary remote/local clones
* Fix newlines for yamllint
* If the hash is valid (full-length) but doesn't exist, git returns 128 instead of 1.
* Ensure git doesn't use hardlinks for shallow clones
* Reenable yum install root tests
No need for sos to test installroot. Something with less deps works
just as well.
* Fix yum installroot.
Fix module import to use fail_json when the modules aren't installed.
Remove wildcard imports
* Last task is supposed to remove sos, so make that happen
* Add --installroot to YUM and DNF modules, issue #11310
This continues ansible-modules-core#1558, and
ansible-modules-core#1669
Allow specifying installroot for the yum and dnf modules
to install and remove packages in a location other than /.
* Remove empty aliases
* Simpler installroot set default logic
* Add an encode() to AnsibleVaultEncryptedUnicode
Without it, calling encode() on it results in a bytestring
of the encrypted !vault-encrypted string.
ssh connection plugin triggers this if ansible_password
is from a var using !vault-encrypted. That path ends up
calling .encode() instead of using the __str__.
Fixes #19795
* Fix str.encode() errors on py2.6
py2.6 str.encode() does not take keyword arguments.
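A toy sketch covering both points (this is not the real AnsibleVaultEncryptedUnicode): encode() should operate on the decrypted text, and the arguments are passed positionally so the call also works on py2.6.
```
class VaultString(object):
    """Toy stand-in: _plaintext represents the value after decryption."""

    def __init__(self, ciphertext, plaintext):
        self._ciphertext = ciphertext
        self._plaintext = plaintext

    def __str__(self):
        return self._plaintext

    def encode(self, encoding='utf-8', errors='strict'):
        # Delegate to the decrypted text; positional args for py2.6 compatibility.
        return str(self).encode(encoding, errors)

print(VaultString('$ANSIBLE_VAULT;1.1;AES256;...', 'hunter2').encode())
```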
* Refactoring: split readkeys() into readfile() and parsekeys()
* Refactoring: split writekeys() into writefile() and serialize()
* authorized_key: support --diff
* Refactoring: remove no-longer used readkeys()/writekeys()
* Integration test for authorized_key in check mode
Support for the Google API and GCloud-Python Clients have been added.
The three libraries:
* GCloud-Python: A new function, get_google_cloud_credentials, should be used. The credentials-object returned can be passed to any gcloud-python client. Using this client library requires the installation of gcloud-python. This is the preferred library for new modules.
* Google API: A new function, gcp_api_auth, should be used to take advantage of services requiring this client. This client library should be used if the desired functionality is not available in GCloud-Python. Using this library requires the installation of google-api-python-client.
* libcloud: Existing function, gcp_connect, should be used. The interface and return values have not changed and existing modules (such as gce, gce_pd and gce_net) should work without modification. Note that the credentials-fetching code has been refactored out of gcp_connect so that can be reused by all connection functions. To use this function, apache-libcloud must be installed.
Import guards have been added and will only be triggered if a user tries to use a function that is missing dependencies.
Credential-specifying mechanisms (i.e, ansible module params, env vars and libcloud secrets.py) have not changed. They have been refactored and unit tests have been added to allow for changes going forward. We are deprecating (and removing in a subsequent release) the ability to specify credentials via the libcloud secrets file. Also, we have deprecated (and also plan to remove in a subsequent release) the ability to use a p12 pem file for a key - the JSON format is strongly preferred. Deprecation warnings have been added for both of these issues (see the Ansible docs on how to disable deprecation warnings).
The gce_tag module can support updating tags on multiple instances via an instance_pattern field. Full Python regex is supported in the instance_pattern field.
'instance_pattern' and 'instance_name' are mutually exclusive and one must be specified.
The integration test for the gce_tag module has been updated to support the instance_pattern parameter. Unit tests have been added to test the list-manipulation functionality.
Run the integration test with:
TEST_FLAGS='--tags "test_gce_tag"' make gce
Run the unit tests with:
python test/units/modules/cloud/google/test_gce_tag.py
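Illustrative only (not the module's actual list-manipulation code): instance_pattern is a full Python regex matched against instance names.
```
import re

instances = ['web-1', 'web-2', 'db-1']
pattern = re.compile(r'^web-\d+$')
print([name for name in instances if pattern.match(name)])  # ['web-1', 'web-2']
```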
This plugin can be used with the lpass cli interface for lastpass.
[lastpass-cli](https://github.com/lastpass/lastpass-cli)
Example:
Add a lookup to your playbooks/variables somewhere:
```
some_variable: "{{ lookup('lastpass','Some Lastpass entry name or ID', field='username') }}"
```
Usage:
* start a lpass session prior to using ansible
* run ansible
* logout when finished
```
lpass login user@domain.com
ansible-playbook foo.yml
lpass logout
```
An inner single-quote pair breaks out of the outer single-quote
pair. Rather than escaping the inner quotes to protect against
this, just use the fact that `str()` is equivalent to `""`.
* Initial Commit for Infinidat Ansible Modules
Skip tests for python 2.4 as infinisdk doesn't support python 2.4
Move common code and arguments into module_utils/infinibox.py
Move common documentation to documentation_fragments. Cleanup Docs and Examples
Fix formatting in module descriptions
Add check mode support for all modules
Import AnsibleModule only from ansible.module_utils.basic in all modules
Skip python 2.4 tests for module_utils/infinibox.py
Documentation and code cleanup
Rewrite examples in multiline format
Misc Changes
Test
* Add Infinibox modules to CHANGELOG.md
* Add ANSIBLE_METADATA to all modules
Since we no longer use a post-validated task in _process_pending_results, we
need to be sure to template fields used in original_task as they are raw and
may contain variables.
This patch also moves the handler tracking to be per-uuid, not per-object.
Doing it per-object had implications for the above due to the fact that the
copy of the original task is now being used, so the only sure way is to track
based on the uuid instead.
Fixes #18289
Print out the data that fails to validate when doing
schema checking on modules
This allows easier interpretation of error messages.
From:
```
ERROR: DOCUMENTATION.notes.2: expected basestring
```
To:
```
ERROR: DOCUMENTATION.notes.2: expected basestring @ data['notes'][2].
Got {"As with C(include) this task can be static or dynamic, If static
it implies that it won't need templating nor loops nor conditionals and
will show included tasks in the --list options. Ansible will try to
autodetect what is needed, but you can set `static": 'yes|no` at task
level to control this.'}
```
* Code smell test for iteritems and itervalues
* Change the keydict object in authorized_keys so it doesn't throw a false positive
keydict is a bad data structure anyway. We don't use the iteritems and
itervalues methods, so just disable them so that the code-smell tests do
not trigger on it.
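For reference, the replacement the code-smell test steers code towards is the six spelling (or plain .items()/.values()), which behaves the same on py2 and py3:
```
from six import iteritems, itervalues

data = {'a': 1, 'b': 2}

for key, value in iteritems(data):   # instead of data.iteritems()
    print(key, value)

print(list(itervalues(data)))        # instead of data.itervalues()
```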
* Change release templates so they work with py3
* Fix UnboundLocalError remote_head in git
Fixes #5505
The use of remote_head was a leftover of #4562.
remote_head is not necessary, since the repo is unchanged anyway and
after is set correctly.
Further changes:
* Set changed=True and msg once local_mods are detected and reset.
* Remove need_fetch that is always True (due to previous if) to improve
clarity
* Don't exit early for local_mods but run submodules update and
switch_version
* Add test for git with local modifications
* Enable tests on python 3 for uri
* Added one more node type to SAFE_NODES in the safe_eval module.
ast.USub represents unary operators. This is necessary for
parsing some unusual but still valid JSON files during testing
with Python 3.
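For reference, ast.USub is the node produced for a unary minus, so a node whitelist without it rejects input as simple as -1 (the dump format below differs slightly between Python versions):
```
import ast

tree = ast.parse('-1', mode='eval')
print(ast.dump(tree.body))   # UnaryOp(op=USub(), operand=...)
```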
* Use native yaml for apache2 test
* Test removal of default modules with force
a2enmod on Debian has `-f`, but not on SUSE (it runs there without force).
Therefore don't test that option on SUSE.
The docs already specify that the option is intended for Debian systems
only.
* Fix synchronize retries
The synchronize module munges its task args on every invocation of
run(). This was problematic because the munged data was not fit for use
by a second pass of the synchronize module. Correct this by using a copy
of the task args on every invocation of run() so that the original args
are not affected.
Local testing using this playbook seems to confirm that things work as
expected:
- hosts: all
  tasks:
    - delay: 2
      register: task_result
      retries: 1
      until: task_result.rc == 0
      synchronize:
        dest: /tmp/out
        mode: pull
        src: /tmp/nonexistent/
fixes #18281
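A sketch of the shape of the fix (the class and argument names are illustrative, not the action plugin's real code): munge a copy of the task args on every run() so a retry always starts from the pristine values.
```
class SynchronizeLikeAction(object):
    def __init__(self, task_args):
        self.task_args = task_args          # pristine args from the task

    def run(self):
        args = self.task_args.copy()        # munge a copy, never the original
        args.setdefault('rsync_path', 'rsync')
        return args
```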
* Update synchronization fixture assertions
When we started operating on a copy of the task args the test assertions
were no longer asserting things about the munged state but of the
pristine state. Convert the copy of task args to a class member so that
it can be compared against later in testing and update the assertions to
check this munged copy.
* Shuffle objects around for cleaner testing
Attach the temporary args dict to the task rather than the action as
this makes updating the existing tests cleaner.
This adds back the change to the network_cli plugin. This change adds
the ensure_connect decorator to the open_shell() method to make sure
the connection is valid before trying to open a shell.
The issue was due to the addition of the decorator that will call
_connect() when there is no connection. The _connect() method should
have been mocked in the test case. This commit fixes the test
case as well
Change was originally reverted in c414ded69a
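Roughly what an ensure_connect-style decorator does (a sketch, not the plugin's actual implementation; it assumes the object tracks _connected and has a _connect() method):
```
from functools import wraps

def ensure_connect(func):
    @wraps(func)
    def wrapped(self, *args, **kwargs):
        if not self._connected:
            self._connect()                 # lazily connect before the call
        return func(self, *args, **kwargs)
    return wrapped
```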
* make hash_params more robust in the face of many corner cases
Fixes #18680
Alternative fix to #18681
* add test case for role.hash_params
* Add role.hash_params test for more types
A set, a generator/iterable, and a Container that
is not Iterable.
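The general idea being exercised, sketched here rather than the real hash_params: recursively turn mappings and other iterables into hashable equivalents while leaving strings and scalars alone.
```
from collections.abc import Mapping

def hashable(value):
    if isinstance(value, (str, bytes)):
        return value
    if isinstance(value, Mapping):
        return frozenset((k, hashable(v)) for k, v in value.items())
    if hasattr(value, '__iter__'):
        return tuple(hashable(v) for v in value)
    return value

print(hash(hashable({'a': [1, {'b': 2}]})))
```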
* Fix regression in jinja2 include search path
Since commit 3c39bb5, the 'ansible_search_path' variable is used to set
jinja2's search path for {% include %} directives. However, this path is not
the proper one because our templates live in 'templates' subdirs in
our search path.
This is a regression because previously, our include search path would
include the dirname of the currently interpreted file, which worked most
of the time.
fixes #18526
* Fix template lookup search path
Improve fix in commit c96c853 so that the search path contains both
template-suffixed paths as well as original paths.
ref PR #18617
* Add integration test for template lookups
Tests regression at #18526
This test fails on current devel branch and succeeds on PR #18617
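A sketch of the resulting behaviour (illustrative, not the lookup's code): every search directory is offered both with and without its 'templates' subdirectory.
```
import os

def template_search_path(paths):
    result = []
    for path in paths:
        result.append(os.path.join(path, 'templates'))
        result.append(path)
    return result

print(template_search_path(['roles/web', '.']))
# ['roles/web/templates', 'roles/web', './templates', '.']
```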
* wip: add a unit test for playbook/base.py
This commit include a failing test
TestBaseSubClass.test_attr_class_post_validate
It fails with the error:
Traceback (most recent call last):
File "/home/adrian/src/ansible/test/units/playbook/test_base.py", line 264, in test_attr_class_post_validate
bsc = self._base_validate(ds)
File "/home/adrian/src/ansible/test/units/playbook/test_base.py", line 206, in _base_validate
bsc.post_validate(templar)
File "/home/adrian/src/ansible/lib/ansible/playbook/base.py", line 450, in post_validate
" Error was: %s" % (name, value, attribute.isa, e), obj=self.get_ds())
AnsibleParserError: the field 'test_attr_class_post_validate' has an invalid value (<class 'units.playbook.test_base.ExampleSubClass'>), and could not be converted to an class. Error was: test_attr_class_post_validate is not a valid <class 'units.playbook.test_base.ExampleSubClass'> (got a <class 'ansible.playbook.base.BaseMeta'> instead)
* wip, test refactoring
* wip, trying to add a parent->child
* wip, fix isa=class.
the ds the base class uses needs an instance of the class
(i.e., what's normally created by the yaml loaders)
* wip, there's no need to argue, I just don't understand parents
* stub a _preprocess_data for coverage
* cleanup, required, parent, etc
* Make sure include_role inherit variables from parent role
Setting the parent of task blocks generated by include_role after they
have been produced is not sufficient - it means the tasks don't have the
correct dependency chain set afterwards, and therefore, don't properly
inherit variables from outer roles.
In addition to manually setting the parents, pass the dep_chain when
compiling the role, such that variables are correctly imported.
Fixes #18540.
* Add tests for include_role
* Fix include_role variable inheritance for multiple parent levels
* Add test cases for VyOS commands that don't honor paging settings
Testing for issue fixed in PR #18546
* Add provider line and fix indentation
For the way we invoke the tests we need to specify the `provider:`
Also fix the indentation on `register:`
* Replace pipes.quote for shlex_quote
* More migration of pipes.quote to shlex_quote
Note that we cannot yet move module code over. Modules have six-1.4
bundled which does not have shlex_quote. This shouldn't be a problem as
the function is still importable from pipes.quote. It's just that this
has become an implementation detail that makes us want to import from
shlex instead.
Once we get rid of the python2.4 dependency we can update to a newer
version of bundled six module-side and then we're free to use
shlex_quote everywhere.
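The spelling used after the migration, via the six compatibility layer (which maps it to pipes.quote on py2 and shlex.quote on py3):
```
from six.moves import shlex_quote

print(shlex_quote("it's a test"))   # 'it'"'"'s a test'
```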
If the current ansible environment has a config setup
that doesn't use 'smart' as the configured transport,
test_play_context would fail when it assumes the
transport will be 'smart'.
Previous changes addressed a corner case, which unfortunately introduced
another bug. This patch adds a new flag to the host state (did_rescue) which
is set to true when the rescue portion of a block completes. This flag is
then checked in _check_failed_state() when the fail_state != FAILED_NONE.
This led to the discovery of another bug - current strategies are not advancing
hosts to ITERATING_COMPLETE after doing a peek at the next task, leaving the
host state in the run_state of the final task. To address this, before gathering
the list of failed hosts in StrategyBase.run(), a final pass through the iterator
for all hosts is done to ensure each host is in its final state. This way, no
strategy derived from StrategyBase has to worry about it and it's handled.
Fixes #17983
This commit extends YAML linting by enabling standard rules from the
`yamllint` tool [1]. Since syntax errors and key duplicates are already
checked since 4d48711, this change only adds detection for cosmetic
problems. It also narrows checks to the test/ dir only.
The main goal is to prevent future problems to enter the code base
without being noticed. While it would be a huge effort to be PEP8
compliant, it is relatively easy to have correct YAML style *now* and
prevent future errors by enabling linting.
Note: for those (like me) caring about code attribution: use `git blame
-w` to ignore whitespace-only changes.
Note: I disabled some linting checks (such as indentation), they can be
enforced in the future if needed. Similarly, current checks can also be
disabled. See the `.yamllint` file.
[1]: https://yamllint.readthedocs.io/
This change corrects problems reported by the `yamllint` linter.
Since key duplication problems were removed in 4d48711, this commit
mainly fixes trailing spaces and extra empty lines at beginning/end of
files.
* Added test for sequenced-name instance generation (num_instances)
* Added param-check tags to tests that only do argument checking
Should be merged AFTER ansible/ansible-modules-core#4276
- Use assertRaisesRegexp to make sure correct exceptions are raised.
- Set docker_command to avoid docker dependency (skips find_executable).
- Use a fake path for docker_command to make sure mock.patch is working.
* Fix bug (#18355) where encrypted inventories fail
This is the first part of the fix for #18355
* Make DataLoader._get_file_contents return bytes
The issue #18355 is caused by a change to inventory to
stop using _get_file_contents so that it can handle text
encoding itself to better protect against harmless text
encoding errors in ini files (invalid unicode text in
comment fields).
So this makes _get_file_contents return bytes so it and other
callers can handle the to_text().
The data returned by _get_file_contents() is now a bytes object
instead of a text object. The callers of _get_file_contents() have
been updated to call to_text() themselves on the results.
Previously, the ini parser attempted to work around
ini files that potentially include invalid unicode
in comment lines. To do this, it stopped using
DataLoader._get_file_contents() which does the decryption of
files if vault encrypted. It didn't use that because _get_file_contents
previously did to_text() on the read data itself.
_get_file_contents() returns a bytestring now, so ini.py
can call it and still special case ini file comments when
converting to_text(). That also means encrypted inventory files
are decrypted first.
Fixes #18355
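A minimal sketch of the new calling convention (simplified; the real loader also handles vault decryption): the reader hands back bytes and each caller decides how to turn them into text.
```
with open('hosts.ini', 'wb') as f:
    f.write(b'[web]\nexample.com\n')

def get_file_contents(path):
    # return raw bytes; vault decryption would happen here in the real loader
    with open(path, 'rb') as f:
        return f.read()

b_data = get_file_contents('hosts.ini')
text = b_data.decode('utf-8', errors='surrogateescape')  # caller does the to_text()
print(text)
```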
This allows validate-modules to run in an environment where
python 3 is the default. This will no longer be necessary once
validate-modules is updated to work with both python 2 and 3.
- Remove shebangs from:
- ini files
- unit tests
- module_utils
- plugins
- module_docs_fragments
- non-executable Makefiles
- Change non-modules from '/usr/bin/python' to '/usr/bin/env python'.
- Change '/bin/env' to '/usr/bin/env'.
Also removed main functions from unit tests (since they no longer
have a shebang) and fixed a python 3 compatibility issue with
update_bundled.py so it does not need to specify a python 2 shebang.
A script was added to check for unexpected shebangs in files.
This script is run during CI on Shippable.
- Update import for relocated tests.
- Fix test to expect changed from update_tags.
- Add checks for boto3 and botocore to tests.
- Set check mode with kwarg.
- Python 3 fixes for unit tests.
- Python 2.6 fix for unit tests.
- Add missing meta value for test_create_server
- Add .gitignore for pytest .cache directory
Exclude test_os_server from nose test runs since it was designed
for pytest. The test will work correctly when run using pytest.
This is a temporary issue, as we'll be moving to pytest soon.
This commit adds some unit tests for the `cloud.openstack.os_server`
module. These tests exercise `_network_args` thoroughly and
`_create_server` lightly.
These tests will **fail** until ansible/ansible-modules-core#2275 lands.
To run the tests:
pip install -r test-requirements.txt
PYTHONPATH=$PWD py.test
Originally from ansible/ansible-modules-core@3387526bca
- Correct directory name in test/README.md
- Move code-smell tests to test/sanity/code-smell
- Update code-smell.sh to use new script paths
- Add test/integration/target-prefixes.win for ansible-test
- Move module unit tests to match module directory layout
* Network Test Documentation
Will need improving over time, though this ensures that everything that was in `ansible/test-network-modules` is in `ansible/ansible`
* Update README.md
* Inventory file
* Network module prefixes
In ansible-test we should skip tests for these modules, they will be
tested via another process.
* Update target-prefixes.network
In order to support legacy plugins, the following two method signatures
are allowed for `CallbackBase.v2_playbook_on_start`:
def v2_playbook_on_start(self):
def v2_playbook_on_start(self, playbook):
Previously, the logic to handle this divergence checked to see if the
callback plugin being called supported an argument named `playbook`
in its `v2_playbook_on_start` method. This was fragile in a few ways:
- if a plugin author did not use the literal `playbook` to name their
method argument, their plugin would not be called correctly
- if a plugin author wrapped their `v2_playbook_on_start` method and
by doing so changed the argspec to no longer expose an argument
with that literal name, their plugin would not be called correctly
In order to continue to support both types of callback for backwards
compatibility while making the call more robust for plugin authors,
the logic can be reversed in order to have a positive check for the old
method signature instead of a positive check for the new one.
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
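A sketch of the reversed check (not the exact core code): positively detect the legacy zero-argument signature instead of looking for a parameter literally named playbook.
```
import inspect

def send_playbook_on_start(callback, playbook):
    params = inspect.signature(callback.v2_playbook_on_start).parameters
    if len(params) == 0:
        callback.v2_playbook_on_start()            # legacy: (self)
    else:
        callback.v2_playbook_on_start(playbook)    # current: (self, playbook)
```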
As neon is derived from Ubuntu, ansible_os_family should have the value
"Debian" instead of "Neon". Add a test case for KDE neon and set
os_family correctly for it.
If the facts returned by setup included strings that
had double quotes in them, the asserts in test_gathering_facts.yml
would fail with errors like:
"The conditional check '\"[{u'mounts': {u'options':
u'rw,context=\"system_u:\"'}}]\" != \"UNDEF_HW\"' failed. The error was:
template error while templating string: expected token 'end of statement
block', got 'system_u'. String: {% if \"[{u'mounts': {u'options':
u'rw,context=\"system_u:\"'}}]\" != \"UNDEF_HW\" %} True {% else %}
False {% endif %}"
For one example, if mount facts returned an 'options' field that
included double-quoted selinux context ids, the test would fail.
Fix is removing the double quoting in the assert 'that:' lines,
and removing the unneeded double curly brackets.
Nothing seems to use this now.
Was originally added in 2d11cfab92f9d26448461b4bc81f466d1910a15e
but the code that used it was removed in
e02b98274b
On openSUSE Tumbleweed, lsb-release -a currently reports
the distributor ID as "openSUSE Tumbleweed". On openSUSE
Leap, the distributor ID is "SUSE LINUX".
Add them to the OS_FAMILY dict as Suse family systems.
Also add an entry to TESTSETS in test_distribution_version.py
for openSUSE Tumbleweed.
Since the passlib algorithm sometimes takes bytes and sometimes
does not, depending on an internal variable, we have to convert
based on it, or it fails with "TypeError: salt must be bytes,
not str" (or unicode instead of bytes).
However, relying on an internal structure like that is not great.
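A sketch of the conversion being described (illustrative; the real code inspects passlib's internal flag to decide which branch to take):
```
def normalize_salt(salt, wants_bytes):
    # hand the hashing backend whichever type it expects for the salt
    if wants_bytes:
        return salt if isinstance(salt, bytes) else salt.encode('utf-8')
    return salt.decode('utf-8') if isinstance(salt, bytes) else salt

print(normalize_salt('NaCl', True))    # b'NaCl'
print(normalize_salt(b'NaCl', False))  # 'NaCl'
```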
* Add tag verification test (ansible-modules-core PR 2654)
* Fix typo
* Use smaller repo for testing, add dependency control
* Test if gpg exists before running git signing tasks
* Correct the test conditionals so that gpg1 is tested
Currently (pre-repomerge) we aren't running sanity.sh from
ansible/ansible; after the merge we will. Therefore I've added the
requirements here, rather than in ansible-modules-*/test/utils/shippable