* Add an encode() method to AnsibleVaultEncryptedUnicode
Without it, calling encode() on it results in a bytestring
of the encrypted !vault-encrypted string.
The ssh connection plugin triggers this if ansible_password
comes from a var using !vault-encrypted. That path ends up
calling .encode() instead of using __str__.
Fixes #19795
* Fix str.encode() errors on py2.6
py2.6 str.encode() does not take keyword arguments.
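For illustration, a minimal self-contained sketch of the idea (VaultedString and its attributes here are stand-ins; the real class is AnsibleVaultEncryptedUnicode and decrypts on demand):
```
class VaultedString(object):
    """Toy stand-in for AnsibleVaultEncryptedUnicode, showing the encode() fix."""

    def __init__(self, ciphertext, plaintext):
        self.ciphertext = ciphertext
        self._plaintext = plaintext  # the real class gets this by decrypting

    def __str__(self):
        return self._plaintext

    def encode(self, encoding=None, errors=None):
        # Without this override, encode() would hand back the bytes of the
        # '!vault-encrypted' blob.  Arguments are passed positionally because
        # py2.6 str.encode() does not accept keywords.
        return str(self).encode(encoding or 'utf-8', errors or 'strict')
```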
Support for the Google API and GCloud-Python clients has been added.
The three supported libraries:
* GCloud-Python: A new function, get_google_cloud_credentials, should be used. The credentials object returned can be passed to any gcloud-python client. Using this client library requires the installation of gcloud-python. This is the preferred library for new modules.
* Google API: A new function, gcp_api_auth, should be used to take advantage of services requiring this client. This client library should be used if the desired functionality is not available in GCloud-Python. Using this library requires the installation of google-api-python-client.
* libcloud: Existing function, gcp_connect, should be used. The interface and return values have not changed and existing modules (such as gce, gce_pd and gce_net) should work without modification. Note that the credentials-fetching code has been refactored out of gcp_connect so that it can be reused by all connection functions. To use this function, apache-libcloud must be installed.
Import guards have been added and will only be triggered if a user tries to use a function that is missing dependencies.
Credential-specifying mechanisms (i.e., ansible module params, env vars and libcloud secrets.py) have not changed. They have been refactored and unit tests have been added to allow for changes going forward. We are deprecating (and removing in a subsequent release) the ability to specify credentials via the libcloud secrets file. Also, we have deprecated (and also plan to remove in a subsequent release) the ability to use a p12 pem file for a key - the JSON format is strongly preferred. Deprecation warnings have been added for both of these issues (see the Ansible docs on how to disable deprecation warnings).
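For new-style modules, the intended usage looks roughly like this (a sketch only; the exact signature and return value of get_google_cloud_credentials are defined in lib/ansible/module_utils/gcp.py, so treat the call shape here as an assumption):
```
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.gcp import get_google_cloud_credentials

def main():
    module = AnsibleModule(argument_spec=dict())
    # Preferred path for new modules: fetch credentials once, then hand them
    # to a gcloud-python client, e.g.
    #   from google.cloud import storage
    #   client = storage.Client(credentials=credentials)
    credentials = get_google_cloud_credentials(module)

if __name__ == '__main__':
    main()
```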
The gce_tag module now supports updating tags on multiple instances via an instance_pattern field. Full Python regex is supported in instance_pattern.
'instance_pattern' and 'instance_name' are mutually exclusive and one must be specified.
The integration test for the gce_tag module has been updated to support the instance_pattern parameter. Unit tests have been added to test the list-manipulation functionality.
Run the integration test with:
TEST_FLAGS='--tags "test_gce_tag"' make gce
Run the unit tests with:
python test/units/modules/cloud/google/test_gce_tag.py
This plugin can be used with the lpass CLI for LastPass.
[lastpass-cli](https://github.com/lastpass/lastpass-cli)
Example:
Add a lookup to your playbooks/variables somewhere:
```
some_variable: "{{ lookup('lastpass','Some Lastpass entry name or ID', field='username') }}"
```
Usage:
* start a lpass session prior to using ansible
* run ansible
* logout when finished
```
lpass login user@domain.com
ansible-playbook foo.yml
lpass logout
```
Since we no longer use a post-validated task in _process_pending_results, we
need to be sure to template fields used in original_task as they are raw and
may contain variables.
This patch also moves the handler tracking to be per-uuid, not per-object.
Doing it per-object had implications for the above due to the fact that the
copy of the original task is now being used, so the only sure way is to track
based on the uuid instead.
Fixes #18289
* Fix synchronize retries
The synchronize module munges its task args on every invocation of
run(). This was problematic because the munged data was not fit for use
by a second pass of the synchronize module. Correct this by using a copy
of the task args on every invocation of run() so that the original args
are not affected.
Local testing using this playbook seems to confirm that things work as
expected:
```
- hosts: all
  tasks:
    - delay: 2
      register: task_result
      retries: 1
      until: task_result.rc == 0
      synchronize:
        dest: /tmp/out
        mode: pull
        src: /tmp/nonexistent/
```
Fixes #18281
* Update synchronization fixture assertions
When we started operating on a copy of the task args the test assertions
were no longer asserting things about the munged state but of the
pristine state. Convert the copy of task args to a class member so that
it can be compared against later in testing and update the assertions to
check this munged copy.
* Shuffle objects around for cleaner testing
Attach the temporary args dict to the task rather than the action as
this makes updating the existing tests cleaner.
This adds back the change to the network_cli plugin. This change adds
the ensure_connect decorator to the open_shell() method to make sure
the connection is valid before trying to open a shell.
The issue was due to the addition of the decorator that will call
_connect() when there is no connection. The _connect() method should
have been mocked in the test case. This commit fixes the test
case as well.
Change was originally reverted in c414ded69a
* make hash_params more robust in the face of many corner cases
Fixes #18680
Alternative fix to #18681
* add test case for role.hash_params
* Add role.hash_params test for more types
A set, a generator/iterable, and a Container that
is not Iterable.
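A rough, self-contained sketch of the kind of normalization the real hash_params (in lib/ansible/playbook/role/__init__.py) has to do to survive those corner cases:
```
try:
    from collections.abc import Mapping, Set   # Python 3
except ImportError:
    from collections import Mapping, Set       # Python 2

def hash_params(params):
    """Turn role params into something hashable, tolerating mappings, sets,
    generators/iterables, and plain non-iterable values."""
    if params is None or isinstance(params, (str, bytes)):
        return params
    if isinstance(params, Mapping):
        return frozenset((k, hash_params(v)) for k, v in params.items())
    if isinstance(params, (Set, list, tuple)) or hasattr(params, '__iter__'):
        return frozenset(hash_params(item) for item in params)
    # Anything else (e.g. a Container that is not Iterable) is used as-is.
    return params
```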
* wip: add a unit test for playbook/base.py
This commit includes a failing test
TestBaseSubClass.test_attr_class_post_validate
It fails with the error:
Traceback (most recent call last):
File "/home/adrian/src/ansible/test/units/playbook/test_base.py", line 264, in test_attr_class_post_validate
bsc = self._base_validate(ds)
File "/home/adrian/src/ansible/test/units/playbook/test_base.py", line 206, in _base_validate
bsc.post_validate(templar)
File "/home/adrian/src/ansible/lib/ansible/playbook/base.py", line 450, in post_validate
" Error was: %s" % (name, value, attribute.isa, e), obj=self.get_ds())
AnsibleParserError: the field 'test_attr_class_post_validate' has an invalid value (<class 'units.playbook.test_base.ExampleSubClass'>), and could not be converted to an class. Error was: test_attr_class_post_validate is not a valid <class 'units.playbook.test_base.ExampleSubClass'> (got a <class 'ansible.playbook.base.BaseMeta'> instead)
* wip, test refactoring
* wip, trying to add a parent->child
* wip, fix isa=class.
the ds the base uses needs an instance of the class
(i.e., what's normally created by the yaml loaders)
* wip, there's no need to argue, I just don't understand parents
* stub a _preprocess_data for coverage
* cleanup, required, parent, etc
* Make sure include_role inherit variables from parent role
Setting the parent of task blocks generated by include_role after they
have been produced is not sufficient - it means the tasks don't have the
correct dependency chain set afterwards, and therefore, don't properly
inherit variables from outer roles.
In addition to manually setting the parents, pass the dep_chain when
compiling the role, such that variables are correctly imported.
Fixes #18540.
* Add tests for include_role
* Fix include_role variable inheritance for multiple parent levels
* Replace pipes.quote with shlex_quote
* More migration of pipes.quote to shlex_quote
Note that we cannot yet move module code over. Modules have six-1.4
bundled which does not have shlex_quote. This shouldn't be a problem as
the function is still importable as pipes.quote. It's just that
pipes.quote has become an implementation detail, which is why we want
to import from shlex instead.
Once we get rid of the python2.4 dependency we can update to a newer
version of bundled six module-side and then we're free to use
shlex_quote everywhere.
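For code that has to run on both Python 2 and 3 without depending on a newer six, a small compat shim is enough:
```
try:
    from shlex import quote as shlex_quote   # Python 3.3+
except ImportError:
    from pipes import quote as shlex_quote   # Python 2

print(shlex_quote("a file with spaces & $chars"))
```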
If the current ansible environment has a config setup
that doesn't use 'smart' as the configured transport,
test_play_context would fail when it assumes the
transport will be 'smart'.
Previous changes addressed a corner case, which unfortunately introduced
another bug. This patch adds a new flag to the host state (did_rescue) which
is set to true when the rescue portion of a block completes. This flag is
then checked in _check_failed_state() when the fail_state != FAILED_NONE.
This led to the discovery of another bug - current strategies are not advancing
hosts to ITERATING_COMPLETE after doing a peek at the next task, leaving the
host state in the run_state of the final task. To address this, before gathering
the list of failed hosts in StrategyBase.run(), a final pass through the iterator
for all hosts is done to ensure each host is in its final state. This way, no
strategy derived from StrategyBase has to worry about it and it's handled.
Fixes #17983
- Use assertRaisesRegexp to make sure correct exceptions are raised.
- Set docker_command to avoid docker dependency (skips find_executable).
- Use a fake path for docker_command to make sure mock.patch is working.
* Fix bug (#18355) where encrypted inventories fail
This is the first part of the fix for #18355
* Make DataLoader._get_file_contents return bytes
The issue #18355 is caused by a change to inventory to
stop using _get_file_contents so that it can handle text
encoding itself to better protect against harmless text
encoding errors in ini files (invalid unicode text in
comment fields).
So this makes _get_file_contents return bytes, and it and other
callers handle the to_text() themselves.
The data returned by _get_file_contents() is now a bytes object
instead of a text object. The callers of _get_file_contents() have
been updated to call to_text() themselves on the results.
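The caller-side pattern looks roughly like this (the (b_data, show_content) tuple shape matches how DataLoader returns file contents, but treat it as an assumption and check the actual method before reusing this):
```
from ansible.module_utils._text import to_text

def read_decrypted_text(loader, path):
    # Callers now receive bytes from _get_file_contents() and decode themselves.
    b_data, show_content = loader._get_file_contents(path)
    return to_text(b_data, errors='surrogate_or_strict')
```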
Previously, the ini parser attempted to work around
ini files that potentially include invalid unicode
in comment lines. To do this, it stopped using
DataLoader._get_file_contents() which does the decryption of
files if vault encrypted. It didn't use that because _get_file_contents
previously did to_text() on the read data itself.
_get_file_contents() returns a bytestring now, so ini.py
can call it and still special case ini file comments when
converting to_text(). That also means encrypted inventory files
are decrypted first.
Fixes #18355
- Remove shebangs from:
- ini files
- unit tests
- module_utils
- plugins
- module_docs_fragments
- non-executable Makefiles
- Change non-modules from '/usr/bin/python' to '/usr/bin/env python'.
- Change '/bin/env' to '/usr/bin/env'.
Also removed main functions from unit tests (since they no longer
have a shebang) and fixed a python 3 compatibility issue with
update_bundled.py so it does not need to specify a python 2 shebang.
A script was added to check for unexpected shebangs in files.
This script is run during CI on Shippable.
- Update import for relocated tests.
- Fix test to expect changed from update_tags.
- Add checks for boto3 and botocore to tests.
- Set check mode with kwarg.
- Python 3 fixes for unit tests.
- Python 2.6 fix for unit tests.
- Add missing meta value for test_create_server
- Add .gitignore for pytest .cache directory
Exclude test_os_server from nose test runs since it was designed
for pytest. The test will work correctly when run using pytest.
This is a temporary issue, as we'll be moving to pytest soon.
- Correct directory name in test/README.md
- Move code-smell tests to test/sanity/code-smell
- Update code-smell.sh to use new script paths
- Add test/integration/target-prefixes.win for ansible-test
- Move module unit tests to match module directory layout
In order to support legacy plugins, the following two method signatures
are allowed for `CallbackBase.v2_playbook_on_start`:
    def v2_playbook_on_start(self):
    def v2_playbook_on_start(self, playbook):
Previously, the logic to handle this divergence checked to see if the
callback plugin being called supported an argument named `playbook`
in its `v2_playbook_on_start` method. This was fragile in a few ways:
- if a plugin author did not use the literal `playbook` to name their
method argument, their plugin would not be called correctly
- if a plugin author wrapped their `v2_playbook_on_start` method and
by doing so changed the argspec to no longer expose an argument
with that literal name, their plugin would not be called correctly
In order to continue to support both types of callback for backwards
compatibility while making the call more robust for plugin authors,
the logic can be reversed in order to have a positive check for the old
method signature instead of a positive check for the new one.
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
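A condensed sketch of that reversed check (the helper name and Python 3 inspect usage are illustrative, not the exact TaskQueueManager code):
```
import inspect

def send_playbook_on_start(callback_plugin, playbook):
    # Positively detect the OLD signature (only 'self', no other positional
    # argument); anything else is given the playbook object.
    method = callback_plugin.v2_playbook_on_start
    argspec = inspect.getfullargspec(method)
    if len(argspec.args) == 1 and argspec.varargs is None:
        method()
    else:
        method(playbook)
```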
As neon is derived from Ubuntu, ansible_os_family should have the value
"Debian" instead of "Neon". Add a test case for KDE neon and set
os_family correctly for it.
Nothing seems to use this now.
Was originally added in 2d11cfab92f9d26448461b4bc81f466d1910a15e
but the code that used it was removed in
e02b98274b
On openSUSE Tumbleweed, lsb-release -a currently reports
the distributor ID as "openSUSE Tumbleweed". On openSUSE
Leap, the distributor ID is "SUSE LINUX".
Add them to the OS_FAMILY dict as Suse family systems.
Also add an entry to TESTSETS in test_distribution_version.py
for openSUSE Tumbleweed.
Implement tag and skip_tag handling in the CLI() class. Change tag and
skip_tag command line options to be accepted multiple times on the CLI
and added together rather than overwritten.
* Make it configurable whether to merge or overwrite multiple --tags arguments
* Make the base CLI class an abstract base class so we can implement
functionality in parse() but still make subclasses implement it.
* Deprecate the overwrite feature of --tags with a message that the
default will change in 2.4 and go away in 2.5.
* Add documentation for merge_multiple_cli_flags
* Fix galaxy search so its tags argument does not conflict with generic tags
* Unit tests and more integration tests for tags
Prior to this commit, the ini parser would fail if the inventory was
not 100% utf-8. This commit makes this slightly more robust by
omitting full-line comments from that requirement.
Fixes #17593
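In practice that means something like the following when decoding each line (the error-handler names are the module_utils._text ones; the function itself is illustrative):
```
from ansible.module_utils._text import to_text

def decode_inventory_line(b_line):
    # Full-line comments (# or ;) may contain undecodable bytes; everything
    # else still has to be valid UTF-8.
    if b_line.lstrip().startswith((b'#', b';')):
        return to_text(b_line, errors='surrogate_or_replace')
    return to_text(b_line, errors='surrogate_or_strict')
```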
If sftp fails, fall back to scp by default. This saves users
from having to know about the scp_if_ssh setting when sftp is broken
on the remote host.
Python2 seems to allow any integer. Python3.5 on Linux seems to allow
a 32 bit unsigned int. Python3.5 on El Capitan seems to limit it to
a smaller size... perhaps a 16 bit int.
This adds some test data to test_facts.py that
includes mnt points that have a single quote in
the path.
See https://github.com/ansible/ansible/issues/16855.
The bug was already fixed via other changes, but this is
for regression testing.
A parameter of type int should accept int and string, but not float.
A parameter of type float should accept float, int, and string.
Also reset the arguments in another test so that it runs cleanly. This
agrees with what all the other tests are doing.
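As a concrete illustration of those coercion rules (the parameter names are made up):
```
# type='int'   : 3 and "3" are accepted, 3.5 is rejected
# type='float' : 2.5, 2, and "2.5" are all accepted
argument_spec = dict(
    count=dict(type='int'),
    ratio=dict(type='float'),
)
```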
* Improve unit testing of 'password' lookup
The tests showed some UnicodeErrors for the
cases where the 'chars' param include unicode,
causing the 'getattr(string, c, c)' to fail.
So the candidate char generation code try/excepts
UnicodeErrors there now.
Some refactoring of the password.py module to make
it easier to test, and some new tests that cover more
of the password and salt generation.
* More refactoring and fixes.
* manual merge of text enc fixes from pr17475
* moving methods to module scope
* more refactoring
* A few more text encoding fixes/merges
* remove now unused code
* Add test cases and data for _gen_candidate_chars
* more test coverage for password lookup
* wip
* More text encoding fixes and test coverage
* cleanups
* reenable text_type assert
* Remove unneeded conditional in _random_password
* Add docstring for _gen_candidate_chars
* remove redundant to_text and list comprehension
* Move set of 'chars' default in _random_password
on py2, C.DEFAULT_PASSWORD_CHARS is a regular str
type, so the assert here fails. Move setting the
default into the method and to_text(DEFAULT_PASSWORD_CHARS)
if it's needed.
* combine _random_password and _gen_password
* s/_create_password_file/_create_password_file_dir
* native strings for exception msgs
* move password to_text to _read_password_file
* move to_bytes(content) to _write_password_file
* add more test assertions about genned pw's
* Some cleanups to alikins and abadger's password lookup refactoring:
* Make DEFAULT_PASSWORD_CHARS into a text string in constants.py
- Move this into the nonconfigurable section of constants.
* Make utils.encrypt.do_encrypt() return a text string because all the
hashes in passlib should be returning ascii-only strings and they are
text strings in python3.
* Make the split up of functions more sane:
- Don't split such that conditionals have to occur in two separate functions.
- Don't go overboard: Good to split file system manipulation from parsing
but we don't need to do every file manipulation in a separate
function.
- Don't split so that creation of the password store happens in two
parts.
- Don't split in such a way that no decisions are made in run.
* Organize functions by when it gets called from run().
* Run all potential characters through the gen_candidate_chars function
because it does both normalization and validation.
* docstrings for functions
* Change when we store salt slightly. Store it whenever it was already
present in the file as well as when encrypt is requested. This will
head off potential idempotence bugs where a user has two playbook tasks
using the same password and in one they need it encrypted but in the
other they need it plaintext.
* Reorganize tests to follow the order of the functions so it's easier
to figure out if/where a function has been tested.
* Add tests for the functions that read and write the password file.
* Add tests of run() when the password has already been created.
* Test coverage currently at 100%
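The candidate-character expansion referenced above looks roughly like this (a self-contained sketch; the real helper lives in lib/ansible/plugins/lookup/password.py):
```
import string

from ansible.module_utils._text import to_text

def _gen_candidate_chars(characters):
    """Expand class names like 'ascii_letters' or 'digits' into one text
    string of candidate characters; unknown names are used literally."""
    chars = []
    for chunk in characters:
        # getattr(string, c, c): named classes come from the string module,
        # anything else falls back to the literal text of the spec.
        chars.append(to_text(getattr(string, to_text(chunk), chunk), errors='strict'))
    return u''.join(chars).replace(u'"', u'').replace(u"'", u'')
```
For example, _gen_candidate_chars(['ascii_letters', 'digits']) yields the letters and digits with quote characters stripped.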
As suggested in feedback on
https://github.com/ansible/ansible/pull/17575, add
os_family to test_distribution_version. Add the
correct os_family to the existing testcase data
entries.
Also add os_family to the output of
gen_distribution_version_testcase.py so any new
generated entries will contain this data.
* Make is_encrypted_file handle both files opened in text and binary mode
On python3, by default files are opened in text mode. Since we know
the encoding of vault files (and especially the header which is the
first set of bytes) we can decide whether the file is an encrypted
vault file in either case.
* Fix is_encrypted_file not resetting the file position
* Update is_encrypted_file to check that all the data in the file is ascii
* For is_encrypted_file(), add start_pos and count parameters
This allows callers to specify reading vaulttext from the middle of
a file if necessary.
* Combine VaultLib.encrypt() and VaultLib.encrypt_bytestring()
* Change vault's is_encrypted() to take either text or byte strings and to return False if any part of the data is non-ascii.
* Remove unnecessary use of six.b
* Vault Cipher: mark a few methods as private.
* VaultAES256._is_equal throws a TypeError if given non byte strings
* Make VaultAES256 methods that don't need self staticmethods and classmethods
* Mark VaultAES and is_encrypted as deprecated
* Get rid of VaultFile (unused and feature implemented in a different way)
* Normalize variable and parameter names on plaintext, ciphertext, vaulttext
* Normalize variable and parameter names on "b_" prefix when dealing with bytes
* Test changes:
* Remove redundant tests (both checking the same byte string)
* Fix use of format string without format operator
* Enable vault editor tests on python3
* Initialize the vault_cipher for VaultAES256 testing in setUp()
* Make assertTrue and assertFalse take the actual method calls for
better error messages.
* Test that non-ascii byte strings compare correctly.
* Test that unicode strings and ints raise TypeError
* Test-specific:
* Removed test_methods_exist(). We only have one VaultLib so the
implementation is the assurance that the methods exist. (Can use an abc for
this if it changes).
* Add tests for both byte string and text string input where the API takes either.
* Convert "assert" to unittest assert functions or add a custom message where
that will make failures easier to debug.
* Move instantiating the VaultLib into setUp().
* Added aws_retry decorator function with unit tests
* Restructured the code to be used with a base class.
This base class CloudRetry can be reused by any other cloud provider.
This decorator should be used in situations where you need to implement
a backoff algorithm and want to retry based on the status code from the
exception.
* updated documentation
* fixed tabs
* added botocore and boto3 to requirements.txt
* removed cloud.py from py24 tests, as it depends on boto3
* fix relative imports
* updated test to be 2.6 compat
* updated method name from retry to backoff
* readded lxd
* Updated default backoff from 2 seconds to 1.1s.
This will be about a total of 48 seconds in 10 tries. This is
configurable.
We couldn't copy to_unicode, to_bytes, to_str into module_utils because
of licensing. So once we created it, we had two sets of functions that did
the same things but had different implementations. To remedy that, this
change removes the ansible.utils.unicode versions of those functions.
Cache the group data used for `VariableManager._get_magic_variables()`.
This saves a lot of time re-iterating the nearly always constant global
list of groups and their members.
Generate once and cache, and invalidate cache in case `add_host:` or
`group_by:` are used.
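A minimal sketch of that cache-and-invalidate pattern (class, attribute, and method names here are illustrative, not the actual VariableManager internals):
```
class GroupsCache(object):
    """Illustrative cache of the groups/hosts mapping used for magic vars."""

    def __init__(self, inventory):
        self._inventory = inventory
        self._cached_groups = None

    def magic_groups(self):
        if self._cached_groups is None:
            self._cached_groups = dict(
                (name, [h.name for h in group.get_hosts()])
                for name, group in self._inventory.groups.items()
            )
        return self._cached_groups

    def invalidate(self):
        # Called whenever add_host: or group_by: changes group membership.
        self._cached_groups = None
```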
* Port set_*_if_different functions to python3
* Add surrogate_or_strict and surrogate_or_replace error handlers for
to_text, to_bytes, to_native
* Set default error handler to surrogate_or_replace
* Make use of the new error handlers in the already ported code
* Move the unittests for module_utils._text as they aren't in basic.py
* Cleanup around SEQUENCETYPE. On python2.6+ SEQUENCETYPE includes
strings so make sure code omits those explicitly if necessary
* Allow arg_spec aliases to be other sequence types
* Fix to_native call in selinux_context and selinux_default_context to
use the error handler correctly.
* Port set_mode_if_different to work on python3
* Port atomic_move to work on python3
* Fix check_password_prompt variable which wasn't renamed properly
tempfile.NamedTemporaryFile keeps a file handle, causing os.rename() to fail with Windows-based vboxfs: [Errno 26] Text file busy.
Changed NamedTemporaryFile to mkstemp() and added a finally block to unlink the temp file in each and every case.
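The resulting pattern, as a generic sketch (not the module's exact code):
```
import os
import tempfile

def write_atomically(dest, content):
    # mkstemp() keeps no open handle on the temp file, so os.rename() works on
    # vboxfs, and the finally block removes the temp file in every case.
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(dest) or '.')
    try:
        with os.fdopen(fd, 'wb') as f:
            f.write(content)
        os.rename(tmp_path, dest)
    finally:
        if os.path.exists(tmp_path):
            os.unlink(tmp_path)
```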
Make some python3 fixes to make the unittests pass:
* galaxy imports
* dictionary iteration in role requirements
* swap_stdout helper for unittests
* Normalize to text string in a facts.py function
Make !vault-encrypted create an AnsibleVaultUnicode
yaml object that can be used as a regular string object.
This allows a playbook to include an encrypted vault
blob for the value of a yaml variable. A 'secret_password'
variable can have its value encrypted instead of having
to vault encrypt an entire vars file.
Add __ENCRYPTED__ to the vault yaml types so
template.Template can treat it similar
to __UNSAFE__ flags.
vault.VaultLib api changes:
- Split VaultLib.encrypt to encrypt and encrypt_bytestring
- VaultLib.encrypt() previously accepted the plaintext data
as either a byte string or a unicode string.
Doing the right thing based on the input type would fail
on py3 if given an arg of type 'bytes'. To simplify the
API, vaultlib.encrypt() now assumes input plaintext is a
py2 unicode or py3 str. It will encode to utf-8 then call
the new encrypt_bytestring(). The new methods are less
ambiguous.
- moved VaultLib.is_encrypted logic to vault module scope
and split to is_encrypted() and is_encrypted_file().
Add a test/unit/mock/yaml_helper.py.
It has some helpers for testing yaml parsing.
Integration tests added as roles test_vault and test_vault_embedded
* Give native strings to selinux library functions.
SELinux takes pathnames as native strings. That means we need to
convert to bytes on python2 and convert to text on python3.
Fixes #17155
* Read kitchen documentation, make module_utils params more like kitchen API
* Remove none nonstring strategy and add strict
* Raise TypeError on invalid nonstring strategy
* Document to_native()
* Make unittests for testing module_utils.text
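Roughly what "native string" means here, as a small sketch (to_native is the real module_utils helper; the wrapper function is made up):
```
from ansible.module_utils._text import to_native

def selinux_path_arg(path):
    # "Native" string: bytes on Python 2, text on Python 3 -- the form the
    # selinux bindings expect for pathnames.
    return to_native(path, errors='surrogate_or_strict')
```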
* Rm py2.7+ code in docker connection plugin
The docker connection plugin was using subprocess.check_output
which only exists in python 2.7 and later. Connection plugins
need to support python2.6 so this replaces it with Popen/communicate()
* Handle docker ver errors in docker connection
Add unit tests for DockerConnection
Fixes #16971
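The py2.6-safe replacement for check_output looks roughly like this (a generic sketch, not the plugin's exact code):
```
import subprocess

def run_cmd(cmd):
    # subprocess.check_output() only exists on Python 2.7+; Popen/communicate()
    # behaves the same way on 2.6.
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = p.communicate()
    if p.returncode != 0:
        raise RuntimeError('%s failed: %s' % (cmd, stderr))
    return stdout
```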
test/units/plugins/action/test_action.py had code
for handling a bug in python 3.4's mock_open that
causes errors when reading binary data.
Moved to compat/tests/mock.py so other tests can
use it by default.
Fixes #10779
Refactor some of the block device, mount point, and
mtab/fstab facts collection for linux for better
performance on systems with lots of block devices.
Instead of invoking 'lsblk' for every entry in mtab,
invoke it once, then map the results to mtab entries.
Change the args used for invoking 'findmnt' since the
previous combination of args conflicted and would
always fail on some systems depending on version.
Add test cases for facts Hardware()/Network()/Virtual() classes
__new__ method and verify they create the proper subclass based
on the platform.system() results.
Split out all the 'invoke some command and grab its output'
bits related to linux mount paths into their own methods so
it is easier to mock them in unit tests.
Fix the DragonFly* classes that did not define a 'platform'
class attribute. This caused FreeBSD systems to potentially
get the DragonFly* subclasses incorrectly. In practice it
didn't matter much since the DragonFly* subclasses duplicated
the FreeBSD ones. Actual DragonFly systems would end up with
the generic Hardware() etc instead of the DragonFly* classes.
Fix Hardware.__new__() on PY3: passing args to __new__
would cause "object() takes no parameters" errors, so
check for PY3 and just call __new__ without the args. See
https://hg.python.org/cpython/file/44ed0cd3dc6d/Objects/typeobject.c#l2818
for some explanation.
Instead of immediately returning a failed code (indicating a break in
the play execution), we internally 'or' that failure code with the result
(now an integer flag instead of a boolean) so that we can properly handle
the rescue/always portions of blocks and still remember that the break
condition was hit.
Fixes #16937
Rather than repeatedly searching for tasks by uuid via iterating over
all known blocks, cache the tasks when they are added to the PlayIterator
so the lookup becomes a simple key check in a dict.
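A minimal sketch of that lookup cache (names are illustrative; the real cache lives on the PlayIterator):
```
class TaskUuidCache(object):
    """Illustrative stand-in: register tasks once, then look them up by uuid
    instead of walking every known block."""

    def __init__(self):
        self._uuid_to_task = {}

    def add_tasks(self, tasks):
        for task in tasks:
            self._uuid_to_task[task._uuid] = task

    def get_original_task(self, uuid):
        return self._uuid_to_task.get(uuid)
```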