* Refactor OpenBSD sysctl based detection in a separate class
The idea is to later reuse this code for NetBSD and FreeBSD, which
use different sysctl keys for vendor and product.
* Add detection of virtualisation on NetBSD
* Add support to detect running as a Xen guest
tested on NetBSD 7 on Rackspace.
* Add support for OpenBSD dmi fact gathering
* Refactor get_sysctl in the Hardware class
Due to differences between Darwin/NetBSD and OpenBSD, we
have to change the regexp used to split the key/value (see the sketch after this list)
* Add support for dmi facts on NetBSD
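As a rough illustration of the kind of regexp change involved (a hedged sketch, not the actual Ansible code; the function name is made up), one split pattern can cover the 'key: value' style of Darwin, the 'key = value' style of NetBSD and the 'key=value' style of OpenBSD:
```python
import re

def parse_sysctl_output(output):
    # Hypothetical sketch: split 'hw.model: ...' (Darwin), 'hw.model = ...'
    # (NetBSD) and 'hw.model=...' (OpenBSD) with a single regexp.
    sysctl = {}
    for line in output.splitlines():
        parts = re.split(r'\s*[:=]\s*', line, maxsplit=1)
        if len(parts) == 2:
            sysctl[parts[0]] = parts[1].strip()
    return sysctl
```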
Text strings and byte strings both have a translate method but the byte
string version is harder to use. It requires a mapping of all 256 bytes
to a translation value. Text strings only require a mapping from the
characters that are changing to the new string. Switching to text
strings on both py2 and py3 allows us to state what we're getting rid of
simply without having to rely on the maketrans() helper function.
The traceback is the following:
Traceback (most recent call last):
File \"/tmp/ansible_8s0bj604/ansible_module_setup.py\", line 134, in <module>
main()
File \"/tmp/ansible_8s0bj604/ansible_module_setup.py\", line 126, in main
data = get_all_facts(module)
File \"/tmp/ansible_8s0bj604/ansible_modlib.zip/ansible/module_utils/facts.py\", line 3641, in get_all_facts
File \"/tmp/ansible_8s0bj604/ansible_modlib.zip/ansible/module_utils/facts.py\", line 3584, in ansible_facts
File \"/tmp/ansible_8s0bj604/ansible_modlib.zip/ansible/module_utils/facts.py\", line 1600, in populate
File \"/tmp/ansible_8s0bj604/ansible_modlib.zip/ansible/module_utils/facts.py\", line 1649, in get_memory_facts
TypeError: translate() takes exactly one argument (2 given)
And the swapctl output is this:
# /sbin/swapctl -sk
total: 83090 1K-blocks allocated, 0 used, 83090 available
The only use of the code is to remove prefixes in case they are present, so just
replacing them with an empty string is sufficient (see the sketch below).
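A minimal illustration of the translate() difference (a hedged sketch with made-up values, not the module's actual code):
```python
# Text (unicode) strings: a single mapping argument, same on py2 and py3.
line = u'total: 83090 1K-blocks allocated, 0 used, 83090 available'
cleaned = line.translate({ord(u'K'): None})  # drop every 'K'

# Byte strings need a full 256-byte table (built with maketrans), plus a
# deletechars argument on py2; calling that py2 byte-string form on a py3
# text string is what produced:
#   TypeError: translate() takes exactly one argument (2 given)
```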
smbios -i 256 returns:
# smbios -i 256
ID SIZE TYPE
256 77 SMB_TYPE_SYSTEM (system information)
Manufacturer: Red Hat
Product: KVM
Version: RHEL 6.4.0 PC
UUID: 8a3b8b1a-ba59-1a4b-5f85-ab53a5a885a9
Wake-Up Event: 0x6 (power switch)
SKU Number:
Family: Red Hat Enterprise Linux
* Fix bug (#18355) where encrypted inventories fail
This is the first part of the fix for #18355
* Make DataLoader._get_file_contents return bytes
The issue #18355 is caused by a change to inventory to
stop using _get_file_contents so that it can handle text
encoding itself to better protect against harmless text
encoding errors in ini files (invalid unicode text in
comment fields).
So this makes _get_file_contents return bytes so that it and other
callers can handle the to_text() conversion themselves.
The data returned by _get_file_contents() is now a bytes object
instead of a text object. The callers of _get_file_contents() have
been updated to call to_text() themselves on the results.
Previously, the ini parser attempted to work around
ini files that potentially include invalid unicode
in comment lines. To do this, it stopped using
DataLoader._get_file_contents() which does the decryption of
files if vault encrypted. It didn't use that because _get_file_contents
previously did to_text() on the read data itself.
_get_file_contents() returns a bytestring now, so ini.py
can call it and still special case ini file comments when
converting to_text(). That also means encrypted inventory files
are decrypted first.
Fixes #18355
So to get the type of the python interpreter, we need to look at
sys.implementation.name, which returns 'cpython' instead of 'CPython',
but that's upstream breakage, so there is not much we can do.
While testing on netbsd 6.0, ansible setup failed with:
Traceback (most recent call last):
File \"/tmp/ansible_m2ieeq/ansible_module_setup.py\", line 134, in <module>
main()
File \"/tmp/ansible_m2ieeq/ansible_module_setup.py\", line 126, in main
data = get_all_facts(module)
File \"/tmp/ansible_m2ieeq/ansible_modlib.zip/ansible/module_utils/facts.py\", line 3609, in get_all_facts
File \"/tmp/ansible_m2ieeq/ansible_modlib.zip/ansible/module_utils/facts.py\", line 3552, in ansible_facts
File \"/tmp/ansible_m2ieeq/ansible_modlib.zip/ansible/module_utils/facts.py\", line 2500, in populate
File \"/tmp/ansible_m2ieeq/ansible_modlib.zip/ansible/module_utils/facts.py\", line 2584, in get_interfaces_info
File \"/tmp/ansible_m2ieeq/ansible_modlib.zip/ansible/module_utils/facts.py\", line 2644, in parse_inet_line
socket.error: illegal IP address string passed to inet_aton
The cause is having aliases on lo like this:
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 33184
inet 127.0.0.1 netmask 0xff000000
inet alias 127.1.1.1 netmask 0xff000000
So if the address is 'alias', we have to skip it.
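A hedged sketch of the guard (the real parse_inet_line logic is more involved; this only shows the skip):
```python
def parse_inet_line(line):
    # e.g. 'inet alias 127.1.1.1 netmask 0xff000000' on a NetBSD lo0 alias
    words = line.split()
    if words[1] == 'alias':
        del words[1]  # skip the 'alias' keyword so words[1] is the address
    address = words[1]
    netmask = words[3]
    return address, netmask
```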
* ANSIBLE_SSH_CONTROL_PATH_DIR option added
This removes the hardcoded value ( $HOME/.ansible/cp ) from ssh.py.
User is able to change the ControlPath directory ( the one that replaces %(directory)s ).
Fixes #18325
* Added config option in ansible.cfg
- Remove shebangs from:
- ini files
- unit tests
- module_utils
- plugins
- module_docs_fragments
- non-executable Makefiles
- Change non-modules from '/usr/bin/python' to '/usr/bin/env python'.
- Change '/bin/env' to '/usr/bin/env'.
Also removed main functions from unit tests (since they no longer
have a shebang) and fixed a python 3 compatibility issue with
update_bundled.py so it does not need to specify a python 2 shebang.
A script was added to check for unexpected shebangs in files.
This script is run during CI on Shippable.
If there is an intermittent network failure, we might be trying to reach
a URL multiple times. Without this patch, we would be re-adding the same
certificate to the OpenSSL default context multiple times.
Normally, this is no big issue, as OpenSSL will just silently ignore them,
after registering the error in its own error stack.
However, when python-cryptography initializes, it verifies that the current
error stack of the default OpenSSL context is empty, which it no longer is
due to us adding the certificates multiple times.
This results in cryptography throwing an Unknown OpenSSL Error with details:
OpenSSLErrorWithText(code=185057381L, lib=11, func=124, reason=101,
reason_text='error:0B07C065:x509 certificate routines:X509_STORE_add_cert:cert already in hash table'),
Signed-off-by: Patrick Uiterwijk <puiterwijk@redhat.com>
* Changes to be committed:
modified: lib/ansible/module_utils/nxos.py
- added configurable timeout to module parameters
modified: lib/ansible/utils/module_docs_fragments/nxos.py
- added documentation for timeout
* Changes to be committed:
modified: ansible/module_utils/nxos.py
- added timeout option for nxapi transport and added documentation
- option works with CLI or NXAPI transport
* Changes to be committed:
modified: lib/ansible/utils/module_docs_fragments/nxos.py
- Changed per comments in PR 18074
* Changes to be committed:
modified: lib/ansible/module_utils/nxos.py
- added try/except block to test for timeout
* Changes to be committed:
modified: lib/ansible/module_utils/nxos.py
- tweaked timeout
if ANSIBLE_VAULT_PASSWORD_FILE is set, 'ansible-vault rekey myvault.yml'
will fail to prompt for the new vault password, and will use
None.
Fix is to split out 'ask_vault_passwords' into 'ask_vault_passwords'
and 'ask_new_vault_passwords' to make the logic simpler. And then
make sure new_vault_pass is always set for 'rekey', and if not, then
call ask_new_vault_passwords() to set it.
ask_vault_passwords() would return values for vault_pass and new
vault_pass, and vault cli previously would not prompt for new_vault_pass
if there was a vault_pass set via a vault password file.
Fixes #18247
These ENV vars are:
- CLOUDSTACK_ZONE
- CLOUDSTACK_DOMAIN
- CLOUDSTACK_ACCOUNT
- CLOUDSTACK_PROJECT
These help to stay DRY on every task; args still take precedence.
In order to support legacy plugins, the following two method signatures
are allowed for `CallbackBase.v2_playbook_on_start`:
def v2_playbook_on_start(self):
def v2_playbook_on_start(self, playbook):
Previously, the logic to handle this divergence checked to see if the
callback plugin being called supported an argument named `playbook`
in its `v2_playbook_on_start` method. This was fragile in a few ways:
- if a plugin author did not use the literal `playbook` to name their
method argument, their plugin would not be called correctly
- if a plugin author wrapped their `v2_playbook_on_start` method and
by doing so changed the argspec to no longer expose an argument
with that literal name, their plugin would not be called correctly
In order to continue to support both types of callback for backwards
compatibility while making the call more robust for plugin authors,
the logic can be reversed in order to have a positive check for the old
method signature instead of a positive check for the new one.
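A hedged sketch of what the reversed check can look like (names are illustrative, not the exact implementation):
```python
import inspect

def send_playbook_on_start(method, playbook):
    # Positively detect the legacy signature: a bound
    # v2_playbook_on_start(self) exposes only 'self' and no *args.
    spec = inspect.getfullargspec(method)
    if len(spec.args) == 1 and spec.varargs is None:
        method()          # old-style plugin
    else:
        method(playbook)  # new-style plugin, or a wrapped/renamed argspec
```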
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
When using the ansible-galaxy CLI to import roles, it's not possible to
specify an alternate_role_name, even though the REST API seems to allow
such a thing (at least on investigation of the interactions the web app
makes). That makes importing things like:
openstack/openstack-ansible-os_cloudkitty wind up with roles named
"openstack-ansible-os_cloudkitty" instead of "os_cloudkitty".
Also, the web ui is smart and imports
"openstack-infra/ansible-role-puppet" as openstack-infra.puppet ... but
the CLI imports it as openstack-infra.ansible-role-puppet. Add that
filtering as well.
Issue ansible/galaxy-issues:#185
Mitigate the effects of observing the ssh process still running
after seeing an EOF on stdout when using OpenSSH with
ControlPersist, since it does not close the stderr file descriptor
in this case.
As neon is derived from Ubuntu, ansible_os_family should have the value
"Debian" instead of "Neon". Add a test case for KDE neon and set
os_family correctly for it.
This limitation of python-3.4 mkstemp() is the final reason we made
python-3.5 our minimum version. Since we know about it, give a nice
error to the user with a hint that Python 3.4 could be the issue.
Fixes #18160
* Use the local file's mode for the argument if not explicitly given.
Fixes https://github.com/ansible/ansible-modules-core/issues/1124
* Fix octal mode for py3
* Implement preserve instead of null
* Remove duplicate line
* Update comment
* Use stat module per toshia's suggestion
When the client certificate is already stored, lxd returns a JSON error with message "Certificate already in trust store". This "error" will occur on every task run after the initial run. The cert should be in the trust store after the first run and this error message should really only be viewed as informational as it does not indicate a real problem.
Fixes:
ansible/ansible-modules-extras#2750
Slightly better handling of http headers from an http (CONNECT) proxy. Buffers up to 128KiB of headers and raises an exception if this size is exceeded.
This could be optimized further, but for the time being it does the trick.
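Roughly how such capped buffering can look (a hedged sketch, not the plugin's actual code; names are made up):
```python
MAX_PROXY_HEADER_BYTES = 128 * 1024  # 128KiB cap

def read_connect_response_headers(sock):
    buf = b''
    while b'\r\n\r\n' not in buf:
        chunk = sock.recv(4096)
        if not chunk:
            raise Exception('proxy closed the connection before end of headers')
        buf += chunk
        if len(buf) > MAX_PROXY_HEADER_BYTES:
            raise Exception('proxy response headers exceeded 128KiB')
    return buf
```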
Two parts to this change:
* Add a new string that requests password
* Add a new glyph that can be used to separate the prompt from the
user's input, as it seems it can use a fullwidth colon rather than a colon.
Fixes #17867
- When there is no file at the destination yet, we have no modification time for the `If-Modified-Since`-Header. In this case trust the cache to make the right decision to either serve a cached version or to refresh from origin. This should help with mass-deployment scenarios where you want to use a local cache to relieve your uplink.
- If you don't trust the cache to make the right decision you can still force it to refresh by providing the `force: yes` option.
Since ifconfig/ip are not present on the system, and there is no /proc
to be parsed, the only way to get information is by looking at the
argument of the pfinet translator, the process in charge of network.
In turn, this is done with fsysopts on the appropriate path, which returns
something like this:
# fsysopts -L /servers/socket/inet
/hurd/pfinet --interface=/dev/eth0 --address=192.168.122.130
--netmask=255.255.255.0 --gateway=192.168.122.1 --address6=fe80::5254:12:ced/10
--address6=fe80::5054:ff:fe12:ced/10 --gateway6=::
So to get the IP addresses, one has to parse that string and fill the appropriate
structure.
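A hedged sketch of that parsing (helper name and fact layout are hypothetical):
```python
import shlex

def parse_pfinet_options(fsysopts_output):
    facts = {'interface': None, 'ipv4': {}, 'ipv6': []}
    for word in shlex.split(fsysopts_output):
        if not word.startswith('--') or '=' not in word:
            continue
        key, _, value = word[2:].partition('=')
        if key == 'interface':
            facts['interface'] = value
        elif key == 'address':
            facts['ipv4']['address'] = value
        elif key == 'netmask':
            facts['ipv4']['netmask'] = value
        elif key == 'address6':
            facts['ipv6'].append(value.split('/')[0])  # strip the prefix length
    return facts
```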
More information on the system and its limitations can be found at
- https://www.gnu.org/software/hurd/hurd/translator/pfinet.html
- https://www.gnu.org/software/hurd/hurd/translator/pfinet/implementation.html
- https://www.debian.org/ports/hurd/hurd-install
* Fall back to /proc/mounts if /etc/mtab does not exist
On modern systems, the file is just a compatibility symlink, and
some systems (like GNU Hurd) do not have it, but provide /proc/mounts
* Add support for uptime, memory and mount facts on GNU Hurd
Nothing seems to use this now.
It was originally added in 2d11cfab92f9d26448461b4bc81f466d1910a15e
but the code that used it was removed in
e02b98274b
On openSUSE Tumbleweed, lsb-release -a currently reports
the distributor ID as "openSUSE Tumbleweed". On openSUSE
Leap, the distributor ID is "SUSE LINUX".
Add them to the OS_FAMILY dict as Suse family systems.
Also add an entry to TESTSETS in test_distribution_version.py
for openSUSE Tumbleweed.
If hashtype for the password_hash filter is 'blowfish' and passlib is
available, hashing fails as the hash function for this is named 'bcrypt'
(and not 'blowfish_crypt'). Special case this so that the correct
function is called.
Since the passlib algorithm sometimes takes bytes and sometimes
not, depending on an internal variable, we have to convert
based on it, or it fails with "TypeError: salt must be bytes,
not str" (or unicode instead of bytes).
However, it is not great to rely on an internal structure for that.
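A hedged sketch of the special case (the helper name is hypothetical; passlib exposes the hash as bcrypt, not blowfish_crypt):
```python
import passlib.hash

def get_passlib_hasher(hashtype):
    if hashtype == 'blowfish':
        hashtype = 'bcrypt'
    # most schemes are exposed as '<name>_crypt', bcrypt simply as 'bcrypt'
    return getattr(passlib.hash, '%s_crypt' % hashtype, None) or \
        getattr(passlib.hash, hashtype)
```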
The -b option reads as follows:
` The target job is directed to ignore hangup signals. This is particularly
useful for running the target program in the background.`
If needed, '-b' can be added to become_flags
Squashed commit of the following:
commit f2c9f5c011ae8be610301d597a34bfba1a391e08
Author: Aaron Bieber <aaron@bolddaemon.com>
Date: Mon Oct 17 10:58:14 2016 -0600
remove pbrun flags
commit f402679ac177c931ad64bd13306f62512a14fcd6
Author: Aaron Bieber <aaron@bolddaemon.com>
Date: Fri Oct 14 15:29:29 2016 -0600
use Password: vs assword: for matching pbrun prompt
commit cd2e90cb65854c4cc5dd8773404e520d40f82765
Author: Aaron Bieber <aaron@bolddaemon.com>
Date: Fri Oct 14 15:28:58 2016 -0600
move -b to pbrun_flags
Fixes for non-ascii passwords on
* both python2 and python3,
* local and paramiko_ssh (ssh tested working with these changes)
* sudo and su
Fixes #16557
The PlayIterator was written without nested roles in mind, but since
include_role can nest them we need to check to see if we've moved into
a new role which is a child via nesting.
Fixes #18026
The network module will now log a message when it connects to a remote host
successfully and specify the transport used. It will also log a message
when the module disconnect() method is called.
Earlier versions of EOS that do not support config sessions would
raise an exception. This fix will now check if the device supports
sessions and, if it doesn't, fall back to not using sessions.
* Fix unbound method call for JSONEncoder
The way it is currently, it will lead to an unbound method error
```python
In [1]: import json
In [2]: json.JSONEncoder.default('object_here')
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-2-872fdacfda50> in <module>()
----> 1 json.JSONEncoder.default('object_here')
TypeError: unbound method default() must be called with JSONEncoder instance as first argument (got str instance instead)
```
But what is really wanted is to let the json module raise the "is not JSON serializable" error, which requires a bound instance of `JSONEncoder()`
```python
In [3]: json.JSONEncoder().default('object_here')
---------------------------------------------------------------------------
TypeError: 'object_here' is not JSON serializable
```
BTW: I think it should try to call the object's `.to_json` before raising, as that is a common pattern.
* Call JSONEncoder's bound `default` method using super()
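A minimal sketch of the bound call (the encoder class name and the optional `.to_json` check are illustrative, not the exact change):
```python
import json

class AnsibleJSONEncoder(json.JSONEncoder):
    def default(self, o):
        if hasattr(o, 'to_json'):
            return o.to_json()
        # bound call, so json raises the usual "is not JSON serializable"
        return super(AnsibleJSONEncoder, self).default(o)

try:
    json.dumps(object(), cls=AnsibleJSONEncoder)
except TypeError as e:
    print(e)
```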
This is a better API, as the booleans could conflict with each other.
If the config value is a string, make sure to return it as a text string
rather than a byte string.
As there was recently back-and-forth with this hardcoded value
(0.001 -> 0.01 -> 0.005), the optimal value obviously depends on the
Ansible usage scenario and is better made configurable.
This patch adds a new config option in DEFAULT section,
`internal_poll_interval`, with default of 0.001 corresponding to the
value hardcoded in Ansible v2.1.
This config option is then used instead of hardcoded values where
needed.
Related GH issue: 14219
Implement tag and skip_tag handling in the CLI() class. Change tag and
skip_tag command line options to be accepted multiple times on the CLI
and add them together rather than overwrite.
* Make it configurable whether to merge or overwrite multiple --tags arguments
* Make the base CLI class an abstract base class so we can implement
functionality in parse() but still make subclasses implement it.
* Deprecate the overwrite feature of --tags with a message that the
default will change in 2.4 and go away in 2.5.
* Add documentation for merge_multiple_cli_flags
* Fix galaxy search so its tags argument does not conflict with generic tags
* Unit tests and more integration tests for tags
In order for the config to be returned with vpn passwords, the get_config()
method now supports a keyword arg include=passwords to return the desired
configuration. This replaces the show_command argument
foo.split('\n') is picky about the type of 'foo'.
if 'foo' is a bytes type, then foo.split('\n')
will fail on py3 with:
TypeError: a bytes-like object is required, not 'str'
The foo.split('\n') change isn't strictly required
when run_command returns native str types, but it
is more idiomatic and conceptually also supports other
line endings.
Prior to this commit, the ini parser would fail if the inventory was
not 100% utf-8. This commit makes this slightly more robust by
omitting full line comments from that requirement.
Fixes #17593
* Specify run_command decode error style as arg
Instead of getting the stdout/stderr text from
run_command, and then decoding to utf-8 with a
particular error scheme, use the 'errors' arg
to run_command so it does that itself.
* Use 'surrogate_or_replace' instead of 'replace'
For the text decoding error scheme in run_command calls.
* Let the local_facts run_command use default errors
* fix typo
In py3, dict.keys() is a view and not a copy of the
dict's keys, so attempting to delete items from the dict
while iterating over the keys results in:
RuntimeError: dictionary changed size during iteration
Resolve by casting .keys() to a list() type.
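A minimal illustration (hypothetical dict):
```python
facts = {'keep': 1, 'drop_a': 2, 'drop_b': 3}
for key in list(facts.keys()):  # copy the keys before mutating
    if key.startswith('drop'):
        del facts[key]          # safe: we are no longer iterating the view
```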
* Remove unicode-escape which is not present on python3
Alternative fix for #17305
* Enable the assemble test on python3
* Fix other problems with assemble on python3
The kickstart kwarg should be set to False for eos based devices and
was set to True. This change cleans up problems loading json output
from cli commands
All eos_command test cases are now passing successfully
fixes #17441
When adding condition statements, the Conditional instance will now generate
an AddConditionError if it is unable to map the condition to a function in the
instance
When the conditional cannot extract a value from the result string,
an unhandled exception would be raised. This fix now gracefully handles
the exception
An unhandled exception is raised when using the nxapi transport and setting
the save argument to true. This fix will allow the configuration to be
saved regardless of the transport.
fixes ansible/ansible-modules-core#5094
If sftp fails, fall back to scp by default. This saves users
from having to know about the scp_if_ssh method when sftp is broken
on the remote host.
The conditional processing was failing for two reasons:
1) The xml to json string conversion was not happening before the runner
processed the results
2) The Conditional instance was not parsing conditionals encoded with []
This fix address both issues.
Currently, if the host specified in delegate_to for a task is null,
Ansible will crash with a stack trace. Add a check for this state
and handle the error appropriately.
The junos load_config() method supports operations of overwrite, replace
and merge. This adds the missing overwrite keyword arg to load_config()
so that action in junos_template can be processed correctly.
The Conditional class now raises a ValueError with message if it cannot
correctly parse the passed in conditional. This makes it easier to
detect issues in modules that specify conditionals.
The arguments for the regex search() function were transposed in the
netcli match() method, which caused conditionals to fail. Switched the
arguments to fix the bug.
fixes #17749
This file is really a placeholder for common code for the separate service modules; it was a copy of the current service module and this seemed to confuse people, so this update should clear that up
The raw kwarg was added to return raw output from devices if the
attempt to convert to json failed. The change was causing all json
output to be returned raw. This fixes that issue.
* refactor ignore_limits_and_restrictions
into ignore_limits and ignore_limitations
* add ansible_play_hosts_all
* update docs re ansible_play_hosts_all
* only use play.hosts when it has a value
* replace ansible_play_hosts with ansible_play_hosts_all
* remove unnecessary var
This fixes a problem with the Netconf transport in which the ssh keyfile
wasn't being used if it was defined. The ref issue is filed against 2.1.1,
but we have been unable to replicate the problem in that version.
ref: ansible/ansible-modules-core#4966
* fixes issue #13981: unsafe_writes block appeared too late in the atomic_move
workflow. This led to errno.EBUSY not being managed in the context of
issue #13981
* Reduce changes to fix #13981
* Abstract the unsafe_writes fallback into a helper method.
Explicitly try/except os.rename part of the code and call this helper method.
If the code fails in shutil.copy2 or shutil.move this should not be related to issue #13981
since they write to b_tmp_dest_name.
(as suggested by @abadger)
* Check if unsafe_writes in the caller, not in _unsafe_writes.
That way the function call reads as "Do an unsafe write"
and not as "I think we should do an unsafe_write".
When using hostvars to get extra connection-specific vars for connection
plugins, use this raw lookup to avoid prematurely templating all of the
hostvar data (triggering unnecessary lookups).
Fixes #17024
* Add oVirt utility module
This patch adds an oVirt utility module, which contains helper functions
for oVirt modules, and also a shared documentation fragment for oVirt.
* Adjust to Python 2.4
* Fixups
* Add support for poll interval and fixes
When using the Cli transport, if the session hung on a command and the
socket timed out, the config session would be left behind. This change
will allow the shell to try to get control back and remove the config
session, assuming the channel is still open.
fixes ansible/ansible-modules-core#4945
* changed missing file error to warning for lookups
* changed plugins that expected exception
the warning will still be displayed; they now work with a None value
* Improve unit testing of 'password' lookup
The tests showed some UnicodeErrors for the
cases where the 'chars' param include unicode,
causing the 'getattr(string, c, c)' to fail.
So the candidate char generation code try/excepts
UnicodeErrors there now.
Some refactoring of the password.py module to make
it easier to test, and some new tests that cover more
of the password and salt generation.
* More refactoring and fixes.
* manual merge of text enc fixes from pr17475
* moving methods to module scope
* more refactoring
* A few more text encoding fixes/merges
* remove now unused code
* Add test cases and data for _gen_candidate_chars
* more test coverage for password lookup
* wip
* More text encoding fixes and test coverage
* cleanups
* reenable text_type assert
* Remove unneeded conditional in _random_password
* Add docstring for _gen_candidate_chars
* remove redundant to_text and list comprehension
* Move set of 'chars' default in _random_password
on py2, C.DEFAULT_PASSWORD_CHARS is a regular str
type, so the assert here fails. Move setting the
default into the method and to_text(DEFAULT_PASSWORD_CHARS)
if it's needed.
* combine _random_password and _gen_password
* s/_create_password_file/_create_password_file_dir
* native strings for exception msgs
* move password to_text to _read_password_file
* move to_bytes(content) to _write_password_file
* add more test assertions about generated passwords
* Some cleanups to alikins and abadger's password lookup refactoring:
* Make DEFAULT_PASSWORD_CHARS into a text string in constants.py
- Move this into the nonconfigurable section of constants.
* Make utils.encrypt.do_encrypt() return a text string because all the
hashes in passlib should be returning ascii-only strings and they are
text strings in python3.
* Make the split up of functions more sane:
- Don't split such that conditionals have to occur in two separate functions.
- Don't go overboard: Good to split file system manipulation from parsing
but we don't need to do every file manipulation in a separate
function.
- Don't split so that creation of the password store happens in two
parts.
- Don't split in such a way that no decisions are made in run.
* Organize functions by when it gets called from run().
* Run all potential characters through the gen_candidate_chars function
because it does both normalization and validation.
* docstrings for functions
* Change when we store salt slightly. Store it whenever it was already
present in the file as well as when encrypt is requested. This will
head off potential idempotence bugs where a user has two playbook tasks
using the same password and in one they need it encrypted but in the
other they need it plaintext.
* Reorganize tests to follow the order of the functions so it's easier
to figure out if/where a function has been tested.
* Add tests for the functions that read and write the password file.
* Add tests of run() when the password has already been created.
* Test coverage currently at 100%
The Conditional instance will cause a stack trace if the provided conditional
does not map properly to the response. This fixes that issue so that the
Conditional instance will now raise a FailedConditionalError with the
conditional that caused the failure.
The *_command modules (and any other modules that create an instance
of Conditional) should be updated to catch the FailedConditionalError
exception.
This addresses a problem when *_config or *_template network modules are
being used in roles. The module will error with the above message. This
fixes that problem
fixed ansible/ansible-modules-core#4840
* By default, ansible_distribution is not set on DragonFly systems,
preventing some distribution-specific tests from being written
* This commit fixes the issue by returning the quite logical value
of "DragonFly" when appropriate
If 'fact_caching=jsonfile' was configured, but
'fact_caching_connection' was not configured, jsonfile
would fail and ansible-playbook would exit with a traceback.
Fixes #17566
* Pass the absolute path to dirname when assigning basedir
If no path is specified when calling the playbook, os.path.dirname(playbook_path) returns ''
This will cause failure when creating the retry file.
Fixes #17456
* Updated to use os.path.dirname(os.path.abspath())
* Make is_encrypted_file handle both files opened in text and binary mode
On python3, by default files are opened in text mode. Since we know
the encoding of vault files (and especially the header which is the
first set of bytes) we can decide whether the file is an encrypted
vault file in either case.
* Fix is_encrypted_file not resetting the file position
* Update is_encrypted_file to check that all the data in the file is ascii
* For is_encrypted_file(), add start_pos and count parameters
This allows callers to specify reading vaulttext from the middle of
a file if necessary.
* Combine VaultLib.encrypt() and VaultLib.encrypt_bytestring()
* Change vault's is_encrypted() to take either text or byte strings and to return False if any part of the data is non-ascii.
* Remove unnecessary use of six.b
* Vault Cipher: mark a few methods as private.
* VaultAES256._is_equal throws a TypeError if given non byte strings
* Make VaultAES256 methods that don't need self staticmethods and classmethods
* Mark VaultAES and is_encrypted as deprecated
* Get rid of VaultFile (unused and feature implemented in a different way)
* Normalize variable and parameter names on plaintext, ciphertext, vaulttext
* Normalize variable and parameter names on "b_" prefix when dealing with bytes
* Test changes:
* Remove redundant tests (both checking the same byte string)
* Fix use of format string without format operator
* Enable vault editor tests on python3
* Initialize the vault_cipher for VaultAES256 testing in setUp()
* Make assertTrue and assertFalse take the actual method calls for
better error messages.
* Test that non-ascii byte strings compare correctly.
* Test that unicode strings and ints raise TypeError
* Test-specific:
* Removed test_methods_exist(). We only have one VaultLib so the
implementation is the assurance that the methods exist. (Can use an abc for
this if it changes).
* Add tests for both byte string and text string input where the API takes either.
* Convert "assert" to unittest assert functions or add a custom message where
that will make failures easier to debug.
* Move instantiating the VaultLib into setUp().
Later in the stack, further code will check and inform the user that var names must start with a letter
or underscore, so this fix only allows us to get to that previously existing policy.
Fixes #16008
When an inventory file looks executable (with a #!) but
isn't, the error message could be confusing. Especially
if the inventory file was named something like 'inventory'
or 'hosts'. Add some context and quote the filename.
This is based on https://github.com/ansible/ansible/pull/15758
While doing evil things with action plugins, I hit a code path in which
the mkdir here was failing due to lack of parent dir. Changing this to
makedirs made everything happy. Now, I'd obviously like to understand
why the parent dir exists in some places and not others - but I could
not find anywhere that C.DEFAULT_LOCAL_TMP is ensured to be created.
* Add support for no-expiration to jsonfile cache
* Let memcached cache use fact_caching_timeout=0
If fact_cache=memcached and fact_caching_timeout=0
memcached would hit a NameError on _expire_keys
Change linux fact gathering to correctly gather ansible_processor_count
and ansible_processor_vcpus on systems without vendor_id/model_name in
/proc/cpuinfo (for ex, ppc64/POWER)
* Added aws_retry decorator function with unit tests
* Restructured the code to be used with a base class.
This base class CloudRetry can be reused by any other cloud provider.
This decorator should be used in situations where you need to implement
a backoff algorithm and want to retry based on the status code from the
exception (see the sketch after this list).
* updated documentation
* fixed tabs
* added botocore and boto3 to requirements.txt
* removed cloud.py from py24 tests, as it depends on boto3
* fix relative imports
* updated test to be 2.6 compat
* updated method name from retry to backoff
* readded lxd
* Updated default backoff from 2 seconds to 1.1s.
This will be about a total of 48 seconds in 10 tries. This is
configurable.
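A hedged sketch of the idea (not the actual CloudRetry implementation; names and the exception shape are assumptions):
```python
import functools
import time

def retry_on_status(codes=('RequestLimitExceeded',), tries=10, delay=1.1):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            sleep = delay
            for attempt in range(tries):
                try:
                    return func(*args, **kwargs)
                except Exception as exc:
                    code = getattr(exc, 'code', None)  # assumed attribute
                    if code not in codes or attempt == tries - 1:
                        raise
                    time.sleep(sleep)
                    sleep *= 2  # exponential backoff
        return wrapper
    return decorator
```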
* Fixes to the controller text model
* Change command line args to text type
* Make display replace undecodable bytes with replacement chars. This
is only a problem on python3 where surrogates can enter into the msg
but sys.stdout doesn't know how to handle them.
* Remove a deprecated playbook syntax in unicode.yml
* Fix up run_cmd to change its parameters to byte string at appropriate times.
* Add a new config option to cache the check for controlpersist on the
control machine.
Fixes #15844
* Remove the option and make the behavior the default
* Make the check for controlpersist cache its status per-ssh executable
Trying to preserve the meaning of the examples. Not all occurrences in
`docsite/rst/playbooks_lookups.rst` have been changed for instance to
allow the unchanged examples to be used for testing.
Related to: #17479
The statvfs(3) manpage on Linux states that `f_blocks` is the "size of fs in `f_frsize` units". The manpages on Solaris and AIX state something similar.
With ext4 on Linux, I suspect that `f_bsize` and `f_frsize` are always identical, masking this error. On Solaris, the sizes differ for each of ufs, vxfs and zfs causing the `size_available` and `size_total` facts to be set incorrectly on this OS.
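Roughly, the corrected calculation uses f_frsize as the unit (field names from os.statvfs; a minimal sketch, not the exact facts code):
```python
import os

st = os.statvfs('/')
size_total = st.f_blocks * st.f_frsize      # not f_bsize
size_available = st.f_bavail * st.f_frsize
```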
The fileglob lookup plugin only returns files, not directories.
This is to be expected, as a mixed list would not be very useful in with_fileglob.
However the fileglob filter does return anything glob.glob() returns.
This change fixes this, so that fileglob returns files (as the name indicates).
PS: We could also offer a glob filter for those that would need it?
This relates to comments in issue #17136 and fixes confusion in #17269.
In the 'comment' filter, if the 'prefix' parameter is set as empty,
don't add an empty line before the comment. To get the previous
behaviour (empty line before comment), set the prefix to '\n'.
which got lost in recent big 'performance improvements' merge by @jimi-c.
I had made a previous PR to fix this, then @bcoca had committed an
improved fix. Now it's lost again.
cf: d2b3b2c03e (lost here)
cf: 25e9b5788b (previous fix)
Earlier PR #14849
Earlier issue #14843
Please note that jimi-c broke this last time as well ... seeing a
pattern here.