If there is an intermittent network failure, we might be trying to reach
a URL multiple times. Without this patch, we would be re-adding the same
certificate to the OpenSSL default context on every attempt.
Normally this is not a big issue, as OpenSSL just silently ignores the
duplicates after registering the error on its own error stack.
However, when python-cryptography initializes, it verifies that the current
error stack of the default OpenSSL context is empty, which it no longer is
because we added the certificates multiple times.
This results in cryptography throwing an Unknown OpenSSL Error with details:
OpenSSLErrorWithText(code=185057381L, lib=11, func=124, reason=101,
reason_text='error:0B07C065:x509 certificate routines:X509_STORE_add_cert:cert already in hash table'),
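A minimal sketch of the deduplication this implies, assuming a module-level registry of already-loaded CA files (the add_ca_certificate helper and _added_cert_paths set are illustrative, not the actual patch):

    _added_cert_paths = set()

    def add_ca_certificate(ctx, cert_path):
        """Load cert_path into the SSL context unless it was already added."""
        if cert_path in _added_cert_paths:
            return
        ctx.load_verify_locations(cafile=cert_path)
        _added_cert_paths.add(cert_path)

    # usage (hypothetical): add_ca_certificate(ssl.create_default_context(), ca_file)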
Signed-off-by: Patrick Uiterwijk <puiterwijk@redhat.com>
* Changes to be committed:
modified: lib/ansible/module_utils/nxos.py
- added configurable timeout to module parameters
modified: lib/ansible/utils/module_docs_fragments/nxos.py
- added documentation for timeout
* Changes to be committed:
modified: ansible/module_utils/nxos.py
- added timeout option for nxapi transport and added documentation
- option works with CLI or NXAPI transport
* Changes to be committed:
modified: lib/ansible/utils/module_docs_fragments/nxos.py
- Changed per comments in PR 18074
* Changes to be committed:
modified: lib/ansible/module_utils/nxos.py
- added try/except block to test for timeout
* Changes to be committed:
modified: lib/ansible/module_utils/nxos.py
- tweaked timeout
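A rough sketch of the timeout handling these changes describe; the connection object and its send() signature are hypothetical stand-ins for the nxos transport, not the real module_utils API:

    import socket

    def send_with_timeout(connection, commands, timeout=10):
        """Send commands over the transport, surfacing timeouts as a clear error."""
        try:
            # hypothetical transport call; the real CLI/NXAPI transports differ
            return connection.send(commands, timeout=timeout)
        except socket.timeout:
            raise Exception('command timed out after %s seconds; consider raising '
                            'the timeout module parameter' % timeout)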
If ANSIBLE_VAULT_PASSWORD_FILE is set, 'ansible-vault rekey myvault.yml'
will not prompt for the new vault password, and will use
None.
The fix is to split 'ask_vault_passwords' into 'ask_vault_passwords'
and 'ask_new_vault_passwords' to make the logic simpler, and then to
make sure new_vault_pass is always set for 'rekey', calling
ask_new_vault_passwords() to set it if it is not.
ask_vault_passwords() would return values for vault_pass and
new_vault_pass, and the vault CLI previously would not prompt for
new_vault_pass if vault_pass had been set via a vault password file.
Fixes #18247
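A condensed sketch of that flow, with the prompt helpers stubbed out via getpass (the real CLI code differs in detail):

    from getpass import getpass

    def ask_vault_passwords():
        """Prompt for the current vault password (simplified stand-in)."""
        return getpass('Vault password: ')

    def ask_new_vault_passwords():
        """Prompt twice for the new vault password (simplified stand-in)."""
        new_pass = getpass('New Vault password: ')
        confirm = getpass('Confirm New Vault password: ')
        if new_pass != confirm:
            raise ValueError('Passwords do not match')
        return new_pass

    def execute_rekey(vault_pass=None, new_vault_pass=None):
        """Make sure both passwords are set before rekeying."""
        if vault_pass is None:
            vault_pass = ask_vault_passwords()
        if new_vault_pass is None:
            # previously skipped when a vault password file supplied vault_pass
            new_vault_pass = ask_new_vault_passwords()
        return vault_pass, new_vault_pass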
These ENV vars are:
- CLOUDSTACK_ZONE
- CLOUDSTACK_DOMAIN
- CLOUDSTACK_ACCOUNT
- CLOUDSTACK_PROJECT
They help keep every task DRY; explicitly passed args still have precedence.
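A minimal sketch of that argument-over-environment precedence (the get_or_fallback helper is illustrative, not necessarily the module's actual API):

    import os

    def get_or_fallback(module_params, key, env_var):
        """Prefer the explicit task argument; fall back to the environment variable."""
        return module_params.get(key) or os.environ.get(env_var)

    # e.g. set CLOUDSTACK_ZONE once instead of repeating zone= on every task
    zone = get_or_fallback({'zone': None}, 'zone', 'CLOUDSTACK_ZONE')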
In order to support legacy plugins, the following two method signatures
are allowed for `CallbackBase.v2_playbook_on_start`:
def v2_playbook_on_start(self):
def v2_playbook_on_start(self, playbook):
Previously, the logic to handle this divergence checked to see if the
callback plugin being called supported an argument named `playbook`
in its `v2_playbook_on_start` method. This was fragile in a few ways:
- if a plugin author did not use the literal `playbook` to name their
method argument, their plugin would not be called correctly
- if a plugin author wrapped their `v2_playbook_on_start` method and
by doing so changed the argspec to no longer expose an argument
with that literal name, their plugin would not be called correctly
In order to continue to support both types of callback for backwards
compatibility while making the call more robust for plugin authors,
the logic can be reversed in order to have a positive check for the old
method signature instead of a positive check for the new one.
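A simplified sketch of that reversed check, assuming Python 3's inspect module (the real dispatch code differs):

    import inspect

    def call_playbook_on_start(callback_plugin, playbook):
        """Dispatch v2_playbook_on_start to old- or new-style plugins."""
        method = callback_plugin.v2_playbook_on_start
        argspec = inspect.getfullargspec(method)
        # positive check for the legacy signature: only `self`, no *args
        if len(argspec.args) == 1 and argspec.varargs is None:
            method()          # def v2_playbook_on_start(self)
        else:
            method(playbook)  # def v2_playbook_on_start(self, playbook)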
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
When using the ansible-galaxy CLI to import roles, it's not possible to
specify an alternate_role_name, even though the REST API seems to allow
such a thing (at least on investigation of the interactions the web app
makes). That makes imports like
openstack/openstack-ansible-os_cloudkitty wind up with roles named
"openstack-ansible-os_cloudkitty" instead of "os_cloudkitty".
Also, the web UI is smart and imports
"openstack-infra/ansible-role-puppet" as openstack-infra.puppet, but
the CLI imports it as openstack-infra.ansible-role-puppet. Add that
filtering as well.
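A sketch of the name filtering described above (the derive_role_name helper is illustrative):

    def derive_role_name(repo_name, alternate_name=None):
        """Pick the Galaxy role name for an imported repository."""
        if alternate_name:
            # e.g. openstack-ansible-os_cloudkitty imported as os_cloudkitty
            return alternate_name
        if repo_name.startswith('ansible-role-'):
            # mirror the web UI: ansible-role-puppet -> puppet
            return repo_name[len('ansible-role-'):]
        return repo_name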
Issue ansible/galaxy-issues:#185
Mitigate the effects of observing the ssh process still running
after seeing an EOF on stdout when using OpenSSH with
ControlPersist, since it does not close the stderr file descriptor
in this case.
As neon is derived from Ubuntu, ansible_os_family should have the value
"Debian" instead of "Neon". Add a test case for KDE neon and set
os_family correctly for it.
This limitation of python-3.4 mkstemp() is the final reason we made
python-3.5 our minimum version. Since we know about it, give a nice
error to the user with a hint that Python 3.4 could be the issue.
Fixes #18160
* Use the local file's mode for the argument if not explicitly given (see the sketch after this list).
Fixes https://github.com/ansible/ansible-modules-core/issues/1124
* Fix octal mode for py3
* Implement preserve instead of null
* Remove duplicate line
* Update comment
* Use stat module per toshia's suggestion
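A small sketch of the mode resolution these changes describe (a standalone helper, not the module's actual code):

    import os
    import stat

    def resolve_mode(src, mode):
        """Translate mode='preserve' into the source file's permission bits."""
        if mode == 'preserve':
            # octal string, e.g. '0644'; works the same on py2 and py3
            return '0%03o' % stat.S_IMODE(os.stat(src).st_mode)
        return mode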
When the client certificate is already stored, lxd returns a JSON error with the message "Certificate already in trust store". This "error" will occur on every task run after the initial one. The cert should be in the trust store after the first run, so this error message should really only be viewed as informational, as it does not indicate a real problem.
Fixes:
ansible/ansible-modules-extras#2750
Slightly better handling of HTTP headers from an HTTP (CONNECT) proxy. Buffers up to 128KiB of headers and raises an exception if this size is exceeded.
This could be optimized further, but for the time being it does the trick.
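A rough sketch of that bounded header read (socket handling simplified; the 128KiB cap matches the text):

    MAX_PROXY_HEADER_SIZE = 128 * 1024  # 128KiB

    def read_proxy_response_headers(sock):
        """Read CONNECT response headers, bounded to avoid unbounded buffering."""
        buf = b''
        while b'\r\n\r\n' not in buf:
            if len(buf) > MAX_PROXY_HEADER_SIZE:
                raise Exception('proxy sent more than 128KiB of headers')
            chunk = sock.recv(4096)
            if not chunk:
                raise Exception('connection closed while reading proxy headers')
            buf += chunk
        headers, _, remainder = buf.partition(b'\r\n\r\n')
        return headers, remainder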
Two parts to this change:
* Add a new string that requests the password
* Add a new glyph that can be used to separate the prompt from the
user's input, since the prompt may use a fullwidth colon rather than an
ASCII colon.
Fixes #17867
- When there is no file at the destination yet, we have no modification time for the `If-Modified-Since` header (see the sketch below). In this case, trust the cache to make the right decision to either serve a cached version or to refresh from the origin. This should help with mass-deployment scenarios where you want to use a local cache to relieve your uplink.
- If you don't trust the cache to make the right decision, you can still force it to refresh by providing the `force: yes` option.
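A hedged sketch of the header logic the first point describes, using the Python 3 stdlib here rather than Ansible's own URL helpers:

    import os
    import email.utils
    import urllib.request

    def fetch_with_cache_hint(url, dest):
        """Only send If-Modified-Since when we actually have a local mtime."""
        request = urllib.request.Request(url)
        if os.path.exists(dest):
            mtime = os.path.getmtime(dest)
            request.add_header('If-Modified-Since',
                               email.utils.formatdate(mtime, usegmt=True))
        # without the header, an intermediate cache is free to decide whether
        # to serve its cached copy or revalidate against the origin
        return urllib.request.urlopen(request)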
Since ifconfig/ip are not present on the system, and there is no /proc
to be parsed, the only way to get this information is by looking at the
arguments of the pfinet translator, the process in charge of networking.
In turn, this is done by running fsysopts on the appropriate path, which
returns something like this:
# fsysopts -L /servers/socket/inet
/hurd/pfinet --interface=/dev/eth0 --address=192.168.122.130
--netmask=255.255.255.0 --gateway=192.168.122.1 --address6=fe80::5254:12:ced/10
--address6=fe80::5054:ff:fe12:ced/10 --gateway6=::
So to get the IP addresses, one has to parse that string and fill the appropriate
structure.
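A minimal sketch of parsing that fsysopts output (standalone; the real fact-gathering code integrates with Ansible's network facts classes):

    import shlex

    def parse_pfinet_args(fsysopts_output):
        """Extract interface and address options from the pfinet translator args."""
        facts = {'ipv4': [], 'ipv6': []}
        for token in shlex.split(fsysopts_output):
            if token.startswith('--interface='):
                facts['interface'] = token.split('=', 1)[1]
            elif token.startswith('--address='):
                facts['ipv4'].append(token.split('=', 1)[1])
            elif token.startswith('--netmask='):
                facts['netmask'] = token.split('=', 1)[1]
            elif token.startswith('--address6='):
                facts['ipv6'].append(token.split('=', 1)[1])
        return facts

    # e.g. parse_pfinet_args('/hurd/pfinet --interface=/dev/eth0 --address=192.168.122.130 ...')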
More information on the system and its limitations can be found at:
- https://www.gnu.org/software/hurd/hurd/translator/pfinet.html
- https://www.gnu.org/software/hurd/hurd/translator/pfinet/implementation.html
- https://www.debian.org/ports/hurd/hurd-install
* Fallback to /proc/mounts if /etc/mtab does not exist
On modern systems, the file is just a compatibility symlink, and some
systems (like GNU Hurd) do not have it but provide /proc/mounts instead
(see the sketch after this list)
* Add support for uptime, memory and mount facts on GNU Hurd
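A minimal sketch of the fallback in the first point (path selection only):

    import os

    def get_mtab_path():
        """Prefer /etc/mtab, but fall back to /proc/mounts when it is absent."""
        if os.path.exists('/etc/mtab'):
            return '/etc/mtab'
        return '/proc/mounts'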
Nothing seems to use this now.
It was originally added in 2d11cfab92f9d26448461b4bc81f466d1910a15e
but the code that used it was removed in
e02b98274b
On openSUSE Tumbleweed, lsb-release -a currently reports
the distributor ID as "openSUSE Tumbleweed". On openSUSE
Leap, the distributor ID is "SUSE LINUX".
Add them to the OS_FAMILY dict as Suse family systems.
Also add an entry to TESTSETS in test_distribution_version.py
for openSUSE Tumbleweed.
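An excerpt-style sketch of that mapping change (only the entries mentioned above; the real OS_FAMILY dict covers many more distributions):

    # map lsb distributor IDs onto Ansible's os_family values
    OS_FAMILY = {
        'SUSE LINUX': 'Suse',
        'openSUSE Tumbleweed': 'Suse',
        # ... the real dict lists many more distributions ...
    }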
If hashtype for the password_hash filter is 'blowfish' and passlib is
available, hashing fails as the hash function for this is named 'bcrypt'
(and not 'blowfish_crypt'). Special case this so that the correct
function is called.
Since the passlib algorithm sometimes takes bytes and sometimes does
not, depending on an internal variable, we have to convert based on it,
or it fails with "TypeError: salt must be bytes, not str" (or unicode
instead of bytes).
However, it is not great to rely on an internal structure for that.
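A hedged sketch of both adjustments; it swaps the internal-variable check mentioned above for a simpler retry on TypeError and uses passlib's public using()/hash() API, so it is not the filter's actual code:

    import passlib.hash

    def get_passlib_hasher(hashtype):
        """Map the filter's hashtype names onto passlib handler names."""
        if hashtype == 'blowfish':
            # passlib names this handler 'bcrypt'; there is no 'blowfish_crypt'
            return passlib.hash.bcrypt
        return getattr(passlib.hash, '%s_crypt' % hashtype)  # e.g. sha512_crypt

    def encrypt_with_salt(hashtype, secret, salt):
        """Hash a secret, retrying with a bytes salt if the handler insists on it."""
        hasher = get_passlib_hasher(hashtype)
        try:
            return hasher.using(salt=salt).hash(secret)
        except TypeError:
            # handlers that want raw bytes reject a text salt
            return hasher.using(salt=salt.encode('utf-8')).hash(secret)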
The -b option reads as follows:
` The target job is directed to ignore hangup signals. This is particularly
useful for running the target program in the background.`
If needed, '-b' can be added to become_flags
Squashed commit of the following:
commit f2c9f5c011ae8be610301d597a34bfba1a391e08
Author: Aaron Bieber <aaron@bolddaemon.com>
Date: Mon Oct 17 10:58:14 2016 -0600
remove pbrun flags
commit f402679ac177c931ad64bd13306f62512a14fcd6
Author: Aaron Bieber <aaron@bolddaemon.com>
Date: Fri Oct 14 15:29:29 2016 -0600
use Password: vs assword: for matching pbrun prompt
commit cd2e90cb65854c4cc5dd8773404e520d40f82765
Author: Aaron Bieber <aaron@bolddaemon.com>
Date: Fri Oct 14 15:28:58 2016 -0600
move -b to pbrun_flags
Fixes for non-ascii passwords on
* both python2 and python3,
* local and paramiko_ssh (ssh tested working with these changes)
* sudo and su
Fixes #16557
The PlayIterator was written without nested roles in mind, but since
include_role can nest them, we need to check whether we've moved into
a new role that is a child via nesting.
Fixes #18026