* [GCP] UrlMap module
This module provides support for UrlMaps on Google Cloud Platform. UrlMaps allow users to segment requests by hostname and path and direct those requests to Backend Services.
UrlMaps are a powerful and necessary part of HTTP(S) Global Load Balancing on Google Cloud Platform.
UrlMap takes advantage of the Google Python API client, so the appropriate infrastructure has been added to module_utils (a rough sketch of a UrlMap resource body follows this entry).
More about UrlMaps can be found at:
https://cloud.google.com/compute/docs/load-balancing/http/url-map
UrlMap API:
https://cloud.google.com/compute/docs/reference/latest/
Google Cloud Platform HTTP(S) Cross-Region Load Balancer:
https://cloud.google.com/compute/docs/load-balancing/http/
* updated documentation, removed parens
* fixed tabs
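For orientation, here is a minimal sketch of the kind of UrlMap resource body such a module manages. Field names follow the GCP Compute urlMaps API; the backend service names are placeholders, and the module's own parameter layout may differ.

```python
# Minimal sketch of a GCP UrlMap resource body (Compute API field names).
# Backend service names are placeholders; the module's parameters may differ.
url_map = {
    "name": "example-url-map",
    "defaultService": "global/backendServices/default-backend",
    "hostRules": [
        {"hosts": ["www.example.com"], "pathMatcher": "web-paths"},
    ],
    "pathMatchers": [
        {
            "name": "web-paths",
            "defaultService": "global/backendServices/web-backend",
            "pathRules": [
                {"paths": ["/images/*"],
                 "service": "global/backendServices/images-backend"},
            ],
        },
    ],
}
```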
The timeout for gathering facts needs to be settable from three places
(highest precedence to lowest):
* programmatically
* ansible.cfg (equivalent to the user specifying it explicitly when
calling setup)
* from the default value
The code was changed in b4bd6c80de so that setting the timeout
programmatically and the default value both work correctly, but setting it
via ansible.cfg/parameter was broken.
This change fixes setting the timeout via ansible.cfg and adds unit tests for
all three cases.
Fixes #23753
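A minimal sketch of the intended precedence; the function and variable names below are illustrative, not the actual facts code.

```python
# Illustrative precedence resolution for the facts-gathering timeout.
# Names are hypothetical; the real facts code is structured differently.
DEFAULT_GATHER_TIMEOUT = 10  # lowest precedence: the built-in default

def resolve_gather_timeout(programmatic_timeout=None, config_timeout=None):
    # 1. highest precedence: a value set programmatically
    if programmatic_timeout is not None:
        return programmatic_timeout
    # 2. next: ansible.cfg / the gather_timeout parameter passed to setup
    if config_timeout is not None:
        return config_timeout
    # 3. lowest: the default value
    return DEFAULT_GATHER_TIMEOUT
```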
When building in automated build systems, there are sometimes cases
where the user doing the building does not have a .ssh directory. In
this case, we need to mock out some os.path functions so that the
add_host_key() function we're testing won't complain or try to create
one.
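A rough sketch of the mocking involved; the patch targets and the commented add_host_key call are illustrative rather than the exact test code.

```python
# Sketch: pretend there is no ~/.ssh directory, so add_host_key() does not
# fail and any directory-creation attempt is absorbed by a mock.
# Patch targets here are illustrative.
from unittest import mock  # the test suite may use the standalone mock package

with mock.patch('os.path.exists', return_value=False), \
     mock.patch('os.path.isdir', return_value=False), \
     mock.patch('os.makedirs'):
    # call the code under test here, e.g. something like:
    #   known_hosts.add_host_key(module, fqdn, key_type='rsa')
    pass
```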
* Update module_utils.six to latest
We've been held back on the version of six we could use on the module
side to 1.4.x because of python-2.4 compatibility. Now that our minimum
is Python-2.6, we can update to the latest version of six in
module_utils and get rid of the second copy in lib/ansible/compat.
Add support for default credentials. Practically, this means that a playbook creator would not have to specify the service_account_email or credentials_file Ansible parameters.
Default Credentials only work when running on Google Cloud Platform. The 'project_id' is still required.
A test has been added to trigger this condition.
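A rough sketch of the fallback logic; the helper name and the use of oauth2client here are assumptions for illustration, not the actual module_utils interface.

```python
# Illustrative only: fall back to Application Default Credentials when the
# playbook supplies neither service_account_email nor credentials_file.
# The helper name and oauth2client usage are assumptions for this sketch.
from oauth2client.client import GoogleCredentials

def _pick_credentials(service_account_email=None, credentials_file=None):
    if service_account_email or credentials_file:
        raise NotImplementedError('explicit-credential handling not shown here')
    # Only works when running on Google Cloud Platform; project_id must
    # still be supplied separately.
    return GoogleCredentials.get_application_default()
```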
* refactor postgres
* adds a basic unit test module
* first step towards a common utils module
* set postgresql_db doc argument defaults to what the code actually uses
* unit tests that actually test a missing/found psycopg2, no dependency needed
* add doc fragments, use common args, ansible2ify the imports
* update dict
* add AnsibleModule import
* mv AnsibleModule import to correct file
* restore some database utils we need
* rm some more duplicated pg doc fragments
* change ssl_mode from disable to prefer, and update docs
* use LibraryError pattern for import verification
Per the comments on #21435: introduce LibraryError and tidy up its usage in pg_db and the tests (a minimal sketch of the pattern follows this list).
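A minimal sketch of the import-guard pattern being referred to, assuming a simple ensure_libs() helper; the real module_utils code carries more detail.

```python
# Illustrative import-guard pattern: defer the "psycopg2 is missing" error
# until the library is actually needed, and funnel it through one exception
# type that the module can turn into fail_json().
class LibraryError(Exception):
    pass

try:
    import psycopg2
    HAS_PSYCOPG2 = True
except ImportError:
    HAS_PSYCOPG2 = False

def ensure_libs():
    if not HAS_PSYCOPG2:
        raise LibraryError('psycopg2 is not installed; this module requires it')
```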
* Add tests for `get_fqdn_and_port` method.
Currently the tests verify the original behavior of returning the default `ssh-keyscan` port.
Add test around `add_host_key` to verify underlying command arguments
Add some new expectations for `get_fqdn_and_port`
Test that non-standard port is passed to `ssh-keyscan` command
* Ensure ssh hostkey checks respect server port
ssh-keyscan defaults to fetching the host key for port 22.
If the ssh service is running on a different port, ssh-keyscan
needs to be told which one (a small sketch of the command construction
follows these notes).
Tidy up minor flake8 issues
* Update known_hosts tests for port being None
Ensure that git URLs don't try to set a port when a path
is specified
Update known_hosts tests to meet flake8
* Fix stdin swap context for test_known_hosts
Move test_known_hosts from under basic, as it is its own library.
Remove module_utils.known_hosts from pep8 legacy files list
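A small sketch of the command construction being tested; the helper name and its defaults are illustrative, not the real module code.

```python
# Sketch: pass the server port to ssh-keyscan when it is not the default 22.
def build_keyscan_command(fqdn, port=None, key_type='rsa'):
    cmd = ['ssh-keyscan', '-t', key_type]
    if port is not None and int(port) != 22:
        cmd.extend(['-p', str(port)])
    cmd.append(fqdn)
    return cmd

# e.g. build_keyscan_command('example.com', port=2222)
# -> ['ssh-keyscan', '-t', 'rsa', '-p', '2222', 'example.com']
```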
* updates eos modules to use persistent connection socket
* removes split eos shared module and combines into one
* adds singular eos doc frag (eos_local to be removed after module updates)
* updates unit test cases
* Unittests for some of module_common.py
* Port test_run_command to use pytest-mock
The use of addCleanup(patch.stopall) from the unittest idiom was
conflicting with the pytest-mock idiom of closing all patches
automatically. Switching to pytest-mock ensures that the patches are
closed and removing the stopall stops the conflict.
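For reference, the pytest-mock idiom looks roughly like this; the patched target and the assertions are illustrative.

```python
# Patches made through the `mocker` fixture are undone automatically at the
# end of each test, so no addCleanup(patch.stopall) bookkeeping is needed.
# The patched target below is illustrative.
def test_run_command_success(mocker):
    fake_popen = mocker.patch('ansible.module_utils.basic.subprocess.Popen')
    fake_popen.return_value.communicate.return_value = (b'output', b'')
    fake_popen.return_value.returncode = 0
    # ...call module.run_command(...) here and assert on (rc, out, err)
```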
* eos module now uses network_cli connection plugin
* adds unit tests for eos module
* eapi support now provided by eapi module
* updates doc fragment for eapi common properties
* Google Cloud Pubsub Module
The Google Cloud Pubsub module allows the Ansible user to:
* Create/Delete Topics
* Create/Delete Subscriptions
* Change subscription from pull to push (and configure endpoint)
* Publish messages to a topic
* Pull messages from a Subscription
An accessory module, gcpubsub_facts, has been added to list topics and subscriptions.
* Added docs for state field to DOCUMENTATION and RETURN blocks.
- Replace nose usage with pytest.
- Remove legacy Shippable integration.sh.
- Update Makefile to use pytest and ansible-test.
- Convert most yield unit tests to pytest parametrize.
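A typical conversion looks like the following; the test shown is a made-up example, not one of the converted tests.

```python
import pytest

# Old nose-style yield test (not supported by pytest):
#
#     def test_squares():
#         for value, expected in [(2, 4), (3, 9)]:
#             yield check_square, value, expected
#
# pytest equivalent using parametrize:
@pytest.mark.parametrize('value, expected', [(2, 4), (3, 9)])
def test_square(value, expected):
    assert value * value == expected
```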
Support for the Google API and GCloud-Python clients has been added.
The three supported libraries:
* GCloud-Python: a new function, get_google_cloud_credentials, should be used. The credentials object it returns can be passed to any gcloud-python client. Using this client library requires the installation of gcloud-python. This is the preferred library for new modules.
* Google API: A new function, gcp_api_auth, should be used to take advantage of services requiring this client. This client library should be used if the desired functionality is not available in GCloud-Python. Using this library requires the installation of google-api-python-client.
* libcloud: the existing function, gcp_connect, should be used. The interface and return values have not changed, and existing modules (such as gce, gce_pd and gce_net) should work without modification. Note that the credentials-fetching code has been refactored out of gcp_connect so that it can be reused by all connection functions. To use this function, apache-libcloud must be installed.
Import guards have been added and are only triggered when a user tries to use a function whose dependencies are missing.
Credential-specifying mechanisms (i.e., Ansible module params, env vars and libcloud secrets.py) have not changed. They have been refactored, and unit tests have been added to allow for changes going forward. We are deprecating (and will remove in a subsequent release) the ability to specify credentials via the libcloud secrets file. We have also deprecated (and likewise plan to remove in a subsequent release) the ability to use a p12 PEM file for a key; the JSON format is strongly preferred. Deprecation warnings have been added for both of these issues (see the Ansible docs on how to disable deprecation warnings).
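As a rough illustration of the GCloud-Python path, a module might do something like the following; the exact signature of get_google_cloud_credentials and the client constructor shown here are assumptions, not documented interfaces.

```python
# Illustrative only: the helper's signature and the pubsub client constructor
# are assumptions for this sketch, not the documented interface.
from ansible.module_utils.gcp import get_google_cloud_credentials

def get_pubsub_client(module):
    # Requires gcloud-python (google-cloud) to be installed.
    from google.cloud import pubsub
    credentials = get_google_cloud_credentials(module)
    return pubsub.Client(project=module.params['project_id'],
                         credentials=credentials)
```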
As neon is derived from Ubuntu, ansible_os_family should have the value
"Debian" instead of "Neon". Add a test case for KDE neon and set
os_family correctly for it.
On openSUSE Tumbleweed, lsb_release -a currently reports
the distributor ID as "openSUSE Tumbleweed". On openSUSE
Leap, the distributor ID is "SUSE LINUX".
Add them to the OS_FAMILY dict as Suse family systems.
Also add an entry to TESTSETS in test_distribution_version.py
for openSUSE Tumbleweed.
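In the facts code this amounts to mapping additions along these lines; a sketch, not a verbatim diff.

```python
# Sketch of the OS_FAMILY mapping additions (not a verbatim diff of facts.py).
OS_FAMILY = {
    'KDE neon': 'Debian',            # neon is derived from Ubuntu
    'openSUSE Tumbleweed': 'Suse',   # distributor ID on Tumbleweed
    'SUSE LINUX': 'Suse',            # distributor ID reported on openSUSE Leap
    # ...existing entries unchanged...
}
```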
Python 2 seems to allow any integer. Python 3.5 on Linux seems to allow
a 32-bit unsigned int. Python 3.5 on El Capitan seems to limit it to
a smaller size, perhaps a 16-bit int.
This adds some test data to test_facts.py that
includes mount points that have a single quote in
the path.
See https://github.com/ansible/ansible/issues/16855
The bug was already fixed via other changes, but this is
for regression testing.
A parameter of type int should accept int and string, but not float.
A parameter of type float should accept float, int, and string.
Also reset the arguments in another test so that it runs cleanly. This
agrees with what all the other tests are doing.
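A hedged sketch of the acceptance rules; this is not the actual AnsibleModule type-checking code.

```python
# Sketch of the intended coercion rules (not the actual AnsibleModule code):
#   type 'int'   accepts int and str, but rejects float
#   type 'float' accepts float, int, and str
def coerce_int(value):
    if isinstance(value, float):
        raise TypeError('int parameter does not accept float')
    if isinstance(value, int):
        return value
    if isinstance(value, str):
        return int(value)  # raises ValueError for non-numeric strings
    raise TypeError('int parameter does not accept %s' % type(value).__name__)

def coerce_float(value):
    if isinstance(value, (float, int, str)):
        return float(value)  # raises ValueError for non-numeric strings
    raise TypeError('float parameter does not accept %s' % type(value).__name__)
```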
As suggested in feedback on
https://github.com/ansible/ansible/pull/17575, add
os_family to test_distribution_version. Add the
correct os_family to the existing testcase data
entries.
Also add os_family to the output of
gen_distribution_version_testcase.py so any new
generated entries will contain this data.
* Added aws_retry decorator function with unit tests
* Restructured the code around a base class.
This base class, CloudRetry, can be reused by any other cloud provider.
The decorator should be used in situations where you need to implement
a backoff algorithm and want to retry based on the status code from the
exception (a condensed sketch follows this change list).
* updated documentation
* fixed tabs
* added botocore and boto3 to requirements.txt
* removed cloud.py from py24 tests, as it depends on boto3
* fix relative imports
* updated test to be 2.6 compat
* updated method name from retry to backoff
* re-added lxd
* Updated the default backoff factor from 2 seconds to 1.1s.
This comes to a total of about 48 seconds over 10 tries, and is
configurable.
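A condensed sketch of the pattern; the status-code extraction and the exact backoff series are simplified relative to the real CloudRetry base class.

```python
import time
from functools import wraps

# Condensed sketch of the CloudRetry idea: retry a decorated call with a
# growing delay when the caught exception carries a retryable status code.
# Status-code extraction and the exact backoff series are simplified here.
class CloudRetrySketch(object):
    base_class = Exception                      # provider-specific exception type
    retryable = ('RequestLimitExceeded', 'Throttling')

    @classmethod
    def status_code_from_exception(cls, error):
        # provider-specific: pull the status/error code out of the exception
        return getattr(error, 'code', None)

    @classmethod
    def backoff(cls, tries=10, delay=1.1):
        def decorator(func):
            @wraps(func)
            def wrapper(*args, **kwargs):
                sleep = delay
                for attempt in range(tries):
                    try:
                        return func(*args, **kwargs)
                    except cls.base_class as error:
                        code = cls.status_code_from_exception(error)
                        if code not in cls.retryable or attempt == tries - 1:
                            raise
                        time.sleep(sleep)
                        sleep *= delay
            return wrapper
        return decorator
```

A provider-specific subclass would set base_class to its own exception type and override status_code_from_exception, after which API calls can be wrapped with `@ThatSubclass.backoff(tries=10, delay=1.1)`.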
* Port set_*_if_different functions to python3
* Add surrogate_or_strict and surrogate_or_replace error handlers for
to_text, to_bytes and to_native (a brief usage example follows this list)
* Set default error handler to surrogate_or_replace
* Make use of the new error handlers in the already ported code
* Move the unit tests for module_utils._text, as those functions aren't in basic.py
* Clean up around SEQUENCETYPE. On Python 2.6+ SEQUENCETYPE includes
strings, so make sure code excludes them explicitly where necessary
* Allow arg_spec aliases to be other sequence types
* Fix to_native call in selinux_context and selinux_default_context to
use the error handler correctly.
* Port set_mode_if_different to work on python3
* Port atomic_move to work on python3
* Fix check_password_prompt variable which wasn't renamed properly
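For example, callers can now choose the error handler explicitly; a brief illustration (the default handler is surrogate_or_replace):

```python
from ansible.module_utils._text import to_bytes, to_native, to_text

raw = b'caf\xe9'  # bytes that are not valid UTF-8

# surrogate_or_strict: use surrogateescape where the platform supports it;
# otherwise fall back to a strict decode (which raises on bytes like these).
text_strict = to_text(raw, errors='surrogate_or_strict')

# surrogate_or_replace (the new default): never raises; falls back to
# replacement characters on platforms without surrogateescape.
text_lossy = to_text(raw, errors='surrogate_or_replace')

round_tripped = to_bytes(text_strict, errors='surrogate_or_strict')
native = to_native(raw)  # str on both Python 2 and Python 3
```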
tempfile.NamedTemporaryFile keeps a file handle open, causing os.rename() to fail on Windows-based vboxfs with "[Errno 26] Text file busy".
Changed NamedTemporaryFile to mkstemp() and added a finally block to unlink the temp file in each and every case.
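A sketch of the replacement pattern as a generic helper; the surrounding copy/rename logic in the module is simplified here.

```python
import os
import tempfile

def write_file_atomically(dest_path, new_contents):
    # mkstemp() hands back a plain fd rather than keeping an open file object
    # around, so os.rename() does not hit "[Errno 26] Text file busy" on
    # vboxfs.  The finally block unlinks the temp file whenever it is still
    # present (e.g. when the write or the rename fails).
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(dest_path)))
    try:
        with os.fdopen(fd, 'wb') as tmp:
            tmp.write(new_contents)
        os.rename(tmp_path, dest_path)
    finally:
        if os.path.exists(tmp_path):
            os.unlink(tmp_path)
```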
Make some python3 fixes to make the unittests pass:
* galaxy imports
* dictionary iteration in role requirements
* swap_stdout helper for unittests
* Normalize to text string in a facts.py function