* adds two new infoblox lookup plugins
* nios - lookup plugin to return nios objects to the playbook
* nios_next_ip - lookup plugin to return the next available IP address
* adds some additional examples to nios lookup
* fix up pep8 failures
* updates in response to review comments
* updates to azure_rm_sqldatabase
* support multiple IP configurations
* Update azure_rm_networkinterface.py
* add test
* fix spelling
* make the virtual network name more flexible
* add test
* fix
* fix lint
* add test
* fix parameter
* deprecate the flattened IP configuration
* fix lint
* fix encoding
* fix mirror
* fix
* load model from common
* ecs_ecr: Remove registry ID from create repository call
[Boto3 documentation][1] specifies 'repositoryName' as the only expected
argument. The `**build_kwargs(registry_id)` part also adds 'registryId' which,
when executed, fails with: 'Unknown parameter in input: “registryId”, must be
one of: repositoryName'.
[AWS API documentation][2] also lists only the 'repositoryName' parameter. I.e.
this is not a problem with the boto3 library.
The default registry ID for the account that's making the request will be used
when creating the repository. This means that if the `registry_id` specified by
the user is different from the default registry ID, then the policy changes
following the repository creation would fail, because the repository will have
been created in one registry while subsequent calls try to modify it in
another. Added a safeguard against this scenario.
[1]: https://boto3.readthedocs.io/en/latest/reference/services/ecr.html#ECR.Client.create_repository
[2]: https://docs.aws.amazon.com/AmazonECR/latest/APIReference/API_CreateRepository.html
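A minimal sketch of the corrected call and the safeguard described above, assuming a plain boto3 ECR client; the error-handling style is illustrative, not the module's actual code.

```python
# Sketch only: create_repository takes repositoryName alone, and the repository
# always lands in the caller's default registry.
import boto3

ecr = boto3.client("ecr")


def create_repository(name, registry_id=None):
    result = ecr.create_repository(repositoryName=name)["repository"]

    # Safeguard: if the user asked for a different registry, later policy calls
    # would target a repository that was never created there, so fail early.
    if registry_id is not None and registry_id != result["registryId"]:
        raise ValueError(
            "Repository created in registry {0}, not in requested registry {1}".format(
                result["registryId"], registry_id
            )
        )
    return result
```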
* Fix concurrent ECR integration tests
If the `ecr_name` is the same in multiple concurrent test runs, then they can
interfere with one another causing both to fail. The `resource_prefix` is
guaranteed to be unique for different jobs running in CI and so avoids this
issue while also making it easier to identify the test which created the
resource.
* Remove compat code for to_unicode, to_str and to_bytes
Code was marked as deprecated and to be removed after 2.4
* Remove is_encrypted and is_encrypted_file
Code was marked as deprecated after 2.4 release.
* Add identifier option to apache2_module
There is a convention connecting the name passed to a2enmod and the one
appearing in apache2ctl -M. Not all modules follow this convention and
we have added a growing list of implicit conversions.
As a better long-term solution this adds an "identifier" option to be
able to set both strings explicitly.
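A hedged sketch of the resolution order this enables, assuming the module derives the `apache2ctl -M` identifier from the a2enmod name unless one is given explicitly; the conversion table entry is illustrative.

```python
# Illustrative only: a small table of names that break the usual
# "<name>_module" convention, plus an explicit identifier override.
KNOWN_CONVERSIONS = {
    "dump_io": "dumpio_module",  # example of a module that breaks the convention
}


def resolve_identifier(name, identifier=None):
    # An explicit identifier always wins; otherwise fall back to the known
    # exceptions, then to the usual "<name>_module" convention.
    if identifier:
        return identifier
    return KNOWN_CONVERSIONS.get(name, name + "_module")
```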
* Run debian-specific tests only there
* Improve cleanup after apache2 tests
This is a follow-up/extension of https://github.com/ansible/ansible/pull/33630
* Add example for the new identifier option
* Put all debian tests in a block
* Imported lookup plugin from Role
* Plugin cleanup, including:
* Use existing Python YAML parsing
* Remove environment variables as connection options
* Added initial debugging information
* Reworked the lookup plugin using the Python Requests library. As it's available through Ansible, it makes communication with Conjur much more straightforward.
* Removed unused libraries
* Fixed linting issues
* Standardized output on `format` and ensured it works for 2.6, 2.7, and 3.x.
* Use quote_plus from the six library for improved python 2/3 behavior.
* Refactored identity & configuration to prefer the user's file. This also includes a refactor to remove an unneeded dictionary merge method.
* Removed `requests` in favor of `ansible.module_utils.urls`.
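A hedged sketch of what the `open_url`-based fetch might look like; the Conjur URL layout, header format, and function name here are assumptions, not the plugin's exact code.

```python
# Sketch only: fetch a secret value over HTTPS using the utilities bundled
# with Ansible instead of the external `requests` package.
from ansible.module_utils.six.moves.urllib.parse import quote_plus
from ansible.module_utils.urls import open_url


def retrieve_secret(appliance_url, account, token, variable_id):
    url = "{0}/secrets/{1}/variable/{2}".format(
        appliance_url, quote_plus(account), quote_plus(variable_id)
    )
    headers = {"Authorization": 'Token token="{0}"'.format(token)}
    response = open_url(url, headers=headers, method="GET", validate_certs=True)
    return response.read()
```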
* Refactored netrc loading to warn if host is not present.
* Tests and a refactor to support easier testing.
* Added reference to website
* Fixed two linting errors
* Fixed an extra line found by linting
* Updated file write to use binary mode to ensure config files are written correctly
* Resolved linting issues
* Refactored config & identity loading to take advantage of plugin options
* Cleanup a bunch of small items caught by linting
* Removed extra line caught by linting
* Swapped in pytest and added some tests with mocked network responses
* Pushing to see if this approach works better...
* Refactored the open_url mocking based on feedback
* Fixed a couple linting issues & refactored mocking into each method to attempt to resolve a failing test
* Use a generic MagicMock for python 2.6
* Fixes doc typo
require -> required
* Use `type: path` in identity_file and config_file
Also removes the `expanduser` calls below, since it is now applied automatically
to paths.
* Defines maintainers for conjur_variable plugin
* BOTMETA.yml:
** defines $team_cyberark_conjur as maintainers of Conjur Variable plugin
** adds myself and @jvanderhoof to that team
* Adds URLs to relevant documentation for Conjur Variable lookup plugin
* Clarifies "the server," "the machine" -> "controlling host"
The machine identity used is that of the Ansible controlling host, not any
server being provisioned or instructed. This documentation change aims to make
that relationship clear.
* Adds response code to exception message on authentication failure
* Enhances exception messages to specify the controlling host
These error messages are less likely to confuse a user as to which machine is
associated with the files, identities, and configurations being described.
* Adds ANSIBLE_METADATA for Conjur variable lookup plugin
* UCS LAN Connectivity Policies and integration tests
* Docs updated
* UCS LAN Connectivity Policy and integration tests
* Fix use of description in docs and argument_spec so validate-modules passes
* UCS vNIC templates and integration tests
* docs updated
* UCS vNIC template and integration test
* Update docs and argument_spec for description, mtu, and target.
* Azure: added auth_source to override the source of the credentials
* removed broken cli auth and fixed some review comments
* missed some more cli auth stuff that isn't required
* Added cli auth back in
* Added docs around cli auth
* fix azure CLI auth issues
* tweak error messages
* fix auto fallback to cli
* fix import issues when azure-cli not installed
This fix corrects the module state returned by the github_release module.
Now,
* When the release already exists, state is "ok"
* When the release is created, state is "changed"
Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
The code that depends on this is all in the action plugins so we should
leave it there until we either move that action plugin code over
(fixup_perms2) or we give action plugins the ability to register new
config.
* win_setup: Add Product ID and Product Key in facts
So this is actually a very nice way to get product key information from
systems collected centrally.
Systems that have been upgraded from Windows 7 or Windows 8 to Windows 10
may not have a valid Windows 10 product key printed anywhere, because the
upgrade used a digital license.
If you ever have to reinstall the system, you may recover it from the
recovery partition or the original media, but cannot upgrade to Windows 10
for free. By collecting the product key, one can always reinstall the free
Windows upgrade.
My only question is: do we want this to be part of the default facts, as it
may be considered important information, or should we make a special
**win_product_key_facts** ?
* Add ACPI product key support
* Add integration test
* Remove Get-ProductKey function, move inline
* Inventory caching
* Add inventory caching for virtualbox
* Don't populate cache for virtualbox with stdout, use a dict of inventory instead
* Fix error creating the cache dir if it doesn't exist
* Keep cache default False and set to True in VariableManager __init__
* Check all groups before determining if a host is ungrouped.
aws_ec2 is an inventory plugin intended to replace uses of the `contrib/inventory/ec2.py`
inventory script.
For advanced grouping, please use the `constructed` inventory plugin.
Add deps/requires for fact collectors
Fact collectors can now set a required_facts
class attribute that will be a set of the names
of fact collectors they require to be run first.
i.e., if a collector needs to know ansible_distribution,
it should set its required_facts to include 'distribution':
required_facts = set(['distribution'])
If a collector requires another collector, it gets added
to the selected collector names.
We then topologically sort the collectors
so that dependencies work out (i.e., 'distribution' will run before
'service_mgr').
required_facts were added to the collectors for:
- network (requires 'distribution', 'platform')
- hardware (requires 'platform')
- service_mgr (requires 'distribution', 'platform')
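A minimal sketch of the selection and ordering described above; the dependency table mirrors the list just given, but the sorting code itself is illustrative.

```python
# Sketch only: expand the selected collectors with their requirements, then
# order them so each collector runs after the facts it depends on.
REQUIRED_FACTS = {
    "network": {"distribution", "platform"},
    "hardware": {"platform"},
    "service_mgr": {"distribution", "platform"},
}


def resolve_requires(selected):
    """Add any required collectors to the selected set."""
    selected = set(selected)
    for name in list(selected):
        selected |= REQUIRED_FACTS.get(name, set())
    return selected


def topo_sort(selected):
    """Order collectors so that each one runs after the facts it requires."""
    ordered, remaining = [], set(selected)
    while remaining:
        ready = {n for n in remaining
                 if REQUIRED_FACTS.get(n, set()) <= set(ordered)}
        if not ready:
            raise ValueError("circular collector dependency: %s" % remaining)
        ordered.extend(sorted(ready))
        remaining -= ready
    return ordered


# topo_sort(resolve_requires({"service_mgr"}))
# -> ['distribution', 'platform', 'service_mgr']
```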
Fix name references for facts (need 'ansible_' prefix) in service_mgr.
Fixes #30753
Enforce that there can be only one --new-vault-id or
--new-vault-password-file and use this instead of
--encrypt-vault-id
* Add a config option for default vault encrypt id
* add documentation around commonly-used Facts for Conditionals
There are a few facts that are often used for conditionals, so this documents
them on the Conditionals page along with their possible values.
* Edit
It is not possible to modify the load balancer configuration
for an ECS service.
As it is possible to detect this, it's nicer to fail gracefully
than to return AWS's less meaningful failure message.
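A hedged sketch of that graceful failure, assuming `module` is an AnsibleModule and the service descriptions follow the ECS `describe_services` response shape.

```python
# Sketch only: detect a load balancer change on an existing service and fail
# with a clear message instead of letting AWS reject the update.
def check_immutable_load_balancers(module, existing, expected):
    if existing.get("loadBalancers", []) != expected.get("loadBalancers", []):
        module.fail_json(
            msg="The load balancer configuration of an existing ECS service "
                "cannot be modified; delete and recreate the service instead."
        )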
Fix PEP8 compliance
Otherwise, it fails with:
Traceback (most recent call last):
File "/tmp/ansible_c1zmq3i9/ansible_module_openssl_certificate.py", line 808, in <module>
main()
File "/tmp/ansible_c1zmq3i9/ansible_module_openssl_certificate.py", line 787, in main
certificate.generate(module)
File "/tmp/ansible_c1zmq3i9/ansible_module_openssl_certificate.py", line 692, in generate
certfile.write(str(crt))
TypeError: a bytes-like object is required, not 'str'
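The traceback comes from writing text to a file opened in binary mode; a minimal sketch of that kind of fix (not necessarily the module's exact change), using `to_bytes`:

```python
# Sketch only: convert the PEM text to bytes before writing, instead of
# calling str() on the certificate object.
from ansible.module_utils._text import to_bytes


def write_certificate(path, crt):
    with open(path, "wb") as certfile:
        certfile.write(to_bytes(crt))
```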
* Verify that acme-tiny is present
* Use run_command rather than subprocess for acme-tiny
Besides consistency with the rest of the code base, this also
adds two bug fixes:
- ansible should no longer show "warning, junk after json" when using the module
- it also verifies the return code of acme-tiny, and so fails when the
verification fails. The previous code didn't check rc, so it would continue
with an empty file
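A hedged sketch of the `run_command`-based invocation with the return code check; the acme-tiny options are standard, but the helper name and paths are illustrative.

```python
# Sketch only: run acme-tiny via module.run_command and honour its rc so we
# never silently write an empty certificate file.
def issue_certificate(module, acme_tiny_path, account_key, csr, challenge_dir, dest):
    cmd = [acme_tiny_path,
           "--account-key", account_key,
           "--csr", csr,
           "--acme-dir", challenge_dir]
    rc, out, err = module.run_command(cmd)
    if rc != 0:
        module.fail_json(msg="acme-tiny failed", rc=rc, stderr=err)
    with open(dest, "wb") as f:
        f.write(out if isinstance(out, bytes) else out.encode("utf-8"))
```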
The accumulated collected_facts was being updated
with new facts _after_ filtering them, so only
facts that pass the filter would ever be passed
to other fact collectors.
For 'filter=ansible_service_mgr', even though it requires
the platform and distribution facts and even collects them,
they would get filtered out and never passed to the other
collectors that need them (service_mgr, for example).
The fix is just to add the unfiltered facts to collected_facts.
Adds unit tests for fact filter and collected_facts.
Fixes #32286
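A minimal sketch of the corrected ordering: later collectors see everything gathered so far, while the filter is applied only to the returned facts. The collector call shape here is an assumption, not the real collector API.

```python
# Sketch only: keep collected_facts unfiltered for later collectors; narrow
# only the facts returned to the user.
import fnmatch


def collect_all(collectors, filter_spec="*"):
    collected_facts = {}
    facts = {}
    for collector in collectors:
        # Each collector sees everything gathered so far, unfiltered.
        new_facts = collector(collected_facts)
        collected_facts.update(new_facts)
        # Only the returned facts are narrowed by the filter.
        facts.update({k: v for k, v in new_facts.items()
                      if fnmatch.fnmatch(k, filter_spec)})
    return facts
```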
The search string used to look for Clear Linux
was changed in 45a9f96774 to
be more specific, but was too specific. Now finding
a substring match for 'Clear Linux' in /usr/lib/os-release
is enough to consider a match.
Since the details of the full name in os-release varies
('Clear Linux Software for Intel Architecture',
'Clear Linux OS for Intel Architecture', etc) the
search string match was failing and would fall back to the
'first word in the release file' method resulting in
ansible_distribution='NAME="Clear'
Also add a meta fact indicating which search string
was matched.
Test case info from:
https://github.com/ansible/ansible/issues/31501#issuecomment-340861535
Fixes #31501
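A minimal sketch of the relaxed match, assuming the text of /usr/lib/os-release has already been read:

```python
# Sketch only: a substring match is enough, since the full NAME= value varies
# ('Clear Linux Software for Intel Architecture', 'Clear Linux OS for Intel
# Architecture', and so on).
def is_clearlinux(os_release_text):
    return "Clear Linux" in os_release_text
```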
Extract vault related bits of DataLoader._get_file_contents to DataLoader._decrypt_if_vault_data
When loading vault password files, detect if they are vault encrypted, and if so, try to decrypt with any already known vault secrets.
This implements the 'Allow vault password files to be vault encrypted' (#31002) feature card from
the 2.5.0 project at https://github.com/ansible/ansible/projects/9
Fixes #31002
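A hedged sketch of the decrypt-if-needed step; the helper name mirrors `_decrypt_if_vault_data`, while the header check and the `vault_lib` object are simplifications of the real code.

```python
# Sketch only: pass raw bytes through unless they carry the vault header, in
# which case decrypt them with the already-known vault secrets.
b_VAULT_HEADER = b"$ANSIBLE_VAULT;"


def decrypt_if_vault_data(b_data, vault_lib):
    """Return plaintext bytes, decrypting with known vault secrets if needed."""
    if not b_data.startswith(b_VAULT_HEADER):
        return b_data
    # vault_lib (e.g. a VaultLib set up with the known secrets) does the work.
    return vault_lib.decrypt(b_data)
```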
People expect to be able to upload files to S3 using standard
locations for files.
Providing an action plugin that effectively rewrites the `src`
key to the result of finding such a file is a great help.
Tests added, and IAM permissions corrected
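A hedged sketch of such an action plugin, using the existing `_find_needle` helper to resolve `src` the way copy and template do; error handling and the rest of the real plugin are omitted.

```python
# Sketch only: rewrite `src` to the first match found in the role/playbook
# `files/` search paths before handing off to the aws_s3 module.
from ansible.plugins.action import ActionBase


class ActionModule(ActionBase):

    def run(self, tmp=None, task_vars=None):
        result = super(ActionModule, self).run(tmp, task_vars)
        args = self._task.args.copy()

        if args.get("src"):
            args["src"] = self._find_needle("files", args["src"])

        result.update(self._execute_module(
            module_name="aws_s3", module_args=args, task_vars=task_vars))
        return result
```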