The metaclass boilerplate is safe to apply en masse. The future import
boilerplate requires the code to be inspected to make sure there aren't
any py2isms that need to be fixed. Split these two checks so that we can
fix them independently.
Be explicit about which files are grandfathered so we can fix them up one by one.
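For reference, these are the two standard boilerplate lines the checks look at (shown here for context, not taken from the patch itself):

```python
# The __future__ line changes division/print/import semantics on Python 2 and
# therefore needs the surrounding code reviewed; the metaclass line only makes
# classes new-style and is safe to add mechanically.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
```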
* cosmetic: Remove useless call to ec2_argument_spec()
* aws_s3: Improve ETag handling
* Extract ETag calculation into a utility function for reuse by
aws_s3_sync.
* Reduce code duplication in put/get by restructuring the logic
* Only calculate ETag when overwrite == different
* Fail gracefully when overwrite == different and MD5 isn't available
(e.g. due to FIPS-140-2).
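For context, a rough sketch of how an S3-style ETag can be computed locally; the helper name and the 8 MiB part size are assumptions, and the actual utility function may differ:

```python
import hashlib

def calculate_etag(path, part_size=8 * 1024 * 1024):
    """Compute an S3-style ETag for a local file.

    A single-part upload's ETag is the plain MD5 hex digest; a multipart
    upload's ETag is the MD5 of the concatenated binary part digests plus
    "-<part count>". On FIPS-140-2 systems hashlib.md5 is unavailable and
    raises an error, which is the case the module now fails gracefully on.
    """
    md5s = []
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(part_size), b''):
            md5s.append(hashlib.md5(chunk))
    if len(md5s) <= 1:
        digest = md5s[0] if md5s else hashlib.md5()
        return '"%s"' % digest.hexdigest()
    combined = hashlib.md5(b''.join(m.digest() for m in md5s))
    return '"%s-%d"' % (combined.hexdigest(), len(md5s))
```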
* aws_s3: clean up integration tests
Clean up tests, add tests for overwrite settings in both directions.
* Add msg parameter to the mandatory filter
The `mandatory` filter would be more useful, particularly when dealing with nested dictionaries, with the simple addition of a `msg` parameter for supplying it with a custom failure message.
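A minimal sketch of such a filter, assuming a Jinja2-style filter function; the real Ansible filter raises its own error type and reports the undefined variable's name:

```python
from jinja2.runtime import Undefined

def mandatory(a, msg=None):
    """Fail with a custom message (if given) when the value is undefined."""
    if isinstance(a, Undefined):
        raise ValueError(msg or 'Mandatory variable not defined.')
    return a
```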
In "status: present" playbook execution for a VDO volume that is
absent, all of the parameters given in the playbook are issued to the
"vdo create" command, so any parameters that are "unrecognized" will
cause the "vdo" command to return an error with the message
"unrecognized arguments", which can then be relayed back to the user.
This is a gracefully handled failure case. Examples of "unrecognized"
parameters are new features that are not yet in the current version of
VDO, or features that have been removed from the current version of
VDO.
In "status: present" playbook execution for a VDO volume that is
already present, the module should behave the same as it does in the
"creation" stage, but it does not, because the key strings for those
parameters do not exist in the "vdo status" output. This results in a
KeyError on the parameter that no longer exists.
Therefore, use "if statfield in processedvdos[desiredvdo]:" to filter
the modifiable parameters down to the ones supported by this VDO
version, then use "if statfield not in processedvdos[desiredvdo]:" to
avoid the KeyError and add the "unsupported" parameters.
Also, instead of using the "currentvdoparams" dictionary, which
contains only the parameters reported by the "vdo status" output, use
the "modtrans" dictionary, which contains all of the possible
parameters. That way, if the playbook specifies an "unsupported"
parameter, it is passed on to the "vdo" command, which displays an
actionable error message.
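A simplified sketch of that filtering; the names "modtrans", "processedvdos", "desiredvdo", and "statfield" come from the description above, while the helper shape and example data are illustrative:

```python
def split_params(modtrans, processedvdos, desiredvdo):
    """Separate parameters reported by "vdo status" from unsupported ones."""
    current = {}
    unsupported = []
    for statfield, param in modtrans.items():
        if statfield in processedvdos[desiredvdo]:
            # Reported by this VDO version; safe to compare with the playbook value.
            current[param] = processedvdos[desiredvdo][statfield]
        else:
            # Not reported by this version: skip the lookup (no KeyError) and let
            # the "vdo" command itself reject the option with an actionable
            # "unrecognized arguments" error.
            unsupported.append(param)
    return current, unsupported

# Illustrative data: "Block map cache size" is reported, "New feature" is not.
modtrans = {'Block map cache size': 'blockmapcachesize', 'New feature': 'newfeature'}
processedvdos = {'vdo1': {'Block map cache size': '128M'}}
print(split_params(modtrans, processedvdos, 'vdo1'))
```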
This fixes https://github.com/ansible/ansible/issues/54556
Signed-off-by: Bryan Gurney <bgurney@redhat.com>
This should be `ansible_connection`, not `connection_type`. We can also
update local testing logic.
Remove nxos_install_os/tasks/network_local.yaml as it is no longer used.
Signed-off-by: Paul Belanger <pabelanger@redhat.com>
VMware-specific documentation that explains:
- how to run the functional tests
- the conventions
- the difference between govcsim and vcsim
Co-Authored-By: Alicia Cozine <879121+acozine@users.noreply.github.com>
Co-Authored-By: Abhijeet Kasurde <akasurde@redhat.com>
* Use `compile` before `eval` in collection loader.
This fixes two issues:
1. File names are available when tracing execution, such as with code coverage.
2. Future statements are not inherited from the collection loader.
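A sketch of the idea (not the loader's actual code): `compile` attaches the real filename to the code object, and `dont_inherit=True` keeps the compiling module's own `__future__` flags from leaking into the loaded source.

```python
# Hypothetical path and source, for illustration only.
source = 'x = 1 / 2\n'
code = compile(source, '/path/to/ansible_collections/ns/coll/plugins/filter/demo.py',
               'exec', dont_inherit=True)
namespace = {'__name__': 'demo'}
exec(code, namespace)
# Tracebacks, tracing, and coverage now see the real filename, and the compiled
# code did not inherit this module's __future__ flags.
```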
* Add unit tests for collection loading.
These tests verify several things:
1. That unit tests can import code from collections when the collection loader is installed.
2. That tracing reports the correct file and line numbers (to support code coverage).
3. That collection code does not inherit __future__ statements from the collection loader.
* Update unit test handling of the collection loader.
Since the collection loader is installed simply by importing ansible.plugins.loader,
we may already have a collection loader installed when the test runs. This occurs if
any other collected tests trigger that import during test collection. Until that code
is moved into an initialization function to avoid loading during import, the unit tests
will need to replace any existing collection loaders so that they reflect the desired
configuration.
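A sketch of what replacing an already-installed loader might look like (names are illustrative, not the test code):

```python
import sys

def install_fresh_loader(loader_cls, new_loader):
    """Drop any previously installed loader of the given type, then install ours."""
    sys.meta_path[:] = [finder for finder in sys.meta_path
                        if not isinstance(finder, loader_cls)]
    sys.meta_path.insert(0, new_loader)
```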
* Insert into sys.modules before calling exec.
This is a requirement of PEP 302.
It will prevent recursion errors when importing the current module or using a relative import.
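Sketch of the ordering (illustrative, not the loader's code): the module object is registered in `sys.modules` before its body runs, so a recursive or relative import of the same module finds the partially initialized entry instead of re-entering the loader.

```python
import sys
import types

def load_module_sketch(fullname, code):
    module = types.ModuleType(fullname)
    sys.modules[fullname] = module       # register first, as PEP 302 requires
    try:
        exec(code, module.__dict__)      # then execute the module body
    except BaseException:
        sys.modules.pop(fullname, None)  # roll back on failure
        raise
    return module
```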
* Use the correct value for __package__ in modules.
This allows using relative imports in collections.
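The underlying rule, sketched with a hypothetical helper: relative imports resolve against `__package__`, which is the parent package name for a regular module and the module's own name for a package.

```python
def package_for(fullname, is_package):
    """Return the __package__ value a module needs for relative imports to work."""
    # e.g. 'ansible_collections.ns.coll.plugins.module_utils.helper'
    #  ->  'ansible_collections.ns.coll.plugins.module_utils'
    return fullname if is_package else fullname.rpartition('.')[0]
```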
* Add warning about modifying code for trace test.
* Add test for relative import in collection.
* Add __init__.py to collection to satisfy pylint.
The relative-beyond-top-level rule in pylint may not be appropriate for collections.
However, until that rule is disabled for collections this will keep tests passing.
* Fix Pacman regex for unmatched Arch package name
`ansible -m pacman -a upgrade=yes $(hostname)` failed due to not
accounting for the `+` character in the `pacman -Qu` output line:
libsigc++ 2.10.0-1 -> 2.10.1-1
Per the Arch wiki¹, package names can contain alphanumeric characters,
and any of {`@`, `.`, `_`, `+`, `-`}.
The existing `re` covered `_` (part of `\w` in Python), and `-`
(explicitly included). This change adds `@`, `.`, and `+` (in
ASCII-betical order).
¹: https://wiki.archlinux.org/index.php/Arch_package_guidelines#Package_naming
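An illustrative pattern (not necessarily the module's exact regex) showing the widened character class against the failing line:

```python
import re

line = 'libsigc++ 2.10.0-1 -> 2.10.1-1'
# \w already covers alphanumerics and '_'; '@', '.', and '+' are added
# explicitly, alongside the '-' the old pattern already allowed.
match = re.match(r'([\w@.+-]+) (\S+) -> (\S+)$', line)
print(match.groups())  # ('libsigc++', '2.10.0-1', '2.10.1-1')
```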
* Add explanation for `pacman -Qu` regex matching
* Remove unneeded non-capturing groups in regex
* ansible-galaxy: add collection init sub command
* Fix changelog and other sanity issues
* Slim down skeleton structure, fix encoding issue on template
* Fix doc generation code to include sub commands
* Added build step
* Tidy up the build action
* Fixed up doc changes and slight testing tweaks
* Re-organise tests to use pytest
* Added publish step and fixed up issues after working with Galaxy
* Unit test improvements
* Fix unit test on 3.5
* Add remaining build tests
* Test fixes, make the integration tests clearer to debug on failures
* Removed unicode name tests until I've got further clarification
* Added publish unit tests
* Change expected length value
* Added collection install steps, tests forthcoming
* Added unit tests for collection install entrypoint
* Added some more tests for collection install
* Follow proper encoding rules and add more tests
* Add remaining tests
* Tidied up tests and code based on review
* Exclude pre-release versions from the Galaxy API