this was abandoned early on the manager side, but it seems it was left behind on the plugin side.
more flexible extensions with yaml plugin
validate data correctly for yaml/constructed
fixed issue with only adding one child to keyed groups; the group only got the host that forced its creation
fixes #31382, fixes #31365
* Addition of TCP protocol to ELB target group as target groups support HTTP/S and TCP now
* Fixup stickiness type so that it checks if the current_tg has the stickiness_type key in the dict, as TCP ones do not
Trying to associate an already-associated ElasticIP was failing.
This is however supported by the `boto` method that is used
under the hood, `associate_address`:
To quote `boto` documentation:
```
This option to allow an Elastic IP address that is already
associated with another network interface or instance to be
re-associated with the specified instance or interface.
```
This defaults to False, both for backwards compatibility
and to mirror the boto default value.
Fixes #27385
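A minimal sketch of how the new flag could reach boto (the option name allow_reassociation is assumed here, and it defaults to False as described above):
```
# Hedged sketch: pass the module option straight through to boto's
# EC2Connection.associate_address, which itself defaults to not reassociating.
import boto.ec2

def associate_eip(region, instance_id, public_ip, allow_reassociation=False):
    conn = boto.ec2.connect_to_region(region)
    return conn.associate_address(instance_id=instance_id,
                                  public_ip=public_ip,
                                  allow_reassociation=allow_reassociation)
```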
* Set desired capacity to min_size if no instances exist
* Improve readability of if/then clause
* Only update null desired_capacity to min_size on initial create
Any future updates to the ASG will be able to reference the existing
capacity.
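A minimal sketch of the create-time default described above (variable names are assumed):
```
# Only a missing desired_capacity is defaulted to min_size, and only on the
# initial create; later ASG updates keep referencing the existing capacity.
def initial_desired_capacity(desired_capacity, min_size):
    return min_size if desired_capacity is None else desired_capacity
```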
* Make ansible_selinux facts a consistent type
Rather than returning a bool if the Python library is missing, return a dict with one key containing a message explaining there is no way to tell the status of SELinux on the system because the Python library is not present (see the sketch below).
* Fix unit test
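A minimal sketch of the consistent return type described above (the exact key and message are assumptions):
```
# When the selinux Python bindings are missing, return a dict with an
# explanatory status instead of a bare False, so the fact is always a dict.
try:
    import selinux
    HAVE_SELINUX = True
except ImportError:
    HAVE_SELINUX = False

def selinux_fact():
    if not HAVE_SELINUX:
        return {'status': 'Missing selinux Python library'}
    status = 'enabled' if selinux.is_selinux_enabled() else 'disabled'
    return {'status': status}
```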
This reverts commit f8005d2737.
The fix needs to be rethought, as it applies only to newer git versions,
and the use of the env shell breaks with non-'bourne compatible' shells.
* Allow any_errors_fatal to be set in playbook.
* Default to the config file value for any_errors_fatal only if it isn't already provided.
* add _get_attr method
Make sure that the example in the docs is usable:
    # Remove storage domain
    - ovirt_storage_domains:
        state: absent
        name: mystorage_domain
        format: true
Without this PR the data_center and host parameters were required when we wanted to
remove a storage domain.
Also fixes a regression when trying to remove a detached
storage domain.
The following patch fixes a regression when trying to remove a detached
storage domain.
As part of the remove process the ovirt_storage_domains module first
tries to move the domain to maintenance and detach it.
In the case of removing a detached storage domain with no DC attached to it,
the maintenance process will fail with a 404 (not exists) exception when
trying to fetch the DC using an empty GUID.
The fix is to return a None value in the case of a detached
storage domain.
* Add update_only parameter for yum module
When using latest, `update_only: yes` will ensure that only existing
packages are updated and no additional packages are installed (see the sketch below).
* Update yum.py
Update version added for `update_only` parameter to 2.5
* add unit tests for update_only flag in yum module
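A minimal sketch of the update_only filtering described above (hypothetical helper, not the module's actual code):
```
# With state=latest and update_only=yes, only packages that are already
# installed are passed on to yum, so nothing new gets installed.
def filter_update_only(requested_pkgs, is_installed):
    return [pkg for pkg in requested_pkgs if is_installed(pkg)]
```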
* Add new lines to end of config file lines
* Properly write out selinux config file
Change module behavior to not always report a change but warn if a reboot is needed and return reboot_required.
Improve the output messages.
Add strip parameter to the get_file_lines utility to help with parsing the SELinux config file (see the sketch after this list).
* Add return documentation
* Add integration tests for selinux module
* Use consistent capitalization for SELinux
* Use atomic_move in selinux module
* Don't copy the config file initially
There's no need to make a copy just for reading.
* Put message after set_config_policy in case the change fails
* Add aliases to selinux tests
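A minimal sketch of the get_file_lines strip parameter mentioned in the list above (the real helper lives in module_utils; the signature shown is an assumption):
```
# Callers parsing /etc/selinux/config can ask for surrounding whitespace and
# trailing newlines to be stripped from each line.
def get_file_lines(path, strip=True):
    with open(path) as f:
        lines = f.readlines()
    return [line.strip() for line in lines] if strip else lines
```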
* win_become: move error handling to Ansible outside of shell
* trimmed the output so double newlines don't get set
* added test for non-zero exit code
* missed issue URL on test
* changed exit to SetShouldExit
The /etc/os-release based distro detection doesn't
seem to work for Ubuntu 10.04 (no /etc/os-release?).
So it was testing the next case, which was /etc/lsb-release, to
see if it is 'Mandriva'. Since the check for the existence of
(/etc/lsb-release, Mandriva) was the first non-empty dist
file match, 'ansible_distribution' was being set to 'Mandriva',
expecting to be corrected by the data from the dist file content.
But since the dist file parsing for Mandriva didn't match for
the Ubuntu 10.04 /etc/lsb-release _and_ there is no Debian-specific
lsb-release check, 'ansible_distribution' stayed at 'Mandriva'
and the dist file checking loop kept going and eventually ran off
the end of the list without finding a better match.
Adding a Debian/Ubuntu-specific check for /etc/lsb-release after
the Debian os-release check sets the info correctly and stops further
checking of dist files.
Fixes #30693
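A minimal sketch of the kind of Debian/Ubuntu /etc/lsb-release handling described above (key names follow the standard lsb-release format; this is not the module's exact code):
```
# DISTRIB_ID / DISTRIB_RELEASE / DISTRIB_CODENAME give the final distribution
# facts, so the dist-file checking loop can stop here instead of running off
# the end of the list.
def parse_lsb_release(content):
    fields = {}
    for line in content.splitlines():
        if '=' in line:
            key, value = line.split('=', 1)
            fields[key.strip()] = value.strip().strip('"')
    return {
        'distribution': fields.get('DISTRIB_ID'),
        'distribution_version': fields.get('DISTRIB_RELEASE'),
        'distribution_release': fields.get('DISTRIB_CODENAME'),
    }
```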
'distribution' facts were being set after checking
the existence of the dist file, and then being set
again with more detail after it was successfully parsed.
But if the dist file was not successfully parsed and
matched against the required names, the loop continued
without resetting the earlier set facts. This is
how 'Mandriva' would end up as the 'distribution'
fact for unrelated cases (it would find /etc/lsb-release,
set the distro to 'Mandriva', then fail to parse/match and
continue the loop. If no other checks worked, 'Mandriva'
would stick).
* parse_dist_file_NA should check 'name' not distro for NA
parse_distribution_file_NA was checking whether the incoming
'distribution' fact is 'NA', but the fact itself can
already be specific at that point ('KDE Neon', for example); the
check is really whether the 'name' it was passed is 'NA'.
* for matches on OS_RELEASE_ALIAS (ie, 'Archlinux') do
not continue if the dist file content doesn't match. Previously
it had to because of the 'Mandriva' bug mentioned above.
This is a more general fix for #30693 than #30723
Fixes #30693
Related to #30600
In cli.CLI.unfrack_path callback, special case if the
value of '--output' is '-', and avoid expanding
it to a full path.
vault cli already has special cases for '-', so it
just needs to get the original value to work.
Fixes #30550
get_config would use ConfigManager.get_ini_value, which does not
exist. What we are meant to use is
ansible.config.manager.get_ini_config_value, and this method does not
expect a list, only a dictionary with a section and a key.
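A minimal sketch of the corrected call (section and key values are illustrative):
```
# get_ini_config_value takes a ConfigParser and a single dict with 'section'
# and 'key', not a list of entries.
import configparser

from ansible.config.manager import get_ini_config_value

parser = configparser.ConfigParser()
parser.read('ansible.cfg')
value = get_ini_config_value(parser, {'section': 'defaults', 'key': 'remote_user'})
```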
This PR addresses two issues:
1. The hg module was added to command module's check_command list,
so if someone runs hg directly from the command module, the command
module would warn the user "Consider using hg module rather than running hg".
We address this by removing hg from the list.
2. We added a new note to tell users the push feature will be addressed
in issue #31156.
* Added support to retrieving LIG resources in HPE OneView
* Fixing copyright header according to review
* Swapping out config for full credentials in parameter for documentation
* Added support to retrieving Enclosures in HPE OneView
- Added unit tests
* Updated version_added to 2.5
* Changing return type of enclosure_script to string
* Fixing copyright header according to review
* Replaced config for credentials in parameters for documentation
Fix adds a new module 'vmware_guest_powerstate' to manage
power states of virtual machine.
Fixes: #30371
Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
As part of the absent state of the ovirt_storage_domains module,
the pre_remove method tries to move the storage domain to
maintenance and detach it.
In case a destroy of a storage domain is being called there is no need
for those operations since the destroy might be merely a DB operation.
vm_username and vm_password are required parameters in
vmware_vm_shell. Fix adds changes to documentation as well.
Fixes: #28266
Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
* module_utils.urls - Encode the proxy connect as binary
Under Python3 the sendall method expects binary not a string.
Prior to this change the below exception was being thrown:
Traceback (most recent call last):
  File "/tmp/ansible_umxox7_x/ansible_modlib.zip/ansible/module_utils/urls.py", line 1044, in fetch_url
    client_key=client_key, cookies=cookies)
  File "/tmp/ansible_umxox7_x/ansible_modlib.zip/ansible/module_utils/urls.py", line 951, in open_url
    r = urllib_request.urlopen(*urlopen_args)
  File "/opt/blue-python/3.6/lib/python3.6/urllib/request.py", line 223, in urlopen
    return opener.open(url, data, timeout)
  File "/opt/blue-python/3.6/lib/python3.6/urllib/request.py", line 524, in open
    req = meth(req)
  File "/tmp/ansible_umxox7_x/ansible_modlib.zip/ansible/module_utils/urls.py", line 729, in http_request
    s.sendall((self.CONNECT_COMMAND % (self.hostname, self.port)).decode())
AttributeError: 'str' object has no attribute 'decode'
Encoding the value is in line with the lines below (Proxy-Authorization, etc.) which are being sent as binary.
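A minimal illustration of the fix (the names come from the traceback above; the encoding choice here is an assumption):
```
# socket.sendall() requires bytes on Python 3, so the CONNECT command must be
# encoded rather than (incorrectly) decoded.
def send_connect(sock, hostname, port):
    connect_command = 'CONNECT %s:%d HTTP/1.0\r\n\r\n' % (hostname, port)
    sock.sendall(connect_command.encode('latin-1'))
```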
Code like this:
    if cond1 and cond2:
        pass
    elif cond1:
        pass
Has a hidden dependency on the order that the conditions are checked.
This makes them fragile and subject to breakage during refactors.
Rewrite the code like this:
    if cond1:
        if cond2:
            pass
        else:
            pass
The nested structure makes the ordering explicit and less likely for
someone to break the code when they refactor.
* Add os_keystone_service_endpoint
This patch adds a new Ansible module which allows a user to create
an endpoint to a service with Keystone.
Fixes #23909
* os_keystone_endpoint: Fix style and messages
Fix comments, pep8, version, metadata, license header
and imports according to the Contributing Modules Checklist
Signed-off-by: Alberto Murillo <albertomurillosilva@gmail.com>
* os_keystone_endpoint: Fix return values
- Change type of 'endpoint' return value from dictionary to complex
in order to get validate_module checks passed.
- Remove 'id' from the return data since it is included inside the
'endpoint' value which is already being returned.
- Rename 'service' field to 'service_id' which is the correct name
for the service id field returned in json.
Signed-off-by: Alberto Murillo <albertomurillosilva@gmail.com>
* os_keystone_endpoint: Update shade version
Update minimum shade version to 1.11.0
Signed-off-by: Alberto Murillo <albertomurillosilva@gmail.com>
* os_keystone_endpoint: Make region optional
Signed-off-by: Alberto Murillo <albertomurillosilva@gmail.com>
* os_keystone_endpoint: Validate service exists before using service.id
Signed-off-by: Alberto Murillo <albertomurillosilva@gmail.com>
* os_keystone_endpoint: Fix documentation for service to accept name or id
Signed-off-by: Alberto Murillo <albertomurillosilva@gmail.com>
* os_keystone_endpoint: Pass the full service object to create_endpoint()
We already have the service object retrieved in the code; by passing service.id to
create_endpoint, the shade library queries the API again to get the full service
object.
By passing the already retrieved service object to create_endpoint() we save one
request to the API.
Signed-off-by: Alberto Murillo <albertomurillosilva@gmail.com>
* os_keystone_endpoint: Make type explicit in module arguments.
Although type defaults to str when not specified in module arguments,
this commit explicitly defines type='str' for better readability.
Signed-off-by: Alberto Murillo <albertomurillosilva@gmail.com>
* Fix fact failures caused by ordering of collectors
Some fact collectors need info collected by other facts.
(for ex, service_mgr needs to know 'ansible_system').
This info is passed to the Collector.collect method via
the 'collected_facts' info.
But the order the fact collectors were running in was
not fixed, so collectors like service_mgr could
run before the PlatformFactCollector ('ansible_system', etc.),
so the 'ansible_system' fact would not exist yet.
Depending on the collector and the deps, this can result
in incorrect behavior and wrong or missing facts.
To make the ordering of the collectors more consistent
and predictable, the code that builds that list is now
driven by the order of collectors in default_collectors.py,
and the rest of the code tries to preserve it.
* Flip the loops when building collector names
Iterate over the ordered default_collectors list,
selecting collectors for the final list in order, instead
of driving it from the unordered collector_names set.
This lets the list returned by select_collector_classes
stay in the same order as default_collectors.collectors (see the sketch below).
For collectors that have implicit deps on other fact collectors,
the default collectors can be ordered to include those early.
* default_collectors.py now uses a handful of sub lists of
collectors that can be ordered in default_collectors.collectors.
Fixes #30753, fixes #30623
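A minimal sketch of the flipped selection loop (function and attribute names are assumptions):
```
# Iterating the ordered default collector list and filtering by the selected
# names keeps the result in default_collectors order instead of set order.
def select_collector_classes(collector_names, all_collector_classes):
    return [cls for cls in all_collector_classes if cls.name in collector_names]
```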
* Return correct changed status when EIP is reused
When reusing an existing EIP, the changed status
should be False, not True.
* If public_ip is given and it exists, return it
Ensure EIP allocation returns existing public_ip correctly
* Added ecs_taskdefinition_facts module
* Expanding documentation
Now includes all possible return values
* Fixed boto dependency
* Converting results to snake case.
* Remove EcsTaskManager class, move to main()
Remove unnecessary `except` block
* Change botocore import method
Also make Profile exception message less redundant
* Changing case conversion of the results
Now converts only the root level keys
A commented-out version that would convert everything except container_definitions is included
Avoid the following error, seen when running ec2_ami tests on Python 3,
presumably because the return type of `map` is different between
Python 2 and Python 3.
```
Traceback (most recent call last):
File "/tmp/ansible_e44v27uj/ansible_module_ec2_snapshot_facts.py", line 242, in <module>
main()
File "/tmp/ansible_e44v27uj/ansible_module_ec2_snapshot_facts.py", line 238, in main
list_ec2_snapshots(connection, module)
File "/tmp/ansible_e44v27uj/ansible_module_ec2_snapshot_facts.py", line 193, in list_ec2_snapshots
snapshots = connection.describe_snapshots(SnapshotIds=snapshot_ids, OwnerIds=owner_ids, RestorableByUserIds=restorable_by_user_ids, Filters=filters)
File "/usr/local/lib/python3.5/dist-packages/botocore/client.py", line 312, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python3.5/dist-packages/botocore/client.py", line 575, in _make_api_call
api_params, operation_model, context=request_context)
File "/usr/local/lib/python3.5/dist-packages/botocore/client.py", line 630, in _convert_to_request_dict
api_params, operation_model)
File "/usr/local/lib/python3.5/dist-packages/botocore/validate.py", line 291, in serialize_to_request
raise ParamValidationError(report=report.generate_report())
botocore.exceptions.ParamValidationError: Parameter validation failed:
Invalid type for parameter OwnerIds, value: <map object at 0x7ff577511048>, type: <class 'map'>, valid types: <class 'list'>, <class 'tuple'>
```
https://github.com/ansible/ansible/pull/30435#issuecomment-330750498
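A minimal illustration of the fix (sample IDs are made up):
```
# On Python 3 map() returns an iterator, which botocore rejects for list
# parameters, so convert explicitly before building the request.
def to_str_list(values):
    return list(map(str, values))

owner_ids = to_str_list([123456789012])  # a real list on both Python 2 and 3
```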
* fixed ansible/git invocation options
now falls back to using localhost, as 'all' no longer accidentally includes the implicit localhost
fixes #30636
* better fix
* qfq9
* Save the serialized values instead of their types
* Add tests for creating and modifying VMs without using a template
* Remove blank line
* Add tests for vm deletion
In Python 2, str gives a byte string. In Python 3 it gives a unicode string, so it
can't be written to a file opened in binary mode.
Use the to_bytes helper function to ensure the content being written will be
properly encoded on both Python 2 and Python 3.
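A minimal sketch of the pattern described above (file path and content are illustrative):
```
from ansible.module_utils._text import to_bytes

content = u'some unicode content'
# to_bytes() makes the content safe to write to a file opened in binary mode
# on both Python 2 and Python 3.
with open('/tmp/example.out', 'wb') as f:
    f.write(to_bytes(content, errors='surrogate_or_strict'))
```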
* Adds ipa_dnszone
* Use new copyright/gpl notice
* Update metadata version
* Use native error handling
* Fix boilerplate
* Remove default false
* Use localhost
* Should be 2.5
* Fix cloudwatchevent_rule exception handling
Where it is currently present, this change fixes the exception handling.
However, there are many places that it is lacking.
Fixes #30806
* Add new exception handling for cloudwatchevent_rule
Ensure all API calls are wrapped with exception handling
* PEP8 tidy up
* Remove unnecessary HAS_BOTO3 import and checks
Tidy up documentation so that NO_QA can be removed
* Use vault_id when encrypted via vault-edit
On the encryption stage of
'ansible-vault edit --vault-id=someid@passfile somefile',
the vault id was not being passed to encrypt() so the files were
always saved with the default vault id in the 1.1 version format.
When trying to edit that file a second time, also with a --vault-id,
the file would be decrypted with the secret associated with the
provided vault-id, but since the encrypted file had no vault id
in the envelope there would be no match for 'default' secrets.
(Only the --vault-id was included in the potential matches, so
the vault id actually used to decrypt was not).
If that list was empty, there would be an IndexError when trying
to encrypt the changed file. This would result in the displayed
error:
ERROR! Unexpected Exception, this is probably a bug: list index out of range
The fix is in two parts:
1) use the vault id when encrypting from edit
2) when matching the secret to use for encrypting after edit,
include the vault id that was used for decryption and not just
the vault id (or lack of vault id) from the envelope.
Add unit tests for #30575 and integration tests for 'ansible-vault edit'
Fixes #30575
* timezone: Add support for macOS
On macOS, preferred way of managing timezone is via `systemsetup(8)`.
Thus, we use this command instead of relying on directly modifying
`/etc/localtime` as in other *BSDs.
* timezone: Use % instead of .format() in strings
This ensures better compatibility across different versions of Python.
* Fix 'distribution' fact for ArchLinux
Allow empty wasn't breaking out of the process_dist_files
loop, so an empty /etc/arch-release would continue the search
and eventually try /etc/os-release. The os-release parsing
works, but the distro name there is 'Arch Linux', which does
not match the 2.3 behavior of 'Archlinux'.
Add an OS_RELEASE_ALIAS map for the cases where we need to get
the distro name from os-release but use an alias.
We can't include 'Archlinux' in SEARCH_STRING because a name match on its keys
but without a match on the content causes a fallback to using the first
whitespace-separated item from the file content as the name.
For os-release, that is in form 'NAME=Arch Linux'
With os-release returning the right name, this also supports the
case where there is no /etc/arch-release, but there is a /etc/os-release
Fixes #30600
* pep8 and comment cleanup
* updated docs
- for devs:
- added inventory/vars section
- made some updates to general section and other plugin types
- for users:
- added 'user' plugin section to start describing the plugins
- docs on types, what they are and how to use
- removed ref to deleted AUTHORS file
- corrected several typos/headers
- added descriptions to config.rst template
- ignore generated files for cli/plugins and config
- remove new generated files on `make clean`
- moved details from devguide and intro doc to plugin-specific pages
- prettied up lookup notes
- changed precedence ref to not conflict with config
- removed duplicate config data, as config is autogenerated and up to date
- put new plugins under playbooks
- added `pass` because rst/python dislikes fractions
- removed dupe in .gitignore, alpha sorted to avoid more dupes
- added try because rst/python freaks out
* generate plugins into their own dir
only do plugins that support docs
use toctree from main plugins page
As reported on the mailing list, if ssh_executable (from a config
setting) contains non-ASCII characters then we could get a UnicodeError
here. Transform into bytes before passing to subprocess so that
subprocess doesn't transform to bytes for us.
On sparc64, /proc/cpuinfo has none of the usual 'model name', 'Processor', 'vendor_id', or 'Vendor' entries,
and as a result "ansible_processor_vcpus" is always 1.
Add the check element "ncpus active" to fix the issue.
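A minimal sketch of the added check (helper name is hypothetical):
```
# On sparc64 /proc/cpuinfo carries an "ncpus active" line instead of the usual
# per-CPU "processor"/"model name" entries.
def vcpus_from_cpuinfo(cpuinfo_text):
    for line in cpuinfo_text.splitlines():
        if line.startswith('ncpus active'):
            return int(line.split(':', 1)[1].strip())
    return 1
```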
* Fix pkg_mgr fact on OpenBSD
Add a OpenBSDPkgMgrFactCollector that hardcodes pkg_mgr
to 'openbsd_pkg'. The ansible collector will choose the
OpenBSD collector if the system is OpenBSD and the 'Generic'
one otherwise.
This removes the PkgMgrFactCollector's dependency on the
'system' fact being in collected_facts, which also
avoids ordering issues (if the pkg mgr fact is collected
before the system fact...)
Fixes #30623
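A minimal sketch of the hardcoded OpenBSD collector described above (class and import paths are assumptions):
```
from ansible.module_utils.facts.collector import BaseFactCollector

class OpenBSDPkgMgrFactCollector(BaseFactCollector):
    name = 'pkg_mgr'
    _platform = 'OpenBSD'

    def collect(self, module=None, collected_facts=None):
        # No need to look at the 'system' fact; OpenBSD always uses openbsd_pkg.
        return {'pkg_mgr': 'openbsd_pkg'}
```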
* Feature to Specify AZURE blob storage type
* Feature to Specify AZURE blob storage type
* Feature to Specify AZURE blob storage type
* Revert "Feature to Specify AZURE blob storage type"
This reverts commit 1d33997769ef3763a2eb434404c918134761635f.
modified: lib/ansible/module_utils/azure_rm_common.py
* Feature to Specify AZURE blob storage type
Fix adds an update_dns option for the ipa_host module. This option will
update DNS records of the host which is managed by the FreeIPA DNS server.
Fixes: #30627
Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
* Fix nxos provider transport warning issue
* Add default value of transport arg in provider spec
* Remove default value if transport arg in top level spec
This ensures the deprecation warning is seen only when transport
is given as a top-level arg in the task
* Refactor nxos modules to reference transport value from provider
spec
* Fix unit test
* Remove transport arg assignment in nxos action plugin
* As assigning the transport value is handled in the provider spec,
top-level task arg assignment is no longer required
* win_scheduled_task_stat: add new module to get stat on scheduled tasks
* fixed up linting errors and aliases file
* I should learn how to spell
* removing URI from test
* added state information for the task
* removed argument so task stays running
* Undeprecate ec2_elb_*
* Make ec2_elb* full fledged modules rather than aliases
* Split tests for ec2_elb_lb and elb_classic_lb
* Change names in documentation of old and new elb modules
Add tests for ec2_elb_lb
with the new configuration the sudo flags are always set and become cannot override them;
switching to a simple 'or' will result in become_flags working.
also, sudo_flags are deprecated.
also changed from YAML null causing a 'None' str
fixes #30629
This PR includes:
- Support for loop-tasks with proper subject/error content
- Improved output (and proper indentation)
- Complex data structures are now pretty printed
- Better selection of mail subject
As discussed before, we selected win_environment for the documentation,
and point to win_uri as a more advanced module.
If we want to make this the reference module, we have to get this one
absolutely right in every possible way.
This PR cleans up both win_environment and win_uri, and makes the
required changes to the windows module development section.
This PR includes:
- An important fix to charset encoding of from address
- Documentation and examples cleanup
- PEP8 fixes
- Warning on insecure access
- Strict parameter typing
- More modern interface (using lists rather than comma, space or pipe-delimited strings)
- Warn on failure to send mail to some recipients
```
[WARNING]: Failed to send mail to 'foobar': 550 5.1.1 <foobar>:
Recipient address rejected: User unknown in local recipient table
```
- Warn on failure to parse some headers
```
[WARNING]: Skipping header 'Foobar', unable to parse
```
- Return failed recipients as return value
- Changed default encoding to utf-8
* made callbacks backwards compatible
This fixes #30597 for those callbacks that were not inheriting from base.
Added a deprecation notice so those callbacks get updated.
Callbacks must either inherit from base (directly or indirectly),
which already implements this, or implement set_options themselves.
* added note about porting guide
This is to catch vault secrets from config and
cli. Previously vault_password_file in config was
missed since it was added by setup_vault_secrets,
so check after setup_vault_secrets.
* Restore correct coloring to selective callback
This fixes the bug raised in #30506
* Fix format issues for Python 2.6 & indent
Removed the zero length fields to support format under Python 2.6
Fixed E128 continuation line under-indented for visual indent issue
* Add Routing Engine Facts
- Map routing engine output information to routing_engines facts dict.
- Add fact 'has_2RE', which is a quick way to determine how many REs
the chassis has.
* Fix a typo
* Fix more typos
* Add slot number to routing_engine dict
* Add facts about the installed chassis modules
* Fix typo
* Fixed another typo
* Fix Path
* Change path again.
* More Typos
* Add some debugging
* Add additional information for hardware components.
- Return information about the Routing Engines.
- Return a fact to easily determine if the device
has two routing engines.
- Return information about the hardware modules.
* Addressed pep8 standard failures.
* Add unit test fixtures.
* Rename fixture.
* Fix unit test failures.
- Rename the fixture file to what the unit test expects.
- Strip out junos namespace attributes.
Rename file to match what the unit test expects.
* Scrubbed the routing engine serial numbers.
* Add unit test facts for new tests.
- Add unit test for ansible_net_routing_engines fact
- Add unit test for ansible_net_modules fact
- Add unit test for ansible_net_has_2RE
* Fixed spacing.
* win_scheduled_task: rewrite for additional functionality and bug fixes
* fixes for docs and os version differences
* started with the testing
* doc fix
* added more tests
* added principals tests
* finished tests for win_scheduled_task rewrite
* feedback from PR
* change to fail when both new and deprecated args are set
* change diff variable to match new standard and update doc sentence
* Don't ask for password confirm on 'ansible-vault edit'
This is to match the 2.3 behavior on:
ansible-vault edit encrypted_file.yml
Previously, the above command would consider that a 'new password'
scenario and prompt accordingly, ie:
$ ansible-vault edit encrypted_file.yml
New Password:
Confirm New Password:
The bug was caused by 'create_new_password' being used for the
'edit' action. This also caused the previous implicit 'auto prompt'
to get triggered and prompt the user.
Fix is to make auto prompt explicit in the calling code to handle
the 'edit' case where we want to auto prompt but we do not want
to request a password confirm.
Fixes #30491
* finalize lookup documentation
* minor fixes to ansible-doc
- actually show which file caused error on when listing plugins
- removed redundant display of type and name
* smart quote fixes from toshio
Currently, MIQ only supports an alert type of 'prometheus', so rather than have the caller of manageiq_provider pass this info, just set it as the default.
When calling manageiq_user on an already existing user (but leaving out the password so that it doesn't automatically 're-create' the user), the module fails with:
fatal: [127.0.0.1]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Shared connection to 127.0.0.1 closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_Fr7Nt3/ansible_module_manageiq_user.py\", line 324, in <module>\r\n main()\r\n File \"/tmp/ansible_Fr7Nt3/ansible_module_manageiq_user.py\", line 315, in main\r\n res_args = manageiq_user.edit_user(user, name, group, password, email)\r\n File \"/tmp/ansible_Fr7Nt3/ansible_module_manageiq_user.py\", line 229, in edit_user\r\n if self.compare_user(user, name, group_id, password, email):\r\n File \"/tmp/ansible_Fr7Nt3/ansible_module_manageiq_user.py\", line 189, in compare_user\r\n (group_id and user['group']['id'] != group_id)\r\nKeyError: 'group'\r\n", "msg": "MODULE FAILURE", "rc": 0}
The 'group' field turns out to be 'current_group_id' (at least with ManageIQ 4.6). Update the comparison accordingly.
* add 'update_password' param to manageiq_user
Currently with the manageiq_user module, if you call it repeatedly while passing the 'password' parameter, it will always run the task and mark it as 'changed'.
Following the pattern of the AWS IAM module, add an 'update_password' parameter that takes 'always' (default) or 'on_create'. This will let you set an initial password when creating a user, but allow the user to modify their password and not stomp over their password changes if you re-run the playbook/task that created the user.
* don't stomp password when other fields change
Handle case where user fields change, but we don't want to stomp on a potentially user-changed password. Previously, if a non-password field changed, and the password param was passed in, it would ignore the 'update_password': 'on_create' setting (ie it would update/modify the password even if the user already exists).
Add trailing ',' to list of params.
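A minimal sketch of the described behaviour (hypothetical helper):
```
# With update_password='on_create' the password is only sent when the user is
# being created, so later edits never stomp a user-changed password.
def password_for_update(update_password, password, user_exists):
    if user_exists and update_password == 'on_create':
        return None  # omit the password from the edit payload
    return password
```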
* windows: fix list type in legacy module utils
* only change the return for the list type instead of affecting it all
* additional null check when using an array
* Fix tags in ec2_instance_facts
The method boto3_tag_list_to_ansible_dict in module_utils/ec2.py changed
and no longer checks whether the returned result of boto3 uses
"key" or "Key" as the tag key identifier.
This fixes ec2_instance_facts to make this check on its own, since boto3
may return "key" instead of "Key"
* Since the indices for the tags are already formatted to lowercase
by the snaking, we can assume that the index for the tags is already
formatted
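A minimal sketch of a tolerant tag lookup along the lines described above (hypothetical helper; the final code may simply rely on the snake_cased lowercase keys instead):
```
# Accept either 'Key'/'Value' or 'key'/'value' in the tag list returned by boto3.
def tags_to_dict(tag_list):
    return dict((tag.get('Key', tag.get('key')), tag.get('Value', tag.get('value')))
                for tag in tag_list)
```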
* timezone module: fixed platform decision rule for Linux
— For better handling of environments where timedatectl is unavailable
* timezone module: allow absence of configuration files if specific commands are available
* timezone module: remove duplicated line
* timezone module: fixed docs to clarify returned diff
* timezone module: fixed “undefined variable err”
* Revert "timezone module: fixed docs to clarify returned diff"
This reverts commit 4b783227f7.
* timezone module: revert platform decision rule; just warn instead of further command checks
* timezone module: [NosystemdTimezone] enhanced error message
* changed RunCommand result from Tuple to CommandResult for easier future extensibility
* moved Win32 Dictionary->multi-null-string environment munging into C#
As merged, it had several issues that prevented idempotent usage. Some args were defined at the wrong UI level. Dual-state args didn't match up with the typical Ansible UI.
* fixed issue with default callback inheritance
- callbacks need to document the same options as the callbacks they inherit from to get them configured
- since default is also used by many 3rd party callbacks for inheritance, making the code 'tolerate' the missing docs
and fall back to using the direct constant to configure its options.
* Added nopackages option and Fix #24997
Adding a new option - nopackages.
This enables the option to add the --nopackages flag while registering a new node to RHN Satellite. We are not uploading the rpm data on our nodes, and since we started utilizing Ansible for node registration, I figured it would be useful for others as well.
Also:
Fixes #24997 (verified in my lab)
* Fixed documentation
* Documentation changes:
- typo fix in "default"
- Added "version_added" and set to 2.4
* Documentation changes:
- Removed trailing whitespaces in nopackages['version_added']
* This change is unrelated to this feature pull request and shouldn't be here (and also seems wrong, see #25079).
* Changed "version_added" to 2.5 in the module docs
It could be something like '10beta4', which StrictVersion() would
reject. When Postgres 10 is released, it will be '10', which
StrictVersion() would STILL reject.
Fortunately, psycopg2 has a 'server_version' connection attribute that
is guaranteed to be an integer like 90605 for version 9.6.5, or 100000
for version 10. We can safely use this for version-specific code.
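A minimal sketch of relying on server_version (connection parameters are illustrative):
```
# psycopg2 exposes the server version as an integer, e.g. 90605 for 9.6.5 or
# 100000 for 10, so no string parsing with StrictVersion is needed.
import psycopg2

conn = psycopg2.connect(dbname='postgres')
if conn.server_version >= 100000:
    print('PostgreSQL 10 or newer')
```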
* Replace pause in integration tests with until.
Use resource prefix instead of generating a random number
Only try to delete keys if they exist
* Add alias to tests
1) import_role was never resulting in a static inclusion of the role
tasks due to a logic error.
2) no error was raised when import_role tried to use a with loop, resulting
in a strange error down the execution path.
* Consistency and document treatment of default bool values
* Document that default bool values can be any Ansible-recognized bool.
Choose the one that reads better in context.
* For fragments used by the copy module, make bool types use type=bool and not choices
* Edit for clarity
keyUsage and extendedKeyUsage are currently statically limited via a
static dict defined in module_utils/crypto.py. If one specifies a value
that isn't in there, idempotency won't work.
Instead of having a static dict, we use the keyUsage and extendedKeyUsage
values' OpenSSL NIDs and compare those rather than comparing strings.
Fixes: https://github.com/ansible/ansible/issues/30316
The dellos action plugins should add the remote address of the switch
provider to the play context. This was fixed in issue #23589 in an
almost identical manner for the eos, ios, iosxr, and vyos action
plugins.
Fixes: #30350