* win_become: move error handling to Ansible outside of shell
* trimmed the output so double newlines don't get set
* added test for non-zero exit code
* missed issue URL on test
* changed exit to SetShouldExit
The /etc/os-release based distro detection doesn't
seem to work for Ubuntu 10.04 (no /etc/os-release?).
So it was testing the next case, which was /etc/lsb-release, to
see if it is 'Mandriva'. Since the check for the existence of
(/etc/lsb-release, Mandriva) was the first non-empty dist
file match, 'ansible_distribution' was being set to 'Mandriva',
expecting to be corrected by the data from the dist file content.
But since the dist file parsing for Mandriva didn't match for
Ubuntu 10.04's /etc/lsb-release _and_ there is no Debian specific
lsb-release check, 'ansible_distribution' stayed at 'Mandriva'
and the dist file checking loop kept going, eventually running
off the end of the list without finding a better match.
Adding a Debian/Ubuntu specific check for /etc/lsb-release after
the Debian os-release check sets the info correctly and stops
further checking of dist files.
Fixes #30693
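A minimal sketch of the Debian-family /etc/lsb-release check described above; the function name and the exact fact keys are illustrative, not the actual facts-module code:

```python
def parse_debian_lsb_release(path='/etc/lsb-release'):
    """Illustrative only: derive distribution facts from /etc/lsb-release on
    Debian-family systems (e.g. Ubuntu 10.04, which has no /etc/os-release)."""
    facts = {}
    try:
        with open(path) as f:
            for line in f:
                if '=' not in line:
                    continue
                key, _, value = line.strip().partition('=')
                value = value.strip('"')
                if key == 'DISTRIB_ID':
                    facts['distribution'] = value          # e.g. 'Ubuntu'
                elif key == 'DISTRIB_RELEASE':
                    facts['distribution_version'] = value  # e.g. '10.04'
                elif key == 'DISTRIB_CODENAME':
                    facts['distribution_release'] = value  # e.g. 'lucid'
    except IOError:
        return {}
    return facts
```

Once a check like this matches, the dist-file loop can stop instead of falling through to the 'Mandriva' case.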
'distribution' facts were being set after checking
the existence of the dist file, and then being set
again with more detail after it was successfully parsed.
But if the dist file was not successfully parsed and
matched against the required names, the loop continued
without resetting the earlier set facts. This is
how 'Mandriva' would end up as the 'distribution'
fact for unrelated cases (it would find /etc/lsb-release,
set the distro to 'Mandriva', then fail to parse/match and
continue the loop. If no other checks worked, 'Mandriva'
would stick).
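A rough sketch of one way to express the fix, using hypothetical helper names (dist_file_candidates, parse_dist_file); the point is that a guessed name only sticks once a dist file actually parses and matches:

```python
# Hypothetical names used for illustration; not the real facts code.
def detect_distribution(dist_file_candidates, parse_dist_file):
    facts = {'distribution': 'NA'}
    for path, guessed_name in dist_file_candidates:
        try:
            with open(path) as f:
                content = f.read()
        except IOError:
            continue
        parsed, parsed_facts = parse_dist_file(guessed_name, content)
        if not parsed:
            # Do NOT keep the guessed name (e.g. 'Mandriva'); try the next file.
            continue
        facts['distribution'] = guessed_name
        facts.update(parsed_facts)
        break
    return facts
```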
* parse_dist_file_NA should check 'name' not distro for NA
parse_distribution_file_NA was checking whether the incoming
'distribution' fact was 'NA', but the fact can already be
specific at that point ('KDE Neon', for example); the real
check is whether the 'name' it was passed is 'NA'.
* For matches on OS_RELEASE_ALIAS (i.e., 'Archlinux'), do
not continue if the dist file content doesn't match. Previously
it had to, because of the 'Mandriva' bug mentioned above.
This is a more general fix for #30693 than #30723
Fixes #30693
Related to #30600
In the cli.CLI.unfrack_path callback, special-case a
'--output' value of '-' and avoid expanding
it to a full path.
The vault CLI already has special cases for '-', so it
just needs to receive the original value to work.
Fixes #30550
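A minimal sketch of the special case, assuming an unfrack_path-style callback; everything except the '-' handling is illustrative:

```python
import os


def unfrack_path_value(value):
    """Expand a CLI path argument, but leave '-' (stdin/stdout) untouched."""
    if value == '-':
        # The vault CLI treats '-' as stdin/stdout, so pass it through unchanged.
        return value
    return os.path.abspath(os.path.expanduser(os.path.expandvars(value)))


print(unfrack_path_value('-'))           # '-'
print(unfrack_path_value('~/out.yml'))   # e.g. '/home/user/out.yml'
```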
get_config was using ConfigManager.get_ini_value, which does not
exist. What we are meant to use is
ansible.config.manager.get_ini_config_value, and that method does not
expect a list, only a dictionary with a section and a key.
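A hedged reconstruction of the call shape described above, approximated with the stdlib parser; the section/key values and the raw/default handling are assumptions, not the actual ConfigManager code:

```python
import configparser  # stdlib stand-in for the parsed ini object Ansible holds


def get_ini_config_value(p, entry):
    """Sketch of the described signature: a parsed ini object plus a dict
    carrying 'section' and 'key' -- not a list."""
    if p is None:
        return None
    try:
        return p.get(entry.get('section', 'defaults'), entry.get('key', ''), raw=True)
    except configparser.Error:
        return None


p = configparser.ConfigParser()
p.read_string(u'[defaults]\nremote_user = deploy\n')
print(get_ini_config_value(p, {'section': 'defaults', 'key': 'remote_user'}))  # deploy
```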
This PR addresses two issues:
1. The hg module was added to the command module's check_command list,
so if someone runs hg directly from the command module, the command
module would warn the user "Consider using hg module rather than running hg".
We address this by removing hg from the list.
2. We added a new note to tell users that the push feature will be
addressed in issue #31156.
* Added support to retrieving LIG resources in HPE OneView
* Fixing copyright header according to review
* Swapping out config for full credentials in parameters for documentation
* Added support to retrieving Enclosures in HPE OneView
- Added unit tests
* Updated version_added to 2.5
* Changing return type of enclosure_script to string
* Fixing copyright header according to review
* Replaced config with credentials in parameters for documentation
Fix adds a new module 'vmware_guest_powerstate' to manage
power states of virtual machines.
Fixes: #30371
Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
As part of the absent state of the ovirt_storage_domains module,
the pre_remove method tries to move the storage domain to
maintenance and detach it.
In case a destroy of a storage domain is being called, there is no need
for those operations, since the destroy might be merely a DB operation.
vm_username and vm_password are required parameters in
vmware_vm_shell. Fix adds changes to documentation as well.
Fixes: #28266
Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>
* module_utils.urls - Encode the proxy connect as binary
Under Python 3 the sendall method expects bytes, not a string.
Prior to this change the below exception was being thrown:
Traceback (most recent call last):
File "/tmp/ansible_umxox7_x/ansible_modlib.zip/ansible/module_utils/urls.py", line 1044, in fetch_url
client_key=client_key, cookies=cookies)
File "/tmp/ansible_umxox7_x/ansible_modlib.zip/ansible/module_utils/urls.py", line 951, in open_url
r = urllib_request.urlopen(*urlopen_args)
File "/opt/blue-python/3.6/lib/python3.6/urllib/request.py", line 223, in urlopen
return opener.open(url, data, timeout)
File "/opt/blue-python/3.6/lib/python3.6/urllib/request.py", line 524, in open
req = meth(req)
File "/tmp/ansible_umxox7_x/ansible_modlib.zip/ansible/module_utils/urls.py", line 729, in http_request
s.sendall((self.CONNECT_COMMAND % (self.hostname, self.port)).decode())
AttributeError: 'str' object has no attribute 'decode'
Encoding the value is in line with the lines below (Proxy-Authorization, etc.), which are being sent as binary.
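A hedged before/after sketch of the change; the call mirrors the line shown in the traceback, with the surrounding CONNECT handling elided:

```python
# Abbreviated sketch inside the proxy CONNECT handling of module_utils/urls.py.
# self.CONNECT_COMMAND is the text template for the CONNECT request.

# Before: .decode() fails on Python 3 (str has no decode), and sendall() wants bytes.
# s.sendall((self.CONNECT_COMMAND % (self.hostname, self.port)).decode())

# After: encode to bytes so sendall() works on both Python 2 and Python 3,
# matching the Proxy-Authorization lines below it that are already sent as binary.
s.sendall((self.CONNECT_COMMAND % (self.hostname, self.port)).encode())
```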
Code like this:
if cond1 and cond2:
    pass
elif cond1:
    pass
has a hidden dependency on the order in which the conditions are checked.
This makes the code fragile and subject to breakage during refactors.
Rewrite the code like this:
if cond1:
    if cond2:
        pass
    else:
        pass
The nested structure makes the ordering explicit and makes it less
likely that someone will break the code when they refactor.
* Add os_keystone_service_endpoint
This patch adds a new Ansible module which allows a user to create
an endpoint for a service in Keystone.
Fixes #23909
* os_keystone_endpoint: Fix style and messages
Fix comments, pep8, version, metadata, license header
and imports according to the Contributing Modules Checklist
Signed-off-by: Alberto Murillo <albertomurillosilva@gmail.com>
* os_keystone_endpoint: Fix return values
- Change type of 'endpoint' return value from dictionary to complex
in order to get validate_module checks passed.
- Remove 'id' from the return data since it is included inside the
'endpoint' value, which is already being returned.
- Rename 'service' field to 'service_id' which is the correct name
for the service id field returned in json.
Signed-off-by: Alberto Murillo <albertomurillosilva@gmail.com>
* os_keystone_endpoint: Update shade version
Update minimum shade version to 1.11.0
Signed-off-by: Alberto Murillo <albertomurillosilva@gmail.com>
* os_keystone_endpoint: Make region optional
Signed-off-by: Alberto Murillo <albertomurillosilva@gmail.com>
* os_keystone_endpoint: Validate service exists before using service.id
Signed-off-by: Alberto Murillo <albertomurillosilva@gmail.com>
* os_keystone_endpoint: Fix documentation for service to accept name or id
Signed-off-by: Alberto Murillo <albertomurillosilva@gmail.com>
* os_keystone_endpoint: Pass the full service object to create_endpoint()
We already have the service object retrieved in code; by passing service.id to
create_endpoint(), the shade library queries the API again to get the full service
object.
By passing the already retrieved service object to create_endpoint() we save one
request to the API.
Signed-off-by: Alberto Murillo <albertomurillosilva@gmail.com>
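A rough sketch of the change; `cloud` and `module` come from the module's existing setup, and the keyword and parameter names are assumptions rather than verified shade 1.11.0 API:

```python
# 'service' was already looked up earlier in the module; names below are
# illustrative, not verified against shade 1.11.0.

# Before: passing only the id makes shade fetch the service object again.
# endpoint = cloud.create_endpoint(service_name_or_id=service.id, ...)

# After: pass the service object we already have, saving one API request.
endpoint = cloud.create_endpoint(service_name_or_id=service,
                                 url=module.params['url'],
                                 interface=module.params['interface'],
                                 region=module.params['region'])
```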
* os_keystone_endpoint: Make type explicit in module arguments.
Although type defaults to str when not specified in module arguments,
this commit explicitly defines type='str' for better readability.
Signed-off-by: Alberto Murillo <albertomurillosilva@gmail.com>
* Fix fact failures caused by ordering of collectors
Some fact collectors need info collected by other facts.
(for ex, service_mgr needs to know 'ansible_system').
This info is passed to the Collector.collect method via
the 'collected_facts' info.
But the order in which the fact collectors ran was not
a set order, so collectors like service_mgr could
run before PlatformFactCollector ('ansible_system', etc.),
so the 'ansible_system' fact would not exist yet.
Depending on the collector and the deps, this can result
in incorrect behavior and wrong or missing facts.
To make the ordering of the collectors more consistent
and predictable, the code that builds that list is now
driven by the order of collectors in default_collectors.py,
and the rest of the code tries to preserve it.
* Flip the loops when building collector names
Iterate over the ordered default_collectors list,
selecting collectors for the final list in order, instead
of driving it from the unordered collector_names set.
This lets the list returned by select_collector_classes
stay in the same order as default_collectors.collectors.
For collectors that have implicit deps on other fact collectors,
the default collectors can be ordered so those come early.
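A minimal sketch of the flipped selection loop, with made-up collector names; driving the iteration from the ordered default list keeps the selected collectors in that same, stable order:

```python
# Illustrative names only -- not the real default_collectors contents.
default_collectors = ['PlatformFactCollector', 'DistributionFactCollector',
                      'ServiceMgrFactCollector', 'HardwareCollector']
collector_names = {'ServiceMgrFactCollector', 'PlatformFactCollector'}  # unordered set

# Before (order driven by the unordered set, so results could vary run to run):
# selected = [name for name in collector_names if name in default_collectors]

# After (order driven by the ordered default list):
selected = [name for name in default_collectors if name in collector_names]
print(selected)  # ['PlatformFactCollector', 'ServiceMgrFactCollector'] -- stable order
```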
* default_collectors.py now uses a handful of sub-lists of
collectors that can be ordered in default_collectors.collectors.
fixes #30753
fixes #30623
* Return correct changed status when EIP is reused
When reusing an existing EIP, the changed status
should be False, not True.
* If public_ip is given and it exists, return it
Ensure EIP allocation returns existing public_ip correctly
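A small sketch of the intended changed semantics, using hypothetical helpers (find_address, allocate_address); allocation only happens when nothing matches, and changed reports exactly that:

```python
# Hypothetical helpers for illustration only, not the module's actual code.
def ensure_eip(ec2, public_ip=None):
    existing = find_address(ec2, public_ip) if public_ip else None
    if existing:
        # Reusing an EIP that already exists: nothing changed.
        return dict(public_ip=existing['PublicIp'], changed=False)
    allocated = allocate_address(ec2)
    return dict(public_ip=allocated['PublicIp'], changed=True)
```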
* Added ecs_taskdefinition_facts module
* Expanding documentation
Now includes all possible return values
* Fixed boto dependency
* Converting results to snake case.
* Remove EcsTaskManager class, move to main()
Remove unnecessary `except` block
* Change botocore import method
Also make Profile exception message less redundant
* Changing case conversion of the results
Now converts only the root-level keys.
A commented-out version that would convert everything except
container_definitions is also included.
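An illustrative sketch of converting only the root-level keys; the helper and the sample task definition are made up, not the module's code:

```python
import re


def camel_to_snake(name):
    # e.g. 'containerDefinitions' -> 'container_definitions'
    return re.sub(r'([a-z0-9])([A-Z])', r'\1_\2', name).lower()


task_definition = {  # illustrative API response fragment
    'taskDefinitionArn': 'arn:aws:ecs:region:account:task-definition/demo:1',
    'containerDefinitions': [{'name': 'web', 'portMappings': [{'containerPort': 80}]}],
}

# Convert only the root-level keys; nested container definition keys keep
# their camelCase so they still match what ECS expects.
facts = dict((camel_to_snake(k), v) for k, v in task_definition.items())
print(sorted(facts))  # ['container_definitions', 'task_definition_arn']
```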
Avoid the following traceback, seen when running the ec2_ami tests on python3,
presumably because the return type of `map` is different between
python2 and python3.
```
Traceback (most recent call last):
File "/tmp/ansible_e44v27uj/ansible_module_ec2_snapshot_facts.py", line 242, in <module>
main()
File "/tmp/ansible_e44v27uj/ansible_module_ec2_snapshot_facts.py", line 238, in main
list_ec2_snapshots(connection, module)
File "/tmp/ansible_e44v27uj/ansible_module_ec2_snapshot_facts.py", line 193, in list_ec2_snapshots
snapshots = connection.describe_snapshots(SnapshotIds=snapshot_ids, OwnerIds=owner_ids, RestorableByUserIds=restorable_by_user_ids, Filters=filters)
File "/usr/local/lib/python3.5/dist-packages/botocore/client.py", line 312, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python3.5/dist-packages/botocore/client.py", line 575, in _make_api_call
api_params, operation_model, context=request_context)
File "/usr/local/lib/python3.5/dist-packages/botocore/client.py", line 630, in _convert_to_request_dict
api_params, operation_model)
File "/usr/local/lib/python3.5/dist-packages/botocore/validate.py", line 291, in serialize_to_request
raise ParamValidationError(report=report.generate_report())
botocore.exceptions.ParamValidationError: Parameter validation failed:
Invalid type for parameter OwnerIds, value: <map object at 0x7ff577511048>, type: <class 'map'>, valid types: <class 'list'>, <class 'tuple'>
```
https://github.com/ansible/ansible/pull/30435#issuecomment-330750498
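A minimal sketch of the usual fix for this failure: materialize each map into a list before calling botocore. The describe_snapshots call mirrors the traceback; the module.params keys and surrounding objects are assumptions:

```python
# On Python 3, map() returns a lazy iterator, which botocore rejects for
# list-typed parameters; wrapping it in list() restores the Python 2 behaviour.
snapshot_ids = list(map(str, module.params.get('snapshot_ids') or []))
owner_ids = list(map(str, module.params.get('owner_ids') or []))
restorable_by_user_ids = list(map(str, module.params.get('restorable_by_user_ids') or []))

snapshots = connection.describe_snapshots(SnapshotIds=snapshot_ids,
                                           OwnerIds=owner_ids,
                                           RestorableByUserIds=restorable_by_user_ids,
                                           Filters=filters)
```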
* fixed ansible/git invocation options
now falls back to using localhost, as 'all' no longer accidentally
includes the implicit localhost
fixes #30636
* better fix
* qfq9
* Save the serialized values instead of their types
* Add tests for creating and modifying VMs without using a template
* Remove blank line
* Add tests for vm deletion
In Python 2, str gives a byte string. In Python 3 it gives a unicode string, so it
can't be written to a file opened in binary mode.
Use the to_bytes helper function to ensure the content being written will be
properly encoded on both Python 2 and Python 3.
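A small sketch of the pattern described, using the to_bytes helper from module_utils; the path and content are illustrative:

```python
from ansible.module_utils._text import to_bytes

content = u'line one\nline two\n'  # may be unicode on both Python 2 and 3

# str(content) would be bytes on Python 2 but unicode on Python 3, which a
# binary-mode file rejects; to_bytes() yields bytes on both.
with open('/tmp/example.out', 'wb') as f:  # illustrative path
    f.write(to_bytes(content, errors='surrogate_or_strict'))
```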
* Adds ipa_dnszone
* Use new copyright/gpl notice
* Update metadata version
* Use native error handling
* Fix boilerplate
* Remove default false
* Use localhost
* Should be 2.5