This adds a must_exist option to the service module, which gives callers the
ability to be tolerant of services that do not exist. This allows for
opportunistic manipulation of a list of services if they happen to exist on the
host. While failed_when could be used, it's difficult to track all the
different error strings that might come from various service tools regarding a
missing service.
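A minimal sketch of the intended usage (option name taken from this commit; exact task syntax assumed):
```yaml
# Stop these services only if they happen to exist on the host.
- name: opportunistically stop services
  service: name={{ item }} state=stopped must_exist=no
  with_items:
    - httpd
    - nginx
```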
removing policy if enabled is 'no'
adding sanity checks
removing debugging
check if policy exists before deleting
updating version_added to 2.0
adding stickiness support to ec2_elb_lb.py (squashed commit)
* Fix docs to specify when python2.6+ is required (due to a library
dep). This helps us know when it is okay to use python2.6+ syntax in
the file.
* remove BabyJson returns. See #1211. This commit fixes all but the
openstack modules.
* Use if __name__ == '__main__' to only run the main part of the module
if the module is run as a program. This allows for the potential to
unittest the code later.
Context: I recently discovered that when setting a fact, key=value pairs and complex arguments differ in how the fact is stored. For example, when complex data is passed as key=value pairs, the result can be stored as a unicode string as opposed to an object/list/etc.
I'm hoping the updated example will better demonstrate to people that complex arguments should be used instead of key=value pairs in certain situations.
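An illustrative contrast (not the documentation example referenced above; values are placeholders):
```yaml
# key=value form: the fact may end up stored as the unicode string "[80,443]"
- set_fact: my_ports=[80,443]

# complex-argument form: the fact keeps its list type
- set_fact:
    my_ports: [80, 443]
```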
If an EC2 instance is already associated with an EIP address, we use
that, rather than allocating a new EIP address and associating it with
the instance.
Fixes #35.
Update/fix to Support specifying cidr_ip as a list
Unicode isn't compatible with python2, so we needed some other
solution to this problem. The simplest approach: if the ip item
isn't already a list, simply convert it to one, and we're done.
Thanks to @mspiegle for this suggestion.
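A sketch of the resulting usage, assuming the rules of the ec2_group module are the consumer of cidr_ip:
```yaml
- ec2_group:
    name: webservers
    description: web access
    rules:
      - proto: tcp
        from_port: 443
        to_port: 443
        cidr_ip:
          - 10.0.0.0/8
          - 172.16.0.0/12
```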
Remove `USAGE` from the `VALID_PRIVS` dict for both database and
table because it is not a valid privilege for either (and it
breaks the implementation of `has_table_privilege` and
`has_database_privilege`).
See http://www.postgresql.org/docs/9.0/static/sql-grant.html
For read-only databases, users should not be altered when no changes
are required.
Don't issue ALTER ROLE when role attribute flags, the user's password,
or the expiry time are not changing.
In certain cases (hashed passwords in the DB, but the password
argument is not hashed) passlib.hash is required to avoid
running ALTER ROLE.
The default value set by the module was None for the
config_file parameter, which propagates into the connect method
call, overriding the stated default in the method.
Instead, the default should be set within the parameter
specification so the file check is not asked to check None.
Do not attempt to attach an already attached volume.
Likewise, do not attempt to detach a volume that is not
attached.
This version adds support for check mode.
The ordering of disabling/enabling yum repositories matters, and
the yum module was mixing and matching the order. Specifically,
when yum-utils isn't installed, the codepath which uses the yum
python module was incorrectly ordering enabling and disabling.
The preferred order is to disable repositories and then enable them
to prevent clobbering. This was previously discussed in
ansible/ansible#5255 and incompletely addressed in 0cca4a3.
When subscribing a system with an activationkey, it seems (sometimes?)
required to pass the "--org <number>" parameter to subscription-manager.
Activation Keys can be created through the Red Hat Customer Portal, and
a subscription can be attached to those. This makes it easy to register
systems without passing username/passwords around.
The organisation ID can be retrieved by executing the following command
on a registered system (*not* the account number):
# subscription-manager identity
URL: https://access.redhat.com/management/activation_keys
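For illustration, a task along these lines (parameter names assumed, not taken from this commit; values are placeholders):
```yaml
- redhat_subscription:
    state: present
    activationkey: mykey
    org_id: 1234567
```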
Signed-off-by: Niels de Vos <ndevos@redhat.com>
Reviewed-by: Ken Dreyer <kdreyer@redhat.com>
Includes commits for:
* Don't return change if the password is not set
* Set the group to nogroup if none is specified
* Set an uid if none is specified
* Test if SHADOWFILE is set (for Darwin)
* remove unused uid
Prior to this commit, Ansible would pass '--activationkeys <value>' as a
literal string, which the remote server would interpret as a single
argument to subscription-manager.
This led to the following failure message when using an activation key:
subscription-manager: error: no such option: --activationkey "mykey"
Update the arguments so that the remote server will properly interpret
them as two separate values.
boto's rds2 renamed `vpc_security_groups` to `vpc_security_group_ids`
and changed from a list of `VPCSecurityGroupMembership` to just a
list of ids. This accommodates that change when rds2 is being used.
Upstart scripts are being incorrectly identified as SysV init scripts
due to a logic error in the `service` module.
Because upstart uses multiple commands (`/sbin/start`, `/sbin/stop`,
etc.) for managing service state, the codepath for upstart sets
`self.svc_cmd` to an empty string on line 451.
Empty strings are considered a non-truthy value in Python, so
conditionals which are checking the state of `self.svc_cmd` should
explicitly compare it to `None` to avoid overlooking the fact that
the service may be controlled by an upstart script.
Some places ([AWS RDS](https://forums.aws.amazon.com/thread.jspa?threadID=151248)) don't have, or don't allow, access to the `pg_authid` table. The only reason that is necessary is to check for a password change.
This flag is a workaround so passwords can only be set at creation time. It isn't as elegant as changing the password down the line, but it fixes the longstanding issue #297 that prevented this from being useful on AWS RDS.
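An illustrative task, assuming the workaround flag is exposed as `no_password_changes`:
```yaml
- postgresql_user:
    name: appuser
    password: "{{ initial_password }}"
    no_password_changes: yes
```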
body_format is a new optional argument that enables handling of JSON or
YAML serialization format for the body argument.
When set to either 'json' or 'yaml', the body argument can be a dict or list.
The body will be encoded, and the Content-Type HTTP header will be set,
according to the body_format argument.
Example:
- name: Facette - Create memory graph
  uri:
    method: POST
    url: http://facette/api/v1/library/graphs
    status_code: 201
    body_format: json
    body:
      name: "{{ ansible_fqdn }} - Memory usage"
      attributes:
        Source: "{{ ansible_fqdn }}"
        link: "1947a490-8ac6-4bf2-47c1-ff74272f8b32"
The return code of "service pf onestatus" is usually zero on FreeBSD (tested with FreeBSD 10.0), even if pf is not running. So the service module always thinks that pf is running, even when it needs to be started.
I tried a playbook with the following (accidentally wrong) task:
tasks:
- name: authorized key test
authorized_key: key=/home/sam/.ssh/id_rsa.pub key_options='command="/foo/bar"' user=sam
I got the following traceback:
TASK: [authorized key test] ***************************************************
failed: [localhost] => {"failed": true, "parsed": false}
Traceback (most recent call last):
File "/home/sam/.ansible/tmp/ansible-tmp-1427110003.65-277897441194582/authorized_key", line 2515, in <module>
main()
File "/home/sam/.ansible/tmp/ansible-tmp-1427110003.65-277897441194582/authorized_key", line 460, in main
results = enforce_state(module, module.params)
File "/home/sam/.ansible/tmp/ansible-tmp-1427110003.65-277897441194582/authorized_key", line 385, in enforce_state
parsed_new_key = (parsed_new_key[0], parsed_new_key[1], parsed_options, parsed_new_key[3])
TypeError: 'NoneType' object has no attribute '__getitem__'
With this fix, I see the expected error instead:
TASK: [authorized key test] ***************************************************
failed: [localhost] => {"failed": true}
msg: invalid key specified: /home/sam/.ssh/id_rsa.pub
In cases when the python-apt package is not installed, ansible will
attempt to install it. After this attempt, it tries to import the
needed apt modules, but forgets to import the apt.debfile module.
The result is that playbooks that use the dpkg argument on a machine
that does not initially have the python-apt package available will
fail with the following error
AttributeError: 'module' object has no attribute 'debfile'
This patch adds the appropriate import to the apt module to ensure
that necessary libraries are available in cases when the dpkg argument
is being used on a system that does not initially have the python-apt
package installed
The following cases work for me now:
- Create new ASG with tags
- Update tags on ASG (create/change/delete)
In short, the module should now work as expected
wrt tagging. The previous code did not work at all
with latest boto for me (serialization errors) and
the logic was buggy anyway; e.g. removed tags
would never get deleted from ec2.
This will account for settings that are provided by the hierarchy of
Dockerfiles used to construct your image, rather than only accounting
for settings provided to the module directly.
This allows setting the pid namespace for a container. Currently only
the 'host' pid namespace is supported.
This requires Docker 1.4.1 and docker-py 1.0.0
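A sketch of the intended usage, assuming the option is surfaced as `pid` on the docker module:
```yaml
- docker:
    name: net-monitor
    image: busybox
    pid: host
```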
Organize each state into a distinct function for readability and composability.
Rework `present` to create but not start containers. Add a `restarted` state
to unconditionally restart a container and a `reloaded` state to restart a
container if and only if its configuration is incorrect. Store our most recent
knowledge about container states in a ContainerSet object. Improve the value
registered by this task to include not only the inspect data from any changed
containers, but also action counters in their native form, a summary message
for all actions taken, and a `reload_reasons` key to store a human-readable
diagnostic to determine why each container was reloaded.
Don't pass the volumes_from argument to the Docker create_container method.
If the volumes_from argument is passed to the create_container method, Docker
raises the following exception:
docker.errors.DockerException: 'volumes_from' parameter has no effect on
create_container(). It has been moved to start()
Consider the following response body (content) of a REST/JSON webservice containing escaped quotation marks:
```json
{ "key": "\"works\"" }
```
Decoding this string other than as raw will lose the backslash that serves as the JSON escape; json.loads will then fail to parse it.
Inspired by [this thread](https://groups.google.com/forum/#!topic/ansible-project/kymtiloDme4) on the mailing list and the following python shell code:
```python
import json
string=r'{ "key": "\"works\"" }'
json.loads(string)
json.loads(string.decode('raw_unicode_escape'))
json.loads(string.decode('unicode_escape'))
```
Ports are integer values but the old code was assuming they were
strings. When login_port is put into playbook complex_args as an
integer, the code would fail. This update makes the argument
validation ensure we have an integer, and then we can send that value
directly to the relevant APIs.
Fixes #818
If insertbefore/insertafter didn't match anything, the lineinfile module was doing nothing, instead of adding the line at the end of the file as it's supposed to.
There are a completely new set of modules that do all of the things like
keystone v3 and auth_plugins and the like correctly. Structurally
upgrading these would have been massively disruptive and there is no
real good way to do so without breaking people.
These modules should be kept around for several releases - they still
work for people - and they should get bug fixes. But they should not
take new features. New features should go to the os_ modules.
When using the "creates" option with the uri module, set changed
to False if the file already exists. This behavior is consistent with
other modules which use "creates", such as command and shell.
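For example (an illustrative task; the URL and path are placeholders):
```yaml
- uri:
    url: http://localhost:8080/api/install
    method: POST
    creates: /etc/myapp/installed
```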
Having read the doc for this module several times and completely missing that it can be used for existing remote archives, I propose this update to the wording to make clear from the top the two ways in which this module can be used.
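A sketch of the second way (an archive already present on the remote host), assuming the historical `copy: no` switch:
```yaml
- unarchive:
    src: /tmp/release.tar.gz
    dest: /opt/app
    copy: no
```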
ansible-doc expects the value of the description field to be a list,
otherwise the output is not correct. This patch updates the flat
description to be a list.
This is a further fix for: https://github.com/ansible/ansible/issues/9092
when the relative path contains a subdirectory. Like:
ansible localhost -m copy -a 'src=/etc/group dest=foo/bar/'
Add a word boundary \b to the regexp for checking the output of a2{en,dis}mod,
to avoid a false positive for a module that ends with the same text as the
module we're working on.
For example, the previous regexp r'.*spam already enabled' would also match
against 'eggs_spam already enabled'.
Also, get rid of the redundant '.*' from the end of the regexp.
This small change corrects behavior when one uses an .rsync-filter file to exclude some paths from both being transferred and being deleted, so that these excluded paths can be handled separately with different tasks (e.g. in order to deploy the excluded paths independently from the rest paths and notify handlers appropriately). The problem with the double -FF option is that it excludes the .rsync-filter file from being transferred to the receiver. However, deletions are done on the side of the receiver, so it is absolutely necessary the .rsync-filter file to be transferred to the receiver, so that the receiver knows what files to delete and what not to delete.
The problem was introduced in commit f5789e8e. 'tenancy' is a parameter of
ec2.run_instances, but not in ec2.request_spot_instances. So it was breaking
the support for spot requests.
This option allows the module to ensure that ONLY the specified keys
exist in the authorized_keys file. All others will be removed. This is
quite useful when rotating keys and ensuring no other key will be
accepted.
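A hedged example, assuming the option is named `exclusive`:
```yaml
- authorized_key:
    user: deploy
    key: "{{ lookup('file', 'files/deploy_new.pub') }}"
    exclusive: yes
```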
As stated in #423, the commit 7f11c3d broke ec2 spot instance launching
after 1.7.2. This is because it acts on the 'res' variable, which has 2
different types in the method, and in case we request spot instances,
the resulting object is not a result of ec2.run_instances() but
ec2.request_spot_instances(). Actually this fix doesn't seem to be
relevant in the spot instances case, because by construction we won't
retrieve 'terminated' instances in the end.
There is no call to yum_base using the 'cachedir' argument, so
while it works fine from a cursory look, it's useless code
and should be removed to clarify the code.
Using the rpm module prevents an unneeded fork, and permits
skipping the signature checking, which slows down the
operation a bit and would be done by yum on installation
anyway.
The default is changed from 'yes' to 'no' to follow
subversion behavior (i.e., requiring explicit confirmation
to erase an existing repository). Since that was not working before
(cf. #370) and since the option was previously ignored and unused, this
should be safe to change.
We need to handle the string returned by 'default' in the same way we handle
the string returned by 'status' since the resulting flags are compared later.
If you try to set rwX permissions, ACL fails to set them at all.
Expected:
$ sudo setfacl -m 'group::rwX' www
...
drwxrwxr-x 2 root root 4096 Nov 10 17:09 www
With Ansible:
acl: name=/var/www permissions=rwX etype=group state=present
...
drwxrw-r-x 2 root root 4096 Nov 10 17:30 www
x for group is erased. =/
* Use the newly added 'default' argument to know if the default flags are set
or not.
* Handle that 'status' may either return flags or YES/NO.
* Centralize flag handling logic.
* Set action variable after check if we need to keep going.
Big thanks to @ajacoutot for implementing the rcctl 'default' argument.
Removes the default value from ec2_lc so it can create launch configurations valid in an EC2-Classic environment. The AWS API will not accept assign_public_ip when creating an ASG outside of a VPC.
The default is not very useful for sorting between different
keys and users. Adding the hostname to the comment permits sorting
them later if you start to reuse the key and set it on different
servers. See https://github.com/ansible/ansible/pull/7420
for the rationale.
This argument may be used to fetch additional refs beyond the default
refs/heads/* and refs/tags/*. Checking out GitHub pull requests or Gerrit
patch sets are two examples where this is useful.
Without this, specifying version=<sha1> with a SHA1 unreachable from any
tag or branch can't work.
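An illustrative task for checking out a GitHub pull request, assuming the new argument is called `refspec` (values are placeholders):
```yaml
- git:
    repo: https://github.com/example/project.git
    dest: /srv/project
    refspec: '+refs/pull/*:refs/remotes/origin/pull/*'
    version: "{{ pr_commit_sha }}"
```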
De-duplicate repetitive code checking the exit code.
Include the stdout/stderr of the failed process in all cases.
Remove the returned values because no caller uses them.
Combine git commands where possible. There is no need to fetch branches
and tags as two separate operations.
Starting from docker-py>=0.5.0 it is impossible to work with private registries based on HTTP.
So we need an additional parameter to allow pulls from an insecure registry.
Related to ansible/ansible#9111
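A sketch of the usage, assuming the parameter is exposed as `insecure_registry`:
```yaml
- docker:
    name: myapp
    image: registry.internal:5000/myapp:latest
    insecure_registry: yes
```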
Make use of improved connect_to_aws that throws an exception
if a region can't be connected to (e.g. eu-central-1 requires
boto 2.34 onwards)
Add eu-central-1 to the two modules that hardcode their regions
Add us-gov-west-1 to ec2_ami_search to match documentation!
This pull request makes use of the changes in ansible/ansible#9419
The AIX class uses an unsafe shell for setting the user password (the command contains a pipe). This patch adapts to the new behavior of module_utils/basic.py (since somewhere around 1.7).
Besides that, it changes the quotes for the echo command from double to single, because password hashes contain $ signs and we would not want those expanded as variables.
Allow passing the database option to the django_manage module for migrations. This is useful in situations where multiple databases are used by a django application.
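For example (the path and database alias are placeholders):
```yaml
- django_manage:
    command: migrate
    app_path: /srv/myapp
    database: reporting
```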
In particular, if `rsync` is not installed on the remote machine the following error message will be encountered:
"rsync error: remote command not found"
AWS does not recognize the subnet if it is presented in a comma-delimited format with spaces. You must remove the space for Amazon to recognize the second subnet.
The default value is 'no' instead of the currently documented 'yes'.
See cloud/openstack/nova_compute.py line 543:
auto_floating_ip = dict(default=False, type='bool'),
This can be tested with this command:
ansible -c local -m copy -a 'src=/etc/group dest=foo/' all
This is a corner case of the algorithm used to decide how we should
copy a folder recursively, and this commit detects and avoids it.
Check https://github.com/ansible/ansible/issues/9092 for the story
* Make the module support enable/disable of special services like pf via rcctl.
Idea and method from @jarmani.
* Make the module handle when the user supplied 'arguments' variable does not
match the current flags in rc.conf.local.
* Update description now that the code tries to use rcctl for everything if it
is available.
Based on input from @jarmani:
* A return value of 2 now means a service does not exist. Instead of
trying to handle the different meanings of rc after running "status",
just look at stderr to know if something failed.
* Skip looking at stdout to make the code cleaner. Any errors should
turn up on stderr.
Using rds2 allows tags and control over whether or not DBs are
publicly accessible.
Move RDS towards a pair of interfaces implementing the details of rds
and rds2
Added tests to ensure that all operations work correctly as well as
requirements files that allow virtualenvs to test either boto.rds or
boto.rds2
Yum does not always update to the latest package version unless the metadata cache has expired. By running yum makecache, we ensure the metadata cache has been updated.
Signed-off-by: René Moser <mail@renemoser.net>
Or is "rules_egree" supposed to be a plural? The sentence is difficult to parse.
Maybe the correct fix is to "Purge existing rules on security group that are not found in rules_egress"?
Without this fix, _get_flavor_id() fails to find a matching flavor if
both:
* the flavor_ram parameter is specified
* the first flavor in the list does not match.
The bug is simply that the module.fail_json() call lies within the loop
iterating through the flavors. This call should only be made if the
loop completes and no matching flavors have been found.
Without this patch, ansible-doc was failing this way:
$ ansible-doc nova_compute
Traceback (most recent call last):
File "/home/francois/WORK/dev/ansible/bin/ansible-doc", line 324, in <module>
main()
File "/home/francois/WORK/dev/ansible/bin/ansible-doc", line 316, in main
text += get_man_text(doc)
File "/home/francois/WORK/dev/ansible/bin/ansible-doc", line 112, in get_man_text
desc = " ".join(opt['description'])
KeyError: 'description'
Document the wait and wait_timeout params for ec2_snapshot.
This is important because snapshots can take a long time to complete,
and the module defaults to wait=yes.
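For instance (the volume id is a placeholder):
```yaml
- ec2_snapshot:
    volume_id: vol-0123456789abcdef0
    description: nightly backup
    wait: yes
    wait_timeout: 3600
```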
Encouraging users to use this Ansible module to enable SELinux seems
like a better idea. It also warms Dan Walsh's heart.
Signed-off-by: Major Hayden <major@mhtx.net>
Allows the user to decide whether git submodules should track branches/tags or track the commit hashes defined in the superproject.
Add a track_branches parameter to the git module.
Defaults to the track-branches behavior.
This adds back the change to the network_cli plugin. This change adds
the ensure_connect decorator to the open_shell() method to make sure
the connection is valid before trying to open a shell.
The issue was due to the addition of the decorator that will call
_connect() when there is no connection. The _connect() method should
have been mocked in the test case. This commit fixes the test
case as well
Change was originally reverted in c414ded69a
* make hash_params more robust in the face of many corner cases
Fixes #18680
Alternative fix to #18681
* add test case for role.hash_params
* Add role.hash_params test for more types
A set, a generator/iterable, and a Container that
is not Iterable.
* removes superfluous timeout kwargs from open_shell()
* cleans up play_context become check
* adds check for ssh session and calls _connect() if needed
* Moved JSON-RPC client IPAClient class to ansible.module_utils.ipa, which is extended by all ipa modules
* IPAClient: Changed to 2-clause BSD license
* IPAClient (lines 37-39): Added some additional imports for use with Python 3
* IPAClient (line 41): Explicitly extend Python base object
* IPAClient (line 57): Properly URL quoted the username/password form data as per https://www.w3.org/TR/html401/interact/forms.html#h-17.13.4.1
* IPAClient (line 62): Data should be bytes or bytearray in Python 3 (still str in Python 2)
* IPAClient (line 65): Print error message, not returned body
* IPAClient (line 70): getheader() is not present in Python 3 version of HTTPMessage; get() is present in both Python 2/3
* IPAClient (line 88): Convert form data to bytes for Python 3 again
* IPAClient (line 91): Print error message, not returned body
* IPAClient (line 96-104): json.loads() requires a string; HTTPResponse.read() returns bytes in Python 3 and str in Python 2, so decode the bytes/string using the HTTPResponse returned charset (default to 'latin-1')
* Add author/copyright notice
* Add RHEV host detection support
This adds RHEV host detection support based on a running 'vdsm' process and the existence of _/rhev/_ (which are both part of the vdsm RPM package in a RHEV installation). Without this change, a RHEV host would be reported as a kvm host (which is also true, but often not specific enough).
This closes #17058
* Only scan the process list when we determined /rhev/ exists
Small performance improvement, so we do not have to scan the process list if /rhev/ does not exist.
* fixed detection of ansible_virtualization_(role|path) facts for VMs running in
OpenStack instances
Fixes #15165
* NOTE: this will break detection of ansible_virtualization_(role|path) facts
if you are using OpenStack instances with nested virtualization
The parsing methods try as hard as possible to generate meaningful error messages that are all ignored and immediately overwritten by a new AnsibleError instance. Better use the original one instead.
Commit ec2521f intended to fix the scp command to fetch files
from a remote machine but it has src and dest swapped.
This change correctly treats src as the location in the remote machine
and dest as the location in the local machine.
Signed-off-by: Alberto Murillo Silva <alberto.murillo.silva@intel.com>
This updates the network_cli connection plugin to attempt to automatically
determine the remote device os. The device network os discovery can
be overridden by setting the ansible_network_os value.
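Overriding the discovery might look like this in host variables (values illustrative):
```yaml
# host_vars/core-sw01.yml
ansible_network_os: ios
```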
* sends the serialized play_context into an already established connection
* hooks the alarm_handler() method in the connection plugin if it exists
* added configuration options for connect interval and retries
* adds syslog logging to Server() instance
This update will send the updated play_context back into an already
established connection in case privilege escalation / de-escalation activities
need to be performed. This change will also hook the alarm_handler() method
in the connection instance (if available) and call it when a
SIGALRM is raised.
This update adds two new configuration options
* PERSISTENT_CONNECT_INTERVAL - time to wait in between connection attempts
* PERSISTENT_CONNECT_RETRIES - max number of retries
* Implement docker support for synchronize module.
Note: you need rsync installed in your docker container.
Have a look at https://github.com/ansible/ansible/issues/16306 for more details.
Support Ansible options for remote access.
* Pass the user name to the docker command.
* log on target based on nolog, not verbosity
fixes #18569
* initialize module name
removing verbosity exposed a missing name at certain stages; initialize it to the file name
and update later once module args are parsed
* Fix regression in jinja2 include search path
Since commit 3c39bb5, the 'ansible_search_path' variable is used to set
jinja2's search path for {% include %} directives. However, this path is
the the proper one because our templates live in 'templates' subdirs in
our search path.
This is a regression because previously, our include search path would
include the dirname of the currently interpreted file, which worked most
of the time.
fixes #18526
* Fix template lookup search path
Improve the fix in commit c96c853 so that the search path contains both
template-suffixed paths as well as the original paths.
ref PR #18617
* Add integration test for template lookups
Tests regression at #18526
This test fails on current devel branch and succeeds on PR #18617
Some machines have system clocks which can fall behind (for instance,
a host without a CMOS battery like Raspberry Pi). When managing those
machines we have to workaround the fact that the zip format does not
handle file timestamps before 1980. The workaround is to substitute in
the timestamp from the controller instead of from the managed machine.
Fixes #18640
* Make sure include_role inherit variables from parent role
Setting the parent of task blocks generated by include_role after they
have been produced is not sufficient - it means the tasks don't have the
correct dependency chain set afterwards, and therefore, don't properly
inherit variables from outer roles.
In addition to manually setting the parents, pass the dep_chain when
compiling the role, such that variables are correctly imported.
Fixes #18540.
* Add tests for include_role
* Fix include_role variable inheritance for multiple parent levels
Commit 8b08a28c89 removed a
call to get_exception() that was needed. Without it, the fail_json
references an undefined variable ('exception') and throws an exception.
Add the get_exception() back in where needed and update references.
Now the proper module failure is returned.
Fixes #18628
* adds new connection plugin `network_cli` which builds on paramiko
* adds new plugin `terminal` used for manipulating network_cli terminals
* adds new field to play_context `network_os` settable as ansible_network_os
This commit adds the plugins necessary to establish a persistent cli connection
to network devices over ssh. It builds on the paramiko connection plugin
to create a shell environment that will persist through ansible-connection.
The `network_cli` plugin then uses the network_os in the instance of
PlayContext to load the appropriate network OS environment plugin for
handling opening and closing of shells as well as privilege escalation.
* updates paramiko_ssh to auto add keys
* updates constants with new config options
This commit adds a new feature that will allow paramiko to automatically
accept and save a host ssh key. This feature is controlled by the
`host_key_auto_add` config setting in the paramiko section. The default
is False to maintain current functionality. It also includes a new
setting `look_for_keys` with the default to False for maintaining current the
current setting.
The timeout param was exposed to the socket connection but was not
enforced for commands. This update will now cause a command to timeout
based on the module parameter.
Fetch module uses fetch_file() from plugin/connection/ssh.py to
retrieve files from the remote hosts, which in turn uses
_file_transport_command(self, in_path, out_path, sftp_action) with
sftp_action = 'get'.
When using scp rather than sftp, the sftp_action variable is not used
and the scp command is formed in a way that the file is always
sent to the remote machine.
This patch fixes _file_transport_command() to correctly form the scp command,
swapping src and dest if sftp_action is 'get'.
Bug introduced at 8e47b9b. Fixes #18603
Signed-off-by: Alberto Murillo Silva <alberto.murillo.silva@intel.com>
When determining which getter style to use for the object in question,
the BaseMeta class should look at both dict's to try and locate the method.
Fixes #18522
For setfacl on Solaris we need to specify permissions like r-x.
For chmod, we need to specify them as rx (r-x means to make the file
readable and *not* executable)
For solaris, add get_dmi_facts to get the product_name fact, and update memtotal_mb to an integer for consistency.
For hp-ux, use machinfo to get the product_serial fact.
The _fixup_perms2 method checks to see if the user that is being sudo'd
is an unprivileged user or root. If it is an unprivileged user, some
checks are done to see if becoming this user would lock the ssh user out
of temp files, among other things. If this check fails, an error prints
telling the user to check the documentation for becoming an unprivileged
user.
On some systems, the stderr prints out the unprivileged user the ssh
user was trying to become contained in smartquotes. These quotes aren't
in the ASCII range, and so when we're trying to call `str.format()` to
combine the stderr message with the error text we get a
UnicodeEncodeError as python can't coerce the smartquotes using the
system default encoding. By calling `to_native()` on the error message
we can ensure that the error message is a native string for the
`Exception` handling, as `Exception` messages need to be native strings
to avoid errors (byte strings in python2, and text strings in python3)
Fixes: #18444
Previously, the Conditional class did a simple check when an
AnsibleUndefinedVariable error was raised to see if certain strings were
present. This patch tries to be smarter by evaluating the variable contained
in the error string and comparing it to the defined/not defined conditionals in
the conditional string.
This also modifies the UndefinedError message from HostVars slightly to
match the format returned by jinja2 in general, making it easier to match the
error message in the Conditional code.
Fixes #18514
VMs in a VPC and not in a VPC can have an identical name. As a result, VMs in a VPC must be filtered out if no VPC is given.
Due to an API limitation, the only way is to check whether the network of the VM is in a VPC.
With 2.0, we decided to create a special list of param names which were
taken out of the role data structure and stored as params instead (connection,
port, and remote_user). This causes problems with inheritance of these params,
so we are now deprecating that while also keeping those keys in the ds so they
are brought in as attributes on the Role correctly.
Fixes #17395
* Moved the _inventory.clear_group_dict_cache() call from the creation of a group which doesn't exist to the addition of members to the group.
* Update __init__.py
Updated to use a changed: block to catch all changes for the cache clear, as suggested