This update adds exception handling to catch errors when trying to parse
command output as JSON. It also removes the dependency on importing json,
opting to use the AnsibleModule methods instead
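A minimal sketch of the pattern described, where `module` is the AnsibleModule instance and the helper name is illustrative:

```python
def parse_output(module, out):
    # Use the AnsibleModule helper instead of importing json directly,
    # and fail cleanly if the CLI returned something that is not JSON.
    try:
        return module.from_json(out)
    except ValueError as exc:
        module.fail_json(msg='unable to parse command output as JSON: %s' % str(exc))
```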
This commit adds a new module, ops_command, that handles executing commands
on OpenSwitch over the CLI. Since this module is designed to work with the
OpenSwitch CLI, it only supports the CLI transport option
This commit adds a new module, ops_config, that allows playbook designers
to create tasks for configuring OpenSwitch over the CLI. The module
is designed to work directly with configuration mode in OpenSwitch and
therefore only supports the CLI transport option
This commit addresses a bug in the ios_config module when using the
match: strict argument. When the argument is used, the module will
compare the configuration block the same as match: exact, which is not the
intended behavior. This commit updates the behavior to properly handle
the strict argument.
now uses atomic move to avoid data corruption
correctly cleans up temp files in every case
returns backup_file info if needed
validates before the temp file gets created
backup AFTER validate (see the sketch below)
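A rough sketch of one possible arrangement of these steps, using the standard AnsibleModule file helpers; names like `write_changes` and `content` are illustrative, and the actual module may run validation at a slightly different point:

```python
import os
import tempfile

def write_changes(module, content, dest):
    backup_file = None
    tmpfd, tmpfile = tempfile.mkstemp()
    try:
        with os.fdopen(tmpfd, 'w') as f:
            f.write(content)
        # validate first ...
        validate = module.params.get('validate')
        if validate:
            rc, out, err = module.run_command(validate % tmpfile)
            if rc != 0:
                module.fail_json(msg='failed to validate: rc=%s error=%s' % (rc, err))
        # ... back up only after validation succeeds ...
        if module.params.get('backup') and os.path.exists(dest):
            backup_file = module.backup_local(dest)
        # ... then move atomically into place.
        module.atomic_move(tmpfile, dest)
    finally:
        # the temp file is removed in every case, even on failure
        if os.path.exists(tmpfile):
            os.remove(tmpfile)
    return backup_file
```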
Without this change, a download failure may bail out with the message:
"Failure downloading http://foo/bar, 'NoneType' object has no attribute 'read'"
whereas with this fix, you'd get a proper error like:
"Failure downloading http://foo/bar, Request failed: <urlopen error [Errno 113] No route to host>"
or one of the many other possible download errors that can occur.
The returned list of diffs aims to simulate how a file system diff would
look before and after writing the sources list files.
![screenshot](http://i.imgur.com/dH6QXtY.png)
n.b. The ternary conditional is due to a failing integration test for
Python 2.4
This commit refactors the arguments used in ops_template to be strictly
typed and handled by the declarative/REST and CLI based configurations. It
also removes old arguments that are no longer supported and cleans up the
documentation strings
This mirrors a nearly identical change made to apt_repository.py.
Also removes the use of apt-get --force-yes as it can be dangerous
and should not be necessary (apt_repository.py does not use it).
Repeating the explanation from the apt_repository change below:
Since use_unsafe_shell is suspicious from a security point
of view (or it wouldn't be unsafe), the less we have, the less
code we have to thoroughly inspect for a security audit.
In this case, the '&&' can be replaced by doing 2 calls to run_command.
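A minimal sketch of the substitution, with placeholder commands (`cmd_one`, `cmd_two`) standing in for the real ones:

```python
def run_both(module):
    # before: module.run_command("cmd_one && cmd_two", use_unsafe_shell=True)
    # after: two plain run_command calls, no shell involved
    rc, out, err = module.run_command(['cmd_one', '--some-arg'])
    if rc == 0:
        rc, out, err = module.run_command(['cmd_two', '--other-arg'])
    return rc, out, err
```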
Running async_status in an "until: result.finished" loop will mask a module failure (e.g. a traceback) with a
template failure, because the fail dict doesn't include "finished" (e.g. you'll see "ERROR! The conditional check 'bogus_out.finished' failed. The error was: ERROR! error while evaluating conditional: bogus_out.finished ({% if bogus_out.finished %} True {% else %} False {% endif %}"). Because the failure dict still includes "failed: true",
this change has no effect on stoppage/failure reporting; it just prevents the common usage pattern from masking the underlying error message.
Since our validation does conversion as well as validation, I'm not sure
this is entirely correct. May need to take a look at our conversion
code and re-examine to be sure we're doing it right.
I like to use ~/somepath instead of absolute paths because
that's more shareable. Without expansion, the path wasn't
considered a file, and the resulting cloud-config user_data
contained a string for the file path instead of the file contents.
So, expand it.
restart_containers(containers.running) may try to restart containers
that are deleted when looping through get_differing_containers().
Fix this by refreshing the list after the first loop.
The ulimit will be specified as a list, with each entry's values separated
by colons. The hard limit is optional, in which case it is equal to the
soft limit. The ulimits are compared to the ulimits of the container and
added or adjusted accordingly by a reload.
The module ensures that ulimits are available in the capabilities
iff ulimits is passed as a parameter.
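A small sketch of the parsing described, where each entry looks like `name:soft[:hard]`; the helper name and the returned dict keys are illustrative:

```python
def parse_ulimit(entry):
    # "nofile:1024:2048" -> name, soft, hard; the hard limit is
    # optional and defaults to the soft limit when omitted.
    parts = entry.split(':')
    name = parts[0]
    soft = int(parts[1])
    hard = int(parts[2]) if len(parts) > 2 else soft
    return {'Name': name, 'Soft': soft, 'Hard': hard}

# e.g. parse_ulimit('nofile:1024') == {'Name': 'nofile', 'Soft': 1024, 'Hard': 1024}
```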
This adds a new module, iosxr_config, that can be used for configuring
Cisco IOS XR devices. It provides a set of arguments for sending
configuration commands to the device over cli
This adds a new module, iosxr_template, that can be used to template
configurations for IOS XR devices. Templates are then loaded into the
target device over cli
A change is coming to Ansible where module params will default to str.
Many of our modules were taking advantage of this by not being explicit
about the type, so they will break when that change merges. This hopefully
catches those cases.
- clarify docs on body_json behaviour
- only transform into json if body input is not a string (see the sketch below);
  users keep passing a json string and expecting it to not be jsonified again
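A minimal sketch of the check described, where `module` is the AnsibleModule instance and `body` is the relevant parameter:

```python
body = module.params.get('body')

# If the caller already passed a JSON string, send it as-is; only
# serialize dicts/lists so the payload is not double-encoded.
# (A real module would also need to handle unicode strings on Python 2.)
if body is not None and not isinstance(body, str):
    body = module.jsonify(body)
```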
- fixed issue with removes not handling path expansion correctly
- switched all path variables to type 'path' to handle expansions
This commit allows the connection information for
the vsphere_guest module to be provided as environment
variables, which makes it possible to use Cloud
Credentials from Ansible Tower in playbooks that utilize
vsphere_guest.
| ENV VAR | vsphere_guest param |
| --------------- | ---------------------- |
| VMWARE_HOST | vcenter_hostname |
| VMWARE_USER | username |
| VMWARE_PASSWORD | password |
This adds a new module, junos_config, used to configure Juniper JUNOS based
devices. The config module can be used to apply an ordered set of set and
delete statements over a cli transport
This adds a new module, junos_template, that can read in a template
config and push the changes to the device. It can also backup the
current config. This module is implemented over cli
This adds a new module, junos_command that can be used for sending commands
to Juniper JUNOS based devices. The junos_command module is implemented
over a cli transport
Fix the OpenStack os_server module for when region_name is specified.
This should not be passed through to the shade create_server() call
as it's only used with the auth parameters.
Fixes bug: https://github.com/ansible/ansible-modules-core/issues/2797
This change updates the return values from eos_config to be consistent with
all network config modules. It will now return updates and responses
from the module
This change updates the return values from eos_command to be consistent
with network modules. It now returns stdout, stdout_lines and failed_conditionals
This updates the nxos_template doc string to unify the return values
across all network modules. This change now returns stdout, stdout_lines
and failed_conditionals
This modifies the return values to make them consistent across all
network command modules. The module now returns stdout, stdout_lines
and failed_conditionals
- defaulted to commands not executing in check mode
- added force run for info gathering (for setting changed)
- added debug output for what would have been run in check mode
- added check mode support for spots that made changes using system calls instead of command
- removed now-redundant check mode checks
Failure mode is better now: if I missed anything, it will misreport the changed value
instead of the old default of actually making the change in check mode
The old method left settings in the environment. The new method takes
care of clearing them after use. In this module, the old method was
also setting the environment too late to affect all the command line
tools, which led to a bug.
Fixes https://github.com/ansible/ansible/issues/14264
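A rough sketch of the difference, with illustrative names (`ENV_SETTINGS` and `cmd` are assumptions, not the module's actual variables):

```python
import os

ENV_SETTINGS = {'LANG': 'C', 'LC_ALL': 'C'}

def run_with_env(module, cmd):
    # Set the variables before any command-line tools run, and always
    # clear them again afterwards instead of leaving them in the
    # environment for the rest of the process.
    saved = {k: os.environ.get(k) for k in ENV_SETTINGS}
    os.environ.update(ENV_SETTINGS)
    try:
        return module.run_command(cmd)
    finally:
        for key, value in saved.items():
            if value is None:
                os.environ.pop(key, None)
            else:
                os.environ[key] = value
```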
This addresses a bug in the eos_config module that would prevent it
from running properly. The module should now properly process the config
and the candidate
The eos_config module has a bug where it's trying to pass an argument
that doesn't exist. This fixes that problem, removing the offending
keyword argument
The eos_template module works by allowing configurations, templated with
the Ansible Jinja2 template engine, to be pushed to Arista EOS devices
The nxos_template module works by allowing configurations to be pushed
to Cisco NXOS devices over CLI or NXAPI and templated using the Ansible
Jinja2 template engine
This adds a new module nxos_command that can be used to send arbitrary
commands to NXOS devices. The module includes an argument that allows
the responses to be evaluated and causes the module not to return
control to the playbook until a set of conditions has been met.
Using the difflist feature added in ansible/ansible@c337293 we can add
two diffs to the `diff` dict returned as JSON: A `before` and `after` pair of
changed file contents and the diff of the file attributes.
n.b.: the difflist handling from the above commit is logically broken.
PR will follow.
Example output:
TASK [change line and mode] ************************************************************
changed: [localhost]
--- before: /tmp/sshd_config (content)
+++ after: /tmp/sshd_config (content)
@@ -65,21 +65,21 @@
X11DisplayOffset 10
PrintMotd no
PrintLastLog yes
TCPKeepAlive yes
#UseLogin no
#MaxStartups 10:30:60
#Banner /etc/issue.net
# Allow client to pass locale environment variables
-AcceptEnv LANG LC_*
+AcceptEnv LANG LC_* GF_ENV_*
Subsystem sftp /usr/lib/openssh/sftp-server
# Set this to 'yes' to enable PAM authentication, account processing,
# and session processing. If this is enabled, PAM authentication will
# be allowed through the ChallengeResponseAuthentication and
# PasswordAuthentication. Depending on your PAM configuration,
# PAM authentication via ChallengeResponseAuthentication may bypass
# the setting of "PermitRootLogin without-password".
# If you just want the PAM account and session checks to run without
--- before: /tmp/sshd_config (file attributes)
+++ after: /tmp/sshd_config (file attributes)
@@ -1,3 +1,3 @@
{
- "mode": "0700"
+ "mode": "0644"
}
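For reference, a rough sketch of the structure such a module result might take, assuming the conventional Ansible diff keys (`before`, `after`, and the `_header` variants); the exact keys here are an assumption:

```python
result = {
    'changed': True,
    'diff': [
        {   # diff of the changed file contents
            'before_header': '/tmp/sshd_config (content)',
            'after_header': '/tmp/sshd_config (content)',
            'before': 'AcceptEnv LANG LC_*\n',
            'after': 'AcceptEnv LANG LC_* GF_ENV_*\n',
        },
        {   # diff of the file attributes
            'before_header': '/tmp/sshd_config (file attributes)',
            'after_header': '/tmp/sshd_config (file attributes)',
            'before': '{\n  "mode": "0700"\n}\n',
            'after': '{\n  "mode": "0644"\n}\n',
        },
    ],
}
```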
This adds a new module eos_command to network/eos. The eos_command module
is used for sending arbitrary commands to Arista EOS devices. It includes
arguments that allow the module to wait for specific values before the
module returns control to the playbook or fails
This adds a new module for pushing configurations to eos devices in a
reliable and repeatable fashion. It includes support for templating
configurations and backing up the current config prior to pushing out
changes. This module works over either CLI or EAPI.
This PR has a dependency on ansible/ansible PR #14009 being merged
OCD is making me fix the inconsistency with how None is typed. First Letter Capitalized All Over Now.
cleaning up the default object that was created for the cache_security_groups and removing checks dealing with it.
clean up space
Changing default cache_security_groups from [default] to None.
This adds a new module for managing configuration files for Cisco NXOS
devices. It provides configuration file management including templating
and backing up the current configuration.
This PR has a dependency on ansible/ansible PR #14012
On systems with restrictive umasks, the pip module won't allow you to
install pip packages that are usable by everyone on the system. This
commit adds a umask option to optionally override the umask on a
per-package basis.
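A minimal sketch of how such an option might be applied around the install call; `umask` is the module parameter described above, everything else is illustrative:

```python
import os

def run_pip_with_umask(module, cmd):
    # Temporarily override the process umask for this install only,
    # restoring the original value afterwards.
    umask = module.params.get('umask')           # e.g. "0022"
    old_umask = os.umask(int(umask, 8)) if umask is not None else None
    try:
        return module.run_command(cmd)
    finally:
        if old_umask is not None:
            os.umask(old_umask)
```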
Since there is no shell escaping of the password parameter, a password with
a single quote (or even worse, a single quote and a pipe) could have
unintended consequences. Also, the less we use use_unsafe_shell=True, the
better.
As of Ansible 2.x, invocation of Django's ```manage.py``` requires a valid "shebang". Additionally, ```manage.py``` must be executable.
The old invocation was hardcoded as ```python manage.py ...``` while the new invocation is ```./manage.py ...```. See [this PR](https://github.com/ansible/ansible-modules-core/pull/1165).
This change allows more flexibility for which Python interpreter is invoked, but breaks existing deployment when ```manage.py``` is not properly configured. This documentation update adds a note explaining the new requirements for ```manage.py```.
Add the ability to completely delete a floating IP from the pool
when disassociating it from a server. When state is absent and
purge is true, the IP will be completely deleted. The default
keeps the current behavior, which is to only disassociate the IP
from the server.
The exception message, when shade fails, will contain much more
specific information about the failure if the exception is treated
as a string. The 'message' attribute alone is usually not helpful.
Support specifying an absolute path (typically /etc/crontab) rather than
a path relative to /etc/cron.d, to allow modifying the main system crontab.
Particularly useful for target systems that have /etc/crontab but no
/etc/cron.d.
Since use_unsafe_shell is suspicious from a security point
of view (or it wouldn't be unsafe), the less we have, the less
code we have to thoroughly inspect for a security audit.
In this case, the '&&' can be replaced by doing 2 calls to run_command.
Starting in Django 1.7, the createcachetable command looks for cache
table names in the CACHES settings dictionary, so cache_table is no
longer required, but is still allowed.
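For context, a minimal sketch of the Django settings the command reads in 1.7+ (the table name `my_cache_table` is just an example):

```python
# settings.py
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.db.DatabaseCache',
        'LOCATION': 'my_cache_table',   # createcachetable picks this up
    }
}
```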
Otherwise the CDN (Akamai) downloads the file without the headers. The sequence
is as follows:
1. Ansible uploads the file to CF.
2. Akamai downloads the file and caches it in the CDN.
3. Ansible sets the headers.
As a result, Akamai serves the file without headers.
This is a backwards-incompatible change, because header keys are not
prefixed with `x-object-meta-`, which allows the user to set headers like
`Access-Control-Allow-Origin`.
The command `hg up -C` by default moves to the latest revision on the
current branch. The `discard` function was trying to update to a
different branch, in case it was provided, by passing a `-r REVISION`
argument. Not only is this not the intended effect of the `discard`
function, but this also could update to a different branch that hasn't
been pulled yet, which is how we were experiencing trouble.
Instead, we unconditionally do `hg up -C -r .` to "update" to the
current revision (i.e. to "."), while `-C/--clean`ing the current
directory. This is similar to `hg revert --all`, except that it also
undoes the merge state of the working directory, in case there was
any.
Previously the module hard-coded the default logging driver. This meant that
if the docker daemon was started with a different logging driver, the ansible
module would continually restart the container when run.
This fix adds a call to docker.Client.info(), which is inspected when a logging
driver is not supplied in the playbook; the container is only restarted if
the logging driver applied differs from the configured default.
In usage, this has solved issues with using alternative logging drivers.
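A small sketch of the check described, assuming docker-py's `Client.info()` reports the daemon default under a `LoggingDriver` key (function and variable names are illustrative):

```python
from docker import Client

def logging_driver_changed(client, container_info, requested_driver):
    # Compare against the daemon's configured default instead of a
    # hard-coded one when no driver was given in the playbook.
    if requested_driver is None:
        requested_driver = client.info().get('LoggingDriver', 'json-file')
    return container_info['HostConfig']['LogConfig']['Type'] != requested_driver

# usage (illustrative):
# client = Client(base_url='unix://var/run/docker.sock')
# info = client.inspect_container('my_container')
# if logging_driver_changed(client, info, module.params.get('log_driver')):
#     ...restart the container...
```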
Fixes require ssl in combination with grant option
Refactoring: code cleanup to make it easier to understand
Code rewritten inspired by @willthames
Added WITH GRANT OPTION as exception; when only REQUIRESSL and/or GRANT are specified we have to add USAGE
Without this change, some trouble may occur when the "deb" parameter
is used, as the env vars controlling dpkg are not set. For example,
installing a package that requires user input will never finish since
DEBIAN_FRONTEND=noninteractive is not set.
So export the env vars in APT_ENV_VARS before running dpkg, as is done
when using apt-get/aptitude.
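A minimal sketch of the described behavior, assuming `APT_ENV_VARS` is the existing dict of dpkg/apt environment settings (the dict contents and function name below are illustrative):

```python
import os

APT_ENV_VARS = {
    'DEBIAN_FRONTEND': 'noninteractive',
    'DEBIAN_PRIORITY': 'critical',
}

def install_deb(module, deb_file):
    # Export the same env vars used for apt-get/aptitude before
    # invoking dpkg directly, so installs never prompt for input.
    for key, value in APT_ENV_VARS.items():
        os.environ[key] = value
    return module.run_command(['dpkg', '-i', deb_file])
```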
The module will now assume that if the argument is a string it is already
formatted as json, and will only try to convert non-strings into a json string.
Also removed unused 'msg' var declarations and the ifs that set them
Fixes #2009
Since we now have several exceptions to the assumption that the
result of the pull would be on the last status line returned by
docker-py's pull(), I've changed the function so that it looks
through the status lines and returns what it finds there.
Despite the repeated `break`s, the code seems simpler and a little
more coherent like this. From what I've checked using
`https://github.com/jlafon/ansible-profile`, the execution time is
mostly the same.
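A rough sketch of scanning the pull output rather than trusting only the final line (docker-py's streamed `pull()` yields one JSON object per status line; the status strings and return values here are illustrative):

```python
import json

def scan_pull_status(client, repo, tag):
    # Look through every status line instead of assuming the
    # interesting one is always last.
    for line in client.pull(repo, tag=tag, stream=True):
        status = json.loads(line).get('status', '')
        if 'Image is up to date' in status:
            return 'unchanged'
        if 'Downloaded newer image' in status:
            return 'changed'
    return 'unknown'
```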
If this parameter was not of the right type, the module would fail with a
traceback, with a "AttributeError: 'str' object has no attribute 'get'"
exception.
It now gives a proper error message on type errors.
Update a document file for win_get_url.ps1.
Add a proxy_ prefix for these variables.
Update a document file for win_get_url.ps1.
Update win_get_url.ps1 20150907
Before this patch:
- Command was matched if 'Command' field of docker-py
representation of Docker container ends with 'command' passed
to Ansible docker module by user.
- That can give false positives and false negatives.
- For example:
a) If 'command' was set up with more than one space,
like 'command=sleep  123', it would never be matched again
with a container(s) launched by this task.
Because after launching, command would be normalized and
appear, in docker-py API call, just as 'sleep 123' - with one
space. This is false negative case.
b) If 'entrypoint + command = command', for example
'sleep + 123 = sleep 123', module would give false positive
match.
This patch fixes it by making matching more explicit - against the
'Config' -> 'Cmd' field of 'docker inspect' output, provided by the docker-py
API, and with proper normalization of user input by splitting it into
tokens with 'shlex.split()'.
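A minimal sketch of the comparison described, assuming the docker-py inspect output shape (function and variable names are illustrative):

```python
import shlex

def command_matches(user_command, inspect_output):
    # Normalize the user's 'command' string the same way Docker does,
    # then compare it against the exact Cmd recorded for the container.
    expected = shlex.split(user_command or '')
    actual = inspect_output.get('Config', {}).get('Cmd') or []
    return expected == actual

# e.g. command_matches('sleep  123', {'Config': {'Cmd': ['sleep', '123']}}) -> True
```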
nics is a great flexible parameter, but it's wordy. Shade now supports
a simple parameter too, which is just "network" and takes a name or id.
Add passthrough support.
In addition to supporting booting from a pre-existing volume, nova and
shade both support the concept of booting from volume based on an image.
Pass the parameters through.
Shade supports boot-time attachment of additional volumes for OpenStack
instances. Pass through the parameter so that ansible users can also
take advantage of this.
* This keeps us from hitting bugs in repoquery/yum plugins in certain
instances (#2559).
* The previous is also a small performance boost
* Also in is_installed(), when using the yum API, return if we detect
a package name has been installed. We don't need to also check
virtual provides in that case. This is another small performance
boost.
* Sort the list of packages returned by the list parameter.
If this is not set, Ansible parses the parameter as a string.
This is fine if the parameter is not provided by the caller, but
if it is set to False or True explicitly, ec2_vol receives this as
the string 'False' or the string 'True', both of which are truthy.
Thus, without this fix, setting the parameter results in encryption
always enabled.
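A minimal sketch of the fix in the argument spec, assuming the parameter in question is the volume's `encrypted` flag (the surrounding spec entries are omitted):

```python
from ansible.module_utils.basic import AnsibleModule

module = AnsibleModule(
    argument_spec=dict(
        # Declare the type explicitly so "False"/"True" from the playbook
        # are converted to real booleans instead of truthy strings.
        encrypted=dict(type='bool', default=False),
    ),
)
```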
If the requirements file contains a repo url, pip will always report 'Successfully
installed'; there is no difference in the output to tell whether
anything new was pulled. Use freeze to detect if the environment changed
in any way.
Should fix ansible/ansible#1705
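A rough sketch of the freeze-based change detection (the command strings and function name are illustrative; the real module builds them from its parameters):

```python
def pip_changed(module, pip_path, requirements):
    # Snapshot the environment, run the install, snapshot again;
    # any difference in `pip freeze` output means something changed.
    _, before, _ = module.run_command([pip_path, 'freeze'])
    module.run_command([pip_path, 'install', '-r', requirements], check_rc=True)
    _, after, _ = module.run_command([pip_path, 'freeze'])
    return before != after
```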
added mysql 5.7 user password modification support with backwards compatibility
resolved mysql server version check and differences in user authentication management
explicitly state support for mysql_native_password type and no others. fixed some failing logic and updated samples
updated comment to actually match logic.
simplified conditionals and a little refactor
Since use_unsafe_shell is suspicious from a security point
of view (or it wouldn't be unsafe), the less we have, the less
code we have to thoroughly inspect for a security audit.
Warning catches typos in the filename. Since the playbook is saying
"make sure this user doesn't have an entry" it makes more sense to warn
than to error.
Fixes #2619
The parameters 'diff_peek' and 'validate' are not expected to be used
by users; they are internal. This change adds an 'Internal use only'
comment to each of those definitions to make it clear that they are
actually used, just not by end-users.
The 'diff_peek' option isn't documented at all, and provides a
rudimentary check that the content isn't binary. Documentation is
added to explain the option.
The 'validate' option has a declaration, but isn't implemented.
Therefore it may as well be removed from the module.
Previously, the `promote` command in the `rds` module would always return OK and never actually promote an instance. This was because `promote_db_instance()` had its conditions backwards: if the instance had the `replication_source` attribute indicating that it **was** a replica, it would set `changed = False` and do nothing. If the instance **wasn't** a replica, it would attempt to run `boto.rds.promote_read_replica()`, which would always fail.
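A minimal sketch of the corrected condition, assuming boto's RDS API (`conn` and the function name are illustrative; error handling is omitted):

```python
def promote_db_instance(module, conn, instance_name):
    instance = conn.get_all_dbinstances(instance_name)[0]
    # Only a read replica (i.e. one with a replication_source) can be
    # promoted; a standalone instance is left alone and reports no change.
    if getattr(instance, 'replication_source', None):
        conn.promote_read_replica(instance_name)
        return True   # changed
    return False      # already a standalone instance
```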
'exact_count' and 'state' are mutually exclusive options, so they should not appear together in the following examples:
- # Enforce that 5 running instances named "database" with a "dbtype" of "postgres" example and
- # Enforce that 5 instances with a tag "foo" are running
The yum module allows the 'name' parameter to be given as 'pkg', in
a similar way to some of the other package managers. This change
documents this alias.
The module's 'state' parameter has two other aliases, in line with
the 'apt' action; the 'state' parameter can take 'installed' as an
alias for 'present', and 'removed' as an alias for 'absent'. These
aliases are documented.
The min_disk and min_ram parameters were not being passed to
the shade API. They also need to be integer values. Also
updated the description of these parameters for
clarity.
There was no db restore example. I've provided one that shows how to do the restore, then add a security group (you cannot add the security group during the restore step -- it has to be done in a modify step afterward). Also, I show how to get the endpoint.
The absent function was not working on users with a login profile;
also fixed the exception handling.
Fixed the delete user function;
it now works with or without a login profile (password).
Fixed a typo.
Have `os_server_facts` call `list_servers` rather than `get_server`, and
treat the `server` parameter as a wildcard pattern. This permits one to
get facts on a single server:
- os_server_facts:
    server: webserver1
on multiple servers:
- os_server_facts:
    server: webserver*
or on all servers:
- os_server_facts:
Introduces a `detailed` parameter to request additional server details
at the cost of additional API calls.
When this was treated as a boolean, sphinx was leaving the Default
column on http://docs.ansible.com/ansible/ec2_module.html blank,
implying it would use AWS's default. In reality, it passes False, which
overrides the defaults at AWS (it's possible to boot an instance which
AWS claims will always have EBS optimization without it because of this
silently passed False).
The pysphere VIVirtualMachine.clone() method supports specifying a VM
folder to place the VM in after the clone has completed. This exposes
that functionality to playbooks.
Also documents that creating VMs has always been able to place them in a
specific folder.