The command `hg up -C` by default moves to the latest revision on the
current branch. The `discard` function was trying to update to a
different branch, when one was provided, by passing a `-r REVISION`
argument. Not only is this not the intended effect of the `discard`
function, but it could also update to a branch that hasn't been pulled
yet, which is how we ran into trouble.
Instead, we unconditionally run `hg up -C -r .` to "update" to the
current revision (i.e. to "."), while `-C/--clean`ing the working
directory. This is similar to `hg revert --all`, except that it also
undoes the merge state of the working directory, if there was any.
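In sketch form, the discard path now boils down to the following (this is
only an illustration, not the module's actual code, and uses a plain
subprocess call instead of the module's own hg wrapper):

    import subprocess

    def discard(repo_path):
        # Always clean against the current revision ('.'), never against a
        # possibly-unpulled revision on another branch.
        subprocess.check_call(['hg', 'update', '-C', '-r', '.', '-R', repo_path])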
Previously the module hard-coded the default logging driver. This means
that if the docker daemon is started with a different logging driver, the
Ansible module would continually restart the container when run.
This fix adds a call to docker.Client.info(), which is inspected when a logging
driver is not supplied in the playbook, and the container is only restarted if
the logging driver applied to it differs from the configured default.
In practice, this has solved issues with using alternative logging drivers.
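A rough sketch of the comparison (assuming the daemon reports its default
driver under the 'LoggingDriver' key of docker.Client.info(), and that the
inspected container exposes its driver under HostConfig -> LogConfig -> Type):

    def logging_driver_differs(client, container, requested_driver):
        # Fall back to the daemon's configured default when the playbook
        # does not supply a logging driver explicitly.
        expected = requested_driver or client.info().get('LoggingDriver', 'json-file')
        actual = container['HostConfig']['LogConfig']['Type']
        return actual != expected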
Fix 'REQUIRE SSL' in combination with 'GRANT OPTION'.
Refactoring: code cleanup to make it easier to understand.
Code rewritten, inspired by @willthames.
Added WITH GRANT OPTION as an exception: when only REQUIRESSL and/or GRANT
are specified, we have to add USAGE.
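The rule amounts to something like this (a sketch with illustrative names,
not the module's exact code):

    def normalize_privs(privs):
        # GRANT (i.e. WITH GRANT OPTION) and REQUIRESSL are not privileges on
        # their own; if nothing else was requested, MySQL still needs a real
        # privilege in the GRANT statement, so add USAGE.
        if all(p in ('GRANT', 'REQUIRESSL') for p in privs):
            return ['USAGE'] + privs
        return privs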
Without this change, trouble can occur when the "deb" parameter
is used, because the env vars controlling dpkg are not set. For example,
installing a package that requires user input will never finish, since
DEBIAN_FRONTEND=noninteractive is not set.
So export the env vars in APT_ENV_VARS before running dpkg, just as is
done when using apt-get/aptitude.
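In sketch form (the contents of APT_ENV_VARS and the dpkg path shown here
are illustrative, and `module` is the AnsibleModule instance):

    import os

    APT_ENV_VARS = dict(DEBIAN_FRONTEND='noninteractive', DEBIAN_PRIORITY='critical')

    def install_deb(module, deb_file, dpkg_path='/usr/bin/dpkg'):
        # Export the same environment used for apt-get/aptitude before
        # shelling out to dpkg, so installs never block on user input.
        for key, value in APT_ENV_VARS.items():
            os.environ[key] = value
        return module.run_command([dpkg_path, '-i', deb_file])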
The module now assumes that if the argument is a string it is already
formatted as JSON, and only tries to convert non-strings into a JSON string.
Also removed unused 'msg' var declarations and the ifs that set them.
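In other words, the conversion now looks roughly like this (Python 3
spelling; the module itself would check basestring on Python 2):

    import json

    def to_json_arg(value):
        # Strings are assumed to already be formatted as JSON; anything
        # else (dict, list, bool, ...) is serialized.
        if isinstance(value, str):
            return value
        return json.dumps(value)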
Fixes #2009
Since we now have several exceptions to the assumption that the
result of the pull would be on the last status line returned by
docker-py's pull(), I've changed the function so that it looks
through the status lines and returns what it finds there.
Despite the repeated `break`s, the code seems simpler and a little
more coherent like this. From what I've checked using
`https://github.com/jlafon/ansible-profile`, the execution time is
mostly the same.
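The scanning looks roughly like this (a sketch; the exact status strings
and the streaming form of docker-py's pull() are assumptions based on
typical daemon output):

    import json

    def pull_result(client, image, tag):
        # Look at every status line rather than only the last one.
        for line in client.pull(image, tag=tag, stream=True):
            status = json.loads(line).get('status', '')
            if 'Downloaded newer image' in status:
                return 'changed'
            if 'Image is up to date' in status:
                return 'unchanged'
        return 'unknown'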
If this parameter was not of the right type, the module would fail with a
traceback and an "AttributeError: 'str' object has no attribute 'get'"
exception.
It now gives a proper error message on type errors.
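The check added is essentially of this shape (the parameter name is
illustrative and `module` is the AnsibleModule instance):

    value = module.params.get('some_mapping_param')
    if not isinstance(value, dict):
        module.fail_json(msg="some_mapping_param must be a dictionary, got %s"
                             % type(value).__name__)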
Update the documentation in win_get_url.ps1.
Add a proxy_ prefix to these variables.
Update win_get_url.ps1 (20150907).
Before this patch:
- A command was matched if the 'Command' field of the docker-py
  representation of a Docker container ended with the 'command' passed
  to the Ansible docker module by the user.
- That can give false positives and false negatives.
- For example:
  a) If 'command' was set up with more than one space,
     like 'command=sleep  123', it would never be matched again
     with containers launched by this task.
     Because after launching, the command would be normalized and
     appear in the docker-py API call as just 'sleep 123', with one
     space. This is the false negative case.
  b) If 'entrypoint + command = command', for example
     'sleep + 123 = sleep 123', the module would give a false positive
     match.
This patch fixes it by making the matching more explicit: against the
'Config' -> 'Cmd' field of the 'docker inspect' output provided by the
docker-py API, and with proper normalization of user input by splitting
it into tokens with 'shlex.split()'.
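The new comparison is roughly (a sketch; the function name is illustrative
and the container layout follows docker-py's inspect output):

    import shlex

    def command_matches(inspected, user_command):
        if not user_command:
            return True
        # 'sleep  123' and 'sleep 123' both become ['sleep', '123'].
        expected = shlex.split(user_command)
        actual = inspected.get('Config', {}).get('Cmd') or []
        return actual == expected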
nics is a great, flexible parameter, but it's wordy. Shade now also supports
a simpler parameter, "network", which takes a name or id.
Add passthrough support for it.
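Passing it through is then just (a sketch, assuming shade's create_server
takes the new argument as described above; `cloud` is the shade client,
`module` the AnsibleModule instance, and the other parameters are elided):

    server = cloud.create_server(
        name=module.params['name'],
        image=module.params['image'],
        flavor=module.params['flavor'],
        network=module.params.get('network'),  # simple: a single network name or id
        nics=module.params.get('nics') or [],  # the existing, more verbose form
    )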
In addition to supporting booting from a pre-existing volume, nova and
shade both support the concept of booting from volume based on an image.
Pass the parameters through.
Shade supports boot-time attachment of additional volumes for OpenStack
instances. Pass through the parameter so that ansible users can also
take advantage of this.
* This keeps us from hitting bugs in repoquery/yum plugins in certain
instances (#2559).
* The previous change is also a small performance boost.
* Also, in is_installed(), when using the yum API, return early if we
  detect that the package name is installed. We don't need to also check
  virtual provides in that case. This is another small performance
  boost.
* Sort the list of packages returned by the list parameter.
If this is not set, Ansible parses the parameter as a string.
This is fine if the parameter is not provided by the caller, but
if it is set to False or True explicitly, ec2_vol receives this as
the string 'False' or the string 'True', both of which are truthy.
Thus, without this fix, setting the parameter explicitly results in
encryption always being enabled.
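The fix amounts to declaring the option as a real boolean so Ansible
coerces the strings 'True'/'False' correctly (argument_spec fragment; the
default shown here is illustrative):

    argument_spec.update(dict(
        encrypted=dict(type='bool', default=False),
    ))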
If the requirements file contains a repo URL, the module will always report
'Successfully installed'; there is no difference in the output to tell
whether anything new was pulled in. Use freeze to detect whether the
environment changed in any way.
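The detection is essentially a before/after comparison of 'pip freeze'
(a sketch; `module` is the AnsibleModule instance, and pip_path and
requirements_file are illustrative names):

    rc, before, _ = module.run_command([pip_path, 'freeze'])
    module.run_command([pip_path, 'install', '-r', requirements_file])
    rc, after, _ = module.run_command([pip_path, 'freeze'])
    changed = before != after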
Should fix ansible/ansible#1705
Added MySQL 5.7 user password modification support with backwards compatibility.
Resolved the MySQL server version check and differences in user authentication
management.
Explicitly state support for the mysql_native_password type and no others; fixed
some failing logic and updated samples.
Updated a comment to actually match the logic.
Simplified conditionals and did a little refactoring.
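The version check and the 5.7 path look roughly like this (a sketch only;
the SQL spelling is an assumption based on the MySQL 5.7 documentation, not
necessarily the module's exact statements):

    def use_old_user_table(cursor):
        # MySQL 5.7 dropped the Password column from mysql.user in favour of
        # authentication_string, so pick the code path by server version.
        cursor.execute("SELECT VERSION()")
        major, minor = cursor.fetchone()[0].split('.')[:2]
        return (int(major), int(minor)) < (5, 7)

    def set_password(cursor, user, host, password):
        if use_old_user_table(cursor):
            cursor.execute("SET PASSWORD FOR %s@%s = PASSWORD(%s)",
                           (user, host, password))
        else:
            cursor.execute("ALTER USER %s@%s IDENTIFIED WITH "
                           "mysql_native_password BY %s", (user, host, password))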
Since use_unsafe_shell is suspicious from a security point
of view (or it wouldn't be unsafe), the less we have, the less
code we have to thoroughly inspect for a security audit.
A warning still catches typos in the filename. Since the playbook is saying
"make sure this user doesn't have an entry", it makes more sense to warn
than to error.
Fixes #2619
The parameters 'diff_peek' and 'validate' are not expected to be used
by users; they are internal. This change adds an 'Internal use only'
comment to each of those definitions to make it clear that they are
actually used, just not by end-users.
The 'diff_peek' option isn't documented at all, and provides a
rudimentary check that the content isn't binary. Documentation is
added to explain the option.
The 'validate' option has a declaration, but isn't implemented.
Therefore it may as well be removed from the module.
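Illustratively, the relevant part of the argument_spec ends up looking like
this (option names are from the commit message; everything else is omitted):

    argument_spec = dict(
        # ... regular, user-facing options ...
        # Internal use only: requests a rudimentary check that the content
        # isn't binary, as described above. Not intended for playbooks.
        diff_peek=dict(default=None),
        # 'validate' was declared but never implemented, so it is dropped
        # rather than documented.
    )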
Previously, the `promote` command in the `rds` module would always return OK and never actually promote an instance. This was because `promote_db_instance()` had its conditions backwards: if the instance had the `replication_source` attribute indicating that it **was** a replica, it would set `changed = False` and do nothing. If the instance **wasn't** a replica, it would attempt to run `boto.rds.promote_read_replica()`, which would always fail.
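The corrected logic is essentially (a sketch; `conn` is a boto.rds
connection and the instance dict layout follows the commit message):

    def promote_db_instance(conn, instance_name, instance):
        if instance.get('replication_source'):
            # It IS a replica, so promoting it is a real change.
            conn.promote_read_replica(instance_name)
            return True
        # Not a replica: nothing to promote.
        return False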
'exact_count' and 'state' are mutually exclusive options, so they should not
both appear in the following examples:
- # Enforce that 5 running instances named "database" with a "dbtype" of "postgres"
- # Enforce that 5 instances with a tag "foo" are running
The yum module allows the 'name' parameter to be given as 'pkg', in
a similar way to some of the other package managers. This change
documents this alias.
The module's 'state' parameter also has two value aliases, in line with
the 'apt' action: it can take 'installed' as an alias for 'present', and
'removed' as an alias for 'absent'. These aliases are documented.
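For reference, the aliases being documented correspond to an argument_spec
along these lines (fragment only; defaults and other options are omitted):

    argument_spec = dict(
        name=dict(aliases=['pkg']),
        state=dict(choices=['present', 'installed', 'latest', 'absent', 'removed']),
    )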
The min_disk and min_ram parameters were not being passed to
the shade API. They also need to be integer values. Also
updated the description of these parameters for better
clarity.
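The pass-through is then roughly (a sketch, assuming shade's create_image
accepts min_disk/min_ram keyword arguments; `cloud` is the shade client):

    image = cloud.create_image(
        name=module.params['name'],
        filename=module.params['filename'],
        min_disk=int(module.params['min_disk'] or 0),
        min_ram=int(module.params['min_ram'] or 0),
    )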
There was no db restore example. I've provided one that shows how to do the restore, then add a security group (you cannot add the security group during the restore step -- it has to be done in a modify step afterward). Also, I show how to get the endpoint.
The 'absent' state was not working on a user with a login profile.
Also fixed the exception handling and the delete user function;
it now works with or without a login profile (password).
Fixed a typo.
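The corrected delete path is roughly (a sketch; `iam` is a boto IAM
connection and the function name is illustrative):

    import boto.exception

    def delete_user(iam, name):
        try:
            # Only present when the user has a password (login profile).
            iam.delete_login_profile(name)
        except boto.exception.BotoServerError:
            pass  # no login profile to remove
        iam.delete_user(name)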
Have `os_server_facts` call `list_servers` rather than `get_server`, and
treat the `server` parameter as a wildcard pattern. This permits one to
get facts on a single server:

    - os_server_facts:
        server: webserver1

on multiple servers:

    - os_server_facts:
        server: webserver*

or on all servers:

    - os_server_facts:

Introduces a `detailed` parameter to request additional server details
at the cost of additional API calls.
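The wildcard handling is essentially fnmatch over the full server list (a
sketch; `cloud` is the shade client, and any per-server detail lookups for
`detailed` would happen on the matched subset):

    import fnmatch

    def matching_servers(cloud, pattern=None):
        servers = cloud.list_servers()
        if pattern:
            servers = [s for s in servers if fnmatch.fnmatch(s['name'], pattern)]
        return servers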
When this was treated as a boolean, sphinx was leaving the Default
column on http://docs.ansible.com/ansible/ec2_module.html blank,
implying it would use AWS's default. In reality, it passes False, which
overrides the defaults at AWS (because of this silently passed False, it
is possible to boot an instance without EBS optimization even when AWS
claims that instance type will always have it).
The pysphere VIVirtualMachine.clone() method supports specifying a VM
folder to place the VM in after the clone has completed. This exposes
that functionality to playbooks.
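Exposed to playbooks, this is roughly (a sketch, assuming pysphere's
clone() accepts the folder keyword as described; `vm` is a
VIVirtualMachine, and clone_name/target_folder are illustrative names
taken from the playbook parameters):

    new_vm = vm.clone(clone_name, folder=target_folder)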
Also documents that VM creation could already place VMs in a specific
folder.
Closes #1189.
This will cause the settings in Ansible to override the system settings.
That will have no effect except on systems that have an out-of-Ansible
configuration that disables automatic installation of recommended
packages. Previously, Ansible would use the OS default whenever
install_recommends wasn't part of the playbook. This change causes
Ansible's default of installing recommended packages to override the
OS-level configuration for anything installed through Ansible, even when
install_recommends is not specified in the playbook. Because the OS
default matches the Ansible default, this shouldn't have a wide impact.
Give user a course of action in the case where the suggestions do not
work. This will hopefully allow us to work through any further issues
much faster.
This commit enables using TLS with the docker_image module. It
also removes the default for docker_url, which previously prevented us
from checking for DOCKER_HOST, a more sensible default. This allows you
to use docker_image on OS X, but more documentation is needed.
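With the hard-coded default gone, the URL resolution can fall back to the
environment (a sketch; the final fallback value is illustrative and
`module` is the AnsibleModule instance):

    import os

    docker_url = (module.params.get('docker_url')
                  or os.environ.get('DOCKER_HOST')
                  or 'unix://var/run/docker.sock')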
* Reading from a socket that gave some data we weren't looking for and
  then closed.
* Reading from a socket that stays open and never sends data.
* Reading from a socket that sends data, but not the data we're looking
  for.
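A rough sketch (not the module's actual code) of a read loop that copes
with all three cases; `search` is the byte string being waited for:

    import socket
    import time

    def wait_for_data(host, port, search, timeout):
        deadline = time.time() + timeout
        s = socket.create_connection((host, port), timeout=5)
        s.settimeout(5)
        buf = b''
        while time.time() < deadline:
            try:
                chunk = s.recv(1024)
            except socket.timeout:
                continue           # socket open but silent: keep waiting
            if not chunk:
                return False       # peer closed without sending what we wanted
            buf += chunk
            if search in buf:
                return True        # found the data we were looking for
        return False               # data kept arriving, but never matched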
Fixes #2051