When using Ansible deployment on git push, git inserts "remote:"
at the start of Ansible's output. If you force color on Ansible,
the "remote:" prefix also gets colored whenever the string to display
spans more than one line.
This change makes sure that each end of line resets the color, instead
of resetting it only at the end of the string.
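A minimal sketch of the idea (the function name is illustrative, not the actual display-code change): reset the color before every newline and re-open it afterwards, so a per-line prefix such as git's "remote: " stays uncolored.
```
# Illustrative sketch only: reset the ANSI color at each end of line and
# re-apply it on the next line, so per-line prefixes are not colorized.
def colorize_multiline(text, color_code="0;32"):
    start = "\033[" + color_code + "m"
    reset = "\033[0m"
    return start + text.replace("\n", reset + "\n" + start) + reset

print(colorize_multiline("PLAY [all]\nTASK [ping]"))
```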
Added iocage connector that extends the jail connector. Uses iocage to translate iocage tags or UUIDs/partial UUIDs to the actual jail name and then uses the jail connector for actual functionality.
This plugin can be used with the lpass CLI interface for LastPass.
[lastpass-cli](https://github.com/lastpass/lastpass-cli)
Example:
Add a lookup to your playbooks/variables somewhere:
```
some_variable: "{{ lookup('lastpass','Some Lastpass entry name or ID', field='username') }}"
```
Usage:
* start a lpass session prior to using ansible
* run ansible
* logout when finished
```
lpass login user@domain.com
ansible-playbook foo.yml
lpass logout
```
* Initial Commit for Infinidat Ansible Modules
Skip tests for python 2.4 as infinisdk doesn't support python 2.4
Move common code and arguments into module_utils/infinibox.py
Move common documentation to documentation_fragments. Cleanup Docs and Examples
Fix formatting in module descriptions
Add check mode support for all modules
Import AnsibleModule only from ansible.module_utils.basic in all modules
Skip python 2.4 tests for module_utils/infinibox.py
Documentation and code cleanup
Rewrite examples in multiline format
Misc Changes
Test
* Add Infinibox modules to CHANGELOG.md
* Add ANSIBLE_METADATA to all modules
* Add update parameter in junos_config module, which supports
configuration actions like merge, replace and overwrite.
* Add support for replace along with the update
argument
Since we no longer use a post-validated task in _process_pending_results, we
need to be sure to template fields used in original_task as they are raw and
may contain variables.
This patch also moves the handler tracking to be per-uuid, not per-object.
Doing it per-object had implications for the above due to the fact that the
copy of the original task is now being used, so the only sure way is to track
based on the uuid instead.
Fixes #18289
If the plugin version expected is, say, '1.20', then specifying it
as...
version: 1.20
... will make the YAML parser interpret it as a float, and the
value obtained by the module will be 1.2 instead of 1.20, which
will cause the wrong version of the module to be downloaded.
This patch updates the docs so that users don't face this issue.
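A quick way to see the behavior described above (assuming PyYAML, which is what Ansible uses for parsing):
```
# Unquoted, 1.20 is parsed as a float and the trailing zero is lost;
# quoting it keeps the value a string.
import yaml

print(yaml.safe_load("version: 1.20"))    # {'version': 1.2}
print(yaml.safe_load("version: '1.20'"))  # {'version': '1.20'}
```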
* Fix #5839: Add 'update' parameter in junos_config module
Add update parameter in junos_config module, which supports
configuration actions like merge, replace and overwrite.
* Fix documentation issue
* Fix review comment to add replace argument
Make the replace and update arguments mutually
exclusive, keeping replace for backward
compatibility.
Previously, packages were installed one at a time in a loop. This caused
a couple of problems.
First, it was a performance issue - pacman would have to perform all of
its checks once per package. This is unnecessarily costly, especially
when you're trying to install several related packages at the same time.
Second, if a package you're trying to install depends on a virtual
package that is provided by several different packages (such as the
"libgl" package on Arch) and you aren't also installing something that
provides that virtual package at the same time, pacman will produce an
interactive prompt to allow the user to select a relevant package. This
is obviously incompatible with how ansible operates. Yes, this problem
could be avoided by installing packages in a different order, but the
order of installation shouldn't matter, and there may be situations
where it is not possible to control the order of installation.
With this refactoring, all of the above problems are avoided. The code
will now work out all of the packages that need to be installed from any
configured repositories and any packages that need to be installed from
local files, and then install all the repository packages in one go and
then all of the local file packages in one go.
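A rough sketch of the batching approach (the helper name and the file-extension check are illustrative, not the module's exact code):
```
# Split the requested packages into repository packages and local files,
# then hand each group to pacman in a single invocation.
def install_packages(module, pacman_path, packages):
    files = [p for p in packages if p.endswith(".pkg.tar.xz")]
    repo_pkgs = [p for p in packages if p not in files]
    for flag, group in (("-S", repo_pkgs), ("-U", files)):
        if group:
            cmd = [pacman_path, flag, "--noconfirm"] + group
            module.run_command(cmd, check_rc=True)
```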
This is a redesign of how plugins call _remote_checksum().
- _remote_stat() has been modified to report the real error as AnsibleError
- Action plugin **unarchive** calls _remote_stat() directly instead of
_remote_checksum()
- Action plugin **unarchive** also handles the exceptions directly
- Ensure get_exception() returns native text
Two other action plugins, **template** and **fetch**, also do a remote checksum.
In **template** we already call _remote_stat(), just like we now do for
unarchive; in **fetch** we still call _remote_checksum() and make the
exact same mistake as the unarchive plugin. So that one could use a
redesign as well.
This fixes #19494
Before:
```
[dag@moria ansible.testing]$ ansible-playbook -v test137.yml
Using /home/dag/home-made/ansible.testing/ansible.cfg as config file
PLAY [localhost]
******************************************************************************************************
TASK [unarchive]
******************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg":
"python isn't present on the system. Unable to compute checksum"}
PLAY RECAP
******************************************************************************************************
localhost : ok=0 changed=0 unreachable=0
failed=1
```
After:
```
[dag@moria ansible.testing]$ ansible-playbook -v test137.yml
Using /home/dag/home-made/ansible.testing/ansible.cfg as config file
PLAY [localhost]
*************************************************************************************************************
TASK [unarchive]
*************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg":
"Failed to get information on remote file (/tmp/): sudo: unknown user:
foobar\nsudo: unable to initialize policy plugin\n"}
PLAY RECAP
*******************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0
failed=1
```
* Update system/user.py module.
Add the ability to add real system users with the next free system UID (< 500) on macOS.
* Improve syntax in system/user.py module.
Remove a complex if/else line and replace it with a simple comparison which yields the same boolean value.
* Remove "True" comparison in user.py.
Remove the comparison to True, as it is not PEP 8 conformant.
* Add new parameters to taskdefinition module - network_mode and task_role_arn
* Add version_added field for doco
* Change version_added parameter to 2.3
For devices that do not support multiplexing, we cannot automatically
determine the network OS. This removes the OS-guess static method
from the terminal plugin. For these devices, the network_os
value must be configured.
It's possible to compress packages using several different compression
methods, or to not compress them at all. Previously, the pacman module only
supported files compressed using xz. This update ensures that all
compression types currently supported by pacman are supported by the
Ansible pacman module.
The list of supported compression methods at the time of writing can be
found here:
https://git.archlinux.org/pacman.git/tree/scripts/makepkg.sh.in#n747
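A hedged sketch of what recognizing the different package suffixes can look like (the exact extension list here is illustrative, not copied from the module):
```
# Accept any pacman package suffix, compressed or not, rather than only .pkg.tar.xz.
import re

PKG_RE = re.compile(r"\.pkg\.tar(\.(gz|bz2|xz|lrz|lzo|Z))?$")

def is_package_file(name):
    return PKG_RE.search(name) is not None

print(is_package_file("ansible-2.2.0-1-any.pkg.tar.xz"))  # True
print(is_package_file("ansible-2.2.0-1-any.pkg.tar"))      # True
```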
This fix ensures that if there are specific module errors (in our case
the python interpreter was not found) then command and shell return a
proper error.
It also fixes a few other imperfections that we noticed during
troubleshooting:
- Return the real RC if it is available
- Improve a dictionary evaluation using .get()
- Return an RC of -1 if it is unknown (instead of returning 0)
This fixes #18846
This fix ensures a proper error is shown when a group_vars files cannot
be parsed correctly. Without this patch you get:
```
[dag@moria ansible.testing]$ ansible-playbook test132.yml
ERROR! Unexpected Exception: dictionary update sequence element #0 has length 1; 2 is required
to see the full traceback, use -vvv
```
With this patch you get:
```
[dag@moria ansible.testing]$ ansible-playbook test132.yml
ERROR! Problem parsing file '/home/dag/home-made/ansible.testing/group_vars/test135': line 1, column 1
```
This fixes #18843
Sudoers is a great example to show how you can avoid locking yourself
out. But sshd is at least as important, since syntax errors there cause a
lot of grief. So I think it deserves a spot in this list :-)
Currently this function directs to the standard NetworkModule,
whose run_commands function takes no arguments (other than self).
This change directs the call to the connection's cli method to run the
command directly on the device.
A connection plugin can define the default action plugin to use by providing
an action_handler instance variable. This will override the default
action plugin, normal.
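A hedged sketch of the mechanism (the class body is abbreviated; only the attribute matters here):
```
# A connection plugin advertising which action plugin should handle its tasks.
from ansible.plugins.connection import ConnectionBase

class Connection(ConnectionBase):
    transport = 'example'

    def __init__(self, *args, **kwargs):
        super(Connection, self).__init__(*args, **kwargs)
        # Strategy code can look for this attribute and use it in place of
        # the default 'normal' action plugin.
        self.action_handler = 'network'
```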
* adds new error AnsibleModuleExit to handle module returns
* adds new action plugin network for attaching connection to network modules
* adds new shared module local to receive connection
* splits out function to update task_args with common updates
This commit provides a mechanism for running local modules that require
a connection object for interactive commands typically implemented for
network devices. It provides a way to locally import modules (post fork)
and run them using exception handling to exit.
* Fix bug #5328 apache module loading
Currently, the apache2_module module parses apache configs
for correctness when enabling or disabling apache2 modules.
This behavior introduced a conflict condition when transitioning
between mpm modules, such as mpm_worker and mpm_event.
This change accounts for the specific error condition raised
by ``apachectl -M``:
``AH00534: apache2: Configuration error: No MPM loaded.``
When loading or unloading a module with a name that contains 'mpm_',
apache2_module will ignore the error raised by apachectl if stderr
contains 'AH00534'.
Fixes #5328
* Add AH00534 warning
* Added changes from PR #5629
* Modified ignore_configcheck behavior
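A hedged sketch of the check described above (the function name and signature are illustrative):
```
# Tolerate apachectl's "AH00534 ... No MPM loaded." error while switching MPM
# modules, since it is expected mid-transition; otherwise treat a non-zero rc
# as a real configuration error.
def config_check_ok(name, rc, stderr, ignore_configcheck=False):
    if rc == 0:
        return True
    if 'AH00534' in stderr and ('mpm_' in name or ignore_configcheck):
        return True
    return False
```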
* Code smell test for iteritems and itervalues
* Change the keydict object in authorized_keys so it doesn't throw a false positive
keydict is a bad data structure anyway. We don't use the iteritems and
itervalues methods, so just disable them so that the code-smell tests do
not trigger on it.
* Change release templates so they work with py3
The process to poll for data in the stdout and/or stderr pipes during a
low-level command execution was repetitive. Factoring this out into a
function DRYs out the code.
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
For the comparisons that need to be done, this map call needs
to convert to a list, because the six import in Ansible changes
the behavior of map to return an iterator instead of a list.
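A small illustration of the difference (using six directly here for brevity):
```
# With six.moves.map (and on Python 3), map() returns an iterator, so wrap it
# in list() before comparing it against another list.
from six.moves import map

parts = list(map(int, "2.7.13".split(".")))
# Comparing the bare iterator gives wrong results on py2 and a TypeError on py3.
print(parts >= [2, 7])   # True
```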
* Fix UnboundLocalError remote_head in git
Fixes #5505
The use of remote_head was a leftover of #4562.
remote_head is not necessary, since the repo is unchanged anyway and
after is set correctly.
Further changes:
* Set changed=True and msg once local_mods are detected and reset.
* Remove need_fetch that is always True (due to previous if) to improve
clarity
* Don't exit early for local_mods but run submodules update and
switch_version
* Add test for git with local modifications
* Enable tests on python 3 for uri
* Added one more node type to SAFE_NODES in the safe_eval module.
ast.USub represents the unary minus operator. This is necessary for
parsing some unusual but still valid JSON files during testing
with Python 3.
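A quick demonstration of why that node shows up:
```
# Parsing a negative number yields a UnaryOp node whose operator is USub,
# which is why ast.USub has to be whitelisted in SAFE_NODES.
import ast

node = ast.parse("-1", mode="eval").body
print(type(node).__name__, type(node.op).__name__)  # UnaryOp USub
```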
* Rebase of https://github.com/ansible/ansible-modules-extras/pull/708
708 was full of extraneous merge commits interwoven with commits to
implement the feature. In the end the only way I could clean this up
in reasonable time was to just take a regular diff between the PR and
the base. This lost the history of intermediate commits but I've
preserved attribution to @dayton967 via git's --author field.
Although I preserved the logic of the PR, there were a few additional
things that I cleaned up:
* Fixed import of email.mime.multipart
* Used the argspec to set port and timeout to integers instead of having
ad hoc code inside of the module.
* Used argspec's choices for secure instead of ad hoc code inside of the
module.
* Removed some unused variables
* Made secure_state a python boolean instead of using 0 and 1
* Used secure with string comparisons instead of turning it into an
integer code. This is much more readable.
* Fixed catching of SMTPExceptions (SMTPException wasn't imported
directly so it needed to use the smtplib namespace.)
* Fix synchronize retries
The synchronize module munges its task args on every invocation of
run(). This was problematic because the munged data was not fit for use
by a second pass of the synchronize module. Correct this by using a copy
of the task args on every invocation of run() so that the original args
are not affected.
Local testing using this playbook seems to confirm that things work as
expected:
- hosts: all
  tasks:
    - delay: 2
      register: task_result
      retries: 1
      until: task_result.rc == 0
      synchronize:
        dest: /tmp/out
        mode: pull
        src: /tmp/nonexistent/
fixes #18281
* Update synchronization fixture assertions
When we started operating on a copy of the task args, the test assertions
were no longer asserting things about the munged state but about the
pristine state. Convert the copy of task args to a class member so that
it can be compared against later in testing, and update the assertions to
check this munged copy.
* Shuffle objects around for cleaner testing
Attach the temporary args dict to the task rather than the action as
this makes updating the existing tests cleaner.
The overwrite parameter is forcibly set to false, meaning that passing
that parameter to the module will have no effect. The overwrite facility
is necessary to ensure that conflicting options can be written to the
configuration (which, in replace mode, they cannot).
This change ensures that if overwrite is set, it will not be changed
to False in the logic.
* Fixes #18663. Bad handling of existing config in dellos9 module.
The dellos9 module doesn't correctly build the internal
structures used to represent the existing config of the managed
network device. This leads to changes being applied every time the
playbook is run, even if the existing config is the same as the
one you are trying to push into the device.
This problem probably also exists in the dellos6 and dellos10
modules, but I only fixed it in the dellos9 module.
The fix modifies two methods. The first one is `get_config`,
where the return clause didn't work correctly when the flow
doesn't enter the `if` block. In that case the `contents`
variable is not an array, and this should be handled.
The second fix is in the `get_sublevel_config` method. In this
case the indentation whitespace of the parents should be rebuilt,
because later functions and methods require it to correctly handle
the comparisons used to check whether changes should be pushed
to the device.
* Fixes #18663 for dellos10 module with the same patches as dellos9.
mkstemp() returns a tuple containing an OS-level handle to an open file
(as would be returned by os.open()) and the absolute pathname of that
file, in that order.
This patch makes sure that the fd opened by tempfile.mkstemp() is
re-used and closed properly.
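A minimal illustration of the pattern (the path and content here are placeholders):
```
# Reuse the fd returned by mkstemp() and make sure it gets closed, instead of
# opening the path a second time and leaking the original descriptor.
import os
import tempfile

fd, path = tempfile.mkstemp()
try:
    with os.fdopen(fd, "w") as f:   # wraps the existing fd; closing f closes it
        f.write("content")
finally:
    os.remove(path)
```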
* added alpha version of the 'sorcery' module
* fully conforming YAML
* use bundled check for executables
* Multiple fixes:
  - codex_list(): use commands instead of checksums to get sorcery version and verify codex equality
  - renamed: manage_depends() -> match_depends(); tocast -> cast_queue, todispel -> dispel_queue, needs_recast -> depends_ok; SORCERY_LOG -> SORCERY_LOG_DIR, SORCERY_STATE -> SORCERY_STATE_DIR
  - removed: SORCERY_VERSION_FILE and CODEX
  - added commentary to match_depends() and manage_spells()
  - fixed a bug about a dropped dependency line for a previously existing dependency
  - fixed a bug about not fixing depends for the 'latest' state
  - simplified several code constructions
* cleaned up some docs
* do not use separate message for Codex update, rely on the 'changed' status instead
* use built-in list conversion (_check_type_list()) for spells
* corrected spell name extraction from list in match_depends()
* avoid non-matching dependencies line duplication in depends file
* added more complex playbook example
* tiny stylistic fix for docs
* replaced ternary construction with a regular statement
* replaced yet another ternary construction with a regular statement
* enable Python 2.4 compatibility by splitting try-finally block
* enable Python 2.4 compatibility by replacing 'with' statement with try-except+try-finally blocks
* unify spells' assign
* replaced one regex with startswith()
* go Ansible 2.1
* added dummy RETURN template
* go Ansible 2.2
* better clarify permissions' requirements
* updated copyright years; fixed rebuild command bug; re-used run_command_environ_update dict for env var management
* handle Python 3.5
* Revert "handle Python 3.5"
This reverts commit 33a5a0eb64c1193318298e111f063cdd5f93b73a.
* handle Python 3.5 (2nd try)
* go Ansible 2.3
* clarity++
* For realz this time
* Fix tempfile.mkstemp (#2)
* back to square one, removing temp file from the mix
* Adding temp back
* Adding tuple back
* Adding another tuple back
* Trying to get around weird Jenkins behavior of blowing up when both .hpi and .jpi files are found
* Incorporating PR feedback
* Delete .hpi file instead of backing it up, some basic clean up
* Moving file deletion to the right location
* Blank lines. They always get me.
* Allow re-using an existing template when updating a stack by not passing 'template' or 'template_url'. This is a big one for me as our deploy process creates a new stack and then modifies the old one; to avoid changing the resources inside the old one, we have had to avoid using the Ansible module and use the AWS CLI instead in order to pass `--use-previous-template`.
* Split create and update logic into separate functions
* Remove dead `update` variable
PR move of https://github.com/ansible/ansible-modules-core/pull/3588
##### ISSUE TYPE
- Docs Pull Request
##### COMPONENT NAME
ec2_group.py
##### ANSIBLE VERSION
```
ansible 2.0.2.0
config file = /Users/tpai/src/cm-secure/ansible.cfg
configured module search path = Default w/o overrides
```
##### SUMMARY
Make it clear you can specify the created group in the rules list, allowing idempotent use for group<->group networking rules.
This is a really useful feature that isn't obvious enough in the docs.
Fixes an unnecessary VM restart.
VM userdata is currently not returned by the listVirtualMachine API, so the task will always be marked as changed in has_changed(), which will result in an unnecessary VM restart if force=true.
Reported by @Mayeu
The vmid is no longer a required parameter.
For the 'present' state: if not set, the next available one will be fetched from the API.
For the 'started', 'stopped', 'restarted' and 'absent' states: if not set, the module will try to fetch it from the API based on the hostname.
Inspired by the behavior of the proxmox_kvm module.
Creation of a maintenance window returns a 201 (PagerDuty Developer documentation is unfortunately incorrect). Deleting a maintenance window returns a 204.
This whole module is really lacking in security guidelines, but
downloading RPMs via plain `http://` without gpg is quite bad. Let's
use `https://` for the EPEL examples for a start.
This change makes os_stack module idempotent. Otherwise, re-use of the
module fails with:
Error updating stack: ERROR: The Parameter (...) was not provided.
Fixes #3165.
* Moved JSON-RPC client IPAClient class to ansible.module_utils.ipa, which is extended by all ipa modules
* ipa_user: incorporate displayname and userpassword attributes in module_user
* ipa_user: capitalized "I" in comment
* ipa_user: updated get_ssh_key_fingerprint to include possibility of the uploaded SSH key including user@hostname comment, which also appears in the queried fingerprint. This fixes a mismatch in the calculated and queried SSH key fingerprint in the user_diff calculation when the user already exists.
* ipa_hbacrule: ipaenabledflag must be 'TRUE' or 'FALSE', not 'NO'
* ipa_sudorule: ipaenabledflag must be 'TRUE' or 'FALSE', not 'NO'
* Add author to files missing it
* Fix kibana
* More native YAML
* More native YAML
* More native YAML
* More native YAML. Now only languages/ is missing
* Use native YAML syntax for packaging/languages as well
* Some more and quote fixes
* Fix wrong grouping
The policycoreutils python APIs for RHEL6 and RHEL7 are sufficiently
different, requiring some additional definitions and specific conversions
that work on both old and new implementations.
It also implements a fix for non-ASCII error messages (like when using a
French locale configuration).
This fixes #3551.
* Added new option to select the active a10 partition
* added version_added to the description of the new option
* added RETURN documentation
* fixed indents
* Removed empty cases, removed unneeded aliases
* removed artifacts from merging
* updated version_added to 2.3
* removed host, username and password option
* removed write_config and validate_certs documentation
* add a10_server_axapi3 module
* added return documentation
* modified a10_server_axapi3.py per feedback
* fixed line 60 s/action/operation/
* modified a10_server_axapi3.py per feedback
* modified a10_server_axapi3.py per feedback
* corrected YAML format error in documentation
* removed slp_server_ip and slp_server check in code since the arguments are labeled as required, per feedback
* modified: a10_server.py
modified: a10_service_group.py
modified: a10_virtual_server.py
Changed main() block, restricted import to only functions used.
* removed space for main() to be last line
* removed invalid lines
* Modified Documentations for a10_server.py, a10_service_group.py, a10_virtual_server.py
* Take out alias:[] and choices:[] in Documentation from a10_service_group.py and a10_virtual_server.py since they are now the default
* deleted a10_server.py, a10_service_group.py, a10_virtual_server.py
* deleted 'version_last_modified' line in Documentation across a10_server.py, a10_service_group.py and a10_virtual_server.py as they were added in error, change validate_certs version_added in a10_server.py
* added newline after main()
* added newline after main() for a10_server_axapi3.py
* Create `serverless` module for handling deploys on the Serverless Framework
* fix interpreter line
* Successfully exit when a stage is already absent
Both the `homebrew` and `homebrew_cask` modules iterate over
dictionaries using `iteritems`. This is a Python 2-specific method whose
behavior is similar to `items` in Python 3+. The `iteritems` function in
the six library was designed to make it possible to use the correct
method.
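A small illustration of the six helper in use (the dictionary contents are made up):
```
# six.iteritems() dispatches to dict.iteritems() on Python 2 and dict.items()
# on Python 3, so the same loop works on both interpreters.
from six import iteritems

packages = {"git": "installed", "wget": "absent"}
for name, state in iteritems(packages):
    print(name, state)
```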
Traceback:
Traceback (most recent call last):
  File "/tmp/ansible_d28_6uwl/ansible_module_make.py", line 153, in <module>
    main()
  File "/tmp/ansible_d28_6uwl/ansible_module_make.py", line 119, in main
    rc, out, err = run_command(base_command + ['--question'], module, check_rc=False)
  File "/tmp/ansible_d28_6uwl/ansible_module_make.py", line 79, in run_command
    return rc, sanitize_output(out), sanitize_output(err)
  File "/tmp/ansible_d28_6uwl/ansible_module_make.py", line 95, in sanitize_output
    return output.rstrip(b("\r\n"))
TypeError: rstrip arg must be None or str
There is also a six.iteritems issue, fixed using six.
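A hedged illustration of the failure mode and one possible fix (the variable names are made up):
```
# On Python 3, str.rstrip() rejects a bytes argument, which is what the
# traceback above shows; stripping with a native str works on both versions.
out = "all good\r\n"

try:
    out.rstrip(b"\r\n")            # TypeError on Python 3
except TypeError as exc:
    print(exc)

print(repr(out.rstrip("\r\n")))    # 'all good'
```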
* gluster_volume: Fixes an issue where creating a new volume fails due to peers not being present. The peers which are not 'localhost' should invoke wait_for_peer, but the find method returns -1 (not 0) on non-localhost peers.
By default, the sparse option is true in oVirt. So raw disk
creation in a block storage domain will fail with the error "Disk
configuration (RAW Sparse) is incompatible with the storage domain
type".
This commit adds the sparse option, which is sent as False when the
format is raw and True when the format is qcow2.
* New module proxmox_kvm
* fixed qxl value for vga param
> | Name | Type | Format | Description |
> |------|------|--------|-------------|
> | vga | enum | std \| cirrus \| vmware \| qxl \| serial0 \| serial1 \| serial2 \| serial3 \| qxl2 \| qxl3 \| qxl4 | Select the VGA type. If you want to use high resolution modes (>= 1280x1024x16) then you should use the options 'std' or 'vmware'. Default is 'std' for win8/win7/w2k8, and 'cirrus' for other OS types. The 'qxl' option enables the SPICE display sever. For win* OS you can select how many independent displays you want, Linux guests can add displays them self. You can also run without any graphic card, using a serial device as terminal. |
* Fix create_vm() fail on PV 4.3
* Set default for force as null in doc
* proxmox_kvm: revision fixes
* proxmox_kvm: more revision fixes
* Fix indentation
* revision fixes
* Ensure PEP-3110: Catching Exceptions
* 'except KeyError,' to 'except KeyError as' -- PEP-3110: Catching Exceptions
* Fix Yaml document syntax; Notes: => Notes -
* Refix documentation issue
* Fix Documentation
* Remove Notes: in description
* Add current state and its return value
* Update documentation
* fixed local variable 'results' referenced before assignment
* Fix fixed local variable 'results' referenced before assignment
* minor fixes in error messages
* merge upstream/devel into devel
* minor fixes in error messages
* Fix indentation and documentation
* Update validate_certs description
* translate() has a different API for text vs byte strings
* maketrans must be imported from a different location on py2 vs py3
Since this is such a small string outside of a loop, we don't have to
worry too much about speed, so it's better to have a single piece of code
that works on both py2 and py3.
Fixes #3249
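A hedged sketch of one way to sidestep both issues (the helper below is illustrative, not the actual change):
```
# Avoid the py2/py3 maketrans()/translate() differences entirely by replacing
# the few unsafe characters explicitly; fast enough for a short string.
def sanitize(name, unsafe="/ ", replacement="_"):
    for ch in unsafe:
        name = name.replace(ch, replacement)
    return name

print(sanitize("my volume/name"))   # my_volume_name
```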
The python-consul library already supports this, so it is just a simple
case of enablement.
This does not break the current logic in `add` of parsing as a check,
then parsing as a service if that fails… because service_name is
mandatory on a service registration and is invalid on a check
registration.
* ipa_group: Fix: 'list' object has no attribute 'get'
* ipa_hbacrule: Fix: 'list' object has no attribute 'get'
* ipa_host: Fix: 'list' object has no attribute 'get'
* ipa_hostgroup: Fix: 'list' object has no attribute 'get'
* ipa_role: Fix: 'list' object has no attribute 'get'
* ipa_sudocmd: Fix: 'list' object has no attribute 'get'
* ipa_sudocmdgroup: Fix: 'list' object has no attribute 'get'
* ipa_sudorule: Fix: 'list' object has no attribute 'get'
* ipa_user: Fix: 'list' object has no attribute 'get'
* ipa_sudorule: Fix: invalid 'cn': Only one value is allowed
* ipa_hostgroup: module returns changed if assigned hosts or hostgroups are not in lowercase
* extra detail on which step triggered 'change', detect and handle powershell mishandling nssm's unicode as utf8
* Simpler handling of nssm output encoding
Thanks @nitzmahone for a cleaner way to control PowerShell's behavior
While I still have no idea why or how the `map` call is being swapped out while still running in python 2.7, this change will fix the following error, as well as improve py3 compatibility.
* Add FreeIPA modules
* Update version_added from 2.2 to 2.3
* ipa_*: Use Python 2.4 syntax to concatenate strings
* ipa_*: Replace 'except Exception as e' with 'e = get_exception()'
* ipa_*: import simplejson if json can't be imported
* ipa_hbacrule: Fix: 'SyntaxError' on Python 2.4
* ipa_sudorule: Fix: 'SyntaxError' on Python 2.4
* ipa_*: Fix 'SyntaxError' on Python 2.4
* ipa_*: Import get_exception from ansible.module_utils.pycompat24
* Add FreeIPA modules
* Update version_added from 2.2 to 2.3
* ipa_*: Fix 'SyntaxError' on Python 2.4
* ipa_*: Replace Python requests by ansible.module_utils.url
* ipa_*: Add option validate_certs
* ipa_*: Remove requests from Ansible module documentation requirements
* ipa_sudorule: Remove unnecessary empty line
* ipa_sudorule: Remove markdown code from example
* ipa_group: Add choices of state option
* ipa_host: Rename options nshostlocation to ns_host_location, nshardwareplatform to ns_hardware_platform, nsosversion to ns_os_version, macaddress to mac_address and usercertificate to user_certificate and add aliases to be backward compatible
* Return actual queue attributes with result
Previously this was only returning the desired queue attributes, and not even returning the QueueARN for use elsewhere. Now it will return "results.attributes" that is retrieved with boto's get_queue_attributes().
* update return structure to reflect current SQS config; add documentation of return values
* Remove redundancy from if/else statement
Added support to explicitly manage task definitions by revision. If the
revision expectations of the Ansible task cannot be met, an error is
thrown.
If revision is not explicitly specified, the module is enhanced to be
idempotent with respect to task definitions. It will search for an
active revision of the task definition that matches the containers and
volumes specified. If none can be found, a new revision will be created.
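A hedged sketch of that idempotency check with boto3 (the helper name and exact comparison are illustrative):
```
# Look for an ACTIVE revision of the family whose containers and volumes match
# the requested definition before registering a new revision.
import boto3

def find_matching_revision(family, containers, volumes):
    ecs = boto3.client("ecs")
    arns = ecs.list_task_definitions(familyPrefix=family, status="ACTIVE")["taskDefinitionArns"]
    for arn in arns:
        td = ecs.describe_task_definition(taskDefinition=arn)["taskDefinition"]
        if td["containerDefinitions"] == containers and td.get("volumes", []) == volumes:
            return td
    return None
```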
Currently the <active> tag is passed within the disk element, which is
incorrect. As a result, the disk will remain inactive even though the
default option is true.