* remove unused variables
* fetch branch name instead of HEAD
Fixes #3782, which was introduced by f1bacc1d3f.
* disable git depth option for old git versions
Fixes #3782.
Git support for `--depth` did not fully work in old git versions (before 1.8.2);
fall back to full clones/fetches on those versions.
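A minimal sketch of that fallback, assuming a hypothetical `git_version()` helper and `clone_cmd()` wrapper (neither is the module's real code); the idea is simply to drop `--depth` when the installed git is older than 1.8.2:

```python
import subprocess


def git_version():
    """Return the installed git version as a tuple, e.g. (2, 39, 2)."""
    out = subprocess.check_output(['git', '--version']).decode()
    # Output looks like "git version 2.39.2"
    return tuple(int(part) for part in out.split()[2].split('.')[:3])


def clone_cmd(repo, dest, depth=None):
    """Build a git clone command line."""
    cmd = ['git', 'clone']
    # --depth was unreliable before git 1.8.2, so skip it there and
    # fall back to a full clone instead.
    if depth and git_version() >= (1, 8, 2):
        cmd.extend(['--depth', str(depth)])
    cmd.extend([repo, dest])
    return cmd
```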
Reading the entire tar file into memory can result in out-of-memory
conditions such as this traceback:
Traceback (most recent call last):
  File "/tmp/ansible_YELTSu/ansible_module_docker_image.py", line 486, in load_image
    self.client.load_image(image_data)
  File "/usr/local/lib/python2.7/dist-packages/docker/api/image.py", line 147, in load_image
    res = self._post(self._url("/images/load"), data=data)
  ...
  File "/usr/lib/python2.7/httplib.py", line 997, in endheaders
    self._send_output(message_body)
  File "/usr/lib/python2.7/httplib.py", line 848, in _send_output
    msg += message_body
MemoryError
Luckily docker-py's load_image(), which calls requests' post(), accepts a
file-like object instead of a string. Pass in the file object to avoid
reading the full file into memory. This allows larger tar files to load
successfully.
Signed-off-by: Stefan Hajnoczi <stefanha@gmail.com>
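A hedged sketch of the change, using the pre-2.0 docker-py API this module targets (the image path and client URL are illustrative):

```python
from docker import Client  # docker-py 1.x

client = Client(base_url='unix://var/run/docker.sock')

# Before: data = open('/tmp/image.tar', 'rb').read()  # whole tar in memory -> MemoryError
# After: hand load_image() the open file object so requests streams it.
with open('/tmp/image.tar', 'rb') as image_tar:
    client.load_image(image_tar)
```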
* This moves the lines in the code that parse the `env` and `env_file` options for docker to the end of the `__init__()` function.
This is needed because the `_check_capabilities` function needs both a working `self.client` and a proper `self.docker_py_versioninfo`.
`_check_capabilities` is used by `ensure_capabilities`, which is, in turn, used by `get_environment`.
This means that before this commit, the environment variables could not be loaded because neither `self.client` nor `self.docker_py_versioninfo` was set at that time.
This commit fixes that by putting the environment variable parsing after those two (see the sketch below).
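A simplified, hypothetical sketch of the ordering constraint (the real `__init__()` does far more; the stub methods below only illustrate the dependency):

```python
class DockerManagerSketch(object):
    def __init__(self, params):
        # 1. Set up the client and version info first ...
        self.client = self.connect()                 # stand-in for docker.Client(...)
        self.docker_py_versioninfo = (1, 10, 6)      # stand-in version tuple

        # 2. ... and only then parse env/env_file, because get_environment()
        #    may call _check_capabilities(), which needs both attributes above.
        self.environment = self.get_environment(params.get('env'))

    def connect(self):
        return object()

    def _check_capabilities(self):
        return self.client is not None and self.docker_py_versioninfo >= (1, 2, 3)

    def get_environment(self, env):
        if env and not self._check_capabilities():
            raise RuntimeError('docker-py too old for this env handling')
        return dict(env or {})
```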
Amazon paginates results at 100 items by default, with a maximum of 1000
per page. This change properly uses the marker returned by Amazon to
concatenate the successive pages of results.
This fixes #2440.
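The general marker-following pattern looks like the sketch below; `fetch_page` stands in for whichever boto list call the module uses and is purely illustrative:

```python
def list_all(fetch_page):
    """Collect every item by following Amazon's pagination marker."""
    results = []
    marker = None
    while True:
        page = fetch_page(marker=marker)   # one boto "list"/"describe" call
        results.extend(page['items'])
        marker = page.get('marker')        # absent/None once the last page is reached
        if not marker:
            break
    return results
```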
* Fix exception handling for Python. This does not need to be compatible with Python 2.4 because boto requires Python 2.6 and above.
* Fix compile-time (syntax) errors in the exception handling under Python 3.5.
This does not need to be compatible with Python 2.4 because boto requires Python 2.6 and above.
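The gist of the fix is switching from the Python 2-only `except X, e:` syntax to the form that both Python 2.6+ and Python 3 accept; `BotoServerError` and `fail_json` in the comment below are only representative of what such modules catch and report:

```python
# Old, Python 2-only spelling -- a syntax error under Python 3.5:
#     except BotoServerError, e:
#         module.fail_json(msg=str(e))

# Python 2.6+ / Python 3 compatible spelling:
try:
    raise ValueError('simulated boto failure')   # stand-in for a boto call
except ValueError as e:                          # "as" works on 2.6+ and 3.x
    print('error: %s' % e)
```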
Fix KeyError: 'prepared' while installing dependencies using deb=<file>.deb.
This error shows up when --diff is not passed and the .deb file has dependencies that are not yet installed.
Closes #3752.
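A hypothetical sketch of the defensive pattern (names mirror the description above, not the module verbatim): only touch the `prepared` diff text when it is actually present, since it is only populated when `--diff` is in effect:

```python
def combine_prepared(retvals, new_diff):
    """Append new 'prepared' diff text without assuming the key exists."""
    incoming = new_diff.get('prepared', '')
    if incoming:
        existing = retvals.get('diff', {}).get('prepared', '')
        retvals.setdefault('diff', {})['prepared'] = existing + incoming
    return retvals
```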
Currently, when writing a user's crontab, ansible calls
crontab <file> -u <user>
This is incorrect according to crontab(1) on both FreeBSD and Linux,
which suggests that the file argument should come last.
At least on FreeBSD, this leads to incorrect cron module behavior which
writes to root's crontab instead of the user's.
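A small sketch of the corrected invocation, with an illustrative helper name:

```python
def crontab_write_cmd(path, user=None):
    """Build the crontab command with the file operand last, per crontab(1)."""
    cmd = ['crontab']
    if user:
        cmd.extend(['-u', user])
    cmd.append(path)   # file argument goes last: crontab -u <user> <file>
    return cmd

# crontab_write_cmd('/tmp/crontab.ansible', 'deploy')
#   -> ['crontab', '-u', 'deploy', '/tmp/crontab.ansible']
```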
Fall back to unzip if zipfile fails and hope that unzip can deal with it
(sites have an easier time upgrading the unzip utility than all of
python).
https://bugs.python.org/issue3997
Fixes #3560.
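A minimal sketch of that fallback, with an illustrative `extract()` helper (not the module's real function):

```python
import subprocess
import zipfile


def extract(archive, dest):
    try:
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(dest)
    except zipfile.BadZipfile:
        # Archives the stdlib zipfile module cannot handle (see the bug
        # linked above): hope the unzip utility copes better.
        subprocess.check_call(['unzip', '-o', archive, '-d', dest])
```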
packaging/language/pip.py:
  virtualenv option:
    Mention that the virtualenv is created if it does not exist.
    (Explicit is better than implicit.)
    Mention other relevant options.
  notes:
    initialized -> created
    Wrap long lines.
This addresses the following error:
fatal: [site]: FAILED! => {"changed": false, "failed": true, "msg": "Failed to connect to S3: Region does not seem to be available for awsmodule boto.s3. If the region definitely exists, you may need to upgrade boto or extend with endpoints_path"}
Commit 0dd58e9 changed the logic so an exception is thrown (by
`connect_to_aws`) before the `s3 is None` check is performed. This
changes the `None` check to a catch so the old logic can compensate.
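A hedged sketch of that pattern, with stand-ins for the real helpers (`connect_to_aws`, the exception type, and the fallback below are all simplified stubs):

```python
class AWSRegionError(Exception):
    """Stand-in for the error connect_to_aws raises for unknown regions."""


def connect_to_aws(region):
    # Stand-in: the real helper returns a connection or raises.
    if region != 'us-east-1':
        raise AWSRegionError('Region does not seem to be available ...')
    return 's3-connection-for-%s' % region


def connect_fallback(region):
    return 'fallback-s3-connection-for-%s' % region


def get_s3_connection(region):
    try:
        return connect_to_aws(region)
    except AWSRegionError:
        # Previously this branch was only reached when connect_to_aws
        # returned None; catching the exception lets the old fallback
        # logic compensate as before.
        return connect_fallback(region)
```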