* [cloud] ec2_vpc_route_table: ignore routes without DestinationCidrBlock
Emit module warnings rather than silently skipping such routes
* Permit warnings for route tables containing VPC endpoints to be turned off
* Add tests to ensure a VPC endpoint associated with a route table does not result in a traceback
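A minimal sketch of the skip-and-warn behavior described above, assuming boto3-style route dicts and an AnsibleModule instance; the function name and the warn flag (modeling the ability to turn warnings off) are illustrative, not the module's exact code:

    def get_route_cidrs(module, routes, warn=True):
        # Collect destination CIDRs, skipping routes that have none.
        cidrs = []
        for route in routes:
            cidr = route.get('DestinationCidrBlock')
            if cidr is None:
                # VPC endpoint routes carry DestinationPrefixListId instead
                # of DestinationCidrBlock; warn rather than skip silently.
                if warn:
                    module.warn('Skipping route without a DestinationCidrBlock: %s' % route)
                continue
            cidrs.append(cidr)
        return cidrs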
* add the ability to specify a resource group for the referenced virtual network
* fixed sanity issues
* removed trailing whitespace
* added test
* fixed documentation
* try to fix unstable test
* Tidied up the description of virtual_network_resource_group
* add the ability to disable the public IP address (see the sketch after this list)
* fixed indent
* fixed whitespace
* fixed mistake
* try to create test with vm without public ip address
* try to fix test
* another attempt for test
* fixing test
* create vm with no ip with different name and delete it immediately
* a few additional fixes
* another attempt to pass test
* must be deleted
* simplified no ip test
* reorganised tests
* Wrapped choice in C()
* fixed an issue where a public IP address was created when it should not be
* adding test for public ip address
* fixed samples
* another fix to sample formatting
* fixed test
* fix test
* fixed test
* another attempt to fix test
* maybe it works now
* still wrong
* improved check per customer request
* removed stupid semicolon
* updated test to match main scenario
* changed ip configurations to list
* another attempt
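The two options added above might look like the following in the module's argument spec; this is a hypothetical excerpt, with the disabled public IP modeled as a 'Disabled' allocation choice, and the default shown is illustrative:

    module_args = dict(
        # Resource group of the referenced virtual network, when it differs
        # from the VM's own resource group.
        virtual_network_resource_group=dict(type='str'),
        # 'Disabled' skips creating a public IP address entirely.
        public_ip_allocation_method=dict(
            type='str',
            choices=['Dynamic', 'Static', 'Disabled'],
            default='Static',
        ),
    )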
* add additional test coverage for tower modules
* add test coverage for the tower_credential module
* add test coverage for the tower_user module
* fix a py3 bug in tower_credential when ssh_key_data is specified (see the sketch after this list)
* add test coverage for tower_host, tower_label, and tower_project
* add test coverage for tower_inventory and tower_job_template
* add more test coverage for tower modules
- tower_job_launch
- tower_job_list
- tower_job_wait
- tower_job_cancel
* add a check mode/version assertion for tower module integration tests
* add test coverage for the tower_role module
* add test coverage for the tower_group module
* add more integration test edge cases for various tower modules
* give the job_wait module more time before failing
* randomize passwords in the tower_user and tower_group tests
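A sketch of the py3 ssh_key_data fix referenced above, assuming the bug was that the key file was read as bytes and later failed to serialize under Python 3; the helper name is hypothetical:

    import os

    def load_ssh_key_data(module, filename):
        filename = os.path.expanduser(filename)
        if not os.path.isfile(filename):
            module.fail_json(msg='ssh_key_data must be a file: %s' % filename)
        with open(filename, 'rb') as f:
            key = f.read()
        # Decode bytes to text so the value behaves the same on py2 and py3.
        return key.decode('utf-8')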
* Check that connection error messages are not unsafe
* Connection error messages are unsafe: wrap them
For example, on error the docker connection plugin returns an exception
message containing a Go template. These messages weren't tagged as unsafe
and were consequently templated:
The conditional check 'result is failed' failed. The error was:
{
'msg': u'Docker version check ([\'/usr/bin/docker\', \'version\', \'--format\', "\'{{.Server.Version}}\'"]) failed: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.35/version: dial unix /var/run/docker.sock: connect: permission denied\n',
'failed': True
}:
template error while templating string: unexpected '.'.
String: Docker version check (['/usr/bin/docker', 'version', '--format', "'{{.Server.Version}}'"]) failed: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.35/version: dial unix /var/run/docker.sock: connect: permission denied
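A minimal sketch of the wrapping approach, assuming the error text comes from a connection plugin exception; wrap_var marks the string unsafe so Jinja2 never tries to template content such as {{.Server.Version}}:

    from ansible.module_utils._text import to_text
    from ansible.utils.unsafe_proxy import wrap_var

    def connection_error_result(exc):
        # Tag the raw exception text as unsafe before it reaches templating.
        return dict(failed=True, msg=wrap_var(to_text(exc)))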
* eos cannot check config without config session support
* add testcase for check_mode without config session
* fix eos eapi to read use_session env var
Fixes #36979
If `abort` is not issued at the top-level session prompt,
the existing session goes to a pending state.
The fix is to leave config mode by issuing the `end` command,
re-enter the same config session, and then execute `abort` so that
`abort` is issued at the top-level session prompt.
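An illustrative command sequence for that fix, assuming a network_cli-style connection with a send_command() method; names are hypothetical:

    def abort_config_session(connection, session_name):
        # Leave config mode first; otherwise the session is left pending.
        connection.send_command('end')
        # Re-enter the same session so the prompt is at the top level,
        # where 'abort' actually discards the pending session.
        connection.send_command('configure session %s' % session_name)
        connection.send_command('abort')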
Fixes #35993 - Changes to update_size in commit eb4cc31 made it so
the group dict passed into update_size was no longer modified. As a result,
the 'replace' call did not see an updated min_size as it previously
did and didn't pause to wait for any new instances to spin up; instead,
it moved straight into terminating old instances. The fix is to add batch_size
to min_size when calling wait_for_new_inst.
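A sketch of that fix, assuming helpers along the lines of the module's describe_autoscaling_groups and wait_for_new_inst; since update_size no longer mutates the group dict, min_size is bumped explicitly before waiting:

    def wait_for_replacement_batch(module, connection, group_name,
                                   batch_size, wait_timeout):
        as_group = describe_autoscaling_groups(connection, group_name)[0]
        # Add batch_size so the wait accounts for the instances just launched.
        min_size = as_group['MinSize'] + batch_size
        wait_for_new_inst(module, connection, group_name, wait_timeout,
                          min_size, 'viable_instances')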
Fixes #28087 - Make replace_all_instances and replace_instances behave
exactly the same by setting replace_instances to the current list of
instances when replace_all_instances is used. The root cause of the issue
was that, without lc_check, terminate_batch terminates all instances passed
to it, and after updating the ASG size we queried the ASG again for the list
of instances - so terminate_batch saw a list that included the new instances
just spun up.
When creating a new ASG with replace_all_instances: yes and lc_check: false,
the instances that were initially created were then immediately replaced.
This change makes the replace occur only if the ASG already existed.
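A hypothetical helper condensing both behaviors described above: "replace all" is normalized into an explicit instance list captured before any terminations, and a freshly created ASG skips replacement entirely:

    def choose_replace_targets(replace_all_instances, replace_instances,
                               current_instance_ids, asg_existed_before):
        if not asg_existed_before:
            # Nothing to replace on a brand-new ASG.
            return []
        if replace_all_instances:
            # Snapshot the current instances once, so terminate_batch never
            # sees replacements launched later in the run.
            return list(current_instance_ids)
        return replace_instances or []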
Add integration tests for #28087 and #35993.