ansible/test/integration/targets/k8s
Commit aaf29c785f by Will Thames: Provide Kubernetes resource validation to k8s module (#43352)
* Provide Kubernetes resource validation to k8s module

Use kubernetes-validate to validate Kubernetes resource
definitions against the published schema
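This validation is exposed through the k8s module's `validate` option. A minimal sketch of a task using it — the resource definition and option values here are illustrative, not taken from the change itself:

```yaml
# Sketch only: exercises the schema validation added by this change.
# The deployment definition below is an example, not a test fixture.
- name: Create a deployment, validating it against the published schema
  k8s:
    state: present
    definition:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: example
        namespace: default
      spec:
        replicas: 3
    validate:
      fail_on_error: yes
      strict: yes
```

With `fail_on_error: no`, validation failures are reported as warnings instead of failing the task.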

* Additional tests for kubernetes-validate

* Improve k8s error messages on exceptions

Parse the response body for the message rather than returning
a JSON blob

If we've validated and there are warnings, return those too - they
can be more helpful

```
"msg": "Failed to patch object: {\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},
       \"status\":\"Failure\",\"message\":\"[pos 334]: json: decNum: got first char 'h'\",\"code\":500}\n",
```
vs
```
"msg": "Failed to patch object: [pos 334]: json: decNum: got first char 'h'\nresource
        validation error at spec.replicas: 'hello' is not of type u'integer'",
```
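The improvement above amounts to pulling the `message` field out of the Kubernetes `Status` body and appending any validation warnings, instead of echoing the whole JSON blob. A hedged sketch of that idea — the function name and structure are illustrative, not the module's actual helpers:

```python
import json


def concise_error(body, warnings=None):
    """Extract the human-readable message from a Kubernetes Status body.

    Illustrative only: mirrors the commit's idea of parsing the response
    body for `message` rather than returning the raw JSON, and appending
    validation warnings when they exist.
    """
    try:
        data = json.loads(body)
        # A Status object carries the useful text in its "message" field.
        message = data.get("message", body) if isinstance(data, dict) else body
    except ValueError:
        # Not JSON at all; return the body unchanged.
        message = body
    if warnings:
        message += "\n" + "\n".join(warnings)
    return message


body = ('{"kind":"Status","apiVersion":"v1","metadata":{},'
        '"status":"Failure","message":"[pos 334]: json: decNum: '
        "got first char 'h'\",\"code\":500}")
print(concise_error(body, ["resource validation error at spec.replicas: "
                           "'hello' is not of type u'integer'"]))
```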

* Update versions used

In particular openshift/origin:3.9.0

* Add changelog for k8s validate change
2018-11-16 12:44:59 +00:00

Wait tests

The wait tests require at least one node, and don't work on the normal k8s openshift-origin container provided by `ansible-test --docker -v k8s`.

minikube, the Kubernetes bundled with Docker, or any other Kubernetes service will suffice.

If kubectl is already using the right config file and context, you can just run:

```
cd test/integration/targets/k8s
./runme.sh -vv
```

Otherwise, set one or both of `K8S_AUTH_KUBECONFIG` and `K8S_AUTH_CONTEXT` and use the same command.
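For example, to point the tests at your own cluster — the kubeconfig path and context name below are illustrative, not fixed values:

```shell
# Illustrative: runme.sh picks these variables up from the environment
# and passes them through to the test playbooks.
export K8S_AUTH_KUBECONFIG="$HOME/.kube/config"
export K8S_AUTH_CONTEXT="minikube"
cd test/integration/targets/k8s
./runme.sh -vv
```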