geerlingguy / tower-operator

DEPRECATED: This project was moved and renamed to: https://github.com/ansible/awx-operator
Right now, the operator seems to spin up Ansible Tower, but applying the license is still a manual process. Would applying the license be something we could see added to this operator?
See https://github.com/geerlingguy/mcrouter-operator for an example.
Basically, we need to build an official alpha1 image and publish it somewhere (e.g. Docker Hub), compile a default operator YAML manifest for deployment into Kubernetes, and finally show people how to deploy the operator in the README.
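As a sketch of what the README instructions could look like once the manifest is published (the raw-file URL matches the one used later in this issue list, and the pod label selector is an assumption):

```shell
# Deploy the operator from the published default manifest (assumed path).
kubectl apply -f https://raw.githubusercontent.com/geerlingguy/tower-operator/master/deploy/tower-operator.yaml

# Verify the operator pod comes up (label is an assumption; adjust to
# whatever the deployment in the manifest actually sets).
kubectl get pods -l name=tower-operator
```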
Might be a good time to also write a blog post about it (maybe... might want to wait until the operator is more fleshed-out).
Right now (after #5), I'm only testing the AWX CR in Travis CI. I need to split out a molecule config that does the same thing as the test-local scenario, but have it be like test-tower and use the Tower CR.
If you scale the deployments and the statefulset higher than 1 replica, they get automatically scaled back down to 1. Why is this? How can I make it so I can scale higher than one?
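For reference, this is the behavior being described; the operator's reconcile loop re-applies its templates, which reverts any manual change to the replica count (deployment name assumed from the default awx CR):

```shell
# Scale the web deployment up manually...
kubectl -n awx scale deployment awx-tower-web --replicas=3

# ...then watch it get reverted back to 1 replica on the operator's
# next reconcile pass:
kubectl -n awx get deployment awx-tower-web -w
```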
I attempted to stand up an instance today, and the operator failed to deploy due to being unable to pull the container image ansible/tower-operator:0.4.0.
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/tower-operator-67bb7f6b8b-5gb4j to node2
Warning Failed 16s kubelet, node2 Error: ImagePullBackOff
Normal BackOff 16s kubelet, node2 Back-off pulling image "ansible/tower-operator:0.4.0"
Warning Failed 2s (x2 over 17s) kubelet, node2 Failed to pull image "ansible/tower-operator:0.4.0": rpc error: code = Unknown desc = Error response from daemon: pull access denied for ansible/tower-operator, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Warning Failed 2s (x2 over 17s) kubelet, node2 Error: ErrImagePull
I tried updating the image to awx-operator, but the operator (after it deploys) does not create an AWX deployment or a PostgreSQL deployment.
Hi Jeff,
I've copied over ldap settings from our existing awx and added source control credentials in order to add projects.
However, neither LDAP nor Git access works (connection timeout). The Kubernetes host can ping the Git server.
As far as I understand, the default network policy should allow all traffic. Am I missing something?
Thanks
Ansible is switching the k8s_facts module to k8s_info. Should eventually look to update this prior to the deprecation.
[DEPRECATION WARNING]: The 'k8s_facts' module has been renamed to 'k8s_info'.
This feature will be removed in version 2.13. Deprecation warnings can be
disabled by setting deprecation_warnings=False in ansible.cfg.
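The change itself is a one-word rename in any task that gathers cluster facts. A before/after sketch (the task wording and registered variable are illustrative, not taken from this repo's roles):

```yaml
# Before (deprecated; removed in Ansible 2.13):
- name: Get the tower pods.
  k8s_facts:
    kind: Pod
    namespace: '{{ meta.namespace }}'
  register: tower_pods

# After (same parameters, same return shape):
- name: Get the tower pods.
  k8s_info:
    kind: Pod
    namespace: '{{ meta.namespace }}'
  register: tower_pods
```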
I just realized YAML linting has been disabled in the default molecule configs... let's remedy that and get Molecule 3.0 working with yamllint and ansible-lint (see related: ansible/molecule#2560)
The latest version is v0.14.0 (current version being used by this Operator is v0.12.0), see https://quay.io/repository/operator-framework/ansible-operator?tag=latest&tab=tags
If you attempt to configure LDAP authentication that is backed by a custom (non-trusted) CA, you'll see the following error:
2020-06-09 17:21:22,693 WARNING django_auth_ldap Caught LDAPError while authenticating test-user: SERVER_DOWN({'desc': "Can't contact LDAP server", 'info': 'error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed (self signed certificate in certificate chain)'},)
It would be useful to be able to attach custom CAs via the CR; that would resolve this issue.
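Purely as a sketch of what that could look like: a hypothetical CR parameter naming a Secret that holds the CA bundle, which the operator would mount into the web/task pods and point the LDAP settings at (none of these field names exist yet):

```yaml
apiVersion: tower.ansible.com/v1alpha1
kind: Tower
metadata:
  name: awx
  namespace: awx
spec:
  # Hypothetical: name of a Secret in this namespace containing the
  # custom CA certificate chain to trust for LDAP connections.
  tower_ldap_cacert_secret: my-ldap-ca
```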
3.6.2 has been released, so update the operator.
Right now I'm building out everything using open source AWX, just for convenience's sake. But I'm working on building the operator in a way where users could choose between AWX and Tower (if they want support and a license, and all that).
See:
Docs for setup:
Hi Jeff,
How do you configure Kubernetes to enable remote access to AWX via HTTPS?
All pods are running and I've enabled ingress, but I have no access to the website on either port 80 or 443.
Many thanks for your time
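For reference, a minimal Ingress of the kind being asked about might look like the sketch below. The service name and port are assumptions and should match whatever `kubectl get svc -n awx` shows for the web pod; the hostname and TLS secret are placeholders:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: awx
  namespace: awx
spec:
  tls:
    - hosts:
        - awx.example.com
      # Assumed: a TLS secret you've created for this host.
      secretName: awx-tls
  rules:
    - host: awx.example.com
      http:
        paths:
          - path: /
            backend:
              # Assumed service name/port for the awx web deployment.
              serviceName: awx-tower-service
              servicePort: 80
```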
Hi Jeff,
AWX is stuck at "AWX is Upgrading" when using your guide.
Not sure how to debug.
Thanks
Some modules require additional pip packages to be installed. From what I can tell thus far, managing venvs isn't really possible from within the operator, and I think that type of management is perfect for this operator.
I thought this might be a good issue for me to help contribute on, but I didn't want to submit a surprise PR without discussing how this should be implemented.
With that said, any thoughts or preferences on how to implement? Should users just create their own tower container from the tower base image? Should the operator leverage init containers to install python dependencies and venvs at boot?
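If the init-container route were chosen, a sketch of what the operator could render into the task deployment might look like this. All names, paths, and the package list are illustrative only:

```yaml
initContainers:
  - name: install-custom-venv
    image: ansible/awx_task:9.3.0
    command:
      - /bin/sh
      - -c
      - |
        # Hypothetical: build an extra venv on a shared volume at boot,
        # so the task container picks it up without a custom image.
        python3 -m venv /opt/custom-venv/my-venv
        /opt/custom-venv/my-venv/bin/pip install pywinrm boto3
    volumeMounts:
      - name: custom-venv
        mountPath: /opt/custom-venv
```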
Hi Jeff,
I am completely new to Kubernetes and managed to configure an Ubuntu server with MicroK8s.
awx-postgres-0 is in a constant Pending state; the output is shown here:
Name: awx-postgres-0
Namespace: awx
Priority: 0
Node: <none>
Labels: app=tower-postgres
controller-revision-hash=awx-postgres-5599b677
statefulset.kubernetes.io/pod-name=awx-postgres-0
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/awx-postgres
Containers:
postgres:
Image: postgres:10
Port: 3306/TCP
Host Port: 0/TCP
Environment:
POSTGRES_DB: awx
POSTGRES_USER: awx
POSTGRES_PASSWORD: <set to the key 'password' in secret 'awx-postgres-pass'> Optional: false
Mounts:
/var/lib/postgresql/data from postgres (rw,path="data")
/var/run/secrets/kubernetes.io/serviceaccount from default-token-fxk6j (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
postgres:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: postgres-awx-postgres-0
ReadOnly: false
default-token-fxk6j:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-fxk6j
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 20s (x15 over 18m) default-scheduler error while running "VolumeBinding" filter plugin for pod "awx-postgres-0": pod has unbound immediate PersistentVolumeClaims
The guide does not really explain how to set up Postgres.
Thanks
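The "pod has unbound immediate PersistentVolumeClaims" event generally means no PersistentVolume can satisfy the claim, i.e. the cluster has no default StorageClass/provisioner. On MicroK8s (as shown later in this issue list) the hostpath provisioner has to be enabled explicitly:

```shell
# Enable the MicroK8s hostpath storage provisioner so the PVC created
# by the postgres statefulset can bind.
microk8s.enable storage

# Confirm the claim moves from Pending to Bound.
microk8s.kubectl get pvc -n awx
```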
In some use cases, I'd like to only run Tower/AWX inside the cluster, but rely on an external set of Postgres databases that I'm already operating. As part of the operator, I'd like to be able to point to this database instead of spinning one up in the cluster.
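As a sketch only, this could be a set of CR parameters the operator checks before creating its own postgres statefulset. Only tower_postgres_pass exists today (it appears in the example CR later in this issue list); the other field names are hypothetical:

```yaml
spec:
  # Hypothetical: when a host is set, skip the in-cluster postgres
  # statefulset and point AWX at this external database instead.
  tower_postgres_host: db.example.com
  tower_postgres_port: 5432
  tower_postgres_database: awx
  tower_postgres_user: awx
  tower_postgres_pass: awxpass
```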
From a comment on my blog:
- For OpenShift, the postgresql image is unable to write to its data directory, so to make it work we need to set the anyuid SCC for the namespace.
- Maybe for the same reason, the status stays "updating" forever. (I had to create the route manually.)
- The app tower-task is unable to keep 1 replica running.
- I had to change the YAML to place the tower-operator service account in the project I created; it's all set up to run in the default project.
I have only been testing the operator in Kubernetes clusters (Minikube and KinD); I haven't been testing in CRC or other OpenShift-ish clusters. I would like to make sure the operator is easy to deploy into OpenShift/OKD as well, and I know there can be restrictions around things like PVs (which are required for this operator, because otherwise Tower's data would get blown away any time you updated or any time that container stopped).
Because my previous installation ran out of space when trying to restore the AWX database, I've reinstalled Ubuntu Server (as an OpenSSH server) with MicroK8s installed.
Now I am unable to get AWX up and running, and I am not sure why.
These are my steps:
microk8s.enable dns
microk8s.enable storage
microk8s.enable dashboard
Add --allow-privileged into /var/snap/microk8s/current/args/kube-apiserver followed by microk8s.stop and microk8s start.
microk8s.kubectl apply -f https://raw.githubusercontent.com/geerlingguy/tower-operator/master/deploy/tower-operator.yaml
microk8s.kubectl create namespace awx
apiVersion: tower.ansible.com/v1alpha1
kind: Tower
metadata:
  name: awx
  namespace: awx
spec:
  tower_hostname: hostname
  tower_secret_key: awxsecret
  tower_admin_user: admin
  tower_admin_email: mail
  tower_admin_password: password
  tower_task_image: ansible/awx_task:9.3.0
  tower_web_image: ansible/awx_web:9.3.0
  tower_postgres_pass: awxpass
  tower_postgres_storage_request: 12Gi
microk8s.kubectl apply -f awx.yml
microk8s.kubectl describe pods -n awx
Name: awx-memcached-587b55d5fd-6mkb4
Namespace: awx
Priority: 0
Node: microk8s/192.168.1.119
Start Time: Thu, 19 Mar 2020 08:26:31 +0000
Labels: app=tower-memcached
pod-template-hash=587b55d5fd
Annotations: <none>
Status: Running
IP: 10.1.9.23
IPs:
IP: 10.1.9.23
Controlled By: ReplicaSet/awx-memcached-587b55d5fd
Containers:
memcached:
Container ID: containerd://38a91f0d34c920e0c3112d8b5064ba57dce5e3203facc634389f2e0da550ff0e
Image: memcached:alpine
Image ID: docker.io/library/memcached@sha256:891a989217a70d9b703bc93ea63e87c9853c0590458f2fec44c7cb95fa224858
Port: 11211/TCP
Host Port: 0/TCP
State: Running
Started: Thu, 19 Mar 2020 08:26:33 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-sqw2l (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-sqw2l:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-sqw2l
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned awx/awx-memcached-587b55d5fd-6mkb4 to microk8s
Normal Pulled 16m kubelet, microk8s Container image "memcached:alpine" already present on machine
Normal Created 16m kubelet, microk8s Created container memcached
Normal Started 16m kubelet, microk8s Started container memcached
Name: awx-postgres-0
Namespace: awx
Priority: 0
Node: microk8s/192.168.1.119
Start Time: Thu, 19 Mar 2020 08:26:34 +0000
Labels: app=tower-postgres
controller-revision-hash=awx-postgres-5599b677
statefulset.kubernetes.io/pod-name=awx-postgres-0
Annotations: <none>
Status: Running
IP: 10.1.9.24
IPs:
IP: 10.1.9.24
Controlled By: StatefulSet/awx-postgres
Containers:
postgres:
Container ID: containerd://f8c85ef36086c3db42f27911c80644c0d20e7dc0179f38af5326d86641620f1f
Image: postgres:10
Image ID: docker.io/library/postgres@sha256:73d3ac7b17b8cd2122d27026ec3552080e8aaea95fef0b6e671fa795ac547f94
Port: 3306/TCP
Host Port: 0/TCP
State: Running
Started: Thu, 19 Mar 2020 08:26:36 +0000
Ready: True
Restart Count: 0
Environment:
POSTGRES_DB: awx
POSTGRES_USER: awx
POSTGRES_PASSWORD: <set to the key 'password' in secret 'awx-postgres-pass'> Optional: false
Mounts:
/var/lib/postgresql/data from postgres (rw,path="data")
/var/run/secrets/kubernetes.io/serviceaccount from default-token-sqw2l (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
postgres:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: postgres-awx-postgres-0
ReadOnly: false
default-token-sqw2l:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-sqw2l
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler error while running "VolumeBinding" filter plugin for pod "awx-postgres-0": pod has unbound immediate PersistentVolumeClaims
Warning FailedScheduling <unknown> default-scheduler error while running "VolumeBinding" filter plugin for pod "awx-postgres-0": pod has unbound immediate PersistentVolumeClaims
Normal Scheduled <unknown> default-scheduler Successfully assigned awx/awx-postgres-0 to microk8s
Normal Pulled 16m kubelet, microk8s Container image "postgres:10" already present on machine
Normal Created 16m kubelet, microk8s Created container postgres
Normal Started 16m kubelet, microk8s Started container postgres
Name: awx-rabbitmq-7f8f6ff647-5gtmg
Namespace: awx
Priority: 0
Node: microk8s/192.168.1.119
Start Time: Thu, 19 Mar 2020 08:26:34 +0000
Labels: app=tower-rabbitmq
pod-template-hash=7f8f6ff647
Annotations: <none>
Status: Running
IP: 10.1.9.25
IPs:
IP: 10.1.9.25
Controlled By: ReplicaSet/awx-rabbitmq-7f8f6ff647
Containers:
rabbitmq:
Container ID: containerd://3eadbeb7f263212109e9a55f54672f0bb7ea3ae5afa81a2123bf51463581e991
Image: rabbitmq:3
Image ID: docker.io/library/rabbitmq@sha256:b20295815348317f0d8cc89051154df6c39fdc92b0f83f57cc591e191c484e8b
Ports: 15672/TCP, 5672/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Thu, 19 Mar 2020 08:26:36 +0000
Ready: True
Restart Count: 0
Environment:
RABBITMQ_DEFAULT_VHOST: awx
RABBITMQ_NODE_PORT: 5672
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-sqw2l (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-sqw2l:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-sqw2l
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned awx/awx-rabbitmq-7f8f6ff647-5gtmg to microk8s
Normal Pulled 16m kubelet, microk8s Container image "rabbitmq:3" already present on machine
Normal Created 16m kubelet, microk8s Created container rabbitmq
Normal Started 16m kubelet, microk8s Started container rabbitmq
Name: awx-tower-task-6f47bb89c5-6b299
Namespace: awx
Priority: 0
Node: microk8s/192.168.1.119
Start Time: Thu, 19 Mar 2020 08:26:38 +0000
Labels: app=tower-task
pod-template-hash=6f47bb89c5
Annotations: <none>
Status: Running
IP: 10.1.9.27
IPs:
IP: 10.1.9.27
Controlled By: ReplicaSet/awx-tower-task-6f47bb89c5
Containers:
tower-task:
Container ID: containerd://04ffcba2fa29b053984da4ce61dd3f3c90ebb83ed82453472921811a0a09a34c
Image: ansible/awx_task:9.3.0
Image ID: docker.io/ansible/awx_task@sha256:be02eed7970804856f32fbb99385ee13e7da31edca6602d7d0514c2b44b2044f
Port: <none>
Host Port: <none>
Command:
/usr/bin/launch_awx_task.sh
State: Running
Started: Thu, 19 Mar 2020 08:26:43 +0000
Ready: True
Restart Count: 0
Requests:
cpu: 500m
memory: 1Gi
Environment Variables from:
awx-tower-configmap ConfigMap Optional: false
awx-tower-secret Secret Optional: false
Environment: <none>
Mounts:
/etc/tower/SECRET_KEY from secret-key (ro,path="SECRET_KEY")
/etc/tower/conf.d/environment.sh from environment (ro,path="environment.sh")
/etc/tower/settings.py from settings (ro,path="settings.py")
/var/run/secrets/kubernetes.io/serviceaccount from default-token-sqw2l (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
secret-key:
Type: Secret (a volume populated by a Secret)
SecretName: awx-tower-secret
Optional: false
environment:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: awx-tower-configmap
Optional: false
settings:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: awx-tower-configmap
Optional: false
default-token-sqw2l:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-sqw2l
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned awx/awx-tower-task-6f47bb89c5-6b299 to microk8s
Normal Pulling 16m kubelet, microk8s Pulling image "ansible/awx_task:9.3.0"
Normal Pulled 16m kubelet, microk8s Successfully pulled image "ansible/awx_task:9.3.0"
Normal Created 16m kubelet, microk8s Created container tower-task
Normal Started 16m kubelet, microk8s Started container tower-task
Name: awx-tower-web-c98cd6555-jtzfg
Namespace: awx
Priority: 0
Node: microk8s/192.168.1.119
Start Time: Thu, 19 Mar 2020 08:26:37 +0000
Labels: app=tower
pod-template-hash=c98cd6555
Annotations: <none>
Status: Running
IP: 10.1.9.26
IPs:
IP: 10.1.9.26
Controlled By: ReplicaSet/awx-tower-web-c98cd6555
Containers:
tower:
Container ID: containerd://c32c89a8657d58ba5529ef0d96679094f0bebbbd61bd4dbd354afdd2e46c160a
Image: ansible/awx_web:9.3.0
Image ID: docker.io/ansible/awx_web@sha256:e3716cce276a9774650a4fbbb5d80c98fa734db633e8ae4ea661d178c23b89df
Port: 8052/TCP
Host Port: 0/TCP
State: Running
Started: Thu, 19 Mar 2020 08:26:39 +0000
Ready: True
Restart Count: 0
Requests:
cpu: 1
memory: 2Gi
Environment: <none>
Mounts:
/etc/nginx/nginx.conf from nginx-conf (ro,path="nginx.conf")
/etc/tower/SECRET_KEY from secret-key (ro,path="SECRET_KEY")
/etc/tower/conf.d/environment.sh from environment (ro,path="environment.sh")
/etc/tower/settings.py from settings (ro,path="settings.py")
/var/run/secrets/kubernetes.io/serviceaccount from default-token-sqw2l (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
secret-key:
Type: Secret (a volume populated by a Secret)
SecretName: awx-tower-secret
Optional: false
environment:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: awx-tower-configmap
Optional: false
settings:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: awx-tower-configmap
Optional: false
nginx-conf:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: awx-tower-configmap
Optional: false
default-token-sqw2l:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-sqw2l
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned awx/awx-tower-web-c98cd6555-jtzfg to microk8s
Normal Pulled 16m kubelet, microk8s Container image "ansible/awx_web:9.3.0" already present on machine
Normal Created 16m kubelet, microk8s Created container tower
Normal Started 16m kubelet, microk8s Started container tower
The Postgres log shows the following, and Google searches don't really show any useful information about this error:
default-scheduler error while running "VolumeBinding" filter plugin for pod "awx-postgres-0": pod has unbound immediate PersistentVolumeClaims
The postgres files are present in /var/snap/microk8s/common/default-storage/awx-postgres-awx-postgres-0-pvc-e1ca6588-3780-41bb-820e-2321f6e60e1c/data
awx-tower-web-xxx log output shows:
could not connect to server: Connection timed out Is the server running on host "awx-postgres.awx.svc.cluster.local" (91.201.60.73) and accepting TCP/IP connections on port 5432?
Ansible output:
ansible-playbook 2.9.6
config file = /etc/ansible/ansible.cfg
configured module search path = ['/usr/share/ansible/openshift']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
executable location = /usr/local/bin/ansible-playbook
python version = 3.6.8 (default, Oct 11 2019, 15:04:54) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]
Using /etc/ansible/ansible.cfg as config file
PLAYBOOK: main.yml *************************************************************
1 plays in /opt/ansible/main.yml
PLAY [localhost] ***************************************************************
META: ran handlers
TASK [tower : Ensure configured Tower resources exist in the cluster.] *********
task path: /opt/ansible/roles/tower/tasks/main.yml:2
changed: [localhost] => (item=tower_memcached.yaml.j2) => {"ansible_loop_var": "item", "changed": true, "item": "tower_memcached.yaml.j2", "result": {"results": [{"changed": true, "method": "create", "result": {"apiVersion": "apps/v1", "kind": "Deployment", "metadata": {"creationTimestamp": "2020-03-19T08:26:31Z", "generation": 1, "labels": {"app": "tower-memcached"}, "name": "awx-memcached", "namespace": "awx", "ownerReferences": [{"apiVersion": "tower.ansible.com/v1alpha1", "kind": "Tower", "name": "awx", "uid": "fcc3dc9d-0c52-4523-a428-fa5cfa203864"}], "resourceVersion": "5630", "selfLink": "/apis/apps/v1/namespaces/awx/deployments/awx-memcached", "uid": "4117354d-2673-454c-959b-b06f917726dc"}, "spec": {"progressDeadlineSeconds": 600, "replicas": 1, "revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "tower-memcached"}}, "strategy": {"rollingUpdate": {"maxSurge": "25%", "maxUnavailable": "25%"}, "type": "RollingUpdate"}, "template": {"metadata": {"creationTimestamp": null, "labels": {"app": "tower-memcached"}}, "spec": {"containers": [{"image": "memcached:alpine", "imagePullPolicy": "IfNotPresent", "name": "memcached", "ports": [{"containerPort": 11211, "protocol": "TCP"}], "resources": {}, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File"}], "dnsPolicy": "ClusterFirst", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "terminationGracePeriodSeconds": 30}}}, "status": {}}, "warnings": []}, {"changed": true, "method": "create", "result": {"apiVersion": "v1", "kind": "Service", "metadata": {"creationTimestamp": "2020-03-19T08:26:31Z", "labels": {"app": "tower-memcached"}, "name": "awx-memcached", "namespace": "awx", "ownerReferences": [{"apiVersion": "tower.ansible.com/v1alpha1", "kind": "Tower", "name": "awx", "uid": "fcc3dc9d-0c52-4523-a428-fa5cfa203864"}], "resourceVersion": "5643", "selfLink": "/api/v1/namespaces/awx/services/awx-memcached", "uid": 
"9898bcda-a845-4013-bb17-6535d8aac57b"}, "spec": {"clusterIP": "None", "ports": [{"port": 11211, "protocol": "TCP", "targetPort": 11211}], "selector": {"app": "tower-memcached"}, "sessionAffinity": "None", "type": "ClusterIP"}, "status": {"loadBalancer": {}}}, "warnings": []}]}}
changed: [localhost] => (item=tower_postgres.yaml.j2) => {"ansible_loop_var": "item", "changed": true, "item": "tower_postgres.yaml.j2", "result": {"results": [{"changed": true, "method": "create", "result": {"apiVersion": "v1", "data": {"password": "YXd4cGFzczk="}, "kind": "Secret", "metadata": {"creationTimestamp": "2020-03-19T08:26:33Z", "name": "awx-postgres-pass", "namespace": "awx", "ownerReferences": [{"apiVersion": "tower.ansible.com/v1alpha1", "kind": "Tower", "name": "awx", "uid": "fcc3dc9d-0c52-4523-a428-fa5cfa203864"}], "resourceVersion": "5649", "selfLink": "/api/v1/namespaces/awx/secrets/awx-postgres-pass", "uid": "67042d2a-41a1-486a-97d7-525164ef8dc1"}, "type": "Opaque"}, "warnings": []}, {"changed": true, "method": "create", "result": {"apiVersion": "apps/v1", "kind": "StatefulSet", "metadata": {"creationTimestamp": "2020-03-19T08:26:33Z", "generation": 1, "labels": {"app": "tower-postgres"}, "name": "awx-postgres", "namespace": "awx", "ownerReferences": [{"apiVersion": "tower.ansible.com/v1alpha1", "kind": "Tower", "name": "awx", "uid": "fcc3dc9d-0c52-4523-a428-fa5cfa203864"}], "resourceVersion": "5650", "selfLink": "/apis/apps/v1/namespaces/awx/statefulsets/awx-postgres", "uid": "28befb75-ee99-4f78-9894-38a90f14d29f"}, "spec": {"podManagementPolicy": "OrderedReady", "replicas": 1, "revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "tower-postgres"}}, "serviceName": "awx", "template": {"metadata": {"creationTimestamp": null, "labels": {"app": "tower-postgres"}}, "spec": {"containers": [{"env": [{"name": "POSTGRES_DB", "value": "awx"}, {"name": "POSTGRES_USER", "value": "awx"}, {"name": "POSTGRES_PASSWORD", "valueFrom": {"secretKeyRef": {"key": "password", "name": "awx-postgres-pass"}}}], "image": "postgres:10", "imagePullPolicy": "IfNotPresent", "name": "postgres", "ports": [{"containerPort": 3306, "name": "postgres", "protocol": "TCP"}], "resources": {}, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": 
"File", "volumeMounts": [{"mountPath": "/var/lib/postgresql/data", "name": "postgres", "subPath": "data"}]}], "dnsPolicy": "ClusterFirst", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "terminationGracePeriodSeconds": 30}}, "updateStrategy": {"type": "RollingUpdate"}, "volumeClaimTemplates": [{"apiVersion": "v1", "kind": "PersistentVolumeClaim", "metadata": {"creationTimestamp": null, "name": "postgres"}, "spec": {"accessModes": ["ReadWriteOnce"], "resources": {"requests": {"storage": "12Gi"}}, "volumeMode": "Filesystem"}, "status": {"phase": "Pending"}}]}, "status": {"replicas": 0}}, "warnings": []}, {"changed": true, "method": "create", "result": {"apiVersion": "v1", "kind": "Service", "metadata": {"creationTimestamp": "2020-03-19T08:26:33Z", "labels": {"app": "tower-postgres"}, "name": "awx-postgres", "namespace": "awx", "ownerReferences": [{"apiVersion": "tower.ansible.com/v1alpha1", "kind": "Tower", "name": "awx", "uid": "fcc3dc9d-0c52-4523-a428-fa5cfa203864"}], "resourceVersion": "5658", "selfLink": "/api/v1/namespaces/awx/services/awx-postgres", "uid": "a0c286ca-6585-4036-9647-91b3f3264060"}, "spec": {"clusterIP": "None", "ports": [{"port": 5432, "protocol": "TCP", "targetPort": 5432}], "selector": {"app": "tower-postgres"}, "sessionAffinity": "None", "type": "ClusterIP"}, "status": {"loadBalancer": {}}}, "warnings": []}]}}
changed: [localhost] => (item=tower_rabbitmq.yaml.j2) => {"ansible_loop_var": "item", "changed": true, "item": "tower_rabbitmq.yaml.j2", "result": {"results": [{"changed": true, "method": "create", "result": {"apiVersion": "apps/v1", "kind": "Deployment", "metadata": {"creationTimestamp": "2020-03-19T08:26:34Z", "generation": 1, "labels": {"app": "tower-rabbitmq"}, "name": "awx-rabbitmq", "namespace": "awx", "ownerReferences": [{"apiVersion": "tower.ansible.com/v1alpha1", "kind": "Tower", "name": "awx", "uid": "fcc3dc9d-0c52-4523-a428-fa5cfa203864"}], "resourceVersion": "5684", "selfLink": "/apis/apps/v1/namespaces/awx/deployments/awx-rabbitmq", "uid": "546e6ade-3f39-43e6-9a5d-c8b2c1162e68"}, "spec": {"progressDeadlineSeconds": 600, "replicas": 1, "revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "tower-rabbitmq"}}, "strategy": {"rollingUpdate": {"maxSurge": "25%", "maxUnavailable": "25%"}, "type": "RollingUpdate"}, "template": {"metadata": {"creationTimestamp": null, "labels": {"app": "tower-rabbitmq"}}, "spec": {"containers": [{"env": [{"name": "RABBITMQ_DEFAULT_VHOST", "value": "awx"}, {"name": "RABBITMQ_NODE_PORT", "value": "5672"}], "image": "rabbitmq:3", "imagePullPolicy": "IfNotPresent", "name": "rabbitmq", "ports": [{"containerPort": 15672, "protocol": "TCP"}, {"containerPort": 5672, "protocol": "TCP"}], "resources": {}, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File"}], "dnsPolicy": "ClusterFirst", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "terminationGracePeriodSeconds": 30}}}, "status": {}}, "warnings": []}, {"changed": true, "method": "create", "result": {"apiVersion": "v1", "kind": "Service", "metadata": {"creationTimestamp": "2020-03-19T08:26:34Z", "labels": {"app": "tower-rabbitmq"}, "name": "awx-rabbitmq", "namespace": "awx", "ownerReferences": [{"apiVersion": "tower.ansible.com/v1alpha1", "kind": "Tower", "name": "awx", "uid": 
"fcc3dc9d-0c52-4523-a428-fa5cfa203864"}], "resourceVersion": "5689", "selfLink": "/api/v1/namespaces/awx/services/awx-rabbitmq", "uid": "ffd04c66-5789-4c92-a205-97b6a57656b2"}, "spec": {"clusterIP": "None", "ports": [{"port": 5672, "protocol": "TCP", "targetPort": 5672}], "selector": {"app": "tower-rabbitmq"}, "sessionAffinity": "None", "type": "ClusterIP"}, "status": {"loadBalancer": {}}}, "warnings": []}]}}
changed: [localhost] => (item=tower_config.yaml.j2) => {"ansible_loop_var": "item", "changed": true, "item": "tower_config.yaml.j2", "method": "create", "result": {"apiVersion": "v1", "data": {"environment": 
"DATABASE_USER=awx\nDATABASE_NAME=awx\nDATABASE_HOST='awx-postgres.awx.svc.cluster.local'\nDATABASE_PORT='5432'\nDATABASE_PASSWORD=awxpass9\nMEMCACHED_HOST='awx-memcached.awx.svc.cluster.local'\nMEMCACHED_PORT='11211'\nRABBITMQ_HOST='awx-rabbitmq.awx.svc.cluster.local'\nRABBITMQ_PORT='5672'\nAWX_SKIP_MIGRATIONS=true\n", "nginx_conf": "worker_processes 1;\npid /tmp/nginx.pid;\n\nevents {\n worker_connections 1024;\n}\n\nhttp {\n include /etc/nginx/mime.types;\n default_type application/octet-stream;\n server_tokens off;\n\n log_format main '$remote_addr - $remote_user [$time_local] \"$request\" '\n '$status $body_bytes_sent \"$http_referer\" '\n '\"$http_user_agent\" \"$http_x_forwarded_for\"';\n\n access_log /dev/stdout main;\n\n map $http_upgrade $connection_upgrade {\n default upgrade;\n '' close;\n }\n\n sendfile on;\n #tcp_nopush on;\n #gzip on;\n\n upstream uwsgi {\n server 127.0.0.1:8050;\n }\n\n upstream daphne {\n server 127.0.0.1:8051;\n }\n\n server {\n listen 8052 default_server;\n\n # If you have a domain name, this is where to add it\n server_name _;\n keepalive_timeout 65;\n\n # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)\n add_header Strict-Transport-Security max-age=15768000;\n add_header Content-Security-Policy \"default-src 'self'; connect-src 'self' ws: wss:; style-src 'self' 'unsafe-inline'; script-src 'self' 'unsafe-inline' *.pendo.io; img-src 'self' *.pendo.io data:; report-uri /csp-violation/\";\n add_header X-Content-Security-Policy \"default-src 'self'; connect-src 'self' ws: wss:; style-src 'self' 'unsafe-inline'; script-src 'self' 'unsafe-inline' *.pendo.io; img-src 'self' *.pendo.io data:; report-uri /csp-violation/\";\n\n # Protect against click-jacking https://www.owasp.org/index.php/Testing_for_Clickjacking_(OTG-CLIENT-009)\n add_header X-Frame-Options \"DENY\";\n\n location /nginx_status {\n stub_status on;\n access_log off;\n allow 127.0.0.1;\n deny all;\n }\n\n location /static/ {\n alias 
/var/lib/awx/public/static/;\n }\n\n location /favicon.ico {\n alias /var/lib/awx/public/static/favicon.ico;\n }\n\n location /websocket {\n # Pass request to the upstream alias\n proxy_pass http://daphne;\n # Require http version 1.1 to allow for upgrade requests\n proxy_http_version 1.1;\n # We want proxy_buffering off for proxying to websockets.\n proxy_buffering off;\n # http://en.wikipedia.org/wiki/X-Forwarded-For\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n # enable this if you use HTTPS:\n proxy_set_header X-Forwarded-Proto https;\n # pass the Host: header from the client for the sake of redirects\n proxy_set_header Host $http_host;\n # We've set the Host header, so we don't need Nginx to muddle\n # about with redirects\n proxy_redirect off;\n # Depending on the request value, set the Upgrade and\n # connection headers\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection $connection_upgrade;\n }\n\n location / {\n # Add trailing / if missing\n rewrite ^(.*)$http_host(.*[^/])$ $1$http_host$2/ permanent;\n uwsgi_read_timeout 120s;\n uwsgi_pass uwsgi;\n include /etc/nginx/uwsgi_params; proxy_set_header X-Forwarded-Port 443;\n }\n }\n}\n", "settings": "import os\nimport socket\n\ndef get_secret():\n if os.path.exists(\"/etc/tower/SECRET_KEY\"):\n return open('/etc/tower/SECRET_KEY', 'rb').read().strip()\n\nADMINS = ()\nSTATIC_ROOT = '/var/lib/awx/public/static'\nPROJECTS_ROOT = '/var/lib/awx/projects'\nJOBOUTPUT_ROOT = '/var/lib/awx/job_status'\n\nSECRET_KEY = get_secret()\n\nALLOWED_HOSTS = ['*']\n\nINTERNAL_API_URL = 'http://127.0.0.1:8052'\n\n# Container environments don't like chroots\nAWX_PROOT_ENABLED = False\n\n# Automatically deprovision pods that go offline\nAWX_AUTO_DEPROVISION_INSTANCES = True\n\nCLUSTER_HOST_ID = socket.gethostname()\nSYSTEM_UUID = '00000000-0000-0000-0000-000000000000'\n\nCSRF_COOKIE_SECURE = False\nSESSION_COOKIE_SECURE = False\n\nSERVER_EMAIL = 'root@localhost'\nDEFAULT_FROM_EMAIL = 
'webmaster@localhost'\nEMAIL_SUBJECT_PREFIX = '[AWX] '\n\nEMAIL_HOST = 'localhost'\nEMAIL_PORT = 25\nEMAIL_HOST_USER = ''\nEMAIL_HOST_PASSWORD = ''\nEMAIL_USE_TLS = False\n\nLOGGING['handlers']['console'] = {\n '()': 'logging.StreamHandler',\n 'level': 'DEBUG',\n 'formatter': 'simple',\n}\n\nLOGGING['loggers']['django.request']['handlers'] = ['console']\nLOGGING['loggers']['rest_framework.request']['handlers'] = ['console']\nLOGGING['loggers']['awx']['handlers'] = ['console', 'external_logger']\nLOGGING['loggers']['awx.main.commands.run_callback_receiver']['handlers'] = ['console']\nLOGGING['loggers']['awx.main.tasks']['handlers'] = ['console', 'external_logger']\nLOGGING['loggers']['awx.main.scheduler']['handlers'] = ['console', 'external_logger']\nLOGGING['loggers']['django_auth_ldap']['handlers'] = ['console']\nLOGGING['loggers']['social']['handlers'] = ['console']\nLOGGING['loggers']['system_tracking_migrations']['handlers'] = ['console']\nLOGGING['loggers']['rbac_migrations']['handlers'] = ['console']\nLOGGING['loggers']['awx.isolated.manager.playbooks']['handlers'] = ['console']\nLOGGING['handlers']['callback_receiver'] = {'class': 'logging.NullHandler'}\nLOGGING['handlers']['task_system'] = {'class': 'logging.NullHandler'}\nLOGGING['handlers']['tower_warnings'] = {'class': 'logging.NullHandler'}\nLOGGING['handlers']['rbac_migrations'] = {'class': 'logging.NullHandler'}\nLOGGING['handlers']['system_tracking_migrations'] = {'class': 'logging.NullHandler'}\nLOGGING['handlers']['management_playbooks'] = {'class': 'logging.NullHandler'}\n\nDATABASES = {\n 'default': {\n 'ATOMIC_REQUESTS': True,\n 'ENGINE': 'awx.main.db.profiled_pg',\n 'NAME': 'awx',\n 'USER': 'awx',\n 'PASSWORD': 'awxpass9',\n 'HOST': 'awx-postgres.awx.svc.cluster.local',\n 'PORT': '5432',\n }\n}\n\nif os.getenv(\"DATABASE_SSLMODE\", False):\n DATABASES['default']['OPTIONS'] = {'sslmode': os.getenv(\"DATABASE_SSLMODE\")}\n\nCACHES = {\n 'default': {\n 'BACKEND': 
'django.core.cache.backends.memcached.MemcachedCache',\n 'LOCATION': '{}:{}'.format(\"awx-memcached.awx.svc.cluster.local\", \"11211\")\n },\n 'ephemeral': {\n 'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',\n },\n}\n\nBROKER_URL = 'amqp://{}:{}@{}:{}/{}'.format(\n 'guest',\n 'guest',\n 'awx-rabbitmq.awx.svc.cluster.local',\n '5672',\n 'awx')\n\nCHANNEL_LAYERS = {\n 'default': {'BACKEND': 'asgi_amqp.AMQPChannelLayer',\n 'ROUTING': 'awx.main.routing.channel_routing',\n 'CONFIG': {'url': BROKER_URL}}\n}\n\nUSE_X_FORWARDED_PORT = True\n"}, "kind": "ConfigMap", "metadata": {"creationTimestamp": "2020-03-19T08:26:35Z", "labels": {"app": "tower"}, "name": "awx-tower-configmap", "namespace": "awx", "ownerReferences": [{"apiVersion": "tower.ansible.com/v1alpha1", "kind": "Tower", "name": "awx", "uid": "fcc3dc9d-0c52-4523-a428-fa5cfa203864"}], "resourceVersion": "5710", "selfLink": "/api/v1/namespaces/awx/configmaps/awx-tower-configmap", "uid": "7d37fc8d-940a-4907-8de7-d20517f103a1"}}}
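The rendered `settings.py` above assembles the Channels broker URL with `str.format`. As a standalone sanity check (values copied verbatim from the ConfigMap; nothing here actually talks to RabbitMQ), the same expression can be evaluated on its own:

```python
# Reassemble the AMQP broker URL the same way the rendered settings.py does.
# User/host/port/vhost are the in-cluster values from the ConfigMap above.
broker_url = 'amqp://{}:{}@{}:{}/{}'.format(
    'guest',                                # RabbitMQ user
    'guest',                                # RabbitMQ password
    'awx-rabbitmq.awx.svc.cluster.local',   # in-cluster service DNS name
    '5672',                                 # AMQP port
    'awx')                                  # vhost

print(broker_url)
# → amqp://guest:guest@awx-rabbitmq.awx.svc.cluster.local:5672/awx
```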
changed: [localhost] => (item=tower_web.yaml.j2) => {"ansible_loop_var": "item", "changed": true, "item": "tower_web.yaml.j2", "result": {"results": [{"changed": true, "method": "create", "result": {"apiVersion": "v1", "data": {"admin_password": "cHdBbjgyIXNp", "secret_key": "YXd4c2VjcmV0"}, "kind": "Secret", "metadata": {"creationTimestamp": "2020-03-19T08:26:37Z", "name": "awx-tower-secret", "namespace": "awx", "ownerReferences": [{"apiVersion": "tower.ansible.com/v1alpha1", "kind": "Tower", "name": "awx", "uid": "fcc3dc9d-0c52-4523-a428-fa5cfa203864"}], "resourceVersion": "5722", "selfLink": "/api/v1/namespaces/awx/secrets/awx-tower-secret", "uid": "1ad7948a-efe5-482f-bd1b-438347200844"}, "type": "Opaque"}, "warnings": []}, {"changed": true, "method": "create", "result": {"apiVersion": "apps/v1", "kind": "Deployment", "metadata": {"creationTimestamp": "2020-03-19T08:26:37Z", "generation": 1, "labels": {"app": "tower"}, "name": "awx-tower-web", "namespace": "awx", "ownerReferences": [{"apiVersion": "tower.ansible.com/v1alpha1", "kind": "Tower", "name": "awx", "uid": "fcc3dc9d-0c52-4523-a428-fa5cfa203864"}], "resourceVersion": "5723", "selfLink": "/apis/apps/v1/namespaces/awx/deployments/awx-tower-web", "uid": "fd0a1235-e2c0-4b16-856b-f95b94083831"}, "spec": {"progressDeadlineSeconds": 600, "replicas": 1, "revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "tower"}}, "strategy": {"rollingUpdate": {"maxSurge": "25%", "maxUnavailable": "25%"}, "type": "RollingUpdate"}, "template": {"metadata": {"creationTimestamp": null, "labels": {"app": "tower"}}, "spec": {"containers": [{"image": "ansible/awx_web:9.3.0", "imagePullPolicy": "IfNotPresent", "name": "tower", "ports": [{"containerPort": 8052, "protocol": "TCP"}], "resources": {"requests": {"cpu": "1", "memory": "2Gi"}}, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [{"mountPath": "/etc/tower/SECRET_KEY", "name": "secret-key", "readOnly": true, 
"subPath": "SECRET_KEY"}, {"mountPath": "/etc/tower/conf.d/environment.sh", "name": "environment", "readOnly": true, "subPath": "environment.sh"}, {"mountPath": "/etc/tower/settings.py", "name": "settings", "readOnly": true, "subPath": "settings.py"}, {"mountPath": "/etc/nginx/nginx.conf", "name": "nginx-conf", "readOnly": true, "subPath": "nginx.conf"}]}], "dnsPolicy": "ClusterFirst", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "terminationGracePeriodSeconds": 30, "volumes": [{"name": "secret-key", "secret": {"defaultMode": 420, "items": [{"key": "secret_key", "path": "SECRET_KEY"}], "secretName": "awx-tower-secret"}}, {"configMap": {"defaultMode": 420, "items": [{"key": "environment", "path": "environment.sh"}], "name": "awx-tower-configmap"}, "name": "environment"}, {"configMap": {"defaultMode": 420, "items": [{"key": "settings", "path": "settings.py"}], "name": "awx-tower-configmap"}, "name": "settings"}, {"configMap": {"defaultMode": 420, "items": [{"key": "nginx_conf", "path": "nginx.conf"}], "name": "awx-tower-configmap"}, "name": "nginx-conf"}]}}}, "status": {}}, "warnings": []}, {"changed": true, "method": "create", "result": {"apiVersion": "v1", "kind": "Service", "metadata": {"creationTimestamp": "2020-03-19T08:26:37Z", "labels": {"app": "tower"}, "name": "awx-service", "namespace": "awx", "ownerReferences": [{"apiVersion": "tower.ansible.com/v1alpha1", "kind": "Tower", "name": "awx", "uid": "fcc3dc9d-0c52-4523-a428-fa5cfa203864"}], "resourceVersion": "5729", "selfLink": "/api/v1/namespaces/awx/services/awx-service", "uid": "90f806f0-93e5-45fd-9a52-a1b8b90f001a"}, "spec": {"clusterIP": "10.152.183.143", "ports": [{"port": 80, "protocol": "TCP", "targetPort": 8052}], "selector": {"app": "tower"}, "sessionAffinity": "None", "type": "ClusterIP"}, "status": {"loadBalancer": {}}}, "warnings": []}, {"changed": true, "method": "create", "result": {"apiVersion": "extensions/v1beta1", "kind": "Ingress", "metadata": 
{"creationTimestamp": "2020-03-19T08:26:37Z", "generation": 1, "name": "awx-ingress", "namespace": "awx", "ownerReferences": [{"apiVersion": "tower.ansible.com/v1alpha1", "kind": "Tower", "name": "awx", "uid": "fcc3dc9d-0c52-4523-a428-fa5cfa203864"}], "resourceVersion": "5739", "selfLink": "/apis/extensions/v1beta1/namespaces/awx/ingresses/awx-ingress", "uid": "a3b64afc-bd34-4f13-a98b-428eafed2ad5"}, "spec": {"rules": [{"host": "microk8s", "http": {"paths": [{"backend": {"serviceName": "awx-service", "servicePort": 80}, "path": "/"}]}}]}, "status": {"loadBalancer": {}}}, "warnings": []}]}}
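Note that the `awx-tower-secret` created above stores its values base64-encoded, as all Kubernetes Secrets do. Base64 is an encoding, not encryption, which is one reason pasting full operator logs into public issues leaks credentials. Decoding the two keys shown in the log:

```python
import base64

# Secret data exactly as it appears in the awx-tower-secret manifest above.
secret_data = {
    "admin_password": "cHdBbjgyIXNp",
    "secret_key": "YXd4c2VjcmV0",
}

# Kubernetes Secret values are plain base64; decoding recovers the originals.
decoded = {k: base64.b64decode(v).decode() for k, v in secret_data.items()}
print(decoded)
# → {'admin_password': 'pwAn82!si', 'secret_key': 'awxsecret'}
```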
changed: [localhost] => (item=tower_task.yaml.j2) => {"ansible_loop_var": "item", "changed": true, "item": "tower_task.yaml.j2", "method": "create", "result": {"apiVersion": "apps/v1", "kind": "Deployment", "metadata": {"creationTimestamp": "2020-03-19T08:26:38Z", "generation": 1, "labels": {"app": "tower-task"}, "name": "awx-tower-task", "namespace": "awx", "ownerReferences": [{"apiVersion": "tower.ansible.com/v1alpha1", "kind": "Tower", "name": "awx", "uid": "fcc3dc9d-0c52-4523-a428-fa5cfa203864"}], "resourceVersion": "5744", "selfLink": "/apis/apps/v1/namespaces/awx/deployments/awx-tower-task", "uid": "671ee6d5-24f4-42fa-928e-e93281581bbf"}, "spec": {"progressDeadlineSeconds": 600, "replicas": 1, "revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "tower-task"}}, "strategy": {"rollingUpdate": {"maxSurge": "25%", "maxUnavailable": "25%"}, "type": "RollingUpdate"}, "template": {"metadata": {"creationTimestamp": null, "labels": {"app": "tower-task"}}, "spec": {"containers": [{"command": ["/usr/bin/launch_awx_task.sh"], "envFrom": [{"configMapRef": {"name": "awx-tower-configmap"}}, {"secretRef": {"name": "awx-tower-secret"}}], "image": "ansible/awx_task:9.3.0", "imagePullPolicy": "IfNotPresent", "name": "tower-task", "resources": {"requests": {"cpu": "500m", "memory": "1Gi"}}, "securityContext": {"privileged": true}, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [{"mountPath": "/etc/tower/SECRET_KEY", "name": "secret-key", "readOnly": true, "subPath": "SECRET_KEY"}, {"mountPath": "/etc/tower/conf.d/environment.sh", "name": "environment", "readOnly": true, "subPath": "environment.sh"}, {"mountPath": "/etc/tower/settings.py", "name": "settings", "readOnly": true, "subPath": "settings.py"}]}], "dnsPolicy": "ClusterFirst", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "terminationGracePeriodSeconds": 30, "volumes": [{"name": "secret-key", "secret": 
{"defaultMode": 420, "items": [{"key": "secret_key", "path": "SECRET_KEY"}], "secretName": "awx-tower-secret"}}, {"configMap": {"defaultMode": 420, "items": [{"key": "environment", "path": "environment.sh"}], "name": "awx-tower-configmap"}, "name": "environment"}, {"configMap": {"defaultMode": 420, "items": [{"key": "settings", "path": "settings.py"}], "name": "awx-tower-configmap"}, "name": "settings"}]}}}, "status": {}}}
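The two deployments request different resources: the web pod asks for `cpu: 1 / memory: 2Gi`, the task pod for `cpu: 500m / memory: 1Gi`. Kubernetes quantity strings like `500m` and `2Gi` can be normalized with a small helper — a sketch covering only the suffixes used in this log, not the full Kubernetes quantity grammar:

```python
def parse_cpu(q: str) -> float:
    """Convert a k8s CPU quantity ('500m' or '1') to cores."""
    return int(q[:-1]) / 1000 if q.endswith("m") else float(q)

def parse_mem(q: str) -> int:
    """Convert a k8s memory quantity with a binary suffix ('1Gi', '2Gi') to bytes."""
    units = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}
    for suffix, mult in units.items():
        if q.endswith(suffix):
            return int(q[:-2]) * mult
    return int(q)  # plain bytes, no suffix

# Requests from the web and task deployments above:
assert parse_cpu("1") == 1.0 and parse_cpu("500m") == 0.5
assert parse_mem("2Gi") == 2 * 1024**3 and parse_mem("1Gi") == 1024**3
```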
TASK [tower : Get the Tower pod information.] **********************************
task path: /opt/ansible/roles/tower/tasks/main.yml:14
ok: [localhost] => {"attempts": 1, "changed": false, "resources": [{"apiVersion": "v1", "kind": "Pod", "metadata": {"creationTimestamp": "2020-03-19T08:26:37Z", "generateName": "awx-tower-web-c98cd6555-", "labels": {"app": "tower", "pod-template-hash": "c98cd6555"}, "name": "awx-tower-web-c98cd6555-jtzfg", "namespace": "awx", "ownerReferences": [{"apiVersion": "apps/v1", "blockOwnerDeletion": true, "controller": true, "kind": "ReplicaSet", "name": "awx-tower-web-c98cd6555", "uid": "77238ee6-e1f4-4610-a65c-701359cc8efd"}], "resourceVersion": "5765", "selfLink": "/api/v1/namespaces/awx/pods/awx-tower-web-c98cd6555-jtzfg", "uid": "77dfe879-08c6-4643-981b-2c456698e89a"}, "spec": {"containers": [{"image": "ansible/awx_web:9.3.0", "imagePullPolicy": "IfNotPresent", "name": "tower", "ports": [{"containerPort": 8052, "protocol": "TCP"}], "resources": {"requests": {"cpu": "1", "memory": "2Gi"}}, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [{"mountPath": "/etc/tower/SECRET_KEY", "name": "secret-key", "readOnly": true, "subPath": "SECRET_KEY"}, {"mountPath": "/etc/tower/conf.d/environment.sh", "name": "environment", "readOnly": true, "subPath": "environment.sh"}, {"mountPath": "/etc/tower/settings.py", "name": "settings", "readOnly": true, "subPath": "settings.py"}, {"mountPath": "/etc/nginx/nginx.conf", "name": "nginx-conf", "readOnly": true, "subPath": "nginx.conf"}, {"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount", "name": "default-token-sqw2l", "readOnly": true}]}], "dnsPolicy": "ClusterFirst", "enableServiceLinks": true, "nodeName": "microk8s", "priority": 0, "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "default", "serviceAccountName": "default", "terminationGracePeriodSeconds": 30, "tolerations": [{"effect": "NoExecute", "key": "node.kubernetes.io/not-ready", "operator": "Exists", "tolerationSeconds": 300}, {"effect": "NoExecute", "key": 
"node.kubernetes.io/unreachable", "operator": "Exists", "tolerationSeconds": 300}], "volumes": [{"name": "secret-key", "secret": {"defaultMode": 420, "items": [{"key": "secret_key", "path": "SECRET_KEY"}], "secretName": "awx-tower-secret"}}, {"configMap": {"defaultMode": 420, "items": [{"key": "environment", "path": "environment.sh"}], "name": "awx-tower-configmap"}, "name": "environment"}, {"configMap": {"defaultMode": 420, "items": [{"key": "settings", "path": "settings.py"}], "name": "awx-tower-configmap"}, "name": "settings"}, {"configMap": {"defaultMode": 420, "items": [{"key": "nginx_conf", "path": "nginx.conf"}], "name": "awx-tower-configmap"}, "name": "nginx-conf"}, {"name": "default-token-sqw2l", "secret": {"defaultMode": 420, "secretName": "default-token-sqw2l"}}]}, "status": {"conditions": [{"lastProbeTime": null, "lastTransitionTime": "2020-03-19T08:26:37Z", "status": "True", "type": "Initialized"}, {"lastProbeTime": null, "lastTransitionTime": "2020-03-19T08:26:39Z", "status": "True", "type": "Ready"}, {"lastProbeTime": null, "lastTransitionTime": "2020-03-19T08:26:39Z", "status": "True", "type": "ContainersReady"}, {"lastProbeTime": null, "lastTransitionTime": "2020-03-19T08:26:37Z", "status": "True", "type": "PodScheduled"}], "containerStatuses": [{"containerID": "containerd://c32c89a8657d58ba5529ef0d96679094f0bebbbd61bd4dbd354afdd2e46c160a", "image": "docker.io/ansible/awx_web:9.3.0", "imageID": "docker.io/ansible/awx_web@sha256:e3716cce276a9774650a4fbbb5d80c98fa734db633e8ae4ea661d178c23b89df", "lastState": {}, "name": "tower", "ready": true, "restartCount": 0, "started": true, "state": {"running": {"startedAt": "2020-03-19T08:26:39Z"}}}], "hostIP": "192.168.1.119", "phase": "Running", "podIP": "10.1.9.26", "podIPs": [{"ip": "10.1.9.26"}], "qosClass": "Burstable", "startTime": "2020-03-19T08:26:37Z"}}]}
TASK [tower : Set the tower pod name as a variable.] ***************************
task path: /opt/ansible/roles/tower/tasks/main.yml:25
ok: [localhost] => {"ansible_facts": {"tower_pod_name": "awx-tower-web-c98cd6555-jtzfg"}, "changed": false}
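The two tasks above (`main.yml:14` and `main.yml:25`) fetch the pod list and then set `tower_pod_name` from the first result. The role's exact Jinja expression isn't shown in the log, but the extraction over the returned `resources` list plausibly amounts to the following (structure trimmed to the relevant keys):

```python
# Trimmed version of the k8s lookup result shown in the log above.
resources = [
    {"metadata": {"name": "awx-tower-web-c98cd6555-jtzfg", "namespace": "awx"},
     "status": {"phase": "Running"}},
]

# Approximate equivalent of the set_fact + assert tasks:
tower_pod_name = resources[0]["metadata"]["name"] if resources else ""
assert tower_pod_name, "tower_pod_name must be populated"
print(tower_pod_name)
# → awx-tower-web-c98cd6555-jtzfg
```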
TASK [tower : Verify tower_pod_name is populated.] *****************************
task path: /opt/ansible/roles/tower/tasks/main.yml:29
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}
TASK [tower : Check if database is populated (auth_user table exists).] ********
task path: /opt/ansible/roles/tower/tasks/main.yml:34
skipping: [localhost] => {"changed": false, "skip_reason": "Conditional result was False"}
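The migration task that follows fails with a connection timeout against `awx-postgres.awx.svc.cluster.local` resolving to `91.201.60.73` — a public address, not a cluster Service IP (compare the `awx-service` clusterIP `10.152.183.143` earlier in the log). One quick offline check for this kind of DNS mis-resolution is whether a resolved address falls inside the expected Service CIDR. The CIDR below is an assumption (the MicroK8s default `10.152.183.0/24`, consistent with the clusterIP above); adjust it for your cluster:

```python
import ipaddress

# Assumed MicroK8s default Service CIDR; matches the clusterIP seen in the log.
SERVICE_CIDR = ipaddress.ip_network("10.152.183.0/24")

def looks_like_cluster_ip(addr: str) -> bool:
    """True if a resolved address is inside the expected Service CIDR."""
    return ipaddress.ip_address(addr) in SERVICE_CIDR

assert looks_like_cluster_ip("10.152.183.143")    # awx-service clusterIP: plausible
assert not looks_like_cluster_ip("91.201.60.73")  # what the pod resolved: wrong
```

A `False` result for the Postgres service name points at cluster DNS (e.g. search-domain fallthrough to an external resolver) rather than at Postgres itself.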
TASK [tower : Migrate the database if the K8s resources were updated.] *********
task path: /opt/ansible/roles/tower/tasks/main.yml:46
fatal: [localhost]: FAILED! 
=> {"changed": true, "cmd": "kubectl exec -n awx awx-tower-web-c98cd6555-jtzfg -- bash -c \"awx-manage migrate --noinput\"", "delta": "0:19:47.851857", "end": "2020-03-19 08:46:30.636223", "msg": "non-zero return code", "rc": 1, "start": "2020-03-19 08:26:42.784366", "stderr": "Traceback (most recent call last):\n File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/base/base.py\", line 217, in ensure_connection\n self.connect()\n File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/base/base.py\", line 195, in connect\n self.connection = self.get_new_connection(conn_params)\n File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/postgresql/base.py\", line 178, in get_new_connection\n connection = Database.connect(**conn_params)\n File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/psycopg2/__init__.py\", line 126, in connect\n conn = _connect(dsn, connection_factory=connection_factory, **kwasync)\npsycopg2.OperationalError: could not connect to server: Connection timed out\n\tIs the server running on host \"awx-postgres.awx.svc.cluster.local\" (91.201.60.73) and accepting\n\tTCP/IP connections on port 5432?\n\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n File \"/usr/bin/awx-manage\", line 8, in <module>\n sys.exit(manage())\n File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/awx/__init__.py\", line 152, in manage\n execute_from_command_line(sys.argv)\n File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/core/management/__init__.py\", line 381, in execute_from_command_line\n utility.execute()\n File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/core/management/__init__.py\", line 375, in execute\n self.fetch_command(subcommand).run_from_argv(self.argv)\n File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/core/management/base.py\", line 323, in run_from_argv\n self.execute(*args, 
**cmd_options)\n File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/core/management/base.py\", line 364, in execute\n output = self.handle(*args, **options)\n File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/core/management/base.py\", line 83, in wrapped\n res = handle_func(*args, **kwargs)\n File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/core/management/commands/migrate.py\", line 87, in handle\n executor = MigrationExecutor(connection, self.migration_progress_callback)\n File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/migrations/executor.py\", line 18, in __init__\n self.loader = MigrationLoader(self.connection)\n File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/migrations/loader.py\", line 49, in __init__\n self.build_graph()\n File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/migrations/loader.py\", line 212, in build_graph\n self.applied_migrations = recorder.applied_migrations()\n File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/migrations/recorder.py\", line 73, in applied_migrations\n if self.has_table():\n File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/migrations/recorder.py\", line 56, in has_table\n return self.Migration._meta.db_table in self.connection.introspection.table_names(self.connection.cursor())\n File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/base/base.py\", line 256, in cursor\n return self._cursor()\n File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/base/base.py\", line 233, in _cursor\n self.ensure_connection()\n File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/base/base.py\", line 217, in ensure_connection\n self.connect()\n File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/utils.py\", line 89, in __exit__\n raise dj_exc_value.with_traceback(traceback) from exc_value\n File 
\"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/base/base.py\", line 217, in ensure_connection\n self.connect()\n File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/base/base.py\", line 195, in connect\n self.connection = self.get_new_connection(conn_params)\n File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/postgresql/base.py\", line 178, in get_new_connection\n connection = Database.connect(**conn_params)\n File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/psycopg2/__init__.py\", line 126, in connect\n conn = _connect(dsn, connection_factory=connection_factory, **kwasync)\ndjango.db.utils.OperationalError: could not connect to server: Connection timed out\n\tIs the server running on host \"awx-postgres.awx.svc.cluster.local\" (91.201.60.73) and accepting\n\tTCP/IP connections on port 5432?\n\ncommand terminated with exit code 1", "stderr_lines": ["Traceback (most recent call last):", " File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/base/base.py\", line 217, in ensure_connection", " self.connect()", " File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/base/base.py\", line 195, in connect", " self.connection = self.get_new_connection(conn_params)", " File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/postgresql/base.py\", line 178, in get_new_connection", " connection = Database.connect(**conn_params)", " File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/psycopg2/__init__.py\", line 126, in connect", " conn = _connect(dsn, connection_factory=connection_factory, **kwasync)", "psycopg2.OperationalError: could not connect to server: Connection timed out", "\tIs the server running on host \"awx-postgres.awx.svc.cluster.local\" (91.201.60.73) and accepting", "\tTCP/IP connections on port 5432?", "", "", "The above exception was the direct cause of the following exception:", "", "Traceback (most recent 
call last):", " File \"/usr/bin/awx-manage\", line 8, in <module>", " sys.exit(manage())", " File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/awx/__init__.py\", line 152, in manage", " execute_from_command_line(sys.argv)", " File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/core/management/__init__.py\", line 381, in execute_from_command_line", " utility.execute()", " File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/core/management/__init__.py\", line 375, in execute", " self.fetch_command(subcommand).run_from_argv(self.argv)", " File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/core/management/base.py\", line 323, in run_from_argv", " self.execute(*args, **cmd_options)", " File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/core/management/base.py\", line 364, in execute", " output = self.handle(*args, **options)", " File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/core/management/base.py\", line 83, in wrapped", " res = handle_func(*args, **kwargs)", " File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/core/management/commands/migrate.py\", line 87, in handle", " executor = MigrationExecutor(connection, self.migration_progress_callback)", " File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/migrations/executor.py\", line 18, in __init__", " self.loader = MigrationLoader(self.connection)", " File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/migrations/loader.py\", line 49, in __init__", " self.build_graph()", " File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/migrations/loader.py\", line 212, in build_graph", " self.applied_migrations = recorder.applied_migrations()", " File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/migrations/recorder.py\", line 73, in applied_migrations", " if self.has_table():", " File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/migrations/recorder.py\", line 56, in 
has_table", " return self.Migration._meta.db_table in self.connection.introspection.table_names(self.connection.cursor())", " File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/base/base.py\", line 256, in cursor", " return self._cursor()", " File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/base/base.py\", line 233, in _cursor", " self.ensure_connection()", " File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/base/base.py\", line 217, in ensure_connection", " self.connect()", " File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/utils.py\", line 89, in __exit__", " raise dj_exc_value.with_traceback(traceback) from exc_value", " File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/base/base.py\", line 217, in ensure_connection", " self.connect()", " File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/base/base.py\", line 195, in connect", " self.connection = self.get_new_connection(conn_params)", " File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/postgresql/base.py\", line 178, in get_new_connection", " connection = Database.connect(**conn_params)", " File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/psycopg2/__init__.py\", line 126, in connect", " conn = _connect(dsn, connection_factory=connection_factory, **kwasync)", "django.db.utils.OperationalError: could not connect to server: Connection timed out", "\tIs the server running on host \"awx-postgres.awx.svc.cluster.local\" (91.201.60.73) and accepting", "\tTCP/IP connections on port 5432?", "", "command terminated with exit code 1"], "stdout": "", "stdout_lines": []}
PLAY RECAP *********************************************************************
localhost : ok=4 changed=1 unreachable=0 failed=1 skipped=1 rescued=0 ignored=0
Web page shows:
502 Bad Gateway openresty/1.15.8.1
Currently on v0.14.0, latest is v0.16.0.
For example:
2019-11-08 22:34:58,030 ERROR celery.beat Removing corrupted schedule file '/var/lib/awx/beat.db': error(11, 'Resource temporarily unavailable')
Traceback (most recent call last):
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/celery/beat.py", line 485, in setup_schedule
self._store = self._open_schedule()
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/celery/beat.py", line 475, in _open_schedule
return self.persistence.open(self.schedule_filename, writeback=True)
File "/usr/lib64/python3.6/shelve.py", line 243, in open
return DbfilenameShelf(filename, flag, protocol, writeback)
File "/usr/lib64/python3.6/shelve.py", line 227, in __init__
Shelf.__init__(self, dbm.open(filename, flag), protocol, writeback)
File "/usr/lib64/python3.6/dbm/__init__.py", line 94, in open
return mod.open(file, flag, mode)
_gdbm.error: [Errno 11] Resource temporarily unavailable
Not sure if this is a big deal or not, but wanted to post it here and track down any other celery-related issues. Note that I don't think I'm running any instance of celery independent of the RabbitMQ deployment or the AWX web deployment...
It looks like the official Kubernetes installer has a separate container running in the main web pod for celery: https://github.com/ansible/awx/blob/devel/installer/roles/kubernetes/templates/deployment.yml.j2#L233-L278
As always, really nice work @geerlingguy :) You always seem to be one step ahead when I start searching for anything ansible-related on the interwebs.
I'm using Calico CNI, and thus directly reaching pod endpoints via BGP. I don't really need/want ingress support in my deployments and prefer to use externalIPs. Since operators can be somewhat rigid in terms of implementation/design, is this something that you've already considered making optional? Thanks for everything you do!
See: https://github.com/geerlingguy/awx-container/blob/master/docker-compose.yml (for a starting point).
Basically, I need to add in containers and resources for:
Note that I'm still trying to see if there are any 'official' tower docker images available via Docker Hub, Quay, or elsewhere. Will ask around internally to see if there are or not.
Right now the tests just verify things get created and don't break in Kubernetes... I would like to add a more functional test that verifies AWX/Tower is actually installed at some point, using curl/uri inside the KinD container with a timeout of maybe 5 or 10 minutes (it does take a while for AWX/Tower initialization to complete).
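A functional check along those lines could be a sketch like this in the verify playbook (the URL, port, and timeout values here are assumptions; `/api/v2/ping/` is the unauthenticated AWX health endpoint):

```yaml
# Sketch: poll the AWX API until it responds, or give up after ~10 minutes.
- name: Wait for the AWX UI to respond.
  uri:
    url: http://localhost:8080/api/v2/ping/
    status_code: 200
  register: awx_ping
  until: awx_ping.status == 200
  delay: 10    # seconds between attempts
  retries: 60  # 60 * 10s = up to 10 minutes
```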
Follow-up to #1.
Currently, the Ingress resource is hardcoded to just allow access via port 80. It would be good to allow users to provide more details and/or annotations for Ingress to support HTTPS/443. See the basic TLS docs (https://kubernetes.io/docs/concepts/services-networking/ingress/#tls) for cert support, or look into supporting cert-manager or something similar too.
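As a rough sketch, TLS support might mean letting the CR pass through annotations and a `tls` section; every name below (`tower-ingress`, `tower-web-svc`, `tower-tls`, the hostname) is hypothetical, not what the operator currently templates:

```yaml
# Sketch: Ingress with TLS termination via a pre-created (or cert-manager-managed) Secret.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tower-ingress
  annotations:
    # e.g. cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
        - tower.example.com
      secretName: tower-tls  # TLS Secret holding cert + key
  rules:
    - host: tower.example.com
      http:
        paths:
          - backend:
              serviceName: tower-web-svc
              servicePort: 80
```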
It would be nice to be able to run a clustered set of towers as part of this operator as mentioned here: https://docs.ansible.com/ansible-tower/latest/html/administration/clustering.html.
3.6.1 was released today, with some security and bugfix patches:
https://quay.io/repository/ansible-tower/ansible-tower?tab=tags
The latest versions (3.7.0 / 10(?).0.0) will soon be using Redis instead of RabbitMQ; more info here: ansible/awx#5443
Docker Compose changes: https://github.com/ansible/awx/pull/6034/files#diff-ee215160a0808b30b25efa63ca9ac0f9
Kubernetes role changes:
Though some have run into issues (see ansible/awx#6365), so for this operator it may be prudent to wait a little.
Most of the playbooks split up between the two molecule scenarios (test-local and test-minikube) are identical. Where they are not, it's usually a variable here or there that changes.
I would like to merge everything into the default scenario, then include where necessary in the specific scenarios for Minikube and KinD (local).
After this, it would be nice to upstream this work into the Operator SDK project, so others can benefit from being able to easily test and debug operators in KinD (great for CI/speed) or Minikube (great for local development, and some CI use cases (e.g. ingress)).
Would be good to have a contributors guide so that folks could understand what kind of guidelines there are for getting involved here.
For example, not sure what (if any) version numbers I should be bumping in any of the docs as I look to add OpenShift functionality. Or even if maybe this is something to look to add to any of the automation (Travis, GitHub Actions, etc.)
Tower 3.6.x requires Postgres 10 (though it seems to run okay on 9.6 for now...), so that version should be upgraded by replacing the three definitions of tower_postgres_image in the codebase.
In #5, I discovered the k8s_exec module from this PR (ansible/ansible#55029) does not work when running inside an Ansible-based Operator, due to some proxy request handling the Operator does for Kubernetes API requests.
The gist of the problem is that k8s_exec uses a websocket to communicate with Kubernetes to run an exec command, but the proxy does not handle the 101 handshake response correctly (instead returning a 200), which results in a failure of the k8s_exec module.
I was going to try to get that issue fixed in #5, but as a workaround, I'm currently using kubectl, which is installed in the operator image with the following line:
# Install kubectl.
COPY --from=lachlanevenson/k8s-kubectl:v1.16.2 /usr/local/bin/kubectl /usr/local/bin/kubectl
This is a little fragile, as it means the kubectl currently shipping with this operator is locked to a specific version. That likely won't cause issues, but it isn't wonderful, especially since it could become an attack vector if a vulnerability is found in whatever the current version is.
So for this issue to be complete, the following should be done:
Add roles/tower/library/k8s_exec.py based on this PR.
Update main.yml to use k8s_exec instead of shell + kubectl.
See Tower Release Notes: https://docs.ansible.com/ansible-tower/3.7.0/html/release-notes/#
One of the major changes:
Updated Tower to no longer rely on RabbitMQ; Redis is added as a new dependency
So we'll need to update the operator to use Redis instead of Rabbit.
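As a minimal sketch of the shape of the replacement resource (the name and labels here are hypothetical, and the real change would also need the AWX settings to point at this service):

```yaml
# Sketch: a single-replica Redis Deployment to stand in for RabbitMQ.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tower-redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tower-redis
  template:
    metadata:
      labels:
        app: tower-redis
    spec:
      containers:
        - name: redis
          image: redis:5
          ports:
            - containerPort: 6379
```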
From #39:
The latest versions (3.7.0 / 10(?).0.0) will soon be using Redis instead of RabbitMQ; more info here: ansible/awx#5443
Docker Compose changes: https://github.com/ansible/awx/pull/6034/files#diff-ee215160a0808b30b25efa63ca9ac0f9
Kubernetes role changes:
Though some have run into issues (see ansible/awx#6365), so for this operator it may be prudent to wait a little.
Hi @geerlingguy,
Thank you very much for this work. I have a question regarding the Ansible roles you are using. Did you assess the pros and cons of reusing the official AWX roles in your operator?
https://github.com/ansible/awx/tree/devel/installer
Hi again Jeff,
Sorry to spam with issues at the moment. Solving obstacles day by day.
I've now managed to get awx running, and with much faster response times than our local docker environment setup.
However, how do we migrate the existing database (postgres10) to the kubernetes environment?
The namespace is configured with a persistentVolumeClaim and I can't find the mount folder on the host. I was hoping to just copy the postgres data folder content like we do right now when upgrading awx versions.
Any easy way to accomplish this scenario?
Many thanks!
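Not speaking for the author, but rather than copying the data directory off the PVC, a common approach is to stream a logical dump straight into the in-cluster Postgres. This is only a sketch: the host, pod name, database name, and credentials below are placeholders for your environment, the old database must be reachable from wherever you run this, and AWX should be scaled down first so nothing writes during the copy:

```shell
# Sketch: dump the old database and restore it into the in-cluster Postgres pod.
pg_dump -h old-db-host -U awx -d awx --clean --no-owner \
  | kubectl exec -i -n awx awx-postgres-0 -- \
      psql -U awx -d awx
```

After the restore completes, scale the web/task deployments back up and let the migration task run as usual.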
CPU is one thing, but memory is another; without enough, Tower kind of implodes. In the official installer, they set the following defaults for spec.containers.resources.requests:
web_mem_request: 1
web_cpu_request: 500
task_mem_request: 2
task_cpu_request: 1500
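If the operator adopted those defaults, they would presumably translate into something like the following in the Deployment templates. This is a sketch under the assumption (based on the official installer's conventions) that the bare numbers mean Gi for memory and millicores for CPU:

```yaml
# Sketch: container resource requests derived from the installer defaults.
containers:
  - name: web
    resources:
      requests:
        memory: "1Gi"   # web_mem_request: 1
        cpu: "500m"     # web_cpu_request: 500
  - name: task
    resources:
      requests:
        memory: "2Gi"   # task_mem_request: 2
        cpu: "1500m"    # task_cpu_request: 1500
```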
They also request quite a bit of memory for RabbitMQ, though at this point I'm inclined to leave it unspecified. I have been running everything under a local minikube cluster with 6G of total RAM available inside, and things run smoothly enough, at least for demonstration/light usage. So I don't want to jam in requirements that make it impossible to run on a workstation with less than 16 GB of RAM available.
In the latest release of the operator, it looks like the deployment has gone from each component having its own deployment (task, web, etc.) to all containers being inside a single pod. Was there a technical reason behind this? Seems like this would cause issues if you wanted things to scale independently of each other moving forward (i.e. only scaling web due to increased traffic, etc.)
There's a new release today, and it seems to basically fix some bugs. There is one change which may require a modification to the memcached deployment:
Improved memcached in OpenShift deployments to listen on a more secure domain socket (CVE-2020-10697)
Operator container shows the below.
File "/usr/local/lib/python3.6/site-packages/ansible/utils/path.py", line 90, in makedirs_safe
raise AnsibleError("Unable to create local directories(%s): %s" % (to_native(rpath), to_native(e)))
ansible.errors.AnsibleError: Unable to create local directories(/opt/ansible/.ansible/tmp): [Errno 13] Permission denied: b'/opt/ansible/.ansible/tmp'
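This usually means the operator container is running as a UID that can't write under /opt/ansible (common when the cluster injects a random UID, as OpenShift does). One hedged workaround, rather than patching the image, is to point Ansible's temp directories at a path that is always writable, via env vars on the operator Deployment (ANSIBLE_LOCAL_TEMP and ANSIBLE_REMOTE_TEMP are real Ansible configuration env vars; the exact path is an assumption):

```yaml
# Sketch: add to the operator container spec in deploy/tower-operator.yaml.
env:
  - name: ANSIBLE_LOCAL_TEMP
    value: /tmp/.ansible/tmp
  - name: ANSIBLE_REMOTE_TEMP
    value: /tmp/.ansible/tmp
```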
Currently this operator doesn't have a validation section for the CRD, nor a spec CSV with CRD fields defined. I'd like to get all that fixed so this operator can pass the operator-sdk scorecard check.
Mostly this would require adopting the complexity of the OpenShift installer's templates, and possibly re-structuring the Pod architecture (the OpenShift deployment seems to deploy a ton of stuff inside the single Tower Deployment's Pod, instead of operating the services like RabbitMQ independently).
But from looking at the downloadable installer's kubernetes role vs. this operator's tower role, it seems like both could be combined without a huge effort. I am postponing work on this until the operator is more stable, however, and also do not want to commit too much effort to combining them until a decision is made to share the role between projects.
As it is, it's nice to maintain this operator-specific role inside the operator project (just for project velocity and dependency management purposes).
Tower 3.6.3 was just released: https://docs.ansible.com/ansible-tower/latest/html/installandreference/release_notes.html#ansible-tower-version-3-6-3
AWX was also updated to 9.2.0 recently (https://hub.docker.com/r/ansible/awx_task/tags), so might as well update that at the same time.
One issue that I saw mentioned in the OpenShift issue was that the tower-operator service account gets created in the default namespace. I think it's important to break this out into its own separate issue, as I believe this will cause issues for more than just those who are running in OpenShift.
The way I see it, there are two issues that need to be resolved here:
Remove namespace from Service Account creation (tower-operator/deploy/tower-operator.yaml, line 86 in 39aec6b).
That's the easy part. The less easy part comes next.
How to handle applying the ClusterRole to the ServiceAccount without knowing which namespace/project it is going to be created in (tower-operator/deploy/role_binding.yaml, lines 6 to 9 in 39aec6b, and tower-operator/deploy/tower-operator.yaml, lines 72 to 75 in 39aec6b).
The answer, at this point, I think all comes down to the level of complexity you want involved in installing the operator. And this may very well become abstracted once this is hidden behind just being installed from OperatorHub -- but for now I think there are a few options:
This could be done with helm or even more ansible & jinja. This will allow users to provide their values in some other file, and then installation is still all done behind a single command (helm install or ansible-playbook).
This moves away from a clean, one-command install -- but would give the user the ability to define their namespace in the patch command, which would then generate the appropriate yaml for where they actually want to install the operator.
ex:
kubectl/oc patch -f tower-operator.yaml -p '{ MY PATCH HERE }' | kubectl/oc apply -f -
I'm not saying that the above are the only two ways. I've been playing around with a handful of other ways that haven't led anywhere at this point:
So I think there are potentially other ways. These are more just suggestions for the short term. Again, this may be a non-issue once something like OperatorHub comes into play. But for now, I think this is something that needs to be handled, as otherwise if the user isn't installing in the default namespace there is a bunch of manual intervention required to get this running.
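A concrete version of the patch idea might look like this (a sketch: the namespace is a placeholder, and it assumes a single-resource manifest, so the combined tower-operator.yaml would need to be split per-resource first; kubectl patch --local operates on the file content without touching the cluster):

```shell
# Sketch: rewrite the hardcoded namespace locally, then apply the result.
kubectl patch -f tower-operator.yaml --local --type merge \
  -p '{"metadata": {"namespace": "my-tower-ns"}}' -o yaml \
  | kubectl apply -f -
```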
I tried to upgrade today from awx 9.1.1 to 9.2.0 and encountered an error during the migration. It looks as though it tried to run the migration on the previous container being replaced.
I was actually just testing the upgrade earlier in the day and it was successful, so it looks to be timing related though doesn't happen every run. A longer delay may be needed to ensure that the new container has finished creating before trying the migration.
I see that there is already a 5 second delay here so it may need to be longer, or better still could we configure it through the operator config?
- name: Get the Tower pod information.
  # TODO: Change to k8s_info after Ansible 2.9.0 is available in Operator image.
  k8s_facts:
    kind: Pod
    namespace: '{{ meta.namespace }}'
    label_selectors:
      - app=tower
  register: tower_pods
  until: "tower_pods['resources'][0]['status']['phase'] == 'Running'"
  delay: 5
  retries: 60
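One thing worth noting: during a rolling update the old pod can still be in phase Running while the new one starts, which would match this task and explain the "container not found" error. A stricter wait (a sketch, reusing the same label selector; `containerStatuses` and `status.phase` are standard Pod API fields) could filter to running pods and also require container readiness:

```yaml
# Sketch: wait until exactly one running, ready Tower pod matches the selector.
- name: Wait for the new Tower web pod to be ready.
  k8s_facts:
    kind: Pod
    namespace: '{{ meta.namespace }}'
    label_selectors:
      - app=tower
    field_selectors:
      - status.phase=Running
  register: tower_pods
  until: >-
    tower_pods.resources | length == 1 and
    tower_pods.resources[0].status.containerStatuses[0].ready
  delay: 5
  retries: 60
```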
Operator output:
--------------------------- Ansible Task StdOut -------------------------------
TASK [Migrate the database if the K8s resources were updated.] ********************************
fatal: [localhost]: FAILED! => {
"changed": true,
"cmd": "kubectl exec -n awx awx-tower-tower-web-89c99cb89-6lxgl -- bash -c \"awx-manage migrate --noinput\"",
"delta": "0:00:00.127667",
"end": "2020-02-12 14:11:24.460539",
"invocation": {
"module_args": {
"_raw_params": "kubectl exec -n awx awx-tower-tower-web-89c99cb89-6lxgl -- bash -c \"awx-manage migrate --noinput\"",
"_uses_shell": true,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"msg": "non-zero return code",
"rc": 1,
"start": "2020-02-12 14:11:24.332872",
"stderr": "error: unable to upgrade connection: container not found (\"tower\")",
"stderr_lines": [
"error: unable to upgrade connection: container not found (\"tower\")"
],
"stdout": "",
"stdout_lines": []
}
I was able to execute the migration against the new container:
kubectl exec -n awx awx-tower-tower-web-797cd6487f-dc2vh -- bash -c "awx-manage migrate --noinput"
Operations to perform:
Apply all migrations: auth, conf, contenttypes, main, oauth2_provider, sessions, sites, social_django, sso, taggit
Running migrations:
Applying main.0102_v370_unifiedjob_canceled... OK
Applying main.0103_v370_remove_computed_fields... OK
Applying main.0104_v370_cleanup_old_scan_jts... OK
Applying main.0105_v370_remove_jobevent_parent_and_hosts... OK
Applying main.0106_v370_remove_inventory_groups_with_active_failures... OK
Applying main.0107_v370_workflow_convergence_api_toggle... OK
Applying main.0108_v370_unifiedjob_dependencies_processed... OK
Events:
24m Normal ScalingReplicaSet deployment/awx-tower-tower-task Scaled down replica set awx-tower-tower-task-5c4799bdf to 0
25m Normal Scheduled pod/awx-tower-tower-web-797cd6487f-dc2vh Successfully assigned awx/awx-tower-tower-web-797cd6487f-dc2vh to ip-10-16-2-184.eu-west-1.compute.internal
25m Normal Pulling pod/awx-tower-tower-web-797cd6487f-dc2vh Pulling image "ansible/awx_web:9.2.0"
24m Normal Pulled pod/awx-tower-tower-web-797cd6487f-dc2vh Successfully pulled image "ansible/awx_web:9.2.0"
24m Normal Created pod/awx-tower-tower-web-797cd6487f-dc2vh Created container tower
24m Normal Started pod/awx-tower-tower-web-797cd6487f-dc2vh Started container tower
25m Normal SuccessfulCreate replicaset/awx-tower-tower-web-797cd6487f Created pod: awx-tower-tower-web-797cd6487f-dc2vh
24m Normal Killing pod/awx-tower-tower-web-89c99cb89-6lxgl Stopping container tower
24m Normal SuccessfulDelete replicaset/awx-tower-tower-web-89c99cb89 Deleted pod: awx-tower-tower-web-89c99cb89-6lxgl
25m Normal ScalingReplicaSet deployment/awx-tower-tower-web Scaled up replica set awx-tower-tower-web-797cd6487f to 1
24m Normal ScalingReplicaSet deployment/awx-tower-tower-web Scaled down replica set awx-tower-tower-web-89c99cb89 to 0
When trying to create a Tower CR with the latest release, I see the following error in the operator logs:
1 plays in /opt/ansible/main.yml

PLAY [localhost] ***************************************************************
META: ran handlers

TASK [tower : Ensure configured Tower resources exist in the cluster.] *********
task path: /opt/ansible/roles/tower/tasks/main.yml:2
failed: [localhost] (item=tower_memcached.yaml.j2) => {"ansible_loop_var": "item", "item": "tower_memcached.yaml.j2", "msg": "Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\". Failed command was: ( umask 77 && mkdir -p \"` echo /.ansible/tmp/ansible-tmp-1590759069.5041451-4700137306081 `\" && echo ansible-tmp-1590759069.5041451-4700137306081=\"` echo /.ansible/tmp/ansible-tmp-1590759069.5041451-4700137306081 `\" ), exited with result 1", "unreachable": true}