
Ansible Tower/AWX Operator

DEPRECATED: This project was moved and renamed to: https://github.com/ansible/awx-operator

An Ansible Tower operator for Kubernetes built with Operator SDK and Ansible.

Also configurable to run the open source AWX instead of Tower (helpful for certain use cases where a license requirement is not warranted, like CI environments).

Purpose

Official OpenShift/Kubernetes installers are already available for both AWX and Ansible Tower.

This operator is meant to provide a more Kubernetes-native installation method for Ansible Tower or AWX via a Tower Custom Resource Definition (CRD).

Note that the operator is not supported by Red Hat, and is in alpha status. Long-term, it will hopefully become a supported installation method, and be listed on OperatorHub.io. But for now, use it at your own risk!

Usage

This Kubernetes Operator is meant to be deployed in your Kubernetes cluster(s) and can manage one or more Tower or AWX instances in any namespace.

First you need to deploy Tower Operator into your cluster:

kubectl apply -f https://raw.githubusercontent.com/geerlingguy/tower-operator/master/deploy/tower-operator.yaml

Then you can create instances of Tower, for example:

  1. Make sure the namespace you're deploying into already exists (e.g. kubectl create namespace ansible-tower).

  2. Create a file named my-tower.yml with the following contents:

    ---
    apiVersion: tower.ansible.com/v1beta1
    kind: Tower
    metadata:
      name: tower
      namespace: ansible-tower
    spec:
      tower_hostname: tower.mycompany.com
      tower_secret_key: aabbcc
      
      tower_admin_user: test
      tower_admin_email: [email protected]
      tower_admin_password: changeme
    
  3. Use kubectl to create the Tower instance in your cluster:

    kubectl apply -f my-tower.yml
    

After a few minutes, your new Tower instance will be accessible at http://tower.mycompany.com/ (assuming your cluster has an Ingress controller configured). Log in using the tower_admin_* credentials configured in the spec, and supply a valid license to begin using Tower.

Red Hat Registry Authentication

To deploy Ansible Tower, images are pulled from the Red Hat Registry, so your Kubernetes or OpenShift cluster must be configured to authenticate to the Red Hat Registry; otherwise the Tower image cannot be pulled.

If you deploy Ansible AWX, images are available from public registries, so no authentication is required.
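A sketch of providing those registry credentials as a pull secret (the secret name is illustrative, and how the operator's pods reference the secret is an assumption about your cluster setup, not something documented here):

```shell
# Create a docker-registry pull secret for registry.redhat.io in the
# namespace where Tower will run (the secret name "redhat-registry" is
# illustrative). Replace the placeholder credentials with your own.
kubectl create secret docker-registry redhat-registry \
  --namespace ansible-tower \
  --docker-server=registry.redhat.io \
  --docker-username=<your-redhat-username> \
  --docker-password=<your-redhat-password>
```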

Deploy AWX instead of Tower

If you would like to deploy AWX (the open source upstream of Tower) into your cluster instead of Tower, set deployment_type to `awx` and override the default tower_task_image and tower_web_image variables in the Tower spec so the AWX container images are used instead:

---
spec:
  ...
  deployment_type: awx
  tower_task_image: ansible/awx_task:11.2.0
  tower_web_image: ansible/awx_web:11.2.0

Ingress Types

Depending on the cluster that you're running on, you may wish to use an Ingress to access your tower or you may wish to use a Route to access your tower. To toggle between these two options, you can add the following to your Tower custom resource:

---
spec:
  ...
  tower_ingress_type: Route

OR

---
spec:
  ...
  tower_ingress_type: Ingress

By default, no Ingress or Route is deployed, since tower_ingress_type defaults to none.
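Putting the pieces together, a minimal Tower CR that exposes the instance through an Ingress might look like this (the hostname and namespace are illustrative; you would also include the admin credentials shown earlier):

```yaml
---
apiVersion: tower.ansible.com/v1beta1
kind: Tower
metadata:
  name: tower
  namespace: ansible-tower
spec:
  tower_hostname: tower.mycompany.com
  tower_ingress_type: Ingress
```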

Privileged Tasks

Depending on the type of tasks you'll be running, you may find that the tower task pod needs to run as privileged. This opens you up to a variety of security concerns, so be aware of (and verify that you have) the necessary privileges before doing so. To toggle this feature, add the following to your Tower custom resource:

---
spec:
  ...
  tower_task_privileged: true

If you are attempting to do this on an OpenShift cluster, you will need to grant the tower ServiceAccount the privileged SCC, which can be done with:

oc adm policy add-scc-to-user privileged -z tower

Again, this is the most relaxed SCC that is provided by OpenShift, so be sure to familiarize yourself with the security concerns that accompany this action.

Persistent storage for Postgres

If you need to use a specific storage class for Postgres' storage, specify tower_postgres_storage_class in your Tower spec:

---
spec:
  ...
  tower_postgres_storage_class: fast-ssd

If it's not specified, Postgres will store its data on a volume using the default storage class for your cluster.
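If your cluster doesn't already define a fast-ssd class, you would create one first. A minimal sketch for a cluster backed by the in-tree AWS EBS provisioner (the provisioner and parameters are assumptions about your environment; adjust for your own storage backend):

```yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: kubernetes.io/aws-ebs  # cluster-specific; change for your backend
parameters:
  type: gp2
```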

Development

Testing

This Operator includes a Molecule-based test environment, which can be executed standalone in Docker (e.g. in CI or in a single Docker container anywhere), or inside any kind of Kubernetes cluster (e.g. Minikube).

You need to make sure you have Molecule installed before running the following commands. You can install Molecule with:

pip install 'molecule[docker]'

Running molecule test sets up a clean environment, builds the operator, runs all configured tests on an example operator instance, then tears down the environment (at least in the case of Docker).

If you want to actively develop the operator, use molecule converge, which does everything but tear down the environment at the end.

Testing in Docker (standalone)

molecule test -s test-local

This environment is meant for headless testing (e.g. in a CI environment, or when making smaller changes which don't need to be verified through a web interface). It is difficult to test things like Tower's web UI or to connect other applications on your local machine to the services running inside the cluster, since it is inside a Docker container with no static IP address.

Testing in Minikube

minikube start --memory 8g --cpus 4
minikube addons enable ingress
molecule test -s test-minikube

Minikube is a more full-featured test environment running inside a full VM on your computer, with an assigned IP address. This makes it easier to test things like NodePort services and Ingress from outside the Kubernetes cluster (e.g. in a browser on your computer).

Once the operator is deployed, you can visit the Tower UI in your browser by following these steps:

  1. Make sure you have an entry like IP_ADDRESS example-tower.test in your /etc/hosts file. (Get the IP address with minikube ip.)
  2. Visit http://example-tower.test/ in your browser. (Default admin login is test/changeme.)
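Step 1 can be scripted; a sketch, assuming minikube is running and you're willing to append to /etc/hosts:

```shell
# Map the minikube VM's IP to the test hostname.
echo "$(minikube ip) example-tower.test" | sudo tee -a /etc/hosts
```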

Release Process

There are a few moving parts to this project:

  1. The Docker image which powers Tower Operator.
  2. The tower-operator.yaml Kubernetes manifest file which initially deploys the Operator into a cluster.

Each of these must be appropriately built in preparation for a new tag:

Build a new release of the Operator for Docker Hub

Run the following command inside this directory:

operator-sdk build geerlingguy/tower-operator:0.4.0

Then push the generated image to Docker Hub:

docker push geerlingguy/tower-operator:0.4.0

Build a new version of the tower-operator.yaml file

Update the tower-operator version in two places:

  1. deploy/tower-operator.yaml: in the ansible and operator container definitions in the tower-operator Deployment.
  2. build/chain-operator-files.yml: the operator_image variable.

Once the versions are updated, run the playbook in the build/ directory:

ansible-playbook chain-operator-files.yml

After it is built, test it on a local cluster:

minikube start --memory 6g --cpus 4
minikube addons enable ingress
kubectl apply -f deploy/tower-operator.yaml
kubectl create namespace example-tower
kubectl apply -f deploy/crds/tower_v1beta1_tower_cr_awx.yaml
<test everything>
minikube delete

If everything works, commit the updated version, then tag a new repository release with the same tag as the Docker image pushed earlier.

Author

This operator was built in 2019 by Jeff Geerling, author of Ansible for DevOps and Ansible for Kubernetes.

tower-operator's People

Contributors

fstern, geerlingguy, matburt, shanemcd, tylerauerbeck


tower-operator's Issues

Update to AWX 11.2.0, Tower 3.7

AWX is upgrading

Hi Jeff,

AWX is stuck at "AWX is Upgrading" when using your guide.

Not sure how to debug.

Thanks

Test for actual AWX login page availability in Travis CI KinD tests

Right now the tests just verify things get created and don't break in Kubernetes... I would like to add a more functional test that verifies AWX/Tower is actually installed at some point, using curl/uri inside the KinD container with a timeout of maybe 5 or 10 minutes (it does take a while for AWX/Tower initialization to complete).

Follow-up to #1.

Support LDAP configuration with custom CA

If you attempt to configure LDAP authentication that is backed by a custom (non-trusted) CA, you'll see the following error:

2020-06-09 17:21:22,693 WARNING  django_auth_ldap Caught LDAPError while authenticating test-user: SERVER_DOWN({'desc': "Can't contact LDAP server", 'info': 'error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed (self signed certificate in certificate chain)'},)

It would be useful to be able to attach custom CAs via the created CR, which would resolve this issue.

Test both AWX and Tower CRs in Travis CI

Right now (after #5), I'm only testing the AWX CR in Travis CI.

I need to split a molecule config to do the same thing as the test-local scenario, but have it be like test-tower and use the tower CR.

Single Tower Deployment vs Deployment Per Component

In the latest release of the operator, it looks like the deployment has gone from each component having its own deployment (task, web, etc.) to all containers being inside a single pod. Was there a technical reason behind this? It seems like this would cause issues if you wanted things to scale independently of each other going forward (i.e. only scaling web due to increased traffic, etc.).

Automate Ansible Tower license

Right now, the operator seems to spin up Ansible Tower -- but it's still a manual process to apply the license. Would applying the license be something we could see adding to this operator?

Create a contributors guide

Would be good to have a contributors guide so that folks could understand what kind of guidelines there are for getting involved here.

For example, not sure what (if any) version numbers I should be bumping in any of the docs as I look to add OpenShift functionality. Or even if maybe this is something to look to add to any of the automation (Travis, GitHub Actions, etc.)

Clean up molecule test directory duplicate Ansible plays

Most of the playbooks split up between the two molecule scenarios (test-local and test-minikube) are identical. Where they are not, it's usually a variable here or there that changes.

I would like to merge everything into the default scenario, then include where necessary in the specific scenarios for Minikube and KinD (local).

After this, it would be nice to upstream this work into the Operator SDK project, so others can benefit from being able to easily test and debug operators in KinD (great for CI/speed) or Minikube (great for local development, and some CI use cases (e.g. ingress)).

[FEATURE REQUEST] Ansible venvs

Some modules require additional pip packages to be installed. Managing venvs isn't really possible from what I can tell thus far within the operator, and I think that type of management is perfect for this operator.

I thought this might be a good issue for me to help contribute on, but didn't want to surprise PR without discussing how this should be implemented.

With that said, any thoughts or preferences on how to implement? Should users just create their own tower container from the tower base image? Should the operator leverage init containers to install python dependencies and venvs at boot?

Migration failure due to pods stopping

I tried to upgrade today from awx 9.1.1 to 9.2.0 and encountered an error during the migration. It looks as though it tried to run the migration on the previous container being replaced.

I was actually just testing the upgrade earlier in the day and it was successful, so it looks to be timing related, though it doesn't happen every run. A longer delay may be needed to ensure that the new container has finished creating before attempting the migration.

I see that there is already a 5 second delay here, so it may need to be longer; or better still, could we make it configurable through the operator config?

- name: Get the Tower pod information.
  # TODO: Change to k8s_info after Ansible 2.9.0 is available in Operator image.
  k8s_facts:
    kind: Pod
    namespace: '{{ meta.namespace }}'
    label_selectors:
      - app=tower
  register: tower_pods
  until: "tower_pods['resources'][0]['status']['phase'] == 'Running'"
  delay: 5
  retries: 60

Operator output:

--------------------------- Ansible Task StdOut -------------------------------

 TASK [Migrate the database if the K8s resources were updated.] ******************************** 
fatal: [localhost]: FAILED! => {
    "changed": true,
    "cmd": "kubectl exec -n awx awx-tower-tower-web-89c99cb89-6lxgl -- bash -c \"awx-manage migrate --noinput\"",
    "delta": "0:00:00.127667",
    "end": "2020-02-12 14:11:24.460539",
    "invocation": {
        "module_args": {
            "_raw_params": "kubectl exec -n awx awx-tower-tower-web-89c99cb89-6lxgl -- bash -c \"awx-manage migrate --noinput\"",
            "_uses_shell": true,
            "argv": null,
            "chdir": null,
            "creates": null,
            "executable": null,
            "removes": null,
            "stdin": null,
            "stdin_add_newline": true,
            "strip_empty_ends": true,
            "warn": true
        }
    },
    "msg": "non-zero return code",
    "rc": 1,
    "start": "2020-02-12 14:11:24.332872",
    "stderr": "error: unable to upgrade connection: container not found (\"tower\")",
    "stderr_lines": [
        "error: unable to upgrade connection: container not found (\"tower\")"
    ],
    "stdout": "",
    "stdout_lines": []
}

I was able to execute the migration against the new container:

kubectl exec -n awx awx-tower-tower-web-797cd6487f-dc2vh -- bash -c "awx-manage migrate --noinput"
Operations to perform:
  Apply all migrations: auth, conf, contenttypes, main, oauth2_provider, sessions, sites, social_django, sso, taggit
Running migrations:
  Applying main.0102_v370_unifiedjob_canceled... OK
  Applying main.0103_v370_remove_computed_fields... OK
  Applying main.0104_v370_cleanup_old_scan_jts... OK
  Applying main.0105_v370_remove_jobevent_parent_and_hosts... OK
  Applying main.0106_v370_remove_inventory_groups_with_active_failures... OK
  Applying main.0107_v370_workflow_convergence_api_toggle... OK
  Applying main.0108_v370_unifiedjob_dependencies_processed... OK

Events:

24m         Normal    ScalingReplicaSet   deployment/awx-tower-tower-task              Scaled down replica set awx-tower-tower-task-5c4799bdf to 0
25m         Normal    Scheduled           pod/awx-tower-tower-web-797cd6487f-dc2vh     Successfully assigned awx/awx-tower-tower-web-797cd6487f-dc2vh to ip-10-16-2-184.eu-west-1.compute.internal
25m         Normal    Pulling             pod/awx-tower-tower-web-797cd6487f-dc2vh     Pulling image "ansible/awx_web:9.2.0"
24m         Normal    Pulled              pod/awx-tower-tower-web-797cd6487f-dc2vh     Successfully pulled image "ansible/awx_web:9.2.0"
24m         Normal    Created             pod/awx-tower-tower-web-797cd6487f-dc2vh     Created container tower
24m         Normal    Started             pod/awx-tower-tower-web-797cd6487f-dc2vh     Started container tower
25m         Normal    SuccessfulCreate    replicaset/awx-tower-tower-web-797cd6487f    Created pod: awx-tower-tower-web-797cd6487f-dc2vh
24m         Normal    Killing             pod/awx-tower-tower-web-89c99cb89-6lxgl      Stopping container tower
24m         Normal    SuccessfulDelete    replicaset/awx-tower-tower-web-89c99cb89     Deleted pod: awx-tower-tower-web-89c99cb89-6lxgl
25m         Normal    ScalingReplicaSet   deployment/awx-tower-tower-web               Scaled up replica set awx-tower-tower-web-797cd6487f to 1
24m         Normal    ScalingReplicaSet   deployment/awx-tower-tower-web               Scaled down replica set awx-tower-tower-web-89c99cb89 to 0
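Making the delay configurable could look something like this (a sketch; tower_pod_wait_delay and tower_pod_wait_retries are hypothetical variable names, not existing operator options):

```yaml
- name: Get the Tower pod information.
  k8s_facts:
    kind: Pod
    namespace: '{{ meta.namespace }}'
    label_selectors:
      - app=tower
  register: tower_pods
  until: "tower_pods['resources'][0]['status']['phase'] == 'Running'"
  # Hypothetical knobs, falling back to the current hard-coded values:
  delay: "{{ tower_pod_wait_delay | default(5) }}"
  retries: "{{ tower_pod_wait_retries | default(60) }}"
```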

Can't scale the deployments and statefulset

If you scale the deployments and statefulset higher than 1 replica, they get automatically scaled back down to 1. Why is this? How can I make it so I can scale higher than one?

Allow switching between running open source AWX and Ansible Tower

Right now I'm building out everything using open source AWX, just for convenience's sake. But I'm working on building the operator in a way where users could choose between AWX and Tower (if they want support and a license, and all that).

See:

Docs for setup:

Error creating Tower CR

When trying to create a Tower CR with the latest release, I see the following error in the operator logs:

1 plays in /opt/ansible/main.yml

PLAY [localhost] ***************************************************************
META: ran handlers

TASK [tower : Ensure configured Tower resources exist in the cluster.] *********
task path: /opt/ansible/roles/tower/tasks/main.yml:2
failed: [localhost] (item=tower_memcached.yaml.j2) => {"ansible_loop_var": "item", "item": "tower_memcached.yaml.j2", "msg": "Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\". Failed command was: ( umask 77 && mkdir -p \"` echo /.ansible/tmp/ansible-tmp-1590759069.5041451-4700137306081 `\" && echo ansible-tmp-1590759069.5041451-4700137306081=\"` echo /.ansible/tmp/ansible-tmp-1590759069.5041451-4700137306081 `\" ), exited with result 1", "unreachable": true}

AWX network problems

Hi Jeff,

I've copied over ldap settings from our existing awx and added source control credentials in order to add projects.

However, neither LDAP nor git access works (connection timeout). The Kubernetes host can ping the git server.

As far as I understand, the default network policy should allow all. Am I missing something?

Thanks

Switch to k8s_info from k8s_facts

Ansible is switching the k8s_facts module to k8s_info. Should eventually look to update this prior to deprecation.

[DEPRECATION WARNING]: The 'k8s_facts' module has been renamed to 'k8s_info'. 
This feature will be removed in version 2.13. Deprecation warnings can be 
disabled by setting deprecation_warnings=False in ansible.cfg.
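The change itself is mechanical: only the module name changes, and the parameters stay the same. For example:

```yaml
- name: Get the Tower pod information.
  k8s_info:  # renamed from k8s_facts
    kind: Pod
    namespace: '{{ meta.namespace }}'
    label_selectors:
      - app=tower
  register: tower_pods
```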

Avoid creating service account in default namespace

One issue mentioned in the OpenShift issue was that the tower-operator service account gets created in the default namespace. I think it's important to break this out into its own separate issue, as I believe it will cause problems for more than just those running on OpenShift.

The way I see it, there are two issues that need to be resolved here:

Remove namespace from Service Account creation

and

namespace: default

That's the easy part. The less easy part comes next.

How to handle applying the ClusterRole to the ServiceAccount without knowing which namespace/project it is going to be created in:

subjects:
- kind: ServiceAccount
name: tower-operator
namespace: default

and

subjects:
- kind: ServiceAccount
name: tower-operator
namespace: default

The answer, at this point, I think all comes down to the level of complexity you want involved in installing the operator. And this may very well become abstracted once this is hidden behind just being installed from OperatorHub -- but for now I think there are a few options:

  1. Use some templating language so users can provide their values

This could be done with Helm, or with more Ansible and Jinja. It would allow users to provide their values in some other file while installation still happens behind a single command (helm install or ansible-playbook).

  2. Patching

This moves away from a clean, one-command install, but would give the user the ability to define their namespace in the patch command, which would then generate the appropriate YAML for wherever they actually want to install the operator.

ex:

kubectl/oc patch -f tower-operator.yaml -p '{ MY PATCH HERE }' | kubectl/oc apply -f -

  3. Maybe there's something better

I'm not saying that the above are the only two ways. I've been playing around with a handful of other ways that haven't led anywhere at this point:

  • hoping that ClusterRoleBinding would default the namespace to the current namespace context if it wasn't provided
  • Trying to be tricky with the Downward API

So I think there are potentially other ways; these are just suggestions for the short term. Again, this may become a non-issue once something like OperatorHub comes into play, but for now I think it needs to be handled, since otherwise a user who isn't installing into the default namespace faces a bunch of manual intervention to get this running.

Get operator to pass 'operator-sdk scorecard'

Currently this operator doesn't have a validation section for the CRD, nor a spec CSV with CRD fields defined. I'd like to get all that fixed so this operator can pass the operator-sdk scorecard check.

Build alpha operator image version and supply instructions for use in README

See https://github.com/geerlingguy/mcrouter-operator for an example.

Basically, need to build an official alpha1 image and put it up somewhere (e.g. Docker Hub), compile a default operator YAML manifest for deployment into Kubernetes, and finally show people how to deploy the operator in the README.

Might be a good time to also write a blog post about it (maybe... might want to wait until the operator is more fleshed-out).

Make Ingress optional

As always, really nice work @geerlingguy :) You always seem to be one step ahead when I start searching for anything ansible-related on the interwebs.

I'm using Calico CNI, and thus directly reaching pod endpoints via BGP. I don't really need/want Ingress support in my deployments and prefer to use externalIPs. Since operators can be somewhat rigid in terms of implementation/design, is this something you've already considered making optional? Thanks for everything you do!

Ability to use external postgres database

In some use cases, I'd like to only run Tower/AWX inside the cluster, but rely on an external set of Postgres databases that I'm already operating. As part of the operator, I'd like to be able to point to this database instead of spinning one up in the cluster.

404 Not Found - Postgres configuration error

Because my previous installation ran out of space when trying to restore the AWX database, I've reinstalled Ubuntu Server (with OpenSSH server) and installed microk8s.

Now I am unable to get AWX up and running and am not sure why.

These are my steps:

microk8s.enable dns
microk8s.enable storage
microk8s.enable dashboard

Add --allow-privileged to /var/snap/microk8s/current/args/kube-apiserver, followed by microk8s.stop and microk8s.start.

microk8s.kubectl apply -f https://raw.githubusercontent.com/geerlingguy/tower-operator/master/deploy/tower-operator.yaml

microk8s.kubectl create namespace awx

Create a file named awx.yml with the following contents:
apiVersion: tower.ansible.com/v1alpha1
kind: Tower
metadata:
  name: awx
  namespace: awx
spec:
  tower_hostname: hostname
  tower_secret_key: awxsecret
  tower_admin_user: admin
  tower_admin_email: mail
  tower_admin_password: password
  tower_task_image: ansible/awx_task:9.3.0
  tower_web_image: ansible/awx_web:9.3.0
  tower_postgres_pass: awxpass
  tower_postgres_storage_request: 12Gi
microk8s.kubectl apply -f awx.yml

microk8s.kubectl describe pods -n awx
Name:         awx-memcached-587b55d5fd-6mkb4
Namespace:    awx
Priority:     0
Node:         microk8s/192.168.1.119
Start Time:   Thu, 19 Mar 2020 08:26:31 +0000
Labels:       app=tower-memcached
              pod-template-hash=587b55d5fd
Annotations:  <none>
Status:       Running
IP:           10.1.9.23
IPs:
  IP:           10.1.9.23
Controlled By:  ReplicaSet/awx-memcached-587b55d5fd
Containers:
  memcached:
    Container ID:   containerd://38a91f0d34c920e0c3112d8b5064ba57dce5e3203facc634389f2e0da550ff0e
    Image:          memcached:alpine
    Image ID:       docker.io/library/memcached@sha256:891a989217a70d9b703bc93ea63e87c9853c0590458f2fec44c7cb95fa224858
    Port:           11211/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 19 Mar 2020 08:26:33 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-sqw2l (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-sqw2l:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-sqw2l
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age        From               Message
  ----    ------     ----       ----               -------
  Normal  Scheduled  <unknown>  default-scheduler  Successfully assigned awx/awx-memcached-587b55d5fd-6mkb4 to microk8s
  Normal  Pulled     16m        kubelet, microk8s  Container image "memcached:alpine" already present on machine
  Normal  Created    16m        kubelet, microk8s  Created container memcached
  Normal  Started    16m        kubelet, microk8s  Started container memcached


Name:         awx-postgres-0
Namespace:    awx
Priority:     0
Node:         microk8s/192.168.1.119
Start Time:   Thu, 19 Mar 2020 08:26:34 +0000
Labels:       app=tower-postgres
              controller-revision-hash=awx-postgres-5599b677
              statefulset.kubernetes.io/pod-name=awx-postgres-0
Annotations:  <none>
Status:       Running
IP:           10.1.9.24
IPs:
  IP:           10.1.9.24
Controlled By:  StatefulSet/awx-postgres
Containers:
  postgres:
    Container ID:   containerd://f8c85ef36086c3db42f27911c80644c0d20e7dc0179f38af5326d86641620f1f
    Image:          postgres:10
    Image ID:       docker.io/library/postgres@sha256:73d3ac7b17b8cd2122d27026ec3552080e8aaea95fef0b6e671fa795ac547f94
    Port:           3306/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 19 Mar 2020 08:26:36 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      POSTGRES_DB:        awx
      POSTGRES_USER:      awx
      POSTGRES_PASSWORD:  <set to the key 'password' in secret 'awx-postgres-pass'>  Optional: false
    Mounts:
      /var/lib/postgresql/data from postgres (rw,path="data")
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-sqw2l (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  postgres:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  postgres-awx-postgres-0
    ReadOnly:   false
  default-token-sqw2l:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-sqw2l
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  error while running "VolumeBinding" filter plugin for pod "awx-postgres-0": pod has unbound immediate PersistentVolumeClaims
  Warning  FailedScheduling  <unknown>  default-scheduler  error while running "VolumeBinding" filter plugin for pod "awx-postgres-0": pod has unbound immediate PersistentVolumeClaims
  Normal   Scheduled         <unknown>  default-scheduler  Successfully assigned awx/awx-postgres-0 to microk8s
  Normal   Pulled            16m        kubelet, microk8s  Container image "postgres:10" already present on machine
  Normal   Created           16m        kubelet, microk8s  Created container postgres
  Normal   Started           16m        kubelet, microk8s  Started container postgres


Name:         awx-rabbitmq-7f8f6ff647-5gtmg
Namespace:    awx
Priority:     0
Node:         microk8s/192.168.1.119
Start Time:   Thu, 19 Mar 2020 08:26:34 +0000
Labels:       app=tower-rabbitmq
              pod-template-hash=7f8f6ff647
Annotations:  <none>
Status:       Running
IP:           10.1.9.25
IPs:
  IP:           10.1.9.25
Controlled By:  ReplicaSet/awx-rabbitmq-7f8f6ff647
Containers:
  rabbitmq:
    Container ID:   containerd://3eadbeb7f263212109e9a55f54672f0bb7ea3ae5afa81a2123bf51463581e991
    Image:          rabbitmq:3
    Image ID:       docker.io/library/rabbitmq@sha256:b20295815348317f0d8cc89051154df6c39fdc92b0f83f57cc591e191c484e8b
    Ports:          15672/TCP, 5672/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Running
      Started:      Thu, 19 Mar 2020 08:26:36 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      RABBITMQ_DEFAULT_VHOST:  awx
      RABBITMQ_NODE_PORT:      5672
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-sqw2l (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-sqw2l:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-sqw2l
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age        From               Message
  ----    ------     ----       ----               -------
  Normal  Scheduled  <unknown>  default-scheduler  Successfully assigned awx/awx-rabbitmq-7f8f6ff647-5gtmg to microk8s
  Normal  Pulled     16m        kubelet, microk8s  Container image "rabbitmq:3" already present on machine
  Normal  Created    16m        kubelet, microk8s  Created container rabbitmq
  Normal  Started    16m        kubelet, microk8s  Started container rabbitmq


Name:         awx-tower-task-6f47bb89c5-6b299
Namespace:    awx
Priority:     0
Node:         microk8s/192.168.1.119
Start Time:   Thu, 19 Mar 2020 08:26:38 +0000
Labels:       app=tower-task
              pod-template-hash=6f47bb89c5
Annotations:  <none>
Status:       Running
IP:           10.1.9.27
IPs:
  IP:           10.1.9.27
Controlled By:  ReplicaSet/awx-tower-task-6f47bb89c5
Containers:
  tower-task:
    Container ID:  containerd://04ffcba2fa29b053984da4ce61dd3f3c90ebb83ed82453472921811a0a09a34c
    Image:         ansible/awx_task:9.3.0
    Image ID:      docker.io/ansible/awx_task@sha256:be02eed7970804856f32fbb99385ee13e7da31edca6602d7d0514c2b44b2044f
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/bin/launch_awx_task.sh
    State:          Running
      Started:      Thu, 19 Mar 2020 08:26:43 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     500m
      memory:  1Gi
    Environment Variables from:
      awx-tower-configmap  ConfigMap  Optional: false
      awx-tower-secret     Secret     Optional: false
    Environment:           <none>
    Mounts:
      /etc/tower/SECRET_KEY from secret-key (ro,path="SECRET_KEY")
      /etc/tower/conf.d/environment.sh from environment (ro,path="environment.sh")
      /etc/tower/settings.py from settings (ro,path="settings.py")
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-sqw2l (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  secret-key:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  awx-tower-secret
    Optional:    false
  environment:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      awx-tower-configmap
    Optional:  false
  settings:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      awx-tower-configmap
    Optional:  false
  default-token-sqw2l:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-sqw2l
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age        From               Message
  ----    ------     ----       ----               -------
  Normal  Scheduled  <unknown>  default-scheduler  Successfully assigned awx/awx-tower-task-6f47bb89c5-6b299 to microk8s
  Normal  Pulling    16m        kubelet, microk8s  Pulling image "ansible/awx_task:9.3.0"
  Normal  Pulled     16m        kubelet, microk8s  Successfully pulled image "ansible/awx_task:9.3.0"
  Normal  Created    16m        kubelet, microk8s  Created container tower-task
  Normal  Started    16m        kubelet, microk8s  Started container tower-task


Name:         awx-tower-web-c98cd6555-jtzfg
Namespace:    awx
Priority:     0
Node:         microk8s/192.168.1.119
Start Time:   Thu, 19 Mar 2020 08:26:37 +0000
Labels:       app=tower
              pod-template-hash=c98cd6555
Annotations:  <none>
Status:       Running
IP:           10.1.9.26
IPs:
  IP:           10.1.9.26
Controlled By:  ReplicaSet/awx-tower-web-c98cd6555
Containers:
  tower:
    Container ID:   containerd://c32c89a8657d58ba5529ef0d96679094f0bebbbd61bd4dbd354afdd2e46c160a
    Image:          ansible/awx_web:9.3.0
    Image ID:       docker.io/ansible/awx_web@sha256:e3716cce276a9774650a4fbbb5d80c98fa734db633e8ae4ea661d178c23b89df
    Port:           8052/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 19 Mar 2020 08:26:39 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        1
      memory:     2Gi
    Environment:  <none>
    Mounts:
      /etc/nginx/nginx.conf from nginx-conf (ro,path="nginx.conf")
      /etc/tower/SECRET_KEY from secret-key (ro,path="SECRET_KEY")
      /etc/tower/conf.d/environment.sh from environment (ro,path="environment.sh")
      /etc/tower/settings.py from settings (ro,path="settings.py")
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-sqw2l (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  secret-key:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  awx-tower-secret
    Optional:    false
  environment:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      awx-tower-configmap
    Optional:  false
  settings:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      awx-tower-configmap
    Optional:  false
  nginx-conf:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      awx-tower-configmap
    Optional:  false
  default-token-sqw2l:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-sqw2l
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age        From               Message
  ----    ------     ----       ----               -------
  Normal  Scheduled  <unknown>  default-scheduler  Successfully assigned awx/awx-tower-web-c98cd6555-jtzfg to microk8s
  Normal  Pulled     16m        kubelet, microk8s  Container image "ansible/awx_web:9.3.0" already present on machine
  Normal  Created    16m        kubelet, microk8s  Created container tower
  Normal  Started    16m        kubelet, microk8s  Started container tower

Neither the Postgres log nor Google searches show any useful information about this error:

default-scheduler error while running "VolumeBinding" filter plugin for pod "awx-postgres-0": pod has unbound immediate PersistentVolumeClaims

The Postgres data files are present in /var/snap/microk8s/common/default-storage/awx-postgres-awx-postgres-0-pvc-e1ca6588-3780-41bb-820e-2321f6e60e1c/data
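The "pod has unbound immediate PersistentVolumeClaims" message usually means no StorageClass could satisfy the PVC at scheduling time. A few diagnostic commands that may help narrow this down (standard kubectl; the `microk8s enable storage` addon is the usual way to get the hostpath provisioner on microk8s, so treat that last step as an assumption for other setups):

```shell
# Check whether the postgres PVC ever bound, and which StorageClass (if any) is marked default
kubectl get pvc -n awx
kubectl get storageclass

# Inspect the scheduling events on the pending pod
kubectl describe pod awx-postgres-0 -n awx

# On microk8s, the hostpath storage provisioner is an addon and must be enabled
microk8s enable storage
```

If the PVC shows `Pending` with no default StorageClass listed, the StatefulSet's volumeClaimTemplate has nothing to provision against.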

The awx-tower-web-xxx log output shows:

could not connect to server: Connection timed out Is the server running on host "awx-postgres.awx.svc.cluster.local" (91.201.60.73) and accepting TCP/IP connections on port 5432?
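Worth noting: 91.201.60.73 is not an address from the cluster's pod network (the pods above are on 10.1.9.x), which suggests the headless service name fell through to an upstream wildcard DNS resolver because awx-postgres has no ready endpoints while the pod sits in Pending. A quick way to confirm (standard kubectl; the busybox image is used only for the lookup and is an assumption):

```shell
# A headless service only resolves to pod IPs once it has endpoints
kubectl get endpoints awx-postgres -n awx

# Resolve the service name from inside the cluster
kubectl run -n awx dns-test --rm -it --restart=Never --image=busybox -- \
  nslookup awx-postgres.awx.svc.cluster.local
```

If the endpoints list is empty and the lookup returns a public IP, fixing the PVC binding above should resolve the connection timeout as well.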

Ansible output:

ansible-playbook 2.9.6
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/usr/share/ansible/openshift']
  ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.6.8 (default, Oct 11 2019, 15:04:54) [GCC 8.3.1 20190507 (Red Hat 8.3.1-4)]
Using /etc/ansible/ansible.cfg as config file

PLAYBOOK: main.yml *************************************************************
1 plays in /opt/ansible/main.yml

PLAY [localhost] ***************************************************************
META: ran handlers

TASK [tower : Ensure configured Tower resources exist in the cluster.] *********
task path: /opt/ansible/roles/tower/tasks/main.yml:2
changed: [localhost] => (item=tower_memcached.yaml.j2) => {"ansible_loop_var": "item", "changed": true, "item": "tower_memcached.yaml.j2", "result": {"results": [{"changed": true, "method": "create", "result": {"apiVersion": "apps/v1", "kind": "Deployment", "metadata": {"creationTimestamp": "2020-03-19T08:26:31Z", "generation": 1, "labels": {"app": "tower-memcached"}, "name": "awx-memcached", "namespace": "awx", "ownerReferences": [{"apiVersion": "tower.ansible.com/v1alpha1", "kind": "Tower", "name": "awx", "uid": "fcc3dc9d-0c52-4523-a428-fa5cfa203864"}], "resourceVersion": "5630", "selfLink": "/apis/apps/v1/namespaces/awx/deployments/awx-memcached", "uid": "4117354d-2673-454c-959b-b06f917726dc"}, "spec": {"progressDeadlineSeconds": 600, "replicas": 1, "revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "tower-memcached"}}, "strategy": {"rollingUpdate": {"maxSurge": "25%", "maxUnavailable": "25%"}, "type": "RollingUpdate"}, "template": {"metadata": {"creationTimestamp": null, "labels": {"app": "tower-memcached"}}, "spec": {"containers": [{"image": "memcached:alpine", "imagePullPolicy": "IfNotPresent", "name": "memcached", "ports": [{"containerPort": 11211, "protocol": "TCP"}], "resources": {}, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File"}], "dnsPolicy": "ClusterFirst", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "terminationGracePeriodSeconds": 30}}}, "status": {}}, "warnings": []}, {"changed": true, "method": "create", "result": {"apiVersion": "v1", "kind": "Service", "metadata": {"creationTimestamp": "2020-03-19T08:26:31Z", "labels": {"app": "tower-memcached"}, "name": "awx-memcached", "namespace": "awx", "ownerReferences": [{"apiVersion": "tower.ansible.com/v1alpha1", "kind": "Tower", "name": "awx", "uid": "fcc3dc9d-0c52-4523-a428-fa5cfa203864"}], "resourceVersion": "5643", "selfLink": "/api/v1/namespaces/awx/services/awx-memcached", "uid": 
"9898bcda-a845-4013-bb17-6535d8aac57b"}, "spec": {"clusterIP": "None", "ports": [{"port": 11211, "protocol": "TCP", "targetPort": 11211}], "selector": {"app": "tower-memcached"}, "sessionAffinity": "None", "type": "ClusterIP"}, "status": {"loadBalancer": {}}}, "warnings": []}]}}
changed: [localhost] => (item=tower_postgres.yaml.j2) => {"ansible_loop_var": "item", "changed": true, "item": "tower_postgres.yaml.j2", "result": {"results": [{"changed": true, "method": "create", "result": {"apiVersion": "v1", "data": {"password": "YXd4cGFzczk="}, "kind": "Secret", "metadata": {"creationTimestamp": "2020-03-19T08:26:33Z", "name": "awx-postgres-pass", "namespace": "awx", "ownerReferences": [{"apiVersion": "tower.ansible.com/v1alpha1", "kind": "Tower", "name": "awx", "uid": "fcc3dc9d-0c52-4523-a428-fa5cfa203864"}], "resourceVersion": "5649", "selfLink": "/api/v1/namespaces/awx/secrets/awx-postgres-pass", "uid": "67042d2a-41a1-486a-97d7-525164ef8dc1"}, "type": "Opaque"}, "warnings": []}, {"changed": true, "method": "create", "result": {"apiVersion": "apps/v1", "kind": "StatefulSet", "metadata": {"creationTimestamp": "2020-03-19T08:26:33Z", "generation": 1, "labels": {"app": "tower-postgres"}, "name": "awx-postgres", "namespace": "awx", "ownerReferences": [{"apiVersion": "tower.ansible.com/v1alpha1", "kind": "Tower", "name": "awx", "uid": "fcc3dc9d-0c52-4523-a428-fa5cfa203864"}], "resourceVersion": "5650", "selfLink": "/apis/apps/v1/namespaces/awx/statefulsets/awx-postgres", "uid": "28befb75-ee99-4f78-9894-38a90f14d29f"}, "spec": {"podManagementPolicy": "OrderedReady", "replicas": 1, "revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "tower-postgres"}}, "serviceName": "awx", "template": {"metadata": {"creationTimestamp": null, "labels": {"app": "tower-postgres"}}, "spec": {"containers": [{"env": [{"name": "POSTGRES_DB", "value": "awx"}, {"name": "POSTGRES_USER", "value": "awx"}, {"name": "POSTGRES_PASSWORD", "valueFrom": {"secretKeyRef": {"key": "password", "name": "awx-postgres-pass"}}}], "image": "postgres:10", "imagePullPolicy": "IfNotPresent", "name": "postgres", "ports": [{"containerPort": 3306, "name": "postgres", "protocol": "TCP"}], "resources": {}, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": 
"File", "volumeMounts": [{"mountPath": "/var/lib/postgresql/data", "name": "postgres", "subPath": "data"}]}], "dnsPolicy": "ClusterFirst", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "terminationGracePeriodSeconds": 30}}, "updateStrategy": {"type": "RollingUpdate"}, "volumeClaimTemplates": [{"apiVersion": "v1", "kind": "PersistentVolumeClaim", "metadata": {"creationTimestamp": null, "name": "postgres"}, "spec": {"accessModes": ["ReadWriteOnce"], "resources": {"requests": {"storage": "12Gi"}}, "volumeMode": "Filesystem"}, "status": {"phase": "Pending"}}]}, "status": {"replicas": 0}}, "warnings": []}, {"changed": true, "method": "create", "result": {"apiVersion": "v1", "kind": "Service", "metadata": {"creationTimestamp": "2020-03-19T08:26:33Z", "labels": {"app": "tower-postgres"}, "name": "awx-postgres", "namespace": "awx", "ownerReferences": [{"apiVersion": "tower.ansible.com/v1alpha1", "kind": "Tower", "name": "awx", "uid": "fcc3dc9d-0c52-4523-a428-fa5cfa203864"}], "resourceVersion": "5658", "selfLink": "/api/v1/namespaces/awx/services/awx-postgres", "uid": "a0c286ca-6585-4036-9647-91b3f3264060"}, "spec": {"clusterIP": "None", "ports": [{"port": 5432, "protocol": "TCP", "targetPort": 5432}], "selector": {"app": "tower-postgres"}, "sessionAffinity": "None", "type": "ClusterIP"}, "status": {"loadBalancer": {}}}, "warnings": []}]}}
changed: [localhost] => (item=tower_rabbitmq.yaml.j2) => {"ansible_loop_var": "item", "changed": true, "item": "tower_rabbitmq.yaml.j2", "result": {"results": [{"changed": true, "method": "create", "result": {"apiVersion": "apps/v1", "kind": "Deployment", "metadata": {"creationTimestamp": "2020-03-19T08:26:34Z", "generation": 1, "labels": {"app": "tower-rabbitmq"}, "name": "awx-rabbitmq", "namespace": "awx", "ownerReferences": [{"apiVersion": "tower.ansible.com/v1alpha1", "kind": "Tower", "name": "awx", "uid": "fcc3dc9d-0c52-4523-a428-fa5cfa203864"}], "resourceVersion": "5684", "selfLink": "/apis/apps/v1/namespaces/awx/deployments/awx-rabbitmq", "uid": "546e6ade-3f39-43e6-9a5d-c8b2c1162e68"}, "spec": {"progressDeadlineSeconds": 600, "replicas": 1, "revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "tower-rabbitmq"}}, "strategy": {"rollingUpdate": {"maxSurge": "25%", "maxUnavailable": "25%"}, "type": "RollingUpdate"}, "template": {"metadata": {"creationTimestamp": null, "labels": {"app": "tower-rabbitmq"}}, "spec": {"containers": [{"env": [{"name": "RABBITMQ_DEFAULT_VHOST", "value": "awx"}, {"name": "RABBITMQ_NODE_PORT", "value": "5672"}], "image": "rabbitmq:3", "imagePullPolicy": "IfNotPresent", "name": "rabbitmq", "ports": [{"containerPort": 15672, "protocol": "TCP"}, {"containerPort": 5672, "protocol": "TCP"}], "resources": {}, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File"}], "dnsPolicy": "ClusterFirst", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "terminationGracePeriodSeconds": 30}}}, "status": {}}, "warnings": []}, {"changed": true, "method": "create", "result": {"apiVersion": "v1", "kind": "Service", "metadata": {"creationTimestamp": "2020-03-19T08:26:34Z", "labels": {"app": "tower-rabbitmq"}, "name": "awx-rabbitmq", "namespace": "awx", "ownerReferences": [{"apiVersion": "tower.ansible.com/v1alpha1", "kind": "Tower", "name": "awx", "uid": 
"fcc3dc9d-0c52-4523-a428-fa5cfa203864"}], "resourceVersion": "5689", "selfLink": "/api/v1/namespaces/awx/services/awx-rabbitmq", "uid": "ffd04c66-5789-4c92-a205-97b6a57656b2"}, "spec": {"clusterIP": "None", "ports": [{"port": 5672, "protocol": "TCP", "targetPort": 5672}], "selector": {"app": "tower-rabbitmq"}, "sessionAffinity": "None", "type": "ClusterIP"}, "status": {"loadBalancer": {}}}, "warnings": []}]}}
changed: [localhost] => (item=tower_config.yaml.j2) => {"ansible_loop_var": "item", "changed": true, "item": "tower_config.yaml.j2", "method": "create", "result": {"apiVersion": "v1", "data": {"environment": "DATABASE_USER=awx\nDATABASE_NAME=awx\nDATABASE_HOST='awx-postgres.awx.svc.cluster.local'\nDATABASE_PORT='5432'\nDATABASE_PASSWORD=awxpass9\nMEMCACHED_HOST='awx-memcached.awx.svc.cluster.local'\nMEMCACHED_PORT='11211'\nRABBITMQ_HOST='awx-rabbitmq.awx.svc.cluster.local'\nRABBITMQ_PORT='5672'\nAWX_SKIP_MIGRATIONS=true\n", "nginx_conf": "worker_processes  1;\npid        /tmp/nginx.pid;\n\nevents {\n    worker_connections  1024;\n}\n\nhttp {\n    include       /etc/nginx/mime.types;\n    default_type  application/octet-stream;\n    server_tokens off;\n\n    log_format  main  '$remote_addr - $remote_user [$time_local] \"$request\" '\n                      '$status $body_bytes_sent \"$http_referer\" '\n                      '\"$http_user_agent\" \"$http_x_forwarded_for\"';\n\n    access_log /dev/stdout main;\n\n    map $http_upgrade $connection_upgrade {\n        default upgrade;\n        ''      close;\n    }\n\n    sendfile        on;\n    #tcp_nopush     on;\n    #gzip  on;\n\n    upstream uwsgi {\n        server 127.0.0.1:8050;\n    }\n\n    upstream daphne {\n        server 127.0.0.1:8051;\n    }\n\n    server {\n        listen 8052 default_server;\n\n        # If you have a domain name, this is where to add it\n        server_name _;\n        keepalive_timeout 65;\n\n        # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)\n        add_header Strict-Transport-Security max-age=15768000;\n        add_header Content-Security-Policy \"default-src 'self'; connect-src 'self' ws: wss:; style-src 'self' 'unsafe-inline'; script-src 'self' 'unsafe-inline' *.pendo.io; img-src 'self' *.pendo.io data:; report-uri /csp-violation/\";\n        add_header 
X-Content-Security-Policy \"default-src 'self'; connect-src 'self' ws: wss:; style-src 'self' 'unsafe-inline'; script-src 'self' 'unsafe-inline' *.pendo.io; img-src 'self' *.pendo.io data:; report-uri /csp-violation/\";\n\n        # Protect against click-jacking https://www.owasp.org/index.php/Testing_for_Clickjacking_(OTG-CLIENT-009)\n        add_header X-Frame-Options \"DENY\";\n\n        location /nginx_status {\n            stub_status on;\n            access_log off;\n            allow 127.0.0.1;\n            deny all;\n        }\n\n        location /static/ {\n            alias /var/lib/awx/public/static/;\n        }\n\n        location /favicon.ico {\n            alias /var/lib/awx/public/static/favicon.ico;\n        }\n\n        location /websocket {\n            # Pass request to the upstream alias\n            proxy_pass http://daphne;\n            # Require http version 1.1 to allow for upgrade requests\n            proxy_http_version 1.1;\n            # We want proxy_buffering off for proxying to websockets.\n            proxy_buffering off;\n            # http://en.wikipedia.org/wiki/X-Forwarded-For\n            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n            # enable this if you use HTTPS:\n            proxy_set_header X-Forwarded-Proto https;\n            # pass the Host: header from the client for the sake of redirects\n            proxy_set_header Host $http_host;\n            # We've set the Host header, so we don't need Nginx to muddle\n            # about with redirects\n            proxy_redirect off;\n            # Depending on the request value, set the Upgrade and\n            # connection headers\n            proxy_set_header Upgrade $http_upgrade;\n            proxy_set_header Connection $connection_upgrade;\n        }\n\n        location / {\n            # Add trailing / if missing\n            rewrite ^(.*)$http_host(.*[^/])$ $1$http_host$2/ permanent;\n            uwsgi_read_timeout 120s;\n            
uwsgi_pass uwsgi;\n            include /etc/nginx/uwsgi_params;                proxy_set_header X-Forwarded-Port 443;\n        }\n    }\n}\n", "settings": "import os\nimport socket\n\ndef get_secret():\n    if os.path.exists(\"/etc/tower/SECRET_KEY\"):\n        return open('/etc/tower/SECRET_KEY', 'rb').read().strip()\n\nADMINS = ()\nSTATIC_ROOT = '/var/lib/awx/public/static'\nPROJECTS_ROOT = '/var/lib/awx/projects'\nJOBOUTPUT_ROOT = '/var/lib/awx/job_status'\n\nSECRET_KEY = get_secret()\n\nALLOWED_HOSTS = ['*']\n\nINTERNAL_API_URL = 'http://127.0.0.1:8052'\n\n# Container environments don't like chroots\nAWX_PROOT_ENABLED = False\n\n# Automatically deprovision pods that go offline\nAWX_AUTO_DEPROVISION_INSTANCES = True\n\nCLUSTER_HOST_ID = socket.gethostname()\nSYSTEM_UUID = '00000000-0000-0000-0000-000000000000'\n\nCSRF_COOKIE_SECURE = False\nSESSION_COOKIE_SECURE = False\n\nSERVER_EMAIL = 'root@localhost'\nDEFAULT_FROM_EMAIL = 'webmaster@localhost'\nEMAIL_SUBJECT_PREFIX = '[AWX] '\n\nEMAIL_HOST = 'localhost'\nEMAIL_PORT = 25\nEMAIL_HOST_USER = ''\nEMAIL_HOST_PASSWORD = ''\nEMAIL_USE_TLS = False\n\nLOGGING['handlers']['console'] = {\n    '()': 'logging.StreamHandler',\n    'level': 'DEBUG',\n    'formatter': 'simple',\n}\n\nLOGGING['loggers']['django.request']['handlers'] = ['console']\nLOGGING['loggers']['rest_framework.request']['handlers'] = ['console']\nLOGGING['loggers']['awx']['handlers'] = ['console', 'external_logger']\nLOGGING['loggers']['awx.main.commands.run_callback_receiver']['handlers'] = ['console']\nLOGGING['loggers']['awx.main.tasks']['handlers'] = ['console', 'external_logger']\nLOGGING['loggers']['awx.main.scheduler']['handlers'] = ['console', 'external_logger']\nLOGGING['loggers']['django_auth_ldap']['handlers'] = ['console']\nLOGGING['loggers']['social']['handlers'] = ['console']\nLOGGING['loggers']['system_tracking_migrations']['handlers'] = ['console']\nLOGGING['loggers']['rbac_migrations']['handlers'] = 
['console']\nLOGGING['loggers']['awx.isolated.manager.playbooks']['handlers'] = ['console']\nLOGGING['handlers']['callback_receiver'] = {'class': 'logging.NullHandler'}\nLOGGING['handlers']['task_system'] = {'class': 'logging.NullHandler'}\nLOGGING['handlers']['tower_warnings'] = {'class': 'logging.NullHandler'}\nLOGGING['handlers']['rbac_migrations'] = {'class': 'logging.NullHandler'}\nLOGGING['handlers']['system_tracking_migrations'] = {'class': 'logging.NullHandler'}\nLOGGING['handlers']['management_playbooks'] = {'class': 'logging.NullHandler'}\n\nDATABASES = {\n    'default': {\n        'ATOMIC_REQUESTS': True,\n        'ENGINE': 'awx.main.db.profiled_pg',\n        'NAME': 'awx',\n        'USER': 'awx',\n        'PASSWORD': 'awxpass9',\n        'HOST': 'awx-postgres.awx.svc.cluster.local',\n        'PORT': '5432',\n    }\n}\n\nif os.getenv(\"DATABASE_SSLMODE\", False):\n    DATABASES['default']['OPTIONS'] = {'sslmode': os.getenv(\"DATABASE_SSLMODE\")}\n\nCACHES = {\n    'default': {\n        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',\n        'LOCATION': '{}:{}'.format(\"awx-memcached.awx.svc.cluster.local\", \"11211\")\n    },\n    'ephemeral': {\n        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',\n    },\n}\n\nBROKER_URL = 'amqp://{}:{}@{}:{}/{}'.format(\n    'guest',\n    'guest',\n    'awx-rabbitmq.awx.svc.cluster.local',\n    '5672',\n    'awx')\n\nCHANNEL_LAYERS = {\n    'default': {'BACKEND': 'asgi_amqp.AMQPChannelLayer',\n                'ROUTING': 'awx.main.routing.channel_routing',\n                'CONFIG': {'url': BROKER_URL}}\n}\n\nUSE_X_FORWARDED_PORT = True\n"}, "kind": "ConfigMap", "metadata": {"creationTimestamp": "2020-03-19T08:26:35Z", "labels": {"app": "tower"}, "name": "awx-tower-configmap", "namespace": "awx", "ownerReferences": [{"apiVersion": "tower.ansible.com/v1alpha1", "kind": "Tower", "name": "awx", "uid": "fcc3dc9d-0c52-4523-a428-fa5cfa203864"}], "resourceVersion": "5710", "selfLink": 
"/api/v1/namespaces/awx/configmaps/awx-tower-configmap", "uid": "7d37fc8d-940a-4907-8de7-d20517f103a1"}}}
changed: [localhost] => (item=tower_web.yaml.j2) => {"ansible_loop_var": "item", "changed": true, "item": "tower_web.yaml.j2", "result": {"results": [{"changed": true, "method": "create", "result": {"apiVersion": "v1", "data": {"admin_password": "cHdBbjgyIXNp", "secret_key": "YXd4c2VjcmV0"}, "kind": "Secret", "metadata": {"creationTimestamp": "2020-03-19T08:26:37Z", "name": "awx-tower-secret", "namespace": "awx", "ownerReferences": [{"apiVersion": "tower.ansible.com/v1alpha1", "kind": "Tower", "name": "awx", "uid": "fcc3dc9d-0c52-4523-a428-fa5cfa203864"}], "resourceVersion": "5722", "selfLink": "/api/v1/namespaces/awx/secrets/awx-tower-secret", "uid": "1ad7948a-efe5-482f-bd1b-438347200844"}, "type": "Opaque"}, "warnings": []}, {"changed": true, "method": "create", "result": {"apiVersion": "apps/v1", "kind": "Deployment", "metadata": {"creationTimestamp": "2020-03-19T08:26:37Z", "generation": 1, "labels": {"app": "tower"}, "name": "awx-tower-web", "namespace": "awx", "ownerReferences": [{"apiVersion": "tower.ansible.com/v1alpha1", "kind": "Tower", "name": "awx", "uid": "fcc3dc9d-0c52-4523-a428-fa5cfa203864"}], "resourceVersion": "5723", "selfLink": "/apis/apps/v1/namespaces/awx/deployments/awx-tower-web", "uid": "fd0a1235-e2c0-4b16-856b-f95b94083831"}, "spec": {"progressDeadlineSeconds": 600, "replicas": 1, "revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "tower"}}, "strategy": {"rollingUpdate": {"maxSurge": "25%", "maxUnavailable": "25%"}, "type": "RollingUpdate"}, "template": {"metadata": {"creationTimestamp": null, "labels": {"app": "tower"}}, "spec": {"containers": [{"image": "ansible/awx_web:9.3.0", "imagePullPolicy": "IfNotPresent", "name": "tower", "ports": [{"containerPort": 8052, "protocol": "TCP"}], "resources": {"requests": {"cpu": "1", "memory": "2Gi"}}, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [{"mountPath": "/etc/tower/SECRET_KEY", "name": "secret-key", "readOnly": true, 
"subPath": "SECRET_KEY"}, {"mountPath": "/etc/tower/conf.d/environment.sh", "name": "environment", "readOnly": true, "subPath": "environment.sh"}, {"mountPath": "/etc/tower/settings.py", "name": "settings", "readOnly": true, "subPath": "settings.py"}, {"mountPath": "/etc/nginx/nginx.conf", "name": "nginx-conf", "readOnly": true, "subPath": "nginx.conf"}]}], "dnsPolicy": "ClusterFirst", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "terminationGracePeriodSeconds": 30, "volumes": [{"name": "secret-key", "secret": {"defaultMode": 420, "items": [{"key": "secret_key", "path": "SECRET_KEY"}], "secretName": "awx-tower-secret"}}, {"configMap": {"defaultMode": 420, "items": [{"key": "environment", "path": "environment.sh"}], "name": "awx-tower-configmap"}, "name": "environment"}, {"configMap": {"defaultMode": 420, "items": [{"key": "settings", "path": "settings.py"}], "name": "awx-tower-configmap"}, "name": "settings"}, {"configMap": {"defaultMode": 420, "items": [{"key": "nginx_conf", "path": "nginx.conf"}], "name": "awx-tower-configmap"}, "name": "nginx-conf"}]}}}, "status": {}}, "warnings": []}, {"changed": true, "method": "create", "result": {"apiVersion": "v1", "kind": "Service", "metadata": {"creationTimestamp": "2020-03-19T08:26:37Z", "labels": {"app": "tower"}, "name": "awx-service", "namespace": "awx", "ownerReferences": [{"apiVersion": "tower.ansible.com/v1alpha1", "kind": "Tower", "name": "awx", "uid": "fcc3dc9d-0c52-4523-a428-fa5cfa203864"}], "resourceVersion": "5729", "selfLink": "/api/v1/namespaces/awx/services/awx-service", "uid": "90f806f0-93e5-45fd-9a52-a1b8b90f001a"}, "spec": {"clusterIP": "10.152.183.143", "ports": [{"port": 80, "protocol": "TCP", "targetPort": 8052}], "selector": {"app": "tower"}, "sessionAffinity": "None", "type": "ClusterIP"}, "status": {"loadBalancer": {}}}, "warnings": []}, {"changed": true, "method": "create", "result": {"apiVersion": "extensions/v1beta1", "kind": "Ingress", "metadata": 
{"creationTimestamp": "2020-03-19T08:26:37Z", "generation": 1, "name": "awx-ingress", "namespace": "awx", "ownerReferences": [{"apiVersion": "tower.ansible.com/v1alpha1", "kind": "Tower", "name": "awx", "uid": "fcc3dc9d-0c52-4523-a428-fa5cfa203864"}], "resourceVersion": "5739", "selfLink": "/apis/extensions/v1beta1/namespaces/awx/ingresses/awx-ingress", "uid": "a3b64afc-bd34-4f13-a98b-428eafed2ad5"}, "spec": {"rules": [{"host": "microk8s", "http": {"paths": [{"backend": {"serviceName": "awx-service", "servicePort": 80}, "path": "/"}]}}]}, "status": {"loadBalancer": {}}}, "warnings": []}]}}
changed: [localhost] => (item=tower_task.yaml.j2) => {"ansible_loop_var": "item", "changed": true, "item": "tower_task.yaml.j2", "method": "create", "result": {"apiVersion": "apps/v1", "kind": "Deployment", "metadata": {"creationTimestamp": "2020-03-19T08:26:38Z", "generation": 1, "labels": {"app": "tower-task"}, "name": "awx-tower-task", "namespace": "awx", "ownerReferences": [{"apiVersion": "tower.ansible.com/v1alpha1", "kind": "Tower", "name": "awx", "uid": "fcc3dc9d-0c52-4523-a428-fa5cfa203864"}], "resourceVersion": "5744", "selfLink": "/apis/apps/v1/namespaces/awx/deployments/awx-tower-task", "uid": "671ee6d5-24f4-42fa-928e-e93281581bbf"}, "spec": {"progressDeadlineSeconds": 600, "replicas": 1, "revisionHistoryLimit": 10, "selector": {"matchLabels": {"app": "tower-task"}}, "strategy": {"rollingUpdate": {"maxSurge": "25%", "maxUnavailable": "25%"}, "type": "RollingUpdate"}, "template": {"metadata": {"creationTimestamp": null, "labels": {"app": "tower-task"}}, "spec": {"containers": [{"command": ["/usr/bin/launch_awx_task.sh"], "envFrom": [{"configMapRef": {"name": "awx-tower-configmap"}}, {"secretRef": {"name": "awx-tower-secret"}}], "image": "ansible/awx_task:9.3.0", "imagePullPolicy": "IfNotPresent", "name": "tower-task", "resources": {"requests": {"cpu": "500m", "memory": "1Gi"}}, "securityContext": {"privileged": true}, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [{"mountPath": "/etc/tower/SECRET_KEY", "name": "secret-key", "readOnly": true, "subPath": "SECRET_KEY"}, {"mountPath": "/etc/tower/conf.d/environment.sh", "name": "environment", "readOnly": true, "subPath": "environment.sh"}, {"mountPath": "/etc/tower/settings.py", "name": "settings", "readOnly": true, "subPath": "settings.py"}]}], "dnsPolicy": "ClusterFirst", "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "terminationGracePeriodSeconds": 30, "volumes": [{"name": "secret-key", "secret": 
{"defaultMode": 420, "items": [{"key": "secret_key", "path": "SECRET_KEY"}], "secretName": "awx-tower-secret"}}, {"configMap": {"defaultMode": 420, "items": [{"key": "environment", "path": "environment.sh"}], "name": "awx-tower-configmap"}, "name": "environment"}, {"configMap": {"defaultMode": 420, "items": [{"key": "settings", "path": "settings.py"}], "name": "awx-tower-configmap"}, "name": "settings"}]}}}, "status": {}}}

TASK [tower : Get the Tower pod information.] **********************************
task path: /opt/ansible/roles/tower/tasks/main.yml:14
ok: [localhost] => {"attempts": 1, "changed": false, "resources": [{"apiVersion": "v1", "kind": "Pod", "metadata": {"creationTimestamp": "2020-03-19T08:26:37Z", "generateName": "awx-tower-web-c98cd6555-", "labels": {"app": "tower", "pod-template-hash": "c98cd6555"}, "name": "awx-tower-web-c98cd6555-jtzfg", "namespace": "awx", "ownerReferences": [{"apiVersion": "apps/v1", "blockOwnerDeletion": true, "controller": true, "kind": "ReplicaSet", "name": "awx-tower-web-c98cd6555", "uid": "77238ee6-e1f4-4610-a65c-701359cc8efd"}], "resourceVersion": "5765", "selfLink": "/api/v1/namespaces/awx/pods/awx-tower-web-c98cd6555-jtzfg", "uid": "77dfe879-08c6-4643-981b-2c456698e89a"}, "spec": {"containers": [{"image": "ansible/awx_web:9.3.0", "imagePullPolicy": "IfNotPresent", "name": "tower", "ports": [{"containerPort": 8052, "protocol": "TCP"}], "resources": {"requests": {"cpu": "1", "memory": "2Gi"}}, "terminationMessagePath": "/dev/termination-log", "terminationMessagePolicy": "File", "volumeMounts": [{"mountPath": "/etc/tower/SECRET_KEY", "name": "secret-key", "readOnly": true, "subPath": "SECRET_KEY"}, {"mountPath": "/etc/tower/conf.d/environment.sh", "name": "environment", "readOnly": true, "subPath": "environment.sh"}, {"mountPath": "/etc/tower/settings.py", "name": "settings", "readOnly": true, "subPath": "settings.py"}, {"mountPath": "/etc/nginx/nginx.conf", "name": "nginx-conf", "readOnly": true, "subPath": "nginx.conf"}, {"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount", "name": "default-token-sqw2l", "readOnly": true}]}], "dnsPolicy": "ClusterFirst", "enableServiceLinks": true, "nodeName": "microk8s", "priority": 0, "restartPolicy": "Always", "schedulerName": "default-scheduler", "securityContext": {}, "serviceAccount": "default", "serviceAccountName": "default", "terminationGracePeriodSeconds": 30, "tolerations": [{"effect": "NoExecute", "key": "node.kubernetes.io/not-ready", "operator": "Exists", "tolerationSeconds": 300}, {"effect": "NoExecute", "key": 
"node.kubernetes.io/unreachable", "operator": "Exists", "tolerationSeconds": 300}], "volumes": [{"name": "secret-key", "secret": {"defaultMode": 420, "items": [{"key": "secret_key", "path": "SECRET_KEY"}], "secretName": "awx-tower-secret"}}, {"configMap": {"defaultMode": 420, "items": [{"key": "environment", "path": "environment.sh"}], "name": "awx-tower-configmap"}, "name": "environment"}, {"configMap": {"defaultMode": 420, "items": [{"key": "settings", "path": "settings.py"}], "name": "awx-tower-configmap"}, "name": "settings"}, {"configMap": {"defaultMode": 420, "items": [{"key": "nginx_conf", "path": "nginx.conf"}], "name": "awx-tower-configmap"}, "name": "nginx-conf"}, {"name": "default-token-sqw2l", "secret": {"defaultMode": 420, "secretName": "default-token-sqw2l"}}]}, "status": {"conditions": [{"lastProbeTime": null, "lastTransitionTime": "2020-03-19T08:26:37Z", "status": "True", "type": "Initialized"}, {"lastProbeTime": null, "lastTransitionTime": "2020-03-19T08:26:39Z", "status": "True", "type": "Ready"}, {"lastProbeTime": null, "lastTransitionTime": "2020-03-19T08:26:39Z", "status": "True", "type": "ContainersReady"}, {"lastProbeTime": null, "lastTransitionTime": "2020-03-19T08:26:37Z", "status": "True", "type": "PodScheduled"}], "containerStatuses": [{"containerID": "containerd://c32c89a8657d58ba5529ef0d96679094f0bebbbd61bd4dbd354afdd2e46c160a", "image": "docker.io/ansible/awx_web:9.3.0", "imageID": "docker.io/ansible/awx_web@sha256:e3716cce276a9774650a4fbbb5d80c98fa734db633e8ae4ea661d178c23b89df", "lastState": {}, "name": "tower", "ready": true, "restartCount": 0, "started": true, "state": {"running": {"startedAt": "2020-03-19T08:26:39Z"}}}], "hostIP": "192.168.1.119", "phase": "Running", "podIP": "10.1.9.26", "podIPs": [{"ip": "10.1.9.26"}], "qosClass": "Burstable", "startTime": "2020-03-19T08:26:37Z"}}]}

TASK [tower : Set the tower pod name as a variable.] ***************************
task path: /opt/ansible/roles/tower/tasks/main.yml:25
ok: [localhost] => {"ansible_facts": {"tower_pod_name": "awx-tower-web-c98cd6555-jtzfg"}, "changed": false}

TASK [tower : Verify tower_pod_name is populated.] *****************************
task path: /opt/ansible/roles/tower/tasks/main.yml:29
ok: [localhost] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [tower : Check if database is populated (auth_user table exists).] ********
task path: /opt/ansible/roles/tower/tasks/main.yml:34
skipping: [localhost] => {"changed": false, "skip_reason": "Conditional result was False"}

TASK [tower : Migrate the database if the K8s resources were updated.] *********
task path: /opt/ansible/roles/tower/tasks/main.yml:46
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "kubectl exec -n awx awx-tower-web-c98cd6555-jtzfg -- bash -c \"awx-manage migrate --noinput\"", "delta": "0:19:47.851857", "end": "2020-03-19 08:46:30.636223", "msg": "non-zero return code", "rc": 1, "start": "2020-03-19 08:26:42.784366", "stderr": "Traceback (most recent call last):\n  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/base/base.py\", line 217, in ensure_connection\n    self.connect()\n  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/base/base.py\", line 195, in connect\n    self.connection = self.get_new_connection(conn_params)\n  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/postgresql/base.py\", line 178, in get_new_connection\n    connection = Database.connect(**conn_params)\n  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/psycopg2/__init__.py\", line 126, in connect\n    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)\npsycopg2.OperationalError: could not connect to server: Connection timed out\n\tIs the server running on host \"awx-postgres.awx.svc.cluster.local\" (91.201.60.73) and accepting\n\tTCP/IP connections on port 5432?\n\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n  File \"/usr/bin/awx-manage\", line 8, in <module>\n    sys.exit(manage())\n  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/awx/__init__.py\", line 152, in manage\n    execute_from_command_line(sys.argv)\n  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/core/management/__init__.py\", line 381, in execute_from_command_line\n    utility.execute()\n  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/core/management/__init__.py\", line 375, in execute\n    self.fetch_command(subcommand).run_from_argv(self.argv)\n  File 
\"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/core/management/base.py\", line 323, in run_from_argv\n    self.execute(*args, **cmd_options)\n  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/core/management/base.py\", line 364, in execute\n    output = self.handle(*args, **options)\n  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/core/management/base.py\", line 83, in wrapped\n    res = handle_func(*args, **kwargs)\n  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/core/management/commands/migrate.py\", line 87, in handle\n    executor = MigrationExecutor(connection, self.migration_progress_callback)\n  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/migrations/executor.py\", line 18, in __init__\n    self.loader = MigrationLoader(self.connection)\n  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/migrations/loader.py\", line 49, in __init__\n    self.build_graph()\n  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/migrations/loader.py\", line 212, in build_graph\n    self.applied_migrations = recorder.applied_migrations()\n  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/migrations/recorder.py\", line 73, in applied_migrations\n    if self.has_table():\n  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/migrations/recorder.py\", line 56, in has_table\n    return self.Migration._meta.db_table in self.connection.introspection.table_names(self.connection.cursor())\n  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/base/base.py\", line 256, in cursor\n    return self._cursor()\n  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/base/base.py\", line 233, in _cursor\n    self.ensure_connection()\n  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/base/base.py\", line 217, in ensure_connection\n    self.connect()\n  File 
\"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/utils.py\", line 89, in __exit__\n    raise dj_exc_value.with_traceback(traceback) from exc_value\n  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/base/base.py\", line 217, in ensure_connection\n    self.connect()\n  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/base/base.py\", line 195, in connect\n    self.connection = self.get_new_connection(conn_params)\n  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/postgresql/base.py\", line 178, in get_new_connection\n    connection = Database.connect(**conn_params)\n  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/psycopg2/__init__.py\", line 126, in connect\n    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)\ndjango.db.utils.OperationalError: could not connect to server: Connection timed out\n\tIs the server running on host \"awx-postgres.awx.svc.cluster.local\" (91.201.60.73) and accepting\n\tTCP/IP connections on port 5432?\n\ncommand terminated with exit code 1", "stderr_lines": ["Traceback (most recent call last):", "  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/base/base.py\", line 217, in ensure_connection", "    self.connect()", "  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/base/base.py\", line 195, in connect", "    self.connection = self.get_new_connection(conn_params)", "  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/postgresql/base.py\", line 178, in get_new_connection", "    connection = Database.connect(**conn_params)", "  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/psycopg2/__init__.py\", line 126, in connect", "    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)", "psycopg2.OperationalError: could not connect to server: Connection timed out", "\tIs the server running on host 
\"awx-postgres.awx.svc.cluster.local\" (91.201.60.73) and accepting", "\tTCP/IP connections on port 5432?", "", "", "The above exception was the direct cause of the following exception:", "", "Traceback (most recent call last):", "  File \"/usr/bin/awx-manage\", line 8, in <module>", "    sys.exit(manage())", "  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/awx/__init__.py\", line 152, in manage", "    execute_from_command_line(sys.argv)", "  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/core/management/__init__.py\", line 381, in execute_from_command_line", "    utility.execute()", "  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/core/management/__init__.py\", line 375, in execute", "    self.fetch_command(subcommand).run_from_argv(self.argv)", "  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/core/management/base.py\", line 323, in run_from_argv", "    self.execute(*args, **cmd_options)", "  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/core/management/base.py\", line 364, in execute", "    output = self.handle(*args, **options)", "  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/core/management/base.py\", line 83, in wrapped", "    res = handle_func(*args, **kwargs)", "  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/core/management/commands/migrate.py\", line 87, in handle", "    executor = MigrationExecutor(connection, self.migration_progress_callback)", "  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/migrations/executor.py\", line 18, in __init__", "    self.loader = MigrationLoader(self.connection)", "  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/migrations/loader.py\", line 49, in __init__", "    self.build_graph()", "  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/migrations/loader.py\", line 212, in build_graph", "    self.applied_migrations = recorder.applied_migrations()", " 
 File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/migrations/recorder.py\", line 73, in applied_migrations", "    if self.has_table():", "  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/migrations/recorder.py\", line 56, in has_table", "    return self.Migration._meta.db_table in self.connection.introspection.table_names(self.connection.cursor())", "  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/base/base.py\", line 256, in cursor", "    return self._cursor()", "  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/base/base.py\", line 233, in _cursor", "    self.ensure_connection()", "  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/base/base.py\", line 217, in ensure_connection", "    self.connect()", "  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/utils.py\", line 89, in __exit__", "    raise dj_exc_value.with_traceback(traceback) from exc_value", "  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/base/base.py\", line 217, in ensure_connection", "    self.connect()", "  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/base/base.py\", line 195, in connect", "    self.connection = self.get_new_connection(conn_params)", "  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/django/db/backends/postgresql/base.py\", line 178, in get_new_connection", "    connection = Database.connect(**conn_params)", "  File \"/var/lib/awx/venv/awx/lib/python3.6/site-packages/psycopg2/__init__.py\", line 126, in connect", "    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)", "django.db.utils.OperationalError: could not connect to server: Connection timed out", "\tIs the server running on host \"awx-postgres.awx.svc.cluster.local\" (91.201.60.73) and accepting", "\tTCP/IP connections on port 5432?", "", "command terminated with exit code 1"], "stdout": "", 
"stdout_lines": []}

PLAY RECAP *********************************************************************
localhost                  : ok=4    changed=1    unreachable=0    failed=1    skipped=1    rescued=0    ignored=0 

Web page shows:

502 Bad Gateway openresty/1.15.8.1
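
The traceback above shows the web pod never reached Postgres at all, and notably the cluster-local Service name resolved to a public IP (91.201.60.73), which suggests cluster DNS fell through to an external resolver instead of resolving the Service. A quick in-cluster check like the following sketch can confirm both name resolution and connectivity (the `dbcheck` pod name and busybox image are just examples):

```shell
# Run a throwaway pod in the awx namespace to verify that the Postgres
# Service name resolves to a ClusterIP and accepts TCP connections.
kubectl -n awx run dbcheck -it --rm --restart=Never --image=busybox:1.31 -- \
  sh -c 'nslookup awx-postgres.awx.svc.cluster.local && \
         nc -z -w 5 awx-postgres.awx.svc.cluster.local 5432'
```

If `nslookup` returns anything other than a cluster-internal IP here, the migration failure is a DNS problem rather than a Postgres problem.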

Make sure Tower Operator can be deployed easily on OpenShift

From a comment on my blog:

  1. On OpenShift, the postgresql image is unable to write to its data directory, so for it to work we need to grant the anyuid SCC for the namespace.
  2. Maybe for the same reason, the status stays at "updating" forever. (I had to create the route manually.)
  3. The tower-task app is unable to get 1 replica running.
  4. I had to change the YAML to place the tower-operator service account in the project I created; it's all set up to run in the default project.

So far I have only tested the operator in plain Kubernetes clusters (Minikube and Kind), not in CRC or other OpenShift-ish clusters. I would like to make sure the operator is easy to deploy into OpenShift/OKD as well, and I know there can be restrictions around things like PVs (which this operator requires, because otherwise Tower's data would get blown away any time you updated, or any time that container stopped).
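
The SCC and route problems from the comment above have standard OpenShift-side workarounds: granting the anyuid SCC to the namespace's service account (`oc adm policy add-scc-to-user anyuid -z default -n <namespace>`), and creating a Route in place of the Ingress. A sketch of the latter (hostname and namespace are the examples used elsewhere in this README; the Service name matches what the operator creates):

```yaml
# Hypothetical Route manifest for OpenShift, pointing at the Service the
# operator creates; apply with: oc apply -f tower-route.yml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: tower
  namespace: ansible-tower
spec:
  host: tower.mycompany.com
  to:
    kind: Service
    name: awx-service
  port:
    targetPort: 80
```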

Prepare for AWX/Tower new versions using Redis instead of RabbitMQ

Upgrade Postgres to version 10 (currently 9.6)

Tower 3.6.x requires Postgres 10 (though it seems to run okay on 9.6 for now...), so Postgres should be upgraded by replacing the three definitions of tower_postgres_image in the codebase.
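
Because this is an Ansible-based operator, spec fields on the Tower resource are passed through to the role as variables, so in principle the image could also be overridden per-instance without waiting for the defaults to change. A sketch (untested; assumes tower_postgres_image is not clobbered elsewhere in the role):

```yaml
---
apiVersion: tower.ansible.com/v1beta1
kind: Tower
metadata:
  name: tower
  namespace: ansible-tower
spec:
  tower_hostname: tower.mycompany.com
  # Spec fields become role variables in Ansible-based operators, so this
  # should override the default Postgres 9.6 image (sketch, not verified).
  tower_postgres_image: postgres:10
```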

Investigate celery errors and task queue not working on first start

For example:

2019-11-08 22:34:58,030 ERROR    celery.beat Removing corrupted schedule file '/var/lib/awx/beat.db': error(11, 'Resource temporarily unavailable')
Traceback (most recent call last):
  File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/celery/beat.py", line 485, in setup_schedule
    self._store = self._open_schedule()
  File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/celery/beat.py", line 475, in _open_schedule
    return self.persistence.open(self.schedule_filename, writeback=True)
  File "/usr/lib64/python3.6/shelve.py", line 243, in open
    return DbfilenameShelf(filename, flag, protocol, writeback)
  File "/usr/lib64/python3.6/shelve.py", line 227, in __init__
    Shelf.__init__(self, dbm.open(filename, flag), protocol, writeback)
  File "/usr/lib64/python3.6/dbm/__init__.py", line 94, in open
    return mod.open(file, flag, mode)
_gdbm.error: [Errno 11] Resource temporarily unavailable

I'm not sure whether this is a big deal or not, but I wanted to post it here and track down any other celery-related issues. Note that I don't think I'm running any instance of celery independent of the RabbitMQ deployment or the AWX web deployment...

It looks like the official Kubernetes installer has a separate container running in the main web pod for celery: https://github.com/ansible/awx/blob/devel/installer/roles/kubernetes/templates/deployment.yml.j2#L233-L278
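
Schematically, that means a pod template along these lines. The second container here is a placeholder modeled on this operator's existing task Deployment, not the official installer's exact template:

```yaml
# Sketch only: co-locate a celery/task container with the web container
# in one pod, as the official installer's template does.
spec:
  template:
    spec:
      containers:
        - name: tower
          image: ansible/awx_web:9.3.0
          ports:
            - containerPort: 8052
        - name: tower-celery
          # Placeholder image/command; the official template differs.
          image: ansible/awx_task:9.3.0
          command: ["/usr/bin/launch_awx_task.sh"]
```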

Allow setting resources.requests.memory so Tower gets enough memory

CPU is one thing, but memory is another; without enough, Tower kind of implodes. In the official installer, they set the following defaults for spec.containers.resources.requests:

web_mem_request: 1
web_cpu_request: 500

task_mem_request: 2
task_cpu_request: 1500

They also request quite a bit of memory for RabbitMQ, though at this point I'm inclined to leave that unspecified. I have been running everything in a local minikube cluster with 6 GB of total RAM available inside, and things run smoothly enough, at least for demonstration/light usage. So I don't want to jam in requirements that make it impossible to run on a workstation with less than 16 GB of RAM available.
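
Translated into Kubernetes resource syntax, and assuming the installer's units are Gi for memory and millicores for CPU (which matches how the values read), those defaults would look like:

```yaml
# Sketch of the equivalent resources.requests blocks; container names
# match this operator's web and task Deployments.
containers:
  - name: tower
    resources:
      requests:
        memory: 1Gi   # web_mem_request: 1
        cpu: 500m     # web_cpu_request: 500
  - name: tower-task
    resources:
      requests:
        memory: 2Gi   # task_mem_request: 2
        cpu: 1500m    # task_cpu_request: 1500
```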

Operator Inoperable after breaking change upstream

I attempted to stand up an instance today, and the operator failed to deploy because it was unable to pull the container image ansible/tower-operator:0.4.0:

Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  <unknown>         default-scheduler  Successfully assigned default/tower-operator-67bb7f6b8b-5gb4j to node2
  Warning  Failed     16s               kubelet, node2     Error: ImagePullBackOff
  Normal   BackOff    16s               kubelet, node2     Back-off pulling image "ansible/tower-operator:0.4.0"
  Warning  Failed     2s (x2 over 17s)  kubelet, node2     Failed to pull image "ansible/tower-operator:0.4.0": rpc error: code = Unknown desc = Error response from daemon: pull access denied for ansible/tower-operator, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
  Warning  Failed     2s (x2 over 17s)  kubelet, node2     Error: ErrImagePull

I tried updating the image to awx-operator, but the operator (after it deploys) does not create an AWX deployment or a PostgreSQL deployment.

How to configure public access via https

Hi Jeff,

How do you configure Kubernetes to enable remote access to AWX via HTTPS?

All pods are running and Ingress is enabled, but I have no access to the website on either port 80 or 443.

Many thanks for your time
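
For the HTTPS half specifically, the standard pattern is a tls section on the Ingress. A sketch using the same API version as the operator's logs above (the tower-tls secret is an assumption and must already exist, e.g. created with `kubectl create secret tls` or by cert-manager):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: awx-ingress
  namespace: awx
spec:
  tls:
    - hosts:
        - tower.mycompany.com
      secretName: tower-tls   # assumed: a TLS secret for this hostname
  rules:
    - host: tower.mycompany.com
      http:
        paths:
          - path: /
            backend:
              serviceName: awx-service
              servicePort: 80
```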

How to restore Postgres database into new installation

Hi again Jeff,

Sorry to spam with issues at the moment. Solving obstacles day by day.

I've now managed to get AWX running, and with much faster response times than our local Docker environment setup.

However, how do we migrate the existing database (Postgres 10) to the Kubernetes environment?

The namespace is configured with a persistentVolumeClaim, and I can't find the mount folder on the host. I was hoping to just copy the Postgres data folder contents, like we do right now when upgrading AWX versions.

Any easy way to accomplish this scenario?

Many thanks!
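
One approach that avoids hunting for the PV's mount folder on the host is to stream a SQL dump through kubectl exec instead of copying the data directory. A sketch (the pod name, database name, and user here are examples and must match your actual deployment):

```shell
# Dump the database from the existing (Docker-based) environment:
pg_dump -h old-db-host -U awx -d awx > awx-backup.sql

# Stream the dump into the in-cluster Postgres pod:
kubectl -n awx exec -i awx-postgres-0 -- psql -U awx -d awx < awx-backup.sql
```

A logical dump also sidesteps any on-disk format mismatch between the old and new Postgres versions.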

Make it so tower role could be shared with official OpenShift installer

Mostly this would require adopting the complexity of the OpenShift installer's templates, and possibly re-structuring the Pod architecture (the OpenShift deployment seems to deploy a ton of stuff inside the single Tower Deployment's Pod, instead of operating the services like RabbitMQ independently).

But from looking at the downloadable installer's kubernetes role vs. this operator's tower role, it seems like both could be combined without a huge effort. I am postponing work on this until the operator is more stable, however, and do not want to commit much effort to combining them until a decision is made about sharing the role between projects.

As it is, it's nice to maintain this operator-specific role inside the operator project (just for project velocity and dependency management purposes).

Switch to using k8s_exec module instead of shell module + kubectl

In #5, I discovered the k8s_exec module from this PR (ansible/ansible#55029) does not work when running inside an Ansible-based Operator due to some proxy request handling the Operator does for Kubernetes API requests.

The gist of the problem is k8s_exec uses a websocket to communicate with Kubernetes to run an exec command, but the proxy does not handle the 101 handshake response correctly (instead returning a 200), which results in a failure of the k8s_exec module.

I was going to try to get that issue fixed in #5, but as a workaround, I'm currently using kubectl, which is installed in the operator image with the following line:

# Install kubectl.
COPY --from=lachlanevenson/k8s-kubectl:v1.16.2 /usr/local/bin/kubectl /usr/local/bin/kubectl

This is a little fragile, as it means the kubectl currently shipping with this operator is locked into a specific version (which likely won't cause issues, but isn't wonderful especially if it could be used as an attack vector if a vulnerability is found with whatever the current version is).

So for this issue to be complete, the following should be done:

  • Add back roles/tower/library/k8s_exec.py based on this PR.
  • Switch each of the tasks in the role's main.yml to use k8s_exec instead of shell + kubectl.
  • Work to resolve upstream issue operator-framework/operator-sdk#2204
  • Verify the switch works correctly when running inside the operator, and doesn't throw errors.
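As a sketch of the intended change, the before/after in the role's main.yml might look like this — the task names, the `tower_pod_name` variable, and the `awx-manage migrate` command are hypothetical placeholders, not the role's actual tasks:

```yaml
# Current workaround: shell out to the kubectl binary bundled in the image.
- name: Run a management command in the Tower pod (kubectl workaround)
  shell: >-
    kubectl -n {{ meta.namespace }} exec {{ tower_pod_name }} --
    bash -c "awx-manage migrate"

# Desired replacement: the k8s_exec module from ansible/ansible#55029,
# vendored into roles/tower/library/k8s_exec.py.
- name: Run a management command in the Tower pod (k8s_exec)
  k8s_exec:
    namespace: "{{ meta.namespace }}"
    pod: "{{ tower_pod_name }}"
    command: bash -c "awx-manage migrate"
```

The k8s_exec version drops the dependency on a pinned kubectl binary, but only once the websocket/proxy issue in operator-framework/operator-sdk#2204 is resolved.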

awx-postgres-0 pod pending (PersistentVolumeClaim)

Hi Jeff,

I am completely new to Kubernetes and managed to configure an Ubuntu Server with microk8s.

awx-postgres-0 is stuck in a constant Pending state; the `kubectl describe pod` output is shown here:

Name:           awx-postgres-0
Namespace:      awx
Priority:       0
Node:           <none>
Labels:         app=tower-postgres
                controller-revision-hash=awx-postgres-5599b677
                statefulset.kubernetes.io/pod-name=awx-postgres-0
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  StatefulSet/awx-postgres
Containers:
  postgres:
    Image:      postgres:10
    Port:       3306/TCP
    Host Port:  0/TCP
    Environment:
      POSTGRES_DB:        awx
      POSTGRES_USER:      awx
      POSTGRES_PASSWORD:  <set to the key 'password' in secret 'awx-postgres-pass'>  Optional: false
    Mounts:
      /var/lib/postgresql/data from postgres (rw,path="data")
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-fxk6j (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  postgres:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  postgres-awx-postgres-0
    ReadOnly:   false
  default-token-fxk6j:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-fxk6j
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  20s (x15 over 18m)  default-scheduler  error while running "VolumeBinding" filter plugin for pod "awx-postgres-0": pod has unbound immediate PersistentVolumeClaims

The guide does not really explain how to set up Postgres.

Thanks
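For what it's worth, "pod has unbound immediate PersistentVolumeClaims" on MicroK8s usually means no default StorageClass exists until the storage addon is enabled. A hedged sketch (addon and command names vary by MicroK8s version):

```shell
# Enable the hostpath storage addon so PVCs can bind
# (older MicroK8s releases use `microk8s.enable storage` instead).
microk8s enable storage

# Verify a default StorageClass now exists, then watch the PVC bind.
microk8s kubectl get storageclass
microk8s kubectl -n awx get pvc postgres-awx-postgres-0
```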

Update to Tower 3.6.4

There's a new release today that mostly fixes bugs. There is one change which may require a modification to the memcached deployment:

Improved memcached in OpenShift deployments to listen on a more secure domain socket (CVE-2020-10697)
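If this operator needs the same change, the upstream fix moves memcached from a TCP port to a Unix domain socket shared with the web container. A rough sketch of what the container spec might look like — the volume and socket path names are assumptions, not the actual Tower templates:

```yaml
- name: memcached
  image: memcached:alpine
  # memcached's -s flag makes it listen on a Unix socket
  # instead of the default TCP port 11211.
  args: ["-s", "/var/run/memcached/memcached.sock"]
  volumeMounts:
    - name: memcached-socket
      mountPath: /var/run/memcached
```

The web container would mount the same `memcached-socket` volume and point its cache settings at the socket path.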

Permission denied?

The operator container shows the following error:

 File "/usr/local/lib/python3.6/site-packages/ansible/utils/path.py", line 90, in makedirs_safe
    raise AnsibleError("Unable to create local directories(%s): %s" % (to_native(rpath), to_native(e)))
ansible.errors.AnsibleError: Unable to create local directories(/opt/ansible/.ansible/tmp): [Errno 13] Permission denied: b'/opt/ansible/.ansible/tmp'
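This usually means the operator container is running as a UID that cannot write to `/opt/ansible`. One hedged workaround is to point Ansible's temp directories at a writable path via the operator Deployment's env — these are standard Ansible configuration environment variables, but the `/tmp` value is an assumption, and fixing the image's file ownership is the cleaner alternative:

```yaml
# In the tower-operator Deployment's container spec:
env:
  - name: ANSIBLE_LOCAL_TEMP
    value: /tmp/.ansible/tmp
  - name: ANSIBLE_REMOTE_TEMP
    value: /tmp/.ansible/tmp
```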
