
SpiceDB Operator

A Kubernetes operator for managing SpiceDB clusters.

Features include:

  • Creation, management, and scaling of SpiceDB clusters with a single Custom Resource
  • Automated datastore migrations when upgrading SpiceDB versions

Have questions? Join our Discord.

Looking to contribute? See CONTRIBUTING.md.

Getting Started

To get started, you'll need a Kubernetes cluster. For local development, use whichever local Kubernetes tool you prefer, so long as you're comfortable with it and it works on your platform.

Next, you'll install a release of the operator:

kubectl apply --server-side -f https://github.com/authzed/spicedb-operator/releases/latest/download/bundle.yaml
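
To confirm the operator is running before continuing (this assumes the bundle installs the operator into the spicedb-operator namespace):

kubectl -n spicedb-operator get pods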

Finally, you can create your first cluster:

kubectl apply --server-side -f - <<EOF
apiVersion: authzed.com/v1alpha1
kind: SpiceDBCluster
metadata:
  name: dev
spec:
  config:
    datastoreEngine: memory
  secretName: dev-spicedb-config
---
apiVersion: v1
kind: Secret
metadata:
  name: dev-spicedb-config
stringData:
  preshared_key: "averysecretpresharedkey" 
EOF
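
Once applied, you can watch the cluster come up (the dev-spicedb deployment name follows the pattern shown later in this document):

kubectl get spicedbcluster dev
kubectl get pods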

Connecting To Your Cluster

If you haven't already, make sure you've installed zed.

Port forward the grpc endpoint:

kubectl port-forward deployment/dev-spicedb 50051:50051

Now you can use zed to interact with SpiceDB:

zed --insecure --endpoint=localhost:50051 --token=averysecretpresharedkey schema read
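
As a next step, you could write a schema. A minimal sketch, assuming zed schema write accepts a file argument; the schema and file name below are illustrative:

cat > schema.zed <<EOF
definition user {}

definition post {
  relation reader: user
  permission read = reader
}
EOF
zed --insecure --endpoint=localhost:50051 --token=averysecretpresharedkey schema write schema.zed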

Where To Go From Here

  • Check out the examples directory to see how to configure SpiceDBCluster for production, including datastore backends, TLS, and Ingress.
  • Learn how to use SpiceDB via the docs and playground.
  • Ask questions and join the community on Discord.

Automatic and Suggested Updates

The SpiceDB operator now ships with a set of release channels for SpiceDB. Release channels allow the operator to walk through a safe series of updates, like the phased migration for the postgres datastore in SpiceDB v1.14.0.

There are two ways you can choose to use update channels:

  • automatic updates
  • suggested updates

Which mode you choose depends on your tolerance for uncertainty. If possible, we recommend running a staging or canary instance with automatic updates enabled, and using suggested updates for production and production-like environments.

If no channel is selected, a default (stable) channel will be used for the selected datastore.

Available Update Channels:

Datastore     Channels
postgres      stable
cockroachdb   stable
mysql         stable
spanner       stable
memory        stable

Automatic Updates

If you do not specify a version that you want to run, the operator will always keep you up to date with the newest version in the channel.

If the operator or the update graph changes, the head of the channel may change and trigger an update.

apiVersion: authzed.com/v1alpha1
kind: SpiceDBCluster
metadata:
  name: dev
  namespace: default
spec:
  channel: stable 
  config:
    datastoreEngine: cockroachdb
status:
  version:
    name: v1.16.1
    channel: stable 

Suggested Updates

Even if you do not want automatic updates, you should still choose an update channel; this ensures you do not miss important upgrade steps in phased migrations.

If you specify a version, the operator will install that specific version. If another version is already running, the operator will walk through the steps defined in the update channel, but will stop once it reaches the requested version. No updates are taken automatically; you must pick the next version to run and write it into the spec.version field. This keeps SpiceDB updates "on rails" while giving you full control over when and how to roll out updates.

Once you are at the specified version, the operator will inform you of available updates in the status of the SpiceDBCluster:

apiVersion: authzed.com/v1alpha1
kind: SpiceDBCluster
metadata:
  name: dev
spec:
  channel: stable 
  version: v1.14.0
  config:
    datastoreEngine: cockroachdb
status:
  version:
    name: v1.14.0
    channel: stable 
  availableVersions:
  - name: v1.14.1
    channel: stable
    description: direct update with no migrations

Note that the status can also show you updates that are available in other channels, if you wish to switch back and forth. Be careful: if you switch to another channel and update, there may not be a path back to the original channel. Only the nearest-neighbor update is shown for channels other than the current one.
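
To take one of the suggested updates, write the chosen version back into the spec. A sketch based on the status shown above:

apiVersion: authzed.com/v1alpha1
kind: SpiceDBCluster
metadata:
  name: dev
spec:
  channel: stable
  version: v1.14.1
  config:
    datastoreEngine: cockroachdb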

Force Override

You can opt out of update channels entirely and force spicedb-operator to install a specific image and manage it as a SpiceDB instance.

This is not recommended, but may be useful for development environments or to try prerelease versions of SpiceDB before they are in an update channel.

apiVersion: authzed.com/v1alpha1
kind: SpiceDBCluster
metadata:
  name: dev
spec:
  config:
    image: ghcr.io/authzed/spicedb:v1.11.0-prerelease

spicedb-operator's Issues

datastore_uri is required when spinning up a memory datastore

In order to quickly test the operator, I applied a CR with datastore type memory. The operator complained because it expected datastore_uri:

apiVersion: authzed.com/v1alpha1
kind: SpiceDBCluster
metadata:
  name: spicedb
  namespace: spicedb
spec:
  config:
    replicas: "2"
    datastoreEngine: memory
  secretName: spicedb

I0630 13:28:30.842585   74763 event.go:294] "Event occurred" object="spicedb/spicedb" kind="Secret" apiVersion="v1" type="Normal" reason="SecretAdoptedBySpiceDB" message="Secret was referenced as the secret source for SpiceDBCluster spicedb/spicedb; it has been labelled to mark it as part of the configuration for that controller."
I0630 13:28:30.857236   74763 event.go:294] "Event occurred" object="spicedb/spicedb" kind="SpiceDBCluster" apiVersion="authzed.com/v1alpha1" type="Warning" reason="InvalidSpiceDBConfig" message="invalid config: secret must contain a datastore_uri field"
I0630 13:28:30.861214   74763 event.go:294] "Event occurred" object="spicedb/spicedb" kind="SpiceDBCluster" apiVersion="authzed.com/v1alpha1" type="Warning" reason="InvalidSpiceDBConfig" message="invalid config: secret must contain a datastore_uri field"

Ability to skip mounting the secret

With #135, there are a lot more possibilities for how you might mount data for SpiceDB to consume.

For example, you might want to use an external secret store like AWS Secrets Manager to store your preshared keys, and mount it directly for SpiceDB via a CSI driver.

If you're managing the secrets outside of Kubernetes, we should provide an option to skip the secret adoption / validation / mounting that normally occurs, so that it doesn't conflict with bespoke config.

Changing replicas during rollout can make cluster get stuck rolling out

If the SpiceDBCluster has the condition WaitingForDeploymentAvailability and the spec.config.replicas value is changed, then the cluster can get stuck in the waiting state:

  Conditions:
    Last Transition Time:  2022-11-09T18:38:05Z
    Message:               Waiting for deployment to be available: 3/2 available, 3/2 ready, 3/2 updated, 163/163 generation.
    Reason:                WaitingForDeploymentAvailability
    Status:                True
    Type:                  RollingDeployment

As a workaround, this can be fixed by changing the replicas to match the deployment's replicas, waiting for the rollout to finish, and then changing the replicas to the desired value.
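
A sketch of that workaround with kubectl (the cluster name and replica counts are illustrative; here the deployment is running 3 replicas while the spec asks for 2):

# 1. Match the spec to the deployment's current replica count:
kubectl patch spicedbcluster dev --type=merge -p '{"spec":{"config":{"replicas":"3"}}}'
# 2. Wait for the rollout to finish:
kubectl rollout status deployment/dev-spicedb
# 3. Set the desired replica count again:
kubectl patch spicedbcluster dev --type=merge -p '{"spec":{"config":{"replicas":"2"}}}'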

Report API failures in SpiceDBCluster status

Right now, when an API call fails (i.e. when creating or updating a resource on the cluster), the operator requeues the object and tries again later (respecting APF responses if present).

This is generally the right thing to do, but it can hide non-transient errors (like RBAC problems).

We could spend time sorting through which errors are transient and which are not, but I think a more general approach would be:

  • Any time we need to requeue, we should attempt to record the reason for it in the object's status.
  • The only exception would be if the operator can't update the status to record the reason for the requeue.

This should result in an operator that never requires reading logs in unusual situations, unless you can see that it has been wedged somehow (which should be evident from a stuck observedGeneration in the status).
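
A hypothetical sketch of what such a recorded reason might look like, modeled on the conditions shown elsewhere in this document (the condition type, reason, and message are illustrative, not an existing API):

Status:
  Conditions:
    Message:  requeued: failed to apply rolebinding: rolebindings.rbac.authorization.k8s.io is forbidden
    Reason:   RequeuedWithError
    Status:   True
    Type:     Progressing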

Custom Pod Annotations

I see in the docs (https://docs.authzed.com/spicedb/operator) that there is an option for extraPodLabels, but I couldn't find whether passing custom annotations is supported via the operator. In our stack, we use custom annotations to do things such as collecting Prometheus metrics from specific pods. Given that the metrics are exposed on the SpiceDB cluster pods at :9090/metrics, we would like to configure the pods with a few custom annotations that direct our collector to pick up the OpenMetrics output.

For extra context, we are using Datadog, and this is how their Kubernetes collector works. Would it be possible to allow passing custom annotations through to the pods, if it's not already a capability?

Thanks!
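
For reference, a SpiceDBCluster spec later in this document shows an extraPodAnnotations config field. Assuming it accepts arbitrary key/value pairs, a sketch for the Prometheus scraping use case described above might look like:

apiVersion: authzed.com/v1alpha1
kind: SpiceDBCluster
metadata:
  name: dev
spec:
  config:
    datastoreEngine: memory
    extraPodAnnotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "9090"
      prometheus.io/path: "/metrics"
  secretName: dev-spicedb-config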

New Containers aren't healthy after SpiceDBCluster spec.config changes. Deployments blocked. `service unhealthy (responded with "NOT_SERVING")`

If I make any change to a SpiceDBCluster, the new containers will not become healthy and will not take over from the old deployment, whether or not the change requires a new ReplicaSet.

kubectl -n spicedb get spicedbcluster/spicedb

NAME                                 AGE     CHANNEL   DESIRED   CURRENT   WARNINGS   MIGRATING   UPDATING   INVALID   PAUSED
spicedbcluster.authzed.com/spicedb   5d22h                       v1.18.0                          True

kubectl -n spicedb describe spicedbcluster/spicedb

...
Status:
  Conditions:
    Last Transition Time:  2023-03-27T19:35:54Z
    Message:               Waiting for deployment to be available: 1/2 available, 1/2 ready, 2/2 updated, 8/8 generation.
    Reason:                WaitingForDeploymentAvailability
    Status:                True
    Type:                  RollingDeployment
...

kubectl get po -n spicedb

NAME                                READY   STATUS             RESTARTS         AGE
spicedb-operator-7968686549-8d4nn   1/1     Running            0                3d
spicedb-spicedb-5984bd67c-r4vct     0/1     CrashLoopBackOff   6 (2m13s ago)    14m
spicedb-spicedb-5984bd67c-x8c2s     1/1     Running            69 (2d19h ago)   3d

kubectl -n spicedb describe pod/spicedb-spicedb-5984bd67c-r4vct

Events:
  Type     Reason     Age   From               Message
  ----     ------     ----  ----               -------
  Normal   Scheduled  84s   default-scheduler  Successfully assigned spicedb/spicedb-spicedb-5984bd67c-r4vct to us1c-b-gce-gp-n2c2m8-a-b-9mmxp
  Normal   Pulling    84s   kubelet            Pulling image "ghcr.io/authzed/spicedb:v1.18.0"
  Normal   Pulled     80s   kubelet            Successfully pulled image "ghcr.io/authzed/spicedb:v1.18.0" in 3.660331808s (3.660354092s including waiting)
  Normal   Created    80s   kubelet            Created container spicedb-spicedb
  Normal   Started    80s   kubelet            Started container spicedb-spicedb
  Warning  Unhealthy  78s   kubelet            Readiness probe failed: parsed options:
> addr=localhost:50051 conn_timeout=1s rpc_timeout=1s
> tls=true
  > no-verify=true
  > ca-cert=
  > client-cert=
  > client-key=
  > server-name=
> alts=false
> spiffe=false
establishing connection
timeout: failed to connect service "localhost:50051" within 1s
  Warning  Unhealthy  78s  kubelet  Readiness probe failed: parsed options:
> addr=localhost:50051 conn_timeout=1s rpc_timeout=1s
> tls=true
  > no-verify=true
  > ca-cert=
  > client-cert=
  > client-key=
  > server-name=
> alts=false
> spiffe=false
establishing connection
connection established (took 8.625008ms)
service unhealthy (responded with "NOT_SERVING")
  Warning  Unhealthy  77s  kubelet  Readiness probe failed: parsed options:
> addr=localhost:50051 conn_timeout=1s rpc_timeout=1s
> tls=true
  > no-verify=true
  > ca-cert=
  > client-cert=
  > client-key=
  > server-name=
> alts=false
> spiffe=false
establishing connection
connection established (took 7.656165ms)
service unhealthy (responded with "NOT_SERVING")
...repeats

kubectl -n spicedb logs pod/spicedb-spicedb-5984bd67c-r4vct

{"level":"info","format":"auto","log_level":"info","provider":"zerolog","async":false,"time":"2023-03-27T19:41:06Z","message":"configured logging"}
{"level":"info","v":0,"provider":"none","endpoint":"","service":"spicedb","insecure":false,"sampleRatio":0.2,"time":"2023-03-27T19:41:06Z","message":"configured opentelemetry tracing"}
{"level":"info","latest-released-version":"v1.18.0","time":"2023-03-27T19:41:07Z","message":"this is the latest released version of SpiceDB"}
{"level":"info","time":"2023-03-27T19:41:07Z","message":"using postgres datastore engine"}
{"level":"warn","time":"2023-03-27T19:41:07Z","message":"watch API disabled, postgres must be run with track_commit_timestamp=on"}
{"level":"info","initialSlowRequest":"10ms","maxRequests":1000000,"hedgingQuantile":0.95,"time":"2023-03-27T19:41:07Z","message":"request hedging enabled"}
{"level":"info","interval":180000,"time":"2023-03-27T19:41:07Z","message":"datastore garbage collection worker started"}
{"level":"info","maxCost":"16 MiB","numCounters":1000,"defaultTTL":0,"time":"2023-03-27T19:41:07Z","message":"configured namespace cache"}
{"level":"info","maxCost":"311 MiB","numCounters":10000,"defaultTTL":10000,"time":"2023-03-27T19:41:07Z","message":"configured dispatch cache"}
{"level":"info","check-permission":50,"lookup-resources":50,"lookup-subjects":50,"reachable-resources":50,"time":"2023-03-27T19:41:07Z","message":"configured dispatch concurrency limits"}
{"level":"info","maxCost":"726 MiB","numCounters":100000,"defaultTTL":10000,"time":"2023-03-27T19:41:07Z","message":"configured cluster dispatch cache"}
{"level":"info","addr":":50053","network":"tcp","service":"dispatch-cluster","workers":0,"insecure":false,"time":"2023-03-27T19:41:07Z","message":"grpc server started serving"}
{"level":"warn","reason":"","time":"2023-03-27T19:41:07Z","message":"watch api disabled; underlying datastore does not support it"}
{"level":"info","addr":":50051","network":"tcp","service":"grpc","workers":0,"insecure":false,"time":"2023-03-27T19:41:07Z","message":"grpc server started serving"}
{"level":"info","addr":":9090","service":"metrics","insecure":true,"time":"2023-03-27T19:41:07Z","message":"http server started serving"}
{"level":"info","addr":":8080","prefix":"dashboard","insecure":false,"time":"2023-03-27T19:41:07Z","message":"http server started serving"}
{"level":"info","time":"2023-03-27T19:41:07Z","message":"telemetry disabled"}
{"level":"info","grpc.component":"server","grpc.method":"Check","grpc.method_type":"unary","grpc.service":"grpc.health.v1.Health","peer.address":"[::1]:34376","protocol":"grpc","requestID":"032b0eb4fbf6705d6e4b61fd06f976ae","grpc.request.deadline":"2023-03-27T19:41:08Z","grpc.start_time":"2023-03-27T19:41:07Z","grpc.code":"OK","grpc.time_ms":"0.116","time":"2023-03-27T19:41:07Z","message":"finished call"}
{"level":"info","grpc.component":"server","grpc.method":"Check","grpc.method_type":"unary","grpc.service":"grpc.health.v1.Health","peer.address":"[::1]:34382","protocol":"grpc","requestID":"2e8efe06bd28793a95262ff099d43126","grpc.request.deadline":"2023-03-27T19:41:09Z","grpc.start_time":"2023-03-27T19:41:08Z","grpc.code":"OK","grpc.time_ms":"0.103","time":"2023-03-27T19:41:08Z","message":"finished call"}
{"level":"info","grpc.component":"server","grpc.method":"Check","grpc.method_type":"unary","grpc.service":"grpc.health.v1.Health","peer.address":"[::1]:51758","protocol":"grpc","requestID":"2aad4920db16e32bbd49ac687419460a","grpc.request.deadline":"2023-03-27T19:41:17Z","grpc.start_time":"2023-03-27T19:41:16Z","grpc.code":"OK","grpc.time_ms":"0.077","time":"2023-03-27T19:41:16Z","message":"finished call"}
...repeats

The change can seemingly be anything: toggling telemetry on and off, or increasing replicas by one.
Every change results in the new containers failing the readiness probe with service unhealthy (responded with "NOT_SERVING").

Any insight as to what's going on here?

operator does not react on "logLevel" changes

During some maintenance operations, I wanted to modify the log level of the managed SpiceDB instances so the log output was less verbose. Unfortunately, the operator did not recognize the change to logLevel. I forced propagation of the change by temporarily adjusting the replicas parameter.
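
A sketch of that workaround (resource names and replica counts are illustrative; the temporary replica bump forces the operator to roll the pods):

kubectl patch spicedbcluster dev --type=merge -p '{"spec":{"config":{"logLevel":"warn"}}}'
kubectl patch spicedbcluster dev --type=merge -p '{"spec":{"config":{"replicas":"3"}}}'
kubectl patch spicedbcluster dev --type=merge -p '{"spec":{"config":{"replicas":"2"}}}'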

memory datastore is set with dispatch enabled by default

Following the example in README.md leads to a failure when performing a Check:

zed version
client: zed 0.7.5
service: v1.15.0
...
zed permission check blog/post:1 read  blog/user:emilia --revision "${ZEDTOKEN}"
Error: rpc error: code = Unavailable desc = last connection error: connection error: desc = "transport: Error while dialing dial tcp 192.168.9.11:50053: connect: connection refused"

Race condition when creating a Spicedb Cluster

When creating a new SpiceDB cluster by following the instructions here, the operator can't find the secret if both the SpiceDBCluster and the Secret object are created at the same time.

I used the following YAML:

apiVersion: authzed.com/v1alpha1
kind: SpiceDBCluster
metadata:
  name: dev
spec:
  config:
    datastoreEngine: postgres
  secretName: dev-spicedb-config
---
apiVersion: v1
kind: Secret
metadata:
  name: dev-spicedb-config
stringData:
  datastore_uri: "postgres://spicedb:<pw>@spicedb-pg.internal:5432/spicedb?sslmode=disable"
  preshared_key: "<some key>"

After doing a kubectl apply -f with the file containing the above YAML, if I do kubectl describe spicedbclusters.authzed.com dev, I get the following status output:

Name:         dev
Namespace:    bhashit-test
Labels:       <none>
Annotations:  <none>
API Version:  authzed.com/v1alpha1
Kind:         SpiceDBCluster
Metadata:
  Creation Timestamp:  2023-02-10T13:05:13Z
  Generation:          1
  Managed Fields:
    API Version:  authzed.com/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        f:conditions:
    Manager:      spicedb-operator
    Operation:    Apply
    Subresource:  status
    Time:         2023-02-10T13:05:13Z
    API Version:  authzed.com/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:config:
          .:
          f:datastoreEngine:
          f:extraPodAnnotations:
        f:secretName:
    Manager:         kubectl-client-side-apply
    Operation:       Update
    Time:            2023-02-10T13:05:13Z
  Resource Version:  1039333060
  UID:               331817c4-46bb-4b12-beef-87a1b5af02d9
Spec:
  Config:
    Datastore Engine:  postgres
    Extra Pod Annotations:
      sidecar.istio.io/inject:  false
  Secret Name:                  dev-spicedb-config
Status:
  Conditions:
    Last Transition Time:  2023-02-10T13:05:13Z
    Message:               Secret bhashit-test/dev-spicedb-config not found
    Reason:                MissingSecret
    Status:                True
    Type:                  PreconditionsFailed
Events:                    <none>

The last part is important:

Status:
  Conditions:
    Last Transition Time:  2023-02-10T13:05:13Z
    Message:               Secret bhashit-test/dev-spicedb-config not found
    Reason:                MissingSecret
    Status:                True
    Type:                  PreconditionsFailed

Ideally, an operator should retry a few times with some backoff before giving up. That doesn't seem to be happening.

The main problem here is that I can't use an automated deployment tool to deploy SpiceDB clusters, because they remain stuck in the above state. I can deploy them manually if I create the Secret first and then the SpiceDBCluster instance.
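
A sketch of the manual workaround, applying the Secret before the SpiceDBCluster (file names are illustrative):

kubectl apply --server-side -f dev-secret.yaml
kubectl apply --server-side -f dev-spicedbcluster.yaml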

Operator is not creating spicedb deployment

Hi.

We have deployed the spicedb-operator in minikube using kubectl apply --server-side -f https://github.com/authzed/spicedb-operator/releases/latest/download/bundle.yaml, and it is running.

We have tried the Getting Started example (added below) and also the one in https://github.com/authzed/spicedb-operator/blob/main/examples/cockroachdb-tls-ingress/spicedb/spicedb.yaml.

apiVersion: authzed.com/v1alpha1
kind: SpiceDBCluster
metadata:
  name: dev
spec:
  config:
    datastoreEngine: memory
  secretName: dev-spicedb-config
---
apiVersion: v1
kind: Secret
metadata:
  name: dev-spicedb-config
stringData:
  preshared_key: "averysecretpresharedkey"

The spicedb-operator is not creating the SpiceDB deployment. The operator pod's log shows no errors and no indication of an attempt to create a new deployment for the submitted SpiceDBCluster custom resource.

Following is the relevant log portion:

I0224 03:29:11.065153 1 merged_client_builder.go:121] Using in-cluster configuration
I0224 03:29:11.067163 1 reflector.go:205] Reflector from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169 configured with expectedType of *unstructured.Unstructured with empty GroupVersionKind.
I0224 03:29:11.067221 1 reflector.go:205] Reflector from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169 configured with expectedType of *unstructured.Unstructured with empty GroupVersionKind.
I0224 03:29:11.067222 1 reflector.go:221] Starting reflector *unstructured.Unstructured (0s) from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169
I0224 03:29:11.067239 1 reflector.go:257] Listing and watching *unstructured.Unstructured from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169
I0224 03:29:11.067245 1 reflector.go:221] Starting reflector *unstructured.Unstructured (0s) from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169
I0224 03:29:11.067254 1 reflector.go:257] Listing and watching *unstructured.Unstructured from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169
I0224 03:29:11.067579 1 reflector.go:205] Reflector from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169 configured with expectedType of *unstructured.Unstructured with empty GroupVersionKind.
I0224 03:29:11.067603 1 reflector.go:221] Starting reflector *unstructured.Unstructured (0s) from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169
I0224 03:29:11.067646 1 reflector.go:257] Listing and watching *unstructured.Unstructured from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169
I0224 03:29:11.068265 1 reflector.go:205] Reflector from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169 configured with expectedType of *unstructured.Unstructured with empty GroupVersionKind.
I0224 03:29:11.068334 1 reflector.go:221] Starting reflector *unstructured.Unstructured (0s) from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169
I0224 03:29:11.068373 1 reflector.go:257] Listing and watching *unstructured.Unstructured from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169
I0224 03:29:11.068587 1 reflector.go:205] Reflector from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169 configured with expectedType of *unstructured.Unstructured with empty GroupVersionKind.
I0224 03:29:11.068709 1 reflector.go:221] Starting reflector *unstructured.Unstructured (0s) from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169
I0224 03:29:11.068914 1 reflector.go:257] Listing and watching *unstructured.Unstructured from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169
I0224 03:29:11.069284 1 reflector.go:205] Reflector from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169 configured with expectedType of *unstructured.Unstructured with empty GroupVersionKind.
I0224 03:29:11.069321 1 reflector.go:221] Starting reflector *unstructured.Unstructured (0s) from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169
I0224 03:29:11.069334 1 reflector.go:257] Listing and watching *unstructured.Unstructured from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169
I0224 03:29:11.069663 1 reflector.go:205] Reflector from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169 configured with expectedType of *unstructured.Unstructured with empty GroupVersionKind.
I0224 03:29:11.069694 1 reflector.go:221] Starting reflector *unstructured.Unstructured (0s) from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169
I0224 03:29:11.069706 1 reflector.go:257] Listing and watching *unstructured.Unstructured from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169
I0224 03:29:11.070207 1 reflector.go:205] Reflector from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169 configured with expectedType of *unstructured.Unstructured with empty GroupVersionKind.
I0224 03:29:11.070962 1 reflector.go:221] Starting reflector *unstructured.Unstructured (0s) from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169
I0224 03:29:11.074280 1 reflector.go:257] Listing and watching *unstructured.Unstructured from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169
I0224 03:29:11.074989 1 reflector.go:205] Reflector from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169 configured with expectedType of *unstructured.Unstructured with empty GroupVersionKind.
I0224 03:29:11.075123 1 reflector.go:221] Starting reflector *unstructured.Unstructured (0s) from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169
I0224 03:29:11.075314 1 reflector.go:257] Listing and watching *unstructured.Unstructured from pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169
I0224 03:29:11.075782 1 file_informer.go:206] "msg"="started watching" "file"="/opt/operator/config.yaml"
I0224 03:29:11.075864 1 controller.go:230] "msg"="loading config" "path"="/opt/operator/config.yaml"
I0224 03:29:11.180509 1 controller.go:263] "msg"="updated config" "config"={"imageName":"ghcr.io/authzed/spicedb","channels":[{"name":"stable","metadata":{"datastore":"postgres","default":"true"},"edges":{"v1.0.0":["v1.1.0","v1.2.0","v1.3.0","v1.4.0","v1.5.0","v1.6.0","v1.7.1","v1.8.0","v1.9.0","v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.0-phase1"],"v1.1.0":["v1.2.0","v1.3.0","v1.4.0","v1.5.0","v1.6.0","v1.7.1","v1.8.0","v1.9.0","v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.0-phase1"],"v1.10.0":["v1.11.0","v1.12.0","v1.13.0","v1.14.0-phase1"],"v1.11.0":["v1.12.0","v1.13.0","v1.14.0-phase1"],"v1.12.0":["v1.13.0","v1.14.0-phase1"],"v1.13.0":["v1.14.0-phase1"],"v1.14.0":["v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.14.0-phase1":["v1.14.0-phase2"],"v1.14.0-phase2":["v1.14.0"],"v1.14.1":["v1.15.0","v1.16.0","v1.16.1"],"v1.15.0":["v1.16.0","v1.16.1"],"v1.16.0":["v1.16.1"],"v1.2.0":["v1.3.0","v1.4.0","v1.5.0","v1.6.0","v1.7.1","v1.8.0","v1.9.0","v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.0-phase1"],"v1.3.0":["v1.4.0","v1.5.0","v1.6.0","v1.7.1","v1.8.0","v1.9.0","v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.0-phase1"],"v1.4.0":["v1.5.0","v1.6.0","v1.7.1","v1.8.0","v1.9.0","v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.0-phase1"],"v1.5.0":["v1.6.0","v1.7.1","v1.8.0","v1.9.0","v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.0-phase1"],"v1.6.0":["v1.7.1","v1.8.0","v1.9.0","v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.0-phase1"],"v1.7.0":["v1.7.1","v1.8.0","v1.9.0","v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.0-phase1"],"v1.7.1":["v1.8.0","v1.9.0","v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.0-phase1"],"v1.8.0":["v1.9.0","v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.0-phase1"],"v1.9.0":["v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.0-phase1"]},"nodes":[{"id":"v1.16.1","tag":"v1.16.1","migration":"drop-bigserial-ids"},{"id":"v1.16.0","tag":"v1.16.0","migration":"drop-bigserial-ids"},{"id":"v1.15.0","tag":"v1.15.0","migration":"drop-bigserial-ids"},{"id":"v1.14.1","tag":"v1.14.1","migration":"drop-bigserial-ids"},{"id":"v1.14.0","tag":"v1.14.0","migration":"drop-bigserial-ids"},{"id":"v1.14.0-phase2","tag":"v1.14.0","migration":"add-xid-constraints","phase":"write-both-read-new"},{"id":"v1.14.0-phase1","tag":"v1.14.0","migration":"add-xid-columns","phase":"write-both-read-old"},{"id":"v1.13.0","tag":"v1.13.0","migration":"add-ns-config-id"},{"id":"v1.12.0","tag":"v1.12.0","migration":"add-ns-config-id"},{"id":"v1.11.0","tag":"v1.11.0","migration":"add-ns-config-id"},{"id":"v1.10.0","tag":"v1.10.0","migration":"add-ns-config-id"},{"id":"v1.9.0","tag":"v1.9.0","migration":"add-unique-datastore-id"},{"id":"v1.8.0","tag":"v1.8.0","migration":"add-unique-datastore-id"},{"id":"v1.7.1","tag":"v1.7.1","migration":"add-unique-datastore-id"},{"id":"v1.7.0","tag":"v1.7.0","migration":"add-unique-datastore-id"},{"id":"v1.6.0","tag":"v1.6.0","migration":"add-unique-datastore-id"},{"id":"v1.5.0","tag":"v1.5.0","migration":"add-transaction-timestamp-index"},{"id":"v1.4.0","tag":"v1.4.0","migration":"add-transaction-timestamp-index"},{"id":"v1.3.0","tag":"v1.3.0","migration":"add-transaction-timestamp-index"},{"id":"v1.2.0","tag":"v1.2.0","migration":"add-transaction-timestamp-index"},{"id":"v1.1.0","tag":"v1.1.0","migration":"add-transaction-timestamp-index"},{"id":"v1.0.0","tag":"v1.0.0","migration":"add-unique-living-ns"}]},{"name":"stable","metadata":{"datastore":"cockroachdb","default":"true"},"edges":{"v1.0.0":["v1.1.0","v1.2.0","v1.3.0","v1.4.0","v1.5.0","v1.6.0","v1.7.1","v1.8.0","v1.9.0","v1.10.0",
"v1.11.0","v1.12.0","v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.1.0":["v1.2.0","v1.3.0","v1.4.0","v1.5.0","v1.6.0","v1.7.1","v1.8.0","v1.9.0","v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.10.0":["v1.11.0","v1.12.0","v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.11.0":["v1.12.0","v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.12.0":["v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.13.0":["v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.14.0":["v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.14.1":["v1.15.0","v1.16.0","v1.16.1"],"v1.15.0":["v1.16.0","v1.16.1"],"v1.16.0":["v1.16.1"],"v1.2.0":["v1.3.0","v1.4.0","v1.5.0","v1.6.0","v1.7.1","v1.8.0","v1.9.0","v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.3.0":["v1.4.0","v1.5.0","v1.6.0","v1.7.1","v1.8.0","v1.9.0","v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.4.0":["v1.5.0","v1.6.0","v1.7.1","v1.8.0","v1.9.0","v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.5.0":["v1.6.0","v1.7.1","v1.8.0","v1.9.0","v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.6.0":["v1.7.1","v1.8.0","v1.9.0","v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.7.0":["v1.7.1","v1.8.0","v1.9.0","v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.7.1":["v1.8.0","v1.9.0","v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.8.0":["v1.9.0","v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.9.0":["v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"]},"nodes":[{"id":"v1.16.1","tag":"v1.16.1","migration":"add-caveats"},{"id":"v1.16.0","tag":"v1.16.0","migration":"add-caveats"},{"id":"v1.15.0","tag":"v1.15.0","migration":"add-caveats"},{"id":"v1.14.1","tag":"v1.14.1","migration":"add-caveats"},{"id":"v1.14.0","tag":"v1.14.0","migration":"add-caveats"},{"id":"v1.13.0","tag":"v1.13.0","migration":"add-metadata-and-counters"},{"id":"v1.12.0","tag":"v1.12.0","migration":"add-metadata-and-counters"},{"id":"v1.11.0","tag":"v1.11.0","migration":"add-metadata-and-counters"},{"id":"v1.10.0","tag":"v1.10.0","migration":"add-metadata-and-counters"},{"id":"v1.9.0","tag":"v1.9.0","migration":"add-metadata-and-counters"},{"id":"v1.8.0","tag":"v1.8.0","migration":"add-metadata-and-counters"},{"id":"v1.7.1","tag":"v1.7.1","migration":"add-metadata-and-counters"},{"id":"v1.7.0","tag":"v1.7.0","migration":"add-metadata-and-counters"},{"id":"v1.6.0","tag":"v1.6.0","migration":"add-metadata-and-counters"},{"id":"v1.5.0","tag":"v1.5.0","migration":"add-transactions-table"},{"id":"v1.4.0","tag":"v1.4.0","migration":"add-transactions-table"},{"id":"v1.3.0","tag":"v1.3.0","migration":"add-transactions-table"},{"id":"v1.2.0","tag":"v1.2.0","migration":"add-transactions-table"},{"id":"v1.1.0","tag":"v1.1.0","migration":"add-transactions-table"},{"id":"v1.0.0","tag":"v1.0.0","migration":"add-transactions-table"}]},{"name":"stable","metadata":{"datastore":"mysql","default":"true"},"edges":{"v1.10.0":["v1.11.0","v1.12.0","v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.11.0":["v1.12.0","v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.12.0":["v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.13.0":["v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.14.0":["v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.14.1":["v1.15.0","v1.16.0","v1.16.1"],"v1.15.
0":["v1.16.0","v1.16.1"],"v1.16.0":["v1.16.1"],"v1.7.0":["v1.7.1","v1.8.0","v1.9.0","v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.7.1":["v1.8.0","v1.9.0","v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.8.0":["v1.9.0","v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.9.0":["v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"]},"nodes":[{"id":"v1.16.1","tag":"v1.16.1","migration":"add_caveat"},{"id":"v1.16.0","tag":"v1.16.0","migration":"add_caveat"},{"id":"v1.15.0","tag":"v1.15.0","migration":"add_caveat"},{"id":"v1.14.1","tag":"v1.14.1","migration":"add_caveat"},{"id":"v1.14.0","tag":"v1.14.0","migration":"add_caveat"},{"id":"v1.13.0","tag":"v1.13.0","migration":"add_ns_config_id"},{"id":"v1.12.0","tag":"v1.12.0","migration":"add_ns_config_id"},{"id":"v1.11.0","tag":"v1.11.0","migration":"add_ns_config_id"},{"id":"v1.10.0","tag":"v1.10.0","migration":"add_ns_config_id"},{"id":"v1.9.0","tag":"v1.9.0","migration":"add_unique_datastore_id"},{"id":"v1.8.0","tag":"v1.8.0","migration":"add_unique_datastore_id"},{"id":"v1.7.1","tag":"v1.7.1","migration":"add_unique_datastore_id"},{"id":"v1.7.0","tag":"v1.7.0","migration":"add_unique_datastore_id"}]},{"name":"stable","metadata":{"datastore":"spanner","default":"true"},"edges":{"v1.0.0":["v1.1.0","v1.2.0","v1.3.0","v1.4.0","v1.5.0","v1.6.0","v1.7.1","v1.8.0","v1.9.0","v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.1.0":["v1.2.0","v1.3.0","v1.4.0","v1.5.0","v1.6.0","v1.7.1","v1.8.0","v1.9.0","v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.10.0":["v1.11.0","v1.12.0","v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.11.0":["v1.12.0","v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.12.0":["v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.13.0":["v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.14.0":["v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.14.1":["v1.15.0","v1.16.0","v1.16.1"],"v1.15.0":["v1.16.0","v1.16.1"],"v1.16.0":["v1.16.1"],"v1.2.0":["v1.3.0","v1.4.0","v1.5.0","v1.6.0","v1.7.1","v1.8.0","v1.9.0","v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.3.0":["v1.4.0","v1.5.0","v1.6.0","v1.7.1","v1.8.0","v1.9.0","v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.4.0":["v1.5.0","v1.6.0","v1.7.1","v1.8.0","v1.9.0","v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.5.0":["v1.6.0","v1.7.1","v1.8.0","v1.9.0","v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.6.0":["v1.7.1","v1.8.0","v1.9.0","v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.7.0":["v1.7.1","v1.8.0","v1.9.0","v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.7.1":["v1.8.0","v1.9.0","v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.8.0":["v1.9.0","v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"],"v1.9.0":["v1.10.0","v1.11.0","v1.12.0","v1.13.0","v1.14.1","v1.15.0","v1.16.0","v1.16.1"]},"nodes":[{"id":"v1.16.1","tag":"v1.16.1","migration":"add-caveats"},{"id":"v1.16.0","tag":"v1.16.0","migration":"add-caveats"},{"id":"v1.15.0","tag":"v1.15.0","migration":"add-caveats"},{"id":"v1.14.1","tag":"v1.14.1","migration":"add-caveats"},{"id":"v1.14.0","tag":"v1.14.0","migration":"add-caveats"},{"id":"v1.13.0","tag":"v
1.13.0","migration":"add-metadata-and-counters"},{"id":"v1.12.0","tag":"v1.12.0","migration":"add-metadata-and-counters"},{"id":"v1.11.0","tag":"v1.11.0","migration":"add-metadata-and-counters"},{"id":"v1.10.0","tag":"v1.10.0","migration":"add-metadata-and-counters"},{"id":"v1.9.0","tag":"v1.9.0","migration":"add-metadata-and-counters"},{"id":"v1.8.0","tag":"v1.8.0","migration":"add-metadata-and-counters"},{"id":"v1.7.1","tag":"v1.7.1","migration":"add-metadata-and-counters"},{"id":"v1.7.0","tag":"v1.7.0","migration":"add-metadata-and-counters"},{"id":"v1.6.0","tag":"v1.6.0","migration":"add-metadata-and-counters"},{"id":"v1.5.0","tag":"v1.5.0","migration":"initial"},{"id":"v1.4.0","tag":"v1.4.0","migration":"initial"},{"id":"v1.3.0","tag":"v1.3.0","migration":"initial"},{"id":"v1.2.0","tag":"v1.2.0","migration":"initial"},{"id":"v1.1.0","tag":"v1.1.0","migration":"initial"},{"id":"v1.0.0","tag":"v1.0.0","migration":"initial"}]}]} "path"="/opt/operator/config.yaml"
I0224 03:29:11.267564 1 shared_informer.go:303] caches populated
I0224 03:29:11.267654 1 shared_informer.go:303] caches populated
I0224 03:29:11.267667 1 shared_informer.go:303] caches populated
I0224 03:29:11.267676 1 shared_informer.go:303] caches populated
I0224 03:29:11.267686 1 shared_informer.go:303] caches populated
I0224 03:29:11.267694 1 shared_informer.go:303] caches populated
I0224 03:29:11.267703 1 shared_informer.go:303] caches populated
I0224 03:29:11.267710 1 shared_informer.go:303] caches populated
I0224 03:29:11.267716 1 shared_informer.go:303] caches populated
I0224 03:29:11.267776 1 shared_informer.go:303] caches populated
I0224 03:29:11.270113 1 controller.go:116] "msg"="starting controller" "resource"={"Group":"authzed.com","Version":"v1alpha1","Resource":"spicedbclusters"}
I0224 03:30:11.181331 1 file_informer.go:235] "msg"="resyncing file" "after"="1m0s" "file"="/opt/operator/config.yaml"
I0224 03:30:11.181473 1 controller.go:230] "msg"="loading config" "path"="/opt/operator/config.yaml"
I0224 03:30:11.187160 1 controller.go:259] "msg"="config hasn't changed" "new hash"=12989480526449716590 "old hash"=12989480526449716590
I0224 03:31:11.187173 1 file_informer.go:235] "msg"="resyncing file" "after"="1m0s" "file"="/opt/operator/config.yaml"
I0224 03:31:11.187682 1 controller.go:230] "msg"="loading config" "path"="/opt/operator/config.yaml"
I0224 03:31:11.197340 1 controller.go:259] "msg"="config hasn't changed" "new hash"=12989480526449716590 "old hash"=12989480526449716590
I0224 03:31:57.109771 1 controller.go:315] "msg"="syncing owned object" "controller"="spicedbclusters" "gvr"={"Group":"authzed.com","Version":"v1alpha1","Resource":"spicedbclusters"} "obj"={"name":"dev","namespace":"default"} "syncID"="meudG"
I0224 03:31:57.109898 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="pauseCluster" "obj"={"name":"dev","namespace":"default"} "syncID"="meudG"
I0224 03:31:57.109927 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="adoptSecret" "obj"={"name":"dev","namespace":"default"} "syncID"="meudG"
I0224 03:31:57.166318 1 controller.go:315] "msg"="syncing owned object" "controller"="spicedbclusters" "gvr"={"Group":"authzed.com","Version":"v1alpha1","Resource":"spicedbclusters"} "obj"={"name":"dev","namespace":"default"} "syncID"="R4QEX"
I0224 03:31:57.166437 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="pauseCluster" "obj"={"name":"dev","namespace":"default"} "syncID"="R4QEX"
I0224 03:31:57.166469 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="adoptSecret" "obj"={"name":"dev","namespace":"default"} "syncID"="R4QEX"
I0224 03:31:57.220390 1 controller.go:336] "msg"="syncing external object" "controller"="spicedbclusters" "obj"={"name":"dev-spicedb-config","namespace":"default"} "syncID"="88Z88"
I0224 03:31:57.249004 1 controller.go:336] "msg"="syncing external object" "controller"="spicedbclusters" "obj"={"name":"dev-spicedb-config","namespace":"default"} "syncID"="RumVr"
I0224 03:31:57.250223 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="*controller.ConfigChangedHandler" "obj"={"name":"dev","namespace":"default"} "syncID"="R4QEX"
I0224 03:31:57.250799 1 config_change.go:38] "msg"="spicedb configuration changed" "controller"="spicedbclusters" "obj"={"name":"dev","namespace":"default"} "syncID"="R4QEX"
I0224 03:31:57.280619 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="*controller.ValidateConfigHandler" "obj"={"name":"dev","namespace":"default"} "syncID"="R4QEX"
I0224 03:31:57.310650 1 controller.go:315] "msg"="syncing owned object" "controller"="spicedbclusters" "gvr"={"Group":"authzed.com","Version":"v1alpha1","Resource":"spicedbclusters"} "obj"={"name":"dev","namespace":"default"} "syncID"="JTZ5U"
I0224 03:31:57.310734 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="pauseCluster" "obj"={"name":"dev","namespace":"default"} "syncID"="JTZ5U"
I0224 03:31:57.310756 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="adoptSecret" "obj"={"name":"dev","namespace":"default"} "syncID"="JTZ5U"
I0224 03:31:57.311193 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="*controller.ConfigChangedHandler" "obj"={"name":"dev","namespace":"default"} "syncID"="JTZ5U"
I0224 03:31:57.311255 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="*controller.ValidateConfigHandler" "obj"={"name":"dev","namespace":"default"} "syncID"="JTZ5U"
I0224 03:31:57.331693 1 controller.go:315] "msg"="syncing owned object" "controller"="spicedbclusters" "gvr"={"Group":"authzed.com","Version":"v1alpha1","Resource":"spicedbclusters"} "obj"={"name":"dev","namespace":"default"} "syncID"="ov9s1"
I0224 03:31:57.331797 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="pauseCluster" "obj"={"name":"dev","namespace":"default"} "syncID"="ov9s1"
I0224 03:31:57.331822 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="adoptSecret" "obj"={"name":"dev","namespace":"default"} "syncID"="ov9s1"
I0224 03:31:57.332193 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="*controller.ConfigChangedHandler" "obj"={"name":"dev","namespace":"default"} "syncID"="ov9s1"
I0224 03:31:57.332325 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="*controller.ValidateConfigHandler" "obj"={"name":"dev","namespace":"default"} "syncID"="ov9s1"
I0224 03:32:11.197345 1 file_informer.go:235] "msg"="resyncing file" "after"="1m0s" "file"="/opt/operator/config.yaml"
I0224 03:32:11.197418 1 controller.go:230] "msg"="loading config" "path"="/opt/operator/config.yaml"
I0224 03:32:11.203974 1 controller.go:259] "msg"="config hasn't changed" "new hash"=12989480526449716590 "old hash"=12989480526449716590
I0224 03:33:11.204699 1 file_informer.go:235] "msg"="resyncing file" "after"="1m0s" "file"="/opt/operator/config.yaml"
I0224 03:33:11.204760 1 controller.go:230] "msg"="loading config" "path"="/opt/operator/config.yaml"
I0224 03:33:11.220434 1 controller.go:259] "msg"="config hasn't changed" "new hash"=12989480526449716590 "old hash"=12989480526449716590
I0224 03:34:11.219911 1 file_informer.go:235] "msg"="resyncing file" "after"="1m0s" "file"="/opt/operator/config.yaml"
I0224 03:34:11.219998 1 controller.go:230] "msg"="loading config" "path"="/opt/operator/config.yaml"
I0224 03:34:11.225464 1 controller.go:259] "msg"="config hasn't changed" "new hash"=12989480526449716590 "old hash"=12989480526449716590
I0224 03:34:30.179033 1 reflector.go:559] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Watch close - *unstructured.Unstructured total 6 items received
I0224 03:34:40.178841 1 reflector.go:559] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Watch close - *unstructured.Unstructured total 6 items received
I0224 03:34:57.180288 1 reflector.go:559] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Watch close - *unstructured.Unstructured total 6 items received
I0224 03:35:11.228660 1 file_informer.go:235] "msg"="resyncing file" "after"="1m0s" "file"="/opt/operator/config.yaml"
I0224 03:35:11.228736 1 controller.go:230] "msg"="loading config" "path"="/opt/operator/config.yaml"
I0224 03:35:11.237846 1 controller.go:259] "msg"="config hasn't changed" "new hash"=12989480526449716590 "old hash"=12989480526449716590
I0224 03:36:11.238218 1 file_informer.go:235] "msg"="resyncing file" "after"="1m0s" "file"="/opt/operator/config.yaml"
I0224 03:36:11.238296 1 controller.go:230] "msg"="loading config" "path"="/opt/operator/config.yaml"
I0224 03:36:11.245296 1 controller.go:259] "msg"="config hasn't changed" "new hash"=12989480526449716590 "old hash"=12989480526449716590
I0224 03:36:18.177392 1 reflector.go:559] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Watch close - *unstructured.Unstructured total 8 items received
I0224 03:36:22.179190 1 reflector.go:559] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Watch close - *unstructured.Unstructured total 8 items received
I0224 03:37:11.315371 1 file_informer.go:235] "msg"="resyncing file" "after"="1m0s" "file"="/opt/operator/config.yaml"
I0224 03:37:11.315436 1 controller.go:230] "msg"="loading config" "path"="/opt/operator/config.yaml"
I0224 03:37:11.324949 1 controller.go:259] "msg"="config hasn't changed" "new hash"=12989480526449716590 "old hash"=12989480526449716590
I0224 03:37:12.247119 1 reflector.go:559] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Watch close - *unstructured.Unstructured total 11 items received
I0224 03:37:30.260987 1 reflector.go:559] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Watch close - *unstructured.Unstructured total 13 items received
I0224 03:37:37.261324 1 reflector.go:559] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Watch close - *unstructured.Unstructured total 9 items received
I0224 03:38:11.322331 1 file_informer.go:235] "msg"="resyncing file" "after"="1m0s" "file"="/opt/operator/config.yaml"
I0224 03:38:11.322419 1 controller.go:230] "msg"="loading config" "path"="/opt/operator/config.yaml"
I0224 03:38:11.327759 1 controller.go:259] "msg"="config hasn't changed" "new hash"=12989480526449716590 "old hash"=12989480526449716590
I0224 03:38:53.237480 1 reflector.go:559] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Watch close - *unstructured.Unstructured total 11 items received
I0224 03:39:11.322693 1 file_informer.go:235] "msg"="resyncing file" "after"="1m0s" "file"="/opt/operator/config.yaml"
I0224 03:39:11.322755 1 controller.go:230] "msg"="loading config" "path"="/opt/operator/config.yaml"
I0224 03:39:11.330742 1 controller.go:259] "msg"="config hasn't changed" "new hash"=12989480526449716590 "old hash"=12989480526449716590
I0224 03:40:11.328867 1 file_informer.go:235] "msg"="resyncing file" "after"="1m0s" "file"="/opt/operator/config.yaml"
I0224 03:40:11.328963 1 controller.go:230] "msg"="loading config" "path"="/opt/operator/config.yaml"
I0224 03:40:11.336504 1 controller.go:259] "msg"="config hasn't changed" "new hash"=12989480526449716590 "old hash"=12989480526449716590
I0224 03:41:05.239366 1 reflector.go:559] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Watch close - *unstructured.Unstructured total 8 items received
I0224 03:41:11.336828 1 file_informer.go:235] "msg"="resyncing file" "after"="1m0s" "file"="/opt/operator/config.yaml"
I0224 03:41:11.337778 1 controller.go:230] "msg"="loading config" "path"="/opt/operator/config.yaml"
I0224 03:41:11.347065 1 controller.go:259] "msg"="config hasn't changed" "new hash"=12989480526449716590 "old hash"=12989480526449716590
I0224 03:41:21.239664 1 reflector.go:559] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Watch close - *unstructured.Unstructured total 7 items received
I0224 03:42:00.239732 1 reflector.go:559] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Watch close - *unstructured.Unstructured total 8 items received
I0224 03:42:11.349201 1 file_informer.go:235] "msg"="resyncing file" "after"="1m0s" "file"="/opt/operator/config.yaml"
I0224 03:42:11.349288 1 controller.go:230] "msg"="loading config" "path"="/opt/operator/config.yaml"
I0224 03:42:11.355151 1 controller.go:259] "msg"="config hasn't changed" "new hash"=12989480526449716590 "old hash"=12989480526449716590
I0224 03:42:45.238032 1 reflector.go:559] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Watch close - *unstructured.Unstructured total 8 items received
I0224 03:43:11.354462 1 file_informer.go:235] "msg"="resyncing file" "after"="1m0s" "file"="/opt/operator/config.yaml"
I0224 03:43:11.354534 1 controller.go:230] "msg"="loading config" "path"="/opt/operator/config.yaml"
I0224 03:43:11.364370 1 controller.go:259] "msg"="config hasn't changed" "new hash"=12989480526449716590 "old hash"=12989480526449716590
I0224 03:43:17.239020 1 reflector.go:559] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Watch close - *unstructured.Unstructured total 7 items received
I0224 03:43:30.245737 1 reflector.go:559] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Watch close - *unstructured.Unstructured total 6 items received
I0224 03:44:11.365618 1 file_informer.go:235] "msg"="resyncing file" "after"="1m0s" "file"="/opt/operator/config.yaml"
I0224 03:44:11.365683 1 controller.go:230] "msg"="loading config" "path"="/opt/operator/config.yaml"
I0224 03:44:11.372006 1 controller.go:259] "msg"="config hasn't changed" "new hash"=12989480526449716590 "old hash"=12989480526449716590
I0224 03:44:25.236170 1 reflector.go:559] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Watch close - *unstructured.Unstructured total 8 items received
I0224 03:44:45.236185 1 reflector.go:559] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Watch close - *unstructured.Unstructured total 9 items received
I0224 03:45:11.368877 1 file_informer.go:235] "msg"="resyncing file" "after"="1m0s" "file"="/opt/operator/config.yaml"
I0224 03:45:11.368941 1 controller.go:230] "msg"="loading config" "path"="/opt/operator/config.yaml"
I0224 03:45:11.376936 1 controller.go:259] "msg"="config hasn't changed" "new hash"=12989480526449716590 "old hash"=12989480526449716590
I0224 03:46:11.375698 1 file_informer.go:235] "msg"="resyncing file" "after"="1m0s" "file"="/opt/operator/config.yaml"
I0224 03:46:11.375768 1 controller.go:230] "msg"="loading config" "path"="/opt/operator/config.yaml"
I0224 03:46:11.381209 1 controller.go:259] "msg"="config hasn't changed" "new hash"=12989480526449716590 "old hash"=12989480526449716590
I0224 03:46:44.241536 1 reflector.go:559] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Watch close - *unstructured.Unstructured total 9 items received
I0224 03:47:11.393954 1 file_informer.go:235] "msg"="resyncing file" "after"="1m0s" "file"="/opt/operator/config.yaml"
I0224 03:47:11.394018 1 controller.go:230] "msg"="loading config" "path"="/opt/operator/config.yaml"
I0224 03:47:11.402514 1 controller.go:259] "msg"="config hasn't changed" "new hash"=12989480526449716590 "old hash"=12989480526449716590
I0224 03:47:46.765934 1 controller.go:315] "msg"="syncing owned object" "controller"="spicedbclusters" "gvr"={"Group":"authzed.com","Version":"v1alpha1","Resource":"spicedbclusters"} "obj"={"name":"example","namespace":"spicedb"} "syncID"="xIPxX"
I0224 03:47:46.766018 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="pauseCluster" "obj"={"name":"example","namespace":"spicedb"} "syncID"="xIPxX"
I0224 03:47:46.766049 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="adoptSecret" "obj"={"name":"example","namespace":"spicedb"} "syncID"="xIPxX"
I0224 03:47:46.814162 1 controller.go:315] "msg"="syncing owned object" "controller"="spicedbclusters" "gvr"={"Group":"authzed.com","Version":"v1alpha1","Resource":"spicedbclusters"} "obj"={"name":"example","namespace":"spicedb"} "syncID"="/wlCe"
I0224 03:47:46.814335 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="pauseCluster" "obj"={"name":"example","namespace":"spicedb"} "syncID"="/wlCe"
I0224 03:47:46.814369 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="adoptSecret" "obj"={"name":"example","namespace":"spicedb"} "syncID"="/wlCe"
I0224 03:47:46.837013 1 controller.go:336] "msg"="syncing external object" "controller"="spicedbclusters" "obj"={"name":"example-spicedb-config","namespace":"spicedb"} "syncID"="GUTr1"
I0224 03:47:46.847394 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="*controller.ConfigChangedHandler" "obj"={"name":"example","namespace":"spicedb"} "syncID"="/wlCe"
I0224 03:47:46.847638 1 config_change.go:38] "msg"="spicedb configuration changed" "controller"="spicedbclusters" "obj"={"name":"example","namespace":"spicedb"} "syncID"="/wlCe"
I0224 03:47:46.847721 1 controller.go:336] "msg"="syncing external object" "controller"="spicedbclusters" "obj"={"name":"example-spicedb-config","namespace":"spicedb"} "syncID"="nQ97u"
I0224 03:47:46.866490 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="*controller.ValidateConfigHandler" "obj"={"name":"example","namespace":"spicedb"} "syncID"="/wlCe"
I0224 03:47:46.892416 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="parallel[ensureServiceAccount,ensureRole,ensureService]" "obj"={"name":"example","namespace":"spicedb"} "syncID"="/wlCe"
I0224 03:47:46.892497 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="ensureService" "obj"={"name":"example","namespace":"spicedb"} "syncID"="/wlCe"
I0224 03:47:46.892743 1 controller.go:636] "msg"="applying service" "controller"="spicedbclusters" "name"="example" "namespace"="spicedb" "obj"={"name":"example","namespace":"spicedb"} "syncID"="/wlCe"
I0224 03:47:46.893953 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="ensureServiceAccount" "obj"={"name":"example","namespace":"spicedb"} "syncID"="/wlCe"
I0224 03:47:46.894216 1 controller.go:542] "msg"="applying serviceaccount" "controller"="spicedbclusters" "name"="example" "namespace"="spicedb" "obj"={"name":"example","namespace":"spicedb"} "syncID"="/wlCe"
I0224 03:47:46.894685 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="ensureRole" "obj"={"name":"example","namespace":"spicedb"} "syncID"="/wlCe"
I0224 03:47:46.894910 1 controller.go:572] "msg"="applying role" "controller"="spicedbclusters" "name"="example" "namespace"="spicedb" "obj"={"name":"example","namespace":"spicedb"} "syncID"="/wlCe"
I0224 03:47:46.913939 1 controller.go:336] "msg"="syncing external object" "controller"="spicedbclusters" "obj"={"name":"example","namespace":"spicedb"} "syncID"="4CVap"
I0224 03:47:46.944640 1 controller.go:336] "msg"="syncing external object" "controller"="spicedbclusters" "obj"={"name":"example","namespace":"spicedb"} "syncID"="1EvsQ"
I0224 03:47:46.976078 1 controller.go:336] "msg"="syncing external object" "controller"="spicedbclusters" "obj"={"name":"example","namespace":"spicedb"} "syncID"="TIkrm"
I0224 03:47:46.976132 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="ensureRoleBinding" "obj"={"name":"example","namespace":"spicedb"} "syncID"="/wlCe"
I0224 03:47:46.980322 1 controller.go:603] "msg"="applying rolebinding" "controller"="spicedbclusters" "name"="example" "namespace"="spicedb" "obj"={"name":"example","namespace":"spicedb"} "syncID"="/wlCe"
I0224 03:47:47.058053 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="deploymentsPre" "obj"={"name":"example","namespace":"spicedb"} "syncID"="/wlCe"
I0224 03:47:47.058160 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="jobsPre" "obj"={"name":"example","namespace":"spicedb"} "syncID"="/wlCe"
I0224 03:47:47.058198 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="parallel[getDeployments,getJobs]" "obj"={"name":"example","namespace":"spicedb"} "syncID"="/wlCe"
I0224 03:47:47.058234 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="getJobs" "obj"={"name":"example","namespace":"spicedb"} "syncID"="/wlCe"
I0224 03:47:47.058053 1 controller.go:336] "msg"="syncing external object" "controller"="spicedbclusters" "obj"={"name":"example","namespace":"spicedb"} "syncID"="1DYps"
I0224 03:47:47.058368 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="getDeployments" "obj"={"name":"example","namespace":"spicedb"} "syncID"="/wlCe"
I0224 03:47:47.058738 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="*controller.MigrationCheckHandler" "obj"={"name":"example","namespace":"spicedb"} "syncID"="/wlCe"
I0224 03:47:47.059399 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="*controller.MigrationRunHandler" "obj"={"name":"example","namespace":"spicedb"} "syncID"="/wlCe"
I0224 03:47:47.149101 1 controller.go:336] "msg"="syncing external object" "controller"="spicedbclusters" "obj"={"name":"example-migrate-n567h5cdh68h58f","namespace":"spicedb"} "syncID"="O+6l9"
I0224 03:47:47.150136 1 controller.go:315] "msg"="syncing owned object" "controller"="spicedbclusters" "gvr"={"Group":"authzed.com","Version":"v1alpha1","Resource":"spicedbclusters"} "obj"={"name":"example","namespace":"spicedb"} "syncID"="Q5H0R"
I0224 03:47:47.150205 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="pauseCluster" "obj"={"name":"example","namespace":"spicedb"} "syncID"="Q5H0R"
I0224 03:47:47.150234 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="adoptSecret" "obj"={"name":"example","namespace":"spicedb"} "syncID"="Q5H0R"
I0224 03:47:47.150524 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="*controller.ConfigChangedHandler" "obj"={"name":"example","namespace":"spicedb"} "syncID"="Q5H0R"
I0224 03:47:47.150720 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="*controller.ValidateConfigHandler" "obj"={"name":"example","namespace":"spicedb"} "syncID"="Q5H0R"
I0224 03:47:47.169986 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="parallel[ensureServiceAccount,ensureRole,ensureService]" "obj"={"name":"example","namespace":"spicedb"} "syncID"="Q5H0R"
I0224 03:47:47.170048 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="ensureService" "obj"={"name":"example","namespace":"spicedb"} "syncID"="Q5H0R"
I0224 03:47:47.170454 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="ensureRole" "obj"={"name":"example","namespace":"spicedb"} "syncID"="Q5H0R"
I0224 03:47:47.170790 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="ensureServiceAccount" "obj"={"name":"example","namespace":"spicedb"} "syncID"="Q5H0R"
I0224 03:47:47.175666 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="ensureRoleBinding" "obj"={"name":"example","namespace":"spicedb"} "syncID"="Q5H0R"
I0224 03:47:47.188813 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="deploymentsPre" "obj"={"name":"example","namespace":"spicedb"} "syncID"="Q5H0R"
I0224 03:47:47.188892 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="jobsPre" "obj"={"name":"example","namespace":"spicedb"} "syncID"="Q5H0R"
I0224 03:47:47.188929 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="parallel[getDeployments,getJobs]" "obj"={"name":"example","namespace":"spicedb"} "syncID"="Q5H0R"
I0224 03:47:47.188971 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="getJobs" "obj"={"name":"example","namespace":"spicedb"} "syncID"="Q5H0R"
I0224 03:47:47.189700 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="getDeployments" "obj"={"name":"example","namespace":"spicedb"} "syncID"="Q5H0R"
I0224 03:47:47.234394 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="*controller.MigrationCheckHandler" "obj"={"name":"example","namespace":"spicedb"} "syncID"="Q5H0R"
I0224 03:47:47.274163 1 controller.go:336] "msg"="syncing external object" "controller"="spicedbclusters" "obj"={"name":"example-migrate-n567h5cdh68h58f-pcxlg","namespace":"spicedb"} "syncID"="Wv1Cl"
I0224 03:47:47.319183 1 controller.go:336] "msg"="syncing external object" "controller"="spicedbclusters" "obj"={"name":"example-migrate-n567h5cdh68h58f-pcxlg","namespace":"spicedb"} "syncID"="A3T2k"
I0224 03:47:47.320633 1 controller.go:336] "msg"="syncing external object" "controller"="spicedbclusters" "obj"={"name":"example-migrate-n567h5cdh68h58f","namespace":"spicedb"} "syncID"="mMv4c"
I0224 03:47:47.362674 1 controller.go:336] "msg"="syncing external object" "controller"="spicedbclusters" "obj"={"name":"example-migrate-n567h5cdh68h58f-pcxlg","namespace":"spicedb"} "syncID"="jZYtf"
I0224 03:47:48.249154 1 reflector.go:559] pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:169: Watch close - *unstructured.Unstructured total 8 items received
I0224 03:47:52.151441 1 controller.go:315] "msg"="syncing owned object" "controller"="spicedbclusters" "gvr"={"Group":"authzed.com","Version":"v1alpha1","Resource":"spicedbclusters"} "obj"={"name":"example","namespace":"spicedb"} "syncID"="AZGSw"
I0224 03:47:52.151537 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="pauseCluster" "obj"={"name":"example","namespace":"spicedb"} "syncID"="AZGSw"
I0224 03:47:52.151569 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="adoptSecret" "obj"={"name":"example","namespace":"spicedb"} "syncID"="AZGSw"
I0224 03:47:52.151897 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="*controller.ConfigChangedHandler" "obj"={"name":"example","namespace":"spicedb"} "syncID"="AZGSw"
I0224 03:47:52.151989 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="*controller.ValidateConfigHandler" "obj"={"name":"example","namespace":"spicedb"} "syncID"="AZGSw"
I0224 03:47:52.178772 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="parallel[ensureServiceAccount,ensureRole,ensureService]" "obj"={"name":"example","namespace":"spicedb"} "syncID"="AZGSw"
I0224 03:47:52.178869 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="ensureService" "obj"={"name":"example","namespace":"spicedb"} "syncID"="AZGSw"
I0224 03:47:52.179393 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="ensureServiceAccount" "obj"={"name":"example","namespace":"spicedb"} "syncID"="AZGSw"
I0224 03:47:52.179785 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="ensureRole" "obj"={"name":"example","namespace":"spicedb"} "syncID"="AZGSw"
I0224 03:47:52.182241 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="ensureRoleBinding" "obj"={"name":"example","namespace":"spicedb"} "syncID"="AZGSw"
I0224 03:47:52.182958 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="deploymentsPre" "obj"={"name":"example","namespace":"spicedb"} "syncID"="AZGSw"
I0224 03:47:52.183019 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="jobsPre" "obj"={"name":"example","namespace":"spicedb"} "syncID"="AZGSw"
I0224 03:47:52.183044 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="parallel[getDeployments,getJobs]" "obj"={"name":"example","namespace":"spicedb"} "syncID"="AZGSw"
I0224 03:47:52.183208 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="getJobs" "obj"={"name":"example","namespace":"spicedb"} "syncID"="AZGSw"
I0224 03:47:52.183523 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="getDeployments" "obj"={"name":"example","namespace":"spicedb"} "syncID"="AZGSw"
I0224 03:47:52.186890 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="*controller.MigrationCheckHandler" "obj"={"name":"example","namespace":"spicedb"} "syncID"="AZGSw"
I0224 03:47:52.192392 1 controller.go:315] "msg"="syncing owned object" "controller"="spicedbclusters" "gvr"={"Group":"authzed.com","Version":"v1alpha1","Resource":"spicedbclusters"} "obj"={"name":"example","namespace":"spicedb"} "syncID"="H7WGs"
I0224 03:47:52.192589 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="pauseCluster" "obj"={"name":"example","namespace":"spicedb"} "syncID"="H7WGs"
I0224 03:47:52.192639 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="adoptSecret" "obj"={"name":"example","namespace":"spicedb"} "syncID"="H7WGs"
I0224 03:47:52.193484 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="*controller.ConfigChangedHandler" "obj"={"name":"example","namespace":"spicedb"} "syncID"="H7WGs"
I0224 03:47:52.193995 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="*controller.ValidateConfigHandler" "obj"={"name":"example","namespace":"spicedb"} "syncID"="H7WGs"
I0224 03:47:52.210036 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="parallel[ensureServiceAccount,ensureRole,ensureService]" "obj"={"name":"example","namespace":"spicedb"} "syncID"="H7WGs"
I0224 03:47:52.210100 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="ensureService" "obj"={"name":"example","namespace":"spicedb"} "syncID"="H7WGs"
I0224 03:47:52.210123 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="ensureServiceAccount" "obj"={"name":"example","namespace":"spicedb"} "syncID"="H7WGs"
I0224 03:47:52.210578 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="ensureRole" "obj"={"name":"example","namespace":"spicedb"} "syncID"="H7WGs"
I0224 03:47:52.211963 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="ensureRoleBinding" "obj"={"name":"example","namespace":"spicedb"} "syncID"="H7WGs"
I0224 03:47:52.231531 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="deploymentsPre" "obj"={"name":"example","namespace":"spicedb"} "syncID"="H7WGs"
I0224 03:47:52.231614 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="jobsPre" "obj"={"name":"example","namespace":"spicedb"} "syncID"="H7WGs"
I0224 03:47:52.231651 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="parallel[getDeployments,getJobs]" "obj"={"name":"example","namespace":"spicedb"} "syncID"="H7WGs"
I0224 03:47:52.231693 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="getJobs" "obj"={"name":"example","namespace":"spicedb"} "syncID"="H7WGs"
I0224 03:47:52.232415 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="getDeployments" "obj"={"name":"example","namespace":"spicedb"} "syncID"="H7WGs"
I0224 03:47:52.232728 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="*controller.MigrationCheckHandler" "obj"={"name":"example","namespace":"spicedb"} "syncID"="H7WGs"
I0224 03:47:57.192471 1 controller.go:315] "msg"="syncing owned object" "controller"="spicedbclusters" "gvr"={"Group":"authzed.com","Version":"v1alpha1","Resource":"spicedbclusters"} "obj"={"name":"example","namespace":"spicedb"} "syncID"="I6a8j"
I0224 03:47:57.192564 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="pauseCluster" "obj"={"name":"example","namespace":"spicedb"} "syncID"="I6a8j"
I0224 03:47:57.192592 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="adoptSecret" "obj"={"name":"example","namespace":"spicedb"} "syncID"="I6a8j"
I0224 03:47:57.192976 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="*controller.ConfigChangedHandler" "obj"={"name":"example","namespace":"spicedb"} "syncID"="I6a8j"
I0224 03:47:57.193162 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="*controller.ValidateConfigHandler" "obj"={"name":"example","namespace":"spicedb"} "syncID"="I6a8j"
I0224 03:47:59.501122 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="parallel[ensureServiceAccount,ensureRole,ensureService]" "obj"={"name":"example","namespace":"spicedb"} "syncID"="I6a8j"
I0224 03:47:59.501191 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="ensureService" "obj"={"name":"example","namespace":"spicedb"} "syncID"="I6a8j"
I0224 03:47:59.501403 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="ensureServiceAccount" "obj"={"name":"example","namespace":"spicedb"} "syncID"="I6a8j"
I0224 03:47:59.501829 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="ensureRole" "obj"={"name":"example","namespace":"spicedb"} "syncID"="I6a8j"
I0224 03:47:59.502776 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="ensureRoleBinding" "obj"={"name":"example","namespace":"spicedb"} "syncID"="I6a8j"
I0224 03:47:59.503182 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="deploymentsPre" "obj"={"name":"example","namespace":"spicedb"} "syncID"="I6a8j"
I0224 03:47:59.503383 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="jobsPre" "obj"={"name":"example","namespace":"spicedb"} "syncID"="I6a8j"
I0224 03:47:59.503499 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="parallel[getDeployments,getJobs]" "obj"={"name":"example","namespace":"spicedb"} "syncID"="I6a8j"
I0224 03:47:59.503608 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="getJobs" "obj"={"name":"example","namespace":"spicedb"} "syncID"="I6a8j"
I0224 03:47:59.504534 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="getDeployments" "obj"={"name":"example","namespace":"spicedb"} "syncID"="I6a8j"
I0224 03:47:59.504680 1 logging.go:35] "msg"="entering handler" "controller"="spicedbclusters" "handler"="*controller.MigrationCheckHandler" "obj"={"name":"example","namespace":"spicedb"} "syncID"="I6a8j"
I0224 03:47:59.508358 1 controller.go:315] "msg"="syncing owned object" "controller"="spicedbclusters" "gvr"=

Add capability to disable TLS warning

Hi 👋

I'm running a spicedb cluster, and I'm getting a warning in the Status.Conditions of the cluster because TLS is not configured. I would like to be able to suppress this warning: I'm running SpiceDB internally, nothing is exposed to the outside world, so there should be no security issue without TLS.

Here's the output of describing the cluster:

Name:         spicedb-mycoach-infrastructure
Namespace:    default
Labels:       app.kubernetes.io/managed-by=Helm
Annotations:  meta.helm.sh/release-name: mycoach-infrastructure
              meta.helm.sh/release-namespace: default
API Version:  authzed.com/v1alpha1
Kind:         SpiceDBCluster
Metadata:
  Creation Timestamp:  2023-03-23T17:26:40Z
  Generation:          2
  Managed Fields:
    API Version:  authzed.com/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:meta.helm.sh/release-name:
          f:meta.helm.sh/release-namespace:
        f:labels:
          f:app.kubernetes.io/managed-by:
      f:status:
        f:conditions:
        f:currentMigrationHash:
        f:image:
        f:migration:
        f:observedGeneration:
        f:secretHash:
        f:targetMigrationHash:
        f:version:
          f:attributes:
          f:channel:
          f:name:
    Manager:      spicedb-operator
    Operation:    Apply
    Subresource:  status
    Time:         2023-04-04T08:03:28Z
    API Version:  authzed.com/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:meta.helm.sh/release-name:
          f:meta.helm.sh/release-namespace:
        f:labels:
          .:
          f:app.kubernetes.io/managed-by:
      f:spec:
        .:
        f:channel:
        f:config:
          .:
          f:datastoreEngine:
          f:replicas:
        f:secretName:
        f:version:
    Manager:         Go-http-client
    Operation:       Update
    Time:            2023-03-23T17:57:48Z
  Resource Version:  332941291
  UID:               3fd90ca2-f928-4fc1-bb16-326def7e6ae6
Spec:
  Channel:  stable
  Config:
    Datastore Engine:  mysql
    Replicas:          3
  Secret Name:         spicedb-mycoach-infrastructure
  Version:             v1.18.0
Status:
  Conditions:
    Last Transition Time:  2023-03-23T17:26:40Z
    Message:               no TLS configured, consider setting "tlsSecretName"
    Reason:                WarningsPresent
    Status:                True
    Type:                  ConfigurationWarning
  Current Migration Hash:  n698hch68ch65h544h67fh9ch6q
  Image:                   ghcr.io/authzed/spicedb:v1.18.0
  Migration:               add_caveat
  Observed Generation:     2
  Secret Hash:             n645h599h694hd6h96h547h695h688q
  Target Migration Hash:   n698hch68ch65h544h67fh9ch6q
  Version:
    Attributes:
      migration
    Channel:  stable
    Name:     v1.18.0

And the status condition:

Status:
  Conditions:
    Last Transition Time:  2023-03-23T17:26:40Z
    Message:               no TLS configured, consider setting "tlsSecretName"
    Reason:                WarningsPresent
    Status:                True
    Type:                  ConfigurationWarning

Thanks

[RFE] add the ability to load schema and initial dataset

I think the title is self-explanatory. I envision this feature as having a ConfigMap for the schema and the initial dataset and pointing the operator to them. The operator would then load them in an idempotent way (i.e. respecting possibly existing data) when the cluster starts.
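
For illustration, SpiceDB itself already exposes a --datastore-bootstrap-files flag, so the operator surface might look something like the sketch below. The datastoreBootstrapConfigMap field is invented for this sketch and is not part of today's API:

apiVersion: v1
kind: ConfigMap
metadata:
  name: spicedb-bootstrap
data:
  bootstrap.yaml: |
    schema: |-
      definition user {}

      definition document {
        relation reader: user
      }
    relationships: |
      document:firstdoc#reader@user:alice
---
apiVersion: authzed.com/v1alpha1
kind: SpiceDBCluster
metadata:
  name: dev
spec:
  config:
    datastoreEngine: postgres
    # hypothetical field -- the operator would mount the ConfigMap and pass it
    # to SpiceDB's bootstrap flags, skipping load if data already exists
    datastoreBootstrapConfigMap: spicedb-bootstrap
  secretName: dev-spicedb-config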

Pre-warmed cluster upgrades

Right now, the operator rolls out new versions of SpiceDB by updating a deployment.

New pods become available as soon as they are connected to the datastore and the dispatch ring, which means that cache is lost during upgrades. Depending on the queries SpiceDB is handling, this could cause a significant increase in latency.

Some options worth exploring:

  1. Slowly introducing new pods so that only some % of a cluster loses its cache at a time. This is probably the simplest option, but requires all dispatch API changes to be fully backwards-compatible.
  2. Traffic mirroring via external routing (similar to how flagger provides generic blue/green mirroring). Currently, spicedb-operator operates "below" the level at which most of these tools work, so the scope would need to increase dramatically to include more networking/ingress concerns.
  3. Traffic mirroring via SpiceDB itself. We could introduce mirroring flags into SpiceDB itself, so that incoming traffic can be forwarded to a parallel set of nodes to fill their cache. This would require old and new clusters to be exposed under different service objects so that their hashrings don't collide.
  4. Saving and restoring the cache. Currently, SpiceDB caches exist only in memory. We could switch to a cache that syncs to the filesystem or provide apis for dumping the cache (either over the network or to disk), and the operator could ensure the caches come back in the new pods. We would likely want to switch to a StatefulSet if we try this.

Detect cert changes

Either have SpiceDB watch the TLS certs for changes, or have the operator bump the SpiceDB pods when the secret changes.
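
A sketch of the second option, assuming a hypothetical operator-managed annotation: stamp a hash of the TLS secret into the pod template, so that any certificate rotation changes the template and rolls the pods. The fragment below shows only the relevant part of the Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dev-spicedb
spec:
  template:
    metadata:
      annotations:
        # hypothetical annotation: recomputed from the TLS secret's contents
        # whenever the secret changes, forcing a rollout of the pods
        authzed.com/tls-secret-hash: "sha256:<hash of secret data>"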

Specifying environment variables without a prefix

We need to support specifying standard environment variables (e.g. OpenTelemetry).

Right now SpiceDB prepends a SPICEDB_ prefix to everything passed in -- so a standard variable like OTEL_EXPORTER_OTLP_ENDPOINT cannot be passed through as-is. Instead, those values should be specified in the config, and the environment variables should reject anything with the prefix.

Publish versioned release manifests

Overlaps somewhat with #76 - right now, the manifests in the repo point to the :latest tag, but we tag and release specific versions of the operator. The manifests should be versioned as well.
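
For example, a pinned install would then follow the standard GitHub release URL pattern (placeholder version below, since versioned bundles are what this issue is asking for):

kubectl apply --server-side -f https://github.com/authzed/spicedb-operator/releases/download/<version>/bundle.yaml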

Expose metrics port via Service

Exposing the metrics via a Kubernetes Service would allow the use of Prometheus Operator's ServiceMonitor, which is the preferred way to do discovery for pods. The current workaround is to use PodMonitors, but according to their documentation that is not the ideal interface.
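
A sketch of what that could look like. The Service name and labels are assumptions, not the operator's actual output; SpiceDB serves metrics on port 9090 by default:

apiVersion: v1
kind: Service
metadata:
  name: dev-spicedb-metrics           # hypothetical name
  labels:
    app.kubernetes.io/instance: dev
spec:
  selector:
    app.kubernetes.io/instance: dev   # assumed pod labels
  ports:
    - name: metrics
      port: 9090
      targetPort: 9090
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: dev-spicedb
spec:
  selector:
    matchLabels:
      app.kubernetes.io/instance: dev
  endpoints:
    - port: metrics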

extraPodAnnotations doesn't apply to the migration pod

I deployed the following in a namespace that has istio injection enabled

apiVersion: authzed.com/v1alpha1
kind: SpiceDBCluster
metadata:
  name: dev
spec:
  config:
    datastoreEngine: postgres
    extraPodAnnotations: 
      sidecar.istio.io/inject: "false"
  secretName: dev-spicedb-config

I was trying to use extraPodAnnotations to add the annotation to the generated pods so that the sidecar doesn't get injected.

Istio adds a sidecar to every pod created in that namespace. Because the sidecar keeps running, the migration pod never reaches the Completed stage, which prevents the operator from progressing further (and creating the spicedb pods).

Since this migration pod is also generated by the operator, shouldn't the additional annotations apply to the migration pod as well?

I know I can either deploy the spicedb cluster to a namespace that doesn't inject sidecars, or configure the Istio operator to never inject anything in the pods that match the labels for spicedb, but it may be simpler, and perhaps more consistent, to apply the same annotations to the migration pod as well.
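
For reference, the namespace-level opt-out mentioned above is a single label (namespace name assumed):

kubectl label namespace <spicedb-namespace> istio-injection=disabled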

Another possible solution could be for the operator to only check the status of the migration container within the pod, instead of checking the pod status before moving on

Official Helm Chart

Creating this issue to track interest in an officially supported Helm chart for deploying SpiceDB.

Related to #76

operator does not restart spicedb on secret changes

Just validated that while a change to the Secret's value is identified and processed, the change is not propagated to the spicedb cluster managed by the operator, and the system continues to run with the old secret values.

Persistent customization strategy

The operator creates kube resources on behalf of the users: deployments, services, serviceaccounts, rbac, etc, which may require some additional modification by the user:

  • Adding extra labels or annotations to integrate with other tools (i.e. GKE workload identity)
  • Directing workloads in specific ways (tolerations, nodeSelectors, affinity/anti-affinity, topologySpreadConstraints, etc)
  • Capacity planning (resource requests / limits)
  • Other unforeseen future needs due to new SpiceDB features, tooling (HPA?), or the evolution of Kubernetes

All of these modifications are possible today by modifying operator-created resources after (or before!) they have been created. The operator uses Server Side Apply and will not touch fields it does not own. Users can query for which fields are owned by reading the fieldmanager metadata on a given resource.
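
For example, field ownership can be inspected with kubectl, which hides managedFields by default in recent versions (deployment name assumed):

kubectl get deployment dev-spicedb -o yaml --show-managed-fields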

But modifying the resources after creation makes git-ops workflows difficult; it would be nice if there were a way to persist such modifications in the SpiceDBCluster or other native Kube resources.

There are some native methods for persisting this type of change, but only for specific fields of specific resources:

  • resource requests can be added automatically via a limitrange on the namespace to set a default
  • tolerations can be added with a default toleration setting for the namespace
  • volumes and env vars could be injected with PodPresets (which are deprecated and no longer available)

With the background out of the way, this leaves some general approaches we could take:

  1. Add new fields to SpiceDBCluster to cover any needs as they come up. This is the approach that most operators seem to take, but doing this for more than a couple of fields leads to huge schemas with dozens of options for customizing specific parts of downstream resources. I don't personally favor this approach - it seems at odds with the fieldmanager tracking that Kube introduced for SSA, and it brings things into the operator's scope that it doesn't actually have an opinion on (all such config is passed blindly to other resources).
  2. Admission Controllers: this is the general form of the PodPreset solution, where external config can modify the resource before it is persisted. There are a couple of competing projects with no clear (to me) leader: Kyverno and Gatekeeper both support "mutation" policies that can inject arbitrary data into a resource on creation. This approach can be used with the operator today, but we have no example policies for users to lean on, and it requires installing and running one of these projects as well.
  3. Embed generic customizations: instead of providing specific fields for specific customizations, we could provide a hook to allow users to provide arbitrary customizations. This could look like a single kustomization: <configMapName> field with Kustomize manifests (that the operator parses and applies, similar to kubebuilder-declarative-pattern), or it might look more like a Kyverno/Gatekeeper API but with a smaller, spicedb-operator focused scope.
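
A purely illustrative sketch of option 3 -- the kustomization field and the ConfigMap layout below are the proposal, not an existing API:

apiVersion: authzed.com/v1alpha1
kind: SpiceDBCluster
metadata:
  name: dev
spec:
  config:
    datastoreEngine: postgres
  secretName: dev-spicedb-config
  # hypothetical field: a ConfigMap of kustomize manifests that the operator
  # would parse and apply to the resources it generates
  kustomization: dev-spicedb-patches
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: dev-spicedb-patches
data:
  kustomization.yaml: |
    patches:
      # illustrative: attach a GKE workload identity annotation to the
      # generated ServiceAccount (values are placeholders)
      - target:
          kind: ServiceAccount
        patch: |-
          - op: add
            path: /metadata/annotations
            value:
              iam.gke.io/gcp-service-account: spicedb@my-project.iam.gserviceaccount.com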

Alternate packaging (helm, olm, etc)

spicedb-operator has minimal packaging right now (a directory of kustomize manifests).

Several options are worth exploring for other delivery mechanisms, such as a Helm chart and an OLM (OperatorHub) package.

Whichever packaging we choose to support should be low touch and integrated with the release pipeline.

Update 1/27/23:
Releases now include instructions for installing with kubectl, kustomize, and kustomizer. Keeping this open to track the other options above.

Bad dispatch defaults for memory datastore

When the memory datastore is selected, the operator sets the dispatch-upstream-addr but sets dispatch-cluster-enabled to false. This means the pod attempts to dispatch to itself, but has the dispatch server disabled.

The memory datastore should either disable dispatch entirely, so that SpiceDB doesn't attempt to connect to itself, or leave the dispatch server enabled.
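
Roughly, today's defaults amount to the contradictory pair of flags below. The flag names are real SpiceDB flags; the upstream address value is illustrative, not the operator's exact output:

# effectively what the operator configures for datastoreEngine: memory
spicedb serve \
  --datastore-engine=memory \
  --dispatch-upstream-addr=kubernetes:///example.spicedb:dispatch \
  --dispatch-cluster-enabled=false
# the fix: either drop --dispatch-upstream-addr entirely,
# or run with --dispatch-cluster-enabled=true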

Thanks to @mgagliardo91 for reporting this!

Allow specifying the spicedb image in the SpiceDBCluster spec

Right now, all SpiceDBCluster instances use the same SpiceDB image (configured globally for the operator).

We should allow an individual SpiceDBCluster to override the image, with appropriate warnings:

  • If the image in the SpiceDBCluster is one that's known to the operator (it matches one of the globally configured images), then there is no warning
  • If the image isn't in the list of known images, add a warning indicating that it may not be supported with the current version of the operator.

Remove self-pause feature

Right now, the operator will "pause" reconciliation of a cluster if the migration job permanently fails. This state requires human intervention to remove the pause from the SpiceDBCluster (after determining why the job has failed and addressing the issue).

In hindsight, it would be much more helpful for the operator to back off and retry creating the job if it fails completely, and instead report detailed information about the failures on the SpiceDBCluster status.

This would allow a human to resolve the error (manually pausing if needed) and let the operator pick back up automatically. This also avoids misidentifying a transient failure as a permanent one that requires pausing the cluster.

The operator is not upgrading my cluster to v1.18.0

Hi there,

I just installed the operator yesterday and created my first cluster. After creation, I specified that I want v1.18.0 to run (as I'm getting warnings from the zed CLI). My cluster has been in the following state for multiple hours:

NAME                             AGE   CHANNEL   DESIRED   CURRENT   WARNINGS   MIGRATING   UPDATING   INVALID  
spicedb-mycoach-infrastructure   15h   stable    v1.18.0   v1.17.0   True

I've set the desired version in spec.version; should I do something else for my cluster to update?

Thanks.

Operator does not cleanup secret

We are currently using the operator with a generated SpiceDBCluster resource. When deleting it, the operator successfully deletes all related entities (services, pods, etc.); however, it leaves behind the secret used to store the shared secret key. This requires us to manually delete the secret before we can reinstall.

Running the guide to setup the operator results in UNAVAILABLE of spice on port 50053

I was following this guide https://docs.authzed.com/spicedb/operator.
Nit: I needed to use brew install authzed/tap/zed instead of brew install zed, but this still didn't work due to some Ruby error in brew (might be unrelated). go install github.com/authzed/zed/cmd/zed@latest did work for me, though.

Then, when following the guide by using these approximate commands, I get the error at the end.

$ kubectl apply --server-side -k github.com/authzed/spicedb-operator/config

$ kubectl apply --server-side -f - <<EOF
apiVersion: authzed.com/v1alpha1
kind: SpiceDBCluster
metadata:
  name: dev
spec:
  config:
    image: ghcr.io/authzed/spicedb:v1.14.1
    datastoreEngine: memory
  secretName: dev-spicedb-config
---
apiVersion: v1
kind: Secret
metadata:
  name: dev-spicedb-config
stringData:
  preshared_key: "averysecretpresharedkey"
EOF

$ zed schema write <(cat << EOF
/** user represents a user */
definition user {}

/** document represents a document with access control */
definition document {
  /** reader indicates that the user is a reader on the document */
  relation reader: user
}
EOF
)

$ zed context set local localhost:50051 "averysecretpresharedkey" --insecure

$ zed relationship create document:test reader user:herman
GhUKEzE2Njg3NjkxMDMwODU3NjY2OTY=

$ zed permission check --explain document:test reader user:herman
11:54AM INF debugging requested on check
11:54AM WRN No debuging information returned for the check
Error: rpc error: code = Unavailable desc = last connection error: connection error: desc = "transport: Error while dialing dial tcp 10.32.4.235:50053: connect: connection refused"
Usage:
  zed permission check <resource:id> <permission> <subject:id> [flags]

I've verified that the pod for the cluster is running and that it has said ports configured. Somehow it can't reach itself (it has IP 10.32.4.235).
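
Port 50053 is SpiceDB's default dispatch port, so this looks like the same root cause as the "Bad dispatch defaults for memory datastore" issue above: with the memory datastore, the pod is told to dispatch to itself while its dispatch server is disabled. One way to confirm (deployment name assumed) is to inspect the dispatch flags the operator rendered:

kubectl get deployment dev-spicedb -o yaml | grep dispatch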

v1.1.2 omits memory datastore from the config file

Summary

v1.1.2 of the operator does not include the "memory" datastore in the config.yaml included with the image.

Details

Trying to follow the setup instructions in the readme after installing the operator generates the following error message after applying the CRD:

Status:
  Conditions:
    Last Transition Time:  2023-02-22T23:53:44Z
    Message:               Error validating config with secret hash [redacted]: [couldn't find channel for datastore "memory": no channel found for datastore "memory", no update found in channel]
    Reason:                InvalidConfig
    Status:                True
    Type:                  ValidatingFailed

Diving into the operator image at ghcr.io/authzed/spicedb-operator:v1.1.2 shows this (sorry for the screengrab):

[screenshot omitted: /opt/operator/config.yaml inside the image, which lists no memory datastore]

In particular, the default-operator-config.yaml used as the basis for /opt/operator/config.yaml does not contain the memory datastore. So anyone following along with the readme will fail to create a working SpiceDB Cluster unless they have another supported datastore available.

Never generate names that are too long

There are several places where the operator generates names and labels derived from other values in the input SpiceDBCluster (deployments, jobs, services, etc.).

Everywhere this happens, we should know the length limit Kube enforces for the field (e.g. 253 characters for most object names, 63 for label values) and ensure we don't generate a name longer than that limit.

Extend CONTRIBUTING.md to describe how to try changes

I would expect something along the lines of:

in kind:
...
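
(For example -- a plausible sketch, assuming the image tag and the deployment/label names used below; untested:)

cd <src checkout>
docker build --tag spicedb-operator:dev .
kind load docker-image spicedb-operator:dev
kubectl -n spicedb-operator set image deployment/spicedb-operator spicedb-operator=spicedb-operator:dev
kubectl -n spicedb-operator delete pod -l app=spicedb-operator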

in a full-fledged cluster:

cd <src checkout>
REGISTRY=gcr.io/<my-gcr>
docker build --network=host --tag ${REGISTRY}/spicedb-operator:latest .
docker push ${REGISTRY}/spicedb-operator:latest
kubectl -n spicedb-operator set image deployment/spicedb-operator spicedb-operator=${REGISTRY}/spicedb-operator:latest
kubectl -n spicedb-operator delete pod -l app=spicedb-operator

Automatic migrations migrate to the wrong migration

Hi, thanks for providing SpiceDB and the operator!

Using SpiceDB-Operator version 1.1.0, we have deployed a SpiceDB Cluster with a postgres instance as the datastore. The operator then spins up the cluster, uses version v1.16.1, and first launches a pod to run migrations and then launches the pod running the actual SpiceDB instance.
Without further changes, the migrate pod migrated our database to the drop-id-constraints migration, as evident from the pod's Command: spicedb migrate drop-id-constraints.
However, the actual SpiceDB pod then fails its health check, and SpiceDB insists that the datastore is not ready, as evident from the log entry "datastoreReady":false. This is presumably a symptom of the migrations being outdated - see authzed/spicedb#739.
After manually migrating with k exec -i --tty <spicedb-pod> -- spicedb migrate --log-level trace head, an additional migration is performed and the database is migrated to drop-bigserial-ids. The pod then starts regularly and its health checks pass.

edit: You can find our cluster's spec below

apiVersion: authzed.com/v1alpha1
kind: SpiceDBCluster
metadata:
  name: spicedb-authorisation
spec:
  channel: stable 
  version: v1.16.1
  config:
    logLevel: debug
    datastoreEngine: postgres
    replicas: 1
    skipMigrations: false
  secretName: authorisation-secret

Operator does not start: panic: too many open files

Dear spicedb Team,

I just deployed the operator using kubectl apply --server-side -f https://github.com/authzed/spicedb-operator/releases/latest/download/bundle.yaml and got ghcr.io/authzed/spicedb-operator:v1.1.2 as the operator pod's image.

However, the pod itself doesn't start and ends up in a CrashLoopBackOff with the following error in the logs:

I0214 20:53:33.589283       1 merged_client_builder.go:121] Using in-cluster configuration
panic: too many open files

goroutine 1 [running]:
github.com/authzed/controller-idioms/fileinformer.(*Factory).ForResource(0xc000187ce0, {{0x21445a0, 0xb}, {0x2136db9, 0x2}, {0x7ffda3b0edfb, 0x19}})
	/home/runner/go/pkg/mod/github.com/authzed/[email protected]/fileinformer/file_informer.go:74 +0x3b5
github.com/authzed/spicedb-operator/pkg/controller.NewController({0x2498b50, 0xc0000e9d80}, 0xc0006a5b10?, {0x247b660, 0xc000430490}, {0x24bd3f0, 0xc00048a680}, {0x7ffda3b0edfb, 0x19}, {0x249edf0, ...})
	/home/runner/work/spicedb-operator/spicedb-operator/pkg/controller/controller.go:111 +0x393
github.com/authzed/spicedb-operator/pkg/cmd/run.(*Options).Run(0xc00070caa0, {0x2498b50, 0xc0000e9d80}, {0x24a9d80?, 0xc0000e9ac0?})
	/home/runner/work/spicedb-operator/spicedb-operator/pkg/cmd/run/run.go:143 +0x645
github.com/authzed/spicedb-operator/pkg/cmd/run.NewCmdRun.func1(0xc0005a3500?, {0x2137e47?, 0x4?, 0x4?})
	/home/runner/work/spicedb-operator/spicedb-operator/pkg/cmd/run/run.go:67 +0x8a
github.com/spf13/cobra.(*Command).execute(0xc0005a3500, {0xc0000e9d00, 0x4, 0x4})
	/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:920 +0x847
github.com/spf13/cobra.(*Command).ExecuteC(0xc0005a3200)
	/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:1040 +0x3bd
github.com/spf13/cobra.(*Command).Execute(...)
	/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:968
main.main()
	/home/runner/work/spicedb-operator/spicedb-operator/cmd/spicedb-operator/main.go:33 +0x299

Any idea where this is coming from?

I am running on v1.23.6+rke2r2.
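
Not an answer from the maintainers, but "too many open files" raised from a file informer is commonly the node's inotify limits rather than actual file descriptors; if that's the case here, raising them on the node may help:

sudo sysctl -w fs.inotify.max_user_instances=8192
sudo sysctl -w fs.inotify.max_user_watches=524288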
