
helm-charts's Introduction

kcp Helm Charts

Repository for kcp helm charts.

Important: the proxy, shard, cache, and certificates charts are work in progress and are not ready for production use.

Pre-requisites

  • Cert-manager installed and running
  • Ingress installed (e.g. nginx-ingress or OpenShift router)

Usage

Helm must be installed to use the charts. Please refer to Helm's documentation to get started.

Once Helm has been set up correctly, add the repo as follows:

helm repo add kcp https://kcp-dev.github.io/helm-charts

If you had already added this repo earlier, run helm repo update to retrieve the latest versions of the packages. You can then run helm search repo kcp to see the charts.
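For reference, those two commands are:

helm repo update
helm search repo kcp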

To install the kcp chart:

helm install my-kcp kcp/kcp

To uninstall the chart:

helm delete my-kcp

Development usage

To install using the local chart:

helm install kcp ./charts/kcp --values ./myvalues.yaml --namespace kcp --create-namespace

Changes can then be made locally and tested via upgrade:

helm upgrade kcp ./charts/kcp --values ./myvalues.yaml --namespace kcp

Note that myvalues.yaml will depend on your environment; you must specify which ingress method is used to expose the front-proxy endpoint. A minimal example:

externalHostname: "<external hostname as exposed by ingress method below>"
kcpFrontProxy:
  ingress:
    enabled: true

Note that by default all certificates are signed by the Helm chart's own PKI and will therefore not be trusted by browsers. You can, however, have the kcp-front-proxy certificate issued by, for example, Let's Encrypt. For this you have to enable the creation of the Let's Encrypt issuer like so:

externalHostname: "<external hostname as exposed by ingress method below>"
kcpFrontProxy:
  ingress:
    enabled: true
  certificateIssuer:
    name: kcp-letsencrypt-prod
    kind: ClusterIssuer
letsEncrypt:
  enabled: true
  production:
    enabled: true
    email: [email protected]

Accessing the deployed kcp

To access the deployed kcp, it will be necessary to create a kubeconfig which connects via the front-proxy external endpoint (specified via externalHostname above).

The content of the kubeconfig will depend on the kcp authentication configuration. Below we describe one option, which uses client-cert auth to enable a kcp-admin user.

⚠️ This example allows global admin permissions across all workspaces. You may also want to consider using more restricted groups, for example system:kcp:workspace:access, to provide a system:authenticated user access to a workspace.
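For example, a more restricted client certificate could carry such a group in its subject instead of system:kcp:admin (a sketch only; compare the full Certificate example further below, and note that the permissions actually granted depend on your kcp authorization setup):

subject:
  organizations:
  - system:kcp:workspace:access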

PKI

The chart will create a full PKI system, with a root CA, intermediate CAs and more. The diagram below shows the default configuration; however, the issuer for the kcp-front-proxy certificate can be configured to use, for example, Let's Encrypt.

graph TB
    A([kcp-pki-bootstrap]):::issuer --> B(kcp-pki-ca):::ca
    B --> C([kcp-pki]):::issuer

    X([lets-encrypt-staging]):::issuer
    Y([lets-encrypt-prod]):::issuer

    C --> D(kcp-etcd-client-ca):::ca
    C --> E(kcp-etcd-peer-ca):::ca
    C --> F(kcp-front-proxy-client-ca):::ca
    C --> G(kcp-ca):::ca
    C --> H(kcp-requestheader-client-ca):::ca
    C --> I(kcp-client-ca):::ca
    C --> J(kcp-service-account-ca):::ca

    D --> K([kcp-etcd-client-issuer]):::issuer
    E --> L([kcp-etcd-peer-issuer]):::issuer
    F --> M([kcp-front-proxy-client-issuer]):::issuer
    G --> N([kcp-server-issuer]):::issuer
    H --> O([kcp-requestheader-client-issuer]):::issuer
    I --> P([kcp-client-issuer]):::issuer
    J --> Q([kcp-service-account-issuer]):::issuer

    K --- K1(kcp-etcd):::cert --> K2(kcp-etcd-client):::cert
    L --> L1(kcp-etcd-peer):::cert
    M --> M1(kcp-external-admin-kubeconfig):::cert
    N --- N1(kcp):::cert --- N2(kcp-front-proxy):::cert --> N3(kcp-virtual-workspaces):::cert
    O --- O1(kcp-front-proxy-requestheader):::cert --> O2(kcp-front-proxy-vw-client):::cert
    P --- P1(kcp-front-proxy-kubeconfig):::cert --> P2(kcp-internal-admin-kubeconfig):::cert
    Q --> Q1(kcp-service-account):::cert

    classDef issuer color:#77F
    classDef ca color:#F77
    classDef cert color:orange

Create kubeconfig and add CA cert

First we get the CA cert for the front proxy, saving it to a file ca.crt:

kubectl get secret kcp-front-proxy-cert -o=jsonpath='{.data.tls\.crt}' | base64 -d > ca.crt

Now we create a new kubeconfig which references the ca.crt:

kubectl --kubeconfig=admin.kubeconfig config set-cluster base --server https://<externalHostname>:443 --certificate-authority=ca.crt
kubectl --kubeconfig=admin.kubeconfig config set-cluster root --server https://<externalHostname>:443/clusters/root --certificate-authority=ca.crt

Create client-cert credentials

Now we must add credentials to the kubeconfig, so requests to the front-proxy may be authenticated.

One way to do this is to create a client certificate with a cert-manager Certificate:

$ cat admin-client-cert.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: cluster-admin-client-cert
spec:
  commonName: cluster-admin
  issuerRef:
    name: kcp-front-proxy-client-issuer
  privateKey:
    algorithm: RSA
    size: 2048
  secretName: cluster-admin-client-cert
  subject:
    organizations:
    - system:kcp:admin
  usages:
  - client auth

$ kubectl apply -f admin-client-cert.yaml

This will result in a cluster-admin-client-cert secret which we can again save to local files:

$ kubectl get secret cluster-admin-client-cert -o=jsonpath='{.data.tls\.crt}' | base64 -d > client.crt
$ kubectl get secret cluster-admin-client-cert -o=jsonpath='{.data.tls\.key}' | base64 -d > client.key
$ chmod 600 client.crt client.key

We can now add these credentials to the admin.kubeconfig and access kcp:

$ kubectl --kubeconfig=admin.kubeconfig config set-credentials kcp-admin --client-certificate=client.crt --client-key=client.key
$ kubectl --kubeconfig=admin.kubeconfig config set-context base --cluster=base --user=kcp-admin
$ kubectl --kubeconfig=admin.kubeconfig config set-context root --cluster=root --user=kcp-admin
$ kubectl --kubeconfig=admin.kubeconfig config use-context root
$ kubectl --kubeconfig=admin.kubeconfig workspace
$ export KUBECONFIG=$PWD/admin.kubeconfig
$ kubectl workspace
Current workspace is "1gnrr0twy6c3o".

Install to kind cluster (for development)

There is a helper script to install kcp to a kind cluster. It will install cert-manager and kcp. The kind cluster binds to host port 8443 for exposing kcp. This particular configuration is useful for development and testing, but will not work with Let's Encrypt.

./hack/kind-setup.sh

Pre-requisites established by that script:

  • kind executable installed at /usr/local/bin/kind
  • kind cluster named kcp
  • cert-manager installed on the cluster
  • /etc/hosts entry for kcp.dev.local pointing to 127.0.0.1
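That /etc/hosts entry looks like this:

127.0.0.1 kcp.dev.local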

The script will then install kcp as follows:

helm upgrade --install my-kcp ./charts/kcp/ \
  --values ./hack/kind-values.yaml \
  --namespace kcp \
  --create-namespace

See hack/kind-values.yaml for the values passed to the Helm chart.

Known issues

  • kcp-dev/kcp#2295 - Deployments fail to start. Workaround: delete the corrupted token store file in the kcp PersistentVolume and restart the kcp pod.

helm-charts's People

Contributors

cbuto, csams, embik, erwandl, francostellari, jimmidyson, kcp-ci-bot, kylape, lionelvillard, luxas, markandersontrocme, mikespreitzer, mjudeikis, ncdc, s-urbaniak, stevekuznetsov, sttts, tnthornton, xrstf


helm-charts's Issues

Low port numbers are problematic

Rootless podman does not want to open ports numbered less than 1024. Really, that's just a normal restriction.

I ran into this problem when testing https://github.com/kcp-dev/helm-charts/blob/main/hack/kind-setup.sh :

mspreitz@mjs12 helm-charts % if ! kind get clusters | grep -w -q "${CLUSTER_NAME}"; then
kind create cluster --name ${CLUSTER_NAME} \
     --kubeconfig ./${CLUSTER_NAME}.kubeconfig \
     --config ./hack/kind/config.yaml
else
    echo "Cluster already exists"
fi
enabling experimental podman provider
No kind clusters found.
enabling experimental podman provider
Creating cluster "kcp" ...
 ✓ Ensuring node image (kindest/node:v1.24.2) 🖼 
 ✗ Preparing nodes 📦  
ERROR: failed to create cluster: command "podman run --name kcp-control-plane --hostname kcp-control-plane --label io.x-k8s.kind.role=control-plane --privileged --tmpfs /tmp --tmpfs /run --volume 002a6bd8d307221ec43fcf35d54445844de93d220aae012a406d8132a0d9ce4d:/var:suid,exec,dev --volume /lib/modules:/lib/modules:ro -e KIND_EXPERIMENTAL_CONTAINERD_SNAPSHOTTER --detach --tty --net kind --label io.x-k8s.kind.cluster=kcp -e container=podman --volume /dev/mapper:/dev/mapper --device /dev/fuse --publish=0.0.0.0:443:443/tcp --publish=0.0.0.0:80:80/tcp --publish=127.0.0.1:63159:6443/tcp -e KUBECONFIG=/etc/kubernetes/admin.conf docker.io/kindest/node:v1.24.2" failed with error: exit status 126
Command Output: Error: rootlessport cannot expose privileged port 80, you can add 'net.ipv4.ip_unprivileged_port_start=80' to /etc/sysctl.conf (currently 1024), or choose a larger port number (>= 1024): listen tcp 0.0.0.0:80: bind: permission denied
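As the error message itself suggests, one way out on Linux is to lower the unprivileged port threshold (shown here with sysctl -w for an immediate, non-persistent change; add the same setting to /etc/sysctl.conf to persist it):

sudo sysctl -w net.ipv4.ip_unprivileged_port_start=80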

Update included etcd StatefulSet version

The etcd StatefulSet is on a 3.5 version that is quite old. We should update it to a more recent 3.5.x patch release due to some important fixes in etcd.

However, this will require reworking the command, because it uses /bin/sh and etcd switched to distroless as its base image in one of the 3.5 patch releases. /bin/sh does not exist in distroless, and therefore we need to change the StatefulSet to not do any shell "scripting".
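A sketch of what the reworked container spec might look like (all field values are illustrative, not the chart's actual configuration):

containers:
- name: etcd
  image: quay.io/coreos/etcd:v3.5.x   # placeholder for a recent, distroless-based 3.5 patch release
  command:
  - etcd                              # invoke the binary directly; there is no /bin/sh in distroless
  - --name=$(POD_NAME)                # $(VAR) references are expanded by the kubelet, no shell needed,
                                      # assuming POD_NAME is defined in the container's env
  - --data-dir=/var/run/etcd/default.etcd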

Error: INSTALLATION FAILED: execution error at (kcp/templates/kcp.yaml:242:20): A valid external hostname is required

Issue

The installation of the helm chart fails when no parameters are passed:

helm repo add kcp https://kcp-dev.github.io/helm-charts
helm install my-kcp kcp/kcp                            
Error: INSTALLATION FAILED: execution error at (kcp/templates/kcp.yaml:242:20): A valid external hostname is required

The installation does not succeed even if we pass an externalHostname:

helm install my-kcp kcp/kcp --set externalHostname=192.168.1.90
...
Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: [resource mapping not found for name: "etcd-client-ca" namespace: "" from "": no matches for kind "Certificate" in version "cert-manager.io/v1"
ensure CRDs are installed first, resource mapping not found for name: "etcd-peer-ca" namespace: "" from "": no matches for kind "Certificate" in version "cert-manager.io/v1"
ensure CRDs are installed first, resource mapping not found for name: "etcd" namespace: "" from "": no matches for kind "Certificate" in version "cert-manager.io/v1"
ensure CRDs are installed first, resource mapping not found for name: "etcd-peer" namespace: "" from "": no matches for kind "Certificate" in version "cert-manager.io/v1"
ensure CRDs are installed first, resource mapping not found for name: "kcp-pki-ca" namespace: "" from "": no matches for kind "Certificate" in version "cert-manager.io/v1"
ensure CRDs are installed first, resource mapping not found for name: "kcp-front-proxy" namespace: "" from "": no matches for kind "Certificate" in version "cert-manager.io/v1"
ensure CRDs are installed first, resource mapping not found for name: "kcp-client-ca" namespace: "" from "": no matches for kind "Certificate" in version "cert-manager.io/v1"
ensure CRDs are installed first, resource mapping not found for name: "kcp-front-proxy-kcp-client-cert" namespace: "" from "": no matches for kind "Certificate" in version "cert-manager.io/v1"
ensure CRDs are installed first, resource mapping not found for name: "client-cert-for-kubeconfig" namespace: "" from "": no matches for kind "Certificate" in version "cert-manager.io/v1"
ensure CRDs are installed first, resource mapping not found for name: "kcp-front-proxy-virtual-workspaces-client-cert" namespace: "" from "": no matches for kind "Certificate" in version "cert-manager.io/v1"
ensure CRDs are installed first, resource mapping not found for name: "kcp-ca" namespace: "" from "": no matches for kind "Certificate" in version "cert-manager.io/v1"
ensure CRDs are installed first, resource mapping not found for name: "kcp-requestheader-client-ca" namespace: "" from "": no matches for kind "Certificate" in version "cert-manager.io/v1"
ensure CRDs are installed first, resource mapping not found for name: "kcp-server-client-ca" namespace: "" from "": no matches for kind "Certificate" in version "cert-manager.io/v1"
ensure CRDs are installed first, resource mapping not found for name: "kcp" namespace: "" from "": no matches for kind "Certificate" in version "cert-manager.io/v1"
ensure CRDs are installed first, resource mapping not found for name: "kcp-virtual-workspaces" namespace: "" from "": no matches for kind "Certificate" in version "cert-manager.io/v1"
ensure CRDs are installed first, resource mapping not found for name: "kcp-etcd-client" namespace: "" from "": no matches for kind "Certificate" in version "cert-manager.io/v1"
ensure CRDs are installed first, resource mapping not found for name: "etcd-client-issuer" namespace: "" from "": no matches for kind "Issuer" in version "cert-manager.io/v1"
ensure CRDs are installed first, resource mapping not found for name: "etcd-peer-issuer" namespace: "" from "": no matches for kind "Issuer" in version "cert-manager.io/v1"
ensure CRDs are installed first, resource mapping not found for name: "kcp-pki-bootstrap" namespace: "" from "": no matches for kind "Issuer" in version "cert-manager.io/v1"
ensure CRDs are installed first, resource mapping not found for name: "kcp-pki-ca" namespace: "" from "": no matches for kind "Issuer" in version "cert-manager.io/v1"
ensure CRDs are installed first, resource mapping not found for name: "kcp-front-proxy-issuer" namespace: "" from "": no matches for kind "Issuer" in version "cert-manager.io/v1"
ensure CRDs are installed first, resource mapping not found for name: "kcp-client-issuer" namespace: "" from "": no matches for kind "Issuer" in version "cert-manager.io/v1"
ensure CRDs are installed first, resource mapping not found for name: "kcp-server-issuer" namespace: "" from "": no matches for kind "Issuer" in version "cert-manager.io/v1"
ensure CRDs are installed first, resource mapping not found for name: "kcp-server-client-issuer" namespace: "" from "": no matches for kind "Issuer" in version "cert-manager.io/v1"
ensure CRDs are installed first, resource mapping not found for name: "kcp-requestheader-client-issuer" namespace: "" from "": no matches for kind "Issuer" in version "cert-manager.io/v1"
ensure CRDs are installed first]
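The errors above mean the cert-manager CRDs are missing (see the pre-requisites in the README). One way to install cert-manager, including its CRDs, before installing kcp, assuming the standard jetstack chart:

helm repo add jetstack https://charts.jetstack.io
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true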

🐛 Wrong image tag

The helm chart seems to default to an image tag that does not exist:

$ helm install kcp kcp/kcp --version 0.4.0 --set externalHostname=my-dns --debug --dry-run | grep "image: ghcr.io/kcp-dev/kcp"
install.go:200: [debug] Original chart version: "0.4.0"
install.go:217: [debug] CHART PATH: /home/vagrant/.cache/helm/repository/kcp-0.4.0.tgz

  image: ghcr.io/kcp-dev/kcp
  image: ghcr.io/kcp-dev/kcp
          image: ghcr.io/kcp-dev/kcp:v0.21.0

Yet:

$ docker pull ghcr.io/kcp-dev/kcp:v0.21.0
Error response from daemon: manifest unknown

Looking at ghcr.io/kcp-dev/kcp I see main, latest, and release-0.21 tags.

Which one should I use?
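Until the default is fixed, a possible workaround is to override the image tag at install time. This is a sketch: the kcp.tag key is a guess at the chart's values layout, and the tag must be one that actually exists upstream (e.g. release-0.21 from the list above):

helm install kcp kcp/kcp --version 0.4.0 \
  --set externalHostname=my-dns \
  --set kcp.tag=release-0.21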

Bug: certificate-authority for external-admin-kubeconfig should match the certificateIssuer configured in values.yaml

The values.yaml of the KCP Helm chart allows configuring a custom (server) certificate issuer for the front-proxy: https://github.com/kcp-dev/helm-charts/blob/main/charts/kcp/values.yaml#L117-L121

However, the certificate-authority field of the external-admin-kubeconfig is hardcoded, instead of pointing to the CA cert of the configured certificate issuer: https://github.com/kcp-dev/helm-charts/blob/main/charts/kcp/templates/server-kubeconfigs.yaml#L50

I am not exactly sure how to fix this. I guess we need to both:

  • mount the custom CA cert in server-deployment.yaml
  • parameterize server-kubeconfigs.yaml so that certificate-authority points to this mounted CA cert

This would require some additions to values.yaml if I'm not mistaken. Does this sound reasonable?
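A rough sketch of what such an addition to values.yaml might look like (the certificateIssuer block exists today; the caSecretName key is hypothetical and does not exist in the chart):

kcpFrontProxy:
  certificateIssuer:
    name: kcp-letsencrypt-prod
    kind: ClusterIssuer
    # hypothetical: a Secret holding the issuer's CA bundle, to be mounted in
    # server-deployment.yaml and referenced as certificate-authority in
    # server-kubeconfigs.yaml
    caSecretName: kcp-letsencrypt-prod-ca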

Remove the extraFlags kcp chart parameter

This was just a hack to address some backwards-incompatible flag changes in kcp. In the future kcp releases should be better coordinated with the helm chart releases.

`admin.kubeconfig` x509: certificate signed by unknown authority

I'm deploying KCP using a slightly modified chart version (https://github.com/mjudeikis/helm-charts/tree/alliasing). Most of the changes are there to work around current limitations of the chart, like:

  1. Ability to alias DNS names
  2. Better control of ClusterIssuers for LE
  3. Ability to add an initContainer for KCP to fix DigitalOcean storage limitations
  4. Ability to override DNS names in self-signed certs

I will try to upstream the delta later, but it should not change the issue itself, as those parts are not changed.

The values.yaml file looks like the following. It uses nginx passthrough as ingress (replicating the OpenShift router passthrough) and a DNS in-place resolver for Let's Encrypt.

externalHostname: "kcp.faros.sh"
kcp:
  hostAliases:
    enabled: true
    values:
    - hostnames:
      - kcp.faros.sh
      ip: 127.0.0.1
  volumeClassName: "do-block-storage-xfs"
  storagePermissionsInitContainer: true
  memoryLimit: 4Gi
  memoryRequest: 1Gi
  tokenAuth:
    enabled: true
    fileName: auth-token.csv
    config: |
        user-1-token,user-1,1111-1111-1111-1111,"team-1"
        xxxxxxxxxxxxx,admin,5555-5555-5555-5555,"system:kcp:admin"
kcpFrontProxy:
  openshiftRoute:
    enabled: false
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: "nginx"
      acme.cert-manager.io/http01-edit-in-place: "true"
      nginx.ingress.kubernetes.io/backend-protocol: HTTPS
      nginx.ingress.kubernetes.io/secure-backends: "true"
    secret: kcp-front-proxy-cert
  certificate:
    issuer: kcp-letsencrypt-prod
oidc:
  enabled: true
  issuerUrl: https://dex.faros.sh
  clientId: faros
  groupClaim: groups
  usernameClaim: email
  usernamePrefix: faros-sso-
  groupsPrefix: faros-sso-
certificates:
  dnsNames:
  - kcp
  - localhost
  - kcp.faros.sh
etcd:
  memoryLimit: 1Gi
  memoryRequest: 1Gi
  cpuRequest: 100m

It uses Let's Encrypt for the front-proxy kcp.faros.sh certificates.
The current certificate flow is:
FrontProxy (Let's Encrypt) -> Shard (KCP-CA self-signed, cert-manager managed)

From inside the KCP shard pod:

/data $ kubectl ws tree
Error: Get "https://kcp.faros.sh:443/clusters/root/apis/tenancy.kcp.io/v1alpha1/workspaces": x509: certificate signed by unknown authority

This is because the URL and certificate-authority-data do not match:

/data $ cat $KUBECONFIG                                                                                                                                                                                                                                                                                                                                                                                                                                          
apiVersion: v1                                                                                                                                                                                                                                                                                                                                                                                                                                                   
clusters:                                                                                                                                                                                                                                                                                                                                                                                                                                                        
- cluster:                                                                                                                                                                                                                                                                                                                                                                                                                                                       
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUREVENDQWZXZ0F3SUJBZ0lRRldnWm5EOFhmckdZaHNxTlZxRkNTREFOQmdrcWhraUc5dzBCQVFzRkFEQVIKTVE4d0RRWURWUVFERXdaclkzQXRZMkV3SGhjTk1qTXdNakl4TVRZd05UTXlXaGNOTWpRd01qSXhNVFl3TlRNeQpXakFBTUlJQklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUF1ZzB2cG1SZjRvN3VqYm9LCjlORVlYbzRQZ09WT1BSQ0tTK0ZqZHpOL09DWldYRWtZcGVRU3NVMGxmdTFjcFlOT0lZVmZaOERDQTJSdXRWWnQKQ1kyNEN3aTJvVjNDVGlvb3A4Y2NHYldzU
mdBbHo4c2pISkp6eE9ZL09LaEVIc2VLMXRTZHlCclo0eElSc0xnMAptMVNlbGdMMXNnRG94bmxOdklrSE9vNlAyTlhMS3lHNGhOQWpBdk51Wmhlb2dmY2FqWUhnc2tySlJYOU90VmJxCkdnNEx3cVNrOU9zaDczeG0rZ3c3Tlgxb1NtZHlGZXI4d2VHbUIyb3NHUkRSbHF4eEovSU1jL3YwR2czRHV5alMKVU10VnhQUUtabUExMml6SEZNamcwODJ6MUIzM1BXcG1sVDB2cFlEaEdUSXRWa1lYRGdpeXp2Vk1VTGlnUmhCMwpRaFBjQVFJREFRQUJvM0l3Y0RBVEJnTlZIU1VFRERBS0JnZ3JCZ0VGQlFjREFUQU1CZ05WSFJNQkFmOEVBakFBCk1COEdBMVVkSXdRWU1CYUFGRHFzbEVXZVFKVi9LM1RFQTdGOH
BzKzgvdVZsTUNvR0ExVWRFUUVCL3dRZ01CNkMKQTJ0amNJSUpiRzlqWVd4b2IzTjBnZ3hyWTNBdVptRnliM011YzJnd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQgpBRExUMXMzYml3cUpJNlUzR2tEd2xIUVNTK3dFbk45WFc5NDNDbCsyMFVZemh4ME1lWHFaN3RobXd0cHdCalArCkVYMDJ4NFVQc0tPM0JaMnlPM2VmU2owQW1CT0Z1MDZOc3NsVFltQzlVaXpKZUZnL0dPbnZ3YVlWVVViV1NxVWYKVWhxUlROZlRQcEJTQis2cDI4czZCNDZrcUQwNnk0VEVpWFc4UllpU0FjWnV5WW9PVk53Z0wyL2EyZlJpM2VtSQpWMVZKVUFTU1l1NHYvQXBPWms2OUUvVnlVK29nQmxNT2p4QzFCZ0Z6c0xraUcyU1J
5d0FrcXR4TUJVN1BBdlFDCkVUMUIvVU5yZVVkeUNNL29XOHJ5aG1tTWFTYStEcjZlWm0vUVNGWjZJMDAwbXhxYlRhUTAwNnMzcDN5SFNIUnYKZWQ4YXdLYWYySmsxeXpQdGh4TlgycVk9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0KLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURFekNDQWZ1Z0F3SUJBZ0lRUWVXbElCRDBPQkNXQURaQklCK0oyekFOQmdrcWhraUc5dzBCQVFzRkFEQVYKTVJNd0VRWURWUVFERXdwclkzQXRjR3RwTFdOaE1CNFhEVEl6TURJeU1URXpORE14T1ZvWERUSXpNRFV5TWpFegpORE14T1Zvd0VURVBNQTBHQTFVRUF4TUdhMk53TFdOaE1JSUJJakFOQmdrcWhr
aUc5dzBCQVFFRkFBT0NBUThBCk1JSUJDZ0tDQVFFQThaU3ZVTWJEcU40QUt0VmZ6Y3pxOXppVmdjSGpaUitnRGViRXp3d0xZTVlIWndRMm5zek4KVnJPaThIUG50dDNLaTJwU0xNUlMxd2E3UjZzVjJHbjIxc2d1eDhSUVJLRHlhVGdQdEEwdUcwQmdSYVFxbFF5bgpVc3JrckthajRpWWRKajZ0elBaSTVEaGdGS2JCQ09jYzUzdDI2TTNFbmxsc3oyVE5sOWZuY2R3QWpmS2dTODBqCjBQMTFoVWhTZ2lVRnlodlNmdFdlb2MxNUIwOEt5VjJERlV6U2ZiNDkwZFdHOGpTaUZJZXk0b1BxNTc3OUdETGoKZE5OMlNRSHhpTjh4N0h2QThTS1hrcHlWN2Z6K3UybnpMcHVUQkJqYUNqY0ZWaFNWenZtaGRVTXZKT
GVPQmE0WQpvSFp3MEhOaW9sWHpnNUMvNFg2YTVWUWVJNHNPYmd1K2RRSURBUUFCbzJNd1lUQU9CZ05WSFE4QkFmOEVCQU1DCkFxUXdEd1lEVlIwVEFRSC9CQVV3QXdFQi96QWRCZ05WSFE0RUZnUVVPcXlVUlo1QWxYOHJkTVFEc1h5bXo3eisKNVdVd0h3WURWUjBqQkJnd0ZvQVV1TTJOWmRDb0l5WnRqVGlxUk9yem5tT3I2WkF3RFFZSktvWklodmNOQVFFTApCUUFEZ2dFQkFKMGZtMVY5YWcxZWpuRVFVOUhhSmNleTNIVzNRSFV2dXRKMGNkSmx5UEJ1aHJnZDUzdUlnTmI2Ck5EcXVLaXBvTDAzOVllcjZ4OG5sdTNzci9kbkNVQm9DcVo0RkF4R2diZlpVOUlHLzNUTWZFYmxlZkM3T0RUdU0KOUt0Um
NQZnAzY3QrRFp2UXpnczJFZCswZlpQMExwaVcxREFrN0ZMbkNqKzdmdW54Ly80Mmp6WlVzSCtlMTlwSQo4UnA3d3NEN3NYUERaRzU1eXdiZHIzVzZKS28zWTYxRTVRNXA1bDFueEJqWG1QR1Q0WXRsdlR5bDZqbHg5WjEyClVmaGU1RHptMmpUcXlKVDlDVkZSQzlxalRTcUVZNm9KVU9CeTdzSmxWWGtQVEZmSnhidFVxcGY1bDIrUGZLWDUKTXYwRWVBbFovMFJhNlJ1MHRVa1VXV0FTQUVOODVLUT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=                                                                                                                   
    server: https://kcp.faros.sh:443 

admin.kubeconfig is generated with the FrontProxy URL and the Shard certificate, and this breaks the certificate trust chain, as the KCP-CA is not in the FrontProxy CA trust.

For this to work, the shard's admin.kubeconfig should either point to the shard itself instead of going via the FrontProxy, OR it should have Let's Encrypt (or whichever other CA authority is used in the FP deployment) in its certificate-authority-data.

Workaround:

  1. Create a second KCP Service object to map port 443 to 6443. The port must be 443 so that the DNS override works.
k get svc -n kcp kcp-internal -o yaml                                                                                                                                                                                                                                                                                                                                                                                                  
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kcp
  name: kcp-internal
  namespace: kcp
spec:
  ports:
  - name: kcp
    port: 443
    protocol: TCP
    targetPort: 6443
  - name: virtual-workspaces
    port: 444
    protocol: TCP
    targetPort: 6444
  selector:
    app: kcp
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
  2. Create a DNS alias for the KCP pod in the pod spec:

  hostAliases:
  - hostnames:
    - kcp.faros.sh
    ip: 10.245.126.57 # the Service IP address

With this, traffic intended for kcp.faros.sh (the FrontProxy) goes directly to the shard endpoint, and things just work. But this is a workaround for the fact that the admin.kubeconfig certificate and URL do not match.

kcp ReplicaSet is rejected in OpenShift

I tried using the Helm chart to create a Helm "release" in an OpenShift cluster, and the ReplicaSet for the kcp server is unacceptable to OpenShift.

I put the following in my values YAML:

externalHostname: "some-long.stuff.containers.appdomain.cloud"
kcp:
  volumeClassName: "default"
kcpFrontProxy:
  openshiftRoute:
    enabled: true

I found that the ReplicaSet for kcp never got any Pod object created. A kubectl describe of that ReplicaSet included the following Event, which explains the problem (line breaks added for readability).

  Warning  FailedCreate  42s (x17 over 6m10s)  replicaset-controller  
Error creating: pods "kcp-75d5778b7d-" is forbidden: 
unable to validate against any security context constraint: [
provider "anyuid": Forbidden: not usable by user or serviceaccount,
provider restricted-v2: .spec.securityContext.fsGroup: Invalid value: []int64{65532}: 65532 is not an allowed group,
provider "restricted": Forbidden: not usable by user or serviceaccount,
provider "ibm-restricted-scc": Forbidden: not usable by user or serviceaccount,
provider "nonroot-v2": Forbidden: not usable by user or serviceaccount,
provider "nonroot": Forbidden: not usable by user or serviceaccount,
provider "ibm-anyuid-scc": Forbidden: not usable by user or serviceaccount,
provider "hostmount-anyuid": Forbidden: not usable by user or serviceaccount,
provider "ibm-anyuid-hostpath-scc": Forbidden: not usable by user or serviceaccount,
provider "hostnetwork-v2": Forbidden: not usable by user or serviceaccount,
provider "hostnetwork": Forbidden: not usable by user or serviceaccount,
provider "hostaccess": Forbidden: not usable by user or serviceaccount,
provider "ibm-anyuid-hostaccess-scc": Forbidden: not usable by user or serviceaccount,
provider "node-exporter": Forbidden: not usable by user or serviceaccount,
provider "ibm-privileged-scc": Forbidden: not usable by user or serviceaccount,
provider "privileged": Forbidden: not usable by user or serviceaccount]

Document some usage examples

A few ideas off the top of my head:

  • All-in-one minimal deployment
  • OIDC/authentication
  • Ingress options: Openshift route, ingress, gateway
  • External etcd vs deployed/managed

Add PDBs to sharded charts + update example setup

In #89, PDBs were added to the KCP helm chart for etcd, kcp, and the front-proxy.

@mjudeikis pointed out that we should also add PDBs to the sharded charts and update the example accordingly.

  • Update sharded charts to include optional PDBs (i.e. etcd, shard, and proxy); a sketch of such a PDB follows below.
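A minimal sketch of the kind of optional PDB the sharded charts could template (the name and selector labels are illustrative, not the charts' actual ones):

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: kcp-shard
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: kcp-shard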

bug: ServiceAccounts need `access` permission on their "home" workspace to pass front-proxy

Describe the bug

When using ServiceAccounts to talk to kcp, we discovered that they behave differently depending on whether you are talking to a kcp server directly or to kcp-front-proxy. When talking to kcp itself, a ServiceAccount only needs permissions on the respective resources within its home workspace to function. kcp-front-proxy, however, adds the requirement to have the access verb on / in the "home" workspace that the ServiceAccount exists in. This requirement does not exist when talking to kcp directly.

As per this Slack thread, ServiceAccounts should have implicit access to the workspace they have been created in.

Steps To Reproduce

  1. Create a workspace, e.g. via kubectl ws create wildwest --enter.
  2. Create a ServiceAccount:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-controller
  3. Create and bind a ClusterRole giving access to all API resources (but not the / nonResourceURL):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: manager-role
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]
---  
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: manager-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: manager-role
subjects:
  - kind: ServiceAccount
    name: example-controller
    namespace: default
  4. Impersonate the ServiceAccount and attempt to get any resources while connecting to the kcp-front-proxy:
$ kubectl get workspaces --as=system:serviceaccount:default:example-controller
Error from server (Forbidden): workspaces.tenancy.kcp.io is forbidden: User "system:serviceaccount:default:example-controller" cannot list resource "workspaces" in API group "tenancy.kcp.io" at the cluster scope: access denied
  5. Use a kubeconfig that points to kcp directly and run the same command, observing that it returns the list of workspaces.
  6. Amend the ClusterRole from above with the following rule:
- nonResourceURLs:
    - /
  verbs:
    - access
  7. Observe that impersonating the ServiceAccount through kcp-front-proxy now works.

Expected Behaviour

ServiceAccounts should not need the / nonResourceURL access permission when accessing a workspace through the front-proxy.

Additional Context

No response

bug: compute ws stuck in Initializing Phase

I installed KCP on the cluster using the Helm chart and tried creating an admin.kubeconfig following #30 as mentioned here. With some changes to the chart ingress etc., I was able to connect to kcp using the generated kubeconfig.

But when I looked at the workspaces, I noticed that compute is stuck in the Initializing phase.

After looking at the logs, I got this error:

{"ts":1679286985565.154,"caller":"workspace/workspace_controller.go:237","msg":""kcp-workspace" controller failed to sync "root|compute", err: Get "***.com:443/clusters/5dbcz56kxrgjoeal/apis/core.kcp.io/v1alpha1/logicalclusters/cluster": x509: certificate signed by unknown authority\n"}
