fluxcd-multi-tenancy

We are moving to Flux v2

⚠️ Please note: In preparation for Flux v2 GA, this repository of Flux v1 examples has been archived. The Flux v2 equivalent of what is shown here can be found at flux2-multi-tenancy.

Thanks a lot for your interest.

For posterity

This repository serves as a starting point for a multi-tenant cluster managed with Git, Flux and Kustomize.

I'm assuming that a multi-tenant cluster is shared by multiple teams. Cluster-wide operations are performed by the cluster administrators, while namespace-scoped operations are performed by the various teams, each with its own Git repository. That means a team member who is not a cluster admin can't create namespaces or custom resource definitions, or change anything in another team's namespace.

Flux multi-tenancy

Repositories

First you'll have to create two git repositories:

Team      | Namespace | Git repository  | Flux RBAC
----------|-----------|-----------------|-----------------------------------------------------------
ADMIN     | all       | org/dev-cluster | Cluster-wide, e.g. namespaces, CRDs, Flux controllers
DEV-TEAM1 | team1     | org/dev-team1   | Namespace scoped, e.g. deployments, custom resources
DEV-TEAM2 | team2     | org/dev-team2   | Namespace scoped, e.g. ingress, services, network policies

Cluster admin repository structure:

├── .flux.yaml 
├── base
│   ├── flux
│   └── memcached
├── cluster
│   ├── common
│   │   ├── crds.yaml
│   │   └── kustomization.yaml
│   └── team1
│       ├── flux-patch.yaml
│       ├── kubeconfig.yaml
│       ├── kustomization.yaml
│       ├── namespace.yaml
│       ├── psp.yaml
│       └── rbac.yaml
├── install
└── scripts

The base folder holds the deployment spec used for installing Flux in the flux-system namespace and in the team namespaces. All Flux instances share the same Memcached server, deployed at install time in the flux-system namespace.

With .flux.yaml we configure Flux to run kustomize build on the cluster directory and deploy the generated manifests:

version: 1
commandUpdated:
  generators:
    - command: kustomize build .

Development team1 repository structure:

├── .flux.yaml 
├── flux-patch.yaml
├── kustomization.yaml
└── workloads
    ├── frontend
    │   ├── deployment.yaml
    │   ├── kustomization.yaml
    │   └── service.yaml
    └── backend
        ├── deployment.yaml
        ├── kustomization.yaml
        └── service.yaml

The workloads folder contains the desired state of the team1 namespace, and flux-patch.yaml contains the Flux annotations that define how the container images should be updated.
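
For illustration, a flux-patch.yaml entry for the frontend deployment might look like this (the container name and the semver range are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: team1
  annotations:
    # let Flux update the image automatically
    fluxcd.io/automated: "true"
    # only roll out tags matching this semver range
    fluxcd.io/tag.frontend: semver:~1.0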

With .flux.yaml we configure Flux to run kustomize build, apply the container update policies and deploy the generated manifests:

version: 1
patchUpdated:
  generators:
    - command: kustomize build .
  patchFile: flux-patch.yaml

Install the cluster admin Flux

In the dev-cluster repo, change the git URL to point to your fork:

vim ./install/flux-patch.yaml

--git-url=git@github.com:org/dev-cluster

Install the cluster-wide Flux with kubectl kustomize:

kubectl apply -k ./install/

Get the public SSH key with:

fluxctl --k8s-fwd-ns=flux-system identity

Add the public key to the github.com:org/dev-cluster repository deploy keys with write access.

The cluster-wide Flux will do the following:

  • create the cluster objects from the cluster/common directory (CRDs, cluster roles, etc.)
  • create the team1 namespace and deploy a Flux instance with access restricted to that namespace
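
As a sketch, cluster/team1/kustomization.yaml ties these pieces together roughly as follows (inferred from the directory listing above; the actual file may differ):

bases:
  - ../../base/flux
resources:
  - namespace.yaml
  - rbac.yaml
  - psp.yaml
  - kubeconfig.yaml
patchesStrategicMerge:
  - flux-patch.yaml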

Install a Flux per team

Change the dev team1 git URL:

vim ./cluster/team1/flux-patch.yaml

--git-url=git@github.com:org/dev-team1

When you commit your changes, the system Flux will configure team1's Flux to sync with the org/dev-team1 repository.

Get the public SSH key for team1 with:

fluxctl --k8s-fwd-ns=team1 identity

Add the public key to the github.com:org/dev-team1 deploy keys with write access. Team1's Flux will apply the manifests from the org/dev-team1 repository only in the team1 namespace; this is enforced with RBAC and role bindings.
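
A minimal sketch of what cluster/team1/rbac.yaml could contain (names and rules are illustrative):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: flux
  namespace: team1
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: flux
  namespace: team1
rules:
  # full access, but only inside the team1 namespace
  - apiGroups: ['*']
    resources: ['*']
    verbs: ['*']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: flux
  namespace: team1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: flux
subjects:
  - kind: ServiceAccount
    name: flux
    namespace: team1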

If team1 needs to deploy a controller that depends on a CRD or a cluster role, they'll have to open a PR in the org/dev-cluster repository and add those cluster-wide objects in the cluster/common directory.

Team1's Flux instance can be customised with different options than the system Flux using cluster/team1/flux-patch.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flux
spec:
  template:
    spec:
      containers:
        - name: flux
          args:
            - --manifest-generation=true
            - --memcached-hostname=flux-memcached.flux-system
            - --memcached-service=
            - --git-poll-interval=5m
            - --sync-interval=5m
            - --ssh-keygen-dir=/var/fluxd/keygen
            - --k8s-allow-namespace=team1
            - --git-url=git@github.com:org/dev-team1
            - --git-branch=master

The --k8s-allow-namespace flag restricts Flux's discovery mechanism to a single namespace.

Install Flagger

Flagger is a progressive delivery Kubernetes operator that can be used to automate Canary, A/B testing and Blue/Green deployments.

Flux Flagger

You can deploy Flagger by including its manifests in the cluster/kustomization.yaml file:

bases:
  - ./flagger/
  - ./common/
  - ./team1/

Commit the changes to git and wait for system Flux to install Flagger and Prometheus:

fluxctl --k8s-fwd-ns=flux-system sync

kubectl -n flagger-system get po
NAME                                  READY   STATUS
flagger-64c6945d5b-4zgvh              1/1     Running
flagger-prometheus-6f6b558b7c-22kw5   1/1     Running

A team member can now push canary objects to the org/dev-team1 repository and Flagger will automate the deployment process.
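
For example, a minimal Canary object pushed to org/dev-team1 could look like this (the podinfo target, port and analysis settings are placeholders, and the apiVersion depends on the Flagger release):

apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: team1
spec:
  provider: kubernetes
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  service:
    port: 9898
  analysis:
    # run checks every minute, fail after 5 failed checks
    interval: 1m
    threshold: 5
    iterations: 10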

Flagger can notify your teams when a canary deployment has been initialised, when a new revision has been detected, and whether the canary analysis failed or succeeded.

You can enable Slack notifications by editing the cluster/flagger/flagger-patch.yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flagger
spec:
  template:
    spec:
      containers:
        - name: flagger
          args:
            - -mesh-provider=kubernetes
            - -metrics-server=http://flagger-prometheus:9090
            - -slack-user=flagger
            - -slack-channel=alerts
            - -slack-url=https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK

Enforce pod security policies per team

With pod security policies, a cluster admin can define a set of conditions that a pod must run with in order to be accepted into the system.

For example, you can forbid a team from creating privileged containers or using the host network.

Edit the team1 pod security policy cluster/team1/psp.yaml:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: default-psp-team1
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
spec:
  privileged: false
  hostIPC: false
  hostNetwork: false
  hostPID: false
  allowPrivilegeEscalation: false
  allowedCapabilities:
    - '*'
  fsGroup:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
    - '*'

Set privileged, hostIPC, hostNetwork and hostPID to false and commit the change to git. From this moment on, team1 will not be able to run containers with an elevated security context under the default service account.
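
To double-check which accounts may use the policy, you can query the API (assuming the PSP name above):

kubectl -n team1 auth can-i use podsecuritypolicy/default-psp-team1 \
  --as=system:serviceaccount:team1:default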

If a team member adds a privileged container definition in the org/dev-team1 repository, Kubernetes will deny it:

kubectl -n team1 describe replicasets podinfo-5d7d9fc9d5

Error creating: pods "podinfo-5d7d9fc9d5-" is forbidden: unable to validate against any pod security policy:
[spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]

Enforce custom policies per team

Gatekeeper is a validating webhook that enforces CRD-based policies executed by Open Policy Agent.

Flux Gatekeeper

You can deploy Gatekeeper by including its manifests in the cluster/kustomization.yaml file:

bases:
  - ./gatekeeper/
  - ./flagger/
  - ./common/
  - ./team1/

Inside the gatekeeper dir there is a constraint template that instructs OPA to reject Kubernetes deployments if no container resources are specified.
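
As a sketch, the Rego in such a template could check memory requests like this (rule and field names vary across Gatekeeper releases, so treat it as illustrative):

apiVersion: templates.gatekeeper.sh/v1alpha1
kind: ConstraintTemplate
metadata:
  name: containerresources
spec:
  crd:
    spec:
      names:
        kind: ContainerResources
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package containerresources
        violation[{"msg": msg}] {
          container := input.review.object.spec.template.spec.containers[_]
          not container.resources.requests.memory
          msg := sprintf("container <%v> has no memory requests", [container.name])
        }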

Enable the constraint for team1 by editing the cluster/gatekeeper/constraints.yaml file:

apiVersion: constraints.gatekeeper.sh/v1alpha1
kind: ContainerResources
metadata:
  name: containerresources
spec:
  match:
    namespaces:
      - team1
    kinds:
      - apiGroups: ["apps"]
        kinds: ["Deployment"]

Commit the changes to git and wait for system Flux to install Gatekeeper and apply the constraints:

fluxctl --k8s-fwd-ns=flux-system sync

watch kubectl -n gatekeeper-system get po

If a team member adds a deployment without CPU or memory resources in the org/dev-team1 repository, Gatekeeper will deny it:

kubectl -n team1 logs deploy/flux

admission webhook "validation.gatekeeper.sh" denied the request: 
[denied by containerresources] container <podinfo> has no memory requests
[denied by containerresources] container <sidecar> has no memory limits

Add a new team/namespace/repository

If you want to add another team to the cluster, first create a git repository, e.g. github.com:org/dev-team2.

Run the create team script:

./scripts/create-team.sh team2

team2 created at cluster/team2/
team2 added to cluster/kustomization.yaml
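
The script is roughly equivalent to the following steps (a sketch, not the actual script contents):

# copy the team1 template and rename all references
cp -r cluster/team1 cluster/team2
sed -i 's/team1/team2/g' cluster/team2/*.yaml
# then add ./team2/ to the bases list in cluster/kustomization.yaml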

Change the git URL in cluster/team2 dir:

vim ./cluster/team2/flux-patch.yaml

--git-url=git@github.com:org/dev-team2

Push the changes to the master branch of org/dev-cluster and sync with the cluster:

fluxctl --k8s-fwd-ns=flux-system sync

Get the team2 public SSH key with:

fluxctl --k8s-fwd-ns=team2 identity

Add the public key to the github.com:org/dev-team2 repository deploy keys with write access. Team2's Flux will apply the manifests from the org/dev-team2 repository only in the team2 namespace.

Isolate tenants

With this setup, Flux will prevent a team member from altering cluster-level objects or other teams' workloads.

In order to harden the tenant isolation, a cluster admin should consider using:

  • resource quotas (limit the compute resources that can be requested by a team)
  • network policies (restrict cross namespace traffic)
  • pod security policies (prevent running privileged containers or host network and filesystem usage)
  • Open Policy Agent admission controller (enforce custom policies on Kubernetes objects)
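
For instance, the first two could be expressed per team like this (the quota figures are arbitrary):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: team1-compute
  namespace: team1
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only
  namespace: team1
spec:
  podSelector: {}
  ingress:
    # only pods from the team1 namespace may connect
    - from:
        - podSelector: {}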

Getting Help

If you have any questions about Flux and GitOps, reach out to the Flux community.

Contributors

cartyc, lloydchang, stefanprodan

Issues

[Question] Can multiple Weave Flux instances share the same memcached service?

Hi,

I am implementing multi-tenancy via Weave Flux operators. My approach is the same: a cluster repo → a cluster Flux operator, and a namespace → a namespace Flux operator.

However, given that I do not want the Flux operators to be deleted or modified by mistake by the namespace users, I decided to put all the namespace Flux operators in the flux namespace, as below:

$ kubectl get pods -n flux
NAME                          READY   STATUS    RESTARTS   AGE
demo2-flux-5c5f58f547-zvjb5   1/1     Running   0          5m23s
flux-6f6d459df5-jsqld         1/1     Running   0          9h
memcached-7b4c8bd545-5ks9g    1/1     Running   0          2d8h

demo2 is the namespace name; I use the manifests below to get it working:

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  labels:
    name: demo2-flux
  name: demo2-flux
  namespace: demo2
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flux
subjects:
  - kind: ServiceAccount
    name: demo2-flux
    namespace: flux
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  labels:
    name: demo2-flux-secrets
  name: demo2-flux-secrets
  namespace: flux
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  labels:
    name: demo2-flux-secrets
  name: demo2-flux-secrets
  namespace: flux
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flux
subjects:
  - kind: ServiceAccount
    name: demo2-flux
    namespace: flux

Obviously, in this case all the Flux operators are sharing one memcached service (pod).
My question is: what are some of the considerations for working with one memcached service?

Flux does not create the cluster role and service account in the team namespace

Using fluxcd multi-tenancy, I have cluster role and service account manifests in the flux namespace; my kustomization is like so:

workloads/controller/kustomization.yaml

resources:
  - clusterrole.yaml
  - clusterrolebinding.yaml
  - deployment.yaml
  - serviceaccount.yaml

Flux only creates the deployment, but not the cluster role and service account.

.flux.yaml

version: 1
commandUpdated:
  generators:
    - command: kustomize build .

This is the log:

flux" cannot get resource "clusterroles" in API group "rbac.authorization.k8s.io" at the cluster scope; :clusterrole/goldilocks-dashboard: running kubectl: Error from server (Forbidden): error when retrieving current configuration of:\nResource: "rbac.authorization.k8s.io/v1beta1, Resource=clusterroles", GroupVersionKind: "rbac.authorization.k8s.io/v1beta1,

RBAC issue in team namespace

I've followed the guide and I'm getting this issue:

ts=2019-10-09T11:14:19.220962472Z caller=main.go:243 version=1.14.2
ts=2019-10-09T11:14:19.220997437Z caller=main.go:372 msg="using in cluster config to connect to the cluster"
ts=2019-10-09T11:14:19.292022192Z caller=main.go:450 err="secrets \"flux-git-deploy\" is forbidden: User \"system:serviceaccount:adam:flux\" cannot patch resource \"secrets\" in API group \"\" in the namespace \"adam\": RBAC: [clusterrole.rbac.authorization.k8s.io \"flux-readonly\" not found, clusterrole.rbac.authorization.k8s.io \"flux-psp\" not found]"

adam is the name of the team. Any ideas?
The flux-git-deploy secret gets created but is empty.

Can Flux support multiple versions of the same service in a cluster?

What I would like to do is test multiple versions of a service in the same cluster. I can see how to do it at the k8s level. What I keep getting caught up on is how this would work at the git level. I have not seen (it might simply be my lack of knowledge) something as simple as getting a service from a branch. Or is this simply multiple git-url settings?

Flux not being able to locate the flux-git-deploy secret

[takwo@master1 clusterflux]$ kubectl get secrets -n flux-system
NAME                  TYPE                                  DATA   AGE
default-token-52v2g   kubernetes.io/service-account-token   3      13m
flux-git-deploy       Opaque                                0      8m18s
flux-token-24rdc      kubernetes.io/service-account-token   3      13m

I see that flux-git-deploy is created by kustomize and is present when I run kubectl get secrets -n flux-system.
However, I get the error:

ts=2020-01-16T23:05:52.220272492Z caller=main.go:250 version=1.17.0
ts=2020-01-16T23:05:52.220373689Z caller=main.go:389 msg="using in cluster config to connect to the cluster"
ts=2020-01-16T23:05:52.811797455Z caller=main.go:467 err="secrets \" flux-git-deploy\" not found"

fluxctl list-images --k8s-fwd-ns=xxx --workload=deployment/xxx returns empty

I have mostly successfully set up a multi-tenant Flux cluster, with currently two namespaces managed by Flux.
It basically works, in the sense that a new tag in the registry triggers an update in the namespace repository and then in the namespace.

The remaining problem is that

fluxctl list-images --k8s-fwd-ns=xxx --workload=deployment/xxx

returns nothing.

What might be the cause?

Kustomize error - SA

Tried deploying multiple teams and getting this error:

caller=images.go:17 component=sync-loop msg="polling for new images for automated workloads"
ts=2019-10-21T12:41:32.048856366Z caller=images.go:23 component=sync-loop error="getting unlocked automated resources: error executing generator command \"kubectl kustomize .\" from file \".flux.yaml\": exit status 1\nerror output:\nError: Multiple matches for name ~G_v1_ServiceAccount|ruben|~P|flux|~S:\n  [~G_v1_ServiceAccount|ruben|~P|flux|~S ~G_v1_ServiceAccount|adam|~P|flux|~S]\n\n\nExamples:\n  # Use the current working directory\n  kubectl kustomize .\n  \n  # Use some shared configuration directory\n  kubectl kustomize /home/configuration/production\n  \n  # Use a URL\n  kubectl kustomize github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6\n\nUsage:\n  kubectl kustomize <dir> [flags] [options]\n\nUse \"kubectl options\" for a list of global command-line options (applies to all commands).\n\nMultiple matches for name ~G_v1_ServiceAccount|ruben|~P|flux|~S:\n  [~G_v1_ServiceAccount|ruben|~P|flux|~S ~G_v1_ServiceAccount|adam|~P|flux|~S]\n\ngenerated output:\nError: Multiple matches for name ~G_v1_ServiceAccount|ruben|~P|flux|~S:\n  [~G_v1_ServiceAccount|ruben|~P|flux|~S ~G_v1_ServiceAccount|adam|~P|flux|~S]\n\n\nExamples:\n  # Use the current working directory\n  kubectl kustomize .\n  \n  # Use some shared configuration directory\n  kubectl kustomize /home/configuration/production\n  \n  # Use a URL\n  kubectl kustomize github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6\n\nUsage:\n  kubectl kustomize 

where adam is already a name of a team deployed to the cluster.

Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.1", GitCommit:"d647ddbd755faf07169599a625faf302ffc34458", GitTreeState:"clean", BuildDate:"2019-10-02T23:49:07Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.6-eks-5047ed", GitCommit:"5047edce664593832e9b889e447ac75ab104f527", GitTreeState:"clean", BuildDate:"2019-08-21T22:32:40Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

Does Flux support multiple environments?

Hi all,
Please help me with the below scenario:
I have a single AKS cluster, on which I have created 3 different environments (Dev, QA and Prod), each with one namespace and one Nginx ingress controller. Now I want to use GitOps with Flux for application deployment.

  1. Do I need to install 3 Flux instances, for Dev, QA and Prod?
  2. Do I need to add each environment's public SSH key on GitHub? Are there any other best practices we can follow?
  3. Currently I am using a single GitHub repository with a set of YAML files and 3 different branches (QA, Dev and Prod). Will Flux support the 3 branches?

The team Flux causes lots of 403 error events in the audit log

Each time a namespaced team Flux runs its sync, it gets a bunch of 403 Forbidden responses from the API, cluttering the audit log with:

{
    "kind": "Event",
    "apiVersion": "audit.k8s.io/v1",
    "level": "Metadata",
    "auditID": "20162fc3-bb05-458f-906e-8c3eb60f04a1",
    "stage": "ResponseComplete",
    "requestURI": "/apis/crd.k8s.amazonaws.com/v1alpha1/eniconfigs?labelSelector=fluxcd.io%2Fsync-gc-mark",
    "verb": "list",
    "user": {
        "username": "system:serviceaccount:team1:flux",
        "uid": "9b41e074-5dec-11ea-a627-06ab94fdafa0",
        "groups": [
            "system:serviceaccounts",
            "system:serviceaccounts:team1",
            "system:authenticated"
        ]
    },
    "sourceIPs": [
        "10.41.72.187"
    ],
    "userAgent": "fluxd/v0.0.0 (linux/amd64) kubernetes/$Format",
    "objectRef": {
        "resource": "eniconfigs",
        "apiGroup": "crd.k8s.amazonaws.com",
        "apiVersion": "v1alpha1"
    },
    "responseStatus": {
        "metadata": {},
        "status": "Failure",
        "reason": "Forbidden",
        "code": 403
    },
    "requestReceivedTimestamp": "2020-06-17T13:36:10.116307Z",
    "stageTimestamp": "2020-06-17T13:36:10.116387Z",
    "annotations": {
        "authorization.k8s.io/decision": "forbid",
        "authorization.k8s.io/reason": ""
    }
}

I guess it's rooted in the flux-readonly cluster role. Is there anything we can do to improve the situation, or even have Flux not query resources it has no permission for?
