
Comments (19)

gecube commented on September 24, 2024
+  serviceSpec:
+    metadata:
+      labels:
+        env: prod
+      annotations:
+        example.com/annotation: "true"
+    spec: # core.v1.ServiceSpec Ready k8s type

This is not enough, as the business logic supposes several types of services: a basic headless one, and probably an additional one for clients. Should we give the user the option to choose whether they want the second one, or just deploy both every time?

from etcd-operator.

gecube commented on September 24, 2024
serviceAccountName: default

then you also need a way to enable or disable creation of the custom service account...

from etcd-operator.

gecube commented on September 24, 2024

@AlexGluck I don't think so, because it is a much more important thing than affinity, tolerations, etc. But let it be.

from etcd-operator.

gecube commented on September 24, 2024

from etcd-operator.

gecube commented on September 24, 2024
+  nodeSelector: {} # map[string]string
+  tolerations: [] # core.v1.Toleration Ready k8s type
+  securityContext: {} # core.v1.PodSecurityContext Ready k8s type
+  priorityClassName: "low"
+  topologySpreadConstraints: [] # core.v1.TopologySpreadConstraint Ready k8s type
+  terminationGracePeriodSeconds: 30 # int64
+  schedulerName: "default-scheduler"
+  runtimeClassName: "legacy"
+  extraArgs: # map[string]string
+    arg1: "value1"
+    arg2: "value2"

I'd prefer to move this to some additional key, like podSpec: or something similar, so as not to overwhelm the base spec key.

from etcd-operator.

gecube commented on September 24, 2024
+  storage:
+    volumeClaimTemplate:
+      metadata:
+        labels:
+          env: prod
+        annotations:
+          example.com/annotation: "true"
+      spec: # core.v1.PersistentVolumeClaimSpec Ready k8s type
+        storageClassName: gp3
+        accessModes: [ "ReadWriteOnce" ]
+        resources:
+          requests:
+            storage: 10Gi
+    emptyDir: {} # core.v1.EmptyDirVolumeSource Ready k8s type

We need to discuss whether we want to enable all possible types of volumes or restrict them to some safe subset, because core.v1.PersistentVolumeClaimSpec is in fact very complex for basic use.

+  podDisruptionBudget:
+    maxUnavailable: 1 # intstr.IntOrString
+    minAvailable: 2
+    selectorLabels: # If not set, the operator will use the labels from the EtcdCluster
+      env: prod

We need a binary flag: deploy the PDB or not, not just the values in the spec, I think. Or would podDisruptionBudget: nil be enough to skip deploying it?

from etcd-operator.

gecube commented on September 24, 2024
+  extraArgs: # map[string]string
+    arg1: "value1"
+    arg2: "value2"
+  extraEnvs: # []core.v1.EnvVar Ready k8s type
+    - name: MY_ENV
+      value: "my-value"

Should we have some additional validation here? Say the user passes --initial-cluster-state here, a flag the operator already appends to the spec itself. This is where the abstraction leaks.

from etcd-operator.

sircthulhu commented on September 24, 2024

As for the PDB spec, I think the user of an etcd cluster does not need such a complex spec. The operator can configure maxUnavailable according to the number of replicas in the cluster; only labels and annotations may be useful.

from etcd-operator.

gecube commented on September 24, 2024

Also I want to have a good hierarchy:

---
-apiVersion: etcd.aenix.io/v1alpha1
+apiVersion: etcd.aenix.io/v1alpha2      
 kind: EtcdCluster
 metadata:
   name: test
   namespace: default
 spec:
   image: "quay.io/coreos/etcd:v3.5.12"
   replicas: 3
+  podSpec:
+    imagePullSecrets:  # core.v1.LocalObjectReference Ready k8s type
+    - name: myregistrykey
+    serviceAccountName: default
+    podMetadata:
+      labels:
+        env: prod
+      annotations:
+        example.com/annotation: "true"
+    resources: # core.v1.ResourceRequirements Ready k8s type
+      requests:
+        cpu: 100m
+        memory: 100Mi
+      limits:
+        cpu: 200m
+        memory: 200Mi
+    affinity: {} # core.v1.Affinity Ready k8s type
+    nodeSelector: {} # map[string]string
+    tolerations: [] # core.v1.Toleration Ready k8s type
+    securityContext: {} # core.v1.PodSecurityContext Ready k8s type
+    priorityClassName: "low"
+    topologySpreadConstraints: [] # core.v1.TopologySpreadConstraint Ready k8s type
+    terminationGracePeriodSeconds: 30 # int64
+    schedulerName: "default-scheduler"
+    runtimeClassName: "legacy"     
+  service-headless:
+    serviceSpec:
+      metadata:
+        labels:
+          env: prod
+        annotations:
+          example.com/annotation: "true"
+      spec: # core.v1.ServiceSpec Ready k8s type
+  service-main:
+    serviceSpec:
+      metadata:
+        labels:
+          env: prod
+        annotations:
+          example.com/annotation: "true"
+      spec: # core.v1.ServiceSpec Ready k8s type
+  storage:
+    volumeClaimTemplate:
+      metadata:
+        labels:
+          env: prod
+        annotations:
+          example.com/annotation: "true"
+      spec: # core.v1.PersistentVolumeClaimSpec Ready k8s type
+        storageClassName: gp3
+        accessModes: [ "ReadWriteOnce" ]
+        resources:
+          requests:
+            storage: 10Gi
+    emptyDir: {} # core.v1.EmptyDirVolumeSource Ready k8s type

Something like this. Then in the editor I will be able to easily collapse / expand particular blocks with the technical details. I want to emphasize that I think it is a really great idea to separate the main business-logic parameters (like the number of replicas) from the deep technical things like service labels.

from etcd-operator.

sergeyshevch commented on September 24, 2024

Updated spec addressing all comments

---
apiVersion: etcd.aenix.io/v1alpha1
kind: EtcdCluster
metadata:
 name: test
 namespace: default
spec:
  image: "quay.io/coreos/etcd:v3.5.12"
  replicas: 3
+ podSpec:
+   imagePullPolicy: "Always" # core.v1.PullPolicy Ready k8s type
+   imagePullSecrets:  # core.v1.LocalObjectReference Ready k8s type
+   - name: myregistrykey
+   podMetadata:
+     labels:
+       env: prod
+     annotations:
+       example.com/annotation: "true"
+   resources: # core.v1.ResourceRequirements Ready k8s type
+     requests:
+       cpu: 100m
+       memory: 100Mi
+     limits:
+       cpu: 200m
+       memory: 200Mi
+   affinity: {} # core.v1.Affinity Ready k8s type
+   nodeSelector: {} # map[string]string
+   tolerations: [] # core.v1.Toleration Ready k8s type
+   securityContext: {} # core.v1.PodSecurityContext Ready k8s type
+   priorityClassName: "low"
+   topologySpreadConstraints: [] # core.v1.TopologySpreadConstraint Ready k8s type
+   terminationGracePeriodSeconds: 30 # int64
+   schedulerName: "default-scheduler"
+   runtimeClassName: "legacy"
+   extraArgs: # map[string]string
+     arg1: "value1"
+     arg2: "value2"
+   extraEnvs: # []core.v1.EnvVar Ready k8s type
+     - name: MY_ENV
+       value: "my-value"
+ serviceAccountSpec: # TBD. How to represent it? Do we need ability to specify existing service account?
+   create: true
+   metadata:
+     labels:
+       env: prod
+     annotations:
+       example.com/annotation: "true"
+ serviceSpec:
+   client:
+     metadata:
+       labels:
+         env: prod
+       annotations:
+         example.com/annotation: "true"
+     spec: # core.v1.ServiceSpec Ready k8s type
+   headless:
+     metadata:
+       labels:
+         env: prod
+       annotations:
+         example.com/annotation: "true"
+     spec: # core.v1.ServiceSpec Ready k8s type
+ podDisruptionBudget:
+   maxUnavailable: 1 # intstr.IntOrString
+   minAvailable: 2
+   selectorLabels: # If not set, the operator will use the labels from the EtcdCluster
+     env: prod
+ readinessGates: [] # core.v1.PodReadinessGate Ready k8s type
+ storage:       # Discussed separately
+   volumeClaimTemplate:
+     metadata:
+       labels:
+         env: prod
+       annotations:
+         example.com/annotation: "true"
+     spec: # core.v1.PersistentVolumeClaimSpec Ready k8s type
+       storageClassName: gp3
+       accessModes: [ "ReadWriteOnce" ]
+       resources:
+         requests:
+           storage: 10Gi
+   emptyDir: {} # core.v1.EmptyDirVolumeSource Ready k8s type
- storage:
-   persistence: true # default: true, immutable
-   storageClass: local-path
-   size: 10Gi
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2024-03-06T18:39:45Z"
    status: "True"
    type: Ready

from etcd-operator.

sergeyshevch commented on September 24, 2024

Should we have some additional validation here? Say the user passes --initial-cluster-state here, a flag the operator already appends to the spec itself. This is where the abstraction leaks.

I don't think it's necessary. I guess the user can do anything they want here.

As for the PDB spec, I think the user of an etcd cluster does not need such a complex spec. The operator can configure maxUnavailable according to the number of replicas in the cluster; only labels and annotations may be useful.

I guess it's better to have the ability to configure all fields in the PDB. For example, my cluster has a policy that checks PDBs and requires a percent value in minAvailable.

from etcd-operator.

sergeyshevch commented on September 24, 2024

#67 Implements storage spec from this proposal. All discussions about storage part can now be addressed here

from etcd-operator.

gecube commented on September 24, 2024

@sergeyshevch

I don't think it's necessary. I guess the user can do anything they want here.

No way. Only things that do not break the etcd cluster.

I guess it's better to have the ability to configure all fields in the PDB. For example, my cluster has a policy that checks PDBs and requires a percent value in minAvailable.

Yes, I agree; I never argued with that. I am pointing at a different thing.

from etcd-operator.

sergeyshevch commented on September 24, 2024

@gecube OK, then we need to perform additional validation for the extraArgs field. Added it to #69.

from etcd-operator.

AlexGluck commented on September 24, 2024

I think we need image to be placed under podSpec.

from etcd-operator.

sergeyshevch commented on September 24, 2024

Updated spec after the merge of #69. The other spec fields are implemented and removed from this diff.

---
apiVersion: etcd.aenix.io/v1alpha1
kind: EtcdCluster
metadata:
 name: test
 namespace: default
spec:
  image: "quay.io/coreos/etcd:v3.5.12"
  replicas: 3
+ serviceAccountSpec: # TBD. How to represent it? Do we need ability to specify existing service account?
+   create: true
+   metadata:
+     labels:
+       env: prod
+     annotations:
+       example.com/annotation: "true"
+ serviceSpec:
+   client:
+     metadata:
+       labels:
+         env: prod
+       annotations:
+         example.com/annotation: "true"
+     spec: # core.v1.ServiceSpec Ready k8s type
+   headless:
+     metadata:
+       labels:
+         env: prod
+       annotations:
+         example.com/annotation: "true"
+     spec: # core.v1.ServiceSpec Ready k8s type
+ podDisruptionBudget:
+   maxUnavailable: 1 # intstr.IntOrString
+   minAvailable: 2
+   selectorLabels: # If not set, the operator will use the labels from the EtcdCluster
+     env: prod
+ readinessGates: [] # core.v1.PodReadinessGate Ready k8s type

from etcd-operator.

prometherion commented on September 24, 2024

I was taking a look at the specification, and noticed this:

// Replicas is the count of etcd instances in cluster.
// +optional
// +kubebuilder:default:=3
// +kubebuilder:validation:Minimum:=0
Replicas *int32 `json:"replicas,omitempty"`

In the context of etcd, an even number of instances is nonsense. I wonder whether we could take advantage of the +kubebuilder:validation:MultipleOf kubebuilder validation marker:

specifies that this field must have a numeric value that's a multiple of this one.

from etcd-operator.

prometherion commented on September 24, 2024

Damn, you're right 😁
I thought we could use that for this kind of even/odd check 🀦🏻

from etcd-operator.

kvaps commented on September 24, 2024

Okay, I'm going to close this proposal in favor of #109.
Thank you for your design!

from etcd-operator.
