Comments (19)
+ serviceSpec:
+ metadata:
+ labels:
+ env: prod
+ annotations:
+ example.com/annotation: "true"
+ spec: # core.v1.ServiceSpec Ready k8s type
This is not enough, as the business logic assumes several types of services: a basic headless one, and probably an additional one for clients. Should we give the user the option to choose whether they want the second one, or just deploy both every time?
from etcd-operator.
serviceAccountName: default
Then you also need to be able to disable or enable creation of a custom service account...
@AlexGluck I don't think so, because it is a much more important thing than affinity, tolerations, etc. But let it be.
+ nodeSelector: {} # map[string]string
+ tolerations: [] # core.v1.Toleration Ready k8s type
+ securityContext: {} # core.v1.PodSecurityContext Ready k8s type
+ priorityClassName: "low"
+ topologySpreadConstraints: [] # core.v1.TopologySpreadConstraint Ready k8s type
+ terminationGracePeriodSeconds: 30 # int64
+ schedulerName: "default-scheduler"
+ runtimeClassName: "legacy"
+ extraArgs: # map[string]string
+ arg1: "value1"
+ arg2: "value2"
I'd prefer this to move to some additional key, so as not to overwhelm the base spec key. Something like podSpec: or similar.
+ storage:
+ volumeClaimTemplate:
+ metadata:
+ labels:
+ env: prod
+ annotations:
+ example.com/annotation: "true"
+ spec: # core.v1.PersistentVolumeClaimSpec Ready k8s type
+ storageClassName: gp3
+ accessModes: [ "ReadWriteOnce" ]
+ resources:
+ requests:
+ storage: 10Gi
+ emptyDir: {} # core.v1.EmptyDirVolumeSource Ready k8s type
We need to discuss whether we want to enable all possible types of volumes or restrict them to some safe subset, because core.v1.PersistentVolumeClaimSpec is in fact very complex for basic use.
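Whichever subset is chosen, one guard worth having either way is that exactly one volume source is set. A minimal Go sketch; the struct here is a stand-in for the draft spec above, not actual operator code:

```go
package main

import "fmt"

// StorageSpec mirrors the proposed storage block: a PVC template or an
// emptyDir, but not both and not neither. Field types are placeholders.
type StorageSpec struct {
	VolumeClaimTemplate *struct{} // stand-in for the PVC template
	EmptyDir            *struct{} // stand-in for core.v1.EmptyDirVolumeSource
}

// validateStorage rejects specs that set both or neither volume source.
func validateStorage(s StorageSpec) error {
	set := 0
	if s.VolumeClaimTemplate != nil {
		set++
	}
	if s.EmptyDir != nil {
		set++
	}
	if set != 1 {
		return fmt.Errorf("exactly one of volumeClaimTemplate or emptyDir must be set, got %d", set)
	}
	return nil
}

func main() {
	fmt.Println(validateStorage(StorageSpec{EmptyDir: &struct{}{}})) // <nil>
	fmt.Println(validateStorage(StorageSpec{}))                      // error: neither source set
}
```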
+ podDisruptionBudget:
+ maxUnavailable: 1 # intstr.IntOrString
+ minAvailable: 2
+ selectorLabels: # If not set, the operator will use the labels from the EtcdCluster
+ env: prod
We need a binary flag: deploy the PDB or not, not just the values in the spec, I think. Or would podDisruptionBudget: nil be enough to not deploy it?
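If the nil-means-off convention is adopted, the check is trivial: the operator creates the PDB only when the key is present, even as an empty block. A hypothetical sketch:

```go
package main

import "fmt"

// PDBSpec is a stand-in for the proposed podDisruptionBudget block.
type PDBSpec struct {
	MinAvailable   *int
	MaxUnavailable *int
}

// shouldDeployPDB implements the "nil means off" convention: a missing
// podDisruptionBudget key disables the PDB; an empty block enables it
// with operator defaults.
func shouldDeployPDB(pdb *PDBSpec) bool {
	return pdb != nil
}

func main() {
	fmt.Println(shouldDeployPDB(nil))        // false: key omitted, no PDB
	fmt.Println(shouldDeployPDB(&PDBSpec{})) // true: empty block, defaults apply
}
```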
+ extraArgs: # map[string]string
+ arg1: "value1"
+ arg2: "value2"
+ extraEnvs: # []core.v1.EnvVar Ready k8s type
+ - name: MY_ENV
+ value: "my-value"
Should we have some additional validation here? Say we pass --cluster-initial-state here, which is already appended to the spec by the operator itself. This is where the abstraction leaks.
As for the PDB spec, I think the user of an etcd cluster does not need such a complex spec. The operator can configure maxUnavailable according to the number of replicas in the cluster; only labels and annotations would be useful.
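The defaulting rule suggested here follows directly from etcd quorum arithmetic: with n members, quorum is n/2 + 1, so the operator could default maxUnavailable to n minus quorum. A sketch under that assumption:

```go
package main

import "fmt"

// defaultMaxUnavailable returns how many etcd members can be disrupted
// while still preserving quorum (replicas/2 + 1 members must stay up).
// This sketches the defaulting rule suggested above, not operator code.
func defaultMaxUnavailable(replicas int) int {
	quorum := replicas/2 + 1
	return replicas - quorum
}

func main() {
	for _, n := range []int{1, 3, 5} {
		fmt.Printf("replicas=%d maxUnavailable=%d\n", n, defaultMaxUnavailable(n))
	}
	// replicas=1 maxUnavailable=0
	// replicas=3 maxUnavailable=1
	// replicas=5 maxUnavailable=2
}
```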
Also I want to have a good hierarchy:
---
-apiVersion: etcd.aenix.io/v1alpha1
+apiVersion: etcd.aenix.io/v1alpha2
kind: EtcdCluster
metadata:
name: test
namespace: default
spec:
image: "quay.io/coreos/etcd:v3.5.12"
replicas: 3
+ podSpec:
+ imagePullSecrets: # core.v1.LocalObjectReference Ready k8s type
- name: myregistrykey
+ serviceAccountName: default
+ podMetadata:
+ labels:
+ env: prod
+ annotations:
+ example.com/annotation: "true"
+ resources: # core.v1.ResourceRequirements Ready k8s type
+ requests:
+ cpu: 100m
+ memory: 100Mi
+ limits:
+ cpu: 200m
+ memory: 200Mi
+ affinity: {} # core.v1.Affinity Ready k8s type
+ nodeSelector: {} # map[string]string
+ tolerations: [] # core.v1.Toleration Ready k8s type
+ securityContext: {} # core.v1.PodSecurityContext Ready k8s type
+ priorityClassName: "low"
+ topologySpreadConstraints: [] # core.v1.TopologySpreadConstraint Ready k8s type
+ terminationGracePeriodSeconds: 30 # int64
+ schedulerName: "default-scheduler"
+ runtimeClassName: "legacy"
+ service-headless:
+ serviceSpec:
+ metadata:
+ labels:
+ env: prod
+ annotations:
+ example.com/annotation: "true"
+ spec: # core.v1.ServiceSpec Ready k8s type
+ service-main:
+ serviceSpec:
+ metadata:
+ labels:
+ env: prod
+ annotations:
+ example.com/annotation: "true"
+ spec: # core.v1.ServiceSpec Ready k8s type
+ storage:
+ volumeClaimTemplate:
+ metadata:
+ labels:
+ env: prod
+ annotations:
+ example.com/annotation: "true"
+ spec: # core.v1.PersistentVolumeClaimSpec Ready k8s type
+ storageClassName: gp3
+ accessModes: [ "ReadWriteOnce" ]
+ resources:
+ requests:
+ storage: 10Gi
+ emptyDir: {} # core.v1.EmptyDirVolumeSource Ready k8s type
Something like this. Then, in the editor, I will be able to easily collapse/expand particular blocks with the technical details. I want to emphasize that I think it is a really great idea to separate the main business-logic parameters (like the number of replicas) from the deep technical details like service labels.
Updated spec addressing all comments
---
apiVersion: etcd.aenix.io/v1alpha1
kind: EtcdCluster
metadata:
name: test
namespace: default
spec:
image: "quay.io/coreos/etcd:v3.5.12"
replicas: 3
+ podSpec:
+ imagePullPolicy: "Always" # core.v1.PullPolicy Ready k8s type
+ imagePullSecrets: # core.v1.LocalObjectReference Ready k8s type
+ - name: myregistrykey
+ podMetadata:
+ labels:
+ env: prod
+ annotations:
+ example.com/annotation: "true"
+ resources: # core.v1.ResourceRequirements Ready k8s type
+ requests:
+ cpu: 100m
+ memory: 100Mi
+ limits:
+ cpu: 200m
+ memory: 200Mi
+ affinity: {} # core.v1.Affinity Ready k8s type
+ nodeSelector: {} # map[string]string
+ tolerations: [] # core.v1.Toleration Ready k8s type
+ securityContext: {} # core.v1.PodSecurityContext Ready k8s type
+ priorityClassName: "low"
+ topologySpreadConstraints: [] # core.v1.TopologySpreadConstraint Ready k8s type
+ terminationGracePeriodSeconds: 30 # int64
+ schedulerName: "default-scheduler"
+ runtimeClassName: "legacy"
+ extraArgs: # map[string]string
+ arg1: "value1"
+ arg2: "value2"
+ extraEnvs: # []core.v1.EnvVar Ready k8s type
+ - name: MY_ENV
+ value: "my-value"
+ serviceAccountSpec: # TBD. How to represent it? Do we need ability to specify existing service account?
+ create: true
+ metadata:
+ labels:
+ env: prod
+ annotations:
+ example.com/annotation: "true"
+ serviceSpec:
+ client:
+ metadata:
+ labels:
+ env: prod
+ annotations:
+ example.com/annotation: "true"
+ spec: # core.v1.ServiceSpec Ready k8s type
+ headless:
+ metadata:
+ labels:
+ env: prod
+ annotations:
+ example.com/annotation: "true"
+ spec: # core.v1.ServiceSpec Ready k8s type
+ podDisruptionBudget:
+ maxUnavailable: 1 # intstr.IntOrString
+ minAvailable: 2
+ selectorLabels: # If not set, the operator will use the labels from the EtcdCluster
+ env: prod
+ readinessGates: [] # core.v1.PodReadinessGate Ready k8s type
+ storage: # Discussed separately
+ volumeClaimTemplate:
+ metadata:
+ labels:
+ env: prod
+ annotations:
+ example.com/annotation: "true"
+ spec: # core.v1.PersistentVolumeClaimSpec Ready k8s type
+ storageClassName: gp3
+ accessModes: [ "ReadWriteOnce" ]
+ resources:
+ requests:
+ storage: 10Gi
+ emptyDir: {} # core.v1.EmptyDirVolumeSource Ready k8s type
- storage:
- persistence: true # default: true, immutable
- storageClass: local-path
- size: 10Gi
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2024-03-06T18:39:45Z"
status: "True"
type: Ready
Should we have some additional validation here? Say we pass --cluster-initial-state here, which is already appended to the spec by the operator itself. This is where the abstraction leaks.
I don't think it's necessary. I guess the user can do anything they want here.
As for the PDB spec, I think the user of an etcd cluster does not need such a complex spec. The operator can configure maxUnavailable according to the number of replicas in the cluster; only labels and annotations would be useful.
I guess it's better to have the ability to configure all fields in the PDB. For example, my cluster has a policy that checks PDBs and requires a percentage value in minAvailable.
#67 Implements storage spec from this proposal. All discussions about storage part can now be addressed here
I don't think it's necessary. I guess the user can do anything they want here.
No way. Only things that do not break the etcd cluster.
I guess it's better to have the ability to configure all fields in the PDB. For example, my cluster has a policy that checks PDBs and requires a percentage value in minAvailable.
Yes, I agree. I never argued against that. I am pointing at a different thing.
@gecube Ok, then we need to perform additional validation for the extraArgs field. Added it to #69.
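A hedged sketch of what such extraArgs validation could look like; the reserved flag list below is illustrative only, not the set actually enforced in #69:

```go
package main

import (
	"fmt"
	"strings"
)

// reservedArgs lists flags the operator manages itself. The exact set
// is an assumption for illustration (taken from the discussion above),
// not the real etcd-operator denylist.
var reservedArgs = map[string]bool{
	"name":                  true,
	"initial-cluster":       true,
	"cluster-initial-state": true,
	"data-dir":              true,
}

// validateExtraArgs rejects user-supplied args that would collide with
// flags the operator appends on its own.
func validateExtraArgs(extraArgs map[string]string) error {
	for k := range extraArgs {
		if reservedArgs[strings.TrimLeft(k, "-")] {
			return fmt.Errorf("argument %q is managed by the operator and cannot be overridden", k)
		}
	}
	return nil
}

func main() {
	fmt.Println(validateExtraArgs(map[string]string{"election-timeout": "2000"})) // <nil>
	fmt.Println(validateExtraArgs(map[string]string{"cluster-initial-state": "new"}))
}
```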
I think we need image placed under podSpec.
Updated spec after the merge of #69. The other spec fields are implemented and removed from this diff.
---
apiVersion: etcd.aenix.io/v1alpha1
kind: EtcdCluster
metadata:
name: test
namespace: default
spec:
image: "quay.io/coreos/etcd:v3.5.12"
replicas: 3
+ serviceAccountSpec: # TBD. How to represent it? Do we need ability to specify existing service account?
+ create: true
+ metadata:
+ labels:
+ env: prod
+ annotations:
+ example.com/annotation: "true"
+ serviceSpec:
+ client:
+ metadata:
+ labels:
+ env: prod
+ annotations:
+ example.com/annotation: "true"
+ spec: # core.v1.ServiceSpec Ready k8s type
+ headless:
+ metadata:
+ labels:
+ env: prod
+ annotations:
+ example.com/annotation: "true"
+ spec: # core.v1.ServiceSpec Ready k8s type
+ podDisruptionBudget:
+ maxUnavailable: 1 # intstr.IntOrString
+ minAvailable: 2
+ selectorLabels: # If not set, the operator will use the labels from the EtcdCluster
+ env: prod
+ readinessGates: [] # core.v1.PodReadinessGate Ready k8s type
I was taking a look at the specification and noticed this:
etcd-operator/api/v1alpha1/etcdcluster_types.go
Lines 28 to 32 in 755f630
In the context of etcd, an even number of instances is nonsense. I wonder if we could take advantage of the +kubebuilder:validation:MultipleOf kubebuilder validation marker: "specifies that this field must have a numeric value that's a multiple of this one."
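Note that MultipleOf can only express divisibility, so an odd-only constraint would need a validating webhook (or a CEL XValidation rule along the lines of `self % 2 == 1`) instead. A hypothetical webhook-side check:

```go
package main

import "fmt"

// validateReplicas enforces the odd-member rule for etcd clusters.
// MultipleOf cannot express "odd", so a check like this would live in a
// validating webhook or a CEL rule. Sketch only, not operator code.
func validateReplicas(replicas int32) error {
	if replicas < 1 || replicas%2 == 0 {
		return fmt.Errorf("replicas must be a positive odd number, got %d", replicas)
	}
	return nil
}

func main() {
	fmt.Println(validateReplicas(3)) // <nil>
	fmt.Println(validateReplicas(4)) // error: even member count
}
```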
Damn, you're right. I thought we could use that for this kind of even/odd check.
Okay, I'm going to close this proposal in favor of #109.
Thank you for your design!