This chart deploys an nginx application to a Kubernetes cluster.

This file holds the nginx configuration. The deployment has an autoscaling policy based on CPU usage: the nginx cluster starts at a minimum of 3 replicas and scales up to 10 when CPU usage reaches 80% or above.
```yaml
autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
```
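With these values, the chart's HPA template would render roughly the following HorizontalPodAutoscaler. This is a sketch: the resource name and the Deployment it targets are assumptions based on the chart name, since the actual names come from the chart's template helpers.

```yaml
# Sketch of the HorizontalPodAutoscaler rendered from the values above.
# The metadata name and scaleTargetRef name are assumptions (lab-nginx);
# the real names are produced by the chart's helper templates.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: lab-nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: lab-nginx
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
```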
The application is exposed through a load balancer that distributes the load across the nginx replicas.
```yaml
service:
  type: LoadBalancer
  port: 8080
```
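For reference, those values would render a Service along these lines. This is a sketch under assumptions: the metadata name and selector labels depend on the chart's helpers, and the target port name is assumed to be `http`.

```yaml
# Sketch of the Service rendered from the values above; the name,
# selector labels, and targetPort are assumptions based on the chart name.
apiVersion: v1
kind: Service
metadata:
  name: lab-nginx
spec:
  type: LoadBalancer
  ports:
    - port: 8080
      targetPort: http
      protocol: TCP
  selector:
    app.kubernetes.io/name: lab-nginx
```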
The chart also carries some deployment metadata, such as the chart version, keywords, and contact information.
The following table lists the configurable parameters of the Lab-nginx chart and their default values.
| Parameter | Description | Default |
|-----------|-------------|---------|
| `replicaCount` | Number of nginx replicas (ignored when autoscaling is enabled) | `1` |
| `image.repository` | Container image repository | `"nginx"` |
| `image.pullPolicy` | Image pull policy | `"Always"` |
| `image.tag` | Container image tag | `"1.19.8"` |
| `imagePullSecrets` | Secrets for pulling images from private registries | `[]` |
| `nameOverride` | Override for the chart name | `""` |
| `fullnameOverride` | Override for the full resource name | `""` |
| `serviceAccount.create` | Whether to create a service account | `true` |
| `serviceAccount.annotations` | Annotations for the service account | `{}` |
| `serviceAccount.name` | Name of the service account | `"lab-nginx"` |
| `podAnnotations` | Annotations added to the nginx pods | `{}` |
| `podSecurityContext` | Pod-level security context | `{}` |
| `securityContext` | Container-level security context | `{}` |
| `service.type` | Kubernetes service type | `"NodePort"` |
| `service.port` | Service port | `8080` |
| `ingress.enabled` | Whether to create an Ingress resource | `false` |
| `ingress.annotations` | Annotations for the Ingress | `{}` |
| `ingress.hosts` | Ingress host and path configuration | `[{"host": "chart-lab.local", "paths": [{"path": "/", "backend": {"serviceName": "chart-lab.local", "servicePort": 80}}]}]` |
| `ingress.tls` | TLS configuration for the Ingress | `[]` |
| `resources.limits.cpu` | CPU limit per pod | `"100m"` |
| `resources.limits.memory` | Memory limit per pod | `"128Mi"` |
| `resources.requests.cpu` | CPU request per pod | `"100m"` |
| `resources.requests.memory` | Memory request per pod | `"128Mi"` |
| `autoscaling.enabled` | Whether to enable the HorizontalPodAutoscaler | `true` |
| `autoscaling.minReplicas` | Minimum number of replicas | `3` |
| `autoscaling.maxReplicas` | Maximum number of replicas | `10` |
| `autoscaling.targetCPUUtilizationPercentage` | CPU utilization that triggers scaling | `80` |
| `nodeSelector` | Node labels for pod scheduling | `{}` |
| `tolerations` | Tolerations for pod scheduling | `[]` |
| `affinity` | Affinity rules for pod scheduling | `{}` |
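Any of these parameters can be overridden at install time without editing the chart. A minimal sketch of an override file (the file name `my-values.yaml` is hypothetical):

```yaml
# my-values.yaml -- example overrides; only the keys you set here
# replace the chart defaults, everything else keeps its default value.
service:
  type: LoadBalancer
  port: 80
autoscaling:
  maxReplicas: 5
```

This would be applied with `helm install my-nginx ./lab-nginx -f my-values.yaml`, where the release name and chart path are assumptions; individual keys can also be set with `--set`, e.g. `--set service.type=LoadBalancer`.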
Documentation generated by Frigate.