aqua-helm's Issues

web-secret.yaml in aqua-helm/server has duplicate keys

The following code is listed twice as part of web-secret.yaml in aqua-helm/server

metadata:
  name: {{ .Release.Name }}-console-secrets

It's preventing me from writing kustomize scripts due to it being invalid YAML:
$> kustomize build
2021/02/20 10:11:17 failed to decode ynode: yaml: unmarshal errors:
line 11: mapping key "metadata" already defined at line 4
$> kustomize version
{Version:kustomize/v4.0.1 GitCommit:516ff1fa56040adc0173ff6ece66350eb4ed78a9 BuildDate:2021-02-13T21:21:14Z GoOs:linux GoArch:amd64}

We're still using the 5.0.0 chart; would it be possible to update that chart as well?

Thanks

Server chart for gateway doesn't support nodePort

What exists in the helm chart:
spec:
  type: {{ .Values.gate.service.type }}
  selector:
    app: {{ .Release.Name }}-gateway
  ports:
    - port: {{ .Values.gate.service.externalPort }}
      targetPort: 3622

What the configuration needs to look like to support NodePort:
spec:
  type: NodePort
  ports:
    - port: 3622
      nodePort: {{ .Values.gate.service.externalPort }}
  selector:
    app: {{ .Release.Name }}-gateway
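A sketch of a template that could support both cases: keep the configurable service type and emit nodePort only when set (this assumes a gate.service.nodePort value, which would need to exist in values.yaml):

spec:
  type: {{ .Values.gate.service.type }}
  selector:
    app: {{ .Release.Name }}-gateway
  ports:
    - port: {{ .Values.gate.service.externalPort }}
      targetPort: 3622
      {{- if .Values.gate.service.nodePort }}
      nodePort: {{ .Values.gate.service.nodePort }}
      {{- end }}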

Request Entity Too Large

We've currently got the Aqua Helm chart running the console and a scanner, but when scanning a particular image locally from my Mac command line, we're getting the error

"twirp error unknown: Error from intermediary with HTTP status code 413 "Request Entity Too Large""
The image is 1.4 GB.
The confusing factor is that scanning the same image from the console works fine.
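HTTP 413 usually comes from a reverse proxy in front of the console rather than from Aqua itself. If an nginx ingress controller sits in the path (an assumption about this setup), raising its request-body limit is a likely fix:

metadata:
  annotations:
    # "0" disables the limit; a bounded value such as "2g" would also cover a 1.4 GB image
    nginx.ingress.kubernetes.io/proxy-body-size: "0"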

Ability to seed Scanner user/pass and Enforcer enforcerToken via Secrets / Helm values

Currently there are at least two manual steps required in order to set up all the Aqua components, meaning installation is not fully automated and requires human intervention to complete.

Once the Server is installed, one must add the license key and admin user/pass. This actually can be seeded with Helm or via secrets, it's just not documented (which is a separate issue).

In order to fully automate installation, one must also be able to seed a Scanner user/pass and an Enforcer group/token in the same way.
Currently one can only create a Scanner user/pass by manually going into the UI and creating one.
The same is true for an Enforcer group/token: you must do so in the UI.

Instead, or alternatively, one should be able to specify those via k8s Secrets or Helm values, like one already can for the admin user/pass and license key. If that were possible, then one would be able to install the Scanner and the Enforcer without human intervention, making for a very seamless automated install.
I thought that must be possible when I first went about installing Aqua, since automation is key and this whole repo exists to help with automation, but I was proven quite wrong unfortunately 😕

I'm not sure if this would require upstream changes in the Server, or if it is already possible to do so and the capabilities just need to be added to the Charts.
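For illustration, a hypothetical values layout for such seeding (none of these keys exist in the charts today; the names are invented here to show the shape of the request):

scanner:
  user: scanner                   # hypothetical: seeded Scanner username
  passwordSecret: scanner-creds   # hypothetical: existing k8s Secret to read the password from
  passwordKey: password
enforcer:
  tokenSecret: enforcer-creds     # hypothetical: Secret holding the Enforcer group token
  tokenKey: token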

Tags

Related to #119 , which points out the lack of docs for these manual steps.

livenessProbe and readinessProbe for Console

Hi, I see this code block where I can input probes for liveness and readiness, but I can't find any default values I can use to determine whether the console is healthy or ready.

Is it possible to know if the console implements any of those? I tried the knowledge base but couldn't find anything useful.

{{- with .Values.web.livenessProbe }}
livenessProbe:
{{ toYaml . | indent 10 }}
{{- end }}
{{- with .Values.web.readinessProbe }}
readinessProbe:
{{ toYaml . | indent 10 }}
{{- end }}
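In case it helps, a sketch of values that would render through the block above, probing the console's web port (the assumption is that the console answers plain HTTP on 8080, as the chart's service suggests; the path and timings are guesses to validate, not documented defaults):

web:
  livenessProbe:
    httpGet:
      path: /
      port: 8080
    initialDelaySeconds: 60
    periodSeconds: 30
  readinessProbe:
    httpGet:
      path: /
      port: 8080
    initialDelaySeconds: 10
    periodSeconds: 10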

Security context for non-privileged deployments

Documentation says to use certain user, group, and fsGroup values when running the server and gateway deployments, but the Helm charts hardcode RunAsAny. What's the best way to handle this?

{{- if .Values.rbac.privileged }}
fsGroup:
  rule: RunAsAny
runAsUser:
  rule: RunAsAny
{{- else }}
runAsUser:
- {{ .Values.rbac.securityContext.runAsUser }}
runAsGroup:
- {{ .Values.rbac.securityContext.runAsGroup }}
fsGroup:
- {{ .Values.rbac.securityContext.fsGroup }}
{{- end }}

values.yaml
securityContext:
  runAsUser: 11431
  runAsGroup: 11433
  fsGroup: 11433
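For comparison, a sketch of applying those values at the pod level in the server deployment (this assumes the values live under .Values.rbac.securityContext, matching the template above):

spec:
  template:
    spec:
      {{- if not .Values.rbac.privileged }}
      securityContext:
        runAsUser: {{ .Values.rbac.securityContext.runAsUser }}
        runAsGroup: {{ .Values.rbac.securityContext.runAsGroup }}
        fsGroup: {{ .Values.rbac.securityContext.fsGroup }}
      {{- end }}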

server ingress fails to install/upgrade

Latest chart 4.5 fails to create ingress

helm upgrade --install --namespace aqua csp aqua-helm/server/ -f aqua-helm/values.yaml
UPGRADE FAILED
Error: unable to recognize "": no matches for kind "Ingress" in version "apps/v1"
Error: UPGRADE FAILED: unable to recognize "": no matches for kind "Ingress" in version "apps/v1"

Values.yaml (ingress section)

web:
  ingress:
    annotations:
      kubernetes.io/ingress.class: nginx
    enabled: true
    hosts:
    - aquasec.domain.com
    tls:
    - secretName: aquasec-tls
    - hosts:
      - aquasec.domain.com
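The error suggests the Ingress is being rendered with apiVersion: apps/v1, which has never served Ingress. A common fix (a sketch, not the chart's actual code) is to pick the apiVersion from the cluster's capabilities:

{{- if .Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" }}
apiVersion: networking.k8s.io/v1
{{- else if .Capabilities.APIVersions.Has "networking.k8s.io/v1beta1/Ingress" }}
apiVersion: networking.k8s.io/v1beta1
{{- else }}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress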

Client request: make enforcer name configurable

An Aqua client has requested that an update be made to the following line:

https://github.com/aquasecurity/aqua-helm/blob/master/enforcer/templates/enforcer-daemonset.yaml#L27-28

They wish to have the name "enforcer" replaced with a Helm variable that allows for a custom name. I have informed them of the ease with which this can be done in a local file and provided instructions for doing so.

This is not a request for support, but simply to update our file documentation. Thank you!
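For completeness, the kind of change requested might look like this (a sketch only; .Values.enforcerName is a hypothetical value that would need adding to values.yaml):

containers:
  - name: {{ .Values.enforcerName | default "enforcer" }}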

Enforcer copies etc/passwd using generic runc process

Hi,

We've noticed that the enforcer install copies the /etc/passwd file using the /usr/sbin/runc process. Modification of the password file is blocked by many "traditional" security vendors on Linux. While we can whitelist processes or set them to report instead of block, we'd prefer not to whitelist the runc process as a whole, as many other areas use it, potentially maliciously.

Is there a way Aqua can change the process that copies the passwd file to something more specific to Aqua?

chart not synced

Hi,

charts in http://helm.aquasec.com (it should be mentioned as https in the docs) aren't synced with the ones in this repository. Check out web-ingress.yml, for instance: the version fetched from helm.aquasec.com doesn't contain the handling for Kubernetes versions prior to v1.14.x.

Thanks!

Cannot install server chart with Helm 3.2.1

The server chart fails to install with helm 3.2.1:

Command:

helm install aqua-server aqua-helm/server -n aqua --create-namespace --set imageCredentials.username=test,imageCredentials.password=test

Result:

install.go:159: [debug] Original chart version: ""
install.go:176: [debug] CHART PATH: /home/cdunford/.cache/helm/repository/server-4.6.0.tgz

Error: template: server/templates/web-secrets.yaml:2:11: executing "server/templates/web-secrets.yaml" at <(.Values.admin.password) .Values.admin.token>: can't give argument to non-function .Values.admin.password
helm.go:84: [debug] template: server/templates/web-secrets.yaml:2:11: executing "server/templates/web-secrets.yaml" at <(.Values.admin.password) .Values.admin.token>: can't give argument to non-function .Values.admin.password
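For what it's worth, the expression quoted in the error, (.Values.admin.password) .Values.admin.token, is exactly the pattern that Go 1.14's text/template (bundled with Helm 3.2.1) began diagnosing. A sketch of the shape of the problem and a fix (the chart's actual line may differ):

# Rejected by Helm 3.2.1 / Go 1.14 text/template:
#   {{ if (.Values.admin.password) .Values.admin.token }}
# A parenthesized pipeline followed by an argument is parsed as a function call.
# Making the intended boolean explicit (or/and as appropriate) parses on both versions:
#   {{ if or .Values.admin.password .Values.admin.token }}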

The same command is successful with helm 3.2.0:

Result:

install.go:159: [debug] Original chart version: ""
install.go:176: [debug] CHART PATH: /home/cdunford/.cache/helm/repository/server-4.6.0.tgz

client.go:108: [debug] creating 1 resource(s)
client.go:258: [debug] Starting delete for "aqua-server-database-password" Secret
client.go:108: [debug] creating 1 resource(s)
client.go:108: [debug] creating 13 resource(s)
NAME: aqua-server
LAST DEPLOYED: Fri May 15 12:30:56 2020
NAMESPACE: aqua
STATUS: deployed
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
imageCredentials:
  password: test
  username: test

COMPUTED VALUES:
activeactive: true
admin:
  password: null
  passwordkey: null
  secretname: null
  token: null
  tokenkey: null
clustermode: false
db:
  affinity: {}
  dbPasswordKey: null
  dbPasswordName: null
  external:
    auditName: null
    enabled: false
    host: null
    name: null
    password: null
    port: null
    pubsubName: null
    user: null
  image:
    pullPolicy: IfNotPresent
    repository: database
    tag: "4.6"
  livenessProbe:
    exec:
      command:
      - sh
      - -c
      - exec pg_isready --host $POD_IP
    failureThreshold: 6
    initialDelaySeconds: 60
    timeoutSeconds: 5
  nodeSelector: {}
  passwordSecret: null
  persistence:
    accessMode: ReadWriteOnce
    enabled: true
    size: 30Gi
    storageClass: null
  readinessProbe:
    exec:
      command:
      - sh
      - -c
      - exec pg_isready --host $POD_IP
    initialDelaySeconds: 5
    periodSeconds: 5
    timeoutSeconds: 3
  resources:
    limits:
      cpu: 1
      memory: 1Gi
    requests:
      cpu: 0.1
      memory: 0.2Gi
  service:
    type: ClusterIP
  ssl: false
  tolerations: []
docker:
  socket:
    path: /var/run/docker.sock
dockerless: false
gate:
  affinity: {}
  grpcservice:
    externalPort: 8443
    nodePort: null
    type: ClusterIP
  image:
    pullPolicy: IfNotPresent
    repository: gateway
    tag: "4.6"
  livenessProbe: {}
  nodeSelector: {}
  publicIP: aqua-gateway
  readinessProbe: {}
  replicaCount: 1
  resources:
    limits:
      cpu: 1
      memory: 2Gi
    requests:
      cpu: 0.1
      memory: 0.2Gi
  service:
    externalPort: 3622
    nodePort: null
    type: ClusterIP
  tolerations: []
imageCredentials:
  create: true
  name: csp-registry-secret
  password: test
  registry: registry.aquasec.com
  repositoryUriPrefix: registry.aquasec.com
  username: test
rbac:
  enabled: true
  privileged: true
  roleRef: null
scanner:
  affinity: {}
  enabled: false
  image:
    pullPolicy: IfNotPresent
    repository: scanner
    tag: "4.6"
  livenessProbe: {}
  nodeSelector: {}
  password: null
  readinessProbe: {}
  replicaCount: 1
  resources: {}
  tolerations: []
  user: null
web:
  affinity: {}
  encryptionKey: null
  image:
    pullPolicy: IfNotPresent
    repository: console
    tag: "4.6"
  ingress:
    annotations: {}
    enabled: false
    hosts: null
    tls: []
  livenessProbe: {}
  nodeSelector: {}
  persistence:
    accessMode: ReadWriteOnce
    enabled: true
    size: 4
    storageClass: null
  proxy:
    httpProxy: null
    httpsProxy: null
    noProxy: null
  readinessProbe: {}
  replicaCount: 1
  resources:
    limits:
      cpu: 1
      memory: 2Gi
    requests:
      cpu: 0.1
      memory: 0.2Gi
  service:
    externalPort: 8080
    nodePort: null
    type: ClusterIP
  tolerations: []

HOOKS:
---
# Source: server/templates/db-password-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: aqua-server-database-password
  labels:
    app: aqua-server-database
    chart: "server-4.6.0"
    release: "aqua-server"
    heritage: "Helm"
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-delete-policy": before-hook-creation
type: Opaque
data:
  db-password: "T1ByV21CTHhlWm5RMDhEVkdCcUQ="
MANIFEST:
---
# Source: server/templates/rbac.yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
    name: aqua-server-psp
    labels:
      app: aqua-server
      chart: "server-4.6.0"
      release: "aqua-server"
      heritage: "Helm"
spec:
  privileged: true
  allowedCapabilities:
  - '*'
  fsGroup:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - '*'
---
# Source: server/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: aqua-server-sa
  labels:
    app: aqua-server
    chart: "server-4.6.0"
    release: "aqua-server"
    heritage: "Helm"
imagePullSecrets:
- name: aqua-server-registry-secret
---
# Source: server/templates/image-pull-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: aqua-server-registry-secret
  labels:
    app: aqua-server
    chart: "server-4.6.0"
    release: "aqua-server"
    heritage: "Helm"
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: eyJhdXRocyI6IHsicmVnaXN0cnkuYXF1YXNlYy5jb20iOiB7ImF1dGgiOiAiZEdWemREcDBaWE4wIn19fQ==
---
# Source: server/templates/db-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: aqua-server-database-pvc
  labels:
    app: aqua-server
    chart: "server-4.6.0"
    release: "aqua-server"
    heritage: "Helm"
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "30Gi"
---
# Source: server/templates/web-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: aqua-server-web-pvc
  labels:
    app: aqua-server
    chart: "server-4.6.0"
    release: "aqua-server"
    heritage: "Helm"
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "4Gi"
---
# Source: server/templates/rbac.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: aqua-server-cluster-role
  labels:
    app: aqua-server
    chart: "server-4.6.0"
    release: "aqua-server"
    heritage: "Helm"
    rbac.example.com/aggregate-to-monitoring: "true"
rules:
- apiGroups: ["extensions"]
  resourceNames: [aqua-server-psp]
  resources: ["podsecuritypolicies"]
  verbs: ["use"]
- apiGroups: [""]
  resources: ["nodes", "services", "endpoints", "pods", "deployments", "namespaces","componentstatuses"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]
---
# Source: server/templates/rbac.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: aqua-server-role-binding
  namespace: aqua
  labels:
    app: aqua-server
    chart: "server-4.6.0"
    release: "aqua-server"
    heritage: "Helm"
subjects:
  - kind: ServiceAccount
    name: aqua-server-sa
    namespace: aqua
roleRef:
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
  name: aqua-server-cluster-role
---
# Source: server/templates/db-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: aqua-server-database-svc
  labels:
    app: aqua-server-database
    chart: "server-4.6.0"
    release: "aqua-server"
    heritage: "Helm"
spec:
  type: ClusterIP
  selector:
    app: aqua-server-database
  ports:
    - port: 5432
---
# Source: server/templates/gate-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: aqua-server-gateway-svc
  labels:
    app: aqua-server-gateway
    chart: "server-4.6.0"
    release: "aqua-server"
    heritage: "Helm"
spec:
  type: ClusterIP
  selector:
    app: aqua-server-gateway
  ports:
  - port: 3622
    targetPort: 3622
    protocol: TCP
    name: aqua-gate
  - port: 8443
    targetPort: 8443
    protocol: TCP
    name: aqua-gate-ssl
---
# Source: server/templates/web-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: aqua-server-console-svc
  labels:
    app: aqua-server-console
    chart: "server-4.6.0"
    release: "aqua-server"
    heritage: "Helm"
spec:
  type: ClusterIP
  selector:
    app: aqua-server-console
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: aqua-server-console
  - port: 443
    protocol: TCP
    targetPort: 8443
    name: aqua-web-ssl
---
# Source: server/templates/db-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aqua-server-database
  labels:
    app: aqua-server-database
    chart: "server-4.6.0"
    release: "aqua-server"
    heritage: "Helm"
spec:
  selector:
    matchLabels:
      app: aqua-server-database
  template:
    metadata:
      annotations:
      labels:
        app: aqua-server-database
      name: aqua-server-database
    spec:
      serviceAccount: aqua-server-sa
      containers:
      - name: db
        image: "registry.aquasec.com/database:4.6"
        imagePullPolicy: "IfNotPresent"
        env:
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: aqua-server-database-password
              key: db-password
        - name: PGDATA
          value: "/var/lib/postgresql/data/db-files"
        - name: POD_IP
          valueFrom: { fieldRef: { fieldPath: status.podIP } }
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgres-database
        ports:
        - containerPort: 5432
          protocol: TCP
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - exec pg_isready --host $POD_IP
          failureThreshold: 6
          initialDelaySeconds: 60
          timeoutSeconds: 5
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - exec pg_isready --host $POD_IP
          initialDelaySeconds: 5
          periodSeconds: 5
          timeoutSeconds: 3
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 0.1
            memory: 0.2Gi
      volumes:
      - name: postgres-database
        persistentVolumeClaim:
          claimName: aqua-server-database-pvc
---
# Source: server/templates/gate-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aqua-server-gateway
  labels:
    app: aqua-server-gateway
    chart: "server-4.6.0"
    release: "aqua-server"
    heritage: "Helm"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aqua-server-gateway
  template:
    metadata:
      annotations:
      labels:
        app: aqua-server-gateway
      name: aqua-server-gateway
    spec:
      serviceAccount: aqua-server-sa
      containers:
      - name: gate
        image: "registry.aquasec.com/gateway:4.6"
        imagePullPolicy: "IfNotPresent"
        env:
        - name: AQUA_CONSOLE_SECURE_ADDRESS
          value: "aqua-server-console-svc:443"
        - name: SCALOCK_GATEWAY_PUBLIC_IP
          value: aqua-gateway
        - name: HEALTH_MONITOR
          value: "0.0.0.0:8082"
        - name: SCALOCK_DBUSER
          value: postgres
        - name: SCALOCK_DBPASSWORD
          valueFrom:
            secretKeyRef:
              name: aqua-server-database-password
              key: db-password
        - name: SCALOCK_DBNAME
          value: scalock
        - name: SCALOCK_DBHOST
          value: aqua-server-database-svc
        - name: SCALOCK_DBPORT
          value: "5432"
        - name: SCALOCK_AUDIT_DBUSER
          value: postgres
        - name: SCALOCK_AUDIT_DBPASSWORD
          valueFrom:
            secretKeyRef:
              name: aqua-server-database-password
              key: db-password
        - name: SCALOCK_AUDIT_DBNAME
          value: slk_audit
        - name: SCALOCK_AUDIT_DBHOST
          value: aqua-server-database-svc
        - name: SCALOCK_AUDIT_DBPORT
          value: "5432"
        - name: AQUA_PUBSUB_DBUSER
          value: postgres
        - name: AQUA_PUBSUB_DBPASSWORD
          valueFrom:
            secretKeyRef:
              name: aqua-server-database-password
              key: db-password
        - name: AQUA_PUBSUB_DBNAME
          value: aqua_pubsub
        - name: AQUA_PUBSUB_DBHOST
          value: aqua-server-database-svc
        - name: AQUA_PUBSUB_DBPORT
          value: "5432"
        ports:
        - containerPort: 3622
          protocol: TCP
        - containerPort: 8443
          protocol: TCP
        resources:
            limits:
              cpu: 1
              memory: 2Gi
            requests:
              cpu: 0.1
              memory: 0.2Gi
---
# Source: server/templates/web-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aqua-server-console
  labels:
    app: aqua-server-console
    chart: "server-4.6.0"
    release: "aqua-server"
    heritage: "Helm"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aqua-server-console
  template:
    metadata:
      annotations:
      labels:
        app: aqua-server-console
      name: aqua-server-console
    spec:
      serviceAccount: aqua-server-sa
      containers:
      - name: web
        image: "registry.aquasec.com/console:4.6"
        imagePullPolicy: "IfNotPresent"
        env:
        - name: SCALOCK_DBUSER
          value: postgres
        - name: SCALOCK_DBPASSWORD
          valueFrom:
            secretKeyRef:
              name: aqua-server-database-password
              key: db-password
        - name: SCALOCK_DBNAME
          value: scalock
        - name: SCALOCK_DBHOST
          value: aqua-server-database-svc
        - name: SCALOCK_DBPORT
          value: "5432"
        - name: SCALOCK_AUDIT_DBUSER
          value: postgres
        - name: SCALOCK_AUDIT_DBPASSWORD
          valueFrom:
            secretKeyRef:
              name: aqua-server-database-password
              key: db-password
        - name: SCALOCK_AUDIT_DBNAME
          value: slk_audit
        - name: SCALOCK_AUDIT_DBHOST
          value: aqua-server-database-svc
        - name: SCALOCK_AUDIT_DBPORT
          value: "5432"
        - name: AQUA_PUBSUB_DBUSER
          value: postgres
        - name: AQUA_PUBSUB_DBPASSWORD
          valueFrom:
            secretKeyRef:
              name: aqua-server-database-password
              key: db-password
        - name: AQUA_PUBSUB_DBNAME
          value: aqua_pubsub
        - name: AQUA_PUBSUB_DBHOST
          value: aqua-server-database-svc
        - name: AQUA_PUBSUB_DBPORT
          value: "5432"
        - name: AQUA_DOCKERLESS_SCANNING
          value: "0"
        - name: AQUA_PPROF_ENABLED
          value: "0"
        - name: DISABLE_IP_BAN
          value: "0"
        - name: AQUA_CLUSTER_MODE
          value: "active-active"
        - name: AQUA_CONSOLE_RAW_SCAN_RESULTS_STORAGE_SIZE
          value: "4"
        ports:
        - containerPort: 8080
          protocol: TCP
        - containerPort: 8443
          protocol: TCP
        volumeMounts:
        - mountPath: /var/run/docker.sock
          name: docker-socket-mount
        - mountPath: /opt/aquasec/raw-scan-results
          name: aqua-web-pvc
        resources:
            limits:
              cpu: 1
              memory: 2Gi
            requests:
              cpu: 0.1
              memory: 0.2Gi
      volumes:
      - name: docker-socket-mount
        hostPath:
          path: /var/run/docker.sock
      - name: aqua-web-pvc
        persistentVolumeClaim:
          claimName: aqua-server-web-pvc

NOTES:
Thank you for installing Aqua Security Server.

Now that you have deployed Aqua Server, you should look over the docs on using: 

https://read.aquasec.com/docs


Your release is named aqua-server. To learn more about the release, try:

  $ helm status aqua-server
  $ helm get aqua-server

Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Ingress.spec.rules[0].http.paths[0].backend): missing required field "servicePort" in io.k8s.api.extensions.v1beta1.IngressBackend

In the Helm template for web_ingress.yaml, a variable is defined for externalPort, but there is no such value to define in values.yaml, and no definition for it in the values documentation in README.md. As a result, the error in the subject header is generated.

README update required for enforcer chart

The gateway server and host parameters set during helm install/upgrade are documented incorrectly and need to align with the entries in values.yaml:

gate:
  host: csp-gateway-svc # example
  port: 3622

Error: execution error at (aqua-enforcer/charts/enforcer/templates/enforcer-token-secret.yaml:14:13): A valid .Values.enforcerToken entry required!

I am getting an error when trying to use Helm with ArgoCD. My values.yaml file does include the enforcerToken value:

enforcerToken: "xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx"

I scrubbed the value, but that is what is listed in my values.yaml file. When I look at the parameters section in Argo, it shows the proper parameters being populated when I try to patch the file to swap the token. I am not sure if there is a bug in the yaml file listed in the subject or if I am using it incorrectly.

Thanks.

No changelogs??

None of the Charts seem to have changelogs, making it very difficult to upgrade or rely on this repo, as one has to read through each commit to understand the changes made. To make matters worse, most of the commits and PRs in this repo have little to no detail in their titles and messages, so the commit log is not really useful as a changelog either.

Per #82 , this repository and the corresponding Helm registry also overwrite existing versions and make breaking changes without bumping the version at all, making even pinning to an exact version unreliable. Overwriting existing versions is generally poor practice outside of security incidents, where the better response is usually to unpublish. Breaking changes like those mentioned in #82 or in 70ae917 seem to be made with some frequency too. That would make a changelog, if not full upgrade docs, even more important.

If SemVer is not being followed, this should be stated very explicitly in multiple places in the docs, along with the alternate versioning strategy. Right now the versioning misleadingly looks like SemVer, whereas CalVer may be more fitting given current usage.

Having a changelog will also make it easier to follow SemVer as one has to delineate breaking changes, new features, and patches from each other.

Braces at end of line caused issue for client

resources: {}

We had a client getting errors while attempting to implement Helm. They were seeing some variation of:

error converting YAML to JSON): error converting YAML to JSON

They found that removing the {} at the end of the "resources" line allowed them to successfully move forward.

I'm not sure whether there are situations in which that is needed, so I'm not asking for them to be removed so much as just reporting that a client needed them removed.

This ticket is more for awareness and can be closed, I understand we are revamping helm.

Scanner replicas variable mismatch

As per the README, scanner.replicas should set the number of scanner replicas, but the deployment template refers to replicas: {{ .Values.scanner.replicaCount }}.

Can you please either update the documentation or the template to fix this?

Enforcer 5.3: DaemonSet fails to create Pods with multi_cluster set to false

If multi_cluster is set to false (the default), the ServiceAccount is not created, but the DaemonSet doesn't check the multi_cluster value and always sets spec.template.spec.serviceAccount to {{ .Release.Namespace }}-sa. When the ServiceAccount doesn't exist because multi_cluster is false, the Pods can't be created and you see the following event:

Events:
  Type     Reason        Age                  From                  Message
  ----     ------        ----                 ----                  -------
  Warning  FailedCreate  47s (x15 over 2m9s)  daemonset-controller  Error creating: pods "enforcer-ds-" is forbidden: error looking up service account aqua/aqua-sa: serviceaccount "aqua-sa" not found

My current workaround is to set multi_cluster: true and let the ServiceAccount get created.
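A sketch of a fix: guard the serviceAccount field in the DaemonSet with the same condition that guards the ServiceAccount template (the multi_cluster key name is taken from this issue; the surrounding template context is reconstructed):

spec:
  template:
    spec:
      {{- if .Values.multi_cluster }}
      serviceAccount: {{ .Release.Namespace }}-sa
      {{- end }}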

Unable to find command to install aquasec for the first time

Hello,
Greetings. I am trying to install the Helm chart in my k8s environment. I expected the document to have a 'helm install' command, but I didn't find one; the document only mentions 'helm upgrade --install'.
Can you please point me to the relevant guide?

Regards,
Deepak

Breaking changes with active:active commit and chart version unchanged

There was an update made to the server chart (specifically ce60ce8#diff-cf1e8c14e54505f60aa10ceb8d5d8ab3) that removed values that we were setting (specifically the audit DB parameters auditHost, auditUser and auditPassword). Now our console container will no longer start as it cannot connect to the audit DB and one of our environments is unusable.

The bigger problem here is that the chart version did not change whatsoever with this change, and was published to helm.aquasec.com replacing the previous revision of 4.6.0. We have no way to reference the previous, working version of the chart, as it appears it is completely replaced by this new version.

Whenever changes are made to the chart, at the very least the chart patch version should be changed (and possibly major/minor version depending on the scope/nature of the change) to avoid impacting anyone who has pinned to a specific version of the chart. Once a specific version of the chart is published, it should not be modified.

Gateway is unable to get clusterId

Gateway is unable to get ClusterId according to the logfile while using the server chart:
2019-07-10 07:29:17.172 WARN Failed getting cluster id, failed getting nodes: nodes is forbidden: User "system:serviceaccount:aqua:aqua-sa" cannot list resource "nodes" in API group "" at the cluster scope

Here is the fix:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ .Release.Name }}-cluster-role
  labels:
    app: {{ .Release.Name }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
rules:
- apiGroups: ["extensions"]
  resourceNames: [{{ .Release.Name }}-psp]
  resources: ["podsecuritypolicies"]
  verbs: ["use"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list"]
{{- end }}

Server: Image pull secret name does not match service account

On branch 5.3, as of commit e1da7fc, the image pull secret name uses .Release.Name (https://github.com/aquasecurity/aqua-helm/blob/5.3/server/templates/image-pull-secret.yaml#L6), but the service account expects the secret name to be built from .Release.Namespace (https://github.com/aquasecurity/aqua-helm/blob/5.3/server/templates/serviceaccount.yaml#L13).

For reference, the enforcer chart is more consistent on this: https://github.com/aquasecurity/aqua-helm/blob/5.3/enforcer/templates/serviceaccount.yaml#L14 and https://github.com/aquasecurity/aqua-helm/blob/5.3/enforcer/templates/image-pull-secret.yaml#L6

I defer to you on which name you'd like to use, .Release.Namespace or .Release.Name, but I have a slight preference for .Release.Name (what if a user wants to deploy different charts to the same namespace?)

The impact of this is that imageCredentials.create must be set to false in order to work correctly, and the secret must be created by the user out-of-band.

Unable to specify Scanner password via Secret

There is no option available to specify a Secret for the Scanner password, meaning it has to be input in plaintext in a Helm values file or passed via the command line. But not all Helm deployment styles allow for command-line specification, requiring customization to do so. This is also a security issue, as in either case the plaintext password is introspectable.

It would be much easier if the Scanner password could be specified as a Secret, similar to the Server's admin password or the Enforcer's enforcerToken.

This seems like it may be difficult to provide here and may require upstream changes to the Scanner, because it seems to only accept the password as a command-line argument, both in the Chart's containers spec and per the Scanner executable docs. That may require it to accept an environment variable or volume mount as an alternative to the command-line argument.
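To illustrate the request only, here is a hypothetical sketch of sourcing the password from a Secret as an environment variable. Both the AQUA_SCANNER_PASSWORD variable name and the Secret are invented for illustration, and as noted above the scanner binary may not read any environment variable today:

containers:
  - name: scanner
    env:
      - name: AQUA_SCANNER_PASSWORD   # hypothetical variable name
        valueFrom:
          secretKeyRef:
            name: aqua-scanner-password   # hypothetical user-provided Secret
            key: password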

Need ability to annotate services

AWS requires annotations on Services for certain configurations, as one example requirement.

A PR that added this functionality was closed without comment:

#79

Example:

  grpcservice:
    annotations:
      external-dns.alpha.kubernetes.io/hostname: aqua-gateway.company.domain
      service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
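A sketch of threading those annotations through the gateway service template (it uses the grpcservice.annotations value proposed above; the surrounding template is reconstructed, not the chart's actual code):

metadata:
  name: {{ .Release.Name }}-gateway-svc
  {{- with .Values.gate.grpcservice.annotations }}
  annotations:
{{ toYaml . | indent 4 }}
  {{- end }}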

KubeEnforcer Chart doesn't exist on https://helm.aquasec.com

I was following the directions in the README and ran into some issues with the KubeEnforcer Chart not existing on the https://helm.aquasec.com server.

Reproduction:

$ helm repo add aqua-helm https://helm.aquasec.com
"aqua-helm" has been added to your repositories
$ helm search repo aqua-helm
NAME                            CHART VERSION   APP VERSION     DESCRIPTION                                       
aqua-helm/enforcer              5.0.0           5.0             A Helm chart for the Aqua Enforcer                
aqua-helm/harbor-scanner-aqua   0.2.0           0.2.0           Harbor scanner adapter for Aqua CSP scanner       
aqua-helm/harbor-scanner-trivy  0.1.3           0.2.1           Trivy as a plug-in vulnerability scanner in the...
aqua-helm/scanner               5.0.0           5.0             A Helm chart for the aqua scanner cli component   
aqua-helm/server                5.0.0           5.0             A Helm chart for the Aqua Console Componants

There is no aqua-helm/kube-enforcer as the README suggests. The kube-enforcer Chart in the repo also has version: 0.1.0, not 5.0.0 as the README shows and as the other Charts have.

I similarly had an error when adding it as a dependency in a Chart.yaml:

...
Saving 4 charts
Downloading server from repo https://helm.aquasec.com
Downloading scanner from repo https://helm.aquasec.com
Downloading enforcer from repo https://helm.aquasec.com
Save error occurred:  could not find : chart kube-enforcer not found in https://helm.aquasec.com
Deleting newly downloaded charts, restoring pre-update state
Error: could not find : chart kube-enforcer not found in https://helm.aquasec.com

enforcer helm chart doesn't support setting up multiple gateways

The current helm chart doesn't support passing values for multiple gateways/envoys. Raising this issue to resolve it.

https://github.com/aquasecurity/aqua-helm/blob/master/enforcer/templates/enforcer-daemonset.yaml#L53

Update the enforcer-daemonset.yaml:
value: {{ .Values.gate.host | default "aqua-gateway-svc:8443" }}

And then pass a comma separated list of host:port similar to the following in values.yaml:

gate:
  host: 20.73.195.115:8443,20.73.33.34:8443

4.2 tag is confusing

There is an issue with the 4.2 tag. It points to an older commit that does not include improvements for gRPC support.
As it is, gateways cannot reach the console.

 $ git log
commit bc95cc2f64fe859702970dfed5af97deb564fbb2 (HEAD -> master, origin/master, origin/HEAD, origin/4.5)
Author: niso120b <[email protected]>
Date:   Tue Aug 20 17:10:06 2019 +0300

    Add grpc ssl port for enforcer

commit 2bf9cbd3192acd2afe62b54fd7cb6cdef1546c25 (origin/4.2)
Author: Nissim Bitan <[email protected]>
Date:   Wed Jul 3 09:55:01 2019 +0300

    Update gRPC Support

commit 77a4160386be8e25b3d07992693ef226d95d9341
Author: Nissim Bitan <[email protected]>
Date:   Mon Jul 1 19:20:51 2019 +0300

    Update ingress example

commit 4b4b7425ea73ab6ebaa9910592897d18d5655ddc (tag: 4.2)
Author: Nissim Bitan <[email protected]>
Date:   Mon Jun 24 12:01:06 2019 +0300

    Update to 4.2 version

Can't get the info in Readme to work

Are there updated instructions?

Also, I noticed the latest pull request from a few days ago. I tried the chart repository, but it doesn't seem to work either. Can you give me a status on this? I'd like to give this a go running on top of openSUSE Kubic and SUSE CaaS Platform.
Thanks,
Cameron

Template Envoy Config gateway-headless-svc to include Release.Namespace

Currently the envoy config hardcodes the namespace that the gateway-headless-svc lives in, so if you deploy Aqua into a namespace other than "aqua", envoy will not work.

Need to update
address: {{ .Release.Name }}-gateway-headless-svc.aqua.svc.cluster.local
to
address: {{ .Release.Name }}-gateway-headless-svc.{{ .Release.Namespace }}.svc.cluster.local

Cluster Mode

Hi, I couldn't find any useful information about the CLUSTER_MODE environment variable. I even tried the Knowledge Base, but found nothing.

{{- if .Values.clustermode }}
- name: CLUSTER_MODE
value: "enable"
{{- end }}

Is it possible to provide more information about this?

loadBalancerIP for Internal LB in AKS

There's no variable in values.yaml or in the templates to handle loadBalancerIP. This is required to support internal load balancers in AKS - https://docs.microsoft.com/en-us/azure/aks/internal-lb

apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.240.0.25
  ports:
  - port: 80
  selector:
    app: internal-app

Please add support for it.
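A sketch of the requested change in the chart's service templates (.Values.web.service.loadBalancerIP is a proposed key, not an existing chart option):

spec:
  type: {{ .Values.web.service.type }}
  {{- if .Values.web.service.loadBalancerIP }}
  loadBalancerIP: {{ .Values.web.service.loadBalancerIP }}
  {{- end }}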

docs: missing prerequisites on Scanner user/pass and Enforcer enforcerToken

The docs for the Scanner and Enforcer don't mention that they have prerequisites other than an imagePullSecret, but both do.

The Scanner chart requires that a user and password are given, but the docs/its README don't mention this at all, neither that it is required nor how to create one. Aqua Engineers pointed out to me that this was necessary, but I didn't know despite having read through all the docs quite a lot.
I think directions on creating a user/password should be added to the Scanner README's prerequisites.

The Enforcer chart requires that an enforcerToken is given, but the docs/its README don't mention this at all, neither that it is required nor how to create one. I noticed this from the argument in the Helm install command, and knew from the docs site that one must create an Enforcer group, but if one only read here, one wouldn't realize this was necessary as it's not mentioned.
I think directions on creating an Enforcer group and getting an enforcerToken should be added to the Enforcer README's prerequisites.

Tags

Mentioned this in #116

docs: unclear what the difference between Active-Active and HA Cluster Mode are

The server docs have activeactive and clustermode options, but it's not clear what the difference between the two is.

clustermode says it is for "HA", but activeactive is a term for an HA deployment as well. A comment in the Chart values also says the two are mutually exclusive, which adds to the confusion. The website's docs for HA mention Active-Active but not "Cluster Mode".

It would be good to clarify the differences between the two, whether one is deprecated (since the website doesn't mention cluster mode, maybe it is?), which one is optimal (i.e. most resilient), and what the default is (both are false, so the default is not HA?).

#72 is related but was not really answered. The response doesn't point to a specific place in the docs, and I have read the docs; as far as I can tell they define these as the same thing: aqua_cluster_mode='active-active'

Tenant Manager Chart missing

I would like to use the Tenant Manager and I use Helm, but there is no Tenant Manager Chart.

I added it myself in my PR #99 instead of waiting for someone from Aqua to do so, but for some reason that was closed with a one-liner saying "we currently don't need the tenant manager to be in the helm charts", despite me needing it... I said as much in #99 (comment) as well.

It seems like this repository has a history of rejecting useful PRs with little to no comment, like all the ones mentioned in #92 and #102, among others... This does not give a good experience to customers 😕, especially ones that are doing work to improve Aqua's products.

Console with PVC

Hi, I see for the console component the environment variable AQUA_CONSOLE_RAW_SCAN_RESULTS_STORAGE_SIZE, and some code that creates a PVC for it as well.

It isn't clear to me what the reason for this PVC is. With the current code it is also not possible to add more replicas to the console, as the persistent volume will be attached to the first created replica.

persistence:
  enabled: true
  storageClass:
  size: 4
  accessMode: ReadWriteOnce

Is it possible to give more details on why this is required, and what the suggested solution is for handling the volume (if required) when more than one replica is used?

I tried the knowledge base and blamed a few commits, but there were no details.

Encryption key for gateway not working

Hi,

I don't seem to be able to get the encryption key working for the Gateway. I've managed to get the server to pick up the new key using the SCALOCK_ENCRYPTION_KEY env var, but it doesn't want to work for the Gateway.

Two problems:

  1. The documentation doesn't list it as a variable to be used by the gateway, but in the best practices section it says they both accept it.

https://docs.aquasec.com/docs/security-best-practices#section-use-a-non-default-encryption-key-for-at-rest-storage-in-the-database

https://docs.aquasec.com/docs/environment-variables#section-optional-variables-for-the-aqua-gateway

  2. This helm chart doesn't make things any clearer, as there isn't a value for it and nothing references it in the templates.

Happy to make the change if someone can tell me what the correct variable name is.

Thanks.

Error initializing configuration '/etc/envoy/envoy.yaml': Invalid path: /etc/envoy/envoy.yaml

When I enable envoy in the server chart, I get the error:

Server: Envoy: error initializing configuration '/etc/envoy/envoy.yaml': Invalid path: /etc/envoy/envoy.yaml.

It appears as if the data section of the aqua-envoy-conf configMap isn't getting loaded. If I edit the configMap (kubectl -n aqua edit cm aqua-envoy-conf) and paste the data section in, the pod starts no problem. Has anyone else run into this problem?

Render error installing server when ingress enabled

helm install --name myrelease --dry-run --debug -f values.yaml server/
[debug] Created tunnel using local port: '60142'

[debug] SERVER: "127.0.0.1:60142"

[debug] Original chart version: ""
[debug] CHART PATH: /home/kcorupe/aqua/aqua-helm/server

Error: render error in "server/templates/web-ingress.yaml": template: server/templates/web-ingress.yaml:26:46: executing "server/templates/web-ingress.yaml" at <.Release.Name>: can't evaluate field Release in type interface {}

values.yaml:

web:
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx
    hosts:
     - aquasec.test.com
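The error points at a template scoping problem: inside a {{ range }} over the ingress hosts, the dot is rebound to each list element, so .Release.Name no longer resolves. A sketch of the usual fix, reaching the root scope with $ (the surrounding structure is reconstructed from the error, not the chart's exact template):

{{- range .Values.web.ingress.hosts }}
  - host: {{ . }}
    http:
      paths:
        - backend:
            serviceName: {{ $.Release.Name }}-console-svc   # $ always refers to the root context
            servicePort: 8080
{{- end }}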

HTTP proxy env variables for air-gapped outbound connections

Need to add http_proxy, https_proxy, and no_proxy variables to the server to allow connectivity to CyberCentre from air-gapped environments.

    spec:
      containers:
      - env:
        - name: http_proxy
          value: "$proxy_url"
        - name: https_proxy
          value: "$proxy_url"
        - name: no_proxy
          value: "$noproxy"
