aquasecurity / aqua-helm
Helm Charts For Installing Aqua Security Components
Home Page: http://aquasec.com
License: Apache License 2.0
The following code is listed twice as part of web-secret.yaml in aqua-helm/server:
metadata:
  name: {{ .Release.Name }}-console-secrets
This is preventing me from writing kustomize scripts, because the duplicated key makes the output invalid YAML:
$> kustomize build
2021/02/20 10:11:17 failed to decode ynode: yaml: unmarshal errors:
line 11: mapping key "metadata" already defined at line 4
$> kustomize version
{Version:kustomize/v4.0.1 GitCommit:516ff1fa56040adc0173ff6ece66350eb4ed78a9 BuildDate:2021-02-13T21:21:14Z GoOs:linux GoArch:amd64}
We're still using the 5.0.0 chart; would it be possible to update that chart as well?
Thanks
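For what it's worth, the failure is reproducible outside the chart with any document that repeats a mapping key; a minimal sketch (the resource names here are made up for illustration):

```yaml
# minimal-repro.yaml - 'metadata' appears twice in one document;
# strict decoders such as the one kustomize uses reject the duplicate key
apiVersion: v1
kind: Secret
metadata:
  name: example-console-secrets
type: Opaque
metadata:
  name: example-console-secrets
```

Running kustomize build against a kustomization referencing this file produces the same "mapping key "metadata" already defined" error.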
What exists in the helm chart:
spec:
  type: {{ .Values.gate.service.type }}
  selector:
    app: {{ .Release.Name }}-gateway
  ports:
What the configuration needs to look like to support NodePort:
spec:
  type: NodePort
  ports:
    - port: 3622
      nodePort: {{ .Values.gate.service.externalPort }}
  selector:
    app: {{ .Release.Name }}-gateway
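A possible middle ground would keep the configurable type while only emitting nodePort when one is supplied. This is a sketch against assumed values keys (gate.service.nodePort is hypothetical, not the chart's current interface):

```yaml
spec:
  type: {{ .Values.gate.service.type }}
  ports:
    - port: {{ .Values.gate.service.externalPort }}
      {{- if .Values.gate.service.nodePort }}
      nodePort: {{ .Values.gate.service.nodePort }}
      {{- end }}
  selector:
    app: {{ .Release.Name }}-gateway
```

With this shape, existing ClusterIP users see no change, and NodePort users opt in by setting both keys.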
How do I get an aqua image registry account?
We've currently got the Aqua Helm chart running the console and a scanner, but when scanning a particular image locally from my Mac command line, we're getting the error
"twirp error unknown: Error from intermediary with HTTP status code 413 "Request Entity Too Large""
The image is 1.4 GB.
The confusing part is that scanning the same image from the console works fine.
Currently there are at least two manual steps required in order to set up all the Aqua components, meaning installation is not fully automated and requires human intervention to complete.
Once the Server is installed, one must add the license key and admin user/pass. This can actually be seeded with Helm or via secrets; it's just not documented (which is a separate issue).
In order to fully automate installation, one must also be able to seed a Scanner user/pass and an Enforcer group/token in the same way.
Currently one can only create a Scanner user/pass by manually going into the UI and creating one.
The same is true for an Enforcer group/token: you must do so in the UI.
Instead, or alternatively, one should be able to specify those via k8s Secrets or Helm values, as one already can for the admin user/pass and license key. If that were possible, the Scanner and the Enforcer could be installed without human intervention, making for a very seamless automated install.
I assumed this must be possible when I first went about installing Aqua; after all, automation is key and this whole repo exists to help with automation, but I was proven quite wrong unfortunately.
Related to #119 , which points out the lack of docs for these manual steps.
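For illustration, seeding could look like pre-created Secrets that the charts then reference. Everything below (Secret names, key names, values) is hypothetical, since the charts don't currently support this:

```yaml
# Hypothetical pre-seeded credentials the Scanner and Enforcer charts could consume
apiVersion: v1
kind: Secret
metadata:
  name: aqua-scanner-credentials
type: Opaque
stringData:
  username: scanner-ci
  password: example-password
---
apiVersion: v1
kind: Secret
metadata:
  name: aqua-enforcer-token
type: Opaque
stringData:
  token: example-enforcer-group-token
```

The Server would still need to accept these at startup (or the charts would need a hook to register them), which may be the upstream change mentioned above.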
Your metadata -> labels entries are not indented correctly, causing a kubectl YAML snafu :)
Hi, I see this code block where I can input a probe for liveness and readiness, but I can't find any default value that I can use to determine if the console is healthy or ready.
Is it possible to know whether the console implements any of those? I tried the knowledge base but couldn't find anything useful.
aqua-helm/server/templates/web-deployment.yaml
Lines 175 to 182 in 07321c6
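If the console does serve its UI on its container port, a generic TCP or HTTP check could be used as a stopgap. This is only a sketch assuming port 8080 (the port the chart's service exposes), not a documented health endpoint:

```yaml
# Sketch only - assumes the console listens on 8080; no dedicated
# health endpoint is documented, so '/' is a guess
livenessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 60
  periodSeconds: 30
readinessProbe:
  httpGet:
    path: /
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10
```

A confirmed health endpoint from Aqua would make the httpGet path reliable.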
Active-active support was recently added to the server chart, and the docs indicate it is disabled by default:
e688b27#diff-7d62b4cde47a08800e3b7212ca2dd397R81
However, the default value for activeactive is actually true:
ce60ce8#diff-0fd8ccf1d8b0f5309cd540e606d67248R17
The default value should be false, as this is an unwanted new setting for existing consumers of the chart.
Documentation says to use certain runAsUser, runAsGroup and fsGroup values when running the server and gateway deployments, but the helm charts hardcode RunAsAny. What's the best way to do this?
{{- if .Values.rbac.privileged }}
fsGroup:
  rule: RunAsAny
runAsUser:
  rule: RunAsAny
{{- else }}
runAsUser:
values.yaml
securityContext:
  runAsUser: 11431
  runAsGroup: 11433
  fsGroup: 11433
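One way to reconcile the two would be to template the pod securityContext from values, with the documented IDs as defaults. This is a sketch; the values keys are assumptions rather than the chart's current interface:

```yaml
# In the server/gateway deployment template (sketch) - falls back to the
# documented IDs when the user supplies nothing
securityContext:
  runAsUser: {{ .Values.securityContext.runAsUser | default 11431 }}
  runAsGroup: {{ .Values.securityContext.runAsGroup | default 11433 }}
  fsGroup: {{ .Values.securityContext.fsGroup | default 11433 }}
```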
The latest chart, 4.5, fails to create an Ingress:
helm upgrade --install --namespace aqua csp aqua-helm/server/ -f aqua-helm/values.yaml
UPGRADE FAILED
Error: unable to recognize "": no matches for kind "Ingress" in version "apps/v1"
Error: UPGRADE FAILED: unable to recognize "": no matches for kind "Ingress" in version "apps/v1"
values.yaml (ingress section):
web:
  ingress:
    annotations:
      kubernetes.io/ingress.class: nginx
    enabled: true
    hosts:
      - aquasec.domain.com
    tls:
      - secretName: aquasec-tls
      - hosts:
          - aquasec.domain.com
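The error suggests the template emits an apiVersion the cluster doesn't serve for Ingress. A common pattern, shown here only as a sketch and not as the chart's actual code, is to pick the apiVersion from the cluster's capabilities:

```yaml
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion }}
apiVersion: networking.k8s.io/v1beta1
{{- else }}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
```

Ingress has never been in the apps/v1 group, so whatever the template currently renders there needs correcting either way.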
An Aqua client has requested that an update be made to the following line:
They wish to have the name "enforcer" replaced with a helm variable that allows for a custom name. I have informed them how easily this can be done in a local file and provided instructions for doing so.
This is not a request for support, but simply a request to update our file documentation. Thank you!
In https://github.com/aquasecurity/aqua-helm/blob/5.3/server/templates/web-secrets.yaml#L12-13, latest commit 70ae917, it appears that .metadata and .metadata.name are duplicated. This causes an error when used with kustomize:
failed to decode ynode: yaml: unmarshal errors:
line 11: mapping key "metadata" already defined at line 4.
Hi,
We've noticed that the enforcer install copies the etc/passwd file using the /usr/sbin/runc process. Modification of the password file is blocked by many "traditional" security vendors on Linux. Whilst we can whitelist processes or set them to report instead of block, we'd prefer not to whitelist the runc process as a whole, as many other areas use it, potentially maliciously.
Is there a way Aqua can change the process used to copy the passwd file to be more specific to them?
Hi,
The charts in http://helm.aquasec.com (it should be mentioned as https in the docs) aren't synced with the ones in this repository. Check out web-ingress.yml, for instance: the version fetched from helm.aquasec.com doesn't contain the handling for Kubernetes versions prior to v1.14.x.
Thanks!
The server chart fails to install with helm 3.2.1:
Command:
helm install aqua-server aqua-helm/server -n aqua --create-namespace --set imageCredentials.username=test,imageCredentials.password=test
Result:
install.go:159: [debug] Original chart version: ""
install.go:176: [debug] CHART PATH: /home/cdunford/.cache/helm/repository/server-4.6.0.tgz
Error: template: server/templates/web-secrets.yaml:2:11: executing "server/templates/web-secrets.yaml" at <(.Values.admin.password) .Values.admin.token>: can't give argument to non-function .Values.admin.password
helm.go:84: [debug] template: server/templates/web-secrets.yaml:2:11: executing "server/templates/web-secrets.yaml" at <(.Values.admin.password) .Values.admin.token>: can't give argument to non-function .Values.admin.password
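The error text suggests the template applies .Values.admin.token to (.Values.admin.password) as if the latter were a function, which the Go template engine in newer helm builds rejects. Assuming the intent was "render when either value is set", a guard like this sketch would avoid it:

```yaml
{{- if or .Values.admin.password .Values.admin.token }}
# ... render the secret body here ...
{{- end }}
```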
The same command is successful with helm 3.2.0:
Result:
install.go:159: [debug] Original chart version: ""
install.go:176: [debug] CHART PATH: /home/cdunford/.cache/helm/repository/server-4.6.0.tgz
client.go:108: [debug] creating 1 resource(s)
client.go:258: [debug] Starting delete for "aqua-server-database-password" Secret
client.go:108: [debug] creating 1 resource(s)
client.go:108: [debug] creating 13 resource(s)
NAME: aqua-server
LAST DEPLOYED: Fri May 15 12:30:56 2020
NAMESPACE: aqua
STATUS: deployed
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
imageCredentials:
  password: test
  username: test

COMPUTED VALUES:
activeactive: true
admin:
  password: null
  passwordkey: null
  secretname: null
  token: null
  tokenkey: null
clustermode: false
db:
  affinity: {}
  dbPasswordKey: null
  dbPasswordName: null
  external:
    auditName: null
    enabled: false
    host: null
    name: null
    password: null
    port: null
    pubsubName: null
    user: null
  image:
    pullPolicy: IfNotPresent
    repository: database
    tag: "4.6"
  livenessProbe:
    exec:
      command:
        - sh
        - -c
        - exec pg_isready --host $POD_IP
    failureThreshold: 6
    initialDelaySeconds: 60
    timeoutSeconds: 5
  nodeSelector: {}
  passwordSecret: null
  persistence:
    accessMode: ReadWriteOnce
    enabled: true
    size: 30Gi
    storageClass: null
  readinessProbe:
    exec:
      command:
        - sh
        - -c
        - exec pg_isready --host $POD_IP
    initialDelaySeconds: 5
    periodSeconds: 5
    timeoutSeconds: 3
  resources:
    limits:
      cpu: 1
      memory: 1Gi
    requests:
      cpu: 0.1
      memory: 0.2Gi
  service:
    type: ClusterIP
  ssl: false
  tolerations: []
docker:
  socket:
    path: /var/run/docker.sock
dockerless: false
gate:
  affinity: {}
  grpcservice:
    externalPort: 8443
    nodePort: null
    type: ClusterIP
  image:
    pullPolicy: IfNotPresent
    repository: gateway
    tag: "4.6"
  livenessProbe: {}
  nodeSelector: {}
  publicIP: aqua-gateway
  readinessProbe: {}
  replicaCount: 1
  resources:
    limits:
      cpu: 1
      memory: 2Gi
    requests:
      cpu: 0.1
      memory: 0.2Gi
  service:
    externalPort: 3622
    nodePort: null
    type: ClusterIP
  tolerations: []
imageCredentials:
  create: true
  name: csp-registry-secret
  password: test
  registry: registry.aquasec.com
  repositoryUriPrefix: registry.aquasec.com
  username: test
rbac:
  enabled: true
  privileged: true
  roleRef: null
scanner:
  affinity: {}
  enabled: false
  image:
    pullPolicy: IfNotPresent
    repository: scanner
    tag: "4.6"
  livenessProbe: {}
  nodeSelector: {}
  password: null
  readinessProbe: {}
  replicaCount: 1
  resources: {}
  tolerations: []
  user: null
web:
  affinity: {}
  encryptionKey: null
  image:
    pullPolicy: IfNotPresent
    repository: console
    tag: "4.6"
  ingress:
    annotations: {}
    enabled: false
    hosts: null
    tls: []
  livenessProbe: {}
  nodeSelector: {}
  persistence:
    accessMode: ReadWriteOnce
    enabled: true
    size: 4
    storageClass: null
  proxy:
    httpProxy: null
    httpsProxy: null
    noProxy: null
  readinessProbe: {}
  replicaCount: 1
  resources:
    limits:
      cpu: 1
      memory: 2Gi
    requests:
      cpu: 0.1
      memory: 0.2Gi
  service:
    externalPort: 8080
    nodePort: null
    type: ClusterIP
  tolerations: []
HOOKS:
---
# Source: server/templates/db-password-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: aqua-server-database-password
  labels:
    app: aqua-server-database
    chart: "server-4.6.0"
    release: "aqua-server"
    heritage: "Helm"
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-delete-policy": before-hook-creation
type: Opaque
data:
  db-password: "T1ByV21CTHhlWm5RMDhEVkdCcUQ="
MANIFEST:
---
# Source: server/templates/rbac.yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: aqua-server-psp
  labels:
    app: aqua-server
    chart: "server-4.6.0"
    release: "aqua-server"
    heritage: "Helm"
spec:
  privileged: true
  allowedCapabilities:
    - '*'
  fsGroup:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
    - '*'
---
# Source: server/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: aqua-server-sa
  labels:
    app: aqua-server
    chart: "server-4.6.0"
    release: "aqua-server"
    heritage: "Helm"
imagePullSecrets:
  - name: aqua-server-registry-secret
---
# Source: server/templates/image-pull-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: aqua-server-registry-secret
  labels:
    app: aqua-server
    chart: "server-4.6.0"
    release: "aqua-server"
    heritage: "Helm"
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: eyJhdXRocyI6IHsicmVnaXN0cnkuYXF1YXNlYy5jb20iOiB7ImF1dGgiOiAiZEdWemREcDBaWE4wIn19fQ==
---
# Source: server/templates/db-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: aqua-server-database-pvc
  labels:
    app: aqua-server
    chart: "server-4.6.0"
    release: "aqua-server"
    heritage: "Helm"
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "30Gi"
---
# Source: server/templates/web-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: aqua-server-web-pvc
  labels:
    app: aqua-server
    chart: "server-4.6.0"
    release: "aqua-server"
    heritage: "Helm"
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "4Gi"
---
# Source: server/templates/rbac.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: aqua-server-cluster-role
  labels:
    app: aqua-server
    chart: "server-4.6.0"
    release: "aqua-server"
    heritage: "Helm"
    rbac.example.com/aggregate-to-monitoring: "true"
rules:
  - apiGroups: ["extensions"]
    resourceNames: [aqua-server-psp]
    resources: ["podsecuritypolicies"]
    verbs: ["use"]
  - apiGroups: [""]
    resources: ["nodes", "services", "endpoints", "pods", "deployments", "namespaces", "componentstatuses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["*"]
    verbs: ["get", "list", "watch"]
---
# Source: server/templates/rbac.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: aqua-server-role-binding
  namespace: aqua
  labels:
    app: aqua-server
    chart: "server-4.6.0"
    release: "aqua-server"
    heritage: "Helm"
subjects:
  - kind: ServiceAccount
    name: aqua-server-sa
    namespace: aqua
roleRef:
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
  name: aqua-server-cluster-role
---
# Source: server/templates/db-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: aqua-server-database-svc
  labels:
    app: aqua-server-database
    chart: "server-4.6.0"
    release: "aqua-server"
    heritage: "Helm"
spec:
  type: ClusterIP
  selector:
    app: aqua-server-database
  ports:
    - port: 5432
---
# Source: server/templates/gate-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: aqua-server-gateway-svc
  labels:
    app: aqua-server-gateway
    chart: "server-4.6.0"
    release: "aqua-server"
    heritage: "Helm"
spec:
  type: ClusterIP
  selector:
    app: aqua-server-gateway
  ports:
    - port: 3622
      targetPort: 3622
      protocol: TCP
      name: aqua-gate
    - port: 8443
      targetPort: 8443
      protocol: TCP
      name: aqua-gate-ssl
---
# Source: server/templates/web-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: aqua-server-console-svc
  labels:
    app: aqua-server-console
    chart: "server-4.6.0"
    release: "aqua-server"
    heritage: "Helm"
spec:
  type: ClusterIP
  selector:
    app: aqua-server-console
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
      name: aqua-server-console
    - port: 443
      protocol: TCP
      targetPort: 8443
      name: aqua-web-ssl
---
# Source: server/templates/db-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aqua-server-database
  labels:
    app: aqua-server-database
    chart: "server-4.6.0"
    release: "aqua-server"
    heritage: "Helm"
spec:
  selector:
    matchLabels:
      app: aqua-server-database
  template:
    metadata:
      annotations:
      labels:
        app: aqua-server-database
      name: aqua-server-database
    spec:
      serviceAccount: aqua-server-sa
      containers:
        - name: db
          image: "registry.aquasec.com/database:4.6"
          imagePullPolicy: "IfNotPresent"
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: aqua-server-database-password
                  key: db-password
            - name: PGDATA
              value: "/var/lib/postgresql/data/db-files"
            - name: POD_IP
              valueFrom: { fieldRef: { fieldPath: status.podIP } }
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-database
          ports:
            - containerPort: 5432
              protocol: TCP
          livenessProbe:
            exec:
              command:
                - sh
                - -c
                - exec pg_isready --host $POD_IP
            failureThreshold: 6
            initialDelaySeconds: 60
            timeoutSeconds: 5
          readinessProbe:
            exec:
              command:
                - sh
                - -c
                - exec pg_isready --host $POD_IP
            initialDelaySeconds: 5
            periodSeconds: 5
            timeoutSeconds: 3
          resources:
            limits:
              cpu: 1
              memory: 1Gi
            requests:
              cpu: 0.1
              memory: 0.2Gi
      volumes:
        - name: postgres-database
          persistentVolumeClaim:
            claimName: aqua-server-database-pvc
---
# Source: server/templates/gate-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aqua-server-gateway
  labels:
    app: aqua-server-gateway
    chart: "server-4.6.0"
    release: "aqua-server"
    heritage: "Helm"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aqua-server-gateway
  template:
    metadata:
      annotations:
      labels:
        app: aqua-server-gateway
      name: aqua-server-gateway
    spec:
      serviceAccount: aqua-server-sa
      containers:
        - name: gate
          image: "registry.aquasec.com/gateway:4.6"
          imagePullPolicy: "IfNotPresent"
          env:
            - name: AQUA_CONSOLE_SECURE_ADDRESS
              value: "aqua-server-console-svc:443"
            - name: SCALOCK_GATEWAY_PUBLIC_IP
              value: aqua-gateway
            - name: HEALTH_MONITOR
              value: "0.0.0.0:8082"
            - name: SCALOCK_DBUSER
              value: postgres
            - name: SCALOCK_DBPASSWORD
              valueFrom:
                secretKeyRef:
                  name: aqua-server-database-password
                  key: db-password
            - name: SCALOCK_DBNAME
              value: scalock
            - name: SCALOCK_DBHOST
              value: aqua-server-database-svc
            - name: SCALOCK_DBPORT
              value: "5432"
            - name: SCALOCK_AUDIT_DBUSER
              value: postgres
            - name: SCALOCK_AUDIT_DBPASSWORD
              valueFrom:
                secretKeyRef:
                  name: aqua-server-database-password
                  key: db-password
            - name: SCALOCK_AUDIT_DBNAME
              value: slk_audit
            - name: SCALOCK_AUDIT_DBHOST
              value: aqua-server-database-svc
            - name: SCALOCK_AUDIT_DBPORT
              value: "5432"
            - name: AQUA_PUBSUB_DBUSER
              value: postgres
            - name: AQUA_PUBSUB_DBPASSWORD
              valueFrom:
                secretKeyRef:
                  name: aqua-server-database-password
                  key: db-password
            - name: AQUA_PUBSUB_DBNAME
              value: aqua_pubsub
            - name: AQUA_PUBSUB_DBHOST
              value: aqua-server-database-svc
            - name: AQUA_PUBSUB_DBPORT
              value: "5432"
          ports:
            - containerPort: 3622
              protocol: TCP
            - containerPort: 8443
              protocol: TCP
          resources:
            limits:
              cpu: 1
              memory: 2Gi
            requests:
              cpu: 0.1
              memory: 0.2Gi
---
# Source: server/templates/web-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aqua-server-console
  labels:
    app: aqua-server-console
    chart: "server-4.6.0"
    release: "aqua-server"
    heritage: "Helm"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aqua-server-console
  template:
    metadata:
      annotations:
      labels:
        app: aqua-server-console
      name: aqua-server-console
    spec:
      serviceAccount: aqua-server-sa
      containers:
        - name: web
          image: "registry.aquasec.com/console:4.6"
          imagePullPolicy: "IfNotPresent"
          env:
            - name: SCALOCK_DBUSER
              value: postgres
            - name: SCALOCK_DBPASSWORD
              valueFrom:
                secretKeyRef:
                  name: aqua-server-database-password
                  key: db-password
            - name: SCALOCK_DBNAME
              value: scalock
            - name: SCALOCK_DBHOST
              value: aqua-server-database-svc
            - name: SCALOCK_DBPORT
              value: "5432"
            - name: SCALOCK_AUDIT_DBUSER
              value: postgres
            - name: SCALOCK_AUDIT_DBPASSWORD
              valueFrom:
                secretKeyRef:
                  name: aqua-server-database-password
                  key: db-password
            - name: SCALOCK_AUDIT_DBNAME
              value: slk_audit
            - name: SCALOCK_AUDIT_DBHOST
              value: aqua-server-database-svc
            - name: SCALOCK_AUDIT_DBPORT
              value: "5432"
            - name: AQUA_PUBSUB_DBUSER
              value: postgres
            - name: AQUA_PUBSUB_DBPASSWORD
              valueFrom:
                secretKeyRef:
                  name: aqua-server-database-password
                  key: db-password
            - name: AQUA_PUBSUB_DBNAME
              value: aqua_pubsub
            - name: AQUA_PUBSUB_DBHOST
              value: aqua-server-database-svc
            - name: AQUA_PUBSUB_DBPORT
              value: "5432"
            - name: AQUA_DOCKERLESS_SCANNING
              value: "0"
            - name: AQUA_PPROF_ENABLED
              value: "0"
            - name: DISABLE_IP_BAN
              value: "0"
            - name: AQUA_CLUSTER_MODE
              value: "active-active"
            - name: AQUA_CONSOLE_RAW_SCAN_RESULTS_STORAGE_SIZE
              value: "4"
          ports:
            - containerPort: 8080
              protocol: TCP
            - containerPort: 8443
              protocol: TCP
          volumeMounts:
            - mountPath: /var/run/docker.sock
              name: docker-socket-mount
            - mountPath: /opt/aquasec/raw-scan-results
              name: aqua-web-pvc
          resources:
            limits:
              cpu: 1
              memory: 2Gi
            requests:
              cpu: 0.1
              memory: 0.2Gi
      volumes:
        - name: docker-socket-mount
          hostPath:
            path: /var/run/docker.sock
        - name: aqua-web-pvc
          persistentVolumeClaim:
            claimName: aqua-server-web-pvc
NOTES:
Thank you for installing Aqua Security Server.
Now that you have deployed Aqua Server, you should look over the docs on using:
https://read.aquasec.com/docs
Your release is named aqua-server. To learn more about the release, try:
$ helm status aqua-server
$ helm get aqua-server
In the helm template for web_ingress.yaml, there is a variable referenced for externalPort, but there is no such value to define in values.yaml, and no definition for it in the values section of the README.md either. As a result, the error in the subject header is generated.
The gateway server host and port parameters set during helm install/upgrade are documented wrong and need to align with the entries in values.yaml:
gate:
  host: csp-gateway-svc # example
  port: 3622
I am getting an error when trying to use Helm with ArgoCD. My values.yaml file does include the enforcerToken value.
enforcerToken: "xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxx"
I scrubbed the value but that is what is listed in my values.yaml file. When I look at my parameters section in Argo it shows the proper parameters are being populated when I try to patch the file to swap the token. I am not sure if there is a bug in the yaml file listed in the subject or if I am using it incorrectly.
Thanks.
None of the Charts seem to have changelogs, making it very difficult to upgrade or rely on this repo, as one has to read through each commit to understand the changes made. To make matters worse, most of the commits and PRs in this repo also have little to no details in their titles and messages, so the commit log is not really useful as a changelog either.
Per #82 , this repository and the corresponding Helm registry also overwrite existing versions and make breaking changes without bumping version at all, making even pinning to exact version unreliable. Overwriting existing versions is generally poor practice outside of security incidents where the better response is usually to unpublish. Breaking changes like those mentioned in #82 or in 70ae917 seem to be made with some frequency too. That would make a changelog, if not full upgrade docs, even more important.
If SemVer is not being followed, this should be very explicitly written in multiple places in the docs, and the alternate versioning strategy explicitly written as well. Right now it is misleadingly SemVer, whereas CalVer may perhaps be more fitting given current usage.
Having a changelog will also make it easier to follow SemVer as one has to delineate breaking changes, new features, and patches from each other.
aqua-helm/enforcer/values.yaml
Line 32 in 386deb2
We had a client getting errors while attempting to implement helm. They were seeing some variation of:
error converting YAML to JSON: error converting YAML to JSON
They found that removing the {} at the end of the "resource" line allowed them to successfully move forward.
I'm not sure whether there are situations in which the {} is needed, so I'm not asking for it to be removed so much as just reporting that a client needed it removed.
This ticket is more for awareness and can be closed, I understand we are revamping helm.
As per the README, scanner.replicas should set the number of replicas to be running, but the deployment template refers to replicas: {{ .Values.scanner.replicaCount }}.
Can you please either update the documentation or the template to fix this?
Update
aqua-helm/aqua-quickstart/Chart.yaml
Line 1 in cc7851c
apiVersion: v2
to apiVersion: v1. v2 is still not supported in several implementations.
If multi_cluster is set to false (the default), the ServiceAccount is not created, but the DaemonSet doesn't check the multi_cluster value and always sets spec.template.spec.serviceAccount to {{ .Release.Namespace }}-sa. When the ServiceAccount does not exist (i.e. multi_cluster is false), the Pods can't be created and you see the following event:
Events:
  Type     Reason        Age                  From                  Message
  ----     ------        ----                 ----                  -------
  Warning  FailedCreate  47s (x15 over 2m9s)  daemonset-controller  Error creating: pods "enforcer-ds-" is forbidden: error looking up service account aqua/aqua-sa: serviceaccount "aqua-sa" not found
My current workaround is to set multi_cluster: true and let the ServiceAccount get created.
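A sketch of a guard in the DaemonSet template that would match the ServiceAccount's own condition (assuming the ServiceAccount is only rendered when multi_cluster is true):

```yaml
spec:
  template:
    spec:
      {{- if .Values.multi_cluster }}
      serviceAccount: {{ .Release.Namespace }}-sa
      {{- end }}
```

When multi_cluster is false, the Pods would then fall back to the namespace's default ServiceAccount instead of referencing one that doesn't exist.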
Hello,
Greetings. I am trying to install the helm chart in my k8s environment. I expected the document to have a 'helm install' command, but I didn't find one; the document only mentions 'helm upgrade --install'.
Can you please point me to the relevant guide?
Regards,
Deepak
There was an update made to the server chart (specifically ce60ce8#diff-cf1e8c14e54505f60aa10ceb8d5d8ab3) that removed values we were setting (specifically the audit DB parameters auditHost, auditUser and auditPassword). Now our console container will no longer start, as it cannot connect to the audit DB, and one of our environments is unusable.
The bigger problem here is that the chart version did not change whatsoever with this change, and it was published to helm.aquasec.com replacing the previous revision of 4.6.0. We have no way to reference the previous, working version of the chart, as it appears to have been completely replaced by this new version.
Whenever changes are made to the chart, at the very least the chart patch version should be changed (and possibly major/minor version depending on the scope/nature of the change) to avoid impacting anyone who has pinned to a specific version of the chart. Once a specific version of the chart is published, it should not be modified.
Gateway is unable to get ClusterId according to the logfile while using the server chart:
2019-07-10 07:29:17.172 WARN Failed getting cluster id, failed getting nodes: nodes is forbidden: User "system:serviceaccount:aqua:aqua-sa" cannot list resource "nodes" in API group "" at the cluster scope
Here is the fix:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ .Release.Name }}-cluster-role
  labels:
    app: {{ .Release.Name }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
rules:
  - apiGroups: ["extensions"]
    resourceNames: [{{ .Release.Name }}-psp]
    resources: ["podsecuritypolicies"]
    verbs: ["use"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["list"]
{{- end }}
Is there a way of defining a static IP for the gateway and web services if their type is LoadBalancer? Is gate.publicIP such a value for the gateway?
On branch 5.3, as of commit e1da7fc, the image pull secret name uses .Release.Name (https://github.com/aquasecurity/aqua-helm/blob/5.3/server/templates/image-pull-secret.yaml#L6), but the service account expects the secret name to be built from .Release.Namespace (https://github.com/aquasecurity/aqua-helm/blob/5.3/server/templates/serviceaccount.yaml#L13).
For reference, the enforcer chart is more consistent on this: https://github.com/aquasecurity/aqua-helm/blob/5.3/enforcer/templates/serviceaccount.yaml#L14 and https://github.com/aquasecurity/aqua-helm/blob/5.3/enforcer/templates/image-pull-secret.yaml#L6
I defer to you on which name you'd like to use, .Release.Namespace or .Release.Name, but I have a slight preference for .Release.Name (what if a user wants to deploy different charts to the same namespace?).
The impact of this is that imageCredentials.create must be set to false in order for the chart to work correctly, and the secret must be created by the user out-of-band.
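The fix is simply to build both names from the same field. For example, standardizing on .Release.Name would look roughly like this sketch of the two templates:

```yaml
# image-pull-secret.yaml (sketch)
metadata:
  name: {{ .Release.Name }}-registry-secret
---
# serviceaccount.yaml (sketch) - references the same name
imagePullSecrets:
  - name: {{ .Release.Name }}-registry-secret
```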
There is no option available to specify a Secret for the Scanner password, meaning it has to be put in plaintext in a Helm values file or passed via the command line. But not all Helm deployment styles allow for command-line specification, requiring customization to do so. This is also a security issue, as in either case the plaintext password is introspectable.
It would be much easier if the Scanner password could be specified as a Secret, similar to the Server's admin password or the Enforcer's enforcerToken.
This may be difficult to provide here and may require upstream changes to the Scanner, because it seems to only accept password as a command-line argument, both in the Chart's containers spec and per the Scanner executable docs. The Scanner may need to accept an environment variable or volume mount as an alternative to the command-line argument.
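If the Scanner gained environment-variable support, the chart could source the password from a Secret the same way the server chart sources its DB password. The variable name and Secret name below are hypothetical, contingent on that upstream change:

```yaml
env:
  - name: SCANNER_PASSWORD   # hypothetical - the scanner binary would need to read this
    valueFrom:
      secretKeyRef:
        name: aqua-scanner-credentials
        key: password
```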
AWS requires annotations on Services for certain configurations, as one example requirement.
A PR that added this functionality was closed without comment.
Example:
grpcservice:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: aqua-gateway.company.domain
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
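Supporting this is typically a few lines in the service template, rendering whatever annotation map the user supplies. A sketch (the values path is an assumption):

```yaml
metadata:
  annotations:
    {{- with .Values.gate.grpcservice.annotations }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
```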
I was following the directions in the README and ran into some issues with the KubeEnforcer Chart not existing on the https://helm.aquasec.com server.
Reproduction:
$ helm repo add aqua-helm https://helm.aquasec.com
"aqua-helm" has been added to your repositories
$ helm search repo aqua-helm
NAME CHART VERSION APP VERSION DESCRIPTION
aqua-helm/enforcer 5.0.0 5.0 A Helm chart for the Aqua Enforcer
aqua-helm/harbor-scanner-aqua 0.2.0 0.2.0 Harbor scanner adapter for Aqua CSP scanner
aqua-helm/harbor-scanner-trivy 0.1.3 0.2.1 Trivy as a plug-in vulnerability scanner in the...
aqua-helm/scanner 5.0.0 5.0 A Helm chart for the aqua scanner cli component
aqua-helm/server 5.0.0 5.0 A Helm chart for the Aqua Console Componants
There is no aqua-helm/kube-enforcer as the README suggests. The kube-enforcer Chart in the repo also has version: 0.1.0, not 5.0.0 as the README shows and as the other Charts have.
I similarly had an error when adding it as a dependency in a Chart.yaml:
...
Saving 4 charts
Downloading server from repo https://helm.aquasec.com
Downloading scanner from repo https://helm.aquasec.com
Downloading enforcer from repo https://helm.aquasec.com
Save error occurred: could not find : chart kube-enforcer not found in https://helm.aquasec.com
Deleting newly downloaded charts, restoring pre-update state
Error: could not find : chart kube-enforcer not found in https://helm.aquasec.com
The current helm chart doesn't support passing values for multiple gateways/envoys. Raising this issue to resolve it.
https://github.com/aquasecurity/aqua-helm/blob/master/enforcer/templates/enforcer-daemonset.yaml#L53
Update the enforcer-daemonset.yaml:
value: {{ .Values.gate.host | default "aqua-gateway-svc:8443" }}
And then pass a comma-separated list of host:port entries similar to the following in values.yaml:
gate:
  host: 20.73.195.115:8443,20.73.33.34:8443
There is an issue with the 4.2 tag. It points to an older commit that does not include improvements for gRPC support.
As it is, gateways cannot reach the console.
$ git log
commit bc95cc2f64fe859702970dfed5af97deb564fbb2 (HEAD -> master, origin/master, origin/HEAD, origin/4.5)
Author: niso120b <[email protected]>
Date: Tue Aug 20 17:10:06 2019 +0300
Add grpc ssl port for enforcer
commit 2bf9cbd3192acd2afe62b54fd7cb6cdef1546c25 (origin/4.2)
Author: Nissim Bitan <[email protected]>
Date: Wed Jul 3 09:55:01 2019 +0300
Update gRPC Support
commit 77a4160386be8e25b3d07992693ef226d95d9341
Author: Nissim Bitan <[email protected]>
Date: Mon Jul 1 19:20:51 2019 +0300
Update ingress example
commit 4b4b7425ea73ab6ebaa9910592897d18d5655ddc (tag: 4.2)
Author: Nissim Bitan <[email protected]>
Date: Mon Jun 24 12:01:06 2019 +0300
Update to 4.2 version
Are there updated instructions?
I also noticed the latest pull request from a few days ago. I tried the chart repository, but it doesn't seem to work either. Can you give me a status on this? I'd like to give this a go running on top of openSUSE Kubic and SUSE CaaS Platform.
Thanks,
Cameron
Currently the envoy config hardcodes the namespace that the gateway-headless-svc is in, so if you deploy Aqua into a namespace other than "aqua", envoy will not work.
Need to update
address: {{ .Release.Name }}-gateway-headless-svc.aqua.svc.cluster.local
to
address: {{ .Release.Name }}-gateway-headless-svc.{{ .Release.Namespace }}.svc.cluster.local
Hi, I couldn't find any useful information about the environment variable CLUSTER_MODE. I even tried the Knowledge Base, but found nothing.
aqua-helm/server/templates/web-deployment.yaml
Lines 150 to 153 in 07321c6
Is it possible to provide more information about this?
Is Helm3 supported?
There's no variable in values.yaml or in the templates to handle loadBalancerIP. This is required to support internal load balancers - https://docs.microsoft.com/en-us/azure/aks/internal-lb
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.240.0.25
  ports:
Please add support for it.
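Support could mirror the Azure example with an optional value that is only rendered when set. The web.service.loadBalancerIP key here is an assumption, not part of the chart's current values:

```yaml
spec:
  type: {{ .Values.web.service.type }}
  {{- if .Values.web.service.loadBalancerIP }}
  loadBalancerIP: {{ .Values.web.service.loadBalancerIP }}
  {{- end }}
```

The same few lines would apply to the gateway service template.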
The docs for Scanner and Enforcer don't mention that they have prerequisites other than an imagePullSecret
, but both do.
The Scanner chart requires that a user and password are given, but the docs/its README don't mention this at all: neither that it is required nor how to create one. Aqua engineers pointed out to me that this was necessary, but I didn't know despite having read through all the docs quite a lot.
I think directions on creating a user/password should be added to the Scanner README's prerequisites.
The Enforcer chart requires that an enforcerToken is given, but the docs/its README don't mention this at all: neither that it is required nor how to create one. I noticed this from the argument in the Helm install command, and knew from the docs site that one must create an Enforcer group; but reading only here, one wouldn't realize this was necessary, as it's not mentioned.
I think directions on creating an Enforcer group and getting an enforcerToken should be added to the Enforcer README's prerequisites.
Mentioned this in #116
The server docs have activeactive and clustermode options, but it's not clear what the difference between the two is.
clustermode says it is for "HA", but activeactive is a term for an HA deployment as well. A comment in the chart values also lists the two as mutually exclusive, which adds to the confusion. The website's docs for HA mention Active-Active but not "Cluster Mode".
It would be good to clarify the differences between the two: whether one is deprecated (since the website doesn't mention cluster mode, maybe it is?), which one is optimal (i.e. most resilient), and what the default is (both are false, so the default is not HA?).
#72 is related but was not really answered. The response doesn't point to a specific place in the docs, and I have read the docs; as far as I can tell, it defines these as the same thing: aqua_cluster_mode='active-active'
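As described above, the chart apparently exposes both flags, defaulting to false and documented as mutually exclusive; roughly:

```yaml
# values.yaml, per the issue text: both default to false, and only one may be enabled
clustermode: false   # documented as "HA"
activeactive: false  # Active-Active HA; mutually exclusive with clustermode
```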
I would like to use the Tenant Manager and I use Helm, but there is no Tenant Manager Chart.
I added it myself in my PR #99 instead of waiting for someone from Aqua to do so, but for some reason that was closed with a one-liner saying "we currently don't need the tenant manager to be in the helm charts", despite me needing it... I said as much in #99 (comment) as well
It seems like this repository has a history of rejecting useful PRs with little to no comments, like all the ones mentioned in #92 and #102, among others... This does not give a good experience to customers, especially ones that are doing work to improve Aqua's products.
Hi, I see for the console component the environment variable AQUA_CONSOLE_RAW_SCAN_RESULTS_STORAGE_SIZE, and some code that creates a PVC for it as well.
It isn't clear to me what the reason for this PVC is. With the current code it is also not possible to add more replicas to the console, as the persistent volume will be attached to the first created replica.
Lines 137 to 141 in 07321c6
Is it possible to give more details on why this is required, and what the suggested solution is for handling the volume (if required) when more than one replica is used?
I tried the knowledge base and git-blamed a few commits, but there were no details.
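For context: if the PVC is created with the default ReadWriteOnce access mode, only one pod (on one node) can mount it, which would explain why additional console replicas cannot attach the volume. A sketch of the presumed claim (the name and value key are assumptions, not the actual template):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Release.Name }}-console-raw-scan-results   # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce   # a second replica on another node cannot attach this volume
  resources:
    requests:
      storage: {{ .Values.web.rawScanResultsStorageSize }}   # hypothetical key
```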
In the Aqua server and gateway documentation, the property db.passwordSecret is documented as the password secret name, when this property is actually used as a flag to indicate whether the user specified a password in the values file, not as the password secret name (cf.
Hi,
I don't seem to be able to get the encryption key working for the Gateway. I've managed to get the server to pick up the new key using the SCALOCK_ENCRYPTION_KEY env var, but it doesn't want to work for the Gateway.
Two problems:
https://docs.aquasec.com/docs/environment-variables#section-optional-variables-for-the-aqua-gateway
Happy to make the change if someone can tell me what the correct variable name is.
Thanks.
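For reference, this is roughly how the key was wired into the server deployment, which works; the open question is the equivalent variable name for the gateway. (The secret name and key here are assumptions, not the chart's actual names.)

```yaml
env:
  - name: SCALOCK_ENCRYPTION_KEY
    valueFrom:
      secretKeyRef:
        name: aqua-encryption-key   # hypothetical secret name
        key: encryption-key         # hypothetical key
```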
When I enable envoy in the server chart, I get the error:
Server: Envoy: error initializing configuration '/etc/envoy/envoy.yaml': Invalid path: /etc/envoy/envoy.yaml.
It appears as if the data section of aqua-envoy-conf configMap isn't getting loaded. If I edit the configMap (kubectl -n aqua edit cm aqua-envoy-conf) and paste the data section in, the pod will start no problem. Anyone else run into this problem?
In the rbac template files you refer to the namespace specifically as "aqua". This causes problems if deploying to any other namespace.
aqua-helm/enforcer/templates/rbac.yaml
Line 66 in 3d1b660
You could use the following in its place:
namespace: {{ .Release.Namespace }}
Render error installing server when ingress enabled
helm install --name myrelease --dry-run --debug -f values.yaml server/
[debug] Created tunnel using local port: '60142'
[debug] SERVER: "127.0.0.1:60142"
[debug] Original chart version: ""
[debug] CHART PATH: /home/kcorupe/aqua/aqua-helm/server
Error: render error in "server/templates/web-ingress.yaml": template: server/templates/web-ingress.yaml:26:46: executing "server/templates/web-ingress.yaml" at <.Release.Name>: can't evaluate field Release in type interface {}
values.yaml:
web:
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx
    hosts:
      - aquasec.test.com
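This error is the usual Helm scoping pitfall: inside `{{ range }}`, the dot is rebound to the loop element, so `.Release.Name` can no longer be resolved. If web-ingress.yaml iterates over the hosts list, the fix would be to reference the root context via `$` (a sketch under that assumption, not the chart's actual template; the service name is a placeholder):

```yaml
{{- range .Values.web.ingress.hosts }}
- host: {{ . }}
  http:
    paths:
      - backend:
          serviceName: {{ $.Release.Name }}-console-svc   # $ reaches the root scope inside range
          servicePort: 8080
{{- end }}
```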
Need to add [no_]http_proxy[s] variables to the server chart to allow connectivity to CyberCentre from air-gapped environments.
spec:
  containers:
    - env:
        - name: http_proxy
          value: "$proxy_url"
        - name: https_proxy
          value: "$proxy_url"
        - name: no_proxy
          value: "$noproxy"