
charts's Introduction

Wiz Kubernetes Helm Charts

Usage

Helm must be installed to use the charts. Please refer to Helm's documentation to get started.

Once Helm is set up properly, add the repo as follows:

helm repo add wiz-sec https://wiz-sec.github.io/charts

You can then run helm search repo wiz-sec to see the charts.
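
From there, a chart can be installed in the usual way; for example (the release name, namespace, and values file below are placeholders, not prescribed by the charts):

helm install wiz-integration wiz-sec/wiz-kubernetes-integration \
  --namespace wiz --create-namespace \
  -f values.yaml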

Helm charts build status: CircleCI

charts's People

Contributors

acaire, ariknem, barmagnezi, bluphy, circleci-wiz, dany74q, daviduash, eladgabay, ericabramov, eyal-moscovici, galsolo, gaziter, itshacki, jcogilvie, jdong10, liorschach, lir-wiz, maximrub, mazooz-chen, morancohen26, nitrikx, nitzanzuler, nivbend, ofirc-wiz, r-darwish, talmalka4, thealannix, uristernik, yarinm, yossimarzuk


charts's Issues

Chart(s) should have an option(s) to be stable across deployments

I run in an environment using infra-as-code tooling that prints out the helm template diff, if there is one, and attaches it to a PR for review.

This requires us to be able to quash certain diffs by implementing an option to stabilize the template across runs of helm template. It could be a random seed, or it could be a flag, but things like this don't work for us out of the box:

apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "wiz-kubernetes-connector.name" . }}-delete-connector
  ...
  annotations:
    ...
    rollme: {{ randAlphaNum 5 | quote }}

This will change on every deployment, with no way to suppress it. That means every time I make a change to my repo that deploys Wiz (which also deploys a bunch of other tooling), Wiz shows up in the diff as changed.

I would like to request the team to see if there's a way to accomplish the goal of this annotation w/o template instability. For instance, could we hash some other element of the chart instead?
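
For reference, a common Helm technique for this is to derive the annotation from a checksum of other chart inputs rather than a random value, so it only changes when those inputs change. A minimal sketch, assuming the relevant inputs live under .Values.autoCreateConnector (the checksum key name is illustrative, not part of the chart):

  annotations:
    # stable across renders; changes only when the referenced values change
    checksum/connector-config: {{ .Values.autoCreateConnector | toYaml | sha256sum | quote }}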

connector secret is getting deleted

I am using ArgoCD (v2.5.2) to deploy the wiz-kubernetes-connector:1.0.0 chart.

I have the following values set:

wizApiToken:
  # Specifies whether an api token secret should be created
  # If create is false you need to create it with clientId, clientToken
  createSecret: false

Upon syncing, the Helm pre-install/pre-upgrade hook fires and creates a job, which in turn creates the wiz-kubernetes-connector.connectorSecretName secret that the wiz-kubernetes-connector-broker pod mounts when it starts up. However, when the job finishes, the secret is deleted before the pod has finished creating, so the pod cannot start up.

Expected behavior: The wiz-kubernetes-connector.connectorSecretName secret shouldn't be deleted, as this prevents the pod from starting up.

Digest of wiz-broker images not publicly available

Hi,

As part of deployment to company-internal infrastructure, I need to mirror the wiz-broker Docker image into a private company Docker registry. To do this safely, I would like to verify that the image being mirrored has the same digest as the one officially released by Wiz.io.
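
For what it's worth, once a trusted digest reference exists, the upstream and mirrored copies can be compared with standard registry tooling; a sketch using crane from go-containerregistry (the image references are placeholders):

# digest of the image as published upstream
crane digest <registry>/wiz-broker:<tag>
# digest of the mirrored copy in the private registry
crane digest <private-registry>/wiz-broker:<tag>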

Is there a place where I can see which versions of the Wiz Docker images are available (ideally with a changelog) and what the digest is for each of them, so I can verify?

Sadly, I have not found an answer in Wiz documentation so I am trying my luck here.

Thank you!

wiz-kubernetes-connector chart version 2.2.12 broker doesn't work

I have deployed the latest version of wiz-kubernetes-connector (2.2.12) as per the instructions here. The broker pod never ends up running and has this output:

Starting Wiz tunnel client 0.41.0, Connector Id: xxx (Token: xxx...)
Target: kubernetes.default.svc.cluster.local:443
Connecting to: tunnel.eu8.app.wiz.io:443
{"level":"info","time":"2023-08-02T15:12:49.79507747Z","msg":"Binding flag [client-id] on env variable [WIZ_CLIENT_ID]"}
{"level":"info","time":"2023-08-02T15:12:49.795148199Z","msg":"Binding flag [client-token] on env variable [WIZ_CLIENT_TOKEN]"}
{"level":"info","time":"2023-08-02T15:12:49.795171661Z","msg":"Binding flag [cluster-external-id] on env variable [WIZ_CLUSTER_EXTERNAL_ID]"}
{"level":"info","time":"2023-08-02T15:12:49.795263425Z","msg":"Binding flag [env] on env variable [WIZ_ENV]"}
{"level":"info","time":"2023-08-02T15:12:49.795612952Z","msg":"Binding flag [frp-config] on env variable [WIZ_FRP_CONFIG]"}
{"level":"info","time":"2023-08-02T15:12:49.795643206Z","msg":"Binding flag [help] on env variable [WIZ_HELP]"}
{"level":"info","time":"2023-08-02T15:12:49.795913646Z","msg":"Binding flag [managed] on env variable [WIZ_MANAGED]"}
Error: failed to parse server port: strconv.Atoi: parsing "": invalid syntax
{"level":"fatal","time":"2023-08-02T15:12:50.208709846Z","msg":"Failed executing entrypoint","error":"failed to parse server port: strconv.Atoi: parsing \"\": invalid syntax"}

Deploying chart version 2.2.11 works as expected.

[wiz-kubernetes-connector] Connector secret created by Helm chart is not accepted by deployment

See here:

This creates the connector secret with several keys; however, the deployment requires a single connectorData key (presumably in JSON format) instead: https://github.com/wiz-sec/charts/blob/master/wiz-kubernetes-connector/templates/wiz-broker-deployment.yaml#L38-L43

Also, when adding a Kubernetes connector in the wiz.io console, a kubectl command is shown to create this secret manually; it also uses multiple keys in the secret instead of a single connectorData key.

What is the public helm chart link?

How can I publicly pull the Helm charts?

For example, what is the URL for pulling the wiz-kubernetes-integration Helm chart?

I would like to use curl to download the chart, but I don't see any information about it. Can someone clarify this for me? Thanks!
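
For reference, once the repository from the README above is added, any of the charts can be downloaded as a .tgz archive with helm pull; a sketch (the version flag is optional and the value shown is a placeholder):

helm repo add wiz-sec https://wiz-sec.github.io/charts
helm pull wiz-sec/wiz-kubernetes-integration --version <chart-version>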

wiz-sensor: Should not require dynamic access to all secrets in namespace

I'm going through the process of installing wiz-sensor onto a Kubernetes cluster. I noticed that the default RBAC grants read access to all secrets in the namespace. See here:

{{- if .Values.serviceAccount.rbac -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: {{ include "wiz-sensor.fullname" . }}-namespace-role
  labels: {{- include "wiz-sensor.labels" . | nindent 4 }}
  namespace: {{ .Release.Namespace }}
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list", "watch"]
{{- end -}}

The secret name is then passed through at runtime here:

value: {{ include "wiz-sensor.secretName" . }}

This does not match my expectations or experience. I would instead expect the daemonset to mount these values either as environment variables (example) or as a volume (example). My intuition is to provide them as environment variables.
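
For illustration, a container can read specific keys from a named secret without any RBAC on secrets; a minimal sketch of such an env block, reusing the secret helper and the clientId/clientToken keys quoted below (the WIZ_CLIENT_ID/WIZ_CLIENT_TOKEN variable names are taken from the broker logs elsewhere on this page and may not match what the sensor actually expects):

env:
- name: WIZ_CLIENT_ID
  valueFrom:
    secretKeyRef:
      name: {{ include "wiz-sensor.secretName" . }}
      key: clientId
- name: WIZ_CLIENT_TOKEN
  valueFrom:
    secretKeyRef:
      name: {{ include "wiz-sensor.secretName" . }}
      key: clientToken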

Confusingly, later in the daemonset, this secret is mounted as a volume:

- name: api-client-secret
  secret:
    secretName: {{ include "wiz-sensor.secretName" . }}
    items:
    - key: clientId
      path: clientId
    - key: clientToken
      path: clientToken

I don't understand why both approaches would be needed, and my understanding is that we should prefer the latter approach to the former.

Do we require this role and role binding? Can we remove this requirement by refactoring the daemonset and application to read these secrets from a volume or environment variable?

Bug with apiTokenSecretName for wiz-kubernetes-connector chart

After updating our wiz-kubernetes-connector to version 2.2.8 I realized that the apiTokenSecret name was not getting properly generated. It seems that there is a new global key, which is fine, but in the helper function to generate the name for the apiTokenSecret there is a typo.

The key .Values.global.wizApiToken.secret.name is repeated; I think it should be:

{{- define "wiz-kubernetes-connector.apiTokenSecretName" -}}
{{- $nameOverride := coalesce .Values.global.wizApiToken.secret.name  .Values.wizApiToken.secret.name .Values.global.nameOverride .Values.nameOverride }}
{{- default .Chart.Name $nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}

That is, the second, repeated .Values.global.wizApiToken.secret.name should be .Values.wizApiToken.secret.name.

Exclude WIZ namespace by default from admission controller

Hello folks,

By default, Wiz ignores only resources in the kube-system namespace. This is reasonable, but another exclusion must be made for the Wiz resources themselves.

By default, Wiz pods do not pass the Kubernetes Pod Security Standards, in particular this rule: "Pod should run containers with the runtime/default seccomp profile".

In the future there may be more. This makes it impossible to update/upgrade Wiz resources when the Kubernetes admission policy is set to block.

Please come up with a solution to exclude Wiz resources by default; for example, this could be done via Kubernetes labels assigned to all Wiz resources (a sketch follows below).
The documentation should also explicitly state that these resources are excluded.
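
For illustration, a hedged sketch of what a namespace/label-based exclusion could look like on the chart's webhook configuration; the webhook name is a placeholder and this is not the chart's current template (kubernetes.io/metadata.name is the label Kubernetes sets automatically on every namespace):

webhooks:
- name: <existing webhook name>
  namespaceSelector:
    matchExpressions:
    - key: kubernetes.io/metadata.name
      operator: NotIn
      values:
      - kube-system
      - {{ .Release.Namespace }}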

ArgoCD keeps creating new connectors with `autoCreateConnector.enabled = true`

Because ArgoCD treats the pre-install and pre-upgrade Helm lifecycle hooks as PreSync operations, the connector creation job is run on EVERY ArgoCD sync.

Any ideas on how to fix that?
One nice solution would be to make the wiz-broker tool only generate new credentials/update the Kubernetes secrets if:

  • the chart version has changed (could be stored in the secret)
  • we pass a --force-recreation flag to the wiz-broker CLI

If wiz-broker is open source, I can propose a PR on it with this change.

Feature Request: wiz-sensor - Provide easier correlation of event to cluster

With the Wiz cluster connector we were able to set the connector name in the Helm chart. This allowed us to use the cluster name as the connector name and made it easier to identify the cluster related to a finding. Please provide something similar for the sensor chart. Right now the Helm connectors appear to be randomly named; if we could name them after the cluster, as we do for the cluster connector, our lives would be easier.

Urgent! wiz-broker-rbac helm chart failed to install on a new environment

Probably related to https://github.com/wiz-sec/charts/pull/101/files
Helm chart version: 0.4.0
The error I get during terraform apply of the Helm chart:

│ Error: execution error at (wiz-broker/templates/secrets.yaml:16:6): A valid .Values.global.wizConnector.connectorId entry required!
│ 
│   with module.k8s_wiz.helm_release.wiz_broker_rbac,
│   on ../../modules/k8s/agents/wiz/wiz_rbac.tf line 1, in resource "helm_release" "wiz_broker_rbac":
│    1: resource "helm_release" "wiz_broker_rbac" {

Note that this is a blocker for us in onboarding new customers.
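
For context, the template error points at .Values.global.wizConnector.connectorId; a hedged values sketch of what the chart now appears to require (whether this is the intended fix for existing environments is an assumption):

global:
  wizConnector:
    connectorId: "<existing-connector-id>"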

Wiz connector helm chart converts string label value to bool

On my connector deployment, I need to specify:

labels:
  admission.datadoghq.com/enabled: "false"

If I try to use commonLabels to do this, I end up instead with

labels:
  admission.datadoghq.com/enabled: false

The string and the boolean here are not interchangeable; an unquoted boolean is rejected because Kubernetes label values must be strings.

Consider quoting label values, like so:

{{ $index }}: {{ tpl $content $ | quote }}

Mismatched logic between wiz-broker and wiz-connector-creator if `usePodCustomEnvironmentVariablesFile` is used.

The wiz-broker and wiz-connector-creator have mismatched logic blocks when service account ID and secret are passed through an environment variables file:

If a user were to provide the following values:

wizApiToken:
  secret:
    create: false
  
  # API token should be read from an environment file, which is specified in podCustomEnvironmentVariablesFile
  usePodCustomEnvironmentVariablesFile: true

And then passes the correct volumes and volume mounts through to both the connector-creator and the broker:

  customVolumes:
    - name: wiz-kubernetes-connector-secrets
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: "wiz-kubernetes-connector-secrets"
  customVolumeMounts:
    - mountPath: "/etc/wiz-secrets/secrets.conf"
      subPath: secrets.conf
      name: wiz-kubernetes-connector-secrets

This will result in the broker referencing a secret that does not exist. Error: secret wiz-kubernetes-connector-api-token not found

The wiz-connector-creator will complete successfully.

Argo CD application for wiz-kubernetes-integration becomes OutOfSync after a while

Some time after a successful install, ArgoCD reports that the application (the wiz-kubernetes-integration Helm chart) is out of sync and unable to self-heal.

wiz-kubernetes-integration-wiz-admission-controller:
reported manifest diff that it is unable to resolve/self-heal:
rollme.webhookCert

ArgoCD sync logs:
deleting wiz-auto-modify-connector service account

Workaround:
Manually deleting the service account resumes the sync successfully. This step seems to kick off the integration job, which properly reinstalls all the respective resources.

environment:
app.kubernetes.io/chartName: wiz-admission-controller
app.kubernetes.io/instance: wiz-kubernetes-integration
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: wiz-admission-controller
app.kubernetes.io/version: '2.4'
helm.sh/chart: wiz-admission-controller-3.4.13
wiz helm chart 0.1.85
AWS EKS 1.27

Explicit definition of Kubernetes permissions for Helm Connector

Per documentation and FAQ - https://docs.wiz.io/wiz-docs/docs/kubernetes-req-perm-api?lng=en

The required permissions are much more limited in scope than the definition actually used:
https://github.com/wiz-sec/charts/blob/master/wiz-kubernetes-connector/templates/service-account-cluster-reader.yaml#L39

Using permissions at this level for ease of engineering and release is not in alignment with least privilege, which we should expect from a security tool with global visibility.

Customers should not need to grant "get, watch" permissions for all ApiGroups and Resources for one resource that uses the Get verb and none that are reported to use the Watch verb. Future functionality is not a reason for over-provisioning of permissions. (This is something that the Wiz platform itself would flag in other portions of cloud infrastructure.)

In the event of malicious access to or poisoning of the Wiz connector, this reduced scope should limit the damage possible.

Please adhere to least privilege in your RBAC

The privileges should be enumerated instead of using a wildcard because, if I understand correctly, this can read all secrets. You probably do need to read secrets in your own namespace, though.
https://github.com/wiz-sec/charts/blob/master/wiz-kubernetes-connector/templates/service-account-cluster-reader.yaml#L38-L41

rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["get", "list", "watch"]

Example by datadog:
https://github.com/DataDog/helm-charts/blob/main/charts/datadog/templates/cluster-agent-rbac.yaml#L9-L21

rules:
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  - pods
  - nodes
  - namespaces
  - componentstatuses
  verbs:
  - get
  - list
  - watch

Also, if possible, please give an example of how to set up a secret out of band so it's not managed by the Helm chart, and so you don't need to pass in the client_id and client_secret via a values file when you first install the chart (a sketch is shown after the values below).

wizApiToken:
  clientId: ""
  clientToken: ""
  clientEndpoint: ""

broker:
  enabled: true/false

autoCreateConnector:
  connectorName: ""
  apiServerEndpoint: "" # This is only required if broker.enabled = false
  clusterFlavor: ""
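
As a hedged sketch of the out-of-band secret mentioned above: the clientId/clientToken key names are taken from other issues on this page, and the secret name and namespace are placeholders, so the exact keys the chart expects should be double-checked against its templates.

kubectl create secret generic wiz-api-token \
  --namespace wiz \
  --from-literal=clientId=<client-id> \
  --from-literal=clientToken=<client-token>

The chart can then be pointed at the pre-created secret instead of creating its own (this global block also appears in another issue on this page):

global:
  wizApiToken:
    secret:
      create: false
      name: wiz-api-token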

Thanks!

[wiz-kubernetes-connector] Documentation on expected values when autoCreateConnector = false

Hi,

I am trying to deploy the wiz-kubernetes-connector for the first time with broker.enabled = true & autoCreateConnector = false.

The connector is already deployed on GCP; this is why I don't need auto-creation of the connector.

Where can I find these values, and what are their purposes (especially the target* variables, which seem to be Kubernetes-related)?

  connectorId: ""
  connectorToken: ""
  targetDomain: ""
  targetIp: ""
  targetPort: ""

Thank you

wiz-kubernetes-integration unable to deploy from scratch via ArgoCD using externalSecrets

Hello,

wiz-kubernetes-integration cannot be deployed from scratch with ArgoCD when secrets are passed via External Secrets. The reason is the dependency on Kubernetes secrets in job/wiz-kubernetes-connector-create-connector; it naturally cannot run without the secrets being present.

With external-secrets, Kubernetes secrets are not created as part of the deployment; instead, an ExternalSecret resource is created, which syncs with an external vault and then creates the Kubernetes secrets.

Because the Helm template has a dependency on the Kubernetes secrets, the deployment fails:

│   Warning  Failed     9s (x2 over 9s)  kubelet            Error: secret "wiz-sa" not found                                                                                      

We are deploying Wiz via ArgoCD using Kustomize + Helm:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: wiz

resources:
  - namespace.yml
  - secrets.yml

helmCharts:
- name: wiz-kubernetes-integration
  repo: https://charts.wiz.io/
  releaseName: wiz-kubernetes-integration
  namespace: wiz
  valuesFile: values.yml
  version: 0.1.95

The ExternalSecrets are defined in the secrets.yml file. When deploying manually via Kustomize (kustomize build --enable-helm | kubectl apply -f -) the deployment works, because the ExternalSecrets are created; with ArgoCD the deployment remains OutOfSync.
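
For context, an ExternalSecret producing the missing secret would look roughly like the sketch below; the secret store name, remote key paths, and data keys are illustrative placeholders, while the target name wiz-sa matches the secret the job complains about:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: wiz-sa
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: <secret-store>      # placeholder
  target:
    name: wiz-sa              # the secret the connector job expects
  data:
  - secretKey: clientId       # illustrative key names
    remoteRef:
      key: <vault-path>/clientId
  - secretKey: clientToken
    remoteRef:
      key: <vault-path>/clientToken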

Could the chart be improved a bit to allow this sort of use case?

Allow for the opa webhooks in wiz-admission-controller to be installed as normal resources

The wiz-admission-controller currently installs its webhooks unconditionally as Helm hooks, which means they are deleted and recreated on every installation (see https://github.com/wiz-sec/charts/blob/master/wiz-admission-controller/templates/opawebhook.yaml#L19-L21). This can be necessary if the user is not using a custom certificate and the caBundle needs to change continuously.

However, if the user of the chart is using cert-manager or some other method to manage certificates and API-server access to the webhook, then deleting and recreating the admission webhook on every change is rather pointless, and it can lead to a lot of drift for any tooling that diffs a desired configuration against the existing configuration (e.g. Terraform, or ArgoCD if the tooling includes hooks, among other tools).

If possible, could we remove the hook annotations under a new boolean or when cert-manager is enabled? Is there a case that I am missing as to why they should be kept?
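
For illustration, a hedged sketch of gating the hook behavior behind a value; webhooks.installAsHooks is a hypothetical value name, and the exact hook annotations the chart uses today may differ from the ones shown:

metadata:
  name: ...
  {{- if .Values.webhooks.installAsHooks }}
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation
  {{- end }}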

Add Proper Release Tags

Can proper release tags please be added for new releases of the Helm charts? It's really difficult to hunt down the values.yaml documentation for an older release because there are no branch tags or GitHub releases to reference.

Implement `priorityClassName` for admission controller deployment

The helm chart for the admission controller should support priorityClassName in the deployment.

If the deployment runs at the default PriorityClass, it's possible that other workloads can evict the pods, and, in a space-constrained use case, prevent it from running. In the worst case, this can cause workload deployments to fail if the admission webhook is mandatory.

Consider enabling us to set the priority class, or default it to system-cluster-critical so we're (mostly) guaranteed to have some pods somewhere.
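
A hedged sketch of what exposing this could look like; priorityClassName is not currently a value in the chart, so both the value and the template fragment below are assumptions:

# values.yaml
priorityClassName: system-cluster-critical

# deployment template (pod spec)
    spec:
      {{- with .Values.priorityClassName }}
      priorityClassName: {{ . }}
      {{- end }}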

Doubts about configuration

Hello everyone, I have installed Wiz with the following configuration using unified chart version 0.1.95, but the broker doesn't work:

    global:
      wizApiToken:
        secret:
          create: false
          name: wiz-api-token
    wiz-kubernetes-connector:
      autoCreateConnector:
        enabled: true
        clusterFlavor: GKE
      broker:
        enabled: true
      wizConnector:
        createSecret: true
        secretName: wiz-connector-autocreated-secret
    wiz-sensor:
      enabled: true
      imagePullSecret:
        create: false
        name: "wiz-sensor-imagepullkey"
    wiz-admission-controller:
      enabled: true

Regards!

wiz-sensor should provide more context for the permissions and Linux capabilities defined in the Helm chart

I see that we use hostNetwork: true here:

I experimented with setting this to false, and so far the daemonset seems healthy. Is this permission required? If it is required, can we document in detail why?

Additionally, the wiz-sensor helm chart currently defines the following capabilities:

securityContext:
  capabilities:
    add:
    - SYS_ADMIN # moving between namespaces
    - SYS_CHROOT # moving between namespaces
    - SYS_RESOURCE # eBPF
    - SYS_RAWIO # file hashing
    - DAC_OVERRIDE # file hashing
    - DAC_READ_SEARCH # file hashing
    - NET_ADMIN # network events
    - NET_RAW # network events
    - IPC_LOCK # eBPF
    - FOWNER # file hashing
    - SYS_PTRACE # eBPF
    - KILL # forensics

I appreciate that comments are added next to each capability, but many of them still leave me with questions. For example, why is KILL needed for forensics? Is NET_ADMIN required, or is there another capability that fits the needs of wiz-sensor without granting more access than necessary? I looked into SYS_ADMIN a bit and learned that it is required to call setns (https://man7.org/linux/man-pages/man2/setns.2.html). It would help to document the syscall(s) needed and why wiz-sensor needs to make these calls.

Can we include more detail in these required capabilities?

bump wiz broker helm chart from `0.2.1` to `0.4.0` failed

I encountered an error when attempting to upgrade the helm chart to version 0.4.0.
│ Error: template: wiz-broker/templates/wiz-broker-deployment.yaml:46:18: executing "wiz-broker/templates/wiz-broker-deployment.yaml" at <.Values.wizConnector.secret.annotations>: nil pointer evaluating interface {}.annotations

I'm not sure what's missing from the Helm chart or values file that's causing this error, as I could not find any information on previous versions of the chart in the repository; only the latest version seems to be available.

Is there any documentation or release notes available for the helm chart that might provide more information on previous versions? Alternatively, is there any way to obtain the previous version of the helm chart from another source?
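
For context, the nil-pointer error points at .Values.wizConnector.secret.annotations; a hedged workaround sketch is to define that key explicitly in the values file (whether this is the intended usage in 0.4.0 is an assumption):

wizConnector:
  secret:
    annotations: {}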

[wiz-kubernetes-connector] Possibility to add custom CA

Hi all, we are deploying wiz-kubernetes-connector in a fairly air-tight environment with SSL inspection in place, and currently there is no option to add a custom trusted CA to the containers.

Would it be possible to implement such an option?

Error thrown in "wiz-kubernetes-connector-create-connector" job:

{"level":"error","time":"2024-02-27T15:15:39.430808858Z","msg":"error posting token request to url=https://auth.app.wiz.io/oauth/token, status=, resp=","error":"Post \"https://auth.app.wiz.io/oauth/token\": tls: failed to verify certificate: x509: certificate signed by unknown authority"}
Error: failed getting broker api client: failed to k8s connector api client: failed to get wiz rpc client: failed to authenticate with wiz: failed authenticating for wiz gql api: failed authenticating with credentials: error posting token request: Post "https://auth.app.wiz.io/oauth/token": tls: failed to verify certificate: x509: certificate signed by unknown authority

Helm chart version 2.4.8, appVersion 2.4.
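
As a hedged interim sketch only: if the connector charts honor the customVolumes/customVolumeMounts values quoted in another issue on this page, a corporate CA bundle could potentially be mounted into the system trust path. Both the values and the mount path are assumptions and may not be picked up by the images:

customVolumes:
  - name: corporate-ca
    configMap:
      name: corporate-ca-bundle   # pre-created ConfigMap holding the CA certificate
customVolumeMounts:
  - name: corporate-ca
    mountPath: /etc/ssl/certs/corporate-ca.crt
    subPath: ca.crt
    readOnly: true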

Thanks!
