
NeuVector Helm charts

A collection of Helm charts for deploying the NeuVector product in Kubernetes, Rancher, and OpenShift clusters.

Installing charts

Helm Charts

This repository contains three Helm charts.

Chart    Description
core     Deploy NeuVector container security core services.
crd      Deploy CRD services before installing the NeuVector container security platform.
monitor  Deploy monitoring services, such as the Prometheus exporter.

IMPORTANT - Each chart has a set of configuration values, especially the 'core' chart. Review the Helm chart configuration values in the README of the core chart and make any required changes to the values.yaml file for your deployment.
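For instance, rather than passing many --set flags, the overrides can be collected in a custom values file and passed at install time. This is only a hedged sketch, assuming the repo has been added as shown in the next section; the file name is arbitrary, and registry and tag are just two of the options documented in the core chart README.

cat > my-values.yaml <<EOF
registry: docker.io
tag: 5.0.2
EOF

helm install neuvector neuvector/core --namespace neuvector --create-namespace -f my-values.yaml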

Adding chart repo

helm repo add neuvector https://neuvector.github.io/neuvector-helm/
helm search repo neuvector/core
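If the repository was added earlier, refresh the local chart cache before searching so the latest published versions are visible:

helm repo update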

Versioning

Helm charts for the officially released product are published from the release branch of the repository. The main branch is used for charts of the product in development. Typically, the charts in the main branch are published with an alpha, beta, or rc tag; they can be discovered with the --devel option.

$ helm search repo neuvector/core -l
NAME          	CHART VERSION	APP VERSION	DESCRIPTION
neuvector/core	2.2.2       	5.0.2      	Helm chart for NeuVector's core services
neuvector/core	2.2.1        	5.0.1      	Helm chart for NeuVector's core services
neuvector/core	2.2.0        	5.0.0      	Helm chart for NeuVector's core services
neuvector/core	1.9.2        	4.4.4-s2   	Helm chart for NeuVector's core services
neuvector/core	1.9.1        	4.4.4      	Helm chart for NeuVector's core services
...
...

$ helm search repo neuvector/core --devel
NAME            	CHART VERSION	APP VERSION	DESCRIPTION
neuvector/core	2.2.0-b1     	5.0.0-b1   	Helm chart for NeuVector's core services
neuvector/core	1.9.2        	4.4.4-s2   	Helm chart for NeuVector's core services
neuvector/core	1.9.1        	4.4.4      	Helm chart for NeuVector's core services
neuvector/core	1.9.0        	4.4.4      	Helm chart for NeuVector's core services
neuvector/core	1.8.9        	4.4.3      	Helm chart for NeuVector's core services
...
...
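To install one of these pre-release charts, pass the --devel flag (and optionally an explicit --version) to helm install as well; the version below is taken from the listing above:

helm install neuvector neuvector/core --namespace neuvector --create-namespace --devel --version 2.2.0-b1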

Deploy in Kubernetes

To install the chart with the release name neuvector:

  • Create the NeuVector namespace. You can use a namespace name other than "neuvector".
kubectl create namespace neuvector
  • Label the NeuVector namespace with the privileged profile when deploying on a PSA-enabled cluster.
kubectl label namespace neuvector "pod-security.kubernetes.io/enforce=privileged"
  • Install the chart with the release name neuvector:
helm install neuvector --namespace neuvector --create-namespace neuvector/core
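After the install completes, a quick way to confirm the deployment is to watch the pods come up in the namespace:

kubectl get pods -n neuvector -w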

You can find a list of all config options in the README of the core chart.

Deploy in Rancher by SUSE

You can find instructions for deploying NeuVector from Rancher charts here: https://open-docs.neuvector.com/deploying/rancher

Deploy in RedHat OpenShift

  • Create a new project.
oc new-project neuvector
  • Helm chart version 2.0.0 and above adds the privileged SCC to the service account specified in values.yaml on new Helm installs on OpenShift 4.x. When upgrading the NeuVector chart from a previous version to 2.0.0, delete the existing privileged SCC role binding before upgrading:
oc delete rolebinding -n neuvector system:openshift:scc:privileged
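If you are unsure whether that role binding exists in your cluster, you can check for it first:

oc get rolebinding -n neuvector system:openshift:scc:privileged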

To install the chart with the release name neuvector:

helm install neuvector --namespace neuvector neuvector/core --set openshift=true,crio.enabled=true

Rolling upgrade

helm upgrade neuvector --set tag=5.0.2 neuvector/core
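If the release was installed into a dedicated namespace with custom values, a more explicit form of the same upgrade (using standard Helm flags) might look like this sketch:

helm upgrade neuvector neuvector/core --namespace neuvector --reuse-values --set tag=5.0.2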

Uninstalling the Chart

To uninstall/delete the neuvector deployment:

helm delete neuvector

The command removes all the Kubernetes components associated with the chart and deletes the release.

Using private registry

If you are using a private registry, you need to pull the NeuVector images of the specified version into your own registry and add the registry name when installing the chart.

helm install neuvector --namespace neuvector neuvector/core --set registry=your-private-registry

If your registry needs authentication, create a secret with the authentication information:

kubectl create secret docker-registry regsecret -n neuvector --docker-server=https://your-private-registry/ --docker-username=your-name --docker-password=your-password --docker-email=your-email

or for OpenShift:

oc create secret docker-registry regsecret -n neuvector --docker-server=https://your-private-registry/ --docker-username=your-name --docker-password=your-password --docker-email=your-email

And install the helm chart with at least these values:

helm install neuvector --namespace neuvector neuvector/core --set imagePullSecrets=regsecret,registry=your-private-registry

To keep the vulnerability database up to date, create a script and run it as a cron job to periodically pull the updater and scanner images into your own registry.

$ docker login docker.io
$ docker pull docker.io/neuvector/updater
$ docker logout docker.io

$ oc login -u <user_name>
# this user_name is the one used when you installed NeuVector

$ docker login -u <user_name> -p `oc whoami -t` docker-registry.default.svc:5000
$ docker tag docker.io/neuvector/updater docker-registry.default.svc:5000/neuvector/updater
$ docker push docker-registry.default.svc:5000/neuvector/updater
$ docker logout docker-registry.default.svc:5000
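As a sketch of how the above might be automated, the commands can be wrapped in a small script and scheduled with cron; the script name, registry address, image tags, and schedule below are illustrative only:

#!/bin/sh
# sync-neuvector-images.sh - pull the latest scanner and updater images and push them to a private registry
set -e
PRIVATE_REG=your-private-registry
for img in updater scanner; do
    docker pull docker.io/neuvector/$img:latest
    docker tag docker.io/neuvector/$img:latest $PRIVATE_REG/neuvector/$img:latest
    docker push $PRIVATE_REG/neuvector/$img:latest
done

# example crontab entry, running the sync daily at 02:00:
# 0 2 * * * /usr/local/bin/sync-neuvector-images.sh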

Migration

If you previously installed the charts directly from the source, you can, after adding the Helm repo, upgrade the current installation by using the same chart name:

helm upgrade my-release neuvector/core --namespace neuvector --set tag=4.1.0


neuvector-helm's Issues

Support for creating CRD resources outside of chart

It would be nice if it were configurable to NOT deploy the CRD resources inside the Helm chart.
This would make it possible to remove/purge the Helm release without losing all resources of the CRD types (e.g. neuvectorrules).

Unable to get neuvector up and running in a K3D cluster

Hi,

I have a K3D cluster, up and running with Ingress-Nginx and Calico. Ingress is working with an nginx test pod. After installing NeuVector, everything is running, but the service isn't working (also not with a port-forward), and after some time the enforcer pods restart with the error "protocol not supported". Here is a log from the enforcer pod:

2022-09-18T12:22:13|MON|/usr/local/bin/monitor starts, pid=4904
net.core.somaxconn = 1024
net.unix.max_dgram_qlen = 64
Check TC kernel module ...
module act_mirred not find.
module act_pedit not find.
2022-09-18T12:22:13|MON|Start dp, pid=4937
2022-09-18T12:22:13|MON|Start agent, pid=4939
1970-01-01T00:00:00|DEBU||dpi_dlp_init: enter
1970-01-01T00:00:00|DEBU||dpi_dlp_register_options: enter
1970-01-01T00:00:00|DEBU||net_run: enter
1970-01-01T00:00:00|DEBU|cmd|dp_ctrl_loop: enter
2022-09-18T12:22:14|DEBU|dp0|dpi_frag_init: enter
2022-09-18T12:22:14|DEBU|dp0|dpi_session_init: enter
2022-09-18T12:22:14|DEBU|dlp|dp_bld_dlp_thr: dp bld_dlp thread starts
2022-09-18T12:22:14|DEBU|dp0|dp_data_thr: dp thread starts
2022-09-18T12:22:14.846|INFO|AGT|main.main: START - version=v5.0.0
2022-09-18T12:22:14.851|INFO|AGT|main.main: - bind=192.168.249.105
2022-09-18T12:22:14.861|INFO|AGT|system.NewSystemTools: cgroup v2
2022-09-18T12:22:14.861|INFO|AGT|container.Connect: - endpoint=
2022-09-18T12:22:14.913|WARN|AGT|container.parseEndpointWithFallbackProtocol: no error unix /run/containerd/containerd.sock.
2022-09-18T12:22:14.917|INFO|AGT|container.containerdConnect: cri - version=&VersionResponse{Version:0.1.0,RuntimeName:containerd,RuntimeVersion:v1.6.6-k3s1,RuntimeApiVersion:v1alpha2,}
2022-09-18T12:22:14.927|INFO|AGT|container.containerdConnect: containerd connected - endpoint=/run/containerd/containerd.sock version={Version:v1.6.6-k3s1 Revision:}
2022-09-18T12:22:14.986|INFO|AGT|container.(*containerdDriver).getSpecs: Failed to get container task - error=no running task found: task 063c2c3d5dca9ffa44b47ecbd88c8f48f9c0b96a3865a8a29ec99321f3ebbd3a not found: not found id=063c2c3d5dca9ffa44b47ecbd88c8f48f9c0b96a3865a8a29ec99321f3ebbd3a
2022-09-18T12:22:15.009|INFO|AGT|container.(*containerdDriver).getSpecs: Failed to get container task - error=no running task found: task 0d5ef7c1bb9b50d1df48e10bbe4f90f1de8cca198915cb576dbe4f7df17e7b52 not found: not found id=0d5ef7c1bb9b50d1df48e10bbe4f90f1de8cca198915cb576dbe4f7df17e7b52
2022-09-18T12:22:15.053|INFO|AGT|container.(*containerdDriver).getSpecs: Failed to get container task - error=no running task found: task 2c747e53070a212cee98c4b5b8275ea64a0d375b50d79e63ebabf3f3f7a52a6e not found: not found id=2c747e53070a212cee98c4b5b8275ea64a0d375b50d79e63ebabf3f3f7a52a6e
2022-09-18T12:22:15.058|INFO|AGT|container.(*containerdDriver).getSpecs: Failed to get container task - error=no running task found: task 318398e38fafe058a04d64651c309e34157dd497f5be0d3442e9c6878ab40919 not found: not found id=318398e38fafe058a04d64651c309e34157dd497f5be0d3442e9c6878ab40919
2022-09-18T12:22:15.111|INFO|AGT|container.(*containerdDriver).getSpecs: Failed to get container task - error=no running task found: task 5ee2fec1e00ca5753042f306165560d7cbf5b05f106287a1dbf03318566b838e not found: not found id=5ee2fec1e00ca5753042f306165560d7cbf5b05f106287a1dbf03318566b838e
2022-09-18T12:22:15.248|INFO|AGT|container.(*containerdDriver).getSpecs: Failed to get container task - error=no running task found: task b5760bf293539fd06ba6bcb4af581a4dfc4274f05f3f95c71a9df5a4c1ccc1b6 not found: not found id=b5760bf293539fd06ba6bcb4af581a4dfc4274f05f3f95c71a9df5a4c1ccc1b6
2022-09-18T12:22:15.254|INFO|AGT|container.(*containerdDriver).getSpecs: Failed to get container task - error=no running task found: task bb4ade9b24330f6f7ed090e211c91a94067bc6f4e831d6d85c8f1b49cf68ec3f not found: not found id=bb4ade9b24330f6f7ed090e211c91a94067bc6f4e831d6d85c8f1b49cf68ec3f
2022-09-18T12:22:15.349|INFO|AGT|container.(*containerdDriver).getSpecs: Failed to get container task - error=no running task found: task f1bc754f2ad03ded064ce38ada72ee6f69608ad86ee89efda9db8d15838b7f7c not found: not found id=f1bc754f2ad03ded064ce38ada72ee6f69608ad86ee89efda9db8d15838b7f7c
2022-09-18T12:22:15.546|ERRO|AGT|orchestration.getVersion: - code=401 tag=k8s
2022-09-18T12:22:15.57 |ERRO|AGT|orchestration.getVersion: - code=401 tag=oc
2022-09-18T12:22:15.582|ERRO|AGT|orchestration.getVersion: - code=403 tag=oc
2022-09-18T12:22:15.585|INFO|AGT|workerlet.NewWalkerTask: - showDebug=false
2022-09-18T12:22:15.586|INFO|AGT|main.main: Container socket connected - endpoint= runtime=containerd
2022-09-18T12:22:15.586|INFO|AGT|main.main: - k8s=1.24.4+k3s1 oc=
2022-09-18T12:22:15.587|INFO|AGT|main.main: PROC: - shield=true
2022-09-18T12:22:15.632|ERRO|AGT|system.(*SystemTools).NsRunBinary: - error=exit status 255 msg=
2022-09-18T12:22:15.635|ERRO|AGT|main.getHostAddrs: Error getting host IP - error=exit status 255
2022-09-18T12:22:15.635|INFO|AGT|main.parseHostAddrs: - maxMTU=0
2022-09-18T12:22:15.724|ERRO|AGT|container.(*containerdDriver).GetContainer: Failed to get container image config - error=content digest sha256:c2280d2f5f56cf9c9a01bb64b2db4651e35efd6d62a54dcfc12049fe6449c5e4: not found id=9323a149c468888c7e502f2805cb2662ad36e4ec740d8ac4a09d0c2557bb345b
2022-09-18T12:22:15.727|INFO|AGT|main.main: - hostIPs={}
2022-09-18T12:22:15.727|INFO|AGT|main.main: - host={ID:k3d-Test-Wilco-server-0: Name:k3d-Test-Wilco-server-0 Runtime:containerd Platform:Kubernetes Flavor: Network:Default RuntimeVer:v1.6.6-k3s1 RuntimeAPIVer:v1.6.6-k3s1 OS:K3s dev Kernel:5.10.104-linuxkit CPUs:5 Memory:8232370176 Ifaces:map[] TunnelIP:[] CapDockerBench:false CapKubeBench:true StorageDriver:overlayfs CgroupVersion:2}
2022-09-18T12:22:15.731|INFO|AGT|main.main: - agent={CLUSDevice:{ID:ef7edba5b59c68b9ab97a03ff603c96bca6d46da7dec9f73f556031abf91ed51 Name:k8s_neuvector-enforcer-pod_neuvector-enforcer-pod-d287b_neuvector_2b1bd754-5164-465a-9303-a58d5b682b8b_9 SelfHostname: HostName:k3d-Test-Wilco-server-0 HostID:k3d-Test-Wilco-server-0: Domain:neuvector NetworkMode:/proc/659/ns/net PidMode:host Ver:v5.0.0 Labels:map[io.cri-containerd.image:managed io.cri-containerd.kind:container io.kubernetes.container.name:neuvector-enforcer-pod io.kubernetes.pod.name:neuvector-enforcer-pod-d287b io.kubernetes.pod.namespace:neuvector io.kubernetes.pod.uid:2b1bd754-5164-465a-9303-a58d5b682b8b name:enforcer neuvector.image:neuvector/enforcer neuvector.rev:568356c neuvector.role:enforcer release:5.0.0 vendor:NeuVector Inc. version:5.0.0] CreatedAt:2022-09-18 12:22:13.500841876 +0000 UTC StartedAt:2022-09-18 12:22:13.500841876 +0000 UTC JoinedAt:0001-01-01 00:00:00 +0000 UTC MemoryLimit:0 CPUs: ClusterIP: RPCServerPort:0 Pid:4904 Ifaces:map[eth0:[{IPNet:{IP:192.168.249.105 Mask:ffffffff} Gateway: Scope:global NetworkID: NetworkName:}]]}}
2022-09-18T12:22:15.733|INFO|AGT|main.main: - jumboframe=false pipeType=no_tc
2022-09-18T12:22:15.734|INFO|AGT|cluster.FillClusterAddrs: - advertise=192.168.249.105 join=neuvector-svc-controller.neuvector
2022-09-18T12:22:15.743|INFO|AGT|cluster.(*consulMethod).Start: - config=&{ID:ef7edba5b59c68b9ab97a03ff603c96bca6d46da7dec9f73f556031abf91ed51 Server:false Debug:false Ifaces:map[eth0:[{IPNet:{IP:192.168.249.105 Mask:ffffffff} Gateway: Scope:global NetworkID: NetworkName:}]] JoinAddr:neuvector-svc-controller.neuvector joinAddrList:[192.168.102.48 192.168.249.107 192.168.181.241] BindAddr:192.168.249.105 AdvertiseAddr:192.168.249.105 DataCenter:neuvector RPCPort:0 LANPort:0 WANPort:0 EnableDebug:false} recover=false
2022-09-18T12:22:15.748|INFO|AGT|cluster.isBootstrap: - node-id=fc0bd60f-8c8b-2224-dd71-486534c8c536
2022-09-18T12:22:15.75 |INFO|AGT|cluster.(*consulMethod).Start: Consul start - args=[agent -datacenter neuvector -data-dir /tmp/neuvector -config-file /tmp/consul.json -bind 192.168.249.105 -advertise 192.168.249.105 -node 192.168.249.105 -node-id fc0bd60f-8c8b-2224-dd71-486534c8c536 -raft-protocol 3 -retry-join 192.168.102.48 -retry-join 192.168.249.107 -retry-join 192.168.181.241]
==> Starting Consul agent...
           Version: '1.11.3'
           Node ID: 'fc0bd60f-8c8b-2224-dd71-486534c8c536'
         Node name: '192.168.249.105'
        Datacenter: 'neuvector' (Segment: '')
            Server: false (Bootstrap: false)
       Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, gRPC: -1, DNS: -1)
      Cluster Addr: 192.168.249.105 (LAN: 18301, WAN: -1)
           Encrypt: Gossip: true, TLS-Outgoing: true, TLS-Incoming: true, Auto-Encrypt-TLS: false

==> Log data will now stream in as it occurs:

2022-09-18T12:22:17.107Z [WARN]  agent: Node name "192.168.249.105" will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes.
2022-09-18T12:22:17.178Z [WARN]  agent.auto_config: Node name "192.168.249.105" will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes.
2022-09-18T12:22:17.222Z [INFO]  agent.client.serf.lan: serf: EventMemberJoin: 192.168.249.105 192.168.249.105
2022-09-18T12:22:17.226Z [INFO]  agent.router: Initializing LAN area manager
2022-09-18T12:22:17.233Z [INFO]  agent: Starting server: address=127.0.0.1:8500 network=tcp protocol=http
2022-09-18T12:22:17.234Z [WARN]  agent: DEPRECATED Backwards compatibility with pre-1.9 metrics enabled. These metrics will be removed in a future version of Consul. Set `telemetry { disable_compat_1.9 = true }` to disable them.
2022-09-18T12:22:17.236Z [INFO]  agent: Retry join is supported for the following discovery methods: cluster=LAN discovery_methods="aliyun aws azure digitalocean gce k8s linode mdns os packet scaleway softlayer tencentcloud triton vsphere"
2022-09-18T12:22:17.236Z [INFO]  agent: Joining cluster...: cluster=LAN
2022-09-18T12:22:17.236Z [INFO]  agent: (LAN) joining: lan_addresses=[192.168.102.48, 192.168.249.107, 192.168.181.241]
2022-09-18T12:22:17.251Z [INFO]  agent: started state syncer
2022-09-18T12:22:17.252Z [INFO]  agent: Consul agent running!
2022-09-18T12:22:17.252Z [WARN]  agent.router.manager: No servers available
2022-09-18T12:22:17.253Z [ERROR] agent.anti_entropy: failed to sync remote state: error="No known Consul servers"
2022-09-18T12:22:17.262Z [INFO]  agent.client.serf.lan: serf: EventMemberJoin: 192.168.102.51 192.168.102.51
2022-09-18T12:22:17.264Z [INFO]  agent.client.serf.lan: serf: EventMemberJoin: 192.168.102.48 192.168.102.48
2022-09-18T12:22:17.265Z [INFO]  agent.client.serf.lan: serf: EventMemberJoin: 192.168.181.244 192.168.181.244
2022-09-18T12:22:17.266Z [INFO]  agent.client.serf.lan: serf: EventMemberJoin: 192.168.181.241 192.168.181.241
2022-09-18T12:22:17.267Z [INFO]  agent.client.serf.lan: serf: EventMemberJoin: 192.168.249.107 192.168.249.107
2022-09-18T12:22:17.277Z [WARN]  agent.client.memberlist.lan: memberlist: Refuting a dead message (from: 192.168.249.105)
2022-09-18T12:22:17.284Z [INFO]  agent.client: adding server: server="192.168.102.48 (Addr: tcp/192.168.102.48:18300) (DC: neuvector)"
2022-09-18T12:22:17.300Z [INFO]  agent.client: adding server: server="192.168.181.241 (Addr: tcp/192.168.181.241:18300) (DC: neuvector)"
2022-09-18T12:22:17.310Z [INFO]  agent.client: adding server: server="192.168.249.107 (Addr: tcp/192.168.249.107:18300) (DC: neuvector)"
2022-09-18T12:22:17.311Z [WARN]  agent: grpc: addrConn.createTransport failed to connect to {neuvector-192.168.102.48:18300 0 192.168.102.48 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.249.105:0->192.168.102.48:18300: operation was canceled". Reconnecting...
2022-09-18T12:22:17.323Z [WARN]  agent: grpc: addrConn.createTransport failed to connect to {neuvector-192.168.102.48:18300 0 192.168.102.48 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.249.105:0->192.168.102.48:18300: operation was canceled". Reconnecting...
2022-09-18T12:22:17.346Z [INFO]  agent: (LAN) joined: number_of_nodes=3
2022-09-18T12:22:17.346Z [INFO]  agent: Join cluster completed. Synced with initial agents: cluster=LAN num_agents=3
2022-09-18T12:22:17.999Z [INFO]  agent: Synced node info
2022-09-18T12:22:22.321Z [INFO]  agent.client.serf.lan: serf: EventMemberLeave: 192.168.181.244 192.168.181.244
2022-09-18T12:22:25.874|INFO|AGT|cluster.StartCluster: - lead=192.168.249.107
2022-09-18T12:22:25.904|INFO|AGT|main.waitForAdmission: Node admission is enabled
2022-09-18T12:22:25.904|INFO|AGT|main.waitForAdmission: Sending join request
2022-09-18T12:22:25.977|INFO|AGT|cluster.newGRPCClientTCP: Expected server name - cn=NeuVector
2022-09-18T12:22:26.145|INFO|AGT|main.waitForAdmission: Agent join request accepted
2022-09-18T12:22:26.158|INFO|AGT|main.main: Runtime storage driver - name=overlayfs
2022-09-18T12:22:26.159|INFO|AGT|dp.Open:
2022-09-18T12:22:26.184|INFO|AGT|probe.New:
2022-09-18T12:22:26.186|ERRO|AGT|probe.(*Probe).openSocketMonitor: Failed to create socket for inet - error=protocol not supported
2022-09-18T12:22:26.186|ERRO|AGT|probe.(*Probe).cbOpenNetlinkSockets: Unable to open diagnostic netlink socket - error=protocol not supported
2022-09-18T12:22:26.187|ERRO|AGT|probe.NewFileAccessCtrl: FA: Initialize - error=function not implemented
2022-09-18T12:22:26.187|INFO|AGT|probe.New: PROC: Process control is not supported
2022-09-18T12:22:26.23 |INFO|AGT|main.(*Bench).RerunKube:
2022-09-18T12:22:26.251|ERRO|AGT|system.(*SystemTools).CheckHostProgram: Done - error=exit status 255 msg=
2022-09-18T12:22:26.252|ERRO|AGT|main.(*Bench).checkRequiredHostProgs: - error=exit status 255 program=grep
2022-09-18T12:22:26.264|ERRO|AGT|system.(*SystemTools).CheckHostProgram: Done - error=exit status 255 msg=
2022-09-18T12:22:26.265|ERRO|AGT|main.(*Bench).checkRequiredHostProgs: - error=exit status 255 program=grep
2022-09-18T12:22:26.265|ERRO|AGT|main.(*Bench).RerunKube: Cannot run master node CIS benchmark - error=grep command not found.

2022-09-18T12:22:26.297|ERRO|AGT|main.(*Bench).RerunKube: Cannot run worker node CIS benchmark - error=grep command not found.

2022-09-18T12:22:26.323|INFO|AGT|probe.(*Probe).netlinkProcMonitor: PROC: Start real-time process listener
2022-09-18T12:22:26.324|ERRO|AGT|fsmon.NewFileWatcher: Open fanotify fail - error=function not implemented
2022-09-18T12:22:26.325|ERRO|AGT|main.main: Failed to open file monitor! - error=function not implemented
2022-09-18T12:22:26|MON|Process agent exit status 254, pid=4939
2022-09-18T12:22:26|MON|Process agent exit with non-recoverable return code. Monitor Exit!!
2022-09-18T12:22:26|MON|Kill dp with signal 15, pid=4937
2022-09-18T12:22:26|DEBU|dp0|dp_data_thr: dp thread exits
Leave the cluster
2022-09-18T12:22:27.239Z [INFO]  agent.client: client starting leave
2022-09-18T12:22:27.521Z [INFO]  agent.client.serf.lan: serf: EventMemberLeave: 192.168.249.105 192.168.249.105
2022-09-18T12:22:30.826Z [INFO]  agent: Requesting shutdown
2022-09-18T12:22:30.829Z [INFO]  agent.client: shutting down client
2022-09-18T12:22:30.836Z [INFO]  agent: consul client down
2022-09-18T12:22:30.836Z [INFO]  agent: shutdown complete
2022-09-18T12:22:30.837Z [INFO]  agent: Stopping server: address=127.0.0.1:8500 network=tcp protocol=http
2022-09-18T12:22:30.839Z [INFO]  agent: Waiting for endpoints to shut down
2022-09-18T12:22:30.839Z [INFO]  agent: Endpoints down
2022-09-18T12:22:30.840Z [INFO]  agent: Exit code: code=0
Graceful leave complete
2022-09-18T12:22:30|MON|Clean up.
2022-09-18T12:22:57|MON|/usr/local/bin/monitor starts, pid=5507
net.core.somaxconn = 1024
net.unix.max_dgram_qlen = 64
Check TC kernel module ...
module act_mirred not find.
module act_pedit not find.
2022-09-18T12:22:57|MON|Start dp, pid=5541
2022-09-18T12:22:57|MON|Start agent, pid=5544
1970-01-01T00:00:00|DEBU||dpi_dlp_init: enter
1970-01-01T00:00:00|DEBU||dpi_dlp_register_options: enter
1970-01-01T00:00:00|DEBU||net_run: enter
1970-01-01T00:00:00|DEBU|cmd|dp_ctrl_loop: enter
1970-01-01T00:00:00|DEBU|dlp|dp_bld_dlp_thr: dp bld_dlp thread starts
2022-09-18T12:22:58|DEBU|dp0|dpi_frag_init: enter
2022-09-18T12:22:58|DEBU|dp0|dpi_session_init: enter
2022-09-18T12:22:58|DEBU|dp0|dp_data_thr: dp thread starts
2022-09-18T12:22:58.509|INFO|AGT|main.main: START - version=v5.0.0
2022-09-18T12:22:58.515|INFO|AGT|main.main: - bind=192.168.249.105
2022-09-18T12:22:58.532|INFO|AGT|system.NewSystemTools: cgroup v2
2022-09-18T12:22:58.532|INFO|AGT|container.Connect: - endpoint=
2022-09-18T12:22:58.587|WARN|AGT|container.parseEndpointWithFallbackProtocol: no error unix /run/containerd/containerd.sock.
2022-09-18T12:22:58.606|INFO|AGT|container.containerdConnect: cri - version=&VersionResponse{Version:0.1.0,RuntimeName:containerd,RuntimeVersion:v1.6.6-k3s1,RuntimeApiVersion:v1alpha2,}
2022-09-18T12:22:58.648|INFO|AGT|container.containerdConnect: containerd connected - endpoint=/run/containerd/containerd.sock version={Version:v1.6.6-k3s1 Revision:}
2022-09-18T12:22:58.688|INFO|AGT|container.(*containerdDriver).getSpecs: Failed to get container task - error=no running task found: task 063c2c3d5dca9ffa44b47ecbd88c8f48f9c0b96a3865a8a29ec99321f3ebbd3a not found: not found id=063c2c3d5dca9ffa44b47ecbd88c8f48f9c0b96a3865a8a29ec99321f3ebbd3a
2022-09-18T12:22:58.704|INFO|AGT|container.(*containerdDriver).getSpecs: Failed to get container task - error=no running task found: task 0d5ef7c1bb9b50d1df48e10bbe4f90f1de8cca198915cb576dbe4f7df17e7b52 not found: not found id=0d5ef7c1bb9b50d1df48e10bbe4f90f1de8cca198915cb576dbe4f7df17e7b52
2022-09-18T12:22:58.804|INFO|AGT|container.(*containerdDriver).getSpecs: Failed to get container task - error=no running task found: task 2c747e53070a212cee98c4b5b8275ea64a0d375b50d79e63ebabf3f3f7a52a6e not found: not found id=2c747e53070a212cee98c4b5b8275ea64a0d375b50d79e63ebabf3f3f7a52a6e
2022-09-18T12:22:58.814|INFO|AGT|container.(*containerdDriver).getSpecs: Failed to get container task - error=no running task found: task 318398e38fafe058a04d64651c309e34157dd497f5be0d3442e9c6878ab40919 not found: not found id=318398e38fafe058a04d64651c309e34157dd497f5be0d3442e9c6878ab40919
2022-09-18T12:22:58.856|INFO|AGT|container.(*containerdDriver).getSpecs: Failed to get container task - error=no running task found: task 5ee2fec1e00ca5753042f306165560d7cbf5b05f106287a1dbf03318566b838e not found: not found id=5ee2fec1e00ca5753042f306165560d7cbf5b05f106287a1dbf03318566b838e
2022-09-18T12:22:58.996|INFO|AGT|container.(*containerdDriver).getSpecs: Failed to get container task - error=no running task found: task b5760bf293539fd06ba6bcb4af581a4dfc4274f05f3f95c71a9df5a4c1ccc1b6 not found: not found id=b5760bf293539fd06ba6bcb4af581a4dfc4274f05f3f95c71a9df5a4c1ccc1b6
2022-09-18T12:22:59    |INFO|AGT|container.(*containerdDriver).getSpecs: Failed to get container task - error=no running task found: task bb4ade9b24330f6f7ed090e211c91a94067bc6f4e831d6d85c8f1b49cf68ec3f not found: not found id=bb4ade9b24330f6f7ed090e211c91a94067bc6f4e831d6d85c8f1b49cf68ec3f
2022-09-18T12:22:59.142|INFO|AGT|container.(*containerdDriver).getSpecs: Failed to get container task - error=no running task found: task f1bc754f2ad03ded064ce38ada72ee6f69608ad86ee89efda9db8d15838b7f7c not found: not found id=f1bc754f2ad03ded064ce38ada72ee6f69608ad86ee89efda9db8d15838b7f7c
2022-09-18T12:22:59.571|ERRO|AGT|orchestration.getVersion: - code=401 tag=k8s
2022-09-18T12:22:59.623|ERRO|AGT|orchestration.getVersion: - code=401 tag=oc
2022-09-18T12:22:59.636|ERRO|AGT|orchestration.getVersion: - code=403 tag=oc
2022-09-18T12:22:59.64 |INFO|AGT|workerlet.NewWalkerTask: - showDebug=false
2022-09-18T12:22:59.641|INFO|AGT|main.main: Container socket connected - endpoint= runtime=containerd
2022-09-18T12:22:59.641|INFO|AGT|main.main: - k8s=1.24.4+k3s1 oc=
2022-09-18T12:22:59.642|INFO|AGT|main.main: PROC: - shield=true
2022-09-18T12:22:59.693|ERRO|AGT|system.(*SystemTools).NsRunBinary: - error=exit status 255 msg=
2022-09-18T12:22:59.694|ERRO|AGT|main.getHostAddrs: Error getting host IP - error=exit status 255
2022-09-18T12:22:59.694|INFO|AGT|main.parseHostAddrs: - maxMTU=0
2022-09-18T12:22:59.924|ERRO|AGT|container.(*containerdDriver).GetContainer: Failed to get container image config - error=content digest sha256:c2280d2f5f56cf9c9a01bb64b2db4651e35efd6d62a54dcfc12049fe6449c5e4: not found id=9323a149c468888c7e502f2805cb2662ad36e4ec740d8ac4a09d0c2557bb345b
2022-09-18T12:22:59.926|INFO|AGT|main.main: - hostIPs={}
2022-09-18T12:22:59.935|INFO|AGT|main.main: - host={ID:k3d-Test-Wilco-server-0: Name:k3d-Test-Wilco-server-0 Runtime:containerd Platform:Kubernetes Flavor: Network:Default RuntimeVer:v1.6.6-k3s1 RuntimeAPIVer:v1.6.6-k3s1 OS:K3s dev Kernel:5.10.104-linuxkit CPUs:5 Memory:8232370176 Ifaces:map[] TunnelIP:[] CapDockerBench:false CapKubeBench:true StorageDriver:overlayfs CgroupVersion:2}
2022-09-18T12:22:59.936|INFO|AGT|main.main: - agent={CLUSDevice:{ID:ff1d7655914d1696675842fff7cbe47e1436540f2039958e76ec9112b315b886 Name:k8s_neuvector-enforcer-pod_neuvector-enforcer-pod-d287b_neuvector_2b1bd754-5164-465a-9303-a58d5b682b8b_10 SelfHostname: HostName:k3d-Test-Wilco-server-0 HostID:k3d-Test-Wilco-server-0: Domain:neuvector NetworkMode:/proc/659/ns/net PidMode:host Ver:v5.0.0 Labels:map[io.cri-containerd.image:managed io.cri-containerd.kind:container io.kubernetes.container.name:neuvector-enforcer-pod io.kubernetes.pod.name:neuvector-enforcer-pod-d287b io.kubernetes.pod.namespace:neuvector io.kubernetes.pod.uid:2b1bd754-5164-465a-9303-a58d5b682b8b name:enforcer neuvector.image:neuvector/enforcer neuvector.rev:568356c neuvector.role:enforcer release:5.0.0 vendor:NeuVector Inc. version:5.0.0] CreatedAt:2022-09-18 12:22:57.505373591 +0000 UTC StartedAt:2022-09-18 12:22:57.505373591 +0000 UTC JoinedAt:0001-01-01 00:00:00 +0000 UTC MemoryLimit:0 CPUs: ClusterIP: RPCServerPort:0 Pid:5507 Ifaces:map[eth0:[{IPNet:{IP:192.168.249.105 Mask:ffffffff} Gateway: Scope:global NetworkID: NetworkName:}]]}}
2022-09-18T12:22:59.944|INFO|AGT|main.main: - jumboframe=false pipeType=no_tc
2022-09-18T12:22:59.947|INFO|AGT|cluster.FillClusterAddrs: - advertise=192.168.249.105 join=neuvector-svc-controller.neuvector
2022-09-18T12:22:59.958|INFO|AGT|cluster.(*consulMethod).Start: - config=&{ID:ff1d7655914d1696675842fff7cbe47e1436540f2039958e76ec9112b315b886 Server:false Debug:false Ifaces:map[eth0:[{IPNet:{IP:192.168.249.105 Mask:ffffffff} Gateway: Scope:global NetworkID: NetworkName:}]] JoinAddr:neuvector-svc-controller.neuvector joinAddrList:[192.168.249.107 192.168.181.241 192.168.102.48] BindAddr:192.168.249.105 AdvertiseAddr:192.168.249.105 DataCenter:neuvector RPCPort:0 LANPort:0 WANPort:0 EnableDebug:false} recover=false
2022-09-18T12:22:59.96 |INFO|AGT|cluster.isBootstrap: - node-id=fc0bd60f-8c8b-2224-dd71-486534c8c536
2022-09-18T12:22:59.961|INFO|AGT|cluster.(*consulMethod).Start: Consul start - args=[agent -datacenter neuvector -data-dir /tmp/neuvector -config-file /tmp/consul.json -bind 192.168.249.105 -advertise 192.168.249.105 -node 192.168.249.105 -node-id fc0bd60f-8c8b-2224-dd71-486534c8c536 -raft-protocol 3 -retry-join 192.168.249.107 -retry-join 192.168.181.241 -retry-join 192.168.102.48]
==> Starting Consul agent...
           Version: '1.11.3'
           Node ID: 'fc0bd60f-8c8b-2224-dd71-486534c8c536'
         Node name: '192.168.249.105'
        Datacenter: 'neuvector' (Segment: '')
            Server: false (Bootstrap: false)
       Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, gRPC: -1, DNS: -1)
      Cluster Addr: 192.168.249.105 (LAN: 18301, WAN: -1)
           Encrypt: Gossip: true, TLS-Outgoing: true, TLS-Incoming: true, Auto-Encrypt-TLS: false

==> Log data will now stream in as it occurs:

2022-09-18T12:23:01.484Z [WARN]  agent: Node name "192.168.249.105" will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes.
2022-09-18T12:23:01.563Z [WARN]  agent.auto_config: Node name "192.168.249.105" will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes.
2022-09-18T12:23:01.615Z [INFO]  agent.client.serf.lan: serf: EventMemberJoin: 192.168.249.105 192.168.249.105
2022-09-18T12:23:01.651Z [INFO]  agent.router: Initializing LAN area manager
2022-09-18T12:23:01.669Z [INFO]  agent: Starting server: address=127.0.0.1:8500 network=tcp protocol=http
2022-09-18T12:23:01.676Z [WARN]  agent: DEPRECATED Backwards compatibility with pre-1.9 metrics enabled. These metrics will be removed in a future version of Consul. Set `telemetry { disable_compat_1.9 = true }` to disable them.
2022-09-18T12:23:01.680Z [INFO]  agent: Retry join is supported for the following discovery methods: cluster=LAN discovery_methods="aliyun aws azure digitalocean gce k8s linode mdns os packet scaleway softlayer tencentcloud triton vsphere"
2022-09-18T12:23:01.680Z [INFO]  agent: Joining cluster...: cluster=LAN
2022-09-18T12:23:01.680Z [INFO]  agent: (LAN) joining: lan_addresses=[192.168.249.107, 192.168.181.241, 192.168.102.48]
2022-09-18T12:23:01.701Z [INFO]  agent: started state syncer
2022-09-18T12:23:01.701Z [INFO]  agent: Consul agent running!
2022-09-18T12:23:01.711Z [INFO]  agent.client.serf.lan: serf: EventMemberJoin: 192.168.102.48 192.168.102.48
2022-09-18T12:23:01.713Z [INFO]  agent.client: adding server: server="192.168.102.48 (Addr: tcp/192.168.102.48:18300) (DC: neuvector)"
2022-09-18T12:23:01.728Z [INFO]  agent.client.serf.lan: serf: EventMemberJoin: 192.168.181.241 192.168.181.241
2022-09-18T12:23:01.773Z [INFO]  agent.client.serf.lan: serf: EventMemberJoin: 192.168.249.107 192.168.249.107
2022-09-18T12:23:01.776Z [INFO]  agent.client.serf.lan: serf: EventMemberJoin: 192.168.181.244 192.168.181.244
2022-09-18T12:23:01.776Z [INFO]  agent.client.serf.lan: serf: EventMemberJoin: 192.168.102.51 192.168.102.51
2022-09-18T12:23:01.781Z [INFO]  agent.client: adding server: server="192.168.181.241 (Addr: tcp/192.168.181.241:18300) (DC: neuvector)"
2022-09-18T12:23:01.782Z [INFO]  agent.client: adding server: server="192.168.249.107 (Addr: tcp/192.168.249.107:18300) (DC: neuvector)"
2022-09-18T12:23:01.785Z [WARN]  agent: grpc: addrConn.createTransport failed to connect to {neuvector-192.168.102.48:18300 0 192.168.102.48 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.249.105:0->192.168.102.48:18300: operation was canceled". Reconnecting...
2022-09-18T12:23:01.838Z [INFO]  agent: (LAN) joined: number_of_nodes=3
2022-09-18T12:23:01.855Z [INFO]  agent: Join cluster completed. Synced with initial agents: cluster=LAN num_agents=3
2022-09-18T12:23:01.920Z [INFO]  agent: Synced node info
2022-09-18T12:23:03.739Z [INFO]  agent.client.serf.lan: serf: EventMemberLeave: 192.168.181.244 192.168.181.244
2022-09-18T12:23:09.986|INFO|AGT|cluster.StartCluster: - lead=192.168.249.107
2022-09-18T12:23:09.994|INFO|AGT|main.waitForAdmission: Node admission is enabled
2022-09-18T12:23:09.995|INFO|AGT|main.waitForAdmission: Sending join request
2022-09-18T12:23:10.023|INFO|AGT|cluster.newGRPCClientTCP: Expected server name - cn=NeuVector
2022-09-18T12:23:10.079|INFO|AGT|main.waitForAdmission: Agent join request accepted
2022-09-18T12:23:10.096|INFO|AGT|main.main: Runtime storage driver - name=overlayfs
2022-09-18T12:23:10.098|INFO|AGT|dp.Open:
2022-09-18T12:23:10.117|INFO|AGT|probe.New:
2022-09-18T12:23:10.118|ERRO|AGT|probe.(*Probe).openSocketMonitor: Failed to create socket for inet - error=protocol not supported
2022-09-18T12:23:10.119|ERRO|AGT|probe.(*Probe).cbOpenNetlinkSockets: Unable to open diagnostic netlink socket - error=protocol not supported
2022-09-18T12:23:10.119|ERRO|AGT|probe.NewFileAccessCtrl: FA: Initialize - error=function not implemented
2022-09-18T12:23:10.119|INFO|AGT|probe.New: PROC: Process control is not supported
2022-09-18T12:23:10.158|INFO|AGT|main.(*Bench).RerunKube:
2022-09-18T12:23:10.177|ERRO|AGT|system.(*SystemTools).CheckHostProgram: Done - error=exit status 255 msg=
2022-09-18T12:23:10.178|ERRO|AGT|main.(*Bench).checkRequiredHostProgs: - error=exit status 255 program=grep
2022-09-18T12:23:10.189|ERRO|AGT|system.(*SystemTools).CheckHostProgram: Done - error=exit status 255 msg=
2022-09-18T12:23:10.189|ERRO|AGT|main.(*Bench).checkRequiredHostProgs: - error=exit status 255 program=grep
2022-09-18T12:23:10.19 |ERRO|AGT|main.(*Bench).RerunKube: Cannot run master node CIS benchmark - error=grep command not found.

2022-09-18T12:23:10.203|ERRO|AGT|main.(*Bench).RerunKube: Cannot run worker node CIS benchmark - error=grep command not found.

2022-09-18T12:23:10.213|INFO|AGT|probe.(*Probe).netlinkProcMonitor: PROC: Start real-time process listener
2022-09-18T12:23:10.213|ERRO|AGT|fsmon.NewFileWatcher: Open fanotify fail - error=function not implemented
2022-09-18T12:23:10.214|ERRO|AGT|main.main: Failed to open file monitor! - error=function not implemented
2022-09-18T12:23:10|MON|Process agent exit status 254, pid=5544
2022-09-18T12:23:10|MON|Process agent exit with non-recoverable return code. Monitor Exit!!
2022-09-18T12:23:10|MON|Kill dp with signal 15, pid=5541
2022-09-18T12:23:10|DEBU|dp0|dp_data_thr: dp thread exits
Leave the cluster
2022-09-18T12:23:10.964Z [INFO]  agent.client: client starting leave
2022-09-18T12:23:11.218Z [INFO]  agent.client.serf.lan: serf: EventMemberLeave: 192.168.249.105 192.168.249.105
2022-09-18T12:23:14.420Z [INFO]  agent: Requesting shutdown
2022-09-18T12:23:14.421Z [INFO]  agent.client: shutting down client
2022-09-18T12:23:14.427Z [INFO]  agent: consul client down
2022-09-18T12:23:14.427Z [INFO]  agent: shutdown complete
2022-09-18T12:23:14.429Z [INFO]  agent: Stopping server: address=127.0.0.1:8500 network=tcp protocol=http
2022-09-18T12:23:14.430Z [INFO]  agent: Waiting for endpoints to shut down
2022-09-18T12:23:14.430Z [INFO]  agent: Endpoints down
2022-09-18T12:23:14.430Z [INFO]  agent: Exit code: code=0
Graceful leave complete
2022-09-18T12:23:14|MON|Clean up.

Wrong network rules learned

Hello,
I am currently testing NeuVector on a 1.23 CCE Kubernetes cluster.
I noticed weird links between pods in Network activity and in the learned rules.
In order to test, I deployed one pod and ran a curl to other pods. It seems that NeuVector detects it with the right source IP and destination IP/port, but links it to the wrong application pods, and in doing so generates wrong network rules.
I tried to change NeuVector versions, but even with 5.1.0 the issue remains.

I am quite surprised because I tested it with another cluster (not CCE) and it works perfectly.
Did I miss something in my configuration, or is it linked to CCE Kubernetes clusters?

Thanks.

NeuVector Unable to Auth with Rancher RBAC/Proxy

I'm deploying NeuVector (v2.4.5 / v5.1.3) with Helm on an RKE2 Cluster (v1.24.14) and I'm unable to use the navlink inside of the Rancher Multi-Cluster Manager (v2.7.3) with the Rancher RBAC/Proxy to access NeuVector. I'm able to access NeuVector using the ingress and ensured Authenticate using OpenShift’s or Rancher’s RBAC is enabled. I've tried the steps listed within rancher/rancher#37434.

I'm receiving the error Authentication using OpenShift's or Rancher's RBAC was disabled!.

Logs of the NeuVector Manager Pod show {"code":50,"error":"Platform authentication is disabled","message":"Platform authentication is disabled"}.

I'm unable to find any documentation on this within the docs or git repos or helm charts. See below for more information:

[root@ip-10-0-40-140 certs]# kubectl get nodes -o wide
NAME                          STATUS   ROLES                       AGE    VERSION           INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                      KERNEL-VERSION                     CONTAINER-RUNTIME
ip-10-0-40-140.ec2.internal   Ready    control-plane,etcd,master   6d3h   v1.24.14+rke2r1   10.0.40.140   <none>        Rocky Linux 9.2 (Blue Onyx)   5.14.0-162.12.1.el9_1.0.2.x86_64   containerd://1.7.1-k3s1
ip-10-0-40-204.ec2.internal   Ready    <none>                      6d3h   v1.24.14+rke2r1   10.0.40.204   <none>        Rocky Linux 9.2 (Blue Onyx)   5.14.0-162.12.1.el9_1.0.2.x86_64   containerd://1.7.1-k3s1
ip-10-0-50-69.ec2.internal    Ready    control-plane,etcd,master   6d3h   v1.24.14+rke2r1   10.0.50.69    <none>        Rocky Linux 9.2 (Blue Onyx)   5.14.0-162.12.1.el9_1.0.2.x86_64   containerd://1.7.1-k3s1
ip-10-0-50-79.ec2.internal    Ready    <none>                      6d3h   v1.24.14+rke2r1   10.0.50.79    <none>        Rocky Linux 9.2 (Blue Onyx)   5.14.0-162.12.1.el9_1.0.2.x86_64   containerd://1.7.1-k3s1
ip-10-0-60-252.ec2.internal   Ready    <none>                      6d3h   v1.24.14+rke2r1   10.0.60.252   <none>        Rocky Linux 9.2 (Blue Onyx)   5.14.0-162.12.1.el9_1.0.2.x86_64   containerd://1.7.1-k3s1
ip-10-0-60-82.ec2.internal    Ready    control-plane,etcd,master   6d3h   v1.24.14+rke2r1   10.0.60.82    <none>        Rocky Linux 9.2 (Blue Onyx)   5.14.0-162.12.1.el9_1.0.2.x86_64   containerd://1.7.1-k3s1
kubectl create namespace cattle-neuvector-system

kubectl -n cattle-neuvector-system create secret generic tls-ca  --from-file=cacerts.pem=/opt/rancher/certs/ca.pem
kubectl -n cattle-neuvector-system create secret tls tls-neuvector-certs  --cert=tls.crt --key=tls.key

helm upgrade -i neuvector neuvector/core --namespace cattle-neuvector-system --set psp=true --set imagePullSecrets=regsecret --set k3s.enabled=true --set k3s.runtimePath=/run/k3s/containerd/containerd.sock --set manager.ingress.enabled=true --set manager.ingress.tls=true --set manager.ingress.secretName=tls-neuvector-certs --set manager.svc.type=ClusterIP --set manager.runAsUser=5400 --set cve.updater.runAsUser=5400 --set cve.scanner.runAsUser=5400 --set controller.pvc.enabled=true --set controller.pvc.capacity=1Gi --set controller.image.repository=neuvector/controller --set enforcer.image.repository=neuvector/enforcer --set manager.image.repository=neuvector/manager --set cve.updater.image.repository=neuvector/updater --set registry=rgcrprod.azurecr.us --set manager.ingress.host=neuvector.rancherfederal.io --set global.cattle.url=https://rancher.rancherfederal.io --set ranchersso.enabled=true --set rbac=true
[root@ip-10-0-40-140 certs]# helm list -n cattle-neuvector-system
NAME            NAMESPACE               REVISION        UPDATED                                 STATUS          CHART           APP VERSION
neuvector       cattle-neuvector-system 7               2023-06-07 20:51:33.177708648 +0000 UTC deployed        core-2.4.5      5.1.3  
[root@ip-10-0-40-140 certs]# helm get values neuvector -n cattle-neuvector-system
USER-SUPPLIED VALUES:
controller:
  image:
    repository: neuvector/controller
  pvc:
    capacity: 1Gi
    enabled: true
cve:
  scanner:
    runAsUser: 5400
  updater:
    image:
      repository: neuvector/updater
    runAsUser: 5400
enforcer:
  image:
    repository: neuvector/enforcer
global:
  cattle:
    url: https://rancher.rancherfederal.io
imagePullSecrets: regsecret
k3s:
  enabled: true
  runtimePath: /run/k3s/containerd/containerd.sock
manager:
  image:
    repository: neuvector/manager
  ingress:
    enabled: true
    host: neuvector.rancherfederal.io
    secretName: tls-neuvector-certs
    tls: true
  runAsUser: 5400
  svc:
    type: ClusterIP
psp: true
ranchersso:
  enabled: true
rbac: true
registry: rgcrprod.azurecr.us
[root@ip-10-0-40-140 certs]# kubectl logs neuvector-manager-pod-68c5fc4969-gvn9q -n cattle-neuvector-system

2023-06-08 01:51:52,013|INFO |MANAGER|com.neu.api.AuthenticationService(apply:241): Getting EULA
2023-06-08 01:51:52,060|INFO |MANAGER|com.neu.api.AuthenticationService(apply:845): post path auth
2023-06-08 01:51:52,063|WARN |MANAGER|com.neu.api.AuthenticationService(apply:849): Status: 401 Unauthorized
Body: {"code":50,"error":"Platform authentication is disabled","message":"Platform authentication is disabled"}

View data stored in Consul

Hello, I want to check the data stored in Consul, but it appears to be started internally. Is there any way for me to inspect the data stored in Consul? It would be nice if the UI could be viewed. Thank you very much!
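For reference, the logs in other issues on this page show the embedded Consul agent answering on 127.0.0.1:8500 inside the NeuVector containers, so one hedged way to peek at the stored keys is through Consul's standard HTTP KV API from inside a controller pod (assuming curl is available in the image; the pod name below is a placeholder):

kubectl exec -n neuvector <neuvector-controller-pod-name> -- curl -s http://127.0.0.1:8500/v1/kv/?keys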

proxy configuration inside helm configuration?

Hello,

Is it possible to define the "http" and "https" proxy inside the Helm configuration file?

At the moment we need to configure this after installation in the web frontend. It would be great to configure the complete NeuVector software at installation time, without additional steps after installation.

TIA and best regards, Oli

neuvector controller-pod and enforcer-pod go into a CrashLoopBackOff state in Azure AKS

Description

I have tried installing the latest helm chart in an Azure AKS cluster with the following command

helm upgrade --install neuvector neuvector/core --version 2.2.0 --set tag=5.0.0 --set registry=docker.io --create-namespace --namespace neuvector

and got the following output successfully.

Release "neuvector" does not exist. Installing it now.
NAME: neuvector
LAST DEPLOYED: Mon Jun 20 11:17:12 2022
NAMESPACE: neuvector
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Get the NeuVector URL by running these commands:
  NODE_PORT=$(kubectl get --namespace neuvector -o jsonpath="{.spec.ports[0].nodePort}" services neuvector-service-webui)
  NODE_IP=$(kubectl get nodes --namespace neuvector -o jsonpath="{.items[0].status.addresses[0].address}")
  echo https://$NODE_IP:$NODE_PORT

However, when I checked the pods, the following output showed issues with the neuvector controller and enforcer pods.

NAME                                        READY   STATUS             RESTARTS   AGE
neuvector-controller-pod-75bd75f68d-b8pzv   0/1     CrashLoopBackOff   6          10m
neuvector-controller-pod-75bd75f68d-bgsjg   0/1     CrashLoopBackOff   6          10m
neuvector-controller-pod-75bd75f68d-d26qq   0/1     CrashLoopBackOff   6          10m
neuvector-enforcer-pod-7zxh7                0/1     CrashLoopBackOff   6          10m
neuvector-enforcer-pod-kqcjr                0/1     CrashLoopBackOff   6          10m
neuvector-enforcer-pod-pnf5r                0/1     CrashLoopBackOff   6          10m
neuvector-manager-pod-9d847b4f6-gpjg6       1/1     Running            0          10m
neuvector-scanner-pod-7d5bcd4947-28xkg      1/1     Running            0          10m
neuvector-scanner-pod-7d5bcd4947-8s7c4      1/1     Running            0          10m
neuvector-scanner-pod-7d5bcd4947-hccr6      1/1     Running            0          10m

Then I checked the logs and found that the container runtime was not identified during container startup, and the agents exited as a result.

2022-06-20T05:58:12|MON|/usr/local/bin/monitor starts, pid=15018
net.core.somaxconn = 1024
net.unix.max_dgram_qlen = 64
Check TC kernel module ...
TC module located
2022-06-20T05:58:12|MON|Start dp, pid=15049
2022-06-20T05:58:12|MON|Start agent, pid=15050
1970-01-01T00:00:00|DEBU||dpi_dlp_init: enter
1970-01-01T00:00:00|DEBU||dpi_dlp_register_options: enter
1970-01-01T00:00:00|DEBU||net_run: enter
1970-01-01T00:00:00|DEBU|cmd|dp_ctrl_loop: enter
1970-01-01T00:00:00|DEBU|dp0|dpi_frag_init: enter
1970-01-01T00:00:00|DEBU|dp0|dpi_session_init: enter
1970-01-01T00:00:00|DEBU|dp0|dp_data_thr: dp thread starts
1970-01-01T00:00:00|DEBU|dlp|dp_bld_dlp_thr: dp bld_dlp thread starts
2022-06-20T05:58:12.885|INFO|AGT|main.main: START - version=v5.0.0
2022-06-20T05:58:12.885|INFO|AGT|main.main: - bind=10.240.0.120
2022-06-20T05:58:12.887|INFO|AGT|system.NewSystemTools: cgroup v1
2022-06-20T05:58:12.887|INFO|AGT|container.Connect: - endpoint=
2022-06-20T05:58:12.887|ERRO|AGT|main.main: Failed to initialize - error=Unknown container runtime
2022-06-20T05:58:12|MON|Process agent exit status 254, pid=15050
2022-06-20T05:58:12|MON|Process agent exit with non-recoverable return code. Monitor Exit!!
2022-06-20T05:58:12|MON|Kill dp with signal 15, pid=15049
Leave the cluster
2022-06-20T05:58:12|DEBU|dp0|dp_data_thr: dp thread exits
Error leaving: Put "http://127.0.0.1:8500/v1/agent/leave": dial tcp 127.0.0.1:8500: connect: connection refused
2022-06-20T05:58:12|MON|Clean up.

We use containerd://1.4.12+azure-3 in the cluster nodes.

(@becitsthere @gtam) Any idea on how to resolve this issue is highly appreciated.
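For reference, the core chart exposes container-runtime socket settings (the crio.* and k3s.* values appear elsewhere on this page), so on a containerd-based AKS node a workaround along these lines may apply; the containerd.enabled and containerd.path value names and the socket path are assumptions that should be verified against the chart's values.yaml for the version in use:

helm upgrade --install neuvector neuvector/core --version 2.2.0 --set tag=5.0.0 --set registry=docker.io --namespace neuvector --set containerd.enabled=true --set containerd.path=/run/containerd/containerd.sock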

Document controller.configmap.data

Document the settings that can go under each section of the configmap to help configure fresh installs, or at least link to another document.

controller:
  configmap:
      data:
        # eulainitcfg.yaml: |
        #  ...
        # ldapinitcfg.yaml: |
        #  ...
        # oidcinitcfg.yaml: |
        # ...
        # samlinitcfg.yaml: |
        # ...
        # sysinitcfg.yaml: |
        # ...
        # userinitcfg.yaml: |
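Until that documentation exists, here is a hedged sketch of how one of these sections might be populated in a custom values file; controller.configmap.enabled and the key names inside the embedded userinitcfg.yaml are assumptions to be verified against the chart's values.yaml and the NeuVector configuration docs:

controller:
  configmap:
    enabled: true
    data:
      userinitcfg.yaml: |
        users:
        - Fullname: admin        # illustrative key names; check the NeuVector docs
          Password: ChangeMe!
          Role: admin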

Missing lots of containers on K8s(1.26.1) on chart 2.4.2

We upgraded from 2.4.1 to 2.4.2 on our K8s cluster which is running 1.26.1 and when looking at assets we are seeing only a handful of our containers in the cluster. We haven't been able to find a common descriptor on why some of the containers are not showing up.

When downgrading the chart to 2.4.1 we are immediately seeing all of our containers.
I have no idea why this is happening yet.

Kubectl version
Server Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.1", GitCommit:"8f94681cd294aa8cfd3407b8191f6c70214973a4", GitTreeState:"clean", BuildDate:"2023-01-18T15:51:25Z", GoVersion:"go1.19.5", Compiler:"gc", Platform:"linux/amd64"}

neuvector image unable to be pulled

[root@kmaster ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.1", GitCommit:"c4d752765b3bbac2237bf87cf0b1c2e307844666", GitTreeState:"clean", BuildDate:"2020-12-18T12:09:25Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:20:00Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}

[root@kmaster ~]# docker version
Client: Docker Engine - Community
Version: 20.10.1
API version: 1.40
Go version: go1.13.15
Git commit: 831ebea
Built: Tue Dec 15 04:37:17 2020
OS/Arch: linux/amd64
Context: default
Experimental: true

Server: Docker Engine - Community
Engine:
Version: 19.03.13
API version: 1.40 (minimum version 1.12)
Go version: go1.13.15
Git commit: 4484c46d9d
Built: Wed Sep 16 17:02:21 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.4.3
GitCommit: 269548fa27e0089a8b8278fc4fc781d7f65a939b
runc:
Version: 1.0.0-rc92
GitCommit: ff819c7e9184c13b7c2607fe6c30ae19403a7aff
docker-init:
Version: 0.18.0
GitCommit: fec3683

[root@kmaster ~]# kubectl get pods -n neuvector
NAME READY STATUS RESTARTS AGE
neuvector-controller-pod-bbf6765bc-4zrch 0/1 ImagePullBackOff 0 2m54s
neuvector-controller-pod-bbf6765bc-86vz5 0/1 ImagePullBackOff 0 2m54s
neuvector-controller-pod-bbf6765bc-gb4vg 0/1 ImagePullBackOff 0 2m54s
neuvector-enforcer-pod-gst6l 0/1 ImagePullBackOff 0 2m54s
neuvector-enforcer-pod-zkdw7 0/1 ImagePullBackOff 0 2m53s
neuvector-manager-pod-5cf6db6b59-tzvlb 0/1 ImagePullBackOff 0 2m54s
neuvector-scanner-pod-74b5f5965c-cqbck 0/1 ImagePullBackOff 0 3h5m
neuvector-scanner-pod-74b5f5965c-lzv6x 0/1 ImagePullBackOff 0 3h5m
neuvector-scanner-pod-74b5f5965c-mj84p 0/1 ImagePullBackOff 0 3h5m

Can't install Neuvector in Kubernetes with Helm or Helmfile.

I'm on Azure Kubernetes Service.

Steps to reproduce.

  1. helm repo add neuvector https://neuvector.github.io/neuvector-helm/
  2. kubectl create namespace neuvector
  3. helm install neuvector neuvector/core --version 2.2.2 --set tag=5.0.2 --set registry=docker.io --namespace neuvector

Note that we tried with --version 2.2.0 and --set tag=5.0.0 as per this blog post: https://blog.krum.io/deploying-neuvector/ and got the same result.

Note that we also tried with Helmfile instead of Helm and got the same result as well.

Output from kubectl -n neuvector get pods

NAME                                        READY   STATUS    RESTARTS     AGE
neuvector-controller-pod-5bcffbf9ff-r5g6v   0/1     Error     1 (3s ago)   4s
neuvector-controller-pod-5bcffbf9ff-tpx89   0/1     Error     1 (3s ago)   4s
neuvector-controller-pod-5bcffbf9ff-xgrpp   0/1     Error     1 (3s ago)   4s
neuvector-manager-pod-9cb6c586d-b6vc7       1/1     Running   0            23m
neuvector-scanner-pod-d79fb9b77-bbcpx       1/1     Running   0            23m
neuvector-scanner-pod-d79fb9b77-dlc9d       1/1     Running   0            23m
neuvector-scanner-pod-d79fb9b77-hrsbt       1/1     Running   0            23m

Events from kubectl -n neuvector describe pod neuvector-controller-pod*

Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  50s               default-scheduler  Successfully assigned neuvector/neuvector-controller-pod-5bcffbf9ff-r5g6v to aks-default-19655508-vmss00000d
  Normal   Pulled     9s (x4 over 50s)  kubelet            Container image "docker.io/neuvector/controller:5.0.2" already present on machine
  Normal   Created    9s (x4 over 50s)  kubelet            Created container neuvector-controller-pod
  Normal   Started    9s (x4 over 50s)  kubelet            Started container neuvector-controller-pod
  Warning  BackOff    5s (x7 over 48s)  kubelet            Back-off restarting failed container

Logs from kubectl -n neuvector logs neuvector-controller-pod*

2022-09-08T14:15:05|MON|/usr/local/bin/monitor starts, pid=1
2022-09-08T14:15:05|MON|Start ctrl, pid=8
2022-09-08T14:15:05.311|INFO|CTL|main.main: START - version=v5.0.2
2022-09-08T14:15:05.311|INFO|CTL|main.main: - join=neuvector-svc-controller.neuvector
2022-09-08T14:15:05.311|INFO|CTL|main.main: - advertise=10.244.3.215
2022-09-08T14:15:05.312|INFO|CTL|main.main: - bind=10.244.3.215
2022-09-08T14:15:05.315|INFO|CTL|system.NewSystemTools: cgroup v1
2022-09-08T14:15:05.315|INFO|CTL|container.Connect: - endpoint=
2022-09-08T14:15:05.315|ERRO|CTL|main.main: Failed to initialize - error=Unknown container runtime
2022-09-08T14:15:05|MON|Process ctrl exit status 254, pid=8
2022-09-08T14:15:05|MON|Process ctrl exit with non-recoverable return code. Monitor Exit!!
Leave the cluster
Error leaving: Put "http://127.0.0.1:8500/v1/agent/leave": dial tcp 127.0.0.1:8500: connect: connection refused
2022-09-08T14:15:05|MON|Clean up.

When we do a port-forward like kubectl port-forward --namespace neuvector service/neuvector-service-webui 8443 we can go to the portal at https://localhost:8443, but we are greeted with "Controller is not available ...", and when we try to log in we get a "Network connect timeout error".

What is the next step for us?

default access mode for PVC

Hello,

I think the default access mode should be

controller:
  pvc:
    enabled: true
    accessModes:
      - ReadWriteOnce

ReadWriteMany is often not supported and I think this should be configured if needed. Default should IMHO be ReadWriteOnce.

Regards, Oli
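For comparison, the equivalent override on the command line would look roughly like this, assuming the chart version in use supports a controller.pvc.accessModes value:

helm upgrade -i neuvector neuvector/core --namespace neuvector --set controller.pvc.enabled=true --set "controller.pvc.accessModes={ReadWriteOnce}"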

Wrong registry, Wrong tags, tag 4.4.4 is hardcoded somewhere

In your own manual it's said that I must:

The right image tags are:

  • neuvector/manager.preview:5.0.0-preview.3
  • neuvector/controller.preview:5.0.0-preview.3
  • neuvector/enforcer.preview:5.0.0-preview.3
  • neuvector/scanner.preview:latest
  • neuvector/updater.preview:latest

The Helm repo should work well with the default values, but this is not happening here; everyone should:

  • Update the registry to docker.io
  • Update image names/tags to the preview version on Docker hub, as shown above
  • Leave the imagePullSecrets empty

Even after updating the tags, Helm adds the 4.4.4 tag, which is hardcoded somewhere, and the pod can't pull the image.

The default values.yaml file is wrong.
The step-by-step in the README file is wrong: it requests creating a docker-registry secret and adding it with --set imagePullSecrets=regsecret, when the manual says to leave it empty.

It seems that there is no way to make this Helm chart work.
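For reference, the per-image overrides described above map to chart values such as manager.image.repository and controller.image.repository (seen in other issues on this page); the following is only a hedged sketch, and the scanner/updater keys in particular should be verified against the chart's values.yaml:

helm install neuvector neuvector/core --namespace neuvector \
  --set registry=docker.io \
  --set tag=5.0.0-preview.3 \
  --set manager.image.repository=neuvector/manager.preview \
  --set controller.image.repository=neuvector/controller.preview \
  --set enforcer.image.repository=neuvector/enforcer.preview \
  --set cve.scanner.image.repository=neuvector/scanner.preview \
  --set cve.scanner.image.tag=latest \
  --set cve.updater.image.repository=neuvector/updater.preview \
  --set cve.updater.image.tag=latest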

kubectl port-forward method to access the WEB-UI fails from time to time

Description

I have used the "kubectl port-forward" method to access the web-ui as per the below blog,
as there is no way to use the node-port access method due to nodes being private.
https://blog.krum.io/deploying-neuvector

However, I experience failures in accessing the console from time to time due to the following error.

E0706 15:53:45.014940    7354 portforward.go:400] an error occurred forwarding 8443 -> 8443: error forwarding port 8443 to pod f10739214509823fc6f8a5a6321d08ae991f8cd3243eb4981dd963eaae27d6ba, uid : failed to execute portforward in network namespace "/var/run/netns/cni-4ea1e795-5cbb-8863-478f-29bc2011aac5": read tcp4 127.0.0.1:57468->127.0.0.1:8443: read: connection reset by peer

I could also see the manager pod up and running, attached to the relevant service as an endpoint.

neuvector-manager-pod-9d847b4f6-chwhm       1/1     Running     0          23h
NAME                        TYPE          CLUSTER-IP      EXTERNAL-IP    PORT(S)            AGE
neuvector-service-webui       NodePort     10.0.107.150     <none>        8443:31201/TCP    23h

What could be the issue here? Is this something that any of you have observed too?
Any help to expose the web console using an ingress resource is also appreciated (see the sketch at the end of this issue).
@becitsthere @gtam @jorn-k

Additionally, the following is my kubectl client version:

Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"clean", BuildDate:"2021-05-12T14:18:45Z", GoVersion:"go1.16.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.9", GitCommit:"3396beebcd97ed3b01f57127fba1b8565b311acb", GitTreeState:"clean", BuildDate:"2022-06-04T17:58:54Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"linux/amd64"}

Issues trying to connect Multi Clusters

When trying to link up multiple clusters I always get a timeout. There's not much to go by in the way of logs, but I can see a couple of errors in the ingress pods:

2022-08-24T15:03:09.495|ERRO|CTL|rest.sendReqToMasterCluster: Request failed - proxy={proxyUsed:false enabled:false url:} status=400 Bad Request timeout=50s url=https://neuvector.xxxxx.xxxxxx.net:443/v1/fed/join_internal

And

2022-08-24T15:03:09.495|ERRO|CTL|rest.handlerJoinFed: - data=Request is missing required HTTP header 'Token' localPort=443 localServer=neuvector.xxxxx.xxxxxx.net proxyUsed=false statusCode=400

It looks like the token isn't being sent. If I run a test via Postman, I can see that it goes through just fine. I get the same issue on 2.2.0/2.2.1/2.2.2.

I'm running this on Azure AKS, Kubernetes v1.21.9. Any ideas?

Thanks and best regards, Dan

NeuVector Helm-created CRDs get reconciled by the app, breaking GitOps sync

Issue

When deploying NeuVector through GitOps (in my case, Fleet) and leveraging either the crd chart or setting crdwebhook.enabled=false in the core chart, the chart successfully creates the CRDs. However, when the controller starts running, it checks for the existence of the Roles, and when they exist it overwrites the CRDs, wiping out the annotations/labels that GitOps/Fleet uses to track the resources. This causes the deployment to go out of sync, with no way to reconcile it.
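
(For context, the CRDs can also be installed from the separate crd chart before the core chart; a sketch of that flow, with the release name purely illustrative:)

helm install neuvector-crd --namespace neuvector neuvector/crd
helm install neuvector --namespace neuvector neuvector/core --set crdwebhook.enabled=false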

Proposed Solution

There are a few ways this could be solved:

  1. Remove CRD creation from the chart(s), have the roles attached to the controller, and have the controller solely responsible for the management of CRDs.
  2. Remove CRD management from the controller itself, have it solely managed by the chart(s).
  3. Separate the chart CRDs from the Roles, so someone can install the CRDs from the chart without creating the roles. This will give users the ability to manage the CRDs purely from the chart without the controller overwriting them.

Unable to pull images from neuvector registry

I'm currently trying to set up NeuVector on my cluster but am unable to run it, because the images are now hosted on a private registry.

neuvector                 neuvector-manager-pod-65dbb6448b-rmmm5           0/1     ErrImagePull       0          41s
neuvector                 neuvector-controller-pod-5d8b549c4c-vjw52        0/1     ErrImagePull       0          41s
neuvector                 neuvector-controller-pod-5d8b549c4c-lh8jr        0/1     ErrImagePull       0          41s
neuvector                 neuvector-scanner-pod-7bccc444d-ht4ns            0/1     ErrImagePull       0          41s
neuvector                 neuvector-enforcer-pod-klpw8                     0/1     ErrImagePull       0          41s
neuvector                 neuvector-controller-pod-5d8b549c4c-lbtl8        0/1     ErrImagePull       0          41s
neuvector                 neuvector-scanner-pod-7bccc444d-gdg2s            0/1     ErrImagePull       0          41s
neuvector                 neuvector-scanner-pod-7bccc444d-h9kxr            0/1     ErrImagePull       0          41s

How can I create a user on the NeuVector registry so that I can try the solution before buying it?
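
For reference, once you have credentials for whichever registry currently hosts the images, the chart consumes a standard pull secret; a sketch based on the commands used elsewhere in this repository (credentials and registry URL are placeholders):

kubectl create secret docker-registry regsecret -n neuvector \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=your-name \
  --docker-password=your-password \
  --docker-email=your-email
helm install neuvector --namespace neuvector neuvector/core --set imagePullSecrets=regsecret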

BTW, the documentation is completely outdated and doesn't reflect the steps required to set it up.
https://docs.neuvector.com/deploying/rancher

The enforcer is continuously restarting

Hi,

I have installed neuvector on a K3D cluster with the following helm options:

helm upgrade neuvector neuvector/core \
--namespace neuvector \
--install \
--create-namespace \
--version 2.2.2 \
--set tag=5.0.2 \
--set registry=docker.io \
--set k3s.enabled=true \
--set manager.ingress.enabled=true \
--set manager.ingress.ingressClassName="nginx" \
--set manager.svc.type="ClusterIP" \
--set manager.ingress.host="neuvector.internal.xxxxxxxx.com"

The neuvector-enforcer pods are all restarting; in the logs there are some errors:

2022-09-19T18:52:57|MON|/usr/local/bin/monitor starts, pid=25242
net.core.somaxconn = 1024
net.unix.max_dgram_qlen = 64
Check TC kernel module ...
module act_mirred not find.
module act_pedit not find.
2022-09-19T18:52:57|MON|Start dp, pid=25275
2022-09-19T18:52:57|MON|Start agent, pid=25277
1970-01-01T00:00:00|DEBU||dpi_dlp_init: enter
1970-01-01T00:00:00|DEBU||dpi_dlp_register_options: enter
1970-01-01T00:00:00|DEBU||net_run: enter
1970-01-01T00:00:00|DEBU|cmd|dp_ctrl_loop: enter
2022-09-19T18:52:57|DEBU|dlp|dp_bld_dlp_thr: dp bld_dlp thread starts
2022-09-19T18:52:57|DEBU|dp0|dpi_frag_init: enter
2022-09-19T18:52:57|DEBU|dp0|dpi_session_init: enter
2022-09-19T18:52:57|DEBU|dp0|dp_data_thr: dp thread starts
2022-09-19T18:52:57.589|INFO|AGT|main.main: START - version=v5.0.2
2022-09-19T18:52:57.594|INFO|AGT|main.main: - bind=192.168.181.217
2022-09-19T18:52:57.614|INFO|AGT|system.NewSystemTools: cgroup v2
2022-09-19T18:52:57.615|INFO|AGT|container.Connect: - endpoint=
2022-09-19T18:52:57.659|WARN|AGT|container.parseEndpointWithFallbackProtocol: no error unix /run/containerd/containerd.sock.
2022-09-19T18:52:57.673|INFO|AGT|container.containerdConnect: cri - version=&VersionResponse{Version:0.1.0,RuntimeName:containerd,RuntimeVersion:v1.6.6-k3s1,RuntimeApiVersion:v1alpha2,}
2022-09-19T18:52:57.734|INFO|AGT|container.containerdConnect: containerd connected - endpoint=/run/containerd/containerd.sock version={Version:v1.6.6-k3s1 Revision:}
2022-09-19T18:52:58.126|ERRO|AGT|orchestration.getVersion: - code=401 tag=k8s
2022-09-19T18:52:58.148|ERRO|AGT|orchestration.getVersion: - code=401 tag=oc
2022-09-19T18:52:58.157|ERRO|AGT|orchestration.getVersion: - code=403 tag=oc
2022-09-19T18:52:58.16 |INFO|AGT|workerlet.NewWalkerTask: - showDebug=false
2022-09-19T18:52:58.161|INFO|AGT|main.main: Container socket connected - endpoint= runtime=containerd
2022-09-19T18:52:58.162|INFO|AGT|main.main: - k8s=1.24.4+k3s1 oc=
2022-09-19T18:52:58.162|INFO|AGT|main.main: PROC: - shield=true
2022-09-19T18:52:58.179|ERRO|AGT|system.(*SystemTools).NsRunBinary: - error=exit status 255 msg=
2022-09-19T18:52:58.18 |ERRO|AGT|main.getHostAddrs: Error getting host IP - error=exit status 255
2022-09-19T18:52:58.18 |INFO|AGT|main.parseHostAddrs: - maxMTU=0
2022-09-19T18:52:58.277|ERRO|AGT|container.(*containerdDriver).GetContainer: Failed to get container image config - error=content digest sha256:c2280d2f5f56cf9c9a01bb64b2db4651e35efd6d62a54dcfc12049fe6449c5e4: not found id=eff4af6d37fd4fba2c37c838de8aac8e1beddcd7fe75343468e70607031b7a0b
2022-09-19T18:52:58.279|INFO|AGT|main.main: - hostIPs={}
2022-09-19T18:52:58.28 |INFO|AGT|main.main: - host={ID:k3d-Test-Wilco-agent-1: Name:k3d-Test-Wilco-agent-1 Runtime:containerd Platform:Kubernetes Flavor: Network:Default RuntimeVer:v1.6.6-k3s1 RuntimeAPIVer:v1.6.6-k3s1 OS:K3s dev Kernel:5.10.104-linuxkit CPUs:5 Memory:8232370176 Ifaces:map[] TunnelIP:[] CapDockerBench:false CapKubeBench:true StorageDriver:overlayfs CgroupVersion:2}
2022-09-19T18:52:58.28 |INFO|AGT|main.main: - agent={CLUSDevice:{ID:e62cf8e783c188001a5458df1352fc1b5fb168815eb1663f021452acbe0da972 Name:k8s_neuvector-enforcer-pod_neuvector-enforcer-pod-z5xcq_neuvector_c9e201e4-b4da-45b1-8dd1-2e30b17d54ac_2 SelfHostname: HostName:k3d-Test-Wilco-agent-1 HostID:k3d-Test-Wilco-agent-1: Domain:neuvector NetworkMode:/proc/23276/ns/net PidMode:host Ver:v5.0.2 Labels:map[io.cri-containerd.image:managed io.cri-containerd.kind:container io.kubernetes.container.name:neuvector-enforcer-pod io.kubernetes.pod.name:neuvector-enforcer-pod-z5xcq io.kubernetes.pod.namespace:neuvector io.kubernetes.pod.uid:c9e201e4-b4da-45b1-8dd1-2e30b17d54ac name:enforcer neuvector.image:neuvector/enforcer neuvector.rev:a552e5e neuvector.role:enforcer release:5.0.2 vendor:NeuVector Inc. version:5.0.2] CreatedAt:2022-09-19 18:52:56.937028472 +0000 UTC StartedAt:2022-09-19 18:52:56.937028472 +0000 UTC JoinedAt:0001-01-01 00:00:00 +0000 UTC MemoryLimit:0 CPUs: ClusterIP: RPCServerPort:0 Pid:25242 Ifaces:map[eth0:[{IPNet:{IP:192.168.181.217 Mask:ffffffff} Gateway: Scope:global NetworkID: NetworkName:}]]}}
2022-09-19T18:52:58.292|INFO|AGT|main.main: - jumboframe=false pipeType=no_tc
2022-09-19T18:52:58.293|INFO|AGT|cluster.FillClusterAddrs: - advertise=192.168.181.217 join=neuvector-svc-controller.neuvector
2022-09-19T18:52:58.311|INFO|AGT|cluster.(*consulMethod).Start: - config=&{ID:e62cf8e783c188001a5458df1352fc1b5fb168815eb1663f021452acbe0da972 Server:false Debug:false Ifaces:map[eth0:[{IPNet:{IP:192.168.181.217 Mask:ffffffff} Gateway: Scope:global NetworkID: NetworkName:}]] JoinAddr:neuvector-svc-controller.neuvector joinAddrList:[192.168.181.219 192.168.249.84 192.168.102.21] BindAddr:192.168.181.217 AdvertiseAddr:192.168.181.217 DataCenter:neuvector RPCPort:0 LANPort:0 WANPort:0 EnableDebug:false} recover=false
2022-09-19T18:52:58.313|INFO|AGT|cluster.(*consulMethod).Start: - node-id=6fa06cea-8c38-97be-277d-8a2d3ff3f27e
2022-09-19T18:52:58.315|INFO|AGT|cluster.(*consulMethod).Start: Consul start - args=[agent -datacenter neuvector -data-dir /tmp/neuvector -config-file /tmp/consul.json -bind 192.168.181.217 -advertise 192.168.181.217 -node 192.168.181.217 -node-id 6fa06cea-8c38-97be-277d-8a2d3ff3f27e -raft-protocol 3 -retry-join 192.168.181.219 -retry-join 192.168.249.84 -retry-join 192.168.102.21]
==> Starting Consul agent...
           Version: '1.11.3'
           Node ID: '6fa06cea-8c38-97be-277d-8a2d3ff3f27e'
         Node name: '192.168.181.217'
        Datacenter: 'neuvector' (Segment: '')
            Server: false (Bootstrap: false)
       Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, gRPC: -1, DNS: -1)
      Cluster Addr: 192.168.181.217 (LAN: 18301, WAN: -1)
           Encrypt: Gossip: true, TLS-Outgoing: true, TLS-Incoming: true, Auto-Encrypt-TLS: false

==> Log data will now stream in as it occurs:

2022-09-19T18:52:59.577Z [WARN]  agent: Node name "192.168.181.217" will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes.
2022-09-19T18:52:59.631Z [WARN]  agent.auto_config: Node name "192.168.181.217" will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes.
2022-09-19T18:52:59.657Z [INFO]  agent.client.serf.lan: serf: EventMemberJoin: 192.168.181.217 192.168.181.217
2022-09-19T18:52:59.660Z [INFO]  agent.router: Initializing LAN area manager
2022-09-19T18:52:59.667Z [INFO]  agent: Starting server: address=127.0.0.1:8500 network=tcp protocol=http
2022-09-19T18:52:59.668Z [WARN]  agent: DEPRECATED Backwards compatibility with pre-1.9 metrics enabled. These metrics will be removed in a future version of Consul. Set `telemetry { disable_compat_1.9 = true }` to disable them.
2022-09-19T18:52:59.669Z [INFO]  agent: Retry join is supported for the following discovery methods: cluster=LAN discovery_methods="aliyun aws azure digitalocean gce k8s linode mdns os packet scaleway softlayer tencentcloud triton vsphere"
2022-09-19T18:52:59.669Z [INFO]  agent: Joining cluster...: cluster=LAN
2022-09-19T18:52:59.669Z [INFO]  agent: (LAN) joining: lan_addresses=[192.168.181.219, 192.168.249.84, 192.168.102.21]
2022-09-19T18:52:59.680Z [INFO]  agent: started state syncer
2022-09-19T18:52:59.680Z [INFO]  agent: Consul agent running!
2022-09-19T18:52:59.683Z [WARN]  agent.router.manager: No servers available
2022-09-19T18:52:59.686Z [ERROR] agent.anti_entropy: failed to sync remote state: error="No known Consul servers"
2022-09-19T18:52:59.688Z [INFO]  agent.client.serf.lan: serf: EventMemberJoin: 192.168.249.84 192.168.249.84
2022-09-19T18:52:59.690Z [INFO]  agent.client: adding server: server="192.168.249.84 (Addr: tcp/192.168.249.84:18300) (DC: neuvector)"
2022-09-19T18:52:59.719Z [INFO]  agent.client.serf.lan: serf: EventMemberJoin: 192.168.102.21 192.168.102.21
2022-09-19T18:52:59.720Z [INFO]  agent.client.serf.lan: serf: EventMemberJoin: 192.168.181.219 192.168.181.219
2022-09-19T18:52:59.720Z [WARN]  agent.client.memberlist.lan: memberlist: Refuting a dead message (from: 192.168.181.217)
2022-09-19T18:52:59.721Z [INFO]  agent.client: adding server: server="192.168.102.21 (Addr: tcp/192.168.102.21:18300) (DC: neuvector)"
2022-09-19T18:52:59.721Z [INFO]  agent.client: adding server: server="192.168.181.219 (Addr: tcp/192.168.181.219:18300) (DC: neuvector)"
2022-09-19T18:52:59.771Z [INFO]  agent: (LAN) joined: number_of_nodes=3
2022-09-19T18:52:59.771Z [INFO]  agent: Join cluster completed. Synced with initial agents: cluster=LAN num_agents=3
2022-09-19T18:53:00.895Z [INFO]  agent.client.serf.lan: serf: EventMemberJoin: 192.168.249.83 192.168.249.83
2022-09-19T18:53:02.076Z [INFO]  agent: Synced node info

Any idea why and how to fix it? Thanks

accept license, set admin password

If you install NeuVector with this Helm chart, you first have to log in with the default user/password admin:admin and accept the license.

It would be nice to be able to set the default admin password via the Helm chart. I didn't find a solution for this.
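
(A possible workaround, not verified here: recent chart versions expose controller.secret / controller.configmap for initial configuration, and the NeuVector docs describe a userinitcfg.yaml init file. A hedged sketch of what setting the initial admin password might look like under that assumption:)

controller:
  secret:
    enabled: true
    data:
      userinitcfg.yaml:
        always_reload: true        # optional: re-apply on each redeploy
        users:
          - Fullname: admin
            Password: ChangeMe123!  # placeholder, illustrative only
            Role: admin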

Running NeuVector with this Helm chart and without RWX causes downtime for the controller, because only one controller can use the RWO PVC and it has to be shut down before the new one can run.

After the controller is up again, the users, passwords, etc. are still there, but you have to accept the license again. This happens when the node packages are updated every night (including a restart of the nodes), so every day we have to accept the license again. This is annoying.

It would be great if you could accept the license via the Helm chart (other products do this) or make the acceptance persistent so that it only has to be accepted once.

Thanks and best regards, Oli

Add checksum so that configmap/secret init changes cause controller cycling

Currently controller.configmap and controller.secret provide an option for initializing things like users or SSO configuration. When paired with always_reload: true, this initialization should update on each deploy; however, the controller pod does not automatically restart when the ConfigMap or Secret changes.

This can result in a mismatch between the config stored in the Secret and what is actually configured in NeuVector. If this data is used elsewhere, that can become problematic (e.g. cycling the credentials of a "robot user" that are fed into your exporter deployment).

The proposed solution is to add a pod annotation that uses Helm's sha256sum function to create a checksum, so that the pod is cycled on any change to the ConfigMap/Secret when they are enabled.
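
A minimal sketch of that annotation on the controller deployment's pod template, assuming the init ConfigMap/Secret is rendered from a template file inside the chart (the file name below is illustrative):

spec:
  template:
    metadata:
      annotations:
        # cycle the controller pods whenever the rendered init config changes
        checksum/init-config: {{ include (print $.Template.BasePath "/controller-secret.yaml") . | sha256sum }}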

Missing containerd.sock causes CrashLoopBackOff - PR ready

Helm chart 2.4.4 breaks controller and enforcer.

PR: #257

In the latest 2.4.4 chart the controller and enforcer are missing the 'containerd.sock' fields in the volume mounts.
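
For comparison, a sketch of the runtime-socket mount that the working chart versions render (names and paths here follow the examples in this thread and are illustrative; on k3s the host path is whatever k3s.runtimePath points to):

# controller/enforcer pod spec fragment (illustrative)
volumeMounts:
  - mountPath: /run/containerd/containerd.sock
    name: runtime-sock
    readOnly: true
volumes:
  - name: runtime-sock
    hostPath:
      path: /run/k3s/containerd/containerd.sock   # k3s.runtimePath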

Testing
Working:

helm upgrade -i neuvector -n neuvector neuvector/core --version 2.4.3 --create-namespace --set imagePullSecrets=regsecret --set k3s.enabled=true --set k3s.runtimePath=/run/k3s/containerd/containerd.sock --set manager.ingress.enabled=true --set manager.ingress.host=neuvector.rfed.io --set manager.svc.type=ClusterIP

Not working:

helm upgrade -i neuvector -n neuvector neuvector/core --version 2.4.4 --create-namespace --set imagePullSecrets=regsecret --set k3s.enabled=true --set k3s.runtimePath=/run/k3s/containerd/containerd.sock --set manager.ingress.enabled=true --set manager.ingress.host=neuvector.rfed.io --set manager.svc.type=ClusterIP

cc: @zackbradys

When adding --set manager.ingress.ingressClassName="nginx" to helm, it is not present in the object

When using the following Helm command, I do not see the ingressClassName on the Ingress object:

helm upgrade neuvector neuvector/core \
--namespace neuvector \
--install \
--version 2.2.0 \
--set tag=5.0.0 \
--set registry=docker.io \
--create-namespace \
--set containerd.enabled=true \
--set containerd.path="/run/k3s/containerd/containerd.sock" \
--set manager.ingress.enabled=true \
--set manager.ingress.ingressClassName="nginx" \
--set manager.svc.type="ClusterIP" \
--set manager.ingress.host="neuvector.internal.xxxxx.com"

The object:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: neuvector-webui-ingress
  namespace: neuvector
  uid: b0b53317-5c78-4102-a596-ace70d4d170b
  resourceVersion: '13505'
  generation: 1
  creationTimestamp: '2022-09-18T11:55:29Z'
  labels:
    app.kubernetes.io/managed-by: Helm
    chart: core-2.2.0
    heritage: Helm
    release: neuvector
  annotations:
    meta.helm.sh/release-name: neuvector
    meta.helm.sh/release-namespace: neuvector
  managedFields:
    ....
spec:
  rules:
    - host: neuvector.internal.xxxxx.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: neuvector-service-webui
                port:
                  number: 8443
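
For comparison, when the flag is honored the rendered spec would be expected to include the class name:

spec:
  ingressClassName: nginx   # expected, but missing from the object above
  rules:
    - host: neuvector.internal.xxxxx.com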

Instructions for registry.neuvector.com

The instructions still reflect Docker Hub, which no longer appears to be an option as an image source.
Are there instructions for how to authenticate with registry.neuvector.com? The current instructions are unclear.

We would like to get the installation rolling in our Rancher deployment for evaluation. I've watched this video https://www.youtube.com/watch?v=cc8nA7nxuDc&t=79s, but it is old and does not address updated image pull or licensing methods.

Sensitive data stored in insecure ConfigMap

We really appreciate the ability to configure the NeuVector deployment with a ConfigMap but are concerned about two issues:

  1. Sensitive data is stored in ConfigMap vs Secret
  2. Configuration cannot be merged from multiple Helm values files, which is very important when maintaining some default values combined with some deployment-specific values

We propose the following changes to address each of these issues:

  1. Change to a Kubernetes Secret instead of a ConfigMap, allowing the use of Kubernetes RBAC to control access to the Secret data.
  2. Change the implementation so that the data for the mounted files is expressed as YAML within the Helm values and rendered to YAML in the files (see the sketch after this list). This allows overriding specific properties in specific files without duplicating the whole configuration in each values file.
  3. Update the naming of the configuration properties to more accurately describe the contents. (configmap.data -> init.files)
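
Purely as an illustration of the proposed shape and of why merging matters (the init.files name comes from the suggestion above and is not a current chart value):

# values-default.yaml (shared defaults)
controller:
  init:
    files:
      oidcinitcfg.yaml:
        Client_ID: neuvector
        Default_Role: reader

# values-prod.yaml (deployment-specific override; Helm merges maps across -f files)
controller:
  init:
    files:
      oidcinitcfg.yaml:
        Issuer: https://keycloak.example.com/auth/realms/prod

# helm install ... -f values-default.yaml -f values-prod.yaml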

make scheduler parameters

We use Portworx as storage. Portworx uses Stork as the scheduler for storage awareness, so each time we pull the charts from here we have to modify the scheduler. Could you make the scheduler an optional parameter? See the sketch below.
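
A sketch of what such a parameter could look like (controller.schedulerName is an illustrative, not yet existing, value):

# proposed values
controller:
  schedulerName: stork

# rendered into the controller pod template
spec:
  template:
    spec:
      schedulerName: stork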

Chart not working with Kubernetes >= 1.19 -> Ingress Template not migrated to new API

The apiVersion switch in ingress.yaml (using SemVer for Kubernetes > 1.19) does switch to the new API group networking.k8s.io/v1.
However, the rest of ingress.yaml also needs to be adjusted for the new Ingress specification.

Currently the chart cannot be used with Kubernetes >= 1.19
Resulting error:
Error: error validating "": error validating data: [ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field "serviceName" in io.k8s.api.networking.v1.IngressBackend, ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field "servicePort" in io.k8s.api.networking.v1.IngressBackend]

According to the Kubernetes API spec, the Ingress syntax has changed significantly:
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#ingress-v1-networking-k8s-io
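
For reference, in networking.k8s.io/v1 the backend is expressed as a nested service object instead of serviceName/servicePort:

backend:
  service:
    name: neuvector-service-webui
    port:
      number: 8443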

Cannot run the latest version on Kubernetes

In two different environments, I reproduce exactly the same problem.

Here on DO with Kubernetes 1.24, but similar on GKE with public nodes on 1.22:

Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.0", GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0", GitTreeState:"clean", BuildDate:"2022-05-03T13:46:05Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"darwin/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.4", GitCommit:"95ee5ab382d64cfe6c28967f36b53970b8374491", GitTreeState:"clean", BuildDate:"2022-08-17T18:47:37Z", GoVersion:"go1.18.5", Compiler:"gc", Platform:"linux/amd64"}
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  35s               default-scheduler  Successfully assigned neuvector/neuvector-controller-pod-569c94dc4c-bfpxq to pool-hrt3wmegx-7jmy1
  Normal   Pulling    35s               kubelet            Pulling image "docker.io/neuvector/controller:5.0.2"
  Normal   Pulled     26s               kubelet            Successfully pulled image "docker.io/neuvector/controller:5.0.2" in 9.419698445s
  Normal   Created    25s               kubelet            Created container neuvector-controller-pod
  Normal   Started    25s               kubelet            Started container neuvector-controller-pod
  Warning  Unhealthy  1s (x4 over 16s)  kubelet            Readiness probe failed: cat: can't open '/tmp/ready': No such file or directory

Thanks,

Invalid Neuvector link in Rancher when installed with manager.env.ssl=false

Not sure if this is an installation issue or a "general" Neuvector issue (or maybe even a Rancher issue). Trying here. If not, I'll create a new issue...

I'm installing Neuvector from Rancher's app menu (2.6.11).
It is using the Helm-Chart neuvector:100.0.3+up2.2.4. The cluster is running K3S 1.24.11 with Traefik Ingress (2.9.4).
I'm trying to access Neuvector via the generated Ingress and via the Rancher integration (with credential forwarding).

Because Traefik does not correctly forward to a TLS service with a self-signed certificate in the generated Ingress (it always returns "Bad Gateway"), I'm trying to disable SSL on the manager service. I'm doing this with the Helm value manager.env.ssl=false, and I'm also using manager.svc.type=ClusterIP. This works fine and I get a working Ingress, but after the change the NeuVector link in Rancher no longer works.

The link seems to always point to the https service, even though that service is not created (or rather, it's created as http):
https://rancher-link/api/v1/namespaces/cattle-neuvector-system/services/https:neuvector-service-webui:8443/proxy/index.html?v=7618fefeba#/app/dashboard

It's failing with

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "error trying to reach service: tls: first record does not look like a TLS handshake",
  "reason": "ServiceUnavailable",
  "code": 503
}

which is perfectly fine, since there is no TLS anymore on the manager service.

When I change the link in the browser manually to
https://rancher-link/api/v1/namespaces/cattle-neuvector-system/services/http:neuvector-service-webui:8443/proxy/index.html?v=7618fefeba#/app/dashboard
things work again and I can access Neuvector with my Rancher admin credentials forwarded to Neuvector.

I couldn't find the location where the link in Rancher is generated.
I suspect that it is generated by the Helm chart. If not, please suggest where I should file this issue instead.

[Question] CVE Updater postStart Lifecycle Hook

What exactly is the point of the postStart lifecycle hook on the CVE updater (as defined in the cronjob)?

The lifecycle hook currently fails every time in my cluster (see below; I assume a network-related race condition) and thus prevents the CVE updater itself from running. If I remove the hook from the CronJob definition, the updater runs as expected. Does the annotation that the lifecycle hook would set on the scanner pod serve any purpose at all? Or can it be ignored?

Exec lifecycle hook ([/bin/sh -c /usr/bin/curl -kv -X PATCH -H "Authorization:Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" -H "Content-Type:application/strategic-merge-patch+json" -d '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"'date +%Y-%m-%dT%H:%M:%S%z'"}}}}}' 'https://kubernetes.default/apis/apps/v1/namespaces/neuvector/deployments/neuvector-scanner-pod']) for Container "neuvector-updater-pod" in Pod "manual-update-jcrpp_neuvector(6413f830-e249-4049-85b3-9883355b7f32)" failed - error: command '/bin/sh -c /usr/bin/curl -kv -X PATCH -H "Authorization:Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" -H "Content-Type:application/strategic-merge-patch+json" -d '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"'date +%Y-%m-%dT%H:%M:%S%z'"}}}}}' 'https://kubernetes.default/apis/apps/v1/namespaces/neuvector/deployments/neuvector-scanner-pod'' exited with 6: % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:02 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:03 --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- 0:00:04 --:--:-- 0* Could not resolve host: kubernetes.default * Closing connection 0 curl: (6) Could not resolve host: kubernetes.default , message: " % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:02 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:03 --:--:-- 0\r 0 0 0 0 0 0 0 0 --:--:-- 0:00:04 --:--:-- 0* Could not resolve host: kubernetes.default\n* Closing connection 0\ncurl: (6) Could not resolve host: kubernetes.default\n"

ServiceMonitor for neuvector/monitor chart

For users using a KubePrometheusStack (formerly known as prometheus-operator) style monitoring deployment, being able to deploy a ServiceMonitor resource together with the exporter would be very handy.

In essence, a ServiceMonitor is an alternative to scraping labeled exporters, and it is supported by OpenShift as well as regular KubePrometheusStack deployments.

Such a ServiceMonitor would look something like this:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: nv-monitor
  labels:
    app: nv-monitor
spec:
  endpoints:
  - honorLabels: true
    interval: 60s
    # adapt to match the exporter
    path: /metrics
    scheme: http
    targetPort: 8080
  jobLabel: nv
  namespaceSelector:
    matchNames:
    - nv-namespace
  selector:
    matchLabels:
      app: nv-monitor
      release: nv-monitor

The ServiceMonitor would be an optional component that can be activated with something like serviceMonitor.create. For it to work, a Service that points to the exporter would also be needed (and I'd consider that best practice anyway).

Would you consider merging such a change? It's something I've seen a lot of charts offer and where available it's always highly appreciated by users.

Error deploying NeuVector in OpenShift

helm install neuvector --namespace neuvector neuvector/core --set openshift=true,crio.enabled=true

Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: CustomResourceDefinition "nvclustersecurityrules.neuvector.com" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "neuvector"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "neuvector"
[root@ns ~]#
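
One possible workaround (untested here) is to let Helm adopt the pre-existing CRDs by adding exactly the label and annotations named in the error, for example:

kubectl label crd nvclustersecurityrules.neuvector.com app.kubernetes.io/managed-by=Helm --overwrite
kubectl annotate crd nvclustersecurityrules.neuvector.com \
  meta.helm.sh/release-name=neuvector \
  meta.helm.sh/release-namespace=neuvector --overwrite
# repeat for any other NeuVector CRDs reported by the same error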

Support env vars for enforcer

Currently the ENF_NO_SYSTEM_PROFILES environment variable is not supported in the Helm template.
It would be good to have env options for the enforcer like those for the controller (see the sketch below).
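
A sketch of the desired values, mirroring the controller's env support (enforcer.env would be the proposed, currently missing, option):

enforcer:
  env:
    - name: ENF_NO_SYSTEM_PROFILES
      value: "1"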

how to get neuvector TargetGroup with port 443 instead of 30143

Hi

I have installed NeuVector using Helm in an EKS cluster with the steps below.

Before installing, I made a few changes to values.yaml under the manager ingress section (https://github.com/neuvector/neuvector-helm/blob/master/charts/core/values.yaml#L262-L275).

Below are the changes:

ingress:
   enabled: true
   host:  pocnexuscontainer.XXXXXX.com # MUST be set, if ingress is enabled
   ingressClassName: "alb"
   path: "/"
   annotations:
     alb.ingress.kubernetes.io/scheme: internal
     alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:AWSACCOUNTID:certificate/CertificateID
     alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01
     alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
   tls: true
   secretName: nexus-container-secret # my-tls-secret

When this was applied, Helm was able to install all pods and create a load balancer in AWS, but the target group was created with port 30143 instead of 443.

I'd like to know where to make changes in values.yaml so that I can override target group creation to use 443 instead of 30143.

I had to manually create a target group with port 443, register all the targets on port 30143, and update the load balancer listeners with the manually created target group; only then did the DNS for NeuVector work for us.

Let me know what changes I need to make in values.yaml so that the target group gets created with port 443 and registers all the targets on port 30143.

Thanks,
Srikanth

The neuvector-controller-pod is continuously restarting because of incorrect volumeMounts

Prerequisite:
An upstream Kubernetes cluster set up with containerd and kubeadm

Procedure to reproduce the issue:

  1. Run 'kubectl create namespace neuvector'
  2. Run 'kubectl create secret docker-registry regsecret -n neuvector --docker-server=https://index.docker.io/v1/ --docker-username=your-name --docker-password=your-password --docker-email=your-email'
  3. Run 'helm install my-release --namespace neuvector neuvector/core --set imagePullSecrets=regsecret'
  4. Check the status of neuvector-controller-pod, it is restarting continuously.
  5. Check the log of this pod:
"2022-11-02T07:53:53.311|INFO|CTL|system.NewSystemTools: cgroup v1
2022-11-02T07:53:53.311|INFO|CTL|container.Connect: - endpoint=
2022-11-02T07:53:53.311|ERRO|CTL|main.main: Failed to initialize - error=Unknown container runtime
2022-11-02T07:53:53|MON|Process ctrl exit status 254, pid=7
2022-11-02T07:53:53|MON|Process ctrl exit with non-recoverable return code. Monitor Exit!!"
  6. Check the YAML of the controller deployment:
       volumeMounts:
        - mountPath: /var/neuvector
          name: nv-share
        - mountPath: /var/run/docker.sock
          name: runtime-sock
          readOnly: true
        - mountPath: /host/proc
          name: proc-vol
          readOnly: true
        - mountPath: /host/cgroup
          name: cgroup-vol
          readOnly: true
        - mountPath: /etc/config
          name: config-volume
          readOnly: true

"/run/containerd/containerd.sock" is not mounted to the pod, so it cannot detect the containerd runtime.

Consider publishing chart

It would be nice if this chart was published in a chart repository. We would prefer to use your 'official' chart rather than maintaining our own because:

  • we would be more confident we are using it as intended
  • we would like to add custom features, but instead of adding them as PRs, we would prefer to make this chart a dependency of our own chart (i.e. via requirements.yaml, because they are specific to our environment). However, to do this the chart must be published, not a GitHub clone.

Also we considered publishing ourselves, but the chart name doesn't match the directory name:

Tims-MBP-4:neuvector-helm timw$ helm package .
Error: directory name (neuvector-helm) and Chart.yaml name (neuvector) must match

EDIT - never mind the last comment - cloning into a directory called neuvector will fix this...

Expose imagePullPolicy setting

As a user of this chart, I currently cannot specify the imagePullPolicy for any of the containers it deploys without using a post renderer or mutation.

I know setting the pull policy to Always is the correct choice for a number of reasons. However, there are scenarios and environments where this is problematic or hinders performance.

I suggest exposing the policy as a Helm variable and setting the default value to Always (see the sketch below).
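
As an illustration only (these value names are the suggestion, not existing chart options):

# proposed values sketch
imagePullPolicy: Always        # chart-wide default, unchanged behavior
controller:
  image:
    pullPolicy: IfNotPresent   # optional per-component override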

Add custom CA for Keycloak OIDC from Helm chart

Hi,

I deployed NeuVector using the neuvector-helm chart, and I would like to add authentication via Keycloak (OpenID Connect).

My values.yaml configuration:

controller:
  secret:
    enabled: true
    data:
      oidcinitcfg.yaml:
        Issuer: https://KEYCLOAK_URL/auth/realms/REALM
        Client_ID: neuvector
        Client_Secret: CLIENT_SECRET
        Scopes:
          - openid
          - profile
          - email
        Enable: true
        Default_Role: reader

However, I get the following error:

2022-09-19T11:58:57.914|ERRO|CTL|rest.handlerAuthLoginServer: User login failed - error=Post "https://KEYCLOAK_URL/auth/realms/REALM/protocol/openid-connect/token": x509: certificate signed by unknown authority server=openId1 

A workaround would be the ability to add your own CA from the chart. By adding the following content to the controller deployment, the authentication works:

        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-volume
          readOnly: true
#...
      volumes:
      - name: ca-volume
        projected:
          defaultMode: 420
          sources:
          - configMap:
              name: custom-ca

What do you think about adding support for a custom CA in the Helm chart?

Tags do not exist at the new registry (registry.neuvector.com)

Hello,
A few days ago you deleted your images from Docker Hub. Now we have to switch to registry.neuvector.com, as you have already changed in values.yaml.

In your new registry no latest tags are present. Additionally, the cve.updater.image and the cve.scanner.image can't be downloaded anymore, at least because of the changed registry credentials and because they still pull from docker.io...

Events:
  Type     Reason   Age                     From     Message
  ----     ------   ----                    ----     -------
  Warning  Failed   19m (x8780 over 33h)    kubelet  Error: ImagePullBackOff
  Normal   BackOff  3m57s (x8847 over 33h)  kubelet  Back-off pulling image "docker.io/neuvector/updater:latest"

I hope you can fix this very soon, because we always use your Helm chart to install the NeuVector application, and if a pod needs to be rescheduled it can't download the container from docker.io on the new node.

Hope to hear from you soon.

Regards, Oli

Failed to pull neuvector images

We are observing ImagePullBackOff errors with recent pod restarts. Are there any changes in image location?

Events:
  Type     Reason   Age                     From     Message
  ----     ------   ----                    ----     -------
  Warning  Failed   32m (x51065 over 8d)    kubelet  Error: ImagePullBackOff
  Normal   BackOff  7m32s (x51174 over 8d)  kubelet  Back-off pulling image "docker.io/neuvector/controller:latest"
  Normal   Pulling  2m36s (x2272 over 8d)   kubelet  Pulling image "docker.io/neuvector/controller:latest"
