
litmus-go's People

Contributors

andreas131989, ashiskumarnaik, avaakash, cazeaux, elric1, gdsoumya, gpsingh-1991, iassurewipro, ipsita2192, ispeakc0de, jonsy13, jordigilh, machacekondra, masayag, michaelmorrisest, mikhailknyazev, nageshbansal, namkyu1999, neelanjan00, oumkale, piyush0609, s-ayanide, samarsidharth, saptarshisarkar12, smitthakkar96, snyk-bot, tanmaypandey7, uditgaurav, vr00mm, williamhyzhang


litmus-go's Issues

litmuschaos/go-runner:1.13.0 ImagePullBackOff issue

While running the "Pod Network Latency", "Pod I/O Stress", "Pod Network Corruption", and "Pod Network Loss" experiments, we get an ImagePullBackOff error for the "litmuschaos/go-runner:1.13.0" image.
Running the experiment spawns two containers.
One of them starts with the following events:

Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  2m17s  default-scheduler  Successfully assigned ui-consumer-service/pod-io-stress-y164fd-p5ms8 to isvchaosk8t01
  Normal  Pulled     2m16s  kubelet            Container image "litmuschaos/go-runner:1.13.0" already present on machine
  Normal  Created    2m16s  kubelet            Created container pod-io-stress-y164fd
  Normal  Started    2m16s  kubelet            Started container pod-io-stress-y164fd

The other container reports the error below. Since the image is already present on the node, it should have been used, as it was for the container above.
Events:
  Type     Reason   Age                  From     Message
  ----     ------   ----                 ----     -------
  Normal   Pulling  76s (x4 over 2m39s)  kubelet  Pulling image "litmuschaos/go-runner:1.13.0"
  Warning  Failed   76s (x4 over 2m39s)  kubelet  Failed to pull image "litmuschaos/go-runner:1.13.0": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.200.199:53: server misbehaving
  Warning  Failed   76s (x4 over 2m39s)  kubelet  Error: ErrImagePull
  Normal   BackOff  64s (x6 over 2m38s)  kubelet  Back-off pulling image "litmuschaos/go-runner:1.13.0"
  Warning  Failed   51s (x7 over 2m38s)  kubelet  Error: ImagePullBackOff

GCP VM Instance Stop By Label

FEATURE REQUEST

  • Add the Label Selector feature for the GCP VM Instance Stop Experiment so that the target VM instances can be filtered using a label.
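
A minimal sketch of how such a label filter could be applied with the Google Compute API (google.golang.org/api/compute/v1); the project, zone, and label key/value are placeholders, and the filter syntax is an assumption to verify against the GCE API docs:

package main

import (
	"context"
	"fmt"
	"log"

	compute "google.golang.org/api/compute/v1"
)

func main() {
	ctx := context.Background()

	// Uses Application Default Credentials (e.g. the GCP service account key
	// mounted into the experiment pod).
	computeService, err := compute.NewService(ctx)
	if err != nil {
		log.Fatalf("unable to create compute service, err: %v", err)
	}

	// Placeholder project/zone/label; in the experiment these would come from ChaosEngine ENVs.
	project, zone := "my-gcp-project", "us-central1-a"
	filter := "labels.chaos=allowed"

	list, err := computeService.Instances.List(project, zone).Filter(filter).Do()
	if err != nil {
		log.Fatalf("unable to list instances, err: %v", err)
	}
	for _, instance := range list.Items {
		fmt.Printf("target instance: %s (status: %s)\n", instance.Name, instance.Status)
	}
}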

Okteto dev flow docs appear to be missing some needed detail

Just trying to validate a new experiment using the Okteto dev flow steps. UPDATE: changes are now in a draft PR (#230)

This line looks like it needs to be updated:

❯ go run experiments/kube-aws/az-down/experiment/az-down.go 
go run: cannot run non-main package

Instead I ran the experiment with bin/go-runner.go, but it seems I'm missing a step (it wasn't clear how/if I needed to set up a ChaosEngine to test the experiment with Okteto) - any thoughts?

litmus:litmus-experiment okteto> go run bin/go-runner.go -name az-down
W1208 16:01:12.905209    2519 client_config.go:541] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
INFO[2020-12-08T16:01:12Z] Experiment Name: az-down                     
INFO[2020-12-08T16:01:12Z] [PreReq]: Getting the ENV for the  experiment 
FATAL[2020-12-08T16:01:12Z] Unable to initialise probes details from chaosengine, err: Unable to Get the chaosengine, err: resource name may not be empty 
exit status 1

On a side note, I've been using an OpenShift 4 cluster to validate this and needed to add permissions to the experiment service account to run privileged containers so that Okteto could create the litmus-experiment pod, FYI.

GCP VM Instance Stop experiment doesn't wait for target VM instances to shut down when AutoScalingGroup is enabled

BUG REPORT

What happened: The GCP VM Instance Stop experiment doesn't wait for target VM instances that are part of an auto-scaling group to shut down when AutoScalingGroup is enabled. The experiment proceeds to the next steps and seemingly completes with a PASS verdict. Additionally, an invalid value for AUTO_SCALING_GROUP is not validated.

What you expected to happen: The auto-scaled target VM instances should properly shut down as part of the chaos injection before further steps can be initiated. Also, an invalid value for AUTO_SCALING_GROUP should cause an error.

How to reproduce it (as minimally and precisely as possible): Execute the experiment with the AUTO_SCALING_GROUP ENV set to enable in the ChaosEngine manifest of the experiment, for any VM instance that is part of an auto-scaling group. The AUTO_SCALING_GROUP value can also be set to an invalid string to observe the missing validation.
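
A rough sketch of the kind of wait the experiment could perform after issuing the stop call, polling the GCE API until the instance leaves the RUNNING state (assuming the fmt, log, and time imports alongside google.golang.org/api/compute/v1; the status string and polling parameters are assumptions, GCE reports a stopped instance as TERMINATED):

// waitForInstanceStop polls the GCE API until the instance reports a stopped
// state or the timeout (in seconds) expires.
func waitForInstanceStop(computeService *compute.Service, project, zone, name string, timeout, delay int) error {
	for elapsed := 0; elapsed < timeout; elapsed += delay {
		instance, err := computeService.Instances.Get(project, zone, name).Do()
		if err != nil {
			return err
		}
		// Assumed terminal status for a stopped GCE instance.
		if instance.Status == "TERMINATED" {
			return nil
		}
		log.Printf("[Wait]: instance %s is in %s state, retrying...", name, instance.Status)
		time.Sleep(time.Duration(delay) * time.Second)
	}
	return fmt.Errorf("instance %s did not stop within the timeout", name)
}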

Node Restart using Redfish API

FEATURE REQUEST

What happened:
Currently, Litmus uses SSH-based authentication to execute the reboot command on target nodes. For this, we create a key pair and manually add the public key to all target nodes.

What you expected to happen:
Eliminate SSH and use the Redfish API to perform the restart directly on the target node using its IP, username, and password.

Example: reset a Dell iDRAC
API request: a POST to "https:///redfish/v1/Managers/iDRAC.Embedded.1/Actions/Manager.Reset/"
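
A hedged sketch of what such a Redfish call might look like in Go; the BMC address and credentials are placeholders, and the ResetType value (and whether this endpoint reboots the host or the management controller) should be verified against the target's Redfish schema:

package main

import (
	"bytes"
	"crypto/tls"
	"encoding/json"
	"log"
	"net/http"
)

func main() {
	// Placeholder BMC address and credentials; the experiment would read
	// these from ENVs/secrets.
	idracHost := "https://idrac.example.com"
	username, password := "root", "password"

	// ResetType is an assumption; check the allowable values on the target iDRAC.
	body, _ := json.Marshal(map[string]string{"ResetType": "GracefulRestart"})
	req, err := http.NewRequest(http.MethodPost,
		idracHost+"/redfish/v1/Managers/iDRAC.Embedded.1/Actions/Manager.Reset/",
		bytes.NewBuffer(body))
	if err != nil {
		log.Fatalf("unable to build request, err: %v", err)
	}
	req.SetBasicAuth(username, password)
	req.Header.Set("Content-Type", "application/json")

	// Many BMCs ship self-signed certificates; skipping verification is a simplification for this sketch.
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
	resp, err := client.Do(req)
	if err != nil {
		log.Fatalf("reset request failed, err: %v", err)
	}
	defer resp.Body.Close()
	log.Printf("reset request returned status: %s", resp.Status)
}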

GCP VM Disk Loss By Label

FEATURE REQUEST

  • Add the Label Selector feature for the GCP VM Disk Loss Experiment so that the target persistent disk volumes can be filtered using a label.
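
Analogous to the instance-stop sketch earlier, a small fragment that reuses the same compute service client, project, and zone; the filter syntax is again an assumption:

// List only the persistent disks carrying the chaos label.
diskList, err := computeService.Disks.List(project, zone).Filter("labels.chaos=allowed").Do()
if err != nil {
	log.Fatalf("unable to list disks, err: %v", err)
}
for _, disk := range diskList.Items {
	fmt.Printf("target disk: %s\n", disk.Name)
}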

No Auxiliary Application Check in node-memory-hog Experiment and Multiple Node Selection Failure in three Node-Level Experiments

BUG REPORT

What happened:

  1. No auxiliary application status check is performed in the node-memory-hog experiment
  2. Specifying multiple target nodes as comma-separated values in the node-cpu-hog, node-memory-hog, and node-io-stress experiments causes the experiment to fail.

What you expected to happen:

  1. Auxiliary application check should be performed as part of the node-memory-hog experiment.
  2. Each of the target nodes specified as comma-separated values in the node-cpu-hog, node-memory-hog, and node-io-stress experiments should be subjected to chaos.

How to reproduce it (as minimally and precisely as possible):

  1. Running the node-memory-hog experiment with the auxiliaryAppInfo field set to any incorrect string doesn't cause an error, since the auxiliary application check is not performed.
  2. Run the node-cpu-hog, node-memory-hog, or node-io-stress experiment with multiple node names as comma-separated values (see the sketch below).
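
A minimal sketch of splitting the comma-separated node list so that every entry is subjected to chaos (assuming the log, os, and strings imports; the ENV name and the injectChaosOnNode helper are illustrative stand-ins for the experiment's own env handling and per-node chaos logic):

// Split the comma-separated node list and run the chaos on every entry,
// rather than treating the whole value as a single node name.
targetNodes := strings.Split(strings.TrimSpace(os.Getenv("TARGET_NODES")), ",")
for _, node := range targetNodes {
	node = strings.TrimSpace(node)
	if node == "" {
		continue
	}
	if err := injectChaosOnNode(node); err != nil { // hypothetical per-node helper
		log.Fatalf("chaos injection failed on node %s, err: %v", node, err)
	}
}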

Helper pod status check fails when multiple identical chaos experiments run in the same Kubernetes namespace

Version
1.10.0

Bug description
We're using 1.10.0 to run multiple pod-network-loss chaos experiments at the same time; it uses Pumba to create the helper pods. We also set up a single namespace as the chaos namespace, so all the helper pods run in that namespace at the same time. After a while, some chaos experiments showed verdict: Fail, and the Litmus job's pod logs showed:

time="2021-01-19T15:58:16Z" level=info msg="[Status]: Checking the status of the helper pod"  
time="2021-01-19T15:58:16Z" level=info msg="[Status]: Checking whether application containers are in ready state"  
time="2021-01-19T16:02:12Z" level=error msg="Chaos injection failed, err: helper pod is not in running state, err: container is in terminated state" 

User Experience
The helper pods are still running and the chaos effect is still active. The helper pods are not cleaned up afterwards, which causes the verdicts of subsequent chaos experiments to be marked as Fail.

Expected behavior
The chaos experiment finishes as expected with Verdict: Success, and the helper pods are cleaned up.

Support IAM Roles for Service Accounts for AWS related experiments

With IAM roles for service accounts on Amazon EKS clusters, you can associate an IAM role with a Kubernetes service account. This service account can then provide AWS permissions to the containers in any pod that uses it. Instead of using AWS access keys/secrets, we can use the IAM role mapped to a service account, which is a more secure way of accessing AWS services.

The AWS SDK for Go needs to be at least 1.23.13 (https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts-minimum-sdk.html).
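
A minimal sketch of what the credential handling could look like once IRSA is in place, assuming aws-sdk-go >= 1.23.13: with AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE injected by EKS, the SDK's default credential chain picks up the role automatically, so no access key/secret ENVs are needed (the region value is a placeholder):

package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	// With IRSA, EKS injects AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE into
	// the pod, and the SDK's default credential chain exchanges the token for
	// temporary credentials; no static keys are required.
	sess, err := session.NewSession(&aws.Config{Region: aws.String("us-east-1")})
	if err != nil {
		log.Fatalf("unable to create AWS session, err: %v", err)
	}

	ec2Svc := ec2.New(sess)
	out, err := ec2Svc.DescribeInstances(&ec2.DescribeInstancesInput{})
	if err != nil {
		log.Fatalf("unable to describe instances, err: %v", err)
	}
	log.Printf("reservations visible with the IRSA role: %d", len(out.Reservations))
}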

AUT pre-check should not treat a successfully Completed/Terminated container as an error state

https://github.com/litmuschaos/litmus-go/blob/master/pkg/status/application.go
The CheckContainerStatus function treats a Terminated container with reason Completed and exit code 0 as an error and aborts the chaos experiment.
https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/:
A container in the Terminated state began execution and then either ran to completion or failed for some reason. When you use kubectl to query a Pod with a container that is Terminated, you see a reason, an exit code, and the start and finish time for that container's period of execution.
A container in the Terminated state with reason Completed and exit code 0 is healthy enough for the chaos experiment to be performed on the pod, but the current code simply treats it as a reason to abort.

func CheckContainerStatus(appNs, appLabel string, timeout, delay int, clients clients.ClientSets) error {
	err := retry.
		Times(uint(timeout / delay)).
		Wait(time.Duration(delay) * time.Second).
		Try(func(attempt uint) error {
			podList, err := clients.KubeClient.CoreV1().Pods(appNs).List(metav1.ListOptions{LabelSelector: appLabel})
			if err != nil || len(podList.Items) == 0 {
				return errors.Errorf("Unable to find the pods with matching labels, err: %v", err)
			}
			for _, pod := range podList.Items {
				for _, container := range pod.Status.ContainerStatuses {
					if container.State.Terminated != nil {
						return errors.Errorf("container is in terminated state")
					}
					if container.Ready != true {
						return errors.Errorf("containers are not yet in running state")
					}
					log.InfoWithValues("[Status]: The Container status are as follows", logrus.Fields{
						"container": container.Name, "Pod": pod.Name, "Readiness": container.Ready})
				}
			}
			return nil
		})
	if err != nil {
		return err
	}
	return nil
}
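
A minimal sketch of how the inner loop of the function above could be relaxed, treating a container terminated with reason Completed and exit code 0 as healthy instead of aborting; the field names come from the standard corev1 ContainerStatus type, and the surrounding retry/list logic is unchanged:

for _, container := range pod.Status.ContainerStatuses {
	if terminated := container.State.Terminated; terminated != nil {
		// A successfully completed container should not abort the experiment.
		if terminated.Reason == "Completed" && terminated.ExitCode == 0 {
			continue
		}
		return errors.Errorf("container %v is in terminated state", container.Name)
	}
	if !container.Ready {
		return errors.Errorf("containers are not yet in running state")
	}
	log.InfoWithValues("[Status]: The Container status are as follows", logrus.Fields{
		"container": container.Name, "Pod": pod.Name, "Readiness": container.Ready})
}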

Add a "steps" artifact for an experiment

  • Allows us to develop step codes/error codes against it and update them in the chaosresult
  • User-intuitive: users can contribute experiments without having to write the associated code
  • Aids in facilitating restartability of the experiment

container-kill: Sending SIGTERM instead of SIGKILL when using CRIO runtime

I am trying to use the container-kill experiment with OpenShift 4, which uses CRI-O 1.19 as the container runtime.

Checking the code, this experiment is triggered by executing crictl stop container-id [1].
But looking at the underlying terminated container, I see that its exit code is 143, i.e. it received a SIGTERM, not a SIGKILL.
However, if I run a manual test directly from the host and add --timeout=0 to the crictl stop command, the container receives a SIGKILL, i.e. its exit status is 137.
The following PR should fix this issue: #306

Many thanks!

[1]

cmd := exec.Command("crictl", "-i", endpoint, "-r", endpoint, "stop", string(containerID))

[ec2-terminate-by-tag] Handle interval 0/1 case

Is this a BUG REPORT or FEATURE REQUEST?

BUG REPORT

What happened:

In the case of ec2-terminate-by-tag with MANAGED_SUBGROUP=enable (when EC2 instances are managed by an ASG), there is an issue when trying to execute the chaos only once.
In general, setting CHAOS_INTERVAL=TOTAL_CHAOS_DURATION is the way to get a single execution.
But if we set CHAOS_INTERVAL<TOTAL_CHAOS_DURATION, the chaos fails because of the following behavior:

The code loops for the whole CHAOS_DURATION:

for duration < experimentsDetails.ChaosDuration {
and inside it, it loops over the instanceIDList, so it can try to stop the same instance multiple times during the chaos duration. In the case of MANAGED_SUBGROUP=disable, the instance is "stopped" rather than terminated, so it will stop/start/stop the same instance without any issue. But in the case of MANAGED_SUBGROUP=enable, the instance is "terminated", which causes a problem: since the instance is not removed from the instanceIDList, it cannot be stopped in the next iteration because it no longer exists.

The only way to succeed is to set CHAOS_INTERVAL=TOTAL_CHAOS_DURATION, but then we have to wait out CHAOS_INTERVAL for nothing at the end of the first (and only) chaos iteration.

=> The case where the interval is 0/1 should be handled.

The details are explained in this Slack discussion : https://kubernetes.slack.com/archives/CNXNB0ZTN/p1643826054494339?thread_ts=1643739932.025119&cid=CNXNB0ZTN

What you expected to happen:

In the case of MANAGED_SUBGROUP=enable, the instance has to be removed from the instanceIDList to avoid trying to stop it again in the next iterations.
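
A minimal sketch of the expected handling, removing a terminated instance ID from the target slice before the next iteration; the helper name is illustrative, while instanceIDList is the list described above:

// removeInstanceID drops a terminated instance from the target list so that
// later iterations do not try to stop an instance that no longer exists.
func removeInstanceID(instanceIDList []string, terminatedID string) []string {
	remaining := make([]string, 0, len(instanceIDList))
	for _, id := range instanceIDList {
		if id != terminatedID {
			remaining = append(remaining, id)
		}
	}
	return remaining
}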

How to reproduce it (as minimally and precisely as possible):

  • tag an instance with chaos=allowed
  • Launch the experiment with: MANAGED_SUBGROUP=enable, TOTAL_CHAOS_DURATION=500s (a sufficient time to allow the ASG to terminate the stopped instance), CHAOS_INTERVAL=0 (or any value < TOTAL_CHAOS_DURATION), and INSTANCE_TAG='chaos:allowed'
  • the instance is stopped, after several minutes, the instance is terminated by the ASG
  • the code waits for CHAOS_INTERVAL, i.e. 0 seconds
  • the instance is still in the list of instances to stop, so the experiment fails with err: ec2 instance failed to stop, err: IncorrectInstanceState: This instance 'i-0fd0da669ea93c044' is not in a state from which it can be stopped.

The only way to avoid failure is to set CHAOS_INTERVAL=TOTAL_CHAOS_DURATION, but then:

  • the instance is stopped, after several minutes, the instance is terminated by the ASG
  • the code waits for CHAOS_INTERVAL (so 400s for nothing)
  • the experiment is successful (but with a useless waiting period of CHAOS_INTERVAL)

Anything else we need to know?:

@ksatchit and @uditgaurav have already agreed on this missing behavior; thanks to them for the support in understanding this issue 👍

Delete chaosresults as part of cleanup.

BUG REPORT

What happened:
Deleting the chaosengine only deletes the pods (runner, helper, experiment pod) that were created during the chaos test.

What you expected to happen:
It should also delete the chaosresults from the test.

#kubectl delete chaosengine -n litmus network-chaos-1 network-chaos-2
chaosengine.litmuschaos.io "network-chaos-1" deleted
chaosengine.litmuschaos.io "network-chaos-2" deleted

#kubectl get pods -n litmus
NAME                                READY   STATUS    RESTARTS   AGE
chaos-operator-ce-c7cc65966-zz5n4   1/1     Running   0          6h57m

#kubectl get chaosresults -n litmus
NAME                                  AGE
network-chaos-1-pod-network-latency   5m9s
network-chaos-2-pod-network-latency   3m26s
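
Until the operator handles this cleanup itself, a hedged sketch of deleting a leftover chaosresult with client-go's dynamic client; the group/version/resource triplet is inferred from the CRD names shown above, and the context-aware Delete signature assumes a newer client-go release:

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/rest"
)

func main() {
	config, err := rest.InClusterConfig()
	if err != nil {
		log.Fatalf("unable to get cluster config, err: %v", err)
	}
	dynClient, err := dynamic.NewForConfig(config)
	if err != nil {
		log.Fatalf("unable to create dynamic client, err: %v", err)
	}

	// Assumed GVR for the chaosresults custom resource.
	chaosResultGVR := schema.GroupVersionResource{
		Group:    "litmuschaos.io",
		Version:  "v1alpha1",
		Resource: "chaosresults",
	}

	// Delete one of the chaosresults left behind by a deleted chaosengine.
	name, namespace := "network-chaos-1-pod-network-latency", "litmus"
	if err := dynClient.Resource(chaosResultGVR).Namespace(namespace).Delete(context.TODO(), name, metav1.DeleteOptions{}); err != nil {
		log.Fatalf("unable to delete chaosresult %s, err: %v", name, err)
	}
}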

Add OnChaos mode to pod-delete experiment

I have a k8sProbe with mode "OnChaos" in the pod-delete experiment, and it is not able to run. It seems the OnChaos mode is not yet supported for pod-delete.

Ideally, the "OnChaos" mode should be enabled for all generic experiments.

Support disk fill using pod exec commands

FEATURE REQUEST

  • We currently run Open Policy Agent on our K8s clusters, which limits our ability to create Litmus helper pods on specific nodes; OPA has rules enabled that restrict this activity. We would like a mode for the disk-fill experiment that runs the disk fill using pod exec commands, as is done for pod-cpu-hog and pod-memory-hog (see the sketch below).
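
A hedged sketch of what such an exec-based mode might look like with client-go's remotecommand package, running a dd fill inside the target container; the namespace, pod, container names, fill size, and path are placeholders:

package main

import (
	"log"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	config, err := rest.InClusterConfig()
	if err != nil {
		log.Fatalf("unable to get cluster config, err: %v", err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatalf("unable to create clientset, err: %v", err)
	}

	// Placeholder target details; the experiment would derive these from ENVs.
	namespace, podName, containerName := "default", "target-pod", "target-container"
	fillCommand := []string{"sh", "-c", "dd if=/dev/urandom of=/tmp/litmus-disk-fill bs=1M count=512"}

	// Build the exec request against the target pod/container.
	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").
		Name(podName).
		Namespace(namespace).
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: containerName,
			Command:   fillCommand,
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	executor, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		log.Fatalf("unable to create executor, err: %v", err)
	}
	if err := executor.Stream(remotecommand.StreamOptions{Stdout: os.Stdout, Stderr: os.Stderr}); err != nil {
		log.Fatalf("disk fill command failed, err: %v", err)
	}
}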

Refactoring of pod-delete go experiment

Description

In the present scenario, we are using the pod-delete Go experiment with the helper pod approach. This issue will track the discussion to find the better way: with or without a helper. I am putting down my thoughts both in support of and against each option.

HELPER VS WITHOUT HELPER

  • We have a few questions about the pod-delete helper pod: why are we using two pods (one helper, one main chaos pod) to delete a single pod? Our helper pod doesn't need any extra privileges or anything related to the accessibility of the target pod (such as being scheduled on a particular node). As the number of pods/resources used for chaos increases, so does the complexity of cleanup (the helper pod needs to be cleaned up after chaos injection, or retained for debugging if the experiment fails).

ITERATIONS VS CHAOS_DURATION

  • In the present scenario, we derive the number of iterations as duration/interval and terminate pods until the duration has passed. It is not guaranteed to run for all iterations (the duration can be reached before that). Since pod-delete is a simple chaos experiment, can we support both? We can have both requirements, like:

    • the need to delete replicas for x iterations, independent of time
    • the need to delete replicas for x duration, independent of iterations
  • Let users decide what they want, so they are clear about the chaos they applied.

KILL_COUNT VS PERCENTAGE

  • Right now we support a kill count (the number of replicas with matching labels to be deleted). In other experiments, we have a percentage ENV. Do we want to use the same here, or retain the kill count? Both are fine, as they serve the same purpose (neutral on this).

Result with experiment details

  • As part of the refactoring, we can add the experiment's result-related details to the chaos-result, derived inside the experiment and passed as a blob. This is tracked by litmuschaos/litmus#1254.

Stopping/aborting the network delay experiment does not restore the original settings on the target pod's interface

Stopping/aborting the network delay experiment does not remove the added delay on the target pod's interface.

I ran the network delay test with a duration of 5 minutes and stopped the experiment after 2 minutes. It deleted all the resources (runner pod and helper pods), but the introduced delay was still active on the target pod's interface.
Is there a way to clean up the introduced delay on target pods?

Output from the pod after stopping the chaos; the applied 500ms delay is still in effect:

/ # ping 108.250.140.21
PING 108.250.140.21 (108.250.140.21): 56 data bytes
64 bytes from 108.250.140.21: seq=0 ttl=63 time=500.754 ms
64 bytes from 108.250.140.21: seq=1 ttl=63 time=500.614 ms
64 bytes from 108.250.140.21: seq=2 ttl=63 time=500.537 ms
64 bytes from 108.250.140.21: seq=3 ttl=63 time=501.802 ms
64 bytes from 108.250.140.21: seq=4 ttl=63 time=500.566 ms
64 bytes from 108.250.140.21: seq=5 ttl=63 time=503.307 ms
64 bytes from 108.250.140.21: seq=6 ttl=63 time=500.575 ms
64 bytes from 108.250.140.21: seq=7 ttl=63 time=500.656 ms
64 bytes from 108.250.140.21: seq=8 ttl=63 time=500.632 ms
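
As a hedged manual workaround, the netem qdisc that carries the delay can be deleted from the pod's interface, either by exec-ing into the target pod (if the tc binary is available in its image) or from the node via nsenter into the pod's network namespace. A minimal Go wrapper around that command, assuming the log and os/exec imports and eth0 as the interface name:

// Removes the root qdisc (including the netem delay) from the interface.
// Equivalent to running: tc qdisc del dev eth0 root
cmd := exec.Command("tc", "qdisc", "del", "dev", "eth0", "root")
if out, err := cmd.CombinedOutput(); err != nil {
	log.Fatalf("failed to remove qdisc, err: %v, output: %s", err, string(out))
}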

Add the conditional helper in the experiments

Description

  • We can have different blast-radius values for an experiment, based on the use case. Here are some sample use cases:

    • Suppose we have to inject chaos on a single replica of a deployment; in that case it would be better to have a single helper pod, which lets us inject the chaos using minimal resources.
    • If we want to inject chaos on some percentage of replicas, the single-pod approach is not the best fit; instead, we can use a DaemonSet.
  • Considering the above use cases, we can make the chaos experiment smarter so that it invokes a helper pod or a DaemonSet based on the blast radius.

[node-io-stress] ContainerCannotRun (go-runner:1.8.2)

The node-io-stress experiment fails to start its container.

Events:
  Type     Reason   Age   From     Message
  ----     ------   ----  ----     -------
  Normal   Pulling  36s   kubelet  Pulling image "litmuschaos/go-runner:1.8.2"
  Normal   Pulled   34s   kubelet  Successfully pulled image "litmuschaos/go-runner:1.8.2" in 2.107156699s
  Normal   Created  33s   kubelet  Created container node-io-stress
  Warning  Failed   33s   kubelet  Error: failed to start container "node-io-stress": Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: exec: "/stress-ng": stat /stress-ng: no such file or directory: unknown

I also tested node-io-stress with go-runner:1.8.1 and it works, so it seems there were some build changes related to stress-ng.

Could you please investigate it? Thanks!

pod-autoscaler experiment uses wrong namespace while querying deployment to recover

Please see chaoslib/litmus/pod-autoscaler/lib/pod-autoscaler.go:143 (for go-runner:1.8.1 )

The deployment is in AppNS while the query uses ChaosNamespace. The code should use:

applicationClient := clients.KubeClient.AppsV1().Deployments(experimentsDetails.AppNS)

instead of the current:

applicationClient := clients.KubeClient.AppsV1().Deployments(experimentsDetails.ChaosNamespace)

So when the app-under-test is in, say, the "default" namespace, while ChaosNamespace is kept at its default value of "litmus":
experimentDetails.ChaosNamespace = Getenv("CHAOS_NAMESPACE", "litmus")
we have the following error:

time="2020-10-12T23:07:39Z" level=error msg="Chaos injection failed due to Unable to recover the auto scaling, due to Unable to scale the, due to: Failed to get latest version of Application Deployment: deployments.apps \"nginx\" not found\n"

Selecting a random pod if the specified target pod is not found

Is this a BUG REPORT or FEATURE REQUEST?

Choose one: BUG REPORT

What happened:

  • In the pod-level experiments, a random pod is selected for chaos if the pod specified in TARGET_POD is not found. Ideally, it should return an error if the specified pod is not available for chaos (see the sketch below).
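
A minimal sketch of the expected lookup, returning an error when the named pod is absent instead of silently falling back to a random one (assuming the fmt import and the corev1 types from k8s.io/api/core/v1):

// findTargetPod returns the pod named in TARGET_POD, or an error if it is
// not present in the list of candidate pods.
func findTargetPod(podList *corev1.PodList, targetPod string) (*corev1.Pod, error) {
	for i := range podList.Items {
		if podList.Items[i].Name == targetPod {
			return &podList.Items[i], nil
		}
	}
	return nil, fmt.Errorf("the specified target pod %q is not available for chaos", targetPod)
}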

AUT precheck checks all deployments with appLabel and does not ignore the deployments without the annotation

In 1.12.2, Litmus allows the appLabel to be applied to more than one deployment. Some of the deployments carry the chaos=true annotation, and annotationCheck is true in the ChaosEngine. In such a scenario, the AUT precheck should only check the annotated deployments, but the precheck function checks the health status of all the deployments.
pod-delete.go:
err = status.CheckApplicationStatus(experimentsDetails.AppNS, experimentsDetails.AppLabel, experimentsDetails.Timeout, experimentsDetails.Delay, clients)
application.go:
// CheckApplicationStatus checks the status of the AUT
func CheckApplicationStatus(appNs, appLabel string, timeout, delay int, clients clients.ClientSets) error {
	switch appLabel {
	case "":
		// Checking whether applications are healthy
		log.Info("[Status]: Checking whether applications are in healthy state")
		err := CheckPodAndContainerStatusInAppNs(appNs, timeout, delay, clients)
		if err != nil {
			return err
		}
	default:
		// Checking whether application containers are in ready state
		log.Info("[Status]: Checking whether application containers are in ready state")
		err := CheckContainerStatus(appNs, appLabel, timeout, delay, clients)
		if err != nil {
			return err
		}
		// Checking whether application pods are in running state
		log.Info("[Status]: Checking whether application pods are in running state")
		err = CheckPodStatus(appNs, appLabel, timeout, delay, clients)
		if err != nil {
			return err
		}
	}
	return nil
}
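
A hedged sketch of how the precheck could skip deployments that have not opted in to chaos; the annotation key (litmuschaos.io/chaos) is an assumption based on the annotationCheck convention, and each deployment's own selector is converted back to a label string with metav1.FormatLabelSelector so the existing status checks can be reused:

// checkAnnotatedDeployments runs the status checks only against deployments
// carrying the chaos annotation, instead of every deployment matching appLabel.
func checkAnnotatedDeployments(appNs, appLabel string, timeout, delay int, clients clients.ClientSets) error {
	deployList, err := clients.KubeClient.AppsV1().Deployments(appNs).List(metav1.ListOptions{LabelSelector: appLabel})
	if err != nil || len(deployList.Items) == 0 {
		return errors.Errorf("unable to find deployments with matching labels, err: %v", err)
	}
	for _, deploy := range deployList.Items {
		// Assumed annotation key; skip deployments not opted in for chaos.
		if deploy.Annotations["litmuschaos.io/chaos"] != "true" {
			continue
		}
		selector := metav1.FormatLabelSelector(deploy.Spec.Selector)
		if err := CheckContainerStatus(appNs, selector, timeout, delay, clients); err != nil {
			return err
		}
		if err := CheckPodStatus(appNs, selector, timeout, delay, clients); err != nil {
			return err
		}
	}
	return nil
}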

Docker Container Image Vulnerability Check - 2021-07-30

Is this a BUG REPORT or FEATURE REQUEST?

It is a BUG REPORT.

What happened:
We observed the following vulnerability scan report for the Docker container image, using the Trivy image scanning tool.

2021-07-29T13:38:11.2187979Z 2021-07-29T13:38:11.217Z	INFO	Detecting Alpine vulnerabilities...
2021-07-29T13:38:11.2203896Z 2021-07-29T13:38:11.219Z	INFO	Number of language-specific files: 6
2021-07-29T13:38:11.2204899Z 2021-07-29T13:38:11.219Z	INFO	Detecting gobinary vulnerabilities...
2021-07-29T13:38:11.2238865Z 
2021-07-29T13:38:11.2240010Z litmuschaos/go-runner:1.13.8 (alpine 3.13.5)
2021-07-29T13:38:11.2240527Z ============================================
2021-07-29T13:38:11.2241005Z Total: 4 (MEDIUM: 4, HIGH: 0, CRITICAL: 0)
2021-07-29T13:38:11.2241273Z 
2021-07-29T13:38:11.2241988Z +---------+------------------+----------+-------------------+---------------+---------------------------------------+
2021-07-29T13:38:11.2242686Z | LIBRARY | VULNERABILITY ID | SEVERITY | INSTALLED VERSION | FIXED VERSION |                 TITLE                 |
2021-07-29T13:38:11.2249533Z +---------+------------------+----------+-------------------+---------------+---------------------------------------+
2021-07-29T13:38:11.2250760Z | curl    | CVE-2021-22922   | MEDIUM   | 7.77.0-r1         | 7.78.0-r0     | curl: wrong content via               |
2021-07-29T13:38:11.2251494Z |         |                  |          |                   |               | metalink is not being discarded       |
2021-07-29T13:38:11.2252376Z |         |                  |          |                   |               | -->avd.aquasec.com/nvd/cve-2021-22922 |
2021-07-29T13:38:11.2258617Z +         +------------------+          +                   +               +---------------------------------------+
2021-07-29T13:38:11.2259705Z |         | CVE-2021-22923   |          |                   |               | curl: Metalink download               |
2021-07-29T13:38:11.2260431Z |         |                  |          |                   |               | sends credentials                     |
2021-07-29T13:38:11.2261312Z |         |                  |          |                   |               | -->avd.aquasec.com/nvd/cve-2021-22923 |
2021-07-29T13:38:11.2262202Z +---------+------------------+          +                   +               +---------------------------------------+
2021-07-29T13:38:11.2263646Z | libcurl | CVE-2021-22922   |          |                   |               | curl: wrong content via               |
2021-07-29T13:38:11.2264530Z |         |                  |          |                   |               | metalink is not being discarded       |
2021-07-29T13:38:11.2266854Z |         |                  |          |                   |               | -->avd.aquasec.com/nvd/cve-2021-22922 |
2021-07-29T13:38:11.2267787Z +         +------------------+          +                   +               +---------------------------------------+
2021-07-29T13:38:11.2268713Z |         | CVE-2021-22923   |          |                   |               | curl: Metalink download               |
2021-07-29T13:38:11.2269432Z |         |                  |          |                   |               | sends credentials                     |
2021-07-29T13:38:11.2270307Z |         |                  |          |                   |               | -->avd.aquasec.com/nvd/cve-2021-22923 |
2021-07-29T13:38:11.2271190Z +---------+------------------+----------+-------------------+---------------+---------------------------------------+
2021-07-29T13:38:11.2281737Z 
2021-07-29T13:38:11.2282206Z litmus/experiments (gobinary)
2021-07-29T13:38:11.2283921Z =============================
2021-07-29T13:38:11.2285690Z Total: 6 (MEDIUM: 5, HIGH: 1, CRITICAL: 0)
2021-07-29T13:38:11.2286966Z 
2021-07-29T13:38:11.2298684Z +---------------------+------------------+----------+------------------------------------+------------------------------------+---------------------------------------+
2021-07-29T13:38:11.2299650Z |       LIBRARY       | VULNERABILITY ID | SEVERITY |         INSTALLED VERSION          |           FIXED VERSION            |                 TITLE                 |
2021-07-29T13:38:11.2300824Z +---------------------+------------------+----------+------------------------------------+------------------------------------+---------------------------------------+
2021-07-29T13:38:11.2302017Z | golang.org/x/crypto | CVE-2020-29652   | HIGH     | v0.0.0-20200622213623-75b288015ac9 | v0.0.0-20201216223049-8b5274cf687f | golang: crypto/ssh: crafted           |
2021-07-29T13:38:11.2303163Z |                     |                  |          |                                    |                                    | authentication request can            |
2021-07-29T13:38:11.2313693Z |                     |                  |          |                                    |                                    | lead to nil pointer dereference       |
2021-07-29T13:38:11.2321287Z |                     |                  |          |                                    |                                    | -->avd.aquasec.com/nvd/cve-2020-29652 |
2021-07-29T13:38:11.2326877Z +---------------------+------------------+----------+------------------------------------+------------------------------------+---------------------------------------+
2021-07-29T13:38:11.2328245Z | k8s.io/client-go    | CVE-2019-11250   | MEDIUM   | v0.0.0-20191016111102-bec269661e48 | v0.17.0                            | kubernetes: Bearer tokens             |
2021-07-29T13:38:11.2329152Z |                     |                  |          |                                    |                                    | written to logs at high               |
2021-07-29T13:38:11.2329965Z |                     |                  |          |                                    |                                    | verbosity levels (>= 7)...            |
2021-07-29T13:38:11.2331034Z |                     |                  |          |                                    |                                    | -->avd.aquasec.com/nvd/cve-2019-11250 |
2021-07-29T13:38:11.2332103Z +                     +------------------+          +                                    +------------------------------------+---------------------------------------+
2021-07-29T13:38:11.2333210Z |                     | CVE-2020-8565    |          |                                    | v0.20.0-alpha.2                    | kubernetes: Incomplete fix            |
2021-07-29T13:38:11.2334349Z |                     |                  |          |                                    |                                    | for CVE-2019-11250 allows for         |
2021-07-29T13:38:11.2335176Z |                     |                  |          |                                    |                                    | token leak in logs when...            |
2021-07-29T13:38:11.2336236Z |                     |                  |          |                                    |                                    | -->avd.aquasec.com/nvd/cve-2020-8565  |
2021-07-29T13:38:11.2337300Z +---------------------+------------------+          +------------------------------------+------------------------------------+---------------------------------------+
2021-07-29T13:38:11.2338438Z | k8s.io/kubernetes   | CVE-2020-8554    |          | v1.17.3                            |                                    | kubernetes: MITM using                |
2021-07-29T13:38:11.2339313Z |                     |                  |          |                                    |                                    | LoadBalancer or ExternalIPs           |
2021-07-29T13:38:11.2340335Z |                     |                  |          |                                    |                                    | -->avd.aquasec.com/nvd/cve-2020-8554  |
2021-07-29T13:38:11.2341370Z +                     +------------------+          +                                    +------------------------------------+---------------------------------------+
2021-07-29T13:38:11.2342594Z |                     | CVE-2020-8564    |          |                                    | v1.20.0-alpha.1                    | kubernetes: Docker config             |
2021-07-29T13:38:11.2343455Z |                     |                  |          |                                    |                                    | secrets leaked when file is           |
2021-07-29T13:38:11.2344213Z |                     |                  |          |                                    |                                    | malformed and loglevel >=...          |
2021-07-29T13:38:11.2345372Z |                     |                  |          |                                    |                                    | -->avd.aquasec.com/nvd/cve-2020-8564  |
2021-07-29T13:38:11.2346386Z +                     +------------------+          +                                    +------------------------------------+---------------------------------------+
2021-07-29T13:38:11.2347530Z |                     | CVE-2020-8565    |          |                                    | v1.20.0-alpha.2                    | kubernetes: Incomplete fix            |
2021-07-29T13:38:11.2348668Z |                     |                  |          |                                    |                                    | for CVE-2019-11250 allows for         |
2021-07-29T13:38:11.2349597Z |                     |                  |          |                                    |                                    | token leak in logs when...            |
2021-07-29T13:38:11.2350662Z |                     |                  |          |                                    |                                    | -->avd.aquasec.com/nvd/cve-2020-8565  |
2021-07-29T13:38:11.2351720Z +---------------------+------------------+----------+------------------------------------+------------------------------------+---------------------------------------+
2021-07-29T13:38:11.2352216Z 
2021-07-29T13:38:11.2352554Z litmus/helpers (gobinary)
2021-07-29T13:38:11.2352936Z =========================
2021-07-29T13:38:11.2353327Z Total: 3 (MEDIUM: 2, HIGH: 1, CRITICAL: 0)
2021-07-29T13:38:11.2353561Z 
2021-07-29T13:38:11.2354314Z +---------------------+------------------+----------+------------------------------------+------------------------------------+---------------------------------------+
2021-07-29T13:38:11.2355097Z |       LIBRARY       | VULNERABILITY ID | SEVERITY |         INSTALLED VERSION          |           FIXED VERSION            |                 TITLE                 |
2021-07-29T13:38:11.2356111Z +---------------------+------------------+----------+------------------------------------+------------------------------------+---------------------------------------+
2021-07-29T13:38:11.2357216Z | golang.org/x/crypto | CVE-2020-29652   | HIGH     | v0.0.0-20200622213623-75b288015ac9 | v0.0.0-20201216223049-8b5274cf687f | golang: crypto/ssh: crafted           |
2021-07-29T13:38:11.2358054Z |                     |                  |          |                                    |                                    | authentication request can            |
2021-07-29T13:38:11.2358810Z |                     |                  |          |                                    |                                    | lead to nil pointer dereference       |
2021-07-29T13:38:11.2359826Z |                     |                  |          |                                    |                                    | -->avd.aquasec.com/nvd/cve-2020-29652 |
2021-07-29T13:38:11.2360890Z +---------------------+------------------+----------+------------------------------------+------------------------------------+---------------------------------------+
2021-07-29T13:38:11.2362046Z | k8s.io/client-go    | CVE-2019-11250   | MEDIUM   | v0.0.0-20191016111102-bec269661e48 | v0.17.0                            | kubernetes: Bearer tokens             |
2021-07-29T13:38:11.2362932Z |                     |                  |          |                                    |                                    | written to logs at high               |
2021-07-29T13:38:11.2363734Z |                     |                  |          |                                    |                                    | verbosity levels (>= 7)...            |
2021-07-29T13:38:11.2364792Z |                     |                  |          |                                    |                                    | -->avd.aquasec.com/nvd/cve-2019-11250 |
2021-07-29T13:38:11.2365855Z +                     +------------------+          +                                    +------------------------------------+---------------------------------------+
2021-07-29T13:38:11.2367118Z |                     | CVE-2020-8565    |          |                                    | v0.20.0-alpha.2                    | kubernetes: Incomplete fix            |
2021-07-29T13:38:11.2368252Z |                     |                  |          |                                    |                                    | for CVE-2019-11250 allows for         |
2021-07-29T13:38:11.2369020Z |                     |                  |          |                                    |                                    | token leak in logs when...            |
2021-07-29T13:38:11.2370095Z |                     |                  |          |                                    |                                    | -->avd.aquasec.com/nvd/cve-2020-8565  |
2021-07-29T13:38:11.2371109Z +---------------------+------------------+----------+------------------------------------+------------------------------------+---------------------------------------+
2021-07-29T13:38:11.2371520Z 
2021-07-29T13:38:11.2371844Z usr/local/bin/dns_interceptor (gobinary)
2021-07-29T13:38:11.2372217Z ========================================
2021-07-29T13:38:11.2372625Z Total: 0 (MEDIUM: 0, HIGH: 0, CRITICAL: 0)
2021-07-29T13:38:11.2372860Z 
2021-07-29T13:38:11.2372987Z 
2021-07-29T13:38:11.2373292Z usr/local/bin/nsutil (gobinary)
2021-07-29T13:38:11.2373645Z ===============================
2021-07-29T13:38:11.2374034Z Total: 0 (MEDIUM: 0, HIGH: 0, CRITICAL: 0)
2021-07-29T13:38:11.2374253Z 
2021-07-29T13:38:11.2374382Z 
2021-07-29T13:38:11.2374690Z usr/local/bin/promql (gobinary)
2021-07-29T13:38:11.2375043Z ===============================
2021-07-29T13:38:11.2375434Z Total: 1 (MEDIUM: 1, HIGH: 0, CRITICAL: 0)
2021-07-29T13:38:11.2375651Z 
2021-07-29T13:38:11.2376319Z +------------------+------------------+----------+-------------------+---------------+---------------------------------------+
2021-07-29T13:38:11.2376984Z |     LIBRARY      | VULNERABILITY ID | SEVERITY | INSTALLED VERSION | FIXED VERSION |                 TITLE                 |
2021-07-29T13:38:11.2377838Z +------------------+------------------+----------+-------------------+---------------+---------------------------------------+
2021-07-29T13:38:11.2378759Z | gopkg.in/yaml.v2 | CVE-2019-11254   | MEDIUM   | v2.2.2            | v2.2.8        | kubernetes: Denial of                 |
2021-07-29T13:38:11.2379460Z |                  |                  |          |                   |               | service in API server via             |
2021-07-29T13:38:11.2380098Z |                  |                  |          |                   |               | crafted YAML payloads by...           |
2021-07-29T13:38:11.2380965Z |                  |                  |          |                   |               | -->avd.aquasec.com/nvd/cve-2019-11254 |
2021-07-29T13:38:11.2381832Z +------------------+------------------+----------+-------------------+---------------+---------------------------------------+
2021-07-29T13:38:11.2382167Z 
2021-07-29T13:38:11.2382625Z usr/local/bin/pumba (gobinary)
2021-07-29T13:38:11.2382980Z ==============================
2021-07-29T13:38:11.2383371Z Total: 1 (MEDIUM: 0, HIGH: 1, CRITICAL: 0)
2021-07-29T13:38:11.2383637Z 
2021-07-29T13:38:11.2384371Z +--------------------------+------------------+----------+-------------------+---------------+--------------------------------------+
2021-07-29T13:38:11.2385098Z |         LIBRARY          | VULNERABILITY ID | SEVERITY | INSTALLED VERSION | FIXED VERSION |                TITLE                 |
2021-07-29T13:38:11.2386035Z +--------------------------+------------------+----------+-------------------+---------------+--------------------------------------+
2021-07-29T13:38:11.2386990Z | github.com/gogo/protobuf | CVE-2021-3121    | HIGH     | v1.3.1            | v1.3.2        | gogo/protobuf:                       |
2021-07-29T13:38:11.2387807Z |                          |                  |          |                   |               | plugin/unmarshal/unmarshal.go        |
2021-07-29T13:38:11.2388620Z |                          |                  |          |                   |               | lacks certain index validation       |
2021-07-29T13:38:11.2389519Z |                          |                  |          |                   |               | -->avd.aquasec.com/nvd/cve-2021-3121 |
2021-07-29T13:38:11.2390407Z +--------------------------+------------------+----------+-------------------+---------------+--------------------------------------+
2021-07-29T13:38:11.3867958Z Vulnerabilities found.
2021-07-29T13:38:11.3898449Z ##[error]Bash exited with code '1'.
2021-07-29T13:38:11.3947357Z ##[section]Finishing: Scan Docker container image

What you expected to happen:
Since maintaining a tested version of the go-runner Docker image in a user-specific, private container registry (instead of using the publicly available version from a public image registry) is a best practice for production-grade container deployments, it would be ideal to provide users with an image that is as free of vulnerabilities as possible.

We would appreciate it if you could look into the detected vulnerabilities. If LitmusChaos uses a different image scanning tool, we would appreciate details about its vulnerability checks.

How to reproduce it (as minimally and precisely as possible):
Scan the image with the Trivy Docker image scanning tool.

VMWare VM-Poweroff Experiment Enhancements

FEATURE REQUEST

VMWare VM-Poweroff experiment currently lacks the following functionalities:

  1. The experiment lacks proper error handling around the API calls, which can cause the experiment run to panic if something goes wrong during a call.
  2. The experiment lacks the functionality of injecting chaos in multiple VMs in serial or parallel mode.
  3. The experiment lacks the logic for waiting through the duration for the VM to fully shut down or fully start before commencing the next step.

Expected Functionality

  1. Proper error handling during VMWare API calls
  2. Functionality of chaos injection in multiple VMs in serial and parallel mode
  3. Logic for waiting through the duration for the VM to fully shut down or fully start before commencing the next step (see the sketch below).
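
For item 3, a rough sketch of the waiting logic, assuming the fmt and time imports and a hypothetical getVMStatus helper that stands in for whatever vCenter API call the experiment already makes; the power-state strings and the helper signature are illustrative only:

// waitForVMState polls a hypothetical getVMStatus helper until the VM reaches
// the desired power state (e.g. "poweredOff" or "poweredOn") or times out.
func waitForVMState(vcenterServer, vmID, cookie, desiredState string, timeout, delay int) error {
	for elapsed := 0; elapsed < timeout; elapsed += delay {
		state, err := getVMStatus(vcenterServer, vmID, cookie) // hypothetical helper
		if err != nil {
			return err
		}
		if state == desiredState {
			return nil
		}
		time.Sleep(time.Duration(delay) * time.Second)
	}
	return fmt.Errorf("VM %s did not reach the %s state within the timeout", vmID, desiredState)
}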
