
illuminatio - The kubernetes network policy validator



illuminatio is a tool for automatically testing kubernetes network policies. Simply execute illuminatio clean run and illuminatio will scan your kubernetes cluster for network policies, build test cases accordingly and execute them to determine if the policies are in effect.

An overview of the concept is visualized in the concept doc.

Demo


Watch it on asciinema with NetworkPolicy enabled or with NetworkPolicy disabled.

Getting started

Follow these instructions to get illuminatio up and running.

Prerequisites

  • Python 3.6 or greater
  • Pip 3

Installation

with pip:

pip3 install illuminatio

or directly from the repository:

pip3 install git+https://github.com/inovex/illuminatio

Kubectl plugin

In order to use illuminatio as a kubectl plugin, run the following command:

ln -s $(which illuminatio) /usr/local/bin/kubectl-illuminatio

Now cross-check that the plugin is available:

kubectl plugin list --name-only | grep illuminatio
The following compatible plugins are available:

kubectl-illuminatio

Example Usage

Create a Deployment to test with:

kubectl create deployment web --image=nginx
kubectl expose deployment web --port 80 --target-port 80

Define and create a NetworkPolicy for your Deployment:

cat <<EOF | kubectl create -f -
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-deny-all
spec:
  podSelector:
    matchLabels:
      app: web
  ingress: []
EOF

Test your newly created NetworkPolicy:

illuminatio clean run
Starting cleaning resources with policies ['on-request', 'always']
Deleting namespaces [] with cleanup policy on-request
Deleting namespaces [] with cleanup policy always
Deleting DSs in default with cleanup policy on-request
Deleting pods in default with cleanup policy on-request
Deleting svcs in default with cleanup policy on-request
Deleting CfgMaps in default with cleanup policy on-request
Deleting CRBs  with cleanup policy on-request globally
Deleting SAs in default with cleanup policy on-request
Deleting DSs in default with cleanup policy always
Deleting pods in default with cleanup policy always
Deleting svcs in default with cleanup policy always
Deleting CfgMaps in default with cleanup policy always
Deleting CRBs  with cleanup policy always globally
Deleting SAs in default with cleanup policy always
Finished cleanUp

Starting test generation and run.
Got cases: [NetworkTestCase(from=ClusterHost(namespace=default, podLabels={'app': 'web'}), to=ClusterHost(namespace=default, podLabels={'app': 'web'}), port=-*)]
Generated 1 cases in 0.0701 seconds
FROM             TO               PORT
default:app=web  default:app=web  -*

Using existing cluster role
Creating cluster role binding
TestResults: {'default:app=web': {'default:app=web': {'-*': {'success': True}}}}
Finished running 1 tests in 18.7413 seconds
FROM             TO               PORT  RESULT
default:app=web  default:app=web  -*    success

The clean keyword ensures that illuminatio removes any resources created by previous illuminatio runs to prevent potential conflicts; user-created resources are not affected.

PLEASE NOTE that currently each new run requires a clean, as the runners do not continuously look for new cases.

If you want to keep the generated resources, you are free to omit the clean keyword.

Once you are done testing, you can easily delete all resources created by illuminatio:

illuminatio clean

To preview the generated test cases without running them, use the --dry option of illuminatio run:

illuminatio run --dry
Starting test generation and run.
Got cases: [NetworkTestCase(from=ClusterHost(namespace=default, podLabels={'app': 'web'}), to=ClusterHost(namespace=default, podLabels={'app': 'web'}), port=-*)]
Generated 1 cases in 0.0902 seconds
FROM             TO               PORT
default:app=web  default:app=web  -*

Skipping test execution as --dry was set

All options and further information can be found using the --help flag on any level:

illuminatio --help
Usage: illuminatio [OPTIONS] COMMAND1 [ARGS]... [COMMAND2 [ARGS]...]...

Options:
  -v, --verbosity LVL  Either CRITICAL, ERROR, WARNING, INFO or DEBUG
  --incluster
  --help               Show this message and exit.

Commands:
  clean
  run

Docker Usage

Instead of installing the illuminatio CLI on your machine you can also use our Docker image. You will need to provide the kubeconfig to the container and probably some certificates:

docker run -ti -v ~/.kube/config:/kubeconfig:ro inovex/illuminatio illuminatio clean run

Please note that some external authentication mechanisms (e.g. GCP / gcloud CLI) don't work correctly inside the container.

Minikube

Minikube stores its certificates in the user's home directory, so these need to be passed to the container as well:

docker run -ti -v "${HOME}/.minikube":"${HOME}/.minikube" -v "${HOME}/.kube":/home/illuminatio/.kube:ro inovex/illuminatio illuminatio clean run

If the minikube VM is not reachable from your container, try passing the --net=host flag to the docker run command.
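
For example, the full command with host networking would then look like this (same volume mounts as above):

docker run -ti --net=host -v "${HOME}/.minikube":"${HOME}/.minikube" -v "${HOME}/.kube":/home/illuminatio/.kube:ro inovex/illuminatio illuminatio clean run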

Compatibility

illuminatio 1.1 was tested using:

  • python 3.5.2
  • pip 19.2.1

illuminatio 1.1 is confirmed to be working properly with the following kubernetes environments:

  • minikube v0.34.1, kubernetes v1.13.3
  • Google Kubernetes Engine, v1.12.8-gke.10
  • kubeadm 1.15.0-00, kubernetes v1.15.2

PodSecurityPolicy

If your cluster uses the PodSecurityPolicy admission controller, you must ensure that the illuminatio runner is allowed to be created with the following privileges:

  • Runs as root
  • Needs the SYS_ADMIN capability
  • Needs allowPrivilegeEscalation: true
  • Needs access to the hostPath for the network namespaces and the CRI socket

A PodSecurityPolicy granting these privileges needs to be bound to the illuminatio-runner ServiceAccount in the illuminatio namespace. For more details, look at the illuminatio DaemonSet.
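
As a rough sketch, such a policy and the RBAC objects binding it to the runner's ServiceAccount could look like the following. The names illuminatio-psp and illuminatio-psp-user are placeholders, and the allowed volume types and the hostPID setting are assumptions based on the runner requirements listed above; compare with the actual DaemonSet manifest before applying:

cat <<EOF | kubectl create -f -
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: illuminatio-psp
spec:
  # allow the runner to run as root with the SYS_ADMIN capability
  privileged: false
  allowPrivilegeEscalation: true
  allowedCapabilities:
  - SYS_ADMIN
  hostPID: true
  # hostPath is needed for the network namespaces and the CRI socket
  volumes:
  - hostPath
  - configMap
  - secret
  - emptyDir
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: illuminatio-psp-user
rules:
- apiGroups:
  - policy
  resources:
  - podsecuritypolicies
  resourceNames:
  - illuminatio-psp
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: illuminatio-psp-user
  namespace: illuminatio
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: illuminatio-psp-user
subjects:
- kind: ServiceAccount
  name: illuminatio-runner
  namespace: illuminatio
EOF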

References

The logo was created by Pia Blum.

Example Network Policies are inspired by kubernetes-network-policy-recipes

Presentation from ContainerDays 2019, slides

There is also a blog post about the background of illuminatio: illuminatio-kubernetes-network-policy-validator

Contributing

We are happy to read your issues and accept your Pull Requests. This project uses the standard github flow. For more information on developing illuminatio refer to the development docs.

License

This project excluding the logo is licensed under the terms of the Apache 2.0 license. The logo is licensed under the terms of the CC BY-NC-ND 4.0 license.

illuminatio's People

Contributors

adracus, johscheuer, maal, maxbischoff, phil1602


illuminatio's Issues

Make kubeconfig used by illuminatio configurable

illuminatio currently always uses the default kubeconfig location. It should instead use the kubeconfig path provided via the KUBECONFIG environment variable (if set) or via a new --kubeconfig flag, as in kubectl.
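
For illustration, the proposed usage (neither the environment variable nor the flag is implemented yet) would look roughly like this:

KUBECONFIG=~/.kube/other-config illuminatio clean run
illuminatio --kubeconfig ~/.kube/other-config clean run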

Better Result format

While investigating #29 I stumbled over the following "issue": the results format in the illuminatio runner's ConfigMap is not really nice to use in a programmatic way (you need to parse strings). The current format looks like the following:

apiVersion: v1
items:
- apiVersion: v1
  data:
    results: |
      01-deny-all:web-7d65dd8bf4-kn6hz:
        01-deny-all:web:
          '-80': {nmap-state: filtered, string: 'Test 01-deny-all:web:-80 succeeded

              Couldn''t reach 01-deny-all:web on port 80. Expected target to not be reachable',
            success: true}
    runtimes: |
      tests:
        01-deny-all:web-7d65dd8bf4-kn6hz: {'01-deny-all:web': 2.1348140239715576}
  kind: ConfigMap
  metadata:
    creationTimestamp: "2019-08-19T13:40:50Z"
    labels:
      illuminatio-cleanup: always
      illuminatio/runner: result
    name: illuminatio-runner-mrw7l-results
    namespace: illuminatio
    resourceVersion: "7756"
    selfLink: /api/v1/namespaces/illuminatio/configmaps/illuminatio-runner-mrw7l-results
    uid: 754ac8bb-d415-40ad-bf2e-6b1e525daeb8
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

I would propose another results format (assuming that the content below will be part of the ConfigMap data):

Runner: 01-deny-all:web-7d65dd8bf4-kn6hz
Results:
- "01-deny-all:web":
    - "name": "-80": 
      "description": "Expected target to not be reachable ..."
      "result": "success"
      "runtime": "..."

@maxbischoff @hacker-h what is your opinion?

illuminatio should also test endpoints

In the current implementation illuminatio "only" tests the service IP. We should also execute tests that check whether the endpoints are directly reachable (without going through kube-proxy or whatever load balances the service IPs).

This could be helpful to see whether both cases are supported/implemented, e.g. strange behaviour where the service IP is blocked but direct endpoint calls are not, or vice versa.
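
For reference, the endpoint addresses such direct tests would have to target can be listed per Service, e.g. for the web Service from the example above:

kubectl get endpoints web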

Add known issues

We should move all our internal issues into GitHub issues (I will work on this tomorrow).

Automatic illuminatio release

We should add automatic releases for illuminatio based on git tags with sem-ver. The release should include:

  • Docker image (correctly tagged)
  • illuminatio pypi release
  • illuminatio manifests with the corresponding tags

delete clusterrole on clean or split the cleanup steps in two commands

Currently the illuminatio clean command deletes the clusterrole binding but not the clusterrole. Users should be able to easily purge all resources created by illuminatio.
I think this should either happen on clean or in a new command, e.g. purge, which deletes all non-namespaced resources created by illuminatio while clean deletes the namespaced ones.
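
Until then, leftover cluster-scoped resources can be located and removed manually, assuming their names contain illuminatio:

kubectl get clusterroles,clusterrolebindings | grep illuminatio
kubectl delete clusterrole <name-from-output>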

What do you think @johscheuer ?

Extend illuminatio to support custom NetworkPolicies

Currently illuminatio "only" supports Kubernetes Network Policies. Some CNI network providers like Calico and Cilium also implement their own NetworkPolicies (which could interfere with our tests).

Do we want to support them? And if so, what would be a smart/pluggable way to do this?

Document cluster prerequisites

Currently we have only documented the "client" prerequisites. We should also document the cluster prerequisites (and which versions/combinations are tested in the CI).

We should provide at least the following information:

  • Kubernetes version
  • Containerd/Docker version
  • anything else?

Build internal cache for targeted namespaces

Regarding the implementation of the test_orchestrator functions, e.g. _create_project_namespace_if_missing, we should build an internal cache containing all namespaces to work on, in order to improve performance.

Add flag for target namespaces

Currently illuminatio examines the entire cluster; users might want to examine only a single namespace, which would probably be a lot faster.
It would be nice to provide a command line flag for this use case.

Illuminatio runner daemonset fails with enabled Pod Security Policy

  Warning  FailedCreate      5m34s (x17 over 11m)  daemonset-controller  Error creating: pods 
"illuminatio-runner-" is forbidden: unable to validate against any pod security policy: 
[spec.securityContext.hostPID: Invalid value: true: Host PID is not allowed to be used 
spec.volumes[1]: Invalid value: "hostPath": hostPath volumes are not allowed to be used 
spec.volumes[2]: Invalid value: "hostPath": hostPath volumes are not allowed to be used 
spec.containers[0].securityContext.runAsUser: Invalid value: 0: must be in the ranges: [{1 65535}] 
spec.containers[0].securityContext.capabilities.add: Invalid value: "SYS_ADMIN": capability may not 
be added spec.containers[0].securityContext.allowPrivilegeEscalation: Invalid value: true: Allowing 
privilege escalation for containers is not allowed]

Add illuminatio cli docker image

We should provide a Docker image that contains the illuminatio CLI (which makes it easier to use and doesn't require the end user to install packages from PyPI).

Setup travis

Currently we already have a Travis file in place, but the Travis integration is not active.

Support heterogeneous nodes

We should support clusters with different runtimes (is this actually a thing?). The current approach only supports homogeneous nodes (all nodes having the same container runtime).

Make `illuminatio run` idempotent

Currently our suggested way of executing illuminatio is illuminatio clean run. As illuminatio run only works the first time, we should either integrate the clean or make the run idempotent by reusing existing resources where sensible.
What is your opinion on this?

Document tested scenarios

We should document in which environments we tested illuminatio (and which combinations):

  • GKE with Docker (K8s 1.14.?)
  • GKE with containerd (K8s 1.14.?)
  • OpenStack with Docker (K8s 1.14.?)
  • OpenStack with containerd (K8s 1.14.?)

More?

Benchmark CNI Networks

Once #29 is implemented, we can use the test suite for benchmarking CNI networks and see which features are supported by the different CNI providers.

In the end we should document how we tested the different CNI networks and the outcome of our tests (these should be repeatable across different infrastructures).

Add a contributing file

We should add a contributing file describing the desired workflow to make contributions to the repository.

Further runtime integration

We should test if the current approach also works with other runtime implementations:

  • Kata Containers
  • gVisor
  • ... ?

Code clean up

Currently some pylint issues are ignored; we should fix them and add the corresponding checks to the CI pipeline.

Replace list and field-selector calls with the appropriate read/get call

We should replace all namespaced list calls that use field-selector=metadata.name=x with the appropriate get call for better performance. I made a simple test with only 500 Deployments: a get/read needs only 0.276 seconds, while a list call with a field-selector (which has the same return value) needs 6.388 seconds. The list call is roughly 23 times slower:

time kubectl get deployment --field-selector='metadata.name=web'
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
web    1/1     1            1           4m40s
kubectl get deployment --field-selector='metadata.name=web'  0.09s user 0.03s system 1% cpu 6.388 total
time kubectl get deployment web                                 
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
web    1/1     1            1           4m45s
kubectl get deployment web  0.09s user 0.03s system 41% cpu 0.276 total

I didn't test this in depth, but from the implementation of the Kubernetes API server and how resources are accessed, it makes sense that a get is faster than a list with a field-selector.

Implement continuous testing/scanning mode

Currently illuminatio is implemented to run the tests only once. In my mind we should implement a further mode: continuous testing. This mode would allow users to run the tests continuously in their infrastructure (either full tests for v1 of this feature or incremental tests for v2). This actually means we need to make some changes to the current implementation/design. I will write down my ideas on this in the next week(s).

illuminatio should check if DaemonSet Pods are starting

Currently the illuminatio CLI doesn't check whether the DaemonSet Pods are starting or whether they are stuck in a restart loop (for whatever reason). We should add a check that the Pods are up and running; otherwise illuminatio should fail.
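
Until such a check is built in, the runner Pods can be inspected manually; assuming the DaemonSet is named illuminatio-runner and runs in the illuminatio namespace:

kubectl -n illuminatio rollout status daemonset/illuminatio-runner
kubectl -n illuminatio get pods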

Add command for checking cluster network policy conformance

Using the e2e manifests, we could also provide another command, illuminatio conformance-test, which applies these e2e manifests and checks that the network traffic in those setups behaves as expected.
This would enable cluster operators that don't have network policies in their clusters yet to check that their setup correctly enforces various kinds of policies.
That command could also be used for #38. Its implementation should not duplicate #29.

Add more unit tests

We should add more unit tests / integration tests in order to have a better feeling when merging new PRs into master.

Running python3 setup.py test --addopts --runslow on the current master branch gives the following:

---------- coverage: platform darwin, python 3.7.3-final-0 -----------
Name                                       Stmts   Miss Branch BrPart  Cover   Missing
--------------------------------------------------------------------------------------
src/illuminatio/__init__.py                    7      2      0      0    71%   8-9
src/illuminatio/cleaner.py                    56     56     14      0     0%   1-76
src/illuminatio/collect_results.py            75     75     48      0     0%   1-91
src/illuminatio/create_eval_resources.py      57     57     18      0     0%   1-78
src/illuminatio/host.py                      125     54     69     12    49%   9, 12->13, 13, 20->21, 21, 27->28, 28, 29->32, 32, 36->37, 37, 38->43, 43, 49-54, 57-61, 64, 67, 70, 79->80, 80, 81->82, 82, 87->91, 91, 94, 102->103, 103, 109->113, 113-116, 119, 128-133, 136-140, 143, 146-162, 166, 172, 185, 191->194, 194, 197, 203
src/illuminatio/illuminatio.py               167    167     64      0     0%   1-228
src/illuminatio/illuminatio_runner.py        199    199     52      0     0%   1-278
src/illuminatio/k8s_util.py                   46     36     16      0    16%   7-9, 14-18, 22-26, 30-33, 37-46, 50-61, 65
src/illuminatio/rule.py                       75     51     40      0    21%   21, 26, 34-37, 41, 48, 52-66, 71-95, 99-104, 108-113
src/illuminatio/test_case.py                  58      7     31      6    85%   10->11, 11, 12->13, 13, 14->15, 15, 16->17, 17, 25->29, 29, 32, 54->55, 55
src/illuminatio/test_generator.py            166    143     93      0     9%   19, 23-59, 63-133, 136-150, 154-158, 162-168, 172-180, 184, 189-196, 200-210, 214-218, 222-227, 231-237
src/illuminatio/test_orchestrator.py         327    273    124      0    12%   16-25, 29-36, 54, 57, 74-100, 103-151, 154-191, 194-224, 228-233, 237-255, 258, 262, 270-313, 317-342, 347-360, 363-383, 386-401, 404-415, 418-429, 433
src/illuminatio/util.py                       22      4     10      0    75%   17-20
--------------------------------------------------------------------------------------
TOTAL                                       1380   1124    579     18    16%

So we have a total coverage of 16% which means we have a lot to test :)

Implement e2e tests

We should implement e2e tests that validate that illuminatio still works after a PR is submitted. All the e2e manifests already exist (they need to be renamed to match the actual tests); we still need to implement the test cases in pytest, see #27.

Fix PytestUnknownMarkWarning

The following warning comes up when executing the tests and should be fixed:

=============================== warnings summary ===============================
/home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/_pytest/mark/structures.py:324
  /home/travis/virtualenv/python3.7.1/lib/python3.7/site-packages/_pytest/mark/structures.py:324: PytestUnknownMarkWarning: Unknown pytest.mark.e2e - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/latest/mark.html
    PytestUnknownMarkWarning,

e.g. https://travis-ci.org/inovex/illuminatio/jobs/575736370
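
A possible fix, sketched here under the assumption that pytest configuration lives in a pytest.ini at the repository root (the marker description is illustrative), is to register the mark; alternatively the markers entry can go into an existing setup.cfg under [tool:pytest]:

cat > pytest.ini <<'EOF'
[pytest]
markers =
    e2e: end-to-end tests that require a running cluster
EOF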

illuminatio crashes on run

illuminatio crashes on run or clean run when running against a cluster with resources:

illuminatio clean run -o result.yml
Starting cleaning resources with policies ['on-request', 'always']
Deleting namespaces [] with cleanup policy on-request
Deleting namespaces [] with cleanup policy always
Deleting DSs in default with cleanup policy on-request
Deleting DSs in kube-node-lease with cleanup policy on-request
Deleting pods in default with cleanup policy on-request
Deleting pods in kube-node-lease with cleanup policy on-request
Deleting svcs in default with cleanup policy on-request
Deleting svcs in kube-node-lease with cleanup policy on-request
Deleting CfgMaps in default with cleanup policy on-request
Deleting CfgMaps in kube-node-lease with cleanup policy on-request
Deleting CRBs  with cleanup policy on-request globally
Deleting SAs in default with cleanup policy on-request
Deleting SAs in kube-node-lease with cleanup policy on-request
Deleting DSs in default with cleanup policy always
Deleting DSs in kube-node-lease with cleanup policy always
Deleting pods in default with cleanup policy always
Deleting pods in kube-node-lease with cleanup policy always
Deleting svcs in default with cleanup policy always
Deleting svcs in kube-node-lease with cleanup policy always
Deleting CfgMaps in default with cleanup policy always
Deleting CfgMaps in kube-node-lease with cleanup policy always
Deleting CRBs  with cleanup policy always globally
Deleting SAs in default with cleanup policy always
Deleting SAs in kube-node-lease with cleanup policy always
Finished cleanUp

Starting test generation and run.
Traceback (most recent call last):
  File "/home/hhaecker/virtualenvs/kspv/bin/illuminatio", line 11, in <module>
    load_entry_point('illuminatio==1.1.post0.dev32+g17fb87e', 'console_scripts', 'illuminatio')()
  File "/home/hhaecker/virtualenvs/kspv/lib/python3.5/site-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/home/hhaecker/virtualenvs/kspv/lib/python3.5/site-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/home/hhaecker/virtualenvs/kspv/lib/python3.5/site-packages/click/core.py", line 1092, in invoke
    rv.append(sub_ctx.command.invoke(sub_ctx))
  File "/home/hhaecker/virtualenvs/kspv/lib/python3.5/site-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/hhaecker/virtualenvs/kspv/lib/python3.5/site-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/home/hhaecker/virtualenvs/kspv/lib/python3.5/site-packages/illuminatio-1.1.post0.dev32+g17fb87e-py3.5.egg/illuminatio/illuminatio.py", line 75, in run
    cases, gen_run_times = generator.generate_test_cases(net_pols.items, orch.current_namespaces)
  File "/home/hhaecker/virtualenvs/kspv/lib/python3.5/site-packages/illuminatio-1.1.post0.dev32+g17fb87e-py3.5.egg/illuminatio/test_generator.py", line 91, in generate_test_cases
    other_hosts, namespaces)
  File "/home/hhaecker/virtualenvs/kspv/lib/python3.5/site-packages/illuminatio-1.1.post0.dev32+g17fb87e-py3.5.egg/illuminatio/test_generator.py", line 143, in generate_negative_cases_for_incoming_cases
    inverted_hosts = {h for l in {invert_host(host) for host in allowed_hosts} for h in l}
  File "/home/hhaecker/virtualenvs/kspv/lib/python3.5/site-packages/illuminatio-1.1.post0.dev32+g17fb87e-py3.5.egg/illuminatio/test_generator.py", line 143, in <setcomp>
    inverted_hosts = {h for l in {invert_host(host) for host in allowed_hosts} for h in l}
TypeError: unhashable type: 'list'

Resources have been created with the following manifest:

apiVersion: v1
kind: Namespace
metadata:
  name: 02-limit-traffic
---
apiVersion: apps/v1
kind: Deployment
metadata:
  generation: 1
  labels:
    app: bookstore
    role: api
  name: apiserver
  namespace: 02-limit-traffic
spec:
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: bookstore
      role: api
  template:
    metadata:
      labels:
        app: bookstore
        role: api
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: apiserver
        ports:
        - containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: bookstore
    role: api
  name: apiserver
  namespace: 02-limit-traffic
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: bookstore
    role: api
  type: ClusterIP
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: api-allow
  namespace: 02-limit-traffic
spec:
  podSelector:
    matchLabels:
      app: bookstore
      role: api
  ingress:
  - from:
      - podSelector:
          matchLabels:
            app: bookstore

illuminatio crashes when there is no kubeconfig

If there is no kubeconfig, illuminatio crashes:

illuminatio run
Traceback (most recent call last):
  File "/usr/local/bin/illuminatio", line 11, in <module>
    sys.exit(cli())
  File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 764, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 717, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 1146, in invoke
    Command.invoke(self, ctx)
  File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 956, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 555, in invoke
    return callback(*args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/illuminatio/illuminatio.py", line 34, in cli
    k8s.config.load_kube_config()
  File "/usr/local/lib/python3.5/dist-packages/kubernetes/config/kube_config.py", line 645, in load_kube_config
    persist_config=persist_config)

It would be nicer to show a proper error message.

Performance tests

We should add some performance tests in order to see how illuminatio behaves in different environments (e.g. node sizes, number of network policies, number of services/deployments).

Dashboard

We should implement some kind of dashboard for illuminatio that shows:

  • Which tests are failing
  • Network Topology ?
  • Debugging information for failing tests

This would probably improve the UX for new users (and not only for people who are comfortable with a CLI).
