
Introduction

You can get stuff like this with Network Policies...

Kubernetes Network Policy Recipes

This repository contains various use cases of Kubernetes Network Policies and sample YAML files to leverage in your setup. If you have ever wondered how to drop or restrict traffic to applications running on Kubernetes, read on.

The easiest way to try out Network Policies is to create a new Google Kubernetes Engine cluster, as applying Network Policies to an existing cluster can disrupt its networking. At the time of writing, most cloud providers do not provide built-in network policy support.

If you are not familiar with Network Policies at all, I recommend reading my Securing Kubernetes Cluster Networking article first.

NetworkPolicy Crash Course

NetworkPolicies operate at layer 3 or 4 of the OSI model (IP and port level). They are used to control traffic into (ingress) and out of (egress) pods.

Here are some NetworkPolicy gotchas:

  • An empty selector will match everything. For example, spec.podSelector: {} will apply the policy to all pods in the current namespace.

  • Selectors can only select Pods that are in the same namespace as the NetworkPolicy. E.g. the podSelector of an ingress rule can only select pods in the namespace the NetworkPolicy is deployed to.

  • If no NetworkPolicy targets a pod, all traffic to and from that pod is allowed. In other words, all traffic is allowed until a policy is applied.

  • There are no deny rules in NetworkPolicies. They are deny-by-default, allow-explicitly. It's the same as saying "If you're not on the list, you can't get in."

  • If a NetworkPolicy matches a pod but has a null rule, all traffic is blocked. An example of this is the "deny all traffic" policy below.

spec:
  podSelector:
    matchLabels:
      ...
  ingress: []
  • NetworkPolicies are additive. If multiple NetworkPolicies select a pod, the union of their rules is evaluated and applied to that pod (see the sketch below).
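
A minimal sketch of that additivity (the app=web and app=client labels are only illustrative): a deny-all policy and a narrower allow policy applied together leave the pod accepting only the traffic the allow policy names.

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-deny-all
spec:
  podSelector:
    matchLabels:
      app: web
  ingress: []             # null rule: blocks all ingress on its own
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-allow-client
spec:
  podSelector:
    matchLabels:
      app: web
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: client     # union with the policy above: only app=client gets in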

Before you begin

I really recommend watching my KubeCon talk on Network Policies if you want to get a good understanding of this feature. It will help you understand this repo better.

Basics

Namespaces

Serving External Traffic

Advanced

Controlling Outbound (Egress) Traffic 🔥🆕🔥


Author

Created by Ahmet Alp Balkan (@ahmetb).

Copyright 2017, Google Inc. Distributed under Apache License Version 2.0; see LICENSE for details.

Disclaimer: This is not an official Google product.


Contributors

ahmetb, ammaristotle, artturik, avinashdesireddy, blarc, bmcustodio, boredabdel, corentindy, ericyz, fmoctezuma, gre8t, hamzablm, j-zimnowoda, knabben, ligh1yagami, limoneren, mattfenwick, maxbischoff, muse-sisay, pdecat, rk295, shinerrs, sobngwi, tej-singh-rana, tim-schwalbe, toqueteos, vtrduque, yamaszone, yanivpaz, zburgermeiszter


Issues

Block all ports except specified ones

First of all, I'd like to thank you for this project. I know this is not the place for support, but could you also add another example similar to the one below?

Here we are blocking all ports except 53 (TCP and UDP). How can we do the opposite and allow all ports except 53? I tried action: deny but it's NOT working.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: foo-deny-external-egress
spec:
  podSelector:
    matchLabels:
      app: foo
  policyTypes:
  - Egress
  egress:
  - ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
    to:
    - namespaceSelector: {}
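
NetworkPolicy has no deny rule to invert this with, so "allow all except 53" has to be written as an allow of the surrounding port ranges. On clusters where the endPort field is available (beta in Kubernetes 1.22, GA in 1.25), one hedged sketch:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: foo-allow-all-but-53
spec:
  podSelector:
    matchLabels:
      app: foo
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector: {}
    ports:
    - protocol: TCP       # everything below port 53
      port: 1
      endPort: 52
    - protocol: UDP
      port: 1
      endPort: 52
    - protocol: TCP       # everything above port 53
      port: 54
      endPort: 65535
    - protocol: UDP
      port: 54
      endPort: 65535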

Container -> Pod

The following image in the readme:

(image: README animation using the word "Container")

I feel like we should change the wording to say Pod here (as "containers" might mean sidecars). I think back in the day I used this as a simplification, but now that I look at it, it's maybe a gross simplification and does a disservice.

The fix can be as easy as changing the Google Slides deck they're sourced from, exporting GIFs, and feeding them to a GIF-maker website.

How to ALLOW traffic from the DMZ namespace (nginx ingress controller) to the Kubernetes API (IP: 10.233.0.1:443)?

I've created the default rule to deny all traffic from the DMZ namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-from-dmz
  namespace: dmz
spec:
  podSelector: {}
  policyTypes:
  - Egress

But now I need access to the service "default/kubernetes"; this service forwards the traffic to the node IPs on port 443:

ceku@ceku1 /a/r/network-policies> kubectl describe svc kubernetes
Name:              kubernetes
Namespace:         default
Labels:            component=apiserver
                   provider=kubernetes
Annotations:       <none>
Selector:          <none>
Type:              ClusterIP
IP:                10.233.0.1
Port:              https  443/TCP
TargetPort:        6443/TCP
Endpoints:         192.168.49.201:6443,192.168.49.202:6443,192.168.49.203:6443
Session Affinity:  ClientIP
Events:            <none>

How can I allow access from the nginx ingress controller to this API?
My test policy (not working):

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-ingresscontroller-dmz-to-pods
  namespace: dmz
spec:
  podSelector:
    matchLabels:
      k8s-app: nginx-ingress-controller-dmz
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: dmz
  - to:
    - ipBlock:
        cidr: 10.233.0.1/32
        cidr: 192.168.49.201/32
        cidr: 192.168.49.202/32
        cidr: 192.168.49.203/32
    ports:
    - protocol: TCP
      port: 443
  ingress:
  - from: []
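
For reference, an ipBlock holds a single cidr; repeating the key as above just overwrites the earlier values when the YAML is parsed. A corrected sketch of the egress rule gives each CIDR its own ipBlock entry (addresses taken from the question; the endpoints listen on 6443 per the Service description above, so that port may need allowing too, depending on where the CNI applies the policy relative to service NAT):

  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: dmz
  - to:
    - ipBlock:
        cidr: 10.233.0.1/32        # the ClusterIP
    - ipBlock:
        cidr: 192.168.49.201/32    # the endpoint IPs
    - ipBlock:
        cidr: 192.168.49.202/32
    - ipBlock:
        cidr: 192.168.49.203/32
    ports:
    - protocol: TCP
      port: 443
    - protocol: TCP
      port: 6443                   # endpoint target port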

add networkpolicy subproject maintainers

I think @mattfenwick @knabben @rikatz @jayunit100 would all be happy to help maintain this repo with you!
We work in upstream sig-network's network policy subproject. I believe Matt is actually already a committer on this repo.

Let us know what the next steps are, and/or whether merging into the kubernetes.io docs is possibly an option.

We are also actively porting tests from this repo over to the upstream network policy test suite: https://docs.google.com/document/d/1Vyv-andfj13exXf36FsMTG0QnB4Id8BQB4mo49f0oW8/edit

Big thank you for creating this resource; it's very valuable to us and part of how we teach new folks how network policies work.

deny-all not working

Blocking all traffic to a service of a specific namespace (i.e. staging) does not seem to work as suggested in this template.

Instead, traffic is allowed from both the default and staging namespaces.

Create the following namespace:

kind: Namespace
apiVersion: v1
metadata:
  name: staging
  annotations:
    net.beta.kubernetes.io/network-policy: |
      {
        "ingress": {
          "isolation": "DefaultDeny"
        }
      }

Spin up a busybox for testing purposes in the default namespace:

kubectl run busybox --rm -ti --image=busybox /bin/sh  --namespace=default

...and my ui service (supposed to be listening on port 80) in the staging namespace is reachable!

/ # wget --spider ui.staging.svc.cluster.local
Connecting to ui.staging.svc.cluster.local (100.68.222.37:80)

I have also tried to just create a namespace via the command line (without a YAML file)

and apply the following presumably deny-all policy, which you suggest (didn't work either):

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: staging
  name: web-deny-other-namespaces
spec:
  podSelector:
    matchLabels:
  ingress:
  - from:
    - podSelector: {}

Using AWS.

Cluster deployed with

$ kops version
Version 1.8.0 (git-5099bc5)

... in private topology

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:27:35Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", BuildDate:"2017-11-20T05:17:43Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

weaveworks/weave-npc                               2.0.5               55ed7c451d70        3 months ago        54.7 MB
weaveworks/weave-kube                              2.0.5               b73b5c64c5d3        3 months ago        101 MB

I applied the network policy (or updated the existing namespace) after my application was deployed, just for the record.

Deny policy not taking effect

I'm following this tutorial on an AWS-hosted Kubernetes cluster (v1.7.2), built using kops (v1.7.0), but the deny policy doesn't block traffic. I've tried this with various network plug-ins (i.e. Weave / Calico / Canal etc.) but the result is the same.

Allow traffic from kubelet

Hi!

I deny all traffic to my pod, but now my probes (liveness and readiness) are not working anymore.

How can I allow traffic only from the kubelet?
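
Kubelet probes originate from the node itself rather than from a pod, so pod and namespace selectors cannot match them; whether they are blocked at all depends on the CNI plugin (several exempt host traffic by default). Where they are blocked, one hedged workaround is to allow ingress from the node subnet via ipBlock, with 10.0.0.0/24 below standing in for your actual node CIDR:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-nodes
spec:
  podSelector: {}           # all pods in this namespace
  ingress:
  - from:
    - ipBlock:
        cidr: 10.0.0.0/24   # placeholder: the node/host subnet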

Monitoring Network Policy

I know this might not be the right place, but I wanted to ask if you or anyone else knows where to check logs for traffic dropped by a Network Policy.

For example: ingress is denied from all namespaces other than the pod's own. Where would one be able to check the logs saying that namespaceB tried to make a connection but it was dropped due to a Network Policy? Or even an external source? I tried finding it in various log locations, including on the nodes; no go thus far.

Issue with network policy when using an IP address

I used your 4th example (DENY all traffic from other namespaces).
I created two namespaces (test1 and test2) and deployed one test service in the first namespace:

kubectl run -ti --rm --namespace=test1 --image ubuntu bash -- bash
apt-get update
apt-get --assume-yes install httpie
http http://test-service
and observed it respond with a 200, indicating that it could connect. This is a good baseline.

Now create the second namespace, test2, and test it:
kubectl run -ti --namespace=test2 --rm --image ubuntu bash -- bash
apt-get update
apt-get --assume-yes install httpie
http http://test-service
and observed a connection error. This is good as it shows that the service discovery didn't work between the new namespaces.

But then I tested the IP directly:
http http://100.69.170.122
and it worked. Not good. It's my understanding that the network policy should prevent connections from pods in different namespaces.

How can we overcome this issue?

Policy rule blocking the service

Hello,

I have faced difficulties when applying network policies to my Kubernetes cluster.
My network policy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: staging
  namespace: staging
spec:
  policyTypes:
  - Ingress
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          staging: net

Environment:
I have 3 pods in the namespace called staging, all labeled staging: net. Take for example one NGINX pod which serves on port 80, one pgsql DB pod on port 5432, and a third pod just to run tests from. Each pod has a Service exposing its respective port as a ClusterIP. Before applying the network policy, traffic works across all namespaces. As soon as I apply the network policy, it works as expected by denying traffic from other namespaces. When I want to allow traffic from another namespace, I add the label staging: net to that namespace and everything works as expected. However, one thing is not working as I expected: my exposed Services, for example those named nginx and pgsql, are unreachable.

Is it possible to make the services accessible with such rules, or is this expected behavior? I'm a little bit frustrated and can't find any information on the internet about the same symptoms.

Thanks.

Example with multiple services on different ports?

Or is it best practice not to filter on ports?

e.g.

- to:
  - podSelector:
      matchLabels:
        app: mongodb
  ports:
  - protocol: TCP
    port: 27017 # mongo
- to:
  - podSelector:
      matchLabels:
        app: redis
  ports:
  - protocol: TCP
    port: 6379 # redis

Example recipe for isolating service namespaces?

I don't see this here, but is anyone aware of a network policy recipe / design guide for isolating certain "control plane" namespaces from application namespaces, while allowing day-n monitoring services (such as Prometheus or EFK) to run in a "day n services" namespace and serve all application namespaces? I realize this is a slightly open-ended question, but I'm looking for any white papers, recipes, or recommended segmentation topologies that someone may have already prepared for control plane / data plane protection and isolation, with the aim of a "secure by default" Kubernetes cluster model.
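
One hedged baseline for the monitoring half of this pattern is a default-deny ingress policy in every application namespace, paired with an allow policy admitting the shared services namespace (the purpose: day-n-services label below is a placeholder you would set on that namespace):

# applied in each application namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}          # all pods in this namespace
  policyTypes:
  - Ingress                # no ingress rules listed: deny all inbound traffic
---
# also applied in each application namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-day-n-services
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          purpose: day-n-services   # placeholder label on the monitoring namespace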

Network Policy on Plain Vanilla 1.18.4.


I'm applying the same pod network policy on a plain vanilla 1.18.4 release with WeaveNet 2.6.5.

ubuntu@k8s-um01:~$ kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
web    1/1     Running   0          3h40m

ubuntu@k8s-um01:~$ kubectl get svc
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP   4h4m
web          ClusterIP   10.97.61.57   <none>        80/TCP    3h40m

ubuntu@k8s-um01:~$ kubectl get networkpolicy
NAME           POD-SELECTOR   AGE
web-deny-all   app=web        3h39m

ubuntu@k8s-um01:~$ kubectl run --generator=run-pod/v1 --rm -i -t --image=alpine test-$RANDOM -- sh
Flag --generator has been deprecated, has no effect and will be removed in the future.
If you don't see a command prompt, try pressing enter.
/ # wget -qO- http://web

<title>Welcome to nginx!</title>

Welcome to nginx!

If you see this page, the nginx web server is successfully installed and working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.

Any suggestions on where we are making a mistake?

Deny access from other namespaces does not work when using the svc FQDN

I have a database in namespace1 which I deployed using a StatefulSet and access through a Service.
An application in namespace2 manages to access this database using the database Service's FQDN. I want to shut off the app's access to this database using a network policy, but it didn't work.

I have tried several scenarios:

  • Deny access from all other namespaces
  • Deny access specifically from namespace2 using labels
  • Allow access only from namespace1 using labels

Am I missing something here? Would it be better to apply a deny-all policy first and then open up policies for each pod/namespace?

Default Deny

I want to reach a default deny for all services in a cluster, so that all pods are completely isolated or namespace-isolated. But when I want to have a pod which is not isolated, how can I do that? Is it really necessary to deploy the deny-all rule first and then the accept rule to override it? What is the standard way to achieve this?

02a-allow-all-traffic-to-an-application document correction


I think the lines below in the example 02a-allow-all-traffic-to-an-application.md should be corrected.

Existing:

Empty ingress rule ({}) allows traffic from all pods in the current namespace, as well as other namespaces. It corresponds to:

- from:
  podSelector: {}
  namespaceSelector: {}

Should be updated to:

Empty ingress rule ({}) allows traffic from all pods in the current namespace, as well as other namespaces. It corresponds to:

- from:
  - podSelector: {}
    namespaceSelector: {}

The elements in the from should be an array:

kubectl explain NetworkPolicy.spec.ingress.from
KIND: NetworkPolicy
VERSION: networking.k8s.io/v1

RESOURCE: from <[]Object>

Mixing Up NetworkPolicies?

As of v1.10, is it possible to set up a Network Policy to meet the requirements below:

  • Pods within namespace A can communicate with each other
  • Pods can't communicate across namespaces (i.e. pods in namespace B cannot communicate with pods in namespace A)
  • Pods can allow ingress from the internet
  • Pods cannot allow egress to k8s nodes

So far, the closest I've got is:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: user-a-default-network-policy
  namespace: user-a-ns
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: user-a-ns
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 172.16.35.11/32      # K8s Node IPs
        - 172.16.35.12/32

This results in namespace isolation and k8s node denial, fulfilling three of the requirements listed:

  • Pods within namespace A can communicate with each other (fulfilled)
  • Pods can't communicate across namespaces (i.e. pods in namespace B cannot communicate with pods in namespace A) (fulfilled)
  • Pods can allow ingress from the internet (NOT fulfilled)
  • Pods cannot allow egress to k8s nodes (fulfilled)

We did think about adding spec.ingress.from.ipBlock as below, but it turns out to allow everything from other namespaces...

  ingress:
  - from:
    - ipBlock:
        cidr: 0.0.0.0/0
    - namespaceSelector:
        matchLabels:
          name: user-a-ns

resulting in:

  • Pods within namespace A can communicate with each other (fulfilled)
  • Pods can't communicate across namespaces (i.e. pods in namespace B cannot communicate with pods in namespace A) (NOT fulfilled)
  • Pods can allow ingress from the internet (fulfilled)
  • Pods cannot allow egress to k8s nodes (fulfilled)

I'm really struggling to get all four of the requirements set up. Any help is appreciated.
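
One hedged way to let internet traffic in while still excluding other namespaces is to carve the cluster's pod network out of an ipBlock, so "any IP except pod IPs" and "namespace A" are the two allowed sources. Whether this behaves as intended depends on how the CNI plugin sees source addresses after NAT; 100.64.0.0/10 below is only a placeholder for the real pod CIDR:

  ingress:
  - from:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 100.64.0.0/10          # placeholder: the cluster's pod CIDR
    - namespaceSelector:
        matchLabels:
          name: user-a-ns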

Can't combine podSelector with namespaceSelector

Regarding this doc: https://github.com/ahmetb/kubernetes-network-policy-recipes/blob/master/07-allow-traffic-from-some-pods-in-another-namespace.md
There is a manifest which contains the following lines:

ingress:
    - from:
      - namespaceSelector:     # chooses all pods in namespaces labelled with team=operations
          matchLabels:
            team: operations  
        podSelector:           # chooses pods with type=monitoring
          matchLabels:
            type: monitoring

I have tried to apply the same policy and got an error:

The NetworkPolicy "test-policy" is invalid: spec.ingress[0].from[0]: Forbidden: may not specify more than 1 from type

Is it really possible to restrict traffic to pods with a certain label which are placed in labelled namespaces?
If yes, please fix your manifest. Thanks!

P.S. Kubernetes cluster v.1.9.3 with Calico v.2.6

Clarify the meaning of the empty NetworkPolicyIngressRule?

Clarification is needed as to what an empty NetworkPolicyIngressRule means.

I'm referring to the statement in ALLOW all traffic to an application that the empty ingress rule ({}) corresponds to:

- from:
  - podSelector: {}
    namespaceSelector: {}

My interpretation of the spec is that the empty ingress rule allows traffic from all sources, including all ips.

The rule has two fields, from and ports, but the documentation for both of those fields says:

If this field is empty or missing, this rule matches all [sources or ports]

If it matches all sources, then my interpretation is that it matches all ipBlocks as well as all pods in all namespaces.

Furthermore, because the sources are OR'd, only including one source in the corresponding form above is different from an empty rule, which includes all sources, right?

Can you explain how the corresponding form (above) is equivalent to an empty ingress rule?

Documentation:

List of sources which should be able to access the pods selected for this rule. Items in this list are combined using a logical OR operation. If this field is empty or missing, this rule matches all sources (traffic not restricted by source). If this field is present and contains at least one item, this rule allows traffic only if the traffic matches at least one item in the from list.

Originally posted by @boredabdel in #62 (comment)

I hope my explanation in this PR clears things up: #83

I will close this one. Please open a new PR or Issue so we can keep the discussion consistent in one place :)

It also blocks traffic from pods in the same namespace

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-namespace
  namespace: girish
spec:
  podSelector:
    matchLabels: {}
  ingress:
  - from:
    - podSelector: {}

NAME                                    READY   STATUS    RESTARTS   AGE
pod/curl123                             1/1     Running   0          6m25s
pod/nginx-deployment-58dff6b464-qqk2w   1/1     Running   0          9m36s
pod/nginx-deployment-58dff6b464-xlk42   1/1     Running   0          9m36s

NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/ng   ClusterIP   10.100.66.78   <none>        8080/TCP   6m39s

$ kubectl exec curl123 curl http://10.100.66.78:8080 -n girish
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:14 --:--:--     0

Block communication to a specific pod

Can you guide me in this particular case: allow a pod to communicate with all of the other pods except one? And can we use an IP instead of pod labels?
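
Since NetworkPolicies have no deny rules, "all pods except one" has to be phrased as an allow. One hedged sketch selects the destination pods with a matchExpressions NotIn on a label carried by the excluded pod (all labels below are placeholders); an ipBlock would also work for a fixed IP, but pod IPs are ephemeral, so labels are the safer handle:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-to-all-but-one
spec:
  podSelector:
    matchLabels:
      app: myapp                # placeholder: the pod being restricted
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector: {}     # any namespace...
      podSelector:
        matchExpressions:       # ...but not the excluded pod
        - key: app
          operator: NotIn
          values:
          - blocked-pod         # placeholder: label on the pod to exclude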

deny-traffic-from-other-namespaces not functioning as described

Issue

04-deny-traffic-from-other-namespaces.md does not isolate namespace traffic as described. It is missing a namespaceSelector to isolate pods to their respective namespaces.

GKE Versions

1.21.6-gke.1500 and 1.20.12-gke.1500

Corrected Network Policy

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  namespace: ns-x
  name: deny-from-other-namespaces
spec:
  podSelector:
    matchLabels:
  ingress:
    - from:
      - podSelector: {}
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: ns-x

ALLOW traffic inside the namespace and the LoadBalancer

Hi everyone

I'm trying to configure the network policy (or policies) to allow traffic only within the same namespace, plus access from the LoadBalancer.

  • AWS cluster
  • Calico 2.4.1
  • K8s 1.7.4

I tried many approaches, for example:

kind: Namespace
apiVersion: v1
metadata:
  name: nginx-test
  labels:
    role: nginx-test

---
apiVersion: v1
kind: Pod
metadata:
  name : nginx-server
  namespace: nginx-test
  labels:
    app: nginx-server
spec:
  containers:
    - name: nginx-server
      image: nginx:1.12-alpine
      ports:
        - containerPort: 80
          protocol: TCP

---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: nginx-test
  labels:
    app: nginx-server
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx-server
  type: LoadBalancer

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nginx-allow
  namespace: nginx-test
spec:
  podSelector:
    matchLabels:
      app: nginx-server
  ingress:
    - from:
      - namespaceSelector:
          matchLabels:
            role: nginx-test
      - podSelector:
          matchLabels:
            app: nginx-server
      ports:
        - protocol: TCP
          port: 80

After applying the above config, the load balancer is OutOfService.

It seems that I need to add external access to the pod, and this means access from other namespaces.
Any idea?

btw, thanks for this awesome tutorial.

Requirement for accessing external public URLs from pods

We have a huge cluster, and whenever we want a deployment to communicate with public URLs, we have to open up the proxy for all the nodes in the cluster (as we are behind a proxy).

Whenever we add a new node, we need to make sure all of the rules are working, as Kubernetes pods can be scheduled anywhere in the cluster.

Can this requirement be achieved using any of the network policies in this project? (I'm pretty new to this concept, and I guess some egress rules could be written.)
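
If external traffic already has to go through the proxy, one hedged option is to deny general egress and allow only the proxy endpoint plus DNS; the IP and port below are placeholders for the real proxy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-via-proxy-only
spec:
  podSelector: {}              # all pods in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.1.2.3/32      # placeholder: the proxy's IP
    ports:
    - protocol: TCP
      port: 3128               # placeholder: the proxy's port
  - ports:                     # allow DNS lookups everywhere
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53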

Add a case for allowing traffic from a namespace without labels, e.g. using a namespaceSelector with matchExpressions or some other option

There are cases where no labels are given to a namespace, but we need to allow all pods from that namespace.

Can we please add an example of a namespace selector where a namespace is selected based on its name rather than its labels (a common CKA question)? I tried a namespace selector with matchExpressions but that did not work in the exam.
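
On recent clusters (Kubernetes 1.21+), every namespace automatically carries the immutable label kubernetes.io/metadata.name, so a namespace can be selected by name even if nobody labeled it; target-namespace below is a placeholder:

ingress:
- from:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: target-namespace
# equivalently, with matchExpressions:
# - namespaceSelector:
#     matchExpressions:
#     - key: kubernetes.io/metadata.name
#       operator: In
#       values:
#       - target-namespace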

Explain NetworkPolicy + service.type=LoadBalancer & Ingress behavior

When I apply a network policy restricting all pod-to-pod traffic to a Service with type=LoadBalancer, it keeps working for a while, and then a few minutes later it stops working.

Once I remove the network policy, it still keeps spinning and doesn't load in the browser (or via curl). Health checks seem fine though:

(image: load balancer health check screenshot)

Repro:

  1. Run: kubectl run apiserver --image=nginx --labels app=bookstore,role=api --expose --port 80
  2. kubectl expose deploy/apiserver --type=LoadBalancer --name=apiserver-external
  3. Visit site = works
  4. Apply:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: api-allow
spec:
  podSelector:
    matchLabels:
      app: bookstore
      role: api
  ingress:
  - from:
      - podSelector:
          matchLabels:
            app: bookstore
  5. Observe it still works after deploying.
  6. Wait a few mins, then delete/redeploy the policy without the from: section.
  7. Visit in browser = stops working
  8. Delete the network policy = still doesn't work, spins forever.

2a and 5 seem to be duplicates of each other, but with different explanations -- should they be combined?

Apologies if I'm wrong about this, network policies are hard to grok :)

As far as I can tell, example 2a:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-allow-all
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  ingress:
  - {}

and example 5:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: secondary
  name: web-allow-all-namespaces
spec:
  podSelector:
    matchLabels:
      app: web
  ingress:
  - from:
    - namespaceSelector: {}

do the same thing as each other. Maybe their explanations should be merged, and the merged page could include examples of the various ways to specify "all pods and all namespaces" for ingress? (I think I have 4 already :) )

Happy to pick this up and put together a PR for this -- assuming I've understood both examples correctly!

Add example of named ports

09 describes that it is better to use named ports, but doesn't show how to use named ports. I think it would be useful to show how to implement a policy based on named ports!

NOTE: Network Policies will not know the port numbers you exposed the application on, such as 8001 and 5001. This is because they control inter-pod traffic, and when you expose a Pod as a Service, ports are remapped as above. Therefore, you need to use the container port numbers (such as 8000 and 5000) in the NetworkPolicy specification. A less error-prone alternative is to refer to the port names (such as metrics and http).
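
A hedged sketch of such a policy: the strings in port: must match the name of a containerPort in the target pod's spec (names and numbers below follow the quoted note but are otherwise placeholders):

# assumed pod spec fragment:
#   ports:
#   - name: http
#     containerPort: 8000
#   - name: metrics
#     containerPort: 5000
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-named-ports
spec:
  podSelector:
    matchLabels:
      app: apiserver       # placeholder label
  ingress:
  - ports:
    - port: http           # resolves to containerPort 8000
    - port: metrics        # resolves to containerPort 5000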

Thank you for this great resource @ahmetb 🙏

Testing

Hello

I'm interested to hear how you tested these recipes. How did you know they worked, and how did you find bugs (if any)?

What about a selector for a PV when it needs to access an NFS pod?

I used "DENY all traffic from other namespaces", but it looks like the PV is considered a different-namespace resource. Then I tried "ALLOW traffic from some pods in another namespace" to treat the PV as an other-namespace pod and labeled it, but that didn't work either.

Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/a21b800b-0bce-41bd-b8f2-cbb4bfbd83f1/volumes/kubernetes.io~nfs/my-pv-f9da82d12a6f00abb20b832e9dadc879 --scope -- /home/kubernetes/containerized_mounter/mounter mount -t nfs nfs-server-svc.mynamespace.svc.cluster.local:/ /var/lib/kubelet/pods/a21b800b-0bce-41bd-b8f2-cbb4bfbd83f1/volumes/kubernetes.io~nfs/my-pv-f9da82d12a6f00abb20b832e9dadc879
Output: Running scope as unit: run-rdc06e260917a44f19efba49ee227e0ef.scope
Mount failed: mount failed: exit status 32
Mounting command: chroot
Mounting arguments: [/home/kubernetes/containerized_mounter/rootfs mount -t nfs nfs-server-svc.mynamespace.svc.cluster.local:/ /var/lib/kubelet/pods/a21b800b-0bce-41bd-b8f2-cbb4bfbd83f1/volumes/kubernetes.io~nfs/my-pv-f9da82d12a6f00abb20b832e9dadc879]

I have a k8s cluster in offline mode and I applied a network policy to allow a specific external IP


Will something like this work?

I am including: allow ingress from these namespaces AND these CIDRs. Or do I need to split that into two YAMLs?
Let me know if this is OK to post here, or whether I should go to the k8s community.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  namespace: my-default-policy
  name: deny-traffic-from-other-namespaces
spec:
  podSelector:
    matchLabels:
  ingress:
  - from:
    - ipBlock:
        cidr: 172.44.0.0/16
    - namespaceSelector:
        matchLabels:
          name: ingress-traffic
    - namespaceSelector:
        matchLabels:
          name: ingress-internet
    - podSelector: {}
