
canal's Introduction

Refer to the docs for installing Calico for policy and flannel for networking for up-to-date installation directions and manifests. This repo is deprecated and no further updates are expected here.

Wasn't Canal supposed to be the new name for Calico?

Canal was the name of Tigera and CoreOS’s project to integrate Calico and flannel.

Originally, we thought we might more deeply integrate the two projects (possibly even going as far as a rebranding!). However, over time it became clear that that wasn't really necessary to fulfil our goal of making them work well together. Ultimately, we decided to focus on adding features to both projects rather than doing work just to combine them.

canal's People

Contributors

archseer, caseydavenport, danielqsj, davemac30, davidmccormick, djosborne, fasaxc, gunjan5, heschlie, liljenstolpe, lxpollitt, micahhausler, netroby, nilsher, ozdanborne, tmjd, tomdee, vendrov


canal's Issues

canal_etcd_tls.yaml doesn't work with K8s 1.7.x and RBAC enabled

Expected Behavior

I tried to use Canal with TLS-secured etcd and RBAC enabled, using canal_etcd_tls.yaml.

I expected a working demo at http://docs.projectcalico.org/v2.4/getting-started/kubernetes/tutorials/simple-policy

Current Behavior

Networking didn't work at all, and the calico-policy-controller threw the following exception:

> kubectl -n kube-system logs -f calico-policy-controller-718627407-mxh28 | more
2017-08-09 05:13:39,735 7 INFO Configuring /etc/hosts
2017-08-09 05:13:39,736 7 INFO Appended 'kubernetes.default  -> 10.100.0.1' to /etc/hosts
2017-08-09 05:13:39,737 7 INFO Beginning execution
2017-08-09 05:13:39,738 7 DEBUG Getting ServiceAccount token from: /var/run/secrets/kubernetes.io/serviceaccount/token
2017-08-09 05:13:39,739 7 DEBUG Found ServiceAccount token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuLWR4bWI2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJkZTljYTVmNC03YzBkLTExZTctYjEzNi0wNjc1ZGEwMDA1OWYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06ZGVmYXVsdCJ9.qLcmpY3z2GDjVi_3RmE7CbGkinxGWgE7edU_k8wy1tuz-6Cy-HoVo4yL_5KpIbYJ8vVb1ERpP4FWnyQJH6MxLYxNPn2Auqj2lWTTe2D7ficYjJOVXrZ__gZV6KZh-BXKpXzIiPhNbk-caS5LMwLG-K-x21IGW0iC9N_HuBFFQXIniHvnUfDfp8qoAfIe8a_fcIhSdG233_xtqjGw-3W57iFjVwS3p6jmmJr4k82P31q3R5jd47vzYDpYy9tcvo-qoalqz1G-9hB8FSgQbWwv5S5o0bhjyVDZ1846Lq4s8NiqqUp10QLh222YI2XzqDV9up54qSyBqk2VVOpyXT63lg
2017-08-09 05:13:39,739 7 DEBUG Using auth token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkZWZhdWx0LXRva2VuLWR4bWI2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImRlZmF1bHQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJkZTljYTVmNC03YzBkLTExZTctYjEzNi0wNjc1ZGEwMDA1OWYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06ZGVmYXVsdCJ9.qLcmpY3z2GDjVi_3RmE7CbGkinxGWgE7edU_k8wy1tuz-6Cy-HoVo4yL_5KpIbYJ8vVb1ERpP4FWnyQJH6MxLYxNPn2Auqj2lWTTe2D7ficYjJOVXrZ__gZV6KZh-BXKpXzIiPhNbk-caS5LMwLG-K-x21IGW0iC9N_HuBFFQXIniHvnUfDfp8qoAfIe8a_fcIhSdG233_xtqjGw-3W57iFjVwS3p6jmmJr4k82P31q3R5jd47vzYDpYy9tcvo-qoalqz1G-9hB8FSgQbWwv5S5o0bhjyVDZ1846Lq4s8NiqqUp10QLh222YI2XzqDV9up54qSyBqk2VVOpyXT63lg
2017-08-09 05:13:39,750 7 DEBUG Setting NetworkPolicy ADDED handler: <function add_update_network_policy at 0x7fd6489bf8c0>
2017-08-09 05:13:39,750 7 DEBUG Setting NetworkPolicy MODIFIED handler: <function add_update_network_policy at 0x7fd6489bf8c0>
2017-08-09 05:13:39,751 7 DEBUG Setting NetworkPolicy DELETED handler: <function delete_network_policy at 0x7fd64a1cbaa0>
2017-08-09 05:13:39,751 7 DEBUG Setting Namespace ADDED handler: <function add_update_namespace at 0x7fd6489c5ed8>
2017-08-09 05:13:39,751 7 DEBUG Setting Namespace MODIFIED handler: <function add_update_namespace at 0x7fd6489c5ed8>
2017-08-09 05:13:39,752 7 DEBUG Setting Namespace DELETED handler: <function delete_namespace at 0x7fd6489d2488>
2017-08-09 05:13:39,752 7 DEBUG Setting Pod ADDED handler: <function add_pod at 0x7fd6489d2758>
2017-08-09 05:13:39,752 7 DEBUG Setting Pod MODIFIED handler: <function update_pod at 0x7fd6489d27d0>
2017-08-09 05:13:39,753 7 DEBUG Setting Pod DELETED handler: <function delete_pod at 0x7fd6489d2578>
2017-08-09 05:13:39,753 7 INFO Leader election enabled? False
2017-08-09 05:13:39,753 7 DEBUG Attempting to remove old tier k8s-network-policy
2017-08-09 05:13:39,774 7 INFO Syncing 'NetworkPolicy' objects
2017-08-09 05:13:39,775 7 INFO Started worker thread for: NetworkPolicy
2017-08-09 05:13:39,775 7 DEBUG Getting API resources 'https://159.100.243.108:443/apis/extensions/v1beta1/networkpolicies' at version 'None'. stream=False
2017-08-09 05:13:39,776 7 INFO Started worker thread for: Namespace
2017-08-09 05:13:39,778 7 INFO Syncing 'Namespace' objects
2017-08-09 05:13:39,779 7 INFO Started worker thread for: Pod
2017-08-09 05:13:39,779 7 INFO Syncing 'Pod' objects
2017-08-09 05:13:39,779 7 DEBUG Getting API resources 'https://159.100.243.108:443/api/v1/namespaces' at version 'None'. stream=False
2017-08-09 05:13:39,780 7 DEBUG Reading from event queue
2017-08-09 05:13:39,780 7 DEBUG Getting API resources 'https://159.100.243.108:443/api/v1/pods' at version 'None'. stream=False
2017-08-09 05:13:39,801 7 DEBUG Response: <Response [403]>
2017-08-09 05:13:39,804 7 DEBUG Response: <Response [403]>
2017-08-09 05:13:39,805 7 DEBUG Response: <Response [403]>
2017-08-09 05:13:39,806 7 ERROR Unhandled exception killed Pod manager
Traceback (most recent call last):
  File "<string>", line 320, in _manage_resource
  File "<string>", line 437, in _sync_resources
  File "site-packages/requests/models.py", line 866, in json
  File "site-packages/simplejson/__init__.py", line 516, in loads
  File "site-packages/simplejson/decoder.py", line 370, in decode
  File "site-packages/simplejson/decoder.py", line 400, in raw_decode
JSONDecodeError: Expecting value: line 1 column 1 (char 0)
2017-08-09 05:13:39,806 7 ERROR Unhandled exception killed NetworkPolicy manager
Traceback (most recent call last):
  File "<string>", line 320, in _manage_resource
  File "<string>", line 437, in _sync_resources
  File "site-packages/requests/models.py", line 866, in json
  File "site-packages/simplejson/__init__.py", line 516, in loads
  File "site-packages/simplejson/decoder.py", line 370, in decode
  File "site-packages/simplejson/decoder.py", line 400, in raw_decode
JSONDecodeError: Expecting value: line 1 column 1 (char 0)
2017-08-09 05:13:39,807 7 ERROR Unhandled exception killed Namespace manager
Traceback (most recent call last):
  File "<string>", line 320, in _manage_resource
  File "<string>", line 437, in _sync_resources
  File "site-packages/requests/models.py", line 866, in json
  File "site-packages/simplejson/__init__.py", line 516, in loads
  File "site-packages/simplejson/decoder.py", line 370, in decode
  File "site-packages/simplejson/decoder.py", line 400, in raw_decode
JSONDecodeError: Expecting value: line 1 column 1 (char 0)
2017-08-09 05:13:39,807 7 INFO Restarting watch on resource: Pod
2017-08-09 05:13:39,807 7 INFO Restarting watch on resource: NetworkPolicy
2017-08-09 05:13:39,808 7 INFO Restarting watch on resource: Namespace
2017-08-09 05:13:40,809 7 INFO Syncing 'Pod' objects
2017-08-09 05:13:40,809 7 INFO Syncing 'Namespace' objects
2017-08-09 05:13:40,810 7 DEBUG Getting API resources 'https://159.100.243.108:443/api/v1/pods' at version 'None'. stream=False
2017-08-09 05:13:40,810 7 INFO Syncing 'NetworkPolicy' objects
2017-08-09 05:13:40,810 7 DEBUG Getting API resources 'https://159.100.243.108:443/api/v1/namespaces' at version 'None'. stream=False
2017-08-09 05:13:40,813 7 DEBUG Getting API resources 'https://159.100.243.108:443/apis/extensions/v1beta1/networkpolicies' at version 'None'. stream=False
2017-08-09 05:13:40,830 7 DEBUG Response: <Response [403]>
2017-08-09 05:13:40,830 7 ERROR Unhandled exception killed Pod manager
Traceback (most recent call last):
  File "<string>", line 320, in _manage_resource
  File "<string>", line 437, in _sync_resources
  File "site-packages/requests/models.py", line 866, in json
  File "site-packages/simplejson/__init__.py", line 516, in loads
  File "site-packages/simplejson/decoder.py", line 370, in decode
  File "site-packages/simplejson/decoder.py", line 400, in raw_decode
JSONDecodeError: Expecting value: line 1 column 1 (char 0)
2017-08-09 05:13:40,831 7 INFO Restarting watch on resource: Pod
2017-08-09 05:13:40,834 7 DEBUG Response: <Response [403]>
2017-08-09 05:13:40,835 7 ERROR Unhandled exception killed Namespace manager
Traceback (most recent call last):
  File "<string>", line 320, in _manage_resource
  File "<string>", line 437, in _sync_resources
  File "site-packages/requests/models.py", line 866, in json
  File "site-packages/simplejson/__init__.py", line 516, in loads
  File "site-packages/simplejson/decoder.py", line 370, in decode
  File "site-packages/simplejson/decoder.py", line 400, in raw_decode

Possible Solution

More permissions (if that's the issue)?
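
For reference, a minimal RBAC sketch (an assumption, not taken from the repo's manifests) that would grant the read access the 403 responses suggest is missing. The log above shows the controller using the default service account in kube-system, so that is what is bound here:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-policy-controller
rules:
# Read access matching the resources the controller watches in the log above
- apiGroups: [""]
  resources: ["pods", "namespaces"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
  resources: ["networkpolicies"]
  verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-policy-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-policy-controller
subjects:
- kind: ServiceAccount
  name: default      # assumption: bind the default SA, as used in the log above
  namespace: kube-system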

Your Environment

  • Calico version: v2.4.0
  • Flannel version: v0.8.0
  • Orchestrator version: Kubernetes v1.7.3
  • Operating System and version: Container Linux (CoreOS)
  • Link to your project (optional): n/a

Wrong tag or image in canal-etcdless.yaml

Latest canal-etcdless.yaml (commit 1afff9f) points to a non-existent tag for the flannel-git image: "quay.io/coreos/flannel-git:v0.7.0".
I've checked against the quay.io registry and there is no such tag. However, the flannel image does have that tag, so I'm guessing that is what was intended, i.e. "quay.io/coreos/flannel:v0.7.0".
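
A hedged sketch of the one-line fix this would imply (the container name here is illustrative; only the image reference matters):

# In canal-etcdless.yaml, assuming the intent was the regular flannel image at that tag
- name: kube-flannel
  image: quay.io/coreos/flannel:v0.7.0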

No etcd_endpoints: "....." in canal.yaml?!

In canal.yaml file:

data:
  # The interface used by canal for host <-> host communication.
  # If left blank, then the interface is chosen using the node's
  # default route.
  canal_iface: ""
  # Whether or not to masquerade traffic to destinations not within
  # the pod network.
  masquerade: "true"

There is no etcd_endpoints: "....." here?!
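
For comparison, a hedged sketch of what the etcd-backed variants carry in the same ConfigMap (the endpoint URL is a placeholder); the manifest quoted above presumably omits it because it does not rely on a dedicated etcd for Calico:

data:
  # Present only in the etcd-backed manifests; placeholder endpoint
  etcd_endpoints: "http://<etcd-host>:2379"
  canal_iface: ""
  masquerade: "true"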

Job/configure-canal CrashLoopBackOff

The configure-canal container cannot start on a Kubernetes node, failing as follows:

[root@master manifests]# kubectl get po --all-namespaces
NAMESPACE     NAME                                   READY     STATUS              RESTARTS   AGE
kube-system   calico-policy-controller-m74q6         1/1       Running             0          8m
kube-system   canal-node-b1k9b                       3/3       Running             0          8m
kube-system   canal-node-egi67                       3/3       Running             0          8m
kube-system   canal-node-hf2bp                       3/3       Running             0          8m
kube-system   configure-canal-ds7dp                  0/1       CrashLoopBackOff    6          8m
kube-system   etcd-master.local                      1/1       Running             0          2h
kube-system   kube-apiserver-master.local            1/1       Running             0          1h
kube-system   kube-controller-manager-master.local   1/1       Running             0          2h
kube-system   kube-discovery-982812725-rvrsi         1/1       Running             0          2h
kube-system   kube-dns-2247936740-fp4lh              0/3       ContainerCreating   0          29m
kube-system   kube-proxy-amd64-9w007                 1/1       Running             0          2h
kube-system   kube-proxy-amd64-j0cqs                 1/1       Running             0          2h
kube-system   kube-proxy-amd64-vppqb                 1/1       Running             0          2h
kube-system   kube-scheduler-master.local            1/1       Running             0          2h

and from docker container

[root@minion02 ~]# docker ps -a|grep etcd
18b0a5821c84        quay.io/coreos/etcd:v3.0.9                         "etcdctl set /coreos."   About a minute ago   Exited (4) About a minute ago                       k8s_configure-flannel.5cb8baa7_configure-canal-ds7dp_kube-system_f422e416-944e-11e6-af58-000c29d821b6_840b6cb5
[root@minion02 ~]# docker logs 18b0a5821c84
Error:  client: etcd cluster is unavailable or misconfigured
error #0: dial tcp 127.0.0.1:2379: getsockopt: connection refused

Config.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  name: canal-config 
  namespace: kube-system
data:
  etcd_endpoints: "http://192.168.10.137:2379"
  canal_iface: ""
  masquerade: "true"
  cni_network_config: |-
    {
        "name": "canal",
        "type": "flannel",
        "delegate": {
          "type": "calico",
          "etcd_endpoints": "__ETCD_ENDPOINTS__",
          "log_level": "info",
          "policy": {
              "type": "k8s",
              "k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
              "k8s_api_token": "__SERVICEACCOUNT_TOKEN__"
          },
          "kubernetes": {
              "kubeconfig": "/etc/cni/net.d/__KUBECONFIG_FILENAME__"
          }
        }
    }
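
The etcdctl error above shows the configure-canal job falling back to its default of http://127.0.0.1:2379 rather than the etcd_endpoints value from the ConfigMap. A hedged sketch of a container spec that passes the endpoint explicitly (the flannel network CIDR is a placeholder, not taken from this cluster):

containers:
- name: configure-flannel
  image: quay.io/coreos/etcd:v3.0.9
  command:
  - etcdctl
  # Point etcdctl at the real etcd instead of its 127.0.0.1:2379 default
  - --endpoints=http://192.168.10.137:2379
  - set
  - /coreos.com/network/config
  - '{ "Network": "10.244.0.0/16", "Backend": { "Type": "vxlan" } }'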

Calico network & on premises network fabric

Communication between the Calico network and the datacenter network, with a firewall in between.

I came across a use case where this discussion applies.

Let me explain briefly.

We are running multiple pods inside the Kubernetes platform, for example APP-1, APP-2, and APP-3.

Multiple databases (DB-1, DB-2, DB-3) run outside the Kubernetes cluster, and all of them sit behind a firewall.

Only APP-1 should be able to connect to DB-1 (no other container), similarly only APP-2 to DB-2, and the same for APP-3.

Since Kubernetes uses flat networking, every container can talk to every other endpoint, even outside the Kubernetes cluster.

Can you please explain what the network policy would look like in this scenario, and what changes (rules or policies) we would have to make in the external firewall? A hedged sketch is included below.
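
A hedged sketch of the Calico side, using the calicoctl policy resource of that era (all labels, addresses, and ports are placeholders): an egress policy selecting only APP-1's pods and allowing them to reach DB-1, with other egress denied. On the firewall side, note that with masquerading enabled the databases typically see node IPs rather than pod IPs, so the firewall can only narrow traffic down to the cluster's nodes; the per-app restriction has to be enforced by policy inside the cluster.

apiVersion: v1
kind: policy
metadata:
  name: app-1-to-db-1
spec:
  # Placeholder label selecting APP-1's pods
  selector: app == 'app-1'
  egress:
  - action: allow
    protocol: tcp
    destination:
      # Placeholder address/port for DB-1 behind the firewall
      net: 203.0.113.10/32
      ports:
      - 5432
  # Deny all other egress from APP-1 (a real policy would also need
  # rules for DNS and in-cluster traffic)
  - action: deny
  ingress:
  - action: allow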

Need to update label on master for 1.6

The label mentioned for the master node is only correct for 1.5; it is a different label in 1.6.
The mention is in this file: k8s-install/kubeadm/README.md
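
For reference, a hedged example of the 1.6 form: kubeadm 1.6 labels the master with node-role.kubernetes.io/master, so a nodeSelector in a manifest would look like this (the surrounding manifest is assumed, not quoted from the README):

nodeSelector:
  # Kubernetes 1.6+ master label applied by kubeadm
  node-role.kubernetes.io/master: ""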

advanced-policy won't work

I have an on-premises test cluster using the newest versions of Kubernetes and Canal to date.
Default Kubernetes NetworkPolicy works, but adding more advanced rules via calicoctl does not.

Everything looks as expected as far as I can tell, but Calico behaves strangely.

  • It does not add any profiles (calicoctl get profile) at all.
    • Nor do I see any under etcdctl ls /calico/v1/policy --recursive
  • I can create policies, but the cluster acts as if they are not there. Even a deny-all policy does nothing.
  • I can't find any changes in the iptables rules after adding policies, on any of the hosts.
  • There are no logs in the canal-* calico-node containers when using calicoctl, but I do see log entries when using NetworkPolicy.

It feels like calicoctl writes to its own etcd, which is not the same one Kubernetes uses.
I can only find one etcd running on my hosts, the default at 127.0.0.1:2379.

I tried setting FELIX_LOGSEVERITYSYS to debug, but it still shows only INFO logs, even though I've verified that the environment variable is actually set to debug. Another bug?

How can I debug further?
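
One thing worth checking is which datastore calicoctl is pointed at. A hedged sketch of a calicoctl config for an etcd-backed install (the path and endpoint are placeholders); if the cluster was deployed with the etcdless/Kubernetes-datastore manifests, calicoctl would instead need datastoreType "kubernetes" plus a kubeconfig:

# /etc/calico/calicoctl.cfg (assumed default location)
apiVersion: v1
kind: calicoApiConfig
metadata:
spec:
  datastoreType: "etcdv2"
  # Must point at the same etcd the Canal components use,
  # not whatever local default calicoctl falls back to.
  etcdEndpoints: "http://127.0.0.1:2379"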

Canal self-hosted failing - not properly authenticating with Kubernetes

I am unable to get self-hosted (or manual install) of Canal to authenticate with Kubernetes.

Cloud provider: AWS (VPC)
OS: CoreOS 1185.3.0
Kernel: Linux ip-172-31-7-173.us-west-2.compute.internal 4.7.3-coreos-r2 #1 SMP Tue Nov 1 01:38:43 UTC 2016 x86_64 Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz GenuineIntel GNU/Linux
Kubernetes version: v1.4.6
Docker version: 1.11.2

My master and node images boot with basically just the required Kubernetes components (kubelet, kube-proxy, and the control-plane portions on the masters).

I've also set:

            --network-plugin-dir=/etc/cni/net.d \
            --network-plugin=cni 

on my kubelet.

I first tried the provided canal.yaml and config.yaml files without modification, without any success.

I've also tried it with the following modifications:

  • Flannel image: quay.io/coreos/flannel:v0.6.2
  • calico-node image: quay.io/calico/node:v1.0.0-beta-rc5
  • install-calico-cni image: calico/cni:v1.5.1
  • kube-policy-controller image: calico/kube-policy-controller:v0.5.0

I set the delegate.log_level key to DEBUG in cni_network_config and set the FELIX_LOGSEVERITYSCREEN=DEBUG environment variable for calico-node.

Here is the output from the kubelet log when trying to create a simple nginx pod

Nov 18 21:13:20 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: E1118 21:13:20.795558    4005 docker_manager.go:746] Logging security options: {key:seccomp value:unconfined msg:}
Nov 18 21:13:20 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: I1118 21:13:20.976330   24146 utils.go:224] libcalico glog logging configured
Nov 18 21:13:20 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: time="2016-11-18T21:13:20Z" level=info msg="Extracted identifiers" Node=ip-172-31-7-173.us-west-2.compute.internal Orchestrator=k8s Workload=default.nginx-2226242539-1z7xp
Nov 18 21:13:20 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: time="2016-11-18T21:13:20Z" level=info msg="Loaded CNI NetConf" NetConfg={canal calico { host-local 192.168.127.0/24 <nil> <nil>} 8951    http://127.0.0.1:2379 DEBUG {k8s https://10.100.0.1:443    } { /etc/cni/net.d/calico-kubeconfig } {{{ {[]}}}}    } Workload=default.nginx-2226242539-1z7xp
Nov 18 21:13:20 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: time="2016-11-18T21:13:20Z" level=info msg="Configured environment: [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin JOURNAL_STREAM=8:23556 CNI_COMMAND=ADD CNI_CONTAINERID=8843fa036492e449ad2948d2b4fe6e3d87cc13e920122e7eaa8b27870ef4b92c CNI_NETNS=/proc/24112/ns/net CNI_ARGS=IgnoreUnknown=1;IgnoreUnknown=1;K8S_POD_NAMESPACE=default;K8S_POD_NAME=nginx-2226242539-1z7xp;K8S_POD_INFRA_CONTAINER_ID=8843fa036492e449ad2948d2b4fe6e3d87cc13e920122e7eaa8b27870ef4b92c CNI_IFNAME=eth0 CNI_PATH=/opt/cni/bin:/opt/flannel/bin ETCD_ENDPOINTS=http://127.0.0.1:2379 KUBECONFIG=/etc/cni/net.d/calico-kubeconfig]"
Nov 18 21:13:20 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: time="2016-11-18T21:13:20Z" level=info msg="No config file specified, loading config from environment"
Nov 18 21:13:20 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: time="2016-11-18T21:13:20Z" level=info msg="Datastore type: etcdv2"
Nov 18 21:13:20 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: time="2016-11-18T21:13:20Z" level=debug msg="Using datastore type 'etcdv2'"
Nov 18 21:13:20 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: time="2016-11-18T21:13:20Z" level=debug msg="List Key: /calico/v1/host/ip-172-31-7-173.us-west-2.compute.internal/workload/k8s/default.nginx-2226242539-1z7xp/endpoint"
Nov 18 21:13:20 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: time="2016-11-18T21:13:20Z" level=debug msg="Key not found error"
Nov 18 21:13:20 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: time="2016-11-18T21:13:20Z" level=debug msg="Retrieved endpoints: &{{workloadEndpointList v1} {} []}" Workload=default.nginx-2226242539-1z7xp
Nov 18 21:13:20 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: Calico CNI checking for existing endpoint: <nil>
Nov 18 21:13:20 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: I1118 21:13:20.984454   24146 utils.go:224] libcalico glog logging configured
Nov 18 21:13:20 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: time="2016-11-18T21:13:20Z" level=info msg="Extracted identifiers for CmdAddK8s" Node=ip-172-31-7-173.us-west-2.compute.internal Orchestrator=k8s Workload=default.nginx-2226242539-1z7xp
Nov 18 21:13:20 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: I1118 21:13:20.985343   24146 loader.go:329] Config loaded from file /etc/cni/net.d/calico-kubeconfig
Nov 18 21:13:20 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: time="2016-11-18T21:13:20Z" level=debug msg="Kubernetes config &{https://10.100.0.1:443   {  <nil> <nil>}     <nil> <nil> {   [] [] []} true  <nil> <nil> 0 0 <nil>}" Workload=default.nginx-2226242539-1z7xp
Nov 18 21:13:20 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: time="2016-11-18T21:13:20Z" level=debug msg="Created Kubernetes client" Workload=default.nginx-2226242539-1z7xp client=&{0xc420386fa0 0xc420022198 0xc4200221a0 0xc4200221a8 0xc4200221b0 0xc4200221b8 0xc4200221c0}
Nov 18 21:13:20 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: time="2016-11-18T21:13:20Z" level=debug msg="Calling IPAM plugin host-local" Workload=default.nginx-2226242539-1z7xp
Nov 18 21:13:20 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: time="2016-11-18T21:13:20Z" level=debug msg="IPAM plugin returned: IP4:{IP:{IP:192.168.127.2 Mask:ffffff00} Gateway:192.168.127.1 Routes:[{Dst:{IP:192.168.0.0 Mask:ffff0000} GW:<nil>}]}, DNS:{Nameservers:[] Domain: Search:[] Options:[]}" Workload=default.nginx-2226242539-1z7xp
Nov 18 21:13:20 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: time="2016-11-18T21:13:20Z" level=info msg="Populated endpoint" Workload=default.nginx-2226242539-1z7xp endpoint=&{{workloadEndpoint v1} {{} eth0 default.nginx-2226242539-1z7xp k8s ip-172-31-7-173.us-west-2.compute.internal map[]} {[192.168.127.2/32] [] <nil> <nil> [k8s_ns.default]  <nil>}}
Nov 18 21:13:20 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: I1118 21:13:20.990913   24146 round_trippers.go:299] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: calico/v0.0.0 (linux/amd64) kubernetes/$Format" https://10.100.0.1:443/api/v1/namespaces/default/pods/nginx-2226242539-1z7xp
Nov 18 21:13:21 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: I1118 21:13:21.063241   24146 round_trippers.go:318] GET https://10.100.0.1:443/api/v1/namespaces/default/pods/nginx-2226242539-1z7xp 401 Unauthorized in 72 milliseconds
Nov 18 21:13:21 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: I1118 21:13:21.063259   24146 round_trippers.go:324] Response Headers:
Nov 18 21:13:21 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: I1118 21:13:21.063266   24146 round_trippers.go:327]     Content-Type: text/plain; charset=utf-8
Nov 18 21:13:21 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: I1118 21:13:21.063272   24146 round_trippers.go:327]     X-Content-Type-Options: nosniff
Nov 18 21:13:21 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: I1118 21:13:21.063277   24146 round_trippers.go:327]     Date: Fri, 18 Nov 2016 21:13:21 GMT
Nov 18 21:13:21 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: I1118 21:13:21.063282   24146 round_trippers.go:327]     Content-Length: 13
Nov 18 21:13:21 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: I1118 21:13:21.065677   24146 request.go:908] Response Body: Unauthorized
Nov 18 21:13:21 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: I1118 21:13:21.065699   24146 request.go:998] Response Body: "Unauthorized\n"
Nov 18 21:13:21 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: time="2016-11-18T21:13:21Z" level=info msg="Cleaning up IP allocations for failed ADD" Workload=default.nginx-2226242539-1z7xp
Nov 18 21:13:21 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: E1118 21:13:21.071867    4005 cni.go:255] Error adding network: the server has asked for the client to provide credentials (get pods nginx-2226242539-1z7xp)
Nov 18 21:13:21 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: E1118 21:13:21.072209    4005 cni.go:209] Error while adding to cni network: the server has asked for the client to provide credentials (get pods nginx-2226242539-1z7xp)
Nov 18 21:13:21 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: E1118 21:13:21.072508    4005 docker_manager.go:2127] Failed to setup network for pod "nginx-2226242539-1z7xp_default(ebf28f96-adc9-11e6-bb8b-0a672f8dd7e7)" using network plugins "cni": the server has asked for the client to provide credentials (get pods nginx-2226242539-1z7xp); Skipping pod
Nov 18 21:13:21 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: E1118 21:13:21.126666    4005 pod_workers.go:184] Error syncing pod ebf28f96-adc9-11e6-bb8b-0a672f8dd7e7, skipping: failed to "SetupNetwork" for "nginx-2226242539-1z7xp_default" with SetupNetworkError: "Failed to setup network for pod \"nginx-2226242539-1z7xp_default(ebf28f96-adc9-11e6-bb8b-0a672f8dd7e7)\" using network plugins \"cni\": the server has asked for the client to provide credentials (get pods nginx-2226242539-1z7xp); Skipping pod"
Nov 18 21:13:21 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: E1118 21:13:21.127577    4005 kubelet_getters.go:249] Could not read directory /var/lib/kubelet/pods/4af5f5c1-adcb-11e6-bb8b-0a672f8dd7e7/volumes: open /var/lib/kubelet/pods/4af5f5c1-adcb-11e6-bb8b-0a672f8dd7e7/volumes: no such file or directory
Nov 18 21:13:21 ip-172-31-7-173.us-west-2.compute.internal kubelet[4005]: E1118 21:13:21.127592    4005 kubelet_volumes.go:159] Orphaned pod "4af5f5c1-adcb-11e6-bb8b-0a672f8dd7e7" found, but error open /var/lib/kubelet/pods/4af5f5c1-adcb-11e6-bb8b-0a672f8dd7e7/volumes: no such file or directory occured during reading volume dir from disk

Here is the yaml for an example pod

apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: default
spec:
  selector:
    app: nginx-default
  ports:
  - name: https
    port: 80
    protocol: TCP
    targetPort: 80
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-default
  template:
    metadata:
      labels:
        app: nginx-default
      name: nginx
    spec:
      containers:
      - image: "nginx:alpine"
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP

The pod is stuck in ContainerCreating, and the events output from kubectl describe pod shows:

  1h	1s	1576	{kubelet ip-172-31-7-173.us-west-2.compute.internal}		Warning	FailedSync	Error syncing pod, skipping: failed to "SetupNetwork" for "nginx-2226242539-1z7xp_default" with SetupNetworkError: "Failed to setup network for pod \"nginx-2226242539-1z7xp_default(ebf28f96-adc9-11e6-bb8b-0a672f8dd7e7)\" using network plugins \"cni\": the server has asked for the client to provide credentials (get pods nginx-2226242539-1z7xp); Skipping pod"

It looks like an IP address is being allocated, but when querying kubernetes, the authorization fails. The file /etc/cni/net.d/10-canal.conf is getting properly populated with a valid k8s token.

I ran

curl \
    -H "Authorization: Bearer $(cat /etc/cni/net.d/10-canal.conf |jq -r .delegate.policy.k8s_api_token)" \
     https://10.100.0.1:443/api/v1/namespaces/default/pods/nginx-2226242539-1z7xp

and got back a response.

kube-flannel CrashLoopBackOff on bare-metal install

Hello,

I've deployed Kubernetes 1.6.2 on a brand new metal node with kubeadm with:

$ sudo kubeadm init --pod-network-cidr 10.244.0.0/16

...and did the steps in this document:

I then get:

kubectl -n kube-system logs po/canal-khsqt kube-flannel
E0501 20:55:21.603770 1 main.go:127] Failed to create SubnetManager: failed to read net conf: open /etc/kube-flannel/net-conf.json: no such file or directory

What am I doing incorrectly?
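
flannel reads its network configuration from a file mounted out of a ConfigMap, and the error above says that file is missing. A hedged sketch of the entry it expects (the ConfigMap name and CIDR are assumptions; in the Canal manifests the key normally lives in the canal-config ConfigMap and must match the --pod-network-cidr used at kubeadm init):

kind: ConfigMap
apiVersion: v1
metadata:
  name: canal-config        # assumed name; check the manifest you applied
  namespace: kube-system
data:
  # Mounted into the pod as /etc/kube-flannel/net-conf.json
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }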

Second node using docker0 instead of canal

I've set up a new Kubernetes cluster with kubeadm on two bare-metal nodes.

The LAN that they sit on is 192.168.1.x/24.

I initialized the first node with:

$ sudo kubeadm init --pod-network-cidr 10.244.0.0/16

And set up Canal with the steps on:

And joined node2 to the cluster with kubeadm

However, when I join the second node, and schedule pods on it, it's using the docker0 interface - node1 is doing the right thing with canal:

NAMESPACE    NAME                  READY     STATUS    RESTARTS   AGE       IP           NODE
kube-system   canal-6mtm6   3/3       Running   0         6m        192.168.1.5   node1
kube-system   canal-n5db6   3/3       Running   0         5m        192.168.1.6   node2
kube-system   etcd-node1   1/1       Running   0         7m        192.168.1.5   node1
kube-system   heapster-1428305041-hvbp0   1/1       Running   0         3m        172.17.0.3   node2
kube-system   kube-apiserver-node1   1/1       Running   0         7m        192.168.1.5   node1
kube-system   kube-controller-manager-node1   1/1       Running   0         7m        192.168.1.5   node1
kube-system   kube-dns-3913472980-c0cst   3/3       Running   0         7m        10.244.0.7   node1
kube-system   kube-proxy-7s3h5   1/1       Running   0         5m        192.168.1.6   node2
kube-system   kube-proxy-ft2p6   1/1       Running   0         7m        192.168.1.5   node1
kube-system   kube-scheduler-node1   1/1       Running   0         7m        192.168.1.5   node1
kube-system   monitoring-grafana-3975459543-r69tt   1/1       Running   0         3m        172.17.0.2   node2
kube-system   monitoring-influxdb-3480804314-9708z   1/1       Running   0         3m        10.244.0.8   node1

On node1, I see:

3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether ... brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether ... brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::.../64 scope link 
       valid_lft forever preferred_lft forever
9: cali70daf1af12c@if3:
   ...

While on node2, there's:

3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether ... brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::.../64 scope link 
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether ... brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::.../64 scope link 
       valid_lft forever preferred_lft forever
5: cni0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether ... brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.1/24 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::.../64 scope link 
       valid_lft forever preferred_lft forever
41: veth4e88edb@if40: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    ...

However, the config looks good on node2:

node2 $ cat /etc/cni/net.d/10-calico.conf 
{
    "name": "k8s-pod-network",
    "type": "calico",
    "log_level": "info",
    "datastore_type": "kubernetes",
    "hostname": "node2",
    "ipam": {
        "type": "host-local",
        "subnet": "usePodCidr"
    },
    "policy": {
        "type": "k8s",
        "k8s_auth_token": "..."
    },
    "kubernetes": {
        "k8s_api_root": "https://10.96.0.1:443",
        "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
    }
}

Canal policy for hostnetwork=true pods?

Not sure if this is the right place to post this, but I'm facing a problem with Kubernetes 1.6 when I try to access pods from the nginx-ingress-controller pod, which is running with hostNetwork=true.

Is there a way to still restrict traffic to a pod / endpoint but allow the host network pods to connect?

I tried using namespace selectors, but AFAIK hostNetwork=true means the pod uses the network namespace of the host, so this doesn't restrict anything but rather makes the pod inaccessible.

Does anyone have ideas or better approaches for this?
The only solution I see is perhaps another nginx reverse proxy with manually added rules. (A hedged policy sketch follows the environment list below.)

  • Calico version: 1.2.1
  • Orchestrator version (e.g. kubernetes, mesos, rkt): kubernetes 1.6
  • Operating System and version: CoreOS
  • Link to your project (optional):
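
A hedged sketch of one approach, using the calicoctl policy resource of that era: because the ingress controller runs with hostNetwork=true, its traffic appears to come from the node addresses, so allowing the node CIDR is the coarsest workable rule (it cannot distinguish the ingress controller from anything else running on the hosts). All labels and CIDRs below are placeholders:

apiVersion: v1
kind: policy
metadata:
  name: allow-from-host-network
spec:
  # Placeholder selector for the backend pods being protected
  selector: app == 'backend'
  ingress:
  # Allow traffic originating from the nodes themselves
  # (which is where hostNetwork=true pods appear to come from)
  - action: allow
    source:
      net: 192.168.1.0/24
  - action: deny
  egress:
  - action: allow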

Traffic from GCE LoadBalancer blocked by iptables filter

I have a Kubernetes cluster on Google Cloud: one master and one minion. kubeadm was used to create the cluster, and Canal (flannel) is used as the CNI provider. One Service of type LoadBalancer was deployed.
Google randomly distributes traffic to the master and the minion, but the master doesn't accept requests.

nat iptables from master node:

-A KUBE-FW-MX7ZTTA3CLR5PD5H -m comment --comment "jenkins/jenkins-jenkins:http loadbalancer IP" -j KUBE-MARK-MASQ
-A KUBE-FW-MX7ZTTA3CLR5PD5H -m comment --comment "jenkins/jenkins-jenkins:http loadbalancer IP" -j KUBE-SVC-MX7ZTTA3CLR5PD5H
-A KUBE-FW-MX7ZTTA3CLR5PD5H -s xxx.xxx.xxx.xxx/32 -m comment --comment "jenkins/jenkins-jenkins:http loadbalancer IP" -j KUBE-SVC-MX7ZTTA3CLR5PD5H
-A KUBE-FW-MX7ZTTA3CLR5PD5H -m comment --comment "jenkins/jenkins-jenkins:http loadbalancer IP" -j KUBE-MARK-DROP
-A KUBE-NODEPORTS -p tcp -m comment --comment "jenkins/jenkins-jenkins:http" -m tcp --dport 30467 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "jenkins/jenkins-jenkins:http" -m tcp --dport 30467 -j KUBE-SVC-MX7ZTTA3CLR5PD5H
-A KUBE-SEP-ETTYJLHSYBMK522U -s 10.244.2.16/32 -m comment --comment "jenkins/jenkins-jenkins:http" -j KUBE-MARK-MASQ
-A KUBE-SEP-ETTYJLHSYBMK522U -p tcp -m comment --comment "jenkins/jenkins-jenkins:http" -m tcp -j DNAT --to-destination 10.244.2.16:8080
-A KUBE-SEP-WZT46T3WAUJMMZNP -s 10.244.2.16/32 -m comment --comment "jenkins/jenkins-jenkins-agent:slavelistener" -j KUBE-MARK-MASQ
-A KUBE-SEP-WZT46T3WAUJMMZNP -p tcp -m comment --comment "jenkins/jenkins-jenkins-agent:slavelistener" -m tcp -j DNAT --to-destination 10.244.2.16:50000
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.0.41.173/32 -p tcp -m comment --comment "jenkins/jenkins-jenkins:http cluster IP" -m tcp --dport 8080 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.0.41.173/32 -p tcp -m comment --comment "jenkins/jenkins-jenkins:http cluster IP" -m tcp --dport 8080 -j KUBE-SVC-MX7ZTTA3CLR5PD5H
-A KUBE-SERVICES -d xxx.xxx.xxx.xxx/32 -p tcp -m comment --comment "jenkins/jenkins-jenkins:http loadbalancer IP" -m tcp --dport 8080 -j KUBE-FW-MX7ZTTA3CLR5PD5H
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.0.226.250/32 -p tcp -m comment --comment "jenkins/jenkins-jenkins-agent:slavelistener cluster IP" -m tcp --dport 50000 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.0.226.250/32 -p tcp -m comment --comment "jenkins/jenkins-jenkins-agent:slavelistener cluster IP" -m tcp --dport 50000 -j KUBE-SVC-GMUNCQ4ZNKK7N5PD
-A KUBE-SVC-GMUNCQ4ZNKK7N5PD -m comment --comment "jenkins/jenkins-jenkins-agent:slavelistener" -j KUBE-SEP-WZT46T3WAUJMMZNP
-A KUBE-SVC-MX7ZTTA3CLR5PD5H -m comment --comment "jenkins/jenkins-jenkins:http" -j KUBE-SEP-ETTYJLHSYBMK522U

filter iptables from master node:

-A FORWARD -m comment --comment "cali:wUHhoiAYhphO9Mso" -j cali-FORWARD
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A cali-FORWARD -i cali+ -m comment --comment "cali:X3vB2lGcBrfkYquC" -j cali-from-wl-dispatch
-A cali-FORWARD -o cali+ -m comment --comment "cali:UtJ9FnhBnFbyQMvU" -j cali-to-wl-dispatch
-A cali-FORWARD -i cali+ -m comment --comment "cali:Tt19HcSdA5YIGSsw" -j ACCEPT
-A cali-FORWARD -o cali+ -m comment --comment "cali:9LzfFCvnpC5_MYXm" -j ACCEPT
-A cali-FORWARD -m comment --comment "cali:7AofLLOqCM5j36rM" -j MARK --set-xmark 0x0/0xe000000
-A cali-FORWARD -m comment --comment "cali:QM1_joSl7tL76Az7" -m mark --mark 0x0/0x1000000 -j cali-from-host-endpoint
-A cali-FORWARD -m comment --comment "cali:C1QSog3bk0AykjAO" -j cali-to-host-endpoint
-A cali-FORWARD -m comment --comment "cali:DmFiPAmzcisqZcvo" -m comment --comment "Host endpoint policy accepted packet." -m mark --mark 0x1000000/0x1000000 -j ACCEPT

Workaround:
I added an additional logging rule to iptables:
iptables -A FORWARD -m comment --comment drop -j LOG --log-prefix drop
And found dropped packets:
kernel: [16887.164101] dropIN=ens4 OUT=flannel.1 ......

Final solution that works for me:
iptables -A FORWARD -i flannel+ -j ACCEPT
iptables -A FORWARD -o flannel+ -j ACCEPT

I am not sure that this is the right solution.

Canal on GCE using CoreOS does not work

Expected Behavior

Pods should be able to ping each other

Current Behavior

All services report healthy, but the flannel container in the canal pod repeatedly logs the type of error shown under Context below:

Possible Solution

Steps to Reproduce (for bugs)

  1. Create CoreOS instances in GCE
  2. Set up an ansible inventory such as this:
k8s-mattymo-test2-1 ansible_ssh_host=104.199.90.98
k8s-mattymo-test2-2 ansible_ssh_host=35.195.129.28
[kube-master]
k8s-mattymo-test2-1

[kube-node]
k8s-mattymo-test2-2

[etcd]
k8s-mattymo-test2-1

[k8s-cluster:children]
kube-node
kube-master
  3. Run ansible with -e kube_network_plugin=canal
  4. Try to ping pod IPs from any host, or from any pod to another.

Context

Pod logs:
I don't have full flannel logs at the moment, but this type of message repeats constantly:
5 vxlan_network.go:241] L3 miss but route for 10.233.95.3 not found

calico-node http://paste.openstack.org/show/2R0SriTdthfMATf8t46V/
policy controller http://paste.openstack.org/show/ndCEAIAmYFmstYx2Hkxu/
endpoints http://paste.openstack.org/show/1im9g356CrPgFBT14f9o/
profile http://paste.openstack.org/show/OTVomtNV8CWn1Ikcp45m/

Your Environment

  • Calico version: v2.5.0
  • Flannel version: v0.8.0
  • Orchestrator version: Kubespray from master
  • Operating System and version: CoreOS stable (latest from GCE)
  • Link to your project (optional): github.com/kubernetes-incubator/kubespray

More details:
CoreOS + Canal works fine on vagrant
CoreOS + Flannel works fine on all platforms (including GCE)
CoreOS + Calico works fine on all platforms (including GCE)
Ubuntu and CentOS + Flannel works fine on GCE

I tried changing the backend from vxlan to gce, but no change in behavior.

The actual canal manifest being used: https://github.com/kubernetes-incubator/kubespray/blob/master/roles/network_plugin/canal/templates/canal-node.yaml.j2

Using calico-node v1.0.0 routes are not added

I'm doing some simple evaluation of calico, flannel and canal.

I am using the following versions of everything:

etcd - 3.0.15 api version 2
flanneld - v0.6.2
calico-node - v1.0.0

I am using the CNI plugins script priv-net-run.sh to run two simple netcat tests, one "nc -l 88" and another "nc <container_ip> 88", to ensure traffic gets between the containers.

When I set up flanneld and Calico independently following their docs, they both work fine. When I follow the Canal docs at https://github.com/projectcalico/canal/blob/master/InstallGuide.md but with the current versions, I cannot get my second container to talk to my first container.

Using tcpdump I see ARP requests for 169.254.1.1 being made but there is no reply. There are also no entries in my route table for the container ips.

If I switch and use v0.22.0 of calico then the ARP requests are answered and the container routes appear in the routing table.

All of my tests are done in a CentOS 7.3 VirtualBox VM.

Canal local host routing issue

I am using the following template:
https://raw.githubusercontent.com/projectcalico/canal/master/k8s-install/canal-etcdless.yaml

The cluster seems to work just fine, except that a pod running on host N without net=host cannot reach host N by IP. Cross-host contact seems to work fine, though. Tested with multiple values of N.

For example, for pod prometheus:
/prometheus # ifconfig eth0 | grep addr:[0-1]
inet addr:10.244.2.11 Bcast:0.0.0.0 Mask:255.255.255.255

kubectl get pods -o wide | grep 10.244.2.11

prometheus-server-2167327372-zbvcw 2/2 Running 0 18d 10.244.2.11 192.168.0.3

/prometheus # ping 192.168.0.3 -c 5
PING 192.168.0.3 (192.168.0.3): 56 data bytes
--- 192.168.0.3 ping statistics ---
5 packets transmitted, 0 packets received, 100% packet loss

cross host though works:
/prometheus # ping 192.168.0.4 -c 5
PING 192.168.0.4 (192.168.0.4): 56 data bytes
64 bytes from 192.168.0.4: seq=0 ttl=63 time=0.275 ms
64 bytes from 192.168.0.4: seq=1 ttl=63 time=0.242 ms
64 bytes from 192.168.0.4: seq=2 ttl=63 time=0.295 ms
64 bytes from 192.168.0.4: seq=3 ttl=63 time=0.275 ms
64 bytes from 192.168.0.4: seq=4 ttl=63 time=0.303 ms
--- 192.168.0.4 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.242/0.278/0.303 ms

k8s nodePort Service can't be reached externally

I have a 3-node Kubernetes cluster: kube-ravi196 10.163.148.196 (master), kube-ravi197 10.163.148.197, and kube-ravi198 10.163.148.198. I have a pod currently scheduled on kube-ravi198 that I'd like to expose externally to the cluster, so I have a Service of type NodePort with nodePort set to 30080. I can successfully run curl localhost:30080 locally on each node, but externally curl nodeX:30080 only works against kube-ravi198; the other two time out.

I debugged iptables and found that the external request is getting dropped in the FORWARD chain, where it hits the default DROP policy. From my (limited) understanding of Canal, it sets up a flannel.1 interface on each node and then creates one calico interface for each pod running on a node. It then sets up a felix-FORWARD iptables target in the FORWARD chain to ACCEPT any traffic entering or leaving a calico interface. The problem is that node-to-node traffic goes through the flannel.1 interface, and there is nothing to ACCEPT traffic that gets forwarded to it. Running curl localhost:30080 works because it bypasses the FORWARD chain even though it's getting DNATed (not sure why).

My fix is to add:
sudo iptables -A FORWARD -o flannel.1 -j ACCEPT

Debug info below:

$ kubectl get pods --namespace=kube-system -l "k8s-app=kube-registry" -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP              NODE
kube-registry-v0-1mthd   1/1       Running   0          39m       192.168.75.13   ravi-kube198

$ kubectl get service --namespace=kube-system -l "k8s-app=kube-registry"
NAME            CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kube-registry   10.100.57.109   <nodes>       5000:30080/TCP   5h

$ kubectl get pods --namespace=kube-system -l "k8s-app=kube-proxy" -o wide
NAME               READY     STATUS    RESTARTS   AGE       IP               NODE
kube-proxy-1rzz8   1/1       Running   0          40m       10.163.148.198   ravi-kube198
kube-proxy-fz20x   1/1       Running   0          40m       10.163.148.197   ravi-kube197
kube-proxy-lm7nm   1/1       Running   0          40m       10.163.148.196   ravi-kube196
iptables-save
# Generated by iptables-save v1.6.0 on Thu Jan  5 22:33:57 2017
*nat
:PREROUTING ACCEPT [1:60]
:INPUT ACCEPT [1:60]
:OUTPUT ACCEPT [40:2834]
:POSTROUTING ACCEPT [40:2834]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-4U6BTAJCDMHBCNTE - [0:0]
:KUBE-SEP-7QBKTOBWZOW2ADYZ - [0:0]
:KUBE-SEP-DARQFIU6CIZ6DHSZ - [0:0]
:KUBE-SEP-FMM5BAXI5QDNGXPJ - [0:0]
:KUBE-SEP-KJX7S6NVUIOUABFE - [0:0]
:KUBE-SEP-KMRLJBMSXVC225LD - [0:0]
:KUBE-SEP-KXX2UKHAML22525B - [0:0]
:KUBE-SEP-NCDBIYVKEUM6V7JV - [0:0]
:KUBE-SEP-YSVAKJLNBVVBENUI - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-E66MHSUH4AYEXSQE - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-JV2WR75K33AEZUK7 - [0:0]
:KUBE-SVC-KWJORWLCTF22FLD3 - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-XGLOHA7QRQ3V22RZ - [0:0]
:felix-FIP-DNAT - [0:0]
:felix-FIP-SNAT - [0:0]
:felix-POSTROUTING - [0:0]
:felix-PREROUTING - [0:0]
-A PREROUTING -j felix-PREROUTING
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -j felix-POSTROUTING
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 192.168.0.0/16 -d 192.168.0.0/16 -j RETURN
-A POSTROUTING -s 192.168.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
-A POSTROUTING ! -s 192.168.0.0/16 -d 192.168.0.0/16 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODEPORTS -p tcp -m comment --comment "kube-system/kube-registry:registry" -m tcp --dport 30080 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "kube-system/kube-registry:registry" -m tcp --dport 30080 -j KUBE-SVC-JV2WR75K33AEZUK7
-A KUBE-NODEPORTS -p tcp -m comment --comment "kube-system/kubernetes-dashboard:" -m tcp --dport 30882 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "kube-system/kubernetes-dashboard:" -m tcp --dport 30882 -j KUBE-SVC-XGLOHA7QRQ3V22RZ
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-4U6BTAJCDMHBCNTE -s 192.168.75.33/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-4U6BTAJCDMHBCNTE -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 192.168.75.33:53
-A KUBE-SEP-7QBKTOBWZOW2ADYZ -s 10.163.148.196/32 -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -j KUBE-MARK-MASQ
-A KUBE-SEP-7QBKTOBWZOW2ADYZ -p tcp -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -m tcp -j DNAT --to-destination 10.163.148.196:1
-A KUBE-SEP-DARQFIU6CIZ6DHSZ -s 10.163.148.198/32 -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -j KUBE-MARK-MASQ
-A KUBE-SEP-DARQFIU6CIZ6DHSZ -p tcp -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -m tcp -j DNAT --to-destination 10.163.148.198:1
-A KUBE-SEP-FMM5BAXI5QDNGXPJ -s 10.163.148.196/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-FMM5BAXI5QDNGXPJ -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-FMM5BAXI5QDNGXPJ --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 10.163.148.196:6443
-A KUBE-SEP-KJX7S6NVUIOUABFE -s 10.163.148.196/32 -m comment --comment "kube-system/canal-etcd:" -j KUBE-MARK-MASQ
-A KUBE-SEP-KJX7S6NVUIOUABFE -p tcp -m comment --comment "kube-system/canal-etcd:" -m tcp -j DNAT --to-destination 10.163.148.196:6666
-A KUBE-SEP-KMRLJBMSXVC225LD -s 192.168.45.37/32 -m comment --comment "kube-system/kubernetes-dashboard:" -j KUBE-MARK-MASQ
-A KUBE-SEP-KMRLJBMSXVC225LD -p tcp -m comment --comment "kube-system/kubernetes-dashboard:" -m tcp -j DNAT --to-destination 192.168.45.37:9090
-A KUBE-SEP-KXX2UKHAML22525B -s 10.163.148.197/32 -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -j KUBE-MARK-MASQ
-A KUBE-SEP-KXX2UKHAML22525B -p tcp -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -m tcp -j DNAT --to-destination 10.163.148.197:1
-A KUBE-SEP-NCDBIYVKEUM6V7JV -s 192.168.75.33/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-NCDBIYVKEUM6V7JV -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 192.168.75.33:53
-A KUBE-SEP-YSVAKJLNBVVBENUI -s 192.168.75.32/32 -m comment --comment "kube-system/kube-registry:registry" -j KUBE-MARK-MASQ
-A KUBE-SEP-YSVAKJLNBVVBENUI -p tcp -m comment --comment "kube-system/kube-registry:registry" -m tcp -j DNAT --to-destination 192.168.75.32:5000
-A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.100.57.109/32 -p tcp -m comment --comment "kube-system/kube-registry:registry cluster IP" -m tcp --dport 5000 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.100.57.109/32 -p tcp -m comment --comment "kube-system/kube-registry:registry cluster IP" -m tcp --dport 5000 -j KUBE-SVC-JV2WR75K33AEZUK7
-A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.110.206.254/32 -p tcp -m comment --comment "kube-system/kubernetes-dashboard: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.110.206.254/32 -p tcp -m comment --comment "kube-system/kubernetes-dashboard: cluster IP" -m tcp --dport 80 -j KUBE-SVC-XGLOHA7QRQ3V22RZ
-A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.96.232.136/32 -p tcp -m comment --comment "kube-system/canal-etcd: cluster IP" -m tcp --dport 6666 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.232.136/32 -p tcp -m comment --comment "kube-system/canal-etcd: cluster IP" -m tcp --dport 6666 -j KUBE-SVC-KWJORWLCTF22FLD3
-A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.106.192.243/32 -p tcp -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc: cluster IP" -m tcp --dport 1 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.106.192.243/32 -p tcp -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc: cluster IP" -m tcp --dport 1 -j KUBE-SVC-E66MHSUH4AYEXSQE
-A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-E66MHSUH4AYEXSQE -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-7QBKTOBWZOW2ADYZ
-A KUBE-SVC-E66MHSUH4AYEXSQE -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-KXX2UKHAML22525B
-A KUBE-SVC-E66MHSUH4AYEXSQE -m comment --comment "kube-system/glusterfs-dynamic-kube-registry-pvc:" -j KUBE-SEP-DARQFIU6CIZ6DHSZ
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-4U6BTAJCDMHBCNTE
-A KUBE-SVC-JV2WR75K33AEZUK7 -m comment --comment "kube-system/kube-registry:registry" -j KUBE-SEP-YSVAKJLNBVVBENUI
-A KUBE-SVC-KWJORWLCTF22FLD3 -m comment --comment "kube-system/canal-etcd:" -j KUBE-SEP-KJX7S6NVUIOUABFE
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-FMM5BAXI5QDNGXPJ --mask 255.255.255.255 --rsource -j KUBE-SEP-FMM5BAXI5QDNGXPJ
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-FMM5BAXI5QDNGXPJ
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-NCDBIYVKEUM6V7JV
-A KUBE-SVC-XGLOHA7QRQ3V22RZ -m comment --comment "kube-system/kubernetes-dashboard:" -j KUBE-SEP-KMRLJBMSXVC225LD
-A felix-POSTROUTING -j felix-FIP-SNAT
-A felix-PREROUTING -j felix-FIP-DNAT
COMMIT
# Completed on Thu Jan  5 22:33:57 2017
# Generated by iptables-save v1.6.0 on Thu Jan  5 22:33:57 2017
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
:f2b-sshd - [0:0]
:felix-FAILSAFE-IN - [0:0]
:felix-FAILSAFE-OUT - [0:0]
:felix-FORWARD - [0:0]
:felix-FROM-ENDPOINT - [0:0]
:felix-FROM-HOST-IF - [0:0]
:felix-INPUT - [0:0]
:felix-OUTPUT - [0:0]
:felix-TO-ENDPOINT - [0:0]
:felix-TO-HOST-IF - [0:0]
:felix-from-2364763ca74 - [0:0]
:felix-from-428849870ea - [0:0]
:felix-p-_0f05888047b5982-i - [0:0]
:felix-p-_0f05888047b5982-o - [0:0]
:felix-p-_3806fbfa06d1365-i - [0:0]
:felix-p-_3806fbfa06d1365-o - [0:0]
:felix-to-2364763ca74 - [0:0]
:felix-to-428849870ea - [0:0]
:ufw-after-forward - [0:0]
:ufw-after-input - [0:0]
:ufw-after-logging-forward - [0:0]
:ufw-after-logging-input - [0:0]
:ufw-after-logging-output - [0:0]
:ufw-after-output - [0:0]
:ufw-before-forward - [0:0]
:ufw-before-input - [0:0]
:ufw-before-logging-forward - [0:0]
:ufw-before-logging-input - [0:0]
:ufw-before-logging-output - [0:0]
:ufw-before-output - [0:0]
:ufw-logging-allow - [0:0]
:ufw-logging-deny - [0:0]
:ufw-not-local - [0:0]
:ufw-reject-forward - [0:0]
:ufw-reject-input - [0:0]
:ufw-reject-output - [0:0]
:ufw-skip-to-policy-forward - [0:0]
:ufw-skip-to-policy-input - [0:0]
:ufw-skip-to-policy-output - [0:0]
:ufw-track-forward - [0:0]
:ufw-track-input - [0:0]
:ufw-track-output - [0:0]
:ufw-user-forward - [0:0]
:ufw-user-input - [0:0]
:ufw-user-limit - [0:0]
:ufw-user-limit-accept - [0:0]
:ufw-user-logging-forward - [0:0]
:ufw-user-logging-input - [0:0]
:ufw-user-logging-output - [0:0]
:ufw-user-output - [0:0]
-A INPUT -j felix-INPUT
-A INPUT -j KUBE-FIREWALL
-A INPUT -p tcp -m multiport --dports 22 -j f2b-sshd
-A INPUT -j ufw-before-logging-input
-A INPUT -j ufw-before-input
-A INPUT -j ufw-after-input
-A INPUT -j ufw-after-logging-input
-A INPUT -j ufw-reject-input
-A INPUT -j ufw-track-input
-A FORWARD -j felix-FORWARD
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -j ufw-before-logging-forward
-A FORWARD -j ufw-before-forward
-A FORWARD -j ufw-after-forward
-A FORWARD -j ufw-after-logging-forward
-A FORWARD -j ufw-reject-forward
-A FORWARD -j ufw-track-forward
-A OUTPUT -j felix-OUTPUT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A OUTPUT -j ufw-before-logging-output
-A OUTPUT -j ufw-before-output
-A OUTPUT -j ufw-after-output
-A OUTPUT -j ufw-after-logging-output
-A OUTPUT -j ufw-reject-output
-A OUTPUT -j ufw-track-output
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A f2b-sshd -j RETURN
-A felix-FAILSAFE-IN -p tcp -m tcp --dport 22 -j ACCEPT
-A felix-FAILSAFE-OUT -p tcp -m tcp --dport 2379 -j ACCEPT
-A felix-FAILSAFE-OUT -p tcp -m tcp --dport 2380 -j ACCEPT
-A felix-FAILSAFE-OUT -p tcp -m tcp --dport 4001 -j ACCEPT
-A felix-FAILSAFE-OUT -p tcp -m tcp --dport 7001 -j ACCEPT
-A felix-FORWARD -i cali+ -m conntrack --ctstate INVALID -j DROP
-A felix-FORWARD -o cali+ -m conntrack --ctstate INVALID -j DROP
-A felix-FORWARD -i cali+ -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A felix-FORWARD -o cali+ -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A felix-FORWARD -i cali+ -j felix-FROM-ENDPOINT
-A felix-FORWARD -o cali+ -j felix-TO-ENDPOINT
-A felix-FORWARD -i cali+ -j ACCEPT
-A felix-FORWARD -o cali+ -j ACCEPT
-A felix-FROM-ENDPOINT -i cali2364763ca74 -g felix-from-2364763ca74
-A felix-FROM-ENDPOINT -i cali428849870ea -g felix-from-428849870ea
-A felix-FROM-ENDPOINT -m comment --comment "From unknown endpoint" -j DROP
-A felix-FROM-HOST-IF -m comment --comment "Unknown interface, return" -j RETURN
-A felix-INPUT -m conntrack --ctstate INVALID -j DROP
-A felix-INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A felix-INPUT ! -i cali+ -g felix-FROM-HOST-IF
-A felix-INPUT -p udp -m udp --sport 68 --dport 67 -j ACCEPT
-A felix-INPUT -p udp -m udp --dport 53 -j ACCEPT
-A felix-INPUT -j felix-FROM-ENDPOINT
-A felix-OUTPUT -m conntrack --ctstate INVALID -j DROP
-A felix-OUTPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A felix-OUTPUT ! -o cali+ -g felix-TO-HOST-IF
-A felix-TO-ENDPOINT -o cali2364763ca74 -g felix-to-2364763ca74
-A felix-TO-ENDPOINT -o cali428849870ea -g felix-to-428849870ea
-A felix-TO-ENDPOINT -m comment --comment "To unknown endpoint" -j DROP
-A felix-TO-HOST-IF -m comment --comment "Unknown interface, return" -j RETURN
-A felix-from-2364763ca74 -j MARK --set-xmark 0x0/0x1000000
-A felix-from-2364763ca74 -m mac ! --mac-source 2A:E8:7F:82:AA:B5 -m comment --comment "Incorrect source MAC" -j DROP
-A felix-from-2364763ca74 -m comment --comment "Start of tier k8s-network-policy" -j MARK --set-xmark 0x0/0x2000000
-A felix-from-2364763ca74 -m mark --mark 0x0/0x2000000 -j felix-p-_3806fbfa06d1365-o
-A felix-from-2364763ca74 -m mark --mark 0x1000000/0x1000000 -m comment --comment "Return if policy accepted" -j RETURN
-A felix-from-2364763ca74 -m mark --mark 0x0/0x2000000 -m comment --comment "Drop if no policy in tier passed" -j DROP
-A felix-from-2364763ca74 -j felix-p-_0f05888047b5982-o
-A felix-from-2364763ca74 -m mark --mark 0x1000000/0x1000000 -m comment --comment "Profile accepted packet" -j RETURN
-A felix-from-2364763ca74 -m comment --comment "Packet did not match any profile (endpoint eth0)" -j DROP
-A felix-from-428849870ea -j MARK --set-xmark 0x0/0x1000000
-A felix-from-428849870ea -m mac ! --mac-source E6:76:86:8B:42:2A -m comment --comment "Incorrect source MAC" -j DROP
-A felix-from-428849870ea -m comment --comment "Start of tier k8s-network-policy" -j MARK --set-xmark 0x0/0x2000000
-A felix-from-428849870ea -m mark --mark 0x0/0x2000000 -j felix-p-_3806fbfa06d1365-o
-A felix-from-428849870ea -m mark --mark 0x1000000/0x1000000 -m comment --comment "Return if policy accepted" -j RETURN
-A felix-from-428849870ea -m mark --mark 0x0/0x2000000 -m comment --comment "Drop if no policy in tier passed" -j DROP
-A felix-from-428849870ea -j felix-p-_0f05888047b5982-o
-A felix-from-428849870ea -m mark --mark 0x1000000/0x1000000 -m comment --comment "Profile accepted packet" -j RETURN
-A felix-from-428849870ea -m comment --comment "Packet did not match any profile (endpoint eth0)" -j DROP
-A felix-p-_0f05888047b5982-i -j MARK --set-xmark 0x1000000/0x1000000
-A felix-p-_0f05888047b5982-i -m mark --mark 0x1000000/0x1000000 -j RETURN
-A felix-p-_0f05888047b5982-o -j MARK --set-xmark 0x1000000/0x1000000
-A felix-p-_0f05888047b5982-o -m mark --mark 0x1000000/0x1000000 -j RETURN
-A felix-p-_3806fbfa06d1365-i -j MARK --set-xmark 0x2000000/0x2000000
-A felix-p-_3806fbfa06d1365-i -m mark --mark 0x2000000/0x2000000 -j RETURN
-A felix-p-_3806fbfa06d1365-o -j MARK --set-xmark 0x2000000/0x2000000
-A felix-p-_3806fbfa06d1365-o -m mark --mark 0x2000000/0x2000000 -j RETURN
-A felix-to-2364763ca74 -j MARK --set-xmark 0x0/0x1000000
-A felix-to-2364763ca74 -m comment --comment "Start of tier k8s-network-policy" -j MARK --set-xmark 0x0/0x2000000
-A felix-to-2364763ca74 -m mark --mark 0x0/0x2000000 -j felix-p-_3806fbfa06d1365-i
-A felix-to-2364763ca74 -m mark --mark 0x1000000/0x1000000 -m comment --comment "Return if policy accepted" -j RETURN
-A felix-to-2364763ca74 -m mark --mark 0x0/0x2000000 -m comment --comment "Drop if no policy in tier passed" -j DROP
-A felix-to-2364763ca74 -j felix-p-_0f05888047b5982-i
-A felix-to-2364763ca74 -m mark --mark 0x1000000/0x1000000 -m comment --comment "Profile accepted packet" -j RETURN
-A felix-to-2364763ca74 -m comment --comment "Packet did not match any profile (endpoint eth0)" -j DROP
-A felix-to-428849870ea -j MARK --set-xmark 0x0/0x1000000
-A felix-to-428849870ea -m comment --comment "Start of tier k8s-network-policy" -j MARK --set-xmark 0x0/0x2000000
-A felix-to-428849870ea -m mark --mark 0x0/0x2000000 -j felix-p-_3806fbfa06d1365-i
-A felix-to-428849870ea -m mark --mark 0x1000000/0x1000000 -m comment --comment "Return if policy accepted" -j RETURN
-A felix-to-428849870ea -m mark --mark 0x0/0x2000000 -m comment --comment "Drop if no policy in tier passed" -j DROP
-A felix-to-428849870ea -j felix-p-_0f05888047b5982-i
-A felix-to-428849870ea -m mark --mark 0x1000000/0x1000000 -m comment --comment "Profile accepted packet" -j RETURN
-A felix-to-428849870ea -m comment --comment "Packet did not match any profile (endpoint eth0)" -j DROP
-A ufw-after-input -p udp -m udp --dport 137 -j ufw-skip-to-policy-input
-A ufw-after-input -p udp -m udp --dport 138 -j ufw-skip-to-policy-input
-A ufw-after-input -p tcp -m tcp --dport 139 -j ufw-skip-to-policy-input
-A ufw-after-input -p tcp -m tcp --dport 445 -j ufw-skip-to-policy-input
-A ufw-after-input -p udp -m udp --dport 67 -j ufw-skip-to-policy-input
-A ufw-after-input -p udp -m udp --dport 68 -j ufw-skip-to-policy-input
-A ufw-after-input -m addrtype --dst-type BROADCAST -j ufw-skip-to-policy-input
-A ufw-before-forward -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A ufw-before-forward -p icmp -m icmp --icmp-type 3 -j ACCEPT
-A ufw-before-forward -p icmp -m icmp --icmp-type 4 -j ACCEPT
-A ufw-before-forward -p icmp -m icmp --icmp-type 11 -j ACCEPT
-A ufw-before-forward -p icmp -m icmp --icmp-type 12 -j ACCEPT
-A ufw-before-forward -p icmp -m icmp --icmp-type 8 -j ACCEPT
-A ufw-before-forward -j ufw-user-forward
-A ufw-before-input -i lo -j ACCEPT
-A ufw-before-input -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A ufw-before-input -m conntrack --ctstate INVALID -j ufw-logging-deny
-A ufw-before-input -m conntrack --ctstate INVALID -j DROP
-A ufw-before-input -p icmp -m icmp --icmp-type 3 -j ACCEPT
-A ufw-before-input -p icmp -m icmp --icmp-type 4 -j ACCEPT
-A ufw-before-input -p icmp -m icmp --icmp-type 11 -j ACCEPT
-A ufw-before-input -p icmp -m icmp --icmp-type 12 -j ACCEPT
-A ufw-before-input -p icmp -m icmp --icmp-type 8 -j ACCEPT
-A ufw-before-input -p udp -m udp --sport 67 --dport 68 -j ACCEPT
-A ufw-before-input -j ufw-not-local
-A ufw-before-input -d 224.0.0.251/32 -p udp -m udp --dport 5353 -j ACCEPT
-A ufw-before-input -d 239.255.255.250/32 -p udp -m udp --dport 1900 -j ACCEPT
-A ufw-before-input -j ufw-user-input
-A ufw-before-output -o lo -j ACCEPT
-A ufw-before-output -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A ufw-before-output -j ufw-user-output
-A ufw-not-local -m addrtype --dst-type LOCAL -j RETURN
-A ufw-not-local -m addrtype --dst-type MULTICAST -j RETURN
-A ufw-not-local -m addrtype --dst-type BROADCAST -j RETURN
-A ufw-not-local -m limit --limit 3/min --limit-burst 10 -j ufw-logging-deny
-A ufw-not-local -j DROP
-A ufw-skip-to-policy-forward -j DROP
-A ufw-skip-to-policy-input -j ACCEPT
-A ufw-skip-to-policy-output -j ACCEPT
-A ufw-track-input -p tcp -m conntrack --ctstate NEW -j ACCEPT
-A ufw-track-input -p udp -m conntrack --ctstate NEW -j ACCEPT
-A ufw-track-output -p tcp -m conntrack --ctstate NEW -j ACCEPT
-A ufw-track-output -p udp -m conntrack --ctstate NEW -j ACCEPT
-A ufw-user-input -p tcp -m tcp --dport 22 -j ACCEPT
-A ufw-user-limit -j REJECT --reject-with icmp-port-unreachable
-A ufw-user-limit-accept -j ACCEPT
-A ufw-user-logging-forward -j RETURN
-A ufw-user-logging-input -j RETURN
-A ufw-user-logging-output -j RETURN
COMMIT
# Completed on Thu Jan  5 22:33:57 2017
iptables nat table
Chain PREROUTING (policy ACCEPT 1 packets, 60 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    1    60 felix-PREROUTING  all  --  any    any     anywhere             anywhere            
 7328  448K KUBE-SERVICES  all  --  any    any     anywhere             anywhere             /* kubernetes service portals */
 9819  590K DOCKER     all  --  any    any     anywhere             anywhere             ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 1 packets, 60 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 2 packets, 144 bytes)
 pkts bytes target     prot opt in     out     source               destination         
 3398  207K KUBE-SERVICES  all  --  any    any     anywhere             anywhere             /* kubernetes service portals */
  177 11058 DOCKER     all  --  any    any     anywhere            !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 2 packets, 144 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    2   144 felix-POSTROUTING  all  --  any    any     anywhere             anywhere            
 3893  237K KUBE-POSTROUTING  all  --  any    any     anywhere             anywhere             /* kubernetes postrouting rules */
    0     0 MASQUERADE  all  --  any    !docker0  172.17.0.0/16        anywhere            
    0     0 RETURN     all  --  any    any     192.168.0.0/16       192.168.0.0/16      
    0     0 MASQUERADE  all  --  any    any     192.168.0.0/16      !base-address.mcast.net/4 
    0     0 MASQUERADE  all  --  any    any    !192.168.0.0/16       192.168.0.0/16      

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 RETURN     all  --  docker0 any     anywhere             anywhere            

Chain KUBE-MARK-DROP (0 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 MARK       all  --  any    any     anywhere             anywhere             MARK or 0x8000

Chain KUBE-MARK-MASQ (18 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 MARK       all  --  any    any     anywhere             anywhere             MARK or 0x4000

Chain KUBE-NODEPORTS (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  tcp  --  any    any     anywhere             anywhere             /* kube-system/kube-registry:registry */ tcp dpt:30080
    0     0 KUBE-SVC-JV2WR75K33AEZUK7  tcp  --  any    any     anywhere             anywhere             /* kube-system/kube-registry:registry */ tcp dpt:30080
    0     0 KUBE-MARK-MASQ  tcp  --  any    any     anywhere             anywhere             /* kube-system/kubernetes-dashboard: */ tcp dpt:30882
    0     0 KUBE-SVC-XGLOHA7QRQ3V22RZ  tcp  --  any    any     anywhere             anywhere             /* kube-system/kubernetes-dashboard: */ tcp dpt:30882

Chain KUBE-POSTROUTING (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 MASQUERADE  all  --  any    any     anywhere             anywhere             /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000

Chain KUBE-SEP-4U6BTAJCDMHBCNTE (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  all  --  any    any     192.168.75.33        anywhere             /* kube-system/kube-dns:dns-tcp */
    0     0 DNAT       tcp  --  any    any     anywhere             anywhere             /* kube-system/kube-dns:dns-tcp */ tcp to:192.168.75.33:53

Chain KUBE-SEP-7QBKTOBWZOW2ADYZ (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  all  --  any    any     10.163.148.196       anywhere             /* kube-system/glusterfs-dynamic-kube-registry-pvc: */
    0     0 DNAT       tcp  --  any    any     anywhere             anywhere             /* kube-system/glusterfs-dynamic-kube-registry-pvc: */ tcp to:10.163.148.196:1

Chain KUBE-SEP-DARQFIU6CIZ6DHSZ (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  all  --  any    any     10.163.148.198       anywhere             /* kube-system/glusterfs-dynamic-kube-registry-pvc: */
    0     0 DNAT       tcp  --  any    any     anywhere             anywhere             /* kube-system/glusterfs-dynamic-kube-registry-pvc: */ tcp to:10.163.148.198:1

Chain KUBE-SEP-FMM5BAXI5QDNGXPJ (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  all  --  any    any     10.163.148.196       anywhere             /* default/kubernetes:https */
    0     0 DNAT       tcp  --  any    any     anywhere             anywhere             /* default/kubernetes:https */ recent: SET name: KUBE-SEP-FMM5BAXI5QDNGXPJ side: source mask: 255.255.255.255 tcp to:10.163.148.196:6443

Chain KUBE-SEP-KJX7S6NVUIOUABFE (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  all  --  any    any     10.163.148.196       anywhere             /* kube-system/canal-etcd: */
    0     0 DNAT       tcp  --  any    any     anywhere             anywhere             /* kube-system/canal-etcd: */ tcp to:10.163.148.196:6666

Chain KUBE-SEP-KMRLJBMSXVC225LD (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  all  --  any    any     192.168.45.37        anywhere             /* kube-system/kubernetes-dashboard: */
    0     0 DNAT       tcp  --  any    any     anywhere             anywhere             /* kube-system/kubernetes-dashboard: */ tcp to:192.168.45.37:9090

Chain KUBE-SEP-KXX2UKHAML22525B (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  all  --  any    any     10.163.148.197       anywhere             /* kube-system/glusterfs-dynamic-kube-registry-pvc: */
    0     0 DNAT       tcp  --  any    any     anywhere             anywhere             /* kube-system/glusterfs-dynamic-kube-registry-pvc: */ tcp to:10.163.148.197:1

Chain KUBE-SEP-NCDBIYVKEUM6V7JV (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  all  --  any    any     192.168.75.33        anywhere             /* kube-system/kube-dns:dns */
    0     0 DNAT       udp  --  any    any     anywhere             anywhere             /* kube-system/kube-dns:dns */ udp to:192.168.75.33:53

Chain KUBE-SEP-YSVAKJLNBVVBENUI (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  all  --  any    any     192.168.75.32        anywhere             /* kube-system/kube-registry:registry */
    0     0 DNAT       tcp  --  any    any     anywhere             anywhere             /* kube-system/kube-registry:registry */ tcp to:192.168.75.32:5000

Chain KUBE-SERVICES (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  tcp  --  any    any    !192.168.0.0/16       10.96.0.1            /* default/kubernetes:https cluster IP */ tcp dpt:https
    0     0 KUBE-SVC-NPX46M4PTMTKRN6Y  tcp  --  any    any     anywhere             10.96.0.1            /* default/kubernetes:https cluster IP */ tcp dpt:https
    0     0 KUBE-MARK-MASQ  udp  --  any    any    !192.168.0.0/16       10.96.0.10           /* kube-system/kube-dns:dns cluster IP */ udp dpt:domain
    0     0 KUBE-SVC-TCOU7JCQXEZGVUNU  udp  --  any    any     anywhere             10.96.0.10           /* kube-system/kube-dns:dns cluster IP */ udp dpt:domain
    0     0 KUBE-MARK-MASQ  tcp  --  any    any    !192.168.0.0/16       10.100.57.109        /* kube-system/kube-registry:registry cluster IP */ tcp dpt:5000
    0     0 KUBE-SVC-JV2WR75K33AEZUK7  tcp  --  any    any     anywhere             10.100.57.109        /* kube-system/kube-registry:registry cluster IP */ tcp dpt:5000
    0     0 KUBE-MARK-MASQ  tcp  --  any    any    !192.168.0.0/16       10.110.206.254       /* kube-system/kubernetes-dashboard: cluster IP */ tcp dpt:http
    0     0 KUBE-SVC-XGLOHA7QRQ3V22RZ  tcp  --  any    any     anywhere             10.110.206.254       /* kube-system/kubernetes-dashboard: cluster IP */ tcp dpt:http
    0     0 KUBE-MARK-MASQ  tcp  --  any    any    !192.168.0.0/16       10.96.232.136        /* kube-system/canal-etcd: cluster IP */ tcp dpt:6666
    0     0 KUBE-SVC-KWJORWLCTF22FLD3  tcp  --  any    any     anywhere             10.96.232.136        /* kube-system/canal-etcd: cluster IP */ tcp dpt:6666
    0     0 KUBE-MARK-MASQ  tcp  --  any    any    !192.168.0.0/16       10.106.192.243       /* kube-system/glusterfs-dynamic-kube-registry-pvc: cluster IP */ tcp dpt:tcpmux
    0     0 KUBE-SVC-E66MHSUH4AYEXSQE  tcp  --  any    any     anywhere             10.106.192.243       /* kube-system/glusterfs-dynamic-kube-registry-pvc: cluster IP */ tcp dpt:tcpmux
    0     0 KUBE-MARK-MASQ  tcp  --  any    any    !192.168.0.0/16       10.96.0.10           /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:domain
    0     0 KUBE-SVC-ERIFXISQEP7F7OF4  tcp  --  any    any     anywhere             10.96.0.10           /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:domain
    1    60 KUBE-NODEPORTS  all  --  any    any     anywhere             anywhere             /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL

Chain KUBE-SVC-E66MHSUH4AYEXSQE (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-SEP-7QBKTOBWZOW2ADYZ  all  --  any    any     anywhere             anywhere             /* kube-system/glusterfs-dynamic-kube-registry-pvc: */ statistic mode random probability 0.33332999982
    0     0 KUBE-SEP-KXX2UKHAML22525B  all  --  any    any     anywhere             anywhere             /* kube-system/glusterfs-dynamic-kube-registry-pvc: */ statistic mode random probability 0.50000000000
    0     0 KUBE-SEP-DARQFIU6CIZ6DHSZ  all  --  any    any     anywhere             anywhere             /* kube-system/glusterfs-dynamic-kube-registry-pvc: */

Chain KUBE-SVC-ERIFXISQEP7F7OF4 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-SEP-4U6BTAJCDMHBCNTE  all  --  any    any     anywhere             anywhere             /* kube-system/kube-dns:dns-tcp */

Chain KUBE-SVC-JV2WR75K33AEZUK7 (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-SEP-YSVAKJLNBVVBENUI  all  --  any    any     anywhere             anywhere             /* kube-system/kube-registry:registry */

Chain KUBE-SVC-KWJORWLCTF22FLD3 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-SEP-KJX7S6NVUIOUABFE  all  --  any    any     anywhere             anywhere             /* kube-system/canal-etcd: */

Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-SEP-FMM5BAXI5QDNGXPJ  all  --  any    any     anywhere             anywhere             /* default/kubernetes:https */ recent: CHECK seconds: 10800 reap name: KUBE-SEP-FMM5BAXI5QDNGXPJ side: source mask: 255.255.255.255
    0     0 KUBE-SEP-FMM5BAXI5QDNGXPJ  all  --  any    any     anywhere             anywhere             /* default/kubernetes:https */

Chain KUBE-SVC-TCOU7JCQXEZGVUNU (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-SEP-NCDBIYVKEUM6V7JV  all  --  any    any     anywhere             anywhere             /* kube-system/kube-dns:dns */

Chain KUBE-SVC-XGLOHA7QRQ3V22RZ (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-SEP-KMRLJBMSXVC225LD  all  --  any    any     anywhere             anywhere             /* kube-system/kubernetes-dashboard: */

Chain felix-FIP-DNAT (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain felix-FIP-SNAT (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain felix-POSTROUTING (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    2   144 felix-FIP-SNAT  all  --  any    any     anywhere             anywhere            

Chain felix-PREROUTING (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    1    60 felix-FIP-DNAT  all  --  any    any     anywhere             anywhere            
iptables filter table
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
  627  238K felix-INPUT  all  --  any    any     anywhere             anywhere            
22383 5721K KUBE-FIREWALL  all  --  any    any     anywhere             anywhere            
    7   448 f2b-sshd   tcp  --  any    any     anywhere             anywhere             multiport dports ssh
22421 5723K ufw-before-logging-input  all  --  any    any     anywhere             anywhere            
22421 5723K ufw-before-input  all  --  any    any     anywhere             anywhere            
11828  769K ufw-after-input  all  --  any    any     anywhere             anywhere            
10228  619K ufw-after-logging-input  all  --  any    any     anywhere             anywhere            
10228  619K ufw-reject-input  all  --  any    any     anywhere             anywhere            
10228  619K ufw-track-input  all  --  any    any     anywhere             anywhere            

Chain FORWARD (policy DROP 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 felix-FORWARD  all  --  any    any     anywhere             anywhere            
    0     0 DOCKER-ISOLATION  all  --  any    any     anywhere             anywhere            
    0     0 DOCKER     all  --  any    docker0  anywhere             anywhere            
    0     0 ACCEPT     all  --  any    docker0  anywhere             anywhere             ctstate RELATED,ESTABLISHED
    0     0 ACCEPT     all  --  docker0 !docker0  anywhere             anywhere            
    0     0 ACCEPT     all  --  docker0 docker0  anywhere             anywhere            
    0     0 ufw-before-logging-forward  all  --  any    any     anywhere             anywhere            
    0     0 ufw-before-forward  all  --  any    any     anywhere             anywhere            
    0     0 ufw-after-forward  all  --  any    any     anywhere             anywhere            
    0     0 ufw-after-logging-forward  all  --  any    any     anywhere             anywhere            
    0     0 ufw-reject-forward  all  --  any    any     anywhere             anywhere            
    0     0 ufw-track-forward  all  --  any    any     anywhere             anywhere            

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
  603  271K felix-OUTPUT  all  --  any    any     anywhere             anywhere            
12066 8724K KUBE-SERVICES  all  --  any    any     anywhere             anywhere             /* kubernetes service portals */
11638 5267K KUBE-FIREWALL  all  --  any    any     anywhere             anywhere            
11674 5269K ufw-before-logging-output  all  --  any    any     anywhere             anywhere            
11674 5269K ufw-before-output  all  --  any    any     anywhere             anywhere            
 1219 81378 ufw-after-output  all  --  any    any     anywhere             anywhere            
 1219 81378 ufw-after-logging-output  all  --  any    any     anywhere             anywhere            
 1219 81378 ufw-reject-output  all  --  any    any     anywhere             anywhere            
 1219 81378 ufw-track-output  all  --  any    any     anywhere             anywhere            

Chain DOCKER (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain DOCKER-ISOLATION (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 RETURN     all  --  any    any     anywhere             anywhere            

Chain KUBE-FIREWALL (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 DROP       all  --  any    any     anywhere             anywhere             /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000

Chain KUBE-SERVICES (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain f2b-sshd (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    7   448 RETURN     all  --  any    any     anywhere             anywhere            

Chain felix-FAILSAFE-IN (0 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     tcp  --  any    any     anywhere             anywhere             tcp dpt:ssh

Chain felix-FAILSAFE-OUT (0 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     tcp  --  any    any     anywhere             anywhere             tcp dpt:2379
    0     0 ACCEPT     tcp  --  any    any     anywhere             anywhere             tcp dpt:2380
    0     0 ACCEPT     tcp  --  any    any     anywhere             anywhere             tcp dpt:4001
    0     0 ACCEPT     tcp  --  any    any     anywhere             anywhere             tcp dpt:afs3-callback

Chain felix-FORWARD (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 DROP       all  --  cali+  any     anywhere             anywhere             ctstate INVALID
    0     0 DROP       all  --  any    cali+   anywhere             anywhere             ctstate INVALID
    0     0 ACCEPT     all  --  cali+  any     anywhere             anywhere             ctstate RELATED,ESTABLISHED
    0     0 ACCEPT     all  --  any    cali+   anywhere             anywhere             ctstate RELATED,ESTABLISHED
    0     0 felix-FROM-ENDPOINT  all  --  cali+  any     anywhere             anywhere            
    0     0 felix-TO-ENDPOINT  all  --  any    cali+   anywhere             anywhere            
    0     0 ACCEPT     all  --  cali+  any     anywhere             anywhere            
    0     0 ACCEPT     all  --  any    cali+   anywhere             anywhere            

Chain felix-FROM-ENDPOINT (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 felix-from-2364763ca74  all  --  cali2364763ca74 any     anywhere             anywhere            [goto] 
    0     0 felix-from-428849870ea  all  --  cali428849870ea any     anywhere             anywhere            [goto] 
    0     0 DROP       all  --  any    any     anywhere             anywhere             /* From unknown endpoint */

Chain felix-FROM-HOST-IF (1 references)
 pkts bytes target     prot opt in     out     source               destination         
   11   660 RETURN     all  --  any    any     anywhere             anywhere             /* Unknown interface, return */

Chain felix-INPUT (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 DROP       all  --  any    any     anywhere             anywhere             ctstate INVALID
  616  237K ACCEPT     all  --  any    any     anywhere             anywhere             ctstate RELATED,ESTABLISHED
   11   660 felix-FROM-HOST-IF  all  --  !cali+ any     anywhere             anywhere            [goto] 
    0     0 ACCEPT     udp  --  any    any     anywhere             anywhere             udp spt:bootpc dpt:bootps
    0     0 ACCEPT     udp  --  any    any     anywhere             anywhere             udp dpt:domain
    0     0 felix-FROM-ENDPOINT  all  --  any    any     anywhere             anywhere            

Chain felix-OUTPUT (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 DROP       all  --  any    any     anywhere             anywhere             ctstate INVALID
  603  271K ACCEPT     all  --  any    any     anywhere             anywhere             ctstate RELATED,ESTABLISHED
    0     0 felix-TO-HOST-IF  all  --  any    !cali+  anywhere             anywhere            [goto] 

Chain felix-TO-ENDPOINT (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 felix-to-2364763ca74  all  --  any    cali2364763ca74  anywhere             anywhere            [goto] 
    0     0 felix-to-428849870ea  all  --  any    cali428849870ea  anywhere             anywhere            [goto] 
    0     0 DROP       all  --  any    any     anywhere             anywhere             /* To unknown endpoint */

Chain felix-TO-HOST-IF (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 RETURN     all  --  any    any     anywhere             anywhere             /* Unknown interface, return */

Chain felix-from-2364763ca74 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 MARK       all  --  any    any     anywhere             anywhere             MARK and 0xfeffffff
    0     0 DROP       all  --  any    any     anywhere             anywhere             MAC ! 2A:E8:7F:82:AA:B5 /* Incorrect source MAC */
    0     0 MARK       all  --  any    any     anywhere             anywhere             /* Start of tier k8s-network-policy */ MARK and 0xfdffffff
    0     0 felix-p-_3806fbfa06d1365-o  all  --  any    any     anywhere             anywhere             mark match 0x0/0x2000000
    0     0 RETURN     all  --  any    any     anywhere             anywhere             mark match 0x1000000/0x1000000 /* Return if policy accepted */
    0     0 DROP       all  --  any    any     anywhere             anywhere             mark match 0x0/0x2000000 /* Drop if no policy in tier passed */
    0     0 felix-p-_0f05888047b5982-o  all  --  any    any     anywhere             anywhere            
    0     0 RETURN     all  --  any    any     anywhere             anywhere             mark match 0x1000000/0x1000000 /* Profile accepted packet */
    0     0 DROP       all  --  any    any     anywhere             anywhere             /* Packet did not match any profile (endpoint eth0) */

Chain felix-from-428849870ea (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 MARK       all  --  any    any     anywhere             anywhere             MARK and 0xfeffffff
    0     0 DROP       all  --  any    any     anywhere             anywhere             MAC ! E6:76:86:8B:42:2A /* Incorrect source MAC */
    0     0 MARK       all  --  any    any     anywhere             anywhere             /* Start of tier k8s-network-policy */ MARK and 0xfdffffff
    0     0 felix-p-_3806fbfa06d1365-o  all  --  any    any     anywhere             anywhere             mark match 0x0/0x2000000
    0     0 RETURN     all  --  any    any     anywhere             anywhere             mark match 0x1000000/0x1000000 /* Return if policy accepted */
    0     0 DROP       all  --  any    any     anywhere             anywhere             mark match 0x0/0x2000000 /* Drop if no policy in tier passed */
    0     0 felix-p-_0f05888047b5982-o  all  --  any    any     anywhere             anywhere            
    0     0 RETURN     all  --  any    any     anywhere             anywhere             mark match 0x1000000/0x1000000 /* Profile accepted packet */
    0     0 DROP       all  --  any    any     anywhere             anywhere             /* Packet did not match any profile (endpoint eth0) */

Chain felix-p-_0f05888047b5982-i (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 MARK       all  --  any    any     anywhere             anywhere             MARK or 0x1000000
    0     0 RETURN     all  --  any    any     anywhere             anywhere             mark match 0x1000000/0x1000000

Chain felix-p-_0f05888047b5982-o (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 MARK       all  --  any    any     anywhere             anywhere             MARK or 0x1000000
    0     0 RETURN     all  --  any    any     anywhere             anywhere             mark match 0x1000000/0x1000000

Chain felix-p-_3806fbfa06d1365-i (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 MARK       all  --  any    any     anywhere             anywhere             MARK or 0x2000000
    0     0 RETURN     all  --  any    any     anywhere             anywhere             mark match 0x2000000/0x2000000

Chain felix-p-_3806fbfa06d1365-o (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 MARK       all  --  any    any     anywhere             anywhere             MARK or 0x2000000
    0     0 RETURN     all  --  any    any     anywhere             anywhere             mark match 0x2000000/0x2000000

Chain felix-to-2364763ca74 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 MARK       all  --  any    any     anywhere             anywhere             MARK and 0xfeffffff
    0     0 MARK       all  --  any    any     anywhere             anywhere             /* Start of tier k8s-network-policy */ MARK and 0xfdffffff
    0     0 felix-p-_3806fbfa06d1365-i  all  --  any    any     anywhere             anywhere             mark match 0x0/0x2000000
    0     0 RETURN     all  --  any    any     anywhere             anywhere             mark match 0x1000000/0x1000000 /* Return if policy accepted */
    0     0 DROP       all  --  any    any     anywhere             anywhere             mark match 0x0/0x2000000 /* Drop if no policy in tier passed */
    0     0 felix-p-_0f05888047b5982-i  all  --  any    any     anywhere             anywhere            
    0     0 RETURN     all  --  any    any     anywhere             anywhere             mark match 0x1000000/0x1000000 /* Profile accepted packet */
    0     0 DROP       all  --  any    any     anywhere             anywhere             /* Packet did not match any profile (endpoint eth0) */

Chain felix-to-428849870ea (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 MARK       all  --  any    any     anywhere             anywhere             MARK and 0xfeffffff
    0     0 MARK       all  --  any    any     anywhere             anywhere             /* Start of tier k8s-network-policy */ MARK and 0xfdffffff
    0     0 felix-p-_3806fbfa06d1365-i  all  --  any    any     anywhere             anywhere             mark match 0x0/0x2000000
    0     0 RETURN     all  --  any    any     anywhere             anywhere             mark match 0x1000000/0x1000000 /* Return if policy accepted */
    0     0 DROP       all  --  any    any     anywhere             anywhere             mark match 0x0/0x2000000 /* Drop if no policy in tier passed */
    0     0 felix-p-_0f05888047b5982-i  all  --  any    any     anywhere             anywhere            
    0     0 RETURN     all  --  any    any     anywhere             anywhere             mark match 0x1000000/0x1000000 /* Profile accepted packet */
    0     0 DROP       all  --  any    any     anywhere             anywhere             /* Packet did not match any profile (endpoint eth0) */

Chain ufw-after-forward (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain ufw-after-input (1 references)
 pkts bytes target     prot opt in     out     source               destination         
 1187 94602 ufw-skip-to-policy-input  udp  --  any    any     anywhere             anywhere             udp dpt:netbios-ns
  103 23080 ufw-skip-to-policy-input  udp  --  any    any     anywhere             anywhere             udp dpt:netbios-dgm
    0     0 ufw-skip-to-policy-input  tcp  --  any    any     anywhere             anywhere             tcp dpt:netbios-ssn
    0     0 ufw-skip-to-policy-input  tcp  --  any    any     anywhere             anywhere             tcp dpt:microsoft-ds
   11  3719 ufw-skip-to-policy-input  udp  --  any    any     anywhere             anywhere             udp dpt:bootps
    0     0 ufw-skip-to-policy-input  udp  --  any    any     anywhere             anywhere             udp dpt:bootpc
    0     0 ufw-skip-to-policy-input  all  --  any    any     anywhere             anywhere             ADDRTYPE match dst-type BROADCAST

Chain ufw-after-logging-forward (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain ufw-after-logging-input (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain ufw-after-logging-output (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain ufw-after-output (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain ufw-before-forward (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     all  --  any    any     anywhere             anywhere             ctstate RELATED,ESTABLISHED
    0     0 ACCEPT     icmp --  any    any     anywhere             anywhere             icmp destination-unreachable
    0     0 ACCEPT     icmp --  any    any     anywhere             anywhere             icmp source-quench
    0     0 ACCEPT     icmp --  any    any     anywhere             anywhere             icmp time-exceeded
    0     0 ACCEPT     icmp --  any    any     anywhere             anywhere             icmp parameter-problem
    0     0 ACCEPT     icmp --  any    any     anywhere             anywhere             icmp echo-request
    0     0 ufw-user-forward  all  --  any    any     anywhere             anywhere            

Chain ufw-before-input (1 references)
 pkts bytes target     prot opt in     out     source               destination         
 3444  207K ACCEPT     all  --  lo     any     anywhere             anywhere            
    0     0 ACCEPT     all  --  any    any     anywhere             anywhere             ctstate RELATED,ESTABLISHED
    0     0 ufw-logging-deny  all  --  any    any     anywhere             anywhere             ctstate INVALID
    0     0 DROP       all  --  any    any     anywhere             anywhere             ctstate INVALID
    0     0 ACCEPT     icmp --  any    any     anywhere             anywhere             icmp destination-unreachable
    0     0 ACCEPT     icmp --  any    any     anywhere             anywhere             icmp source-quench
    0     0 ACCEPT     icmp --  any    any     anywhere             anywhere             icmp time-exceeded
    0     0 ACCEPT     icmp --  any    any     anywhere             anywhere             icmp parameter-problem
    0     0 ACCEPT     icmp --  any    any     anywhere             anywhere             icmp echo-request
    0     0 ACCEPT     udp  --  any    any     anywhere             anywhere             udp spt:bootps dpt:bootpc
 9181  598K ufw-not-local  all  --  any    any     anywhere             anywhere            
    0     0 ACCEPT     udp  --  any    any     anywhere             224.0.0.251          udp dpt:mdns
    0     0 ACCEPT     udp  --  any    any     anywhere             239.255.255.250      udp dpt:1900
 9181  598K ufw-user-input  all  --  any    any     anywhere             anywhere            

Chain ufw-before-logging-forward (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain ufw-before-logging-input (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain ufw-before-logging-output (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain ufw-before-output (1 references)
 pkts bytes target     prot opt in     out     source               destination         
 3375  203K ACCEPT     all  --  any    lo      anywhere             anywhere            
    0     0 ACCEPT     all  --  any    any     anywhere             anywhere             ctstate RELATED,ESTABLISHED
  584 41189 ufw-user-output  all  --  any    any     anywhere             anywhere            

Chain ufw-logging-allow (0 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain ufw-logging-deny (2 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain ufw-not-local (1 references)
 pkts bytes target     prot opt in     out     source               destination         
 7880  476K RETURN     all  --  any    any     anywhere             anywhere             ADDRTYPE match dst-type LOCAL
    0     0 RETURN     all  --  any    any     anywhere             anywhere             ADDRTYPE match dst-type MULTICAST
 1301  121K RETURN     all  --  any    any     anywhere             anywhere             ADDRTYPE match dst-type BROADCAST
    0     0 ufw-logging-deny  all  --  any    any     anywhere             anywhere             limit: avg 3/min burst 10
    0     0 DROP       all  --  any    any     anywhere             anywhere            

Chain ufw-reject-forward (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain ufw-reject-input (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain ufw-reject-output (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain ufw-skip-to-policy-forward (0 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 DROP       all  --  any    any     anywhere             anywhere            

Chain ufw-skip-to-policy-input (7 references)
 pkts bytes target     prot opt in     out     source               destination         
 1301  121K ACCEPT     all  --  any    any     anywhere             anywhere            

Chain ufw-skip-to-policy-output (0 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     all  --  any    any     anywhere             anywhere            

Chain ufw-track-forward (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain ufw-track-input (1 references)
 pkts bytes target     prot opt in     out     source               destination         
 7842  471K ACCEPT     tcp  --  any    any     anywhere             anywhere             ctstate NEW
   36  5577 ACCEPT     udp  --  any    any     anywhere             anywhere             ctstate NEW

Chain ufw-track-output (1 references)
 pkts bytes target     prot opt in     out     source               destination         
  247 14820 ACCEPT     tcp  --  any    any     anywhere             anywhere             ctstate NEW
  337 26369 ACCEPT     udp  --  any    any     anywhere             anywhere             ctstate NEW

Chain ufw-user-forward (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain ufw-user-input (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    2   128 ACCEPT     tcp  --  any    any     anywhere             anywhere             tcp dpt:ssh

Chain ufw-user-limit (0 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 REJECT     all  --  any    any     anywhere             anywhere             reject-with icmp-port-unreachable

Chain ufw-user-limit-accept (0 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     all  --  any    any     anywhere             anywhere            

Chain ufw-user-logging-forward (0 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 RETURN     all  --  any    any     anywhere             anywhere            

Chain ufw-user-logging-input (0 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 RETURN     all  --  any    any     anywhere             anywhere            

Chain ufw-user-logging-output (0 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 RETURN     all  --  any    any     anywhere             anywhere            

Chain ufw-user-output (1 references)
 pkts bytes target     prot opt in     out     source               destination         
iptables FORWARD chain on ravi-kube196
deploy@ravi-kube196:~$ sudo iptables -t filter -v --line-numbers -L FORWARD
Chain FORWARD (policy DROP 0 packets, 0 bytes)
num   pkts bytes target     prot opt in     out     source               destination
1        0     0 felix-FORWARD  all  --  any    any     anywhere             anywhere
2      540 34911 DOCKER-ISOLATION  all  --  any    any     anywhere             anywhere
3        0     0 DOCKER     all  --  any    docker0  anywhere             anywhere
4        0     0 ACCEPT     all  --  any    docker0  anywhere             anywhere             ctstate RELATED,ESTABLISHED
5        0     0 ACCEPT     all  --  docker0 !docker0  anywhere             anywhere
6        0     0 ACCEPT     all  --  docker0 docker0  anywhere             anywhere
7      540 34911 ufw-before-logging-forward  all  --  any    any     anywhere             anywhere
8      540 34911 ufw-before-forward  all  --  any    any     anywhere             anywhere
9      495 30976 ufw-after-forward  all  --  any    any     anywhere             anywhere
10     495 30976 ufw-after-logging-forward  all  --  any    any     anywhere             anywhere
11     495 30976 ufw-reject-forward  all  --  any    any     anywhere             anywhere
12     495 30976 ufw-track-forward  all  --  any    any     anywhere             anywhere
iptables felix-FORWARD chain on ravi-kube196
deploy@ravi-kube196:~$ sudo iptables -t filter -v --line-numbers -L felix-FORWARD
Chain felix-FORWARD (1 references)
num   pkts bytes target     prot opt in     out     source               destination
1        0     0 DROP       all  --  cali+  any     anywhere             anywhere             ctstate INVALID
2        0     0 DROP       all  --  any    cali+   anywhere             anywhere             ctstate INVALID
3        0     0 ACCEPT     all  --  cali+  any     anywhere             anywhere             ctstate RELATED,ESTABLISHED
4        0     0 ACCEPT     all  --  any    cali+   anywhere             anywhere             ctstate RELATED,ESTABLISHED
5        0     0 felix-FROM-ENDPOINT  all  --  cali+  any     anywhere             anywhere
6        0     0 felix-TO-ENDPOINT  all  --  any    cali+   anywhere             anywhere
7        0     0 ACCEPT     all  --  cali+  any     anywhere             anywhere
8        0     0 ACCEPT     all  --  any    cali+   anywhere             anywhere
ravi-kube196 interfaces (the node from which I'm testing connectivity to the pod externally)
deploy@ravi-kube196:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:8a:2a:2f brd ff:ff:ff:ff:ff:ff
    inet 10.163.148.196/24 brd 10.163.148.255 scope global ens160
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe8a:2a2f/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:11:4d:a2:d0 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 46:59:e6:e9:27:7f brd ff:ff:ff:ff:ff:ff
    inet 192.168.44.0/16 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::4459:e6ff:fee9:277f/64 scope link
       valid_lft forever preferred_lft forever
ravi-kube198 interfaces (the node running the target pod)
deploy@ravi-kube198:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:8a:ee:fa brd ff:ff:ff:ff:ff:ff
    inet 10.163.148.198/24 brd 10.163.148.255 scope global ens160
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe8a:eefa/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:3e:9d:23:64 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether b2:b1:ab:e4:91:c1 brd ff:ff:ff:ff:ff:ff
    inet 192.168.50.0/16 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::b0b1:abff:fee4:91c1/64 scope link
       valid_lft forever preferred_lft forever
7: cali6b7c7fd87ef@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether 6a:bd:d6:bf:12:e4 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::68bd:d6ff:febf:12e4/64 scope link
       valid_lft forever preferred_lft forever

I originally raised this with Kubernetes (kubernetes/kubernetes#39658), but I now think this is a Canal-specific issue.

cni network type is configured as calico but cni.go is looking for loopback binary

Hello,
I did a fresh installation of the Canal CNI plugin. Here is the file that gets placed in /etc/cni/net.d:

/etc/cni/net.d/10-canal.conf 
{
    "name": "canal",
    "type": "flannel",
    "delegate": {
      "type": "calico",
      "etcd_endpoints": "http://10.57.32.11:6666",
      "log_level": "debug",
      "policy": {
        "type": "k8s",
         "k8s_api_root": "https://10.57.32.1:443",
         "k8s_auth_token": "...removed ..."
      },
      "kubernetes": {
        "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
      }
    }
}

So the type is clearly set to flannel, delegating to calico. Regardless of this config, cni.go generates an error because it is looking for the loopback binary. Could you confirm whether the loopback binary is required regardless of the plugin type selected? If so, which process is responsible for copying these binaries?
If not, why does CNI ignore the configured plugin type and keep asking for loopback?

Oct 13 19:23:49 k8s-1 kubelet: E1013 19:23:49.618529    2297 pod_workers.go:184] Error syncing pod fce376f4-9192-11e6-8a7a-525400008e75, skipping: failed to "SetupNetwork" for "kube-dns-3306409432-uhgs6_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kube-dns-3306409432-uhgs6_kube-system(fce376f4-9192-11e6-8a7a-525400008e75)\" using network plugins \"cni\": failed to find plugin \"loopback\" in path [/opt/loopback/bin /opt/cni/bin]; Skipping pod"
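For what it's worth, kubelet sets up each pod's loopback interface via the standalone loopback CNI plugin, independently of whichever network plugin is named in 10-canal.conf, so that binary does need to exist in the CNI bin path. A minimal sketch of one way to satisfy it, assuming the upstream containernetworking plugins release tarball (the version and paths below are illustrative, not taken from this report):

curl -L -o /tmp/cni-plugins.tgz \
  https://github.com/containernetworking/plugins/releases/download/v0.8.7/cni-plugins-linux-amd64-v0.8.7.tgz
sudo mkdir -p /opt/cni/bin
sudo tar -xzf /tmp/cni-plugins.tgz -C /opt/cni/bin
ls /opt/cni/bin | grep loopback    # the binary kubelet could not find should now be present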

Add some docs about issue submission and expected process

It would be nice to add some documentation about the issue submission process and what a submitter should expect.

Expected Behavior

There is documentation (or links to the projectcalico documentation) describing the issue submission process and the general expectations around it.

Current Behavior

I didn't find any such documentation.

Possible Solution

Some notes in the README, or links to the general projectcalico information on this.

Context

Submitted issue was missed initially, see #77 (comment).

calico cni (canal) connectivity issue

This issue has also been filed on the calico-cni repo: https://github.com/projectcalico/calico-cni/issues/183

Hello,
I am running Kubernetes 1.4.0 with the Canal CNI, and all of the required CNI pods are running (see the output below). I am observing a connectivity issue from a container running in a pod: it cannot even ping the local host's flannel interface. When I ping anything outside, for example another host's flannel interface, and run tcpdump on the cali414aa30554e interface, I do see the ICMP packets from the container, but the same tcpdump on the flannel interface does not show any packets at all. So it seems packets from cali414aa30554e are not getting forwarded to flannel.
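For reference, the checks described above boil down to roughly the following commands. The container ID, interface names, and the local flannel address are taken from the output further down; <remote-flannel-ip> is a placeholder for another host's flannel address, so treat this as a sketch rather than a transcript:

docker exec ac3262b64b17 ping -c 3 10.57.4.0              # local host's flannel.1 address: no reply
docker exec ac3262b64b17 ping -c 3 <remote-flannel-ip>    # another host's flannel address (placeholder)
sudo tcpdump -ni cali414aa30554e icmp                     # echo requests from the container are visible here
sudo tcpdump -ni flannel.1 icmp                           # ...but never appear here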

interfaces info from the pod:

docker exec ac3262b64b17 ip a
1: lo: mtu 65536 qdisc noqueue 
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host 
valid_lft forever preferred_lft forever
3: eth0@if6: mtu 8950 qdisc noqueue 
link/ether fe:7f:53:86:fc:b1 brd ff:ff:ff:ff:ff:ff
inet 10.57.4.2/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::fc7f:53ff:fe86:fcb1/64 scope link 
valid_lft forever preferred_lft forever

Routes from the host:

route

Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default gateway 0.0.0.0 UG 0 0 0 enp2s0f0
10.10.14.0 0.0.0.0 255.255.255.0 U 0 0 0 enp2s0f1
10.57.0.0 0.0.0.0 255.255.224.0 U 0 0 0 flannel.1
10.57.4.2 0.0.0.0 255.255.255.255 UH 0 0 0 cali414aa30554e
67.58.53.128 0.0.0.0 255.255.255.192 U 0 0 0 enp2s0f0
link-local 0.0.0.0 255.255.0.0 U 1002 0 0 enp2s0f0
link-local 0.0.0.0 255.255.0.0 U 1003 0 0 enp2s0f1
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0

Interfaces from the host:

ifconfig
cali414aa30554e: flags=4163 mtu 8950
inet6 fe80::7ccb:faff:fe05:def9 prefixlen 64 scopeid 0x20
ether 7e:cb:fa:05:de:f9 txqueuelen 0 (Ethernet)
RX packets 14373 bytes 1690455 (1.6 MiB)
RX errors 0 dropped 13575 overruns 0 frame 0
TX packets 11 bytes 774 (774.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

docker0: flags=4099 mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 0.0.0.0
ether 02:42:17:71:f4:55 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

enp2s0f0: flags=4163 mtu 9000
inet 67.58.53.133 netmask 255.255.255.192 broadcast 67.58.53.191
inet6 fe80::ec4:7aff:febb:c58c prefixlen 64 scopeid 0x20
ether 0c:c4:7a:bb:c5:8c txqueuelen 1000 (Ethernet)
RX packets 426243 bytes 191002762 (182.1 MiB)
RX errors 0 dropped 13571 overruns 0 frame 0
TX packets 226783 bytes 42266077 (40.3 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

enp2s0f1: flags=4163 mtu 9000
inet 10.10.14.123 netmask 255.255.255.0 broadcast 10.10.14.255
inet6 fe80::ec4:7aff:febb:c58d prefixlen 64 scopeid 0x20
ether 0c:c4:7a:bb:c5:8d txqueuelen 1000 (Ethernet)
RX packets 14373 bytes 1690455 (1.6 MiB)
RX errors 0 dropped 13575 overruns 0 frame 0
TX packets 11 bytes 774 (774.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

flannel.1: flags=4163 mtu 8950
inet 10.57.4.0 netmask 255.255.224.0 broadcast 0.0.0.0
inet6 fe80::88c4:8aff:fea5:d794 prefixlen 64 scopeid 0x20
ether 8a:c4:8a:a5:d7:94 txqueuelen 0 (Ethernet)
RX packets 7 bytes 532 (532.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 7 bytes 742 (742.0 B)
TX errors 0 dropped 8 overruns 0 carrier 0 collisions 0

lo: flags=73 mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10
loop txqueuelen 0 (Local Loopback)
RX packets 83 bytes 6816 (6.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 83 bytes 6816 (6.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
calico-policy-controller-n6gqq 1/1 Running 0 17h
canal-etcd-uf9wq 1/1 Running 0 17h
canal-node-fflk5 3/3 Running 0 17h
canal-node-g93oh 3/3 Running 0 17h
canal-node-k6l9s 3/3 Running 0 17h
canal-node-k8kno 3/3 Running 0 17h
canal-node-l5hta 3/3 Running 6 17h
canal-node-n7pie 3/3 Running 0 17h
canal-node-r0w4t 3/3 Running 0 17h
canal-node-tvars 3/3 Running 0 17h
canal-node-u1h1u 3/3 Running 0 17h
etcd-gateway.calit2.optiputer.net 1/1 Running 2 21h
kube-apiserver-gateway.calit2.optiputer.net 1/1 Running 1 21h
kube-controller-manager-gateway.calit2.optiputer.net 1/1 Running 2 21h
kube-discovery-982812725-0zfqe 1/1 Running 2 21h
kube-dns-v19-04ula 2/2 Running 0 2h
kube-proxy-amd64-f1hia 1/1 Running 3 21h
kube-proxy-amd64-gn0ki 1/1 Running 1 21h
kube-proxy-amd64-j661v 1/1 Running 1 21h
kube-proxy-amd64-jb870 1/1 Running 1 21h
kube-proxy-amd64-jgrs6 1/1 Running 1 21h
kube-proxy-amd64-pm1vm 1/1 Running 1 21h
kube-proxy-amd64-tjz57 1/1 Running 1 21h
kube-proxy-amd64-tmavq 1/1 Running 1 21h
kube-proxy-amd64-xiv40 1/1 Running 2 21h
kube-scheduler-gateway.calit2.optiputer.net 1/1 Running 2 21h
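
(Hedged diagnostic notes, not from the original report.) Since ICMP from the pod is visible on cali414aa30554e but never appears on flannel.1, the host's forwarding path between the two interfaces is the first thing to check; the commands below are generic assumptions about a canal host, not confirmed steps from this issue:

# Confirm the kernel forwards between interfaces
sysctl net.ipv4.ip_forward
# Look for DROP rules in the FORWARD chain and in the calico/felix chains
iptables -L FORWARD -n -v
iptables -S | grep -Ei 'cali|felix'
# Confirm the workload route points at the cali interface
ip route get 10.57.4.2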

Kubernetes master node is 'untainted' when applying Canal CNI

Expected Behavior

I'm running Kubernetes on a CentOS 7.x VM; the purpose is to create an AIO (all-in-one) node to run Kolla OpenStack images on. Typically, after I apply the canal.yaml CNI manifest, I then have to mark the master node as schedulable by 'untainting' it. This allows you to use Kubernetes as an AIO, creating the OpenStack services on the one node. The command is:
kubectl taint nodes --all=true node-role.kubernetes.io/master:NoSchedule-

Current Behavior

In the last few days I have seen that after applying canal the taint is removed; here's my log:

[rwellum@kolla-k8s k8s]$ kubectl get nodes
NAME STATUS AGE VERSION
kolla-k8s NotReady 33s v1.6.4

#Taint is there on initial bring-up:
[rwellum@kolla-k8s k8s]$ kubectl describe node kolla-k8s | grep -i taint
Taints: node-role.kubernetes.io/master:NoSchedule

[rwellum@kolla-k8s k8s]$ # Now I will apply canal.yaml and check the node again:
[rwellum@kolla-k8s k8s]$ kubectl describe node kolla-k8s | grep -i taint
Taints:
[rwellum@kolla-k8s k8s]$ # Weirdly the Taint is gone...

[rwellum@kolla-k8s k8s]$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system canal-jwnp8 3/3 Running 0 28s
kube-system etcd-kolla-k8s 1/1 Running 0 1m
kube-system kube-apiserver-kolla-k8s 1/1 Running 0 1m
kube-system kube-controller-manager-kolla-k8s 1/1 Running 0 1m
kube-system kube-dns-3913472980-6lpm5 0/3 Pending 0 1m
kube-system kube-proxy-w1kf3 1/1 Running 0 1m
kube-system kube-scheduler-kolla-k8s 1/1 Running 0 1m
[rwellum@kolla-k8s k8s]$

Possible Solution

Steps to Reproduce (for bugs)

  1. I am following this deployment guide: https://docs.openstack.org/developer/kolla-kubernetes/deployment-guide.html
  2. Before applying canal (kubectl apply -f canal.yaml) - as my log above shows - check the taint
  3. Apply canal and then check the taint again - it should be gone

Context

For me, as I am running an AIO, it has no effect other than a warning in my logs that the taint doesn't exist. However, for a multi-node deployment or a production deployment this would be an issue.
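
(A hedged workaround sketch, not part of the original report.) If the taint does get removed, it can be re-added manually after applying canal; the node name below is taken from the log above:

# Re-apply the master NoSchedule taint and verify it is back
kubectl taint nodes kolla-k8s node-role.kubernetes.io/master:NoSchedule
kubectl describe node kolla-k8s | grep -i taint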

Your Environment

Sync <1.6 yaml with the 1.6 yaml

The patch "Sync <1.6 yaml with the 1.6 yaml" broke the kolla-kubernetes gate. Reverting it fixed the issue. I think there may be an incompatibility between the canal containers and k8s 1.5?

Node cleanup on Kubernetes

I'm not sure if this is the correct place to report this since the issue is related to the calico/node DaemonSet installed by canal.

When increasing and decreasing our node count to handle load spikes, I found out that calico/node will fail if the node's IP address was already in use by a previous node. This leaves the newly added node broken.

My solution is to run this after draining and deleting a node from the cluster:

docker run --rm --net=host -e ETCD_ENDPOINTS=http://127.0.0.1:6666 calico/ctl node remove --hostname=NODE_HOSTNAME --remove-endpoints
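
(A hedged sketch, assuming the same local etcd proxy endpoint as above; the node name is a placeholder.) The same steps could be wrapped into a single decommissioning helper:

# Hypothetical node-decommission helper; NODE is the Kubernetes/Calico hostname
NODE=node-to-remove.example.internal
kubectl drain "$NODE" --ignore-daemonsets --force
kubectl delete node "$NODE"
docker run --rm --net=host -e ETCD_ENDPOINTS=http://127.0.0.1:6666 \
  calico/ctl node remove --hostname="$NODE" --remove-endpoints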

canal network running out of ip addresses

Hi,

So I decided to test canal out on our clusters this weekend and am running into a simple issue (which may be a configuration error), but I could not figure it out. Hoping to get some help here.

Symptoms:

Multiple restarts of the canal pods running on nodes (the pods running on master seem fine):

kubectl get po -n kube-system | grep canal                                                                                                              master ✭ ✱ ◼
canal-node-0dy92                                                       3/3       Running             22         23h
canal-node-ajf16                                                       3/3       Running             0          23h
canal-node-bhvvv                                                       3/3       Running             44         23h
canal-node-ewjhn                                                       3/3       Running             0          23h
canal-node-l60ze                                                       3/3       Running             69         23h
canal-node-ueo38                                                       3/3       Running             0          23h

Heapster and multiple other pods failing with error messages like:

  2h		14s		755	{kubelet ip-172-20-68-62.us-west-2.compute.internal}			Warning		FailedSync	Error syncing pod, skipping: failed to "SetupNetwork" for "heapster-x32bq_kube-system" with SetupNetworkError: "Failed to setup network for pod \"heapster-x32bq_kube-system(9815cd0c-f0dd-11e6-87d7-06cccf320c3d)\" using network plugins \"cni\": no IP addresses available in network: canal; Skipping pod"

Two out of three nodes are now marked by K8S as not ready and unreachable.

NAME                                           STATUS         AGE
ip-172-20-112-137.us-west-2.compute.internal   Ready,master   23h
ip-172-20-125-205.us-west-2.compute.internal   NotReady       23h
ip-172-20-43-216.us-west-2.compute.internal    NotReady       23h
ip-172-20-53-41.us-west-2.compute.internal     Ready,master   23h
ip-172-20-68-62.us-west-2.compute.internal     Ready          23h
ip-172-20-94-83.us-west-2.compute.internal     Ready,master   23h

Found the following logs looking through one of the canal pods with restarts:

2017-02-12 04:24:03,841 [INFO][40/13] calico.felix.fiptables 603: Transaction included a refresh, re-applying our inserts and deletions.
2017-02-12 04:24:14,690 [INFO][40/14] calico.felix.fiptables 547: Refreshing all our chains
2017-02-12 04:24:14,695 [INFO][40/14] calico.felix.fiptables 566: IptablesUpdater<v6-filter,queue_len=0,live=True,msg=None,time=9.492s> Successfully processed iptables updates.
2017-02-12 04:24:14,696 [INFO][40/14] calico.felix.fiptables 603: Transaction included a refresh, re-applying our inserts and deletions.
2017-02-12 04:24:19.169 [INFO][31] config_batcher.go 107: Sending config update global: map[ReportingIntervalSecs:0 ClusterGUID:bb235932a9e84c968d5d8d09b0cf7108 InterfacePrefix:cali LogSeverityFile:none LogSeverityScreen:info LogFilePath:none IpInIpEnabled:false], host: map[marker:created DefaultEndpointToHostAction:RETURN].
2017-02-12 04:24:19.169 [INFO][31] event_buffer.go 185: Possible config update. global=map[InterfacePrefix:cali LogSeverityFile:none LogSeverityScreen:info LogFilePath:none IpInIpEnabled:false ReportingIntervalSecs:0 ClusterGUID:bb235932a9e84c968d5d8d09b0cf7108] host=map[marker:created DefaultEndpointToHostAction:RETURN]
2017-02-12 04:24:19.169 [INFO][31] config_params.go 167: Merging in config from datastore (global): map[ClusterGUID:bb235932a9e84c968d5d8d09b0cf7108 InterfacePrefix:cali LogSeverityFile:none LogSeverityScreen:info LogFilePath:none IpInIpEnabled:false ReportingIntervalSecs:0]
2017-02-12 04:24:19.169 [INFO][31] config_params.go 209: Parsing value for FelixHostname: ip-172-20-94-83 (from environment variable)
2017-02-12 04:24:19.170 [INFO][31] config_params.go 241: Parsed value for FelixHostname: ip-172-20-94-83 (from environment variable)
2017-02-12 04:24:19.170 [INFO][31] config_params.go 209: Parsing value for EtcdCertFile:  (from environment variable)
2017-02-12 04:24:19.170 [INFO][31] config_params.go 241: Parsed value for EtcdCertFile:  (from environment variable)
2017-02-12 04:24:19.170 [INFO][31] config_params.go 209: Parsing value for EtcdScheme:  (from environment variable)
2017-02-12 04:24:19.170 [ERROR][31] config_params.go 226: Failed to parse config parameter EtcdScheme; value "": unknown option (source environment variable)
2017-02-12 04:24:19.170 [ERROR][31] config_params.go 233: Replacing invalid value with default value for EtcdScheme: http
2017-02-12 04:24:19.170 [INFO][31] config_params.go 241: Parsed value for EtcdScheme: http (from environment variable)
2017-02-12 04:24:19.170 [INFO][31] config_params.go 209: Parsing value for EtcdAddr:  (from environment variable)
2017-02-12 04:24:19.170 [ERROR][31] config_params.go 226: Failed to parse config parameter EtcdAddr; value "": invalid URL authority (source environment variable)
2017-02-12 04:24:19.170 [ERROR][31] config_params.go 233: Replacing invalid value with default value for EtcdAddr: 127.0.0.1:2379
2017-02-12 04:24:19.170 [INFO][31] config_params.go 241: Parsed value for EtcdAddr: 127.0.0.1:2379 (from environment variable)
2017-02-12 04:24:19.170 [INFO][31] config_params.go 209: Parsing value for EtcdKeyFile:  (from environment variable)
2017-02-12 04:24:19.170 [INFO][31] config_params.go 241: Parsed value for EtcdKeyFile:  (from environment variable)
2017-02-12 04:24:19.170 [INFO][31] config_params.go 209: Parsing value for EtcdEndpoints: http://etcd-a.internal.infra-test-debian-canal.our.domain:4001,http://etcd-b.internal.infra-test-debian-canal.our.domain:4001,http://etcd-c.internal.infra-test-debian-canal.our.domain:4001 (from environment variable)
2017-02-12 04:24:19.170 [INFO][31] config_params.go 241: Parsed value for EtcdEndpoints: [http://etcd-a.internal.infra-test-debian-canal.dev.datapipe.io:4001/ http://etcd-b.internal.infra-test-debian-canal.dev.datapipe.io:4001/ http://etcd-c.internal.infra-test-debian-canal.dev.datapipe.io:4001/] (from environment variable)
2017-02-12 04:24:19.170 [INFO][31] config_params.go 209: Parsing value for EtcdCaFile:  (from environment variable)
2017-02-12 04:24:19.170 [INFO][31] config_params.go 241: Parsed value for EtcdCaFile:  (from environment variable)
2017-02-12 04:24:19.171 [INFO][31] config_params.go 209: Parsing value for LogFilePath: None (from config file)
2017-02-12 04:24:19.171 [INFO][31] config_params.go 222: Value set to 'none', replacing with zero-value: "".
2017-02-12 04:24:19.171 [INFO][31] config_params.go 241: Parsed value for LogFilePath:  (from config file)
2017-02-12 04:24:19.171 [INFO][31] config_params.go 209: Parsing value for LogSeverityFile: None (from config file)
2017-02-12 04:24:19.171 [INFO][31] config_params.go 222: Value set to 'none', replacing with zero-value: "".
2017-02-12 04:24:19.171 [INFO][31] config_params.go 241: Parsed value for LogSeverityFile:  (from config file)
2017-02-12 04:24:19.171 [INFO][31] config_params.go 209: Parsing value for MetadataAddr: None (from config file)
2017-02-12 04:24:19.171 [INFO][31] config_params.go 222: Value set to 'none', replacing with zero-value: "".
2017-02-12 04:24:19.171 [INFO][31] config_params.go 241: Parsed value for MetadataAddr:  (from config file)
2017-02-12 04:24:19.171 [INFO][31] config_params.go 197: Ignoring unknown config param. raw name=marker
2017-02-12 04:24:19.171 [INFO][31] config_params.go 209: Parsing value for DefaultEndpointToHostAction: RETURN (from datastore (per-host))
2017-02-12 04:24:19.171 [INFO][31] config_params.go 241: Parsed value for DefaultEndpointToHostAction: RETURN (from datastore (per-host))
2017-02-12 04:24:19.171 [INFO][31] config_params.go 209: Parsing value for LogSeverityFile: none (from datastore (global))
2017-02-12 04:24:19.171 [INFO][31] config_params.go 222: Value set to 'none', replacing with zero-value: "".
2017-02-12 04:24:19.171 [INFO][31] config_params.go 241: Parsed value for LogSeverityFile:  (from datastore (global))
2017-02-12 04:24:19.171 [INFO][31] config_params.go 245: Skipping config value for LogSeverityFile from datastore (global); already have a value from config file
2017-02-12 04:24:19.171 [INFO][31] config_params.go 209: Parsing value for LogSeverityScreen: info (from datastore (global))
2017-02-12 04:24:19.171 [INFO][31] config_params.go 241: Parsed value for LogSeverityScreen: INFO (from datastore (global))
2017-02-12 04:24:19.172 [INFO][31] config_params.go 209: Parsing value for LogFilePath: none (from datastore (global))
2017-02-12 04:24:19.172 [INFO][31] config_params.go 222: Value set to 'none', replacing with zero-value: "".
2017-02-12 04:24:19.172 [INFO][31] config_params.go 241: Parsed value for LogFilePath:  (from datastore (global))
2017-02-12 04:24:19.172 [INFO][31] config_params.go 245: Skipping config value for LogFilePath from datastore (global); already have a value from config file
2017-02-12 04:24:19.172 [INFO][31] config_params.go 209: Parsing value for IpInIpEnabled: false (from datastore (global))
2017-02-12 04:24:19.172 [INFO][31] config_params.go 241: Parsed value for IpInIpEnabled: false (from datastore (global))
2017-02-12 04:24:19.172 [INFO][31] config_params.go 209: Parsing value for ReportingIntervalSecs: 0 (from datastore (global))
2017-02-12 04:24:19.172 [INFO][31] config_params.go 241: Parsed value for ReportingIntervalSecs: 0 (from datastore (global))

Environment:

Kubernetes Version: 1.4.8
Cluster Deployment Mechanism: https://github.com/kubernetes/kops with CNI networking enabled
Operating System: Debian GNU/Linux (Jessie) with https://github.com/kopeio/kubernetes-kernel
Canal Configuration Used: https://raw.githubusercontent.com/projectcalico/canal/master/k8s-install/canal.yaml (with ETCD)

Configuration:

kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: '{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"canal-config","namespace":"kube-system","creationTimestamp":null},"data":{"canal_iface":"","cni_network_config":"{\n    \"name\":
      \"canal\",\n    \"type\": \"flannel\",\n    \"delegate\": {\n      \"type\":
      \"calico\",\n      \"etcd_endpoints\": \"__ETCD_ENDPOINTS__\",\n      \"log_level\":
      \"info\",\n      \"policy\": {\n          \"type\": \"k8s\",\n          \"k8s_api_root\":
      \"https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__\",\n          \"k8s_auth_token\":
      \"__SERVICEACCOUNT_TOKEN__\"\n      },\n      \"kubernetes\": {\n          \"kubeconfig\":
      \"/etc/cni/net.d/__KUBECONFIG_FILENAME__\"\n      }\n    }\n}","etcd_endpoints":"http://etcd-a.internal.infra-test-debian-canal.our.domain:4001,http://etcd-b.internal.infra-test-debian-canal.our.domain:4001,http://etcd-c.internal.infra-test-debian-canal.dev.our.domain:4001","masquerade":"true"}}'
  creationTimestamp: 2017-02-11T07:35:10Z
  name: canal-config
  namespace: kube-system
  resourceVersion: "281"
  selfLink: /api/v1/namespaces/kube-system/configmaps/canal-config
  uid: 9e7b11d6-f02c-11e6-9fff-0a86d4ab4d3f
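
(A hedged workaround sketch, not from the original report.) The "no IP addresses available in network: canal" message is produced by the host-local IPAM plugin, which flannel configures for its delegate from the node's subnet lease in this manifest; if pods are torn down uncleanly, their reservations under /var/lib/cni/networks/canal can leak until the subnet is exhausted. Assuming Docker as the runtime, a cleanup on an affected node might look like:

# Hypothetical cleanup of leaked host-local IPAM reservations (run as root on the node)
cd /var/lib/cni/networks/canal
for f in $(ls | grep -E '^[0-9]+\.'); do
  cid=$(cat "$f")
  # Release the address if the recorded container no longer exists
  docker inspect "$cid" >/dev/null 2>&1 || { echo "releasing stale $f ($cid)"; rm -f "$f"; }
done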

Remote node flannel failed to operate

Expected Behavior

Successful canal pod

Current Behavior

kubectl get po -n kube-system -o wide
NAME                                   READY     STATUS             RESTARTS   AGE       IP             NODE
canal-7pplx                            2/3       CrashLoopBackOff   12         46m       172.17.8.102   172.17.8.102
canal-dwfp8                            2/3       CrashLoopBackOff   12         46m       172.17.8.103   172.17.8.103
canal-l84s1                            3/3       Running            0          46m       172.17.8.101   172.17.8.101
kube-apiserver-172.17.8.101            1/1       Running            0          1h        172.17.8.101   172.17.8.101
kube-controller-manager-172.17.8.101   1/1       Running            0          1h        172.17.8.101   172.17.8.101
kube-proxy-172.17.8.101                1/1       Running            0          1h        172.17.8.101   172.17.8.101
kube-proxy-172.17.8.102                1/1       Running            0          1h        172.17.8.102   172.17.8.102
kube-proxy-172.17.8.103                1/1       Running            0          1h        172.17.8.103   172.17.8.103
kube-scheduler-172.17.8.101            1/1       Running            0          1h        172.17.8.101   172.17.8.101

Steps to Reproduce (for bugs)

  1. coreos + k8s cluster install
  2. kubelet configuration (--cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --network-plugin=cni )
  3. controller manager configuration (--cluster-cidr=10.244.0.0/16 --allocate-node-cidrs=true)
  4. wget https://raw.githubusercontent.com/projectcalico/canal/master/k8s-install/canal.yaml
  5. kubectl apply -f canal.yaml

Context

Tested with local VirtualBox.
The canal pod on the same node as the apiserver runs normally (3/3 Running).
Canal pods running on the remote nodes fail (2/3 CrashLoopBackOff).
Looking at the flannel log, the output looks like this:

kubectl logs -f canal-dwfp8 -n kube-system -c kube-flannel
I0712 06:20:19.618913       1 main.go:459] Using interface with name eth1 and address 172.17.8.103
I0712 06:20:19.619093       1 main.go:476] Defaulting external address to interface address (172.17.8.103)
E0712 06:20:49.622290       1 main.go:223] Failed to create SubnetManager: error retrieving pod spec for 'kube-system/canal-dwfp8': Get https://10.3.0.1:443/api/v1/namespaces/kube-system/pods/canal-dwfp8: dial tcp 10.3.0.1:443: i/o timeout

Changing canal.yaml ("k8s_api_root": "https://172.17.8.101:443") gives the same result.
There is no overlay network yet when flannel is starting up.
So how is flannel supposed to communicate with the 10.3.0.1 address?

Your Environment

  • Vagrant + Virtualbox (MASTER: 172.17.8.101, WORKER: 172.17.8.102, 172.17.8.103)
  • Calico version: 1.2.1
  • Flannel version: 0.8.0
  • Orchestrator version: k8s 1.6.4 (no rbac mode)
  • Operating System and version: Container Linux by CoreOS 1437.0.0
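
(A hedged note, not from the original report.) 10.3.0.1 is the cluster's kubernetes Service VIP; it is normally implemented by kube-proxy's iptables DNAT rules on each node rather than by the overlay, so flannel can reach it before any overlay exists, provided kube-proxy is healthy and the node can reach the apiserver's real address. Some checks one might run on a failing worker (assumptions, not confirmed steps):

# Is the Service VIP programmed by kube-proxy on this node?
iptables -t nat -S KUBE-SERVICES | grep 10.3.0.1
# What real endpoint does the VIP map to? (run from a node that can reach the apiserver)
kubectl get endpoints kubernetes -o wide
# Can the worker reach the apiserver's real address directly?
curl -k https://172.17.8.101:443/version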

kube-dns never in ready state with etcd-tls (always 2/3, 1/3)

When creating canal with canal_etcd_tls.yaml, the kube-dns pod never reaches the ready state, only 2/3 or 1/3.
When deploying canal without TLS, everything works fine.
We have a 2-node etcd cluster running outside of Kubernetes.

kubelet version: 1.6.6
docker version: 1.12

kubectl logs kube-dns-692378583-xdxkh kubedns -n kube-system output:

I0706 09:52:22.211047       1 dns.go:48] version: 1.14.1-16-gff416ee
I0706 09:52:22.212257       1 server.go:70] Using configuration read from directory: /kube-dns-config with period 10s
I0706 09:52:22.212432       1 server.go:113] FLAG: --alsologtostderr="false"
I0706 09:52:22.212452       1 server.go:113] FLAG: --config-dir="/kube-dns-config"
I0706 09:52:22.212472       1 server.go:113] FLAG: --config-map=""
I0706 09:52:22.212505       1 server.go:113] FLAG: --config-map-namespace="kube-system"
I0706 09:52:22.212523       1 server.go:113] FLAG: --config-period="10s"
I0706 09:52:22.212535       1 server.go:113] FLAG: --dns-bind-address="0.0.0.0"
I0706 09:52:22.212547       1 server.go:113] FLAG: --dns-port="10053"
I0706 09:52:22.212565       1 server.go:113] FLAG: --domain="cluster.local."
I0706 09:52:22.212623       1 server.go:113] FLAG: --federations=""
I0706 09:52:22.212644       1 server.go:113] FLAG: --healthz-port="8081"
I0706 09:52:22.212654       1 server.go:113] FLAG: --initial-sync-timeout="1m0s"
I0706 09:52:22.212659       1 server.go:113] FLAG: --kube-master-url=""
I0706 09:52:22.212671       1 server.go:113] FLAG: --kubecfg-file=""
I0706 09:52:22.212721       1 server.go:113] FLAG: --log-backtrace-at=":0"
I0706 09:52:22.212730       1 server.go:113] FLAG: --log-dir=""
I0706 09:52:22.212743       1 server.go:113] FLAG: --log-flush-frequency="5s"
I0706 09:52:22.212754       1 server.go:113] FLAG: --logtostderr="true"
I0706 09:52:22.212765       1 server.go:113] FLAG: --nameservers=""
I0706 09:52:22.212802       1 server.go:113] FLAG: --stderrthreshold="2"
I0706 09:52:22.212820       1 server.go:113] FLAG: --v="2"
I0706 09:52:22.212831       1 server.go:113] FLAG: --version="false"
I0706 09:52:22.212844       1 server.go:113] FLAG: --vmodule=""
I0706 09:52:22.212966       1 server.go:176] Starting SkyDNS server (0.0.0.0:10053)
I0706 09:52:22.213179       1 server.go:198] Skydns metrics enabled (/metrics:10055)
I0706 09:52:22.213204       1 dns.go:147] Starting endpointsController
I0706 09:52:22.213211       1 dns.go:150] Starting serviceController
I0706 09:52:22.213298       1 logs.go:41] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I0706 09:52:22.213326       1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
I0706 09:52:22.713375       1 dns.go:174] Waiting for services and endpoints to be initialized from apiserver...
...
E0706 09:52:52.214678       1 reflector.go:199] k8s.io/dns/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout

kubectl logs kube-dns-692378583-xdxkh dnsmasq -n kube-system output:

I0706 09:55:56.298941       1 main.go:76] opts: {{/usr/sbin/dnsmasq [-k --cache-size=1000 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053] true} /etc/k8s/dns/dnsmasq-nanny 10000000000}
I0706 09:55:56.299052       1 nanny.go:86] Starting dnsmasq [-k --cache-size=1000 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053]
I0706 09:55:56.372491       1 nanny.go:111] 
W0706 09:55:56.372511       1 nanny.go:112] Got EOF from stdout
I0706 09:55:56.372525       1 nanny.go:108] dnsmasq[13]: started, version 2.76 cachesize 1000
I0706 09:55:56.372534       1 nanny.go:108] dnsmasq[13]: compile time options: IPv6 GNU-getopt no-DBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify
I0706 09:55:56.372543       1 nanny.go:108] dnsmasq[13]: using nameserver 127.0.0.1#10053 for domain ip6.arpa 
I0706 09:55:56.372550       1 nanny.go:108] dnsmasq[13]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa 
I0706 09:55:56.372552       1 nanny.go:108] dnsmasq[13]: using nameserver 127.0.0.1#10053 for domain cluster.local 
I0706 09:55:56.372572       1 nanny.go:108] dnsmasq[13]: reading /etc/resolv.conf
I0706 09:55:56.372585       1 nanny.go:108] dnsmasq[13]: using nameserver 127.0.0.1#10053 for domain ip6.arpa 
I0706 09:55:56.372589       1 nanny.go:108] dnsmasq[13]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa 
I0706 09:55:56.372591       1 nanny.go:108] dnsmasq[13]: using nameserver 127.0.0.1#10053 for domain cluster.local 
I0706 09:55:56.372593       1 nanny.go:108] dnsmasq[13]: using nameserver 8.8.8.8#53
I0706 09:55:56.372596       1 nanny.go:108] dnsmasq[13]: using nameserver 192.168.0.1#53
I0706 09:55:56.372599       1 nanny.go:108] dnsmasq[13]: read /etc/hosts - 7 addresses

kubectl logs kube-dns-692378583-xdxkh sidecar -n kube-system output:

ERROR: logging before flag.Parse: I0706 09:44:15.355606       1 main.go:48] Version v1.14.1-16-gff416ee
ERROR: logging before flag.Parse: I0706 09:44:15.355794       1 server.go:45] Starting server (options {DnsMasqPort:53 DnsMasqAddr:127.0.0.1 DnsMasqPollIntervalMs:5000 Probes:[{Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1} {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}] PrometheusAddr:0.0.0.0 PrometheusPort:10054 PrometheusPath:/metrics PrometheusNamespace:kubedns})
ERROR: logging before flag.Parse: I0706 09:44:15.356011       1 dnsprobe.go:75] Starting dnsProbe {Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}
ERROR: logging before flag.Parse: I0706 09:44:15.356147       1 dnsprobe.go:75] Starting dnsProbe {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:1}
ERROR: logging before flag.Parse: W0706 09:48:05.395021       1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:55328->127.0.0.1:53: read: connection refused

Canal on OpenShift

I found a slide deck which says that canal can be used on OpenShift.

Can you please share instructions for deploying canal on OpenShift?

post location of current canal.yaml in the file

Hello,

Our gating system downloads the latest version of canal.yaml automatically with each job. When you change the location of canal.yaml, it breaks our gate and we need to change the code to point to the new location. Would it be possible to post the new location in a special static file in your repo? Then, when you change the canal.yaml location, it would automatically be reflected in that file as well. In that case we can read this file in the gate code and always get the right location without any manual changes.
Thank you
Serguei
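
(A hedged sketch of how a gate could consume such a pointer file; the file name CANAL_YAML_LOCATION is hypothetical, not something that exists in the repo today.)

# Hypothetical: CANAL_YAML_LOCATION would contain the raw URL of the current canal.yaml
CANAL_URL=$(curl -fsSL https://raw.githubusercontent.com/projectcalico/canal/master/CANAL_YAML_LOCATION)
curl -fsSL "$CANAL_URL" -o canal.yaml
kubectl apply -f canal.yaml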

Include portmap plugin + chaining config for hostport support in k8s

Expected Behavior

Should support host ports in k8s

Current Behavior

does not support it

Possible Solution

Needs to include and chain the portmap plugin
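
(A hedged sketch, not the project's actual manifest.) With CNI spec 0.3.x the plugins can be chained in a .conflist, with portmap appended after the existing entry and given the portMappings capability; the field values below are illustrative, based on the cni_network_config already used in these manifests:

{
  "name": "canal",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "type": "calico",
        "etcd_endpoints": "__ETCD_ENDPOINTS__",
        "log_level": "info",
        "policy": { "type": "k8s" },
        "kubernetes": { "kubeconfig": "/etc/cni/net.d/__KUBECONFIG_FILENAME__" }
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true },
      "snat": true
    }
  ]
}

The portmap binary would also need to be shipped into the CNI bin directory on each node.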

Steps to Reproduce (for bugs)

Context

Your Environment

  • Calico version:
  • Flannel version:
  • Orchestrator version:
  • Operating System and version:
  • Link to your project (optional):

bootkube : installation

There is already an example for kubeadm; it would be great if there were an example for bootkube as well.

NetworkPolicy doesn't block traffic

Expected Behavior

NetworkPolicy blocks traffic as described in the Kubernetes documentation: https://kubernetes.io/docs/concepts/services-networking/networkpolicies/

Current Behavior

The network traffic is not blocked.

Steps to Reproduce (for bugs)

The reproducer is in this repo https://github.com/floriankammermann/kubernetes-networkpolicy

Context

I have to isolate pods from each other at L3, essentially a form of multitenancy.
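
(A hedged possibility worth ruling out; this is an assumption about 1.6-era behaviour, not a confirmed diagnosis.) With the extensions/v1beta1 NetworkPolicy API used by Kubernetes 1.6, ingress isolation only applies to namespaces that opt in via the DefaultDeny annotation, so a policy on its own can appear to have no effect. A sketch of the check/fix, with the namespace name as a placeholder:

kubectl annotate ns <namespace> "net.beta.kubernetes.io/network-policy={\"ingress\": {\"isolation\": \"DefaultDeny\"}}"

If the annotation is already present and traffic is still not blocked, the problem lies elsewhere.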

Your Environment

OS: Ubuntu Xenial
Kernel (uname -a): Linux kube2 4.4.0-78-generic #99-Ubuntu SMP Thu Apr 27 15:29:09 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
Install tools:
Installed Docker according to https://docs.docker.com/engine/installation/linux/ubuntu/ (docker -v: Docker version 17.03.1-ce, build c6d412e)
Installed Kubernetes according to https://kubernetes.io/docs/setup/independent/install-kubeadm/
Installed Canal according to https://github.com/projectcalico/canal/blob/master/k8s-install/README.md (version 1.6)

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:44:27Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

Needs to clear NodeNetworkUnavailable flag on Kubernetes 1.6

Expected Behavior

Kubernetes 1.6 has this lovely piece of code: https://github.com/kubernetes/kubernetes/blob/release-1.6/pkg/kubelet/kubelet_node_status.go#L214

This marks any new node as restricted to pods with host networking. The assumption is that the cluster networking implementation will clear this bit when the network setup is complete.

The only implementation that does that is kubenet, i.e. what they use to run GKE.

Discussion: kubernetes/kubernetes#33573

Said there:

This will require network plugins to manage the Node NoRouteCreated state on AWS in 1.5, as they already must do on GCE since 1.3.

Thing is, I think nobody actually did that on 1.3 or 1.4. Instead, we passed the --experimental-flannel-overlay flag which disabled this feature. However, that flag has been removed in 1.6.

Current Behavior

Nodes stay NetworkUnavailable forever, even though Canal is perfectly happy.

Possible Solution

After network setup, and possibly after passing some self-check, Canal should use the API to mark the node available.
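
(A hedged sketch of a manual workaround, not the proposed Canal change itself; it assumes the API server accepts a strategic merge patch of the node's status conditions, and the node name is a placeholder.)

# Hypothetical manual workaround: clear NetworkUnavailable on a node once canal is up
NODE=<node-name>
kubectl proxy --port=8001 &
curl -s -X PATCH "http://127.0.0.1:8001/api/v1/nodes/${NODE}/status" \
  -H "Content-Type: application/strategic-merge-patch+json" \
  -d '{"status":{"conditions":[{"type":"NetworkUnavailable","status":"False","reason":"RouteCreated","message":"Manually cleared by operator"}]}}'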

some minor issues found in canal.yaml

In canal.yaml:

  1. It seems 10-canal.conf should be 10-calico.conf; the calico node will read 10-calico.conf.
  2. In older Kubernetes versions like 1.3.0, "_" is not supported in ConfigMap keys, so it is better to change "cni_network_config" to "cni-network-config".

I would like to propose a fix for the above issues.

Repeating INFO logs on calico-policy-controller after resource deletion

Expected Behavior

After a resource is deleted the controller should stop trying to delete it.

Current Behavior

After deleting a namespace or network policy, the calico-policy-controller keeps repeating its last action every 10 seconds:

2017-09-06 08:32:40,722 7 INFO Handled MODIFIED for Pod: ('policy-demo', 'access-2475823348-815fq')
2017-09-06 08:32:42,549 7 INFO Handled MODIFIED for Pod: ('policy-demo', 'access-2475823348-815fq')
2017-09-06 08:33:14,340 7 INFO Handled MODIFIED for Pod: ('policy-demo', 'access-2475823348-815fq')
2017-09-06 08:33:14,643 7 INFO Handled MODIFIED for Pod: ('policy-demo', 'access-2475823348-815fq')
2017-09-06 08:33:22,385 7 INFO Handled MODIFIED for Pod: ('policy-demo', 'access-2475823348-815fq')
2017-09-06 08:33:22,392 7 INFO Handled DELETED for Pod: ('policy-demo', 'access-2475823348-815fq')
2017-09-06 08:33:23,880 7 INFO Handled DELETED for NetworkPolicy: ('policy-demo', 'default-deny')
2017-09-06 08:33:32,410 7 INFO Handled DELETED for Pod: ('policy-demo', 'access-2475823348-815fq')
2017-09-06 08:33:33,909 7 INFO Unable to find policy 'policy-demo.default-deny' - already deleted
2017-09-06 08:33:33,909 7 INFO Handled DELETED for NetworkPolicy: ('policy-demo', 'default-deny')
2017-09-06 08:33:42,432 7 INFO Handled DELETED for Pod: ('policy-demo', 'access-2475823348-815fq')
2017-09-06 08:33:43,924 7 INFO Unable to find policy 'policy-demo.default-deny' - already deleted
2017-09-06 08:33:43,924 7 INFO Handled DELETED for NetworkPolicy: ('policy-demo', 'default-deny')
2017-09-06 08:33:52,464 7 INFO Handled DELETED for Pod: ('policy-demo', 'access-2475823348-815fq')
2017-09-06 08:33:53,950 7 INFO Unable to find policy 'policy-demo.default-deny' - already deleted
2017-09-06 08:33:53,950 7 INFO Handled DELETED for NetworkPolicy: ('policy-demo', 'default-deny')
2017-09-06 08:34:02,493 7 INFO Handled DELETED for Pod: ('policy-demo', 'access-2475823348-815fq')
2017-09-06 08:34:03,975 7 INFO Unable to find policy 'policy-demo.default-deny' - already deleted
2017-09-06 08:34:03,975 7 INFO Handled DELETED for NetworkPolicy: ('policy-demo', 'default-deny')
2017-09-06 08:34:12,523 7 INFO Handled DELETED for Pod: ('policy-demo', 'access-2475823348-815fq')
2017-09-06 08:34:13,996 7 INFO Unable to find policy 'policy-demo.default-deny' - already deleted
2017-09-06 08:34:13,996 7 INFO Handled DELETED for NetworkPolicy: ('policy-demo', 'default-deny')
2017-09-06 08:34:22,555 7 INFO Handled DELETED for Pod: ('policy-demo', 'access-2475823348-815fq')
2017-09-06 08:34:24,015 7 INFO Unable to find policy 'policy-demo.default-deny' - already deleted
2017-09-06 08:34:24,015 7 INFO Handled DELETED for NetworkPolicy: ('policy-demo', 'default-deny')
2017-09-06 08:34:32,570 7 INFO Handled DELETED for Pod: ('policy-demo', 'access-2475823348-815fq')
2017-09-06 08:34:34,038 7 INFO Unable to find policy 'policy-demo.default-deny' - already deleted
2017-09-06 08:34:34,038 7 INFO Handled DELETED for NetworkPolicy: ('policy-demo', 'default-deny')
2017-09-06 08:34:42,604 7 INFO Handled DELETED for Pod: ('policy-demo', 'access-2475823348-815fq')
2017-09-06 08:34:44,062 7 INFO Unable to find policy 'policy-demo.default-deny' - already deleted
2017-09-06 08:34:44,062 7 INFO Handled DELETED for NetworkPolicy: ('policy-demo', 'default-deny')
2017-09-06 08:34:52,633 7 INFO Handled DELETED for Pod: ('policy-demo', 'access-2475823348-815fq')
2017-09-06 08:34:54,343 7 INFO Unable to find policy 'policy-demo.default-deny' - already deleted
2017-09-06 08:34:54,343 7 INFO Handled DELETED for NetworkPolicy: ('policy-demo', 'default-deny')
2017-09-06 08:35:02,666 7 INFO Handled DELETED for Pod: ('policy-demo', 'access-2475823348-815fq')
2017-09-06 08:35:04,121 7 INFO Unable to find policy 'policy-demo.default-deny' - already deleted
2017-09-06 08:35:04,121 7 INFO Handled DELETED for NetworkPolicy: ('policy-demo', 'default-deny')
2017-09-06 08:35:12,703 7 INFO Handled DELETED for Pod: ('policy-demo', 'access-2475823348-815fq')
2017-09-06 08:35:14,147 7 INFO Unable to find policy 'policy-demo.default-deny' - already deleted
2017-09-06 08:35:14,147 7 INFO Handled DELETED for NetworkPolicy: ('policy-demo', 'default-deny')
2017-09-06 08:35:22,729 7 INFO Handled DELETED for Pod: ('policy-demo', 'access-2475823348-815fq')
2017-09-06 08:35:24,165 7 INFO Unable to find policy 'policy-demo.default-deny' - already deleted
2017-09-06 08:35:24,165 7 INFO Handled DELETED for NetworkPolicy: ('policy-demo', 'default-deny')
...... <forever> ....

Possible Solution

Steps to Reproduce (for bugs)

  1. Deploy Canal. We used Juju so the deployment was juju deploy cs:~containers/canonical-kubernetes-canal-4
  2. Create a namespace and a network policy:
> kubectl create ns policy-demo
> kubectl run --namespace=policy-demo nginx --replicas=2 --image=nginx
> kubectl expose --namespace=policy-demo deployment nginx --port=80
> kubectl create -f - <<EOF
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: default-deny
  namespace: policy-demo
spec:
  podSelector:
EOF
  3. Delete the network policy
> kubectl delete netpol -n policy-demo default-deny
  4. See the logs of the policy controller:
> kubectl logs po/calico-policy-controller-1263594417-dql2d -n kube-system -f

Context

Your Environment

  • Calico version: 2.5.1
  • Flannel version: 0.7.0
  • Orchestrator version: Juju 2.2
  • Operating System and version: Ubuntu 16.04
  • Link to your project (optional):

Document kubelet CNI dependency

The self-hosted install documentation should state that --network-plugin=cni is required and that --cni-conf-dir and --cni-bin-dir must be set properly.
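
For example, a hedged sketch of the relevant kubelet flags (the directory paths shown are the common defaults and may differ per install):

kubelet --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/cni/bin
  # (other kubelet flags omitted)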

Mention Minikube and podCIDR in the documentation

Expected Behavior

At the moment, when you try to install Canal on Minikube using the directions in the self-hosted install, the kube-flannel container will fail to come up, with messages in the logs about a missing podCIDR. Minikube doesn't have a flag to set the podCIDR like some other ways of installing Kubernetes do. I had to find this issue to learn that it's possible to set the podCIDR on Minikube with kubectl:
kubectl patch node minikube -p '{"spec":{"podCIDR":"10.33.12.0/24"}}'
After I set this, I can at least connect between pods with Canal installed using netcat, though it looks to me like isolation isn't being enforced.

Current Behavior

There's no mention of Minikube, or of setting the podCIDR, in the documentation.

Context

I'm trying to create a working development environment for testing network policy on my local machine so I don't have to depend on a remote cluster for doing development work.
