
Comments (40)

pkoraca avatar pkoraca commented on June 15, 2024 2

In my case the same error was happening with httpV2 enabled. Removing that line fixed the issue.

    metrics:
      serviceMonitor:
        enabled: true
      enableOpenMetrics: true
      enabled:
      - dns:query;ignoreAAAA
      - drop
      - tcp
      - flow
      - port-distribution
      - icmp
      - http
      # - httpV2:exemplars=true;labelsContext=source_ip\,source_namespace\,source_workload\,destination_ip\,destination_namespace\,destination_workload\,traffic_direction

I managed to reproduce the Hubble CLI issue as well, but could not fix it by disabling TLS (example below). A full chart reinstall did help, though (rough sketch after the values).

This did not help:

hubble:
  relay:
    tls:
      server:
        enabled: false
  tls:
    enabled: false
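For reference, the reinstall that did work was just a plain uninstall/install of the release. A rough sketch only; the release name "cilium", the kube-system namespace, and the values.yaml file are assumptions, so adjust to your setup:

# Sketch only: release name, namespace, and values file are placeholders.
helm uninstall cilium -n kube-system
helm install cilium cilium/cilium -n kube-system -f values.yaml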

from hubble-ui.

pascal71 avatar pascal71 commented on June 15, 2024 1

Logs of backend container in UI:

level=info msg="running hubble status checker\n" subsys=ui-backend
level=info msg="fetching hubble flows: connecting to hubble-relay (attempt #1)\n" subsys=ui-backend
level=info msg="hubble-relay grpc client created (hubble-relay addr: hubble-relay:80)\n" subsys=ui-backend
level=info msg="hubble status checker: connection to hubble-relay established\n" subsys=ui-backend
level=info msg="hubble-relay grpc client created (hubble-relay addr: hubble-relay:80)\n" subsys=ui-backend
level=info msg="fetching hubble flows: connection to hubble-relay established\n" subsys=ui-backend
level=info msg="fetching hubble flows: connecting to hubble-relay (attempt #1)\n" subsys=ui-backend
level=error msg="flow error: EOF\n" subsys=ui-backend
level=info msg="hubble status checker: stopped\n" subsys="ui-backend:status-checker"
level=info msg="hubble-relay grpc client created (hubble-relay addr: hubble-relay:80)\n" subsys=ui-backend
level=error msg="fetching hubble flows: connecting to hubble-relay (attempt #1) failed: rpc error: code = Canceled desc = context canceled\n" subsys=ui-backend
level=info msg="fetching hubble flows: stream (ui backend <-> hubble-relay) is closed\n" subsys=ui-backend
level=info msg="Get flows request: number:10000  follow:true  blacklist:{source_label:\"reserved:unknown\"  source_label:\"reserved:host\"  source_label:\"k8s:k8s-app=kube-dns\"  source_label:\"reserved:remote-node\"  source_label:\"k8s:app=prometheus\"  source_label:\"reserved:kube-apiserver\"}  blacklist:{destination_label:\"reserved:unknown\"  destination_label:\"reserved:host\"  destination_label:\"reserved:remote-node\"  destination_label:\"k8s:app=prometheus\"  destination_label:\"reserved:kube-apiserver\"}  blacklist:{destination_label:\"k8s:k8s-app=kube-dns\"  destination_port:\"53\"}  blacklist:{source_fqdn:\"*.cluster.local*\"}  blacklist:{destination_fqdn:\"*.cluster.local*\"}  blacklist:{protocol:\"ICMPv4\"}  blacklist:{protocol:\"ICMPv6\"}  whitelist:{source_pod:\"default/\"  event_type:{type:1}  event_type:{type:4}  event_type:{type:129}  reply:false}  whitelist:{destination_pod:\"default/\"  event_type:{type:1}  event_type:{type:4}  event_type:{type:129}  reply:false}" subsys=ui-backend
level=info msg="running hubble status checker\n" subsys=ui-backend
level=info msg="fetching hubble flows: connecting to hubble-relay (attempt #1)\n" subsys=ui-backend
level=info msg="hubble-relay grpc client created (hubble-relay addr: hubble-relay:80)\n" subsys=ui-backend
level=info msg="hubble-relay grpc client created (hubble-relay addr: hubble-relay:80)\n" subsys=ui-backend
level=info msg="hubble status checker: connection to hubble-relay established\n" subsys=ui-backend
level=info msg="fetching hubble flows: connection to hubble-relay established\n" subsys=ui-backend
level=info msg="fetching hubble flows: connecting to hubble-relay (attempt #1)\n" subsys=ui-backend
level=error msg="flow error: EOF\n" subsys=ui-backend
level=info msg="hubble status checker: stopped\n" subsys="ui-backend:status-checker"
level=info msg="hubble-relay grpc client created (hubble-relay addr: hubble-relay:80)\n" subsys=ui-backend
level=error msg="fetching hubble flows: connecting to hubble-relay (attempt #1) failed: rpc error: code = Canceled desc = context canceled\n" subsys=ui-backend
level=info msg="fetching hubble flows: stream (ui backend <-> hubble-relay) is closed\n" subsys=ui-backend

Hope this helps.

Kind regards,

Pascal

from hubble-ui.

samos667 avatar samos667 commented on June 15, 2024 1

For me it was because my cluster domain is "cluster" (in my setup it is imperative for Cilium not to have a dotted cluster domain), BUT the Helm chart defaults hubble.peerService.clusterDomain to "cluster.local".

With Cilium 1.13.3 installed via Helm, setting the correct hubble.peerService.clusterDomain value fixed access to the UI for me, and I didn't need to disable TLS anywhere.

My Cilium values:

helm install cilium cilium/cilium --version 1.13.3 \
  --namespace kube-system \
  --set ipam.mode=cluster-pool \
  --set ipam.operator.clusterPoolIPv4PodCIDRList=10.66.0.0/16 \
  --set ipam.operator.clusterPoolIPv4MaskSize=20 \
  --set kubeProxyReplacement=strict \
  --set k8sServiceHost=172.16.66.200 \
  --set k8sServicePort=6443 \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true \
  --set operator.replicas=1 \
  --set tunnel=disabled \
  --set ipv4NativeRoutingCIDR=10.66.0.0/16 \
  --set autoDirectNodeRoutes=true \
  --set hubble.peerService.clusterDomain=cluster
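If you are not sure what your cluster domain actually is, one way to check (an assumption: CoreDNS with the default "coredns" ConfigMap in kube-system; some distributions name it differently) is to look at the Corefile, where the "kubernetes <domain> ..." line shows the configured cluster domain:

# Assumes the stock CoreDNS ConfigMap name; adjust for your distribution.
kubectl -n kube-system get configmap coredns -o yaml | grep -A1 "kubernetes "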

from hubble-ui.

morix1500 avatar morix1500 commented on June 15, 2024 1

In my case, the following configuration alone was not enough; communication from hubble-relay to hubble-peer was failing because of Ubuntu's ufw.

I allowed access from Cilium's IP CIDR and it worked fine (sketch below the values).

hubble:
  relay:
    tls:
      server:
        enabled: false
  tls:
    enabled: false
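Roughly, the rule looked like the following. The CIDR is a placeholder for my setup; 4244 is the default Hubble peer port mentioned elsewhere in this thread:

# Example only: substitute your cluster's pod/node CIDR.
sudo ufw allow from 10.42.0.0/16 to any port 4244 proto tcp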

from hubble-ui.

kevin-shelaga avatar kevin-shelaga commented on June 15, 2024 1

The following fixed the issue for me:

hubble: 
  relay: 
    enabled: true
  ui: 
    frontend:
      server: 
        ipv6: 
          enabled: false
    enabled: true
  metrics: 
    enableOpenMetrics: true
    enabled: 
    - dns
    - drop
    - tcp
    - flow
    - port-distribution
    - icmp
    - httpV2:exemplars=true;labelsContext=source_ip,source_namespace,source_workload,destination_ip,destination_namespace,destination_workload,traffic_direction

Alternatively, try it with --set:

--set hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,httpV2:exemplars=true;labelsContext=source_ip\,source_namespace\,source_workload\,destination_ip\,destination_namespace\,destination_workload\,traffic_direction}" 

from hubble-ui.

samwho avatar samwho commented on June 15, 2024

I just attempted upgrading to 1.13.0-rc0 and I experience the same problem.

from hubble-ui.

DhwanishShah avatar DhwanishShah commented on June 15, 2024

Hi i want to work on this issue. Please assign this issue to me @samwho @gandro @rolinh

from hubble-ui.

samuraii78 avatar samuraii78 commented on June 15, 2024

Having the same issue running on minikube. No flows are registering in the UI or via the hubble CLI utility

from hubble-ui.

gandro avatar gandro commented on June 15, 2024

Hi i want to work on this issue. Please assign this issue to me @samwho @gandro @rolinh

We're not yet sure what the root cause is. If you know it, please feel free to share or fix. Otherwise I think we need more info.

What response headers do you see in the browser's network tab, @samwho?

from hubble-ui.

geakstr avatar geakstr commented on June 15, 2024

The EOF error comes from hubble-relay. We need the hubble-ui pod's backend container logs and the hubble-relay pod logs.
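For anyone unsure where to pull those from, something along these lines should work, assuming the chart's default namespace and workload names:

# hubble-ui backend container logs
kubectl -n kube-system logs deploy/hubble-ui -c backend
# hubble-relay logs
kubectl -n kube-system logs deploy/hubble-relay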

from hubble-ui.

pascal71 avatar pascal71 commented on June 15, 2024

Can confirm the same issue on 1.12.2; K8s 1.25.2, ARM/Pi 4 architecture.

from hubble-ui.

pascal71 avatar pascal71 commented on June 15, 2024

relay logs:

level=info msg="Starting gRPC server..." options="{peerTarget:hubble-peer.kube-system.svc.cluster.local:443 dialTimeout:5000000000 retryTimeout:30000000000 listenAddress::4245 metricsListenAddress: log:0x400037c2a0 serverTLSConfig:<nil> insecureServer:true clientTLSConfig:0x40000acbe8 clusterName:default insecureClient:false observerOptions:[0xbfc7d0 0xbfc8d0] grpcMetrics:<nil> grpcUnaryInterceptors:[] grpcStreamInterceptors:[]}" subsys=hubble-relay
level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:443"
level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:443"
level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:443"
level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:443"
level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:443"
level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:443"
level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:443"
level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:443"
level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:443"
level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:443"
level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:443"
level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:443"
level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:443"
level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:443"
level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:443"
level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:443"
level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:443"

from hubble-ui.

rolinh avatar rolinh commented on June 15, 2024

@pascal71 Your issue may be a different one. Would you mind opening a new issue?

from hubble-ui.

ensonic avatar ensonic commented on June 15, 2024

I get the same error as in the first comment and the same logs as Pascal. I am using Cilium v1.11.8 and just downloaded the latest hubble and cilium binaries today.
Here is the relay log:

level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:443"

hubble-ui has:

level=info msg="Get flows request: number:10000 follow:true blacklist:{source_label:\"reserved:unknown\" source_label:\"reserved:host\" source_label:\"k8s:k8s-app=kube-dns\" source_label:\"reserved:remote-node\" source_label:\"k8s:app=prometheus\" source_label:\"reserved:kube-apiserv
level=info msg="running hubble status checker\n" subsys=ui-backend                                                                                                                                                                                                                          
level=info msg="fetching hubble flows: connecting to hubble-relay (attempt #1)\n" subsys=ui-backend                                                                                                                                                                                         
level=info msg="hubble-relay grpc client created (hubble-relay addr: hubble-relay:80)\n" subsys=ui-backend                                                                                                                                                                                  
level=info msg="hubble status checker: connection to hubble-relay established\n" subsys=ui-backend                                                                                                                                                                                          
level=info msg="hubble-relay grpc client created (hubble-relay addr: hubble-relay:80)\n" subsys=ui-backend                                                                                                                                                                                  
level=info msg="fetching hubble flows: connection to hubble-relay established\n" subsys=ui-backend                                                                                                                                                                                          
level=info msg="fetching hubble flows: connecting to hubble-relay (attempt #1)\n" subsys=ui-backend                                                                                                                                                                                         
level=error msg="flow error: EOF\n" subsys=ui-backend                                                                                                                                                                                                                                       
level=info msg="hubble status checker: stopped\n" subsys="ui-backend:status-checker"                                                                                                                                                                                                        
level=info msg="hubble-relay grpc client created (hubble-relay addr: hubble-relay:80)\n" subsys=ui-backend                                                                                                                                                                                  
level=error msg="fetching hubble flows: connecting to hubble-relay (attempt #1) failed: rpc error: code = Canceled desc = context canceled\n" subsys=ui-backend                                                                                                                             
level=info msg="fetching hubble flows: stream (ui backend <-> hubble-relay) is closed\n" subsys=ui-backend   

FYI: I have been using Cilium for quite some time now, and each time I update I try this again; it has not worked a single time. The errors are getting fewer, but it would be useful to actually follow up on these reports and suggest how users could help get this working.

from hubble-ui.

s-reynier avatar s-reynier commented on June 15, 2024

Hi,
I have the same issue in my k8s environment. Have you found a fix to resolve this issue?

from hubble-ui.

brotherdust avatar brotherdust commented on June 15, 2024

Same issue here.

from hubble-ui.

jvizier avatar jvizier commented on June 15, 2024

Hello,

Same issue on an RKE2 cluster.

  • conf:

kind: HelmChartConfig
metadata:
  name: rke2-cilium
  namespace: kube-system
spec:
  valuesContent: |-
    hubble:
      listenAddress: ":4245"
      enabled: true
      metrics:
        enabled:
        - dns:query;ignoreAAAA
        - drop
        - tcp
        - flow
        - port-distribution
        - icmp
        - http
      peerService:
        clusterDomain: cluster.local
      relay:
        enabled: true
      ui:
        enabled: true
      tls:
        enabled: false

  • Cilium status:
    Defaulted container "cilium-agent" out of: cilium-agent, install-portmap-cni-plugin (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init)
    KVStore: Ok Disabled
    Kubernetes: Ok 1.23 (v1.23.14+rke2r1) [linux/amd64]
    Kubernetes APIs: ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
    KubeProxyReplacement: Disabled
    Host firewall: Disabled
    CNI Chaining: portmap
    Cilium: Ok 1.12.3 (v1.12.3-1c466d2)
    NodeMonitor: Listening for events on 2 CPUs with 64x4096 of shared memory
    Cilium health daemon: Ok
    IPAM: IPv4: 7/254 allocated from 10.42.0.0/24,
    BandwidthManager: Disabled
    Host Routing: Legacy
    Masquerading: IPTables [IPv4: Enabled, IPv6: Disabled]
    Controller Status: 38/38 healthy
    Proxy Status: OK, ip 10.42.0.152, 0 redirects active on ports 10000-20000
    Global Identity Range: min 256, max 65535
    Hubble: Ok Current/Max Flows: 4095/4095 (100.00%), Flows/s: 505.43 Metrics: Ok
    Encryption: Disabled
    Cluster health: 3/3 reachable (2023-01-05T13:48:39Z)

  • logs on hubble-relay:

level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:80"

  • logs on hubble GUI:

Data stream has failed on the UI backend: EOF

  • Port 4244 is open on nodes

from hubble-ui.

brotherdust avatar brotherdust commented on June 15, 2024

(quoting @jvizier's RKE2 config, cilium status, and logs above)

I think it's related to TLS (surprise!). I turned it off completely on relay, ui, and cilium configs and it started working.

from hubble-ui.

jvizier avatar jvizier commented on June 15, 2024

I think it's related to TLS (surprise!). I turned it off completely on relay, ui, and cilium configs and it started working.

Nice! I already disabled TLS, but only on the Hubble side; I will look into the other parts. Thanks!
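In case it saves someone a lookup, the TLS-related values mentioned in this thread can be flipped in one Helm upgrade. A rough sketch, assuming a Helm-managed release named "cilium" in kube-system:

helm upgrade cilium cilium/cilium -n kube-system --reuse-values \
  --set hubble.tls.enabled=false \
  --set hubble.tls.auto.enabled=false \
  --set hubble.relay.tls.server.enabled=false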

from hubble-ui.

ellakk avatar ellakk commented on June 15, 2024

I get the exact same issue. I was also able to fix it temporarily by disabling TLS, which seems like a bad idea.

from hubble-ui.

eramax avatar eramax commented on June 15, 2024

I get the exact same issue

from hubble-ui.

miathedev avatar miathedev commented on June 15, 2024

Any updates? I ran into exactly the same thing.

from hubble-ui.

geakstr avatar geakstr commented on June 15, 2024

This does not look like purely a hubble-ui issue, but a Hubble/Cilium issue. It usually indicates, for example, that the Cilium installation happened via Helm but Hubble was enabled with cilium-cli. I would suggest opening a new issue in https://github.com/cilium/cilium with a detailed description of how things were deployed. Please reference this issue there.
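If Cilium was installed with Helm, one way to keep Hubble managed by the same release instead of enabling it with cilium-cli is, roughly (release name and namespace are assumptions):

# Sketch: enables relay and UI on an existing Helm-managed release named "cilium".
helm upgrade cilium cilium/cilium -n kube-system --reuse-values \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true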

from hubble-ui.

a1git avatar a1git commented on June 15, 2024

This also happens when Cilium and Hubble are both installed at the same time using Helm, so this does not need a new issue.

from hubble-ui.

antonkobylko1990 avatar antonkobylko1990 commented on June 15, 2024

If you have enabled Traefik dashboard, try to disable it.

from hubble-ui.

ulfaric avatar ulfaric commented on June 15, 2024

Same issue here, RKE2 with Cilium.

from hubble-ui.

pkoraca avatar pkoraca commented on June 15, 2024

Has anyone found a nice way to resolve this issue without removing and reinstalling a Helm chart?

from hubble-ui.

camrossi avatar camrossi commented on June 15, 2024

I have the same issue with Cilium 1.13.3 on upstream K8s. Everything is installed with Helm.
Disabling TLS also fixed it for me.

  tls:
    enabled: false

from hubble-ui.

V0idk avatar V0idk commented on June 15, 2024

You must visit http://localhost:12000/ because the UI seems to forbid outside IPs. Are you visiting localhost?
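If the UI is only reachable on localhost, a port-forward is the usual way in; for example, assuming the chart's default service name and namespace:

kubectl -n kube-system port-forward svc/hubble-ui 12000:80
# then browse to http://localhost:12000/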

from hubble-ui.

ervinb avatar ervinb commented on June 15, 2024

The CLI simply doesn't support TLS being disabled.

When the following flags are issued:

cilium hubble enable --ui --helm-set hubble.tls.enabled=false --helm-set hubble.tls.auto.enabled=false --helm-set hubble.relay.tls.server.enabled=false
  1. It causes the relay Secret not to be generated. This is what we want.

  2. Secret creation is forced in the CLI regardless of (1).

func (k *K8sHubble) enableRelay(ctx context.Context) (string, error) {
        ...
	k.Log("✨ Generating certificates...")

	if err := k.createRelayCertificates(ctx); err != nil {
		return "", err
	}
        ...
}

func (k *K8sHubble) createRelayCertificates(ctx context.Context) error {
	k.Log("🔑 Generating certificates for Relay...")
	...
	return k.createRelayClientCertificate(ctx)
}

func (k *K8sHubble) createRelayClientCertificate(ctx context.Context) error {
        secret, err := k.generateRelayCertificate(defaults.RelayClientSecretName)
        if err != nil {
                return err
        }

        _, err = k.client.CreateSecret(ctx, secret.GetNamespace(), &secret, metav1.CreateOptions{})
        if err != nil {
                return fmt.Errorf("unable to create secret %s/%s: %w", secret.GetNamespace(), secret.GetName(), err)
        }

        return nil
}

secret is empty because of (1).

k.client.CreateSecret fails because it's called with empty "payload" (the empty secret).

from hubble-ui.

lllsJllsJ avatar lllsJllsJ commented on June 15, 2024

For people who would like to enable the httpV2 Hubble metric: try removing the \ characters in the labelsContext separators.

I think the documentation has a typo.


    metrics:
      serviceMonitor:
        enabled: true
      enableOpenMetrics: true
      enabled:
      - dns:query;ignoreAAAA
      - drop
      - tcp
      - flow
      - port-distribution
      - icmp
      - http
      - httpV2:exemplars=true;labelsContext=source_ip,source_namespace,source_workload,destination_ip,destination_namespace,destination_workload,traffic_direction

from hubble-ui.

AyushCloud avatar AyushCloud commented on June 15, 2024

Hello,

I am also facing the same issue

ecureServer:true clientTLSConfig: clusterName:default insecureClient:true observerOptions:[0x10b23e0 0x10b24e0] grpcMetrics: grpcUnaryInterceptors:[] grpcStreamInterceptors:[]}" subsys=hubble-relay
level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:80"
level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:80"
level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:80"
level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:80"
level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:80"

I tried disabling TLS, but it did not work.

Thank You,
Ayush

from hubble-ui.

kub3let avatar kub3let commented on June 15, 2024

I'm having the same problem as @AyushCloud, disabling TLS makes no difference.

k3s v1.27.4, cilium v1.14.1 via helm

from hubble-ui.

Bear-LB avatar Bear-LB commented on June 15, 2024

This is still an issue.
Tried with TLS disabled, the httpV2 metric disabled, and different CRIs, with a fresh VM snapshot install on each attempt.
Tried installing with both Helm and the Cilium CLI.
I have confirmed that I use the default cluster.local Kubernetes domain.
I have tried with and without kube-proxy replacement.

Installed versions:
VMware Workstation 17
Debian 12 or Ubuntu 22.04
Kubernetes version 1.27
Cilium 1.14.4
Containerd.io 1.6.24

Cluster initiated with kubeadm init

Cilium installed with

helm install cilium cilium/cilium \
--version 1.14.4 \
--namespace=kube-system \
--set hubble.relay.enabled=true \
--set hubble.ui.enabled=true \
--set kubeProxyReplacement=true

OR

cilium install --version v1.14.4
...
cilium hubble enable --ui

Hubble relay logs
level=warning msg="Failed to create peer client for peers synchronization; will try again after the timeout has expired" error="context deadline exceeded" subsys=hubble-relay target="hubble-peer.kube-system.svc.cluster.local:443

Hubble status

root@debload1:~# cilium hubble port-forward&
[1] 6068
root@debload1:~# hubble status
Healthcheck (via localhost:4245): Ok
Current/Max Flows: 0/0
Flows/s: N/A
Connected Nodes: 0/0

Cilium status says all is ok

cilium status inside the agent container says all is OK. Even Hubble seems to list some ongoing flows:

root@debload1:~# kubectl -n kube-system exec cilium-dkvcv -- cilium status
Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
KVStore:                 Ok   Disabled
Kubernetes:              Ok   1.27 (v1.27.7) [linux/amd64]
Kubernetes APIs:         ["EndpointSliceOrEndpoint", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement:    True   [ens33 172.16.179.151 (Direct Routing)]
Host firewall:           Disabled
CNI Chaining:            none
Cilium:                  Ok   1.14.4 (v1.14.4-87dd2b64)
NodeMonitor:             Listening for events on 128 CPUs with 64x4096 of shared memory
Cilium health daemon:    Ok   
IPAM:                    IPv4: 3/254 allocated from 10.0.1.0/24, 
IPv4 BIG TCP:            Disabled
IPv6 BIG TCP:            Disabled
BandwidthManager:        Disabled
Host Routing:            Legacy
Masquerading:            IPTables [IPv4: Enabled, IPv6: Disabled]
Controller Status:       26/26 healthy
Proxy Status:            OK, ip 10.0.1.121, 0 redirects active on ports 10000-20000, Envoy: embedded
Global Identity Range:   min 256, max 65535
Hubble:                  Ok              Current/Max Flows: 1205/4095 (29.43%), Flows/s: 7.68   Metrics: Disabled
Encryption:              Disabled        
Cluster health:          2/2 reachable   (2023-11-15T17:05:47Z)

from hubble-ui.

feimingc avatar feimingc commented on June 15, 2024

(quoting @Bear-LB's report above)

I'm hitting the same problem. I disabled all the TLS options and it still doesn't work. I have opened a new issue on the Cilium repo.

from hubble-ui.

abctaylor avatar abctaylor commented on June 15, 2024

The YAML presented above by others had no effect for me. I messed around and changed the service from ClusterIP to NodePort to force a connection to it (when installing it as a pod in kube-system):

hubble-ui NodePort 10.3.199.163 <none> 80:31286/TCP 5m4s
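For anyone wanting to try the same ClusterIP-to-NodePort switch, a hypothetical one-liner (service name and namespace assumed; note that the chart may revert the change on the next upgrade):

kubectl -n kube-system patch svc hubble-ui -p '{"spec":{"type":"NodePort"}}'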

At first I just got here: [screenshot]

before eventually seeing the same image posted above: [screenshot]

and occasionally this: [screenshot]

This issue has been ongoing for a year now, and there's clearly an intention to make Cilium 1.14 compatible with k8s > 1.27. Do any contributors to this project think the wider issue here (which seems not to be the fault of hubble-ui) will get actual traction? The Hubble screenshots from the project README look great, and it would be a shame not to have this visibility.

from hubble-ui.

abctaylor avatar abctaylor commented on June 15, 2024

The following YAML eventually worked for me. Please note I specified my cluster domain, which is k8s.core.example.net (I don't use cluster.local).

hubble:
  enabled: true
  metrics:
    enabled:
    - dns:query;ignoreAAAA
    - drop
    - tcp
    - flow
    - port-distribution
    - icmp
    - http
  peerService:
    clusterDomain: k8s.core.example.net
  relay:
    enabled: true
  ui:
    enabled: true
  tls:
    enabled: false

It would be extremely nice if this (TLS?) issue could be addressed!

[screenshot]

from hubble-ui.

bernardgut avatar bernardgut commented on June 15, 2024

I had this exact same issue, with the same logs as @pascal71 and a few others in this thread. I fixed it by running:

sudo ufw allow 4244/tcp comment "Hubble Observability"

on the cluster nodes (in my case, Ubuntu 23.10 on arm64). I suggest you all check for any firewalls sitting between you and your cluster.

Hope this helps someone out there.

from hubble-ui.

pierreblais avatar pierreblais commented on June 15, 2024

Cilium "v1.15.1" fix the problem for me !

from hubble-ui.

conradwt avatar conradwt commented on June 15, 2024

@pierreblais I'm still seeing this issue on a local installation on macOS 14.3.1 using Cilium v1.15.1. I have opened another issue, #809.

from hubble-ui.
