
Hubble UI

Hubble UI is an open-source user interface for Cilium Hubble.

🚀 Installation

Hubble UI is installed as part of Hubble. Please see the Hubble Getting Started Guide for instructions.

🌐 Why Hubble UI?

Troubleshooting microservices application connectivity is a challenging task. Simply looking at kubectl get pods does not indicate dependencies between each service, external APIs, or databases.

Hubble UI enables zero-effort automatic discovery of the service dependency graph for Kubernetes Clusters at L3/L4 and even L7, allowing user-friendly visualization and filtering of those dataflows as a Service Map.

See Hubble Getting Started Guide for details.

Service Map

🛠 Development

Backend

If you want to point the frontend to a backend deployed in Minikube, simply create a port forward.

kubectl port-forward -n kube-system deployment/hubble-ui 8081

To make changes to the Go backend, there are additional steps.

  1. Go to the 📁 backend directory and execute ./ctl.sh.

    cd ./backend
    ./ctl.sh run

    Wait until the build and server are running.

  2. In a separate terminal, enter the 📁 server directory containing the Envoy config.

    cd ./server

    Assuming Envoy has already been installed, execute:

    envoy -c ./envoy.yaml

  3. In a separate terminal, run a port forward to Hubble Relay.

    kubectl port-forward -n kube-system deployment/hubble-relay 50051:4245
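With all three terminals running, the wiring is: the backend listens on 8090, Envoy serves the gRPC-Web API on 8081, and the relay port-forward is reachable on 50051 (the default ports used elsewhere on this page). A quick sanity check, assuming those defaults:

# Confirm the backend (8090), Envoy (8081) and the relay port-forward (50051) are listening locally.
ss -ltn | grep -E '8081|8090|50051'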

Docker 🐳

Build the backend Docker image:

make hubble-ui-backend
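As a rough sketch, the resulting image can be run locally against a port-forwarded Hubble Relay. The image name below is a placeholder (not necessarily what the Makefile tags), and FLOWS_API_ADDR mirrors the environment variable shown in the pod description further down; newer backend versions may expect different settings.

# Sketch only: run a locally built backend image against a port-forwarded relay.
kubectl port-forward -n kube-system deployment/hubble-relay 50051:4245 &
docker run --rm --network host \
  -e FLOWS_API_ADDR=localhost:50051 \
  -e EVENTS_SERVER_PORT=8090 \
  hubble-ui-backend:local   # placeholder image name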

Frontend

  1. Install dependencies.

    npm install

  2. Start the development server.

    npm run watch

  3. Open http://localhost:8080

Docker 🐳

Build the frontend Docker image:

make hubble-ui

🐝 Community

Learn more about the Cilium community.

🌏 Releases

Push a tag to GitHub and ping a maintainer to approve the GitHub Actions run that pushes the built images to the official repositories.

⚖️ License

Apache License, Version 2.0

hubble-ui's People

Contributors

aanm, alex1989hu, bengentil, betula, dependabot[bot], errordeveloper, geakstr, genbit, hassenius, joestringer, jorge07, kaworu, kimstacy, mantoine96, meyskens, michi-covalent, mkilchhofer, niklasbeinghaus, paxnil, raphink, rolinh, therealak12, uhthomas, yandzee, yannikmesserli


hubble-ui's Issues

Is the community version of Hubble and Hubble UI going to be deprecated/removed?

Hi, thank you for your great projects. I'm actively tracking the changes in the Cilium projects, and unlike Cilium itself, Hubble and Hubble UI do not seem to be under active maintenance (you can see the commits and MRs over the past 6-9 months).

Are these projects going to be deprecated in favor of their enterprise version? If not, is there any plan to implement some of the most interesting and wanted features from the enterprise version into the community version as well?

  • Support L7 traffic visualization: #157
  • RBAC
  • Process Context for Syscall Visibility & Enforcement
  • Cilium Service Mesh Integration (I don't have detailed information about this, is jaeger-ui going to be used instead?)

Frontend is unable to connect to the backend

I've installed Cilium from the Helm chart v1.10.4 and it seems that the frontend is unable to connect to the backend.

Frontend logs:

127.0.0.1 - - [09/Sep/2021:20:01:22 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.63 Safari/537.36" "-"                                                          
127.0.0.1 - - [09/Sep/2021:20:01:22 +0000] "POST /api/ui.UI/GetEvents HTTP/1.1" 405 559 "http://127.0.0.1:8080/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.63 Safari/537.36" "-"               
127.0.0.1 - - [09/Sep/2021:20:01:22 +0000] "GET /favicon.ico HTTP/1.1" 200 33310 "http://127.0.0.1:8080/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.63 Safari/537.36" "-"

Backend logs:

level=info msg="Started gops server" address="127.0.0.1:0" subsys=ui-backend                                                                                                                                                                  
level=info msg="hubble-relay grpc client created (hubble-relay addr: hubble-relay:80)\n" subsys=ui-backend                                                                                                                                    
level=info msg="listening at: 0.0.0.0:8090\n" subsys=ui-backend 

Pod state:

Name:         hubble-ui-577666f6b7-h89wt                                                                                                                                                                                                      
Namespace:    cilium                                                                                                                                                                                                                          
Priority:     0                                                                                                                                                                                                                               
Node:         ip-10-100-2-177.ec2.internal/10.100.2.177                                                                                                                                                                                       
Start Time:   Thu, 09 Sep 2021 22:35:33 +0300                                                                                                                                                                                                 
Labels:       k8s-app=hubble-ui                                                                                                                                                                                                               
              pod-template-hash=577666f6b7                                                                                                                                                                                                    
Annotations:  kubernetes.io/psp: eks.privileged                                                                                                                                                                                               
Status:       Running                                                                                                                                                                                                                         
IP:           10.128.6.155                                                                                                                                                                                                                    
IPs:                                                                                                                                                                                                                                          
  IP:           10.128.6.155                                                                                                                                                                                                                  
Controlled By:  ReplicaSet/hubble-ui-577666f6b7                                                                                                                                                                                               
Containers:                                                                                                                                                                                                                                   
  frontend:                                                                                                                                                                                                                                   
    Container ID:   docker://4a502f47ba3e5a654d71f7e7e7fbe96d111a660f726c96ffaab2ee9db9690849                                                                                                                                                 
    Image:          quay.io/cilium/hubble-ui:v0.7.9@sha256:e0e461c680ccd083ac24fe4f9e19e675422485f04d8720635ec41f2ba9e5562c                                                                                                                   
    Image ID:       docker-pullable://quay.io/cilium/hubble-ui@sha256:e0e461c680ccd083ac24fe4f9e19e675422485f04d8720635ec41f2ba9e5562c                                                                                                        
    Port:           8080/TCP                                                                                                                                                                                                                  
    Host Port:      0/TCP                                                                                                                                                                                                                     
    State:          Running                                                                                                                                                                                                                   
      Started:      Thu, 09 Sep 2021 22:35:36 +0300                                                                                                                                                                                           
    Ready:          True                                                                                                                                                                                                                      
    Restart Count:  0                                                                                                                                                                                                                         
    Limits:                                                                                                                                                                                                                                   
      cpu:     1                                                                                                                                                                                                                              
      memory:  1024M                                                                                                                                                                                                                          
    Requests:                                                                                                                                                                                                                                 
      cpu:        100m                                                                                                                                                                                                                        
      memory:     64Mi                                                                                                                                                                                                                        
    Environment:  <none>                                                                                                                                                                                                                      
    Mounts:                                                                                                                                                                                                                                   
      /var/run/secrets/kubernetes.io/serviceaccount from hubble-ui-token-vxvft (ro)                                                                                                                                                           
  backend:                                                                                                                                                                                                                                    
    Container ID:   docker://ac616771f8f8fddcf0ddfe0e5e6fc8ccb1d55db70d430ab10361e101d78883fd                                                                                                                                                 
    Image:          quay.io/cilium/hubble-ui-backend:v0.7.9@sha256:632c938ef6ff30e3a080c59b734afb1fb7493689275443faa1435f7141aabe76                                                                                                           
    Image ID:       docker-pullable://quay.io/cilium/hubble-ui-backend@sha256:632c938ef6ff30e3a080c59b734afb1fb7493689275443faa1435f7141aabe76                                                                                                
    Port:           8090/TCP                                                                                                                                                                                                                  
    Host Port:      0/TCP                                                                                                                                                                                                                     
    State:          Running                                                                                                                                                                                                                   
      Started:      Thu, 09 Sep 2021 22:35:38 +0300                                                                                                                                                                                           
    Ready:          True                                                                                                                                                                                                                      
    Restart Count:  0                                                                                                                                                                                                                         
    Limits:                                                                                                                                                                                                                                   
      cpu:     1                                                                                                                                                                                                                              
      memory:  1024M                                                                                                                                                                                                                          
    Requests:                                                                                                                                                                                                                                 
      cpu:     100m                                                                                                                                                                                                                           
      memory:  64Mi                                                                                                                                                                                                                           
    Environment:                                                                                                                                                                                                                              
      EVENTS_SERVER_PORT:  8090                                                                                                                                                                                                               
      FLOWS_API_ADDR:      hubble-relay:80                                                                                                                                                                                                    
    Mounts:                                                                                                                                                                                                                                   
      /var/run/secrets/kubernetes.io/serviceaccount from hubble-ui-token-vxvft (ro)                                                                                                                                                           
  proxy:                                                                                                                                                                                                                                      
    Container ID:  docker://75c0d5ffe6b5986d1b7236c1972f4e9ff0bbe4d3adf74828d58f19d56f88383e                                                                                                                                                  
    Image:         docker.io/envoyproxy/envoy:v1.18.2@sha256:e8b37c1d75787dd1e712ff389b0d37337dc8a174a63bed9c34ba73359dc67da7                                                                                                                 
    Image ID:      docker-pullable://envoyproxy/envoy@sha256:e8b37c1d75787dd1e712ff389b0d37337dc8a174a63bed9c34ba73359dc67da7                                                                                                                 
    Port:          8081/TCP                                                                                                                                                                                                                   
    Host Port:     0/TCP                                                                                                                                                                                                                      
    Command:                                                                                                                                                                                                                                  
      envoy                                                                                                                                                                                                                                   
    Args:                                                                                                                                                                                                                                     
      -c                                                                                                                                                                                                                                      
      /etc/envoy.yaml                                                                                                                                                                                                                         
      -l                                                                                                                                                                                                                                      
      info                                                                                                                                                                                                                                    
    State:          Running                                                                                                                                                                                                                   
      Started:      Thu, 09 Sep 2021 22:35:42 +0300
    Ready:          True                                                                                                                                                                                                                      
    Restart Count:  0                                                                                                                                                                                                                         
    Limits:                                                                                                                                                                                                                                   
      cpu:     1                                                                                                                                                                                                                              
      memory:  1024M                                                                                                                                                                                                                          
    Requests:                                                                                                                                                                                                                                 
      cpu:        100m                                                                                                                                                                                                                        
      memory:     64Mi                                                                                                                                                                                                                        
    Environment:  <none>                                                                                                                                                                                                                      
    Mounts:                                                                                                                                                                                                                                   
      /etc/envoy.yaml from hubble-ui-envoy-yaml (rw,path="envoy.yaml")                                                                                                                                                                        
      /var/run/secrets/kubernetes.io/serviceaccount from hubble-ui-token-vxvft (ro)                                                                                                                                                           
Conditions:                                                                                                                                                                                                                                   
  Type              Status                                                                                                                                                                                                                    
  Initialized       True                                                                                                                                                                                                                      
  Ready             True                                                                                                                                                                                                                      
  ContainersReady   True                                                                                                                                                                                                                      
  PodScheduled      True                                                                                                                                                                                                                      
Volumes:                                                                                                                                                                                                                                      
  hubble-ui-envoy-yaml:                                                                                                                                                                                                                       
    Type:      ConfigMap (a volume populated by a ConfigMap)                                                                                                                                                                                  
    Name:      hubble-ui-envoy                                                                                                                                                                                                                
    Optional:  false                                                                                                                                                                                                                          
  hubble-ui-token-vxvft:                                                                                                                                                                                                                      
    Type:        Secret (a volume populated by a Secret)                                                                                                                                                                                      
    SecretName:  hubble-ui-token-vxvft                                                                                                                                                                                                        
    Optional:    false                                                                                                                                                                                                                        
QoS Class:       Burstable                                                                                                                                                                                                                    
Node-Selectors:  <none>                                                                                                                                                                                                                       
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s                                                                                                                                                                    
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s                                                                                                                                                                  
Events:                                                                                                                                                                                                                                       
  Type    Reason     Age   From               Message                                                                                                                                                                                         
  ----    ------     ----  ----               -------                                                                                                                                                                                         
  Normal  Scheduled  22m   default-scheduler  Successfully assigned cilium/hubble-ui-577666f6b7-h89wt to ip-10-100-2-177.ec2.internal                                                                                                         
  Normal  Pulling    22m   kubelet            Pulling image "quay.io/cilium/hubble-ui:v0.7.9@sha256:e0e461c680ccd083ac24fe4f9e19e675422485f04d8720635ec41f2ba9e5562c"                                                                         
  Normal  Pulled     22m   kubelet            Successfully pulled image "quay.io/cilium/hubble-ui:v0.7.9@sha256:e0e461c680ccd083ac24fe4f9e19e675422485f04d8720635ec41f2ba9e5562c" in 1.79771277s                                              
  Normal  Created    22m   kubelet            Created container frontend                                                                                                                                                                      
  Normal  Started    22m   kubelet            Started container frontend                                                                                                                                                                      
  Normal  Pulling    22m   kubelet            Pulling image "quay.io/cilium/hubble-ui-backend:v0.7.9@sha256:632c938ef6ff30e3a080c59b734afb1fb7493689275443faa1435f7141aabe76"                                                                 
  Normal  Pulled     22m   kubelet            Successfully pulled image "quay.io/cilium/hubble-ui-backend:v0.7.9@sha256:632c938ef6ff30e3a080c59b734afb1fb7493689275443faa1435f7141aabe76" in 1.190350181s                                     
  Normal  Created    22m   kubelet            Created container backend                                                                                                                                                                       
  Normal  Started    22m   kubelet            Started container backend                                                                                                                                                                       
  Normal  Pulling    22m   kubelet            Pulling image "docker.io/envoyproxy/envoy:v1.18.2@sha256:e8b37c1d75787dd1e712ff389b0d37337dc8a174a63bed9c34ba73359dc67da7"                                                                      
  Normal  Pulled     22m   kubelet            Successfully pulled image "docker.io/envoyproxy/envoy:v1.18.2@sha256:e8b37c1d75787dd1e712ff389b0d37337dc8a174a63bed9c34ba73359dc67da7" in 3.242855047s                                          
  Normal  Created    22m   kubelet            Created container proxy                                                                                                                                                                         
  Normal  Started    22m   kubelet            Started container proxy

Helm values:

hubble:
  # -- Enable Hubble (true by default).
  enabled: true

  # -- Buffer size of the channel Hubble uses to receive monitor events. If this
  # value is not set, the queue size is set to the default monitor queue size.
  # eventQueueSize: ""

  # -- Number of recent flows for Hubble to cache. Defaults to 4095.
  # Possible values are:
  #   1, 3, 7, 15, 31, 63, 127, 255, 511, 1023,
  #   2047, 4095, 8191, 16383, 32767, 65535
  # eventBufferCapacity: "4095"

  # -- Hubble metrics configuration.
  # See https://docs.cilium.io/en/stable/configuration/metrics/#hubble-metrics
  # for more comprehensive documentation about Hubble metrics.
  metrics:
    # -- Configures the list of metrics to collect. If empty or null, metrics
    # are disabled.
    # Example:
    #
    #   enabled:
    #   - dns:query;ignoreAAAA
    #   - drop
    #   - tcp
    #   - flow
    #   - icmp
    #   - http
    #
    # You can specify the list of metrics from the helm CLI:
    #
    #   --set metrics.enabled="{dns:query;ignoreAAAA,drop,tcp,flow,icmp,http}"
    #
    enabled: ~
    # -- Configure the port the hubble metric server listens on.
    port: 9091
    serviceMonitor:
      # -- Create ServiceMonitor resources for Prometheus Operator.
      # This requires the prometheus CRDs to be available.
      # ref: https://github.com/prometheus-operator/prometheus-operator/blob/master/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml)
      enabled: true

  # -- Unix domain socket path to listen to when Hubble is enabled.
  socketPath: /var/run/cilium/hubble.sock

  # -- An additional address for Hubble to listen to.
  # Set this field ":4244" if you are enabling Hubble Relay, as it assumes that
  # Hubble is listening on port 4244.
  listenAddress: ":4244"

  # -- TLS configuration for Hubble
  tls:
    # -- Enable mutual TLS for listenAddress. Setting this value to false is
    # highly discouraged as the Hubble API provides access to potentially
    # sensitive network flow metadata and is exposed on the host network.
    enabled: true
    # -- Configure automatic TLS certificates generation.
    auto:
      # -- Auto-generate certificates.
      # When set to true, automatically generate a CA and certificates to
      # enable mTLS between Hubble server and Hubble Relay instances. If set to
      # false, the certs for Hubble server need to be provided by setting
      # appropriate values below.
      enabled: true
      # -- Set the method to auto-generate certificates. Supported values:
      # - helm:      This method uses Helm to generate all certificates.
      # - cronJob:   This method uses a Kubernetes CronJob to generate any
      #              certificates not provided by the user at installation
      #              time.
      method: cronJob
      # -- Generated certificates validity duration in days.
      certValidityDuration: 1095
      # -- Schedule for certificates regeneration (regardless of their expiration date).
      # Only used if method is "cronJob". If nil, then no recurring job will be created.
      # Instead, only the one-shot job is deployed to generate the certificates at
      # installation time.
      #
      # Defaults to midnight of the first day of every fourth month. For syntax, see
      # https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#schedule
      schedule: "0 0 1 */4 *"
    # -- base64 encoded PEM values for the Hubble CA certificate and private key.
    ca:
      cert: ""
      # -- The CA private key (optional). If it is provided, then it will be
      # used by hubble.tls.auto.method=cronJob to generate all other certificates.
      # Otherwise, an ephemeral CA is generated if hubble.tls.auto.enabled=true.
      key: ""
    # -- base64 encoded PEM values for the Hubble server certificate and private key
    server:
      cert: ""
      key: ""

  relay:
    # -- Enable Hubble Relay (requires hubble.enabled=true)
    enabled: true

    # -- Roll out Hubble Relay pods automatically when configmap is updated.
    rollOutPods: false

    # -- Hubble-relay container image.
    image:
      repository: quay.io/cilium/hubble-relay
      tag: v1.10.4
       # hubble-relay-digest
      digest: "sha256:be17169d2b68a974e9e27bc194e0c899dbec8caee9dd95011654b75d775d413d"
      useDigest: true
      pullPolicy: IfNotPresent

    # -- Specifies the resources for the hubble-relay pods
    resources: {}

    # -- Number of replicas run for the hubble-relay deployment.
    replicas: 1

    # -- Node labels for pod assignment
    # ref: https://kubernetes.io/docs/user-guide/node-selection/
    nodeSelector: {}

    # -- Annotations to be added to hubble-relay pods
    podAnnotations: {}

    # -- Labels to be added to hubble-relay pods
    podLabels: {}

    # -- Node tolerations for pod assignment on nodes with taints
    # ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
    #
    tolerations: []

    # -- hubble-relay update strategy
    updateStrategy:
      rollingUpdate:
        maxUnavailable: 1
      type: RollingUpdate

    # -- Host to listen to. Specify an empty string to bind to all the interfaces.
    listenHost: ""

    # -- Port to listen to.
    listenPort: "4245"

    # -- TLS configuration for Hubble Relay
    tls:
      # -- base64 encoded PEM values for the hubble-relay client certificate and private key
      # This keypair is presented to Hubble server instances for mTLS
      # authentication and is required when hubble.tls.enabled is true.
      # These values need to be set manually if hubble.tls.auto.enabled is false.
      client:
        cert: ""
        key: ""
      # -- base64 encoded PEM values for the hubble-relay server certificate and private key
      server:
        # When set to true, enable TLS on for Hubble Relay server
        # (ie: for clients connecting to the Hubble Relay API).
        enabled: false
        # These values need to be set manually if hubble.tls.auto.enabled is false.
        cert: ""
        key: ""

    # -- Dial timeout to connect to the local hubble instance to receive peer information (e.g. "30s").
    dialTimeout: ~

    # -- Backoff duration to retry connecting to the local hubble instance in case of failure (e.g. "30s").
    retryTimeout: ~

    # -- Max number of flows that can be buffered for sorting before being sent to the
    # client (per request) (e.g. 100).
    sortBufferLenMax: ~

    # -- When the per-request flows sort buffer is not full, a flow is drained every
    # time this timeout is reached (only affects requests in follow-mode) (e.g. "1s").
    sortBufferDrainTimeout: ~

    # -- Port to use for the k8s service backed by hubble-relay pods.
    # If not set, it is dynamically assigned to port 443 if TLS is enabled and to
    # port 80 if not.
    # servicePort: 80

  ui:
    # -- Whether to enable the Hubble UI.
    enabled: true

    # -- Roll out Hubble-ui pods automatically when configmap is updated.
    rollOutPods: false

    backend:
      # -- Hubble-ui backend image.
      image:
        repository: quay.io/cilium/hubble-ui-backend
        tag: v0.7.9@sha256:632c938ef6ff30e3a080c59b734afb1fb7493689275443faa1435f7141aabe76
        pullPolicy: IfNotPresent
      # [Example]
      # resources:
      #   limits:
      #     cpu: 1000m
      #     memory: 1024M
      #   requests:
      #     cpu: 100m
      #     memory: 64Mi
      # -- Resource requests and limits for the 'backend' container of the 'hubble-ui' deployment.
      resources:
        limits:
          cpu: 1000m
          memory: 1024M
        requests:
          cpu: 100m
          memory: 64Mi
    frontend:
      # -- Hubble-ui frontend image.
      image:
        repository: quay.io/cilium/hubble-ui
        tag: v0.7.9@sha256:e0e461c680ccd083ac24fe4f9e19e675422485f04d8720635ec41f2ba9e5562c
        pullPolicy: IfNotPresent
      # [Example]
      # resources:
      #   limits:
      #     cpu: 1000m
      #     memory: 1024M
      #   requests:
      #     cpu: 100m
      #     memory: 64Mi
      # -- Resource requests and limits for the 'frontend' container of the 'hubble-ui' deployment.
      resources:
        limits:
          cpu: 1000m
          memory: 1024M
        requests:
          cpu: 100m
          memory: 64Mi

    proxy:
      # -- Hubble-ui ingress proxy image.
      image:
        repository: docker.io/envoyproxy/envoy
        tag: v1.18.2@sha256:e8b37c1d75787dd1e712ff389b0d37337dc8a174a63bed9c34ba73359dc67da7
        pullPolicy: IfNotPresent
      # [Example]
      # resources:
      #   limits:
      #     cpu: 1000m
      #     memory: 1024M
      #   requests:
      #     cpu: 100m
      #     memory: 64Mi
      # -- Resource requests and limits for the 'proxy' container of the 'hubble-ui' deployment.
      resources:
        limits:
          cpu: 1000m
          memory: 1024M
        requests:
          cpu: 100m
          memory: 64Mi

    # -- The number of replicas of Hubble UI to deploy.
    replicas: 1

    # -- Annotations to be added to hubble-ui pods
    podAnnotations: {}

    # -- Labels to be added to hubble-ui pods
    podLabels: {}

    # -- Node labels for pod assignment
    # ref: https://kubernetes.io/docs/user-guide/node-selection/
    nodeSelector: {}

    # -- Node tolerations for pod assignment on nodes with taints
    # ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
    #
    tolerations: []

    # -- hubble-ui update strategy.
    updateStrategy:
      rollingUpdate:
        maxUnavailable: 1
      type: RollingUpdate

    securityContext:
      # -- Whether to set the security context on the Hubble UI pods.
      enabled: true

    # -- hubble-ui ingress configuration.
    ingress:
      enabled: false
      annotations: {}
        # kubernetes.io/ingress.class: nginx
        # kubernetes.io/tls-acme: "true"
      hosts:
        - chart-example.local
      tls: []
      #  - secretName: chart-example-tls
      #    hosts:
      #      - chart-example.local

Hubble UI's envoy config needs to be updated

Is there an existing issue for this?

  • I have searched the existing issues

What happened?

Hey, folks.
Super stoked to be taking a stab at having Cilium in our cluster.

Running into an issue when Hubble UI is enabled: the Hubble UI proxy container goes into CrashLoopBackOff. The log is given in the Relevant log output section below.

Cilium is deployed with the Cilium helm chart version 1.9.12.

Came across the Hubble UI repo and found that the envoy.yaml config has been updated. Updating the hubble-ui-envoy ConfigMap to use the updated Envoy config from the link resolves the issue.

These are the details I can give to you, hope they help.

Cilium Version

Client: 1.9.12 d12a812 2022-01-18T15:27:56-08:00 go version go1.15.15 linux/amd64
Daemon: 1.9.12 d12a812 2022-01-18T15:27:56-08:00 go version go1.15.15 linux/amd64

Kernel Version

Linux ip-10-0-3-125.ap-southeast-1.compute.internal 5.4.172-90.336.amzn2.x86_64 #1 SMP Wed Jan 19 23:08:01 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

Kubernetes Version

Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:31:32Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.11-eks-f17b81", GitCommit:"f17b810c9e5a82200d28b6210b458497ddfcf31b", GitTreeState:"clean", BuildDate:"2021-10-15T21:46:21Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}

Sysdump

No response

Relevant log output

[2022-01-30 18:28:41.733][33][warning][misc] [source/common/protobuf/utility.cc:312] Configuration does not parse cleanly as v3. v2 configuration is deprecated and will be removed from Envoy at the start of Q1 2021: Unknown field in: {"static_resources":{"clusters":[{"connect_timeout":"0.25s","name":"frontend","lb_policy":"round_robin","type":"strict_dns","hosts":[{"socket_address":{"port_value":8080,"address":"127.0.0.1"}}]},{"type":"logical_dns","hosts":[{"socket_address":{"address":"127.0.0.1","port_value":8090}}],"connect_timeout":"0.25s","lb_policy":"round_robin","http2_protocol_options":{},"name":"backend"}],"listeners":[{"address":{"socket_address":{"port_value":8081,"address":"0.0.0.0"}},"filter_chains":[{"filters":[{"config":{"codec_type":"auto","http_filters":[{"name":"envoy.filters.http.grpc_web"},{"name":"envoy.filters.http.cors"},{"name":"envoy.filters.http.router"}],"route_config":{"name":"local_route","virtual_hosts":[{"cors":{"allow_methods":"GET, PUT, DELETE, POST, OPTIONS","expose_headers":"grpc-status,grpc-message","max_age":"1728000","allow_origin_string_match":[{"prefix":"*"}],"allow_headers":"keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout"},"routes":[{"match":{"prefix":"/api/"},"route":{"max_grpc_timeout":"0s","cluster":"backend","prefix_rewrite":"/"}},{"route":{"cluster":"frontend"},"match":{"prefix":"/"}}],"domains":["*"],"name":"local_service"}]},"stat_prefix":"ingress_http"},"name":"envoy.filters.network.http_connection_manager"}]}],"name":"listener_hubble_ui"}]}}
[2022-01-30 18:28:41.733][33][debug][runtime] [source/common/runtime/runtime_features.cc:20] Unable to use runtime singleton for feature envoy.test_only.broken_in_production.enable_deprecated_v2_api
[2022-01-30 18:28:41.734][33][critical][main] [source/server/server.cc:113] error initializing configuration '/etc/envoy.yaml': The v2 xDS major version is deprecated and disabled by default. Support for v2 will be removed from Envoy at the start of Q1 2021. You may make use of v2 in Q4 2020 by following the advice in https://www.envoyproxy.io/docs/envoy/latest/faq/api/transition. (Unknown field in: {"static_resources":{"clusters":[{"connect_timeout":"0.25s","name":"frontend","lb_policy":"round_robin","type":"strict_dns","hosts":[{"socket_address":{"port_value":8080,"address":"127.0.0.1"}}]},{"type":"logical_dns","hosts":[{"socket_address":{"address":"127.0.0.1","port_value":8090}}],"connect_timeout":"0.25s","lb_policy":"round_robin","http2_protocol_options":{},"name":"backend"}],"listeners":[{"address":{"socket_address":{"port_value":8081,"address":"0.0.0.0"}},"filter_chains":[{"filters":[{"config":{"codec_type":"auto","http_filters":[{"name":"envoy.filters.http.grpc_web"},{"name":"envoy.filters.http.cors"},{"name":"envoy.filters.http.router"}],"route_config":{"name":"local_route","virtual_hosts":[{"cors":{"allow_methods":"GET, PUT, DELETE, POST, OPTIONS","expose_headers":"grpc-status,grpc-message","max_age":"1728000","allow_origin_string_match":[{"prefix":"*"}],"allow_headers":"keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout"},"routes":[{"match":{"prefix":"/api/"},"route":{"max_grpc_timeout":"0s","cluster":"backend","prefix_rewrite":"/"}},{"route":{"cluster":"frontend"},"match":{"prefix":"/"}}],"domains":["*"],"name":"local_service"}]},"stat_prefix":"ingress_http"},"name":"envoy.filters.network.http_connection_manager"}]}],"name":"listener_hubble_ui"}]}})
[2022-01-30 18:28:41.734][33][info][main] [source/server/server.cc:815] exiting
[2022-01-30 18:28:41.734][33][debug][main] [source/common/access_log/access_log_manager_impl.cc:21] destroyed access loggers
[2022-01-30 18:28:41.734][33][debug][main] [source/common/event/dispatcher_impl.cc:80] destroying dispatcher main_thread
[2022-01-30 18:28:41.734][33][debug][init] [source/common/init/watcher_impl.cc:31] init manager Server destroyed
The v2 xDS major version is deprecated and disabled by default. Support for v2 will be removed from Envoy at the start of Q1 2021. You may make use of v2 in Q4 2020 by following the advice in https://www.envoyproxy.io/docs/envoy/latest/faq/api/transition. (Unknown field in: {"static_resources":{"clusters":[{"connect_timeout":"0.25s","name":"frontend","lb_policy":"round_robin","type":"strict_dns","hosts":[{"socket_address":{"port_value":8080,"address":"127.0.0.1"}}]},{"type":"logical_dns","hosts":[{"socket_address":{"address":"127.0.0.1","port_value":8090}}],"connect_timeout":"0.25s","lb_policy":"round_robin","http2_protocol_options":{},"name":"backend"}],"listeners":[{"address":{"socket_address":{"port_value":8081,"address":"0.0.0.0"}},"filter_chains":[{"filters":[{"config":{"codec_type":"auto","http_filters":[{"name":"envoy.filters.http.grpc_web"},{"name":"envoy.filters.http.cors"},{"name":"envoy.filters.http.router"}],"route_config":{"name":"local_route","virtual_hosts":[{"cors":{"allow_methods":"GET, PUT, DELETE, POST, OPTIONS","expose_headers":"grpc-status,grpc-message","max_age":"1728000","allow_origin_string_match":[{"prefix":"*"}],"allow_headers":"keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout"},"routes":[{"match":{"prefix":"/api/"},"route":{"max_grpc_timeout":"0s","cluster":"backend","prefix_rewrite":"/"}},{"route":{"cluster":"frontend"},"match":{"prefix":"/"}}],"domains":["*"],"name":"local_service"}]},"stat_prefix":"ingress_http"},"name":"envoy.filters.network.http_connection_manager"}]}],"name":"listener_hubble_ui"}]}})

Anything else?

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct

Failed to receive data from backend. Please make sure that your deployment is up and refresh this page.

I got this error while loading the Hubble UI page.
Steps taken:

  • Deploy cilium on GCP GKE with following settings:
cilium:
  resources:

  operator:
    enabled: true
    replicas: 1

  gke:
    enabled: true
  nodeinit:
    enabled: true
    reconfigureKubelet: true
    removeCbrBridge: true
    resources:
      requests:
        cpu: 10m
        memory: 50Mi
  hubble:
    relay:
      enabled: true
    enabled: true
    ui:
      enabled: true

  cni:
    binPath: /home/kubernetes/bin
  ipam:
    mode: kubernetes
  nativeRoutingCIDR: 10.60.0.0/14
  • port forward the hubble-ui k8s service to localhost 8080
  • load the page and see the error in the title (screenshot: "Screen Shot 2021-10-12 at 7 24 23 PM")

I checked the hubble-ui pod and every container looks fine.
Please advise on how I can resolve this error.
Thanks
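For reference, the port-forward step above is typically something along these lines; the namespace and Service port are assumptions based on a default install, where the hubble-ui Service listens on port 80:

# Assumed defaults: forward local port 8080 to port 80 of the hubble-ui Service.
kubectl port-forward -n kube-system svc/hubble-ui 8080:80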

hubble-ui: should display a meaningful error if it's not able to retrieve flows from hubble

We have been seeing several users report "flows not showing up" in hubble-ui. There could be different reasons why flows are not showing up:

  • no flows in the selected namespace
  • namespace doesn't exist anymore
  • there are flows, but they don't have labels
  • there are only flows to kube-dns, and they are filtered out by default (though user can toggle the filter)
  • there are no flows in cluster at all (not in any namespace)
  • could not connect to hubble-relay
  • connected to hubble-relay, but there were errors retrieving flows from hubble-relay
  • connection issues between browser and grpc api

and potentially more. We need to try our best to show the current status and errors.

Signed Images

A Software Bill of Materials (SBOM) provides insight into the components involved, a bit like a nested ingredient list, and signed images enable the user to verify that the image actually contains what it claims to.

I've noticed that other images within the Cilium project are signed with cosign, and I believe it would provide good value from a security perspective to be able to validate the images; however, I couldn't find such signatures for the Hubble images.

See here for more information:
https://docs.cilium.io/en/stable/configuration/verify-image-signatures/#verify-signed-container-images
cilium/cilium#21918
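For context, the linked Cilium docs verify image signatures with cosign. A rough sketch of what an equivalent check for a Hubble UI image could look like is below; the image reference is only an example, and the exact invocation depends on the cosign version and on how (or whether) the image is actually signed:

# Sketch only: keyless verification with cosign 1.x; cosign 2.x additionally
# requires the expected signing identity to be passed explicitly.
COSIGN_EXPERIMENTAL=1 cosign verify quay.io/cilium/hubble-ui:v0.11.0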

Connections made to the Kubernetes API show up in hubble-ui as 'world'

Sorry if this has been answered elsewhere, or maybe if it actually belongs to the hubble-ui project.

As mentioned in the title, when running on EKS in AWS, connections to the Kubernetes API show up in hubble-ui as 'world'.

Is there any way to change this behaviour to make it easier to identify this as Kubernetes API traffic?

Frontend broken in 0.9.0, missing nginx config

Hey guys,

Unless I've missed something, v0.9.0 is completely broken, because this line:

COPY --from=stage1 /app/server/nginx-hubble-ui-frontend.conf /etc/nginx/conf.d/default.conf

has been removed from the Dockerfile, so all I see when running UI 0.9.0 is the default nginx welcome screen.

The change seems to have been made by 706f630, but it seems so unlikely that this breakage would have gone unnoticed since 26 Apr that I'm wondering whether I've missed some docs or an update regarding deployment?

Thanks!
D
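One way to confirm the report is to inspect the nginx config actually baked into the published image. The command below is a sketch that reuses the image tag from this issue and the path from the quoted Dockerfile line:

# If this prints the stock nginx welcome-page config instead of the Hubble UI
# frontend config, the COPY line quoted above is indeed missing from the image.
docker run --rm --entrypoint cat quay.io/cilium/hubble-ui:v0.9.0 /etc/nginx/conf.d/default.conf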

(Re-)Name proposal for Hubble UI: Webb

Some of us are fans of Cilium as a technology at my employer's. I am personally also a huge fan of the names of the systems and tools in the Cilium suite: Cilium, Hubble, pwru are all so clever. Hubble UI is probably one of those where I think we had a missed opportunity.

Webb, I think is a better name. But in all fairness, we did not know whether Webb would even work back when Hubble UI was conceived. It's still not too late and I propose considering renaming Hubble UI to Webb.

PS: This is not a joke, I am serious

PS2: And as a nod to Java applets from back in the days, we could rename to JWebb (*wink* *wink*)

ui: Export svg as file

It would be really nice to be able to export the generated SVG to clipboard or file. 🦄

backend client : http2: frame too large

kubectl -n kube-system exec -ti hubble-ui-55786f8b5b-62zzk -c backend -- sh -c "MODE=client backend"

level=info msg="initialized with TLS disabled\n" subsys=config

level=info msg="connecting to server: 0.0.0.0:8090\n" subsys=ui-backend

level=info msg="grpc client successfully created\n" subsys=ui-client

level=error msg="failed to GetEvents: rpc error: code = Unavailable desc = connection error: desc = \"error reading server preface: http2: frame too large\"\n" subsys=ui-client

command terminated with exit code 1

kubectl get po -n kube-system

NAME                               READY   STATUS    RESTARTS   AGE

cilium-965dm                       1/1     Running   1          29d

cilium-operator-8657b78455-k8pbr   1/1     Running   154        29d

cilium-operator-8657b78455-nfjz5   1/1     Running   165        29d

cilium-psxnz                       1/1     Running   2          29d

cilium-z2k5c                       1/1     Running   4          29d

hubble-relay-7867f68447-6gbcx      1/1     Running   1          30d

hubble-ui-55786f8b5b-62zzk         2/2     Running   0          107m

kubectl get svc -n kube-system

NAME                                                     TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)                        AGE

hubble-peer                                              NodePort       10.103.152.18                443:30363/TCP                  33d

hubble-relay                                             NodePort       10.106.153.164               80:31096/TCP                   33d

hubble-ui                                                NodePort       10.107.162.168               80:32101/TCP,8090:30604/TCP    33d

BUG: Hubble-UI backend container not responding to SIGTERM

I am running cilium on an AWS EKS cluster using helm (chart version 1.12.3). As part of my CI, I have terraform spinning up a cluster to run tests and afterwards, the cluster is destroyed. The destruction bit is failing because the cilium namespace is stuck in the Terminating state (now I know about finalizers, but what we need is a graceful cleanup of resources in CI, rather than a failing pipeline which needs manual intervention). The namespace is stuck because the hubble-ui pod is stuck in Terminating as well, because the backend container returns a non-zero exit code when Kubernetes sends a SIGTERM signal.

To recreate the scenario, log in to a shell on the backend container and run kill -TERM 1. It does not respond. I believe that Kubernetes is forced to kill the container, which leads the backend process to exit with a non-zero code, resulting in the pod and namespace getting stuck in the terminate state. You can also test this using helm delete ... after installing using the helm package.
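For reference, here is a minimal sketch (not the actual hubble-ui backend code, just an illustration of the pattern) of how a Go HTTP server running as PID 1 can catch SIGTERM and shut down cleanly with exit code 0. Note that a process running as PID 1 in a container only receives SIGTERM if it explicitly installs a handler; the kernel does not apply default signal dispositions to PID 1, which matches the behaviour described above.

package main

import (
	"context"
	"log"
	"net/http"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	// Cancel this context when SIGTERM or SIGINT is received.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, syscall.SIGINT)
	defer stop()

	srv := &http.Server{Addr: ":8090"} // port chosen only for illustration

	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatalf("server error: %v", err)
		}
	}()

	<-ctx.Done() // block until a termination signal arrives

	// Give in-flight requests a few seconds to finish, then return (exit code 0).
	shutdownCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	if err := srv.Shutdown(shutdownCtx); err != nil {
		log.Printf("graceful shutdown failed: %v", err)
	}
}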

hubble ui: add traffic direction as column

Cilium Feature Proposal

Is your feature request related to a problem?

No, but it would simplify network troubleshooting within the Kubernetes cluster.

Describe the feature you'd like

Currently it is not possible to choose "traffic direction" as a column in the hubble UI. When I click on a line, I can see the traffic direction, but troubleshooting would be faster if I could see this information without selecting a line.

The hubble-ui pod of cilium 1.12.0 is not up

The hubble-ui pod needs IPv6 support to start, but my servers all have IPv6 turned off. Is there a parameter to make the hubble-ui pod start with IPv4 only?

The error log is as follows:

kubectl -n kube-system get pod | grep ui
hubble-ui-5fccdb86fc-g6nhp        1/2     CrashLoopBackOff   12 (75s ago)   40m


kubectl -n kube-system logs  hubble-ui-5fccdb86fc-g6nhp 
Defaulted container "frontend" out of: frontend, backend
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: ipv6 not available
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2022/08/17 09:10:09 [emerg] 1#1: socket() [::]:8081 failed (97: Address family not supported by protocol)
nginx: [emerg] socket() [::]:8081 failed (97: Address family not supported by protocol)
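One workaround on hosts with IPv6 disabled (a sketch, not an official chart option) is to override the nginx configuration that the frontend container mounts (check which ConfigMap your chart version uses, e.g. kubectl -n kube-system get cm | grep hubble-ui) so that the server block only listens on IPv4:

server {
    listen       8081;
    # listen  [::]:8081;   # drop or comment out the IPv6 listener when the host has IPv6 disabled

    # ... keep the rest of the shipped server block (root, proxy settings, etc.) unchanged
}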

Missing scrollbar on homepage

On clusters with lots of namespaces, the list of namespaces passes the edge of the screen. It then becomes a requirement to use the search field to find the right namespace.

Exposing hubble-ui behind Contour/Envoy ingress leads to unsupported HTTP/2 protocol preface requests

Hi.

Using helm chart 1.12.7

I've exposed Hubble-UI through an ingress managed by Contour. I've exposed the path /api/ through a service configured with h2c protocol since these requests are HTTP/2 based.

Unfortunately, this does not work since, as far as I understand, configuring Contour to use gRPC requests leads its reverse proxy, Envoy, to check whether the upstream server (hubble-ui in this case) speaks HTTP/2, using requests like these:

100.83.80.69 - - [24/Feb/2023:16:45:17 +0000] "PRI * HTTP/2.0" 400 157 "-" "-" "-"
100.83.80.69 - - [24/Feb/2023:16:45:19 +0000] "PRI * HTTP/2.0" 400 157 "-" "-" "-"
100.83.80.69 - - [24/Feb/2023:16:45:25 +0000] "PRI * HTTP/2.0" 400 157 "-" "-" "-"
...

Hubble UI does not accept those requests, so Envoy never starts forwarding requests from /api/ to hubble-ui. A solution might be to stop Envoy from sending those "probes", but it looks like that is not yet implemented. On the other hand, it looks like any HTTP/2 server should implement a response to the PRI connection preface.

Here is the ingress I'm using :

---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hubble-ui
  namespace: kube-system
spec:
  rules:
  - host: hubble.somewhere.com
    http:
      paths:
      - backend:
          service:
            name: hubble-ui
            port:
              name: http
        path: /
        pathType: Prefix
  - host: hubble.somewhere.com
    http:
      paths:
      - backend:
          service:
            name: hubble-ui-grpc
            port:
              name: http
        path: /api/
        pathType: Prefix

along with those two services:

---
apiVersion: v1
kind: Service
metadata:
  name: hubble-ui
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8081
  selector:
    k8s-app: hubble-ui

---
apiVersion: v1
kind: Service
metadata:
  annotations:
    projectcontour.io/upstream-protocol.h2c: http
  name: hubble-ui-grpc
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8081
  selector:
    k8s-app: hubble-ui

Add ability to specify custom icons

It would be great if we had the ability to specify custom icons in hubble-ui for services like GitLab, VictoriaMetrics, etc. For now, only a few hardcoded apps have custom icons.

Hubble UI: show better labels

Proposal / RFE

Is your feature request related to a problem?
When looking at hubble-ui, a lot of the boxes in the same namespace will have the same name. This looks confusing: box X talks to box X TCP port 1234, which talks to box X TCP port 5678, etc. When you click on a box, you get more information from the labels, but it isn't easy to navigate at first glance.

Describe the solution you'd like
It would be nice to add to the visual dropdown an option to display some common labels, like name, instance, release or component, maybe leave it on as a default.

Verify Service Map for connectivity check

I'm trying hubble-ui 0.7.5 with connectivity-check, and here is how it looks on a service map:

[Screenshot: service map of the connectivity-check deployments]

I'm looking for an expert opinion to verify whether this looks correct, and if not, what's missing.

Unable to inspect L7 info with hubble by configuring io.cilium.proxy-visibility

Hello folks, I'm trying to set up a simple HTTP info inspection environment with Cilium and Hubble, using minikube for a single-node environment.

My Kubernetes info:

  • Linux version: CentOS 7 (inside a VMware virtual machine) with Linux kernel 5.4 (5.4.210-1.el7.elrepo.x86_64/kernel-lt)
  • Kubernetes version: v1.16.2
  • kubectl version: v1.16.2
  • minikube version: v1.26.1

creating kubernetes cluster with minikube:

minikube start \
--vm-driver=none \
--kubernetes-version=v1.16.2 \
--network-plugin=cni \
--cni=false \
--image-mirror-country='cn'

deploying cilium with helm:

helm install cilium cilium/cilium \
--version 1.12.0 \
--namespace=kube-system \
--set endpointStatus.enabled=true \
--set endpointStatus.status="policy" \
--set operator.replicas=1 \
--set hubble.relay.enabled=true \
--set hubble.ui.enabled=true

and cluster status seems fine

[root@control-plane ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.59.131:8443
KubeDNS is running at https://192.168.59.131:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

[root@control-plane ~]# kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
cilium-f6std                            1/1     Running   0          13m
cilium-operator-58b84657f-ljqq7         1/1     Running   0          13m
coredns-67c766df46-t55vg                1/1     Running   0          13m
etcd-control-plane                      1/1     Running   0          12m
hubble-relay-85bfc97ddb-vvlgv           1/1     Running   0          13m
hubble-ui-5fb54dc4db-d4676              2/2     Running   0          13m
kube-apiserver-control-plane            1/1     Running   0          12m
kube-controller-manager-control-plane   1/1     Running   0          12m
kube-proxy-jzjvh                        1/1     Running   0          13m
kube-scheduler-control-plane            1/1     Running   0          12m
storage-provisioner                     1/1     Running   0          13m

[root@control-plane ~]# cilium status
    /¯¯\
 /¯¯\__/¯¯\    Cilium:         OK
 \__/¯¯\__/    Operator:       OK
 /¯¯\__/¯¯\    Hubble:         OK
 \__/¯¯\__/    ClusterMesh:    disabled
    \__/

DaemonSet         cilium             Desired: 1, Ready: 1/1, Available: 1/1
Deployment        cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
Deployment        hubble-relay       Desired: 1, Ready: 1/1, Available: 1/1
Deployment        hubble-ui          Desired: 1, Ready: 1/1, Available: 1/1
Containers:       hubble-relay       Running: 1
                  hubble-ui          Running: 1
                  cilium-operator    Running: 1
                  cilium             Running: 1
Cluster Pods:     4/4 managed by Cilium
Image versions    cilium             quay.io/cilium/cilium:v1.12.0@sha256:079baa4fa1b9fe638f96084f4e0297c84dd4fb215d29d2321dcbe54273f63ade: 1
                  hubble-relay       quay.io/cilium/hubble-relay:v1.12.0@sha256:ca8033ea8a3112d838f958862fa76c8d895e3c8d0f5590de849b91745af5ac4d: 1
                  hubble-ui          quay.io/cilium/hubble-ui:v0.9.0@sha256:0ef04e9a29212925da6bdfd0ba5b581765e41a01f1cc30563cef9b30b457fea0: 1
                  hubble-ui          quay.io/cilium/hubble-ui-backend:v0.9.0@sha256:000df6b76719f607a9edefb9af94dfd1811a6f1b6a8a9c537cba90bf12df474b: 1
                  cilium-operator    quay.io/cilium/operator-generic:v1.12.0@sha256:bb2a42eda766e5d4a87ee8a5433f089db81b72dd04acf6b59fcbb445a95f9410: 1

For testing, I create an nginx deployment and expose it. I can successfully access the website:

[root@control-plane ~]# kubectl get deployment
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           15m
[root@control-plane ~]# kubectl get service
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        17m
nginx        NodePort    10.109.79.91   <none>        80:30219/TCP   15m
[root@control-plane ~]# minikube service nginx --url
http://192.168.59.131:30219

# (In my Windows host powershell)
    ~   09:22:53  ﮫ 0ms⠀ curl http://192.168.59.131:32233/



StatusCode        : 200
StatusDescription : OK
Content           : ...

I can observe L3/L4 info with hubble-ui

cilium hubble port-forward &
cilium hubble ui &

image

According to [Layer 7 Protocol Visibility](https://docs.cilium.io/en/stable/policy/visibility/), I add the annotations:

[root@control-plane ~]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-86c57db685-fxv2v   1/1     Running   0          74s
[root@control-plane ~]# kubectl annotate pod nginx-86c57db685-fxv2v io.cilium.proxy-visibility="<Ingress/80/TCP/HTTP>,<Egress/80/TCP/HTTP>"
pod/nginx-86c57db685-fxv2v annotated
[root@control-plane ~]# kubectl get cep
NAME                     ENDPOINT ID   IDENTITY ID   INGRESS ENFORCEMENT   EGRESS ENFORCEMENT   VISIBILITY POLICY   ENDPOINT STATE   IPV4        IPV6
nginx-86c57db685-fxv2v   345           10688         false                 false                                    ready            10.0.0.38
[root@control-plane ~]# kubectl annotate pod nginx-86c57db685-fxv2v io.cilium.proxy-visibility="<Ingress/80/TCP/HTTP>,<Egress/80/TCP/HTTP>"
error: --overwrite is false but found the following declared annotation(s): 'io.cilium.proxy-visibility' already has a value (<Ingress/80/TCP/HTTP>,<Egress/80/TCP/HTTP>)
[root@control-plane ~]# kubectl get cep
NAME                     ENDPOINT ID   IDENTITY ID   INGRESS ENFORCEMENT   EGRESS ENFORCEMENT   VISIBILITY POLICY   ENDPOINT STATE   IPV4        IPV6
nginx-86c57db685-fxv2v   345           10688         false                 false                OK                  ready            10.0.0.38
[root@control-plane ~]#

After that, I can no longer access my nginx service or inspect L7 info.
image

I'm stuck here; any suggestions on how to debug this?

CVE-2023-4863 - libwebp vuln in hubble-ui

Using a vulnerability scanner, hubble-ui is being flagged with CVE-2023-4863. The CVE is sourced to the libwebp library provided by Alpine. This CVE is on the US DHS CISA Known Exploited Vulnerabilities (KEV) list. This issue is to request an incremental update to Hubble providing a new build that includes the Alpine patch. It appears this patch is included in the latest nginx Alpine base image that hubble-ui is derived from. We need an ETA for when this may be pulled in for the next incremental or major release of Hubble. Thanks...
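Assuming a scanner such as Trivy is available, a quick spot check against the image tag you actually deploy (the tag below is just an example taken from elsewhere on this page) shows whether the shipped base image still reports the CVE:

# Hypothetical spot check; substitute the hubble-ui tag you deploy.
trivy image quay.io/cilium/hubble-ui:v0.9.0 | grep -i CVE-2023-4863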

nhooyr.io/websocket - PRISMA-2021-0118

https://security.snyk.io/vuln/SNYK-GOLANG-NHOOYRIOWEBSOCKET-1244972

https://github.com/cilium/hubble-ui/blob/master/backend/go.mod#L62

Fixed in v1.8.7

websocket package versions before v1.8.7 are vulnerable to Denial of Service (DoS). A double-channel close panic was possible if a peer sent back multiple pongs for every ping. If the second ping arrived before the ping goroutine deleted its channel from the map, the channel would be closed twice and so panic would ensue.
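A minimal sketch of the corresponding dependency bump in the backend module, assuming no code changes are needed between the currently pinned version and v1.8.7:

cd backend
go get nhooyr.io/websocket@v1.8.7   # pull in the patched release
go mod tidy                         # sync go.mod and go.sum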

ui: Exclude elements from the service map

Hi,

When trying to view a service map for a namespace that has Prometheus scraping enabled, the view is very cluttered and unusable since all services send traffic to Prometheus.

Example:
image

It would be great to have a way to exclude specific services, such as Prometheus, from the map.

Resize Flow Columns & sort by newest/oldest/name by clicking column name

  1. Ability to horizontally resize the columns shown in the flow table, with the text wrapping/hiding as necessary.
    Currently, if you have a lot of the column items enabled, you can't fully see all of a line's details.

  2. Ability to sort by oldest/newest or alphabetically when clicking on a column name.

Both of these items are basic functionality in other similar software (OSS or paid).

UI should prefer left2right layout

I have a >FullHD notebook screen and would easily be able to see simple graphs all at once, IF the layout were better adjusted to my aspect ratio.

As I guess most people have horizontal screen layouts, a sensible default for the layout algorithm would be left-to-right instead of top-to-bottom, as it seems to be now.

example image

Also, the fact that I can't completely collapse the bottom pane doesn't help.

Add legend for mutual auth

Related to #568

We should have an explicit legend to show that the padlock represents mutual authentication (not to be interpreted as mTLS)

CFP: Hubble UI should have a filter per IP Family

Cilium Feature Proposal

Is your feature request related to a problem?

I'm trying to debug IPv6 traffic using Hubble UI, but I'm not able to filter by IP family (filtering on just a specific address is not enough).

Describe the feature you'd like

The feature should allow selecting ANY namespace and filtering by IP family, e.g. IPv4 or IPv6.

Add visibility for "Audit" type verdicts

User Story

As a visitor of the hubble-ui, I expect to see verdicts of the type "Audit" when running Cilium in Audit Mode, so I can validate my policies.

Background

When I first experimented with Cilium, I didn't want to harm any traffic/usability and enjoyed using hubble-ui to get a sense of how traffic flows. However, once I established rules, I couldn't see any difference in the UI when adding policies, although the audit verdicts are visible within the hubble CLI (hubble observe --verdict AUDIT -f).


can not get the data

When I use microk8s to install Cilium and Hubble, I cannot get any data in the hubble-ui. The pod error logs are as follows:
image
the hubble-ui as follows:
image
I also found that hubble-ui apparently cannot resolve the service name of hubble-relay.
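As a first debugging step (a sketch, assuming the chart wires the relay address into the backend via an environment variable; the exact variable name depends on the chart version), dump the backend's environment and confirm the Service it points at actually exists:

# adjust the namespace if your install differs
kubectl -n kube-system exec deploy/hubble-ui -c backend -- sh -c "env | sort"
kubectl -n kube-system get svc hubble-relay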

Invalid nginx config in 0.9.0

I just upgraded hubble ui using the helm chart from 0.8.5 to 0.9.0, and the frontend serves the default "Welcome to nginx" page instead of the hubble app.

Indeed, on inspection, the file at /etc/nginx/conf.d/default is the default nginx file.

I can see a similar issue was closed as resolved, but this is still ongoing in my case. I'm using the latest helm chart, 1.11.6.

Demo instance

I just found out about this beautiful project and I would love to fiddle a bit with it before spinning up Cilium and Hubble.
A public demo instance that allows interested users to check out the UI would be a great addition to Hubble.

[FEATURE] Visualize node probes as directed graphs

Following the discussion started here https://cilium.slack.com/archives/CQRL1EPAA/p1586200540018400 I'd like to propose adding a feature to hubble to query the probe endpoint and visualize the health status information, probably as a directed network graph.

Any ideas about visualizations or open source tools that could be used to add this feature are much appreciated. What I have known and used in the past are Neovis.js (https://github.com/neo4j-contrib/neovis.js/), which is one of the recommended tools for Neo4j, and Cytoscape (https://github.com/cytoscape/cytoscape.js).
cc @luanguimaraesla

Hubble-ui hangs with large number of Flows / Second

We are adopting cilium/hubble where I work (Goodbye forever, weavenet) and everything is working great except getting the dependency map to render in hubble-ui. We have ~3k pods in the namespace, and around 400k flows/second. Using the hubble CLI, I'm able to get flow information from the relay with no issues, but with the UI, it fails after trying to render the map. It was requested in the slack channel that I open an issue for this.

feat: Health endpoints on both frontend/nginx and backend

I'd appreciate it if Hubble provided health endpoints which could be used as a readinessProbe and livenessProbe inside the Cilium helm chart.

As this is a common pattern for all Kubernetes pods, I think it doesn't require additional use cases.

For the nginx / frontend, you could extend the nginx.conf like this:

            location /healthz {
                return 200 'OK';
                add_header Content-Type text/plain;
            }

And for the backend, a dedicated handler would be nice, at least something like:

diff --git a/backend/main.go b/backend/main.go
index 23a4af84..c5935e8b 100644
--- a/backend/main.go
+++ b/backend/main.go
@@ -50,6 +50,12 @@ func runServer(cfg *config.Config) {
        )

        handler := http.NewServeMux()
+
+       handler.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
+               w.WriteHeader(http.StatusOK)
+               fmt.Fprintf(w, "Server is healthy")
+       })
+
        handler.HandleFunc("/api/", func(resp http.ResponseWriter, req *http.Request) {
                // NOTE: GRPC server handles requests with URL like "ui.UI/functionName"
                req.URL.Path = req.URL.Path[len("/api/"):]

But it would be better to provide two different probes (see the probe wiring sketch after this list):

  • is the app ready?
  • is it alive?
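As a rough sketch of how such endpoints could then be wired into the chart's Deployment (the /healthz path matches the snippets above; the backend port and whether one endpoint or two get exposed are assumptions):

# Hypothetical probe wiring for the hubble-ui backend container.
livenessProbe:
  httpGet:
    path: /healthz
    port: 8090
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /healthz
    port: 8090
  periodSeconds: 5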

Confusing names for source and destination security identities in Hubble UI

In Hubble UI, the columns for the source and destination security identities are currently called Source Service and Destination Service:

SrcService = 'Source Service',
DstPod = 'Destination Pod',
DstIp = 'Destination IP',
DstService = 'Destination Service',

That's a bit confusing because:

  • They seem to be security identities (e.g., unmanaged) and not e.g. VIPs.
  • There is no such thing as a service identity. Identities are assigned to pods, nodes, and even CIDRs, but not VIPs since we enforce policies on the backend pod.

Is there a reason for this naming?

Hubble-ui excessive dns info

Hi

I'm experiencing some weird behaviour in hubble-ui where I see a lot of UDP calls from 10.43.0.10 (core-dns).

Overview:

A lot

Zoom view:

image

Some logs:

image

Check cilium monitor

kubectl -n kube-system exec -ti cilium-9zh8r -- cilium monitor | grep 10.43.0.10
-> endpoint 3166 flow 0x0 , identity 30166->11362 state reply ifindex lxcb6862e3e3d70 orig-ip 10.43.0.116: 10.43.0.10:53 -> 10.43.0.117:53168 udp
-> endpoint 3166 flow 0x0 , identity world->11362 state new ifindex 0 orig-ip 10.43.0.10: 10.43.0.10:53 -> 10.43.0.117:53168 udp
-> endpoint 3166 flow 0x0 , identity 30166->11362 state reply ifindex lxcb6862e3e3d70 orig-ip 10.43.0.116: 10.43.0.10:53 -> 10.43.0.117:36851 udp
-> endpoint 3166 flow 0x0 , identity world->11362 state new ifindex 0 orig-ip 10.43.0.10: 10.43.0.10:53 -> 10.43.0.117:36851 udp
-> endpoint 3166 flow 0x0 , identity 30166->11362 state reply ifindex lxcb6862e3e3d70 orig-ip 10.43.0.116: 10.43.0.10:53 -> 10.43.0.117:50495 udp
-> endpoint 3166 flow 0x0 , identity world->11362 state new ifindex 0 orig-ip 10.43.0.10: 10.43.0.10:53 -> 10.43.0.117:50495 udp
-> endpoint 3166 flow 0x0 , identity 30166->11362 state reply ifindex lxcb6862e3e3d70 orig-ip 10.43.0.116: 10.43.0.10:53 -> 10.43.0.117:42253 udp
-> endpoint 3166 flow 0x0 , identity world->11362 state new ifindex 0 orig-ip 10.43.0.10: 10.43.0.10:53 -> 10.43.0.117:42253 udp
-> endpoint 3166 flow 0x0 , identity 30166->11362 state reply ifindex lxcb6862e3e3d70 orig-ip 10.43.0.116: 10.43.0.10:53 -> 10.43.0.117:52897 udp
-> endpoint 3166 flow 0x0 , identity world->11362 state new ifindex 0 orig-ip 10.43.0.10: 10.43.0.10:53 -> 10.43.0.117:52897 udp
-> endpoint 3166 flow 0x4a2bff2b , identity 30166->11362 state reply ifindex lxcb6862e3e3d70 orig-ip 10.43.0.116: 10.43.0.10:53 -> 10.43.0.117:57186 tcp SYN, ACK
-> endpoint 3166 flow 0x4a2bff2b , identity world->11362 state new ifindex 0 orig-ip 10.43.0.10: 10.43.0.10:53 -> 10.43.0.117:57186 tcp SYN, ACK
-> endpoint 3166 flow 0x4a2bff2b , identity world->11362 state established ifindex 0 orig-ip 10.43.0.10: 10.43.0.10:53 -> 10.43.0.117:57186 tcp ACK
-> endpoint 3166 flow 0x4a2bff2b , identity 30166->11362 state reply ifindex lxcb6862e3e3d70 orig-ip 10.43.0.116: 10.43.0.10:53 -> 10.43.0.117:57186 tcp ACK
-> endpoint 3166 flow 0x4a2bff2b , identity world->11362 state established ifindex 0 orig-ip 10.43.0.10: 10.43.0.10:53 -> 10.43.0.117:57186 tcp ACK
-> stack flow 0x0 , identity 11362->world state related ifindex 0 orig-ip 0.0.0.0: 10.43.0.117 -> 10.43.0.10 DestinationUnreachable(Port)
xx drop (Stale or unroutable IP) flow 0x0 to endpoint 0, file bpf_host.c line 665, , identity 11362->unknown: 10.43.0.117 -> 10.43.0.10 DestinationUnreachable(Port)
-> endpoint 3166 flow 0x4a2bff2b , identity 30166->11362 state reply ifindex lxcb6862e3e3d70 orig-ip 10.43.0.116: 10.43.0.10:53 -> 10.43.0.117:57186 tcp ACK, FIN
-> endpoint 3166 flow 0x4a2bff2b , identity world->11362 state established ifindex 0 orig-ip 10.43.0.10: 10.43.0.10:53 -> 10.43.0.117:57186 tcp ACK, FIN
-> endpoint 3166 flow 0x0 , identity 30166->11362 state reply ifindex lxcb6862e3e3d70 orig-ip 10.43.0.116: 10.43.0.10:53 -> 10.43.0.117:39634 udp
-> endpoint 3166 flow 0x0 , identity world->11362 state new ifindex 0 orig-ip 10.43.0.10: 10.43.0.10:53 -> 10.43.0.117:39634 udp
-> endpoint 3166 flow 0x0 , identity 30166->11362 state reply ifindex lxcb6862e3e3d70 orig-ip 10.43.0.116: 10.43.0.10:53 -> 10.43.0.117:42653 udp
-> endpoint 3166 flow 0x0 , identity world->11362 state new ifindex 0 orig-ip 10.43.0.10: 10.43.0.10:53 -> 10.43.0.117:42653 udp
-> endpoint 3166 flow 0x0 , identity 30166->11362 state reply ifindex lxcb6862e3e3d70 orig-ip 10.43.0.116: 10.43.0.10:53 -> 10.43.0.117:59442 udp
-> endpoint 3166 flow 0x0 , identity world->11362 state new ifindex 0 orig-ip 10.43.0.10: 10.43.0.10:53 -> 10.43.0.117:59442 udp
-> endpoint 3166 flow 0x0 , identity 30166->11362 state reply ifindex lxcb6862e3e3d70 orig-ip 10.43.0.116: 10.43.0.10:53 -> 10.43.0.117:53871 udp
-> endpoint 3166 flow 0x0 , identity world->11362 state new ifindex 0 orig-ip 10.43.0.10: 10.43.0.10:53 -> 10.43.0.117:53871 udp
-> endpoint 3166 flow 0x0 , identity 30166->11362 state reply ifindex lxcb6862e3e3d70 orig-ip 10.43.0.116: 10.43.0.10:53 -> 10.43.0.117:46446 udp
-> endpoint 3166 flow 0x0 , identity world->11362 state new ifindex 0 orig-ip 10.43.0.10: 10.43.0.10:53 -> 10.43.0.117:46446 udp
-> endpoint 3166 flow 0x0 , identity 30166->11362 state reply ifindex lxcb6862e3e3d70 orig-ip 10.43.0.116: 10.43.0.10:53 -> 10.43.0.117:40172 udp
-> endpoint 3166 flow 0x0 , identity world->11362 state new ifindex 0 orig-ip 10.43.0.10: 10.43.0.10:53 -> 10.43.0.117:40172 udp
-> endpoint 3166 flow 0x0 , identity 30166->11362 state reply ifindex lxcb6862e3e3d70 orig-ip 10.43.0.116: 10.43.0.10:53 -> 10.43.0.117:50264 udp
-> endpoint 3166 flow 0x0 , identity world->11362 state new ifindex 0 orig-ip 10.43.0.10: 10.43.0.10:53 -> 10.43.0.117:50264 udp
-> endpoint 3166 flow 0x0 , identity 30166->11362 state reply ifindex lxcb6862e3e3d70 orig-ip 10.43.0.116: 10.43.0.10:53 -> 10.43.0.117:57706 udp
-> endpoint 3166 flow 0x0 , identity world->11362 state new ifindex 0 orig-ip 10.43.0.10: 10.43.0.10:53 -> 10.43.0.117:57706 udp
-> endpoint 3166 flow 0x0 , identity 30166->11362 state reply ifindex lxcb6862e3e3d70 orig-ip 10.43.0.116: 10.43.0.10:53 -> 10.43.0.117:49604 udp
-> endpoint 3166 flow 0x0 , identity world->11362 state new ifindex 0 orig-ip 10.43.0.10: 10.43.0.10:53 -> 10.43.0.117:49604 udp
-> endpoint 3166 flow 0x0 , identity 30166->11362 state reply ifindex lxcb6862e3e3d70 orig-ip 10.43.0.116: 10.43.0.10:53 -> 10.43.0.117:55871 udp
-> endpoint 3166 flow 0x0 , identity world->11362 state new ifindex 0 orig-ip 10.43.0.10: 10.43.0.10:53 -> 10.43.0.117:55871 udp

image
