kubetap's Issues

Can't tap a service tied to a deployment

Description

The following command
$ kubectl tap on -n mynamespace -p 443 --https myservice
Returns:
Error: error resolving Deployment from Service selectors: the Service selector did not match any Deployments

Screenshots or other information

My service and deployment
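
For reference, kubetap resolves the target Deployment by matching the Service's selector against the Deployment's labels, so the two must line up exactly. A minimal pair that should resolve (all names hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  selector:
    app: myapp             # must match the Deployment's labels below
  ports:
    - port: 443
      targetPort: 8443

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp             # kubetap matches the Service selector against these
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:latest
          ports:
            - containerPort: 8443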

Go version:
Kubernetes client version: Major:"1", Minor:"19", GitVersion:"v1.19.0"
Kubernetes server version: Major:"1", Minor:"19", GitVersion:"v1.19.4"

tap does not work on macOS/darwin arm64 systems

Description

Installation via krew or brew fails; building locally succeeds, but the resulting binary does not appear to work.

Kubectl commands to create reproducible environment / deployment

$ git clone git@github.com:soluble-ai/kubetap.git
$ cd kubetap
$ go build .
$ ./kubetap list

No output at all (expected or error) is produced.
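
A hedged sanity check for the architecture of the locally built binary (Go builds for the host GOOS/GOARCH by default, so this mainly rules out a Rosetta/amd64 mix-up):

$ GOOS=darwin GOARCH=arm64 go build .
$ file ./kubetap        # should report a Mach-O 64-bit executable arm64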

Screenshots or other information

Go version: 1.20.5
Kubernetes client version: v1.27.3
Kubernetes server version: v1.24.6+k3s1

Empty ConfigMap

Great tool, looking forward to using it!

Description

The following command hangs and then cancels the port-forward:
kubectl tap on -p 80 myservice --port-forward

Result:

Establishing port-forward tunnels to Service...
Waiting for Pod containers to become ready..........................................
Pod not running after 90 seconds. Cancelling port-forward, tap still active.

When checking the pod for my service:

kubectl get pods
....
mypod        0/2     ContainerCreating   0          12m
....
kubectl logs mypod
Error from server (BadRequest): a container name must be specified for pod mypod, choose one of: [mypod kubetap]
kubectl logs mypod kubetap
Error from server (BadRequest): container "kubetap" in pod "mypod" is waiting to start: ContainerCreating
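
Since both containers are stuck in ContainerCreating, the Pod's events say more than its logs; hedged next steps for debugging (names taken from the output above):

$ kubectl describe pod mypod        # check Events for volume/ConfigMap mount failures
$ kubectl get configmaps            # verify the ConfigMap kubetap generated exists and has data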

Screenshots or other information

Go version: go version go1.13 linux/amd64
Kubernetes client version: 1.16.0
Kubernetes server version: 1.14.9-eks

Tap doesn't work with Rancher downstream clusters

Description

1 - Installed kubetap using krew
2 - Tried to tap an existing service running inside a cluster created and managed by Rancher
3 - Got the error below:

Error: error upgrading connection: error creating request: parse "https://rancher.mydomain.local%2Fk8s%2Fclusters%2Fc-wagrt/api/v1/namespaces/hmg/pods/app-56fd855877-4fg7c/portforward": invalid URL escape "%2F"

Kubectl commands to create reproducible environment / deployment

Running against a cluster created/managed by Rancher:

kubectl tap on app-service -p 8080 --https --browser -n hmg
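
For context, a Rancher-generated kubeconfig points at a server URL that contains a path, and it is this path that apparently gets percent-encoded when kubetap builds the port-forward request. A sketch of the relevant kubeconfig entry (values hypothetical):

apiVersion: v1
kind: Config
clusters:
  - name: downstream
    cluster:
      server: https://rancher.mydomain.local/k8s/clusters/c-wagrt   # the /k8s/clusters/... path must keep its slashes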

Screenshots or other information

Go version:
Kubernetes client version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.2", GitCommit:"7f6f68fdabc4df88cfea2dcf9a19b2b830f1e647", GitTreeState:"clean", BuildDate:"2023-05-17T14:20:07Z", GoVersion:"go1.20.4", Compiler:"gc", Platform:"linux/amd64"}
Kubernetes server version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.13", GitCommit:"49433308be5b958856b6949df02b716e0a7cf0a3", GitTreeState:"clean", BuildDate:"2023-04-12T12:08:36Z", GoVersion:"go1.19.8", Compiler:"gc", Platform:"linux/amd64"}

Cannot list resource "namespaces" in API group "" at the cluster scope

Description

When tapping a service with the --namespace parameter, tap still tries to access all namespaces. In our situation this is a blocker, as we have limited access to the K8s API outside our namespace. Is there a way to enforce namespace scope, instead of cluster scope, for all calls?

kubectl tap on --namespace test -p 8080 service

Error: error fetching namespaces: namespaces is forbidden: User "system:serviceaccount:...:..." cannot list resource "namespaces" in API group "" at the cluster scope

script returned exit code 1
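
As a possible workaround until namespace-scoped calls are supported, the ServiceAccount could be granted just the namespace-list permission, which unavoidably requires a cluster-scoped role. A minimal sketch, assuming a cluster admin can grant it (names hypothetical; substitute the ServiceAccount from the error):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-reader
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["list"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: namespace-reader-binding
subjects:
  - kind: ServiceAccount
    name: my-serviceaccount        # hypothetical; use the account from the error message
    namespace: test
roleRef:
  kind: ClusterRole
  name: namespace-reader
  apiGroup: rbac.authorization.k8s.io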

Screenshots or other information

Kubernetes client version: v1.26.0

Add support for StatefulSet

Feature Description

Add support for tapping StatefulSet objects.

Proposed Solution

Extend the search beyond Deployments to include StatefulSets.

Alternative Solutions

I'm not aware of a workaround to this issue.

Additional Context

It appears that this tool doesn't support StatefulSet objects. When attempting to tap a Service pointing at a StatefulSet, I get:

Error: error resolving Deployment from Service selectors: the Service selector did not match any Deployments

I'm interpreting "Deployments" as literal in the above error.
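
For context, from the selector's point of view a Service in front of a StatefulSet looks just like one in front of a Deployment, which is why the Deployment-only lookup fails. A hypothetical example of the shape that currently cannot be tapped:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mydb
spec:
  serviceName: mydb        # governing Service (conventionally headless)
  replicas: 1
  selector:
    matchLabels:
      app: mydb
  template:
    metadata:
      labels:
        app: mydb          # the Service selector below matches this StatefulSet, not a Deployment
    spec:
      containers:
        - name: mydb
          image: mydb:latest

---

apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  selector:
    app: mydb
  ports:
    - port: 5432
      targetPort: 5432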

Cannot install using Homebrew

Description

Kubectl commands to create reproducible environment / deployment

brew install kubetap

Results in:

Error: kubetap: wrong number of arguments (given 1, expected 0)

EDIT: Might have to do with the fact that I'm running on M1.
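
A possible workaround while the formula is broken: install through krew instead (plugin name as documented in the kubetap README, hedged in case it has changed):

$ kubectl krew install tap
$ kubectl tap --help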

Add a flag to set the kubetap-mitmproxy image repo

Feature Description

Kubetap is a great tool for debugging web applications running in a k8s cluster.
But it depends on the default kubetap-mitmproxy image from the gcr.io registry.
I'm in China and cannot normally access gcr.io, so kubectl tap on always fails when I execute it.

Failed to pull image "gcr.io/soluble-oss/kubetap-mitmproxy:latest": rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

I hope an option flag can be added to set the image repo to pull from. Then we could push this image to another registry (like Docker Hub).

Proposed Solution

Add a flag (for example --proxy-image) to set an image repo that replaces the default one.
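
A sketch of the resulting workflow: mirror the image from a host that can reach gcr.io, then point kubetap at the mirror (--proxy-image is the proposed flag, not an existing option; registry and user names hypothetical):

$ docker pull gcr.io/soluble-oss/kubetap-mitmproxy:latest
$ docker tag gcr.io/soluble-oss/kubetap-mitmproxy:latest docker.io/myuser/kubetap-mitmproxy:latest
$ docker push docker.io/myuser/kubetap-mitmproxy:latest
$ kubectl tap on myservice -p 80 --proxy-image docker.io/myuser/kubetap-mitmproxy:latest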

All pods in NS crashed with Tap

Description

All pods in the namespace of the pod I tapped started tripping out. Some time after I ran the command to tap my pod, random pods in the same namespace started failing and restarting. It didn't happen right away; it started an hour or so after I left the tap on (i.e. after I was done sniffing some headers, I didn't run kubectl tap off my-service). Not only did pods start failing, entire nodes started getting tainted with NoSchedule, which in turn caused the cluster autoscaler to overwork itself replacing failed nodes over and over.

Kubectl commands to create reproducible environment / deployment

First off, when I ran the initialize command, it would always complain that the tap took too long, and it didn't immediately port-forward on its own.
Here is what I ran:

kubectl tap on -n my-ns -p 4000 my-service --port-forward

Then, because the port-forward didn't activate due to the timeout, I ran:

kubectl port-forward svc/my-service 2244:2244 -n my-ns

Then I did my sniffing and killed the port-forward, but did not turn off the tap.
Leaving that extra container in one pod seemed to cause all hell to break loose in the namespace.
As soon as I turned it off, everything went back to normal.

Screenshots or other information

Kubernetes client version: 1.17
Kubernetes server version: 1.17
Cloud: AWS EKS

One thing to note is that we have App Mesh auto-inject active on the namespace. Not all pods in the NS are injected with App Mesh; however, the pod I tapped was also injected with App Mesh. This means the pod already had an X-Ray sidecar and an Envoy sidecar present when I injected the tap. Maybe this was part of the issue?

Is this project maintained?

Description

It has been 3 years since there was any activity on this project. Would it be possible to move it out to its own organisation so that volunteers can take over stewardship?

I'm the author of the tiny patch #16, but I think it wouldn't be hard to find a few more people to help out. Bugs are still being lodged against the project, so there is still community interest in it.

Mitmproxy TCP Interception

Description

I tried the raw TCP option of mitmproxy and it's not working.
I don't know if this should be a feature request or a bug report.

Kubectl commands to create reproducible environment / deployment

I enabled rawtcp in the options menu of mitmweb and added ".*" to tcp_hosts.
According to mitmproxy/mitmproxy#2595, this should work.
Do you have an idea how to get rawtcp mode working?
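
For reference, the same settings expressed in a mitmproxy options file; a sketch, assuming the sidecar's mitmproxy can be fed this via its ConfigMap (whether kubetap routes raw TCP to the proxy at all is the open question here):

rawtcp: true
tcp_hosts:
  - '.*'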

Screenshots or other information

These are the Deployments and the Service I use:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ncl
  labels:
    app: netcatlistener
spec:
  replicas: 1
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      app: netcatlistener
  template:
    metadata:
      labels:
        app: netcatlistener
    spec:
      containers:
        - args:
            - "-lk"
            - "8888"
          image: subfuzion/netcat
          imagePullPolicy: IfNotPresent
          name: ncl
          tty: true
          stdin: true
          ports:
            - containerPort: 8888
              name: listenerport
      nodeName: k8s-worker-1

---

apiVersion: v1
kind: Service
metadata:
  name: ncservice
spec:
  clusterIP: 10.103.53.167
  ports:
    - name: nctcp
      port: 8888
      protocol: TCP
      targetPort: 8888
    - name: ncudp
      port: 8888
      protocol: UDP
      targetPort: 8888
  selector:
    app: netcatlistener
  sessionAffinity: None
  type: ClusterIP

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ncs
spec:
  replicas: 1
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      app: netcatsend
  template:
    metadata:
      labels:
        app: netcatsend
    spec:
      containers:
        - args:
            - "-v"
            - "10.103.53.167"
            - "8888"
          image: subfuzion/netcat
          imagePullPolicy: IfNotPresent
          name: ncs
          tty: true
          stdin: true
      nodeName: k8s-worker-1

Raw capture

Feature Description

TCP/UDP capture to pcap.

Potential Solutions

Correct implementation is non-trivial, as it involves traffic routing management,
potentially modifying the security context to allow capture, and exporting data
to the client.

  • Data export implementation still undecided, options under consideration:
    • tcpdump + ( FF Send || S3 || PVC || kubectl-tap client stream )
    • webshark interactive interface (stale project)
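
A rough sketch of the sidecar shape the tcpdump option implies, assuming the Pod's security context can be relaxed (image and names hypothetical; the export mechanism is the undecided part above):

      containers:
        - name: kubetap-pcap
          image: corfr/tcpdump            # any image bundling tcpdump; hypothetical choice
          args: ["-i", "any", "-w", "/captures/tap.pcap"]
          securityContext:
            capabilities:
              add: ["NET_RAW"]            # required for raw packet capture
          volumeMounts:
            - name: captures              # destination (S3, PVC, client stream) still undecided
              mountPath: /captures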

Context

There's already a kubectl plugin for this, but its approach of uploading binaries into running Pods is not ideal:

ksniff uses kubectl to upload a statically compiled tcpdump binary to your pod
and redirects its output to your local Wireshark for a smooth network debugging
experience.

That's pretty gross. Because you share the network namespace, it's much cleaner to just run the tap as a sidecar. I'd bet ksniff has some fun process-management code that I wouldn't want to write.

Deployment tweaking options

Feature Description

Allow operators to modify the deployed Container or other Deployment specs, which may be required in some environment configurations. To support these use cases, we need to make it easy for operators to define their own tweaks and modifications.

Proposed Solution

Sidecar YAML

  • Optionally ingest sidecar configuration through a JSON/Yaml file.
    • This should be an optional feature, and will likely only be necessary for
      the few environments that require special configuration to function. We would
      like to support these environments, but in that case it's up to the operator
      to configure them. Allowing a YAML sidecar definition is the path to enable that.
  • There should be two flags for this:
    • --manifest which behaves like kubectl apply
    • --overrides which behaves like kustomize
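
A sketch of what an operator-supplied override file could contain under this proposal (the file shape and every key here are hypothetical; none of this is implemented):

# tap-overrides.yaml, passed as: kubectl tap on myservice -p 80 --overrides tap-overrides.yaml
sidecar:
  resources:
    limits:
      memory: 256Mi
  securityContext:
    runAsNonRoot: true
  env:
    - name: HTTP_PROXY
      value: http://corporate-proxy.internal:3128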

gRPC

Feature Description

Tap gRPC Services.

To be useful, this needs to provide a proxy flow list and the ability to replay individual requests.
As far as I know, such a tool doesn't exist. There is grpcurl,
but it doesn't provide proxy flow-through.
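
For comparison, grpcurl covers one-off calls, but nothing flows through it as a proxy, so there is no flow list to replay from. A typical invocation (service and method hypothetical):

$ grpcurl -plaintext localhost:8080 list
$ grpcurl -plaintext -d '{"id": 1}' localhost:8080 myapp.UserService/GetUser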

Proposed Solution

I was originally going to use this proxy library by @mwitkow, but that library doesn't really solve the use case of needing to define custom handlers.

Instead I may make my own tool, as the correct solution here is to convert gRPC to JSON and present an API for interception and modification. I can create a rough web UI on my own, but down the road help will be much appreciated in this regard.
