python-base's People

Contributors

bpicolo, fabianvf, fooka03, goddenrich, iamneha, iciclespider, itaru2622, jamesgetx, jfrabaute, k8s-ci-robot, ltamaster, mbohlool, micw523, mitar, moshevayner, mriduls, oz123, palnabarun, pokoli, rawler, rogerhmar, roycaihw, ryphon, spiffxp, tomplus, vishnu667, xvello, yliaog, yuvipanda, zshihang


python-base's Issues

How to use the client in AWS EKS pods?

config.load_incluster_config() does not work. How do I call create_namespaced_pod from inside EKS pods?

from kubernetes import client, config
config.load_incluster_config()
File "/usr/local/lib/python3.5/multiprocessing/pool.py", line 119, in worker
    result = (True, func(*args, **kwds))
  File "/usr/local/lib/python3.5/site-packages/kubernetes/client/api_client.py", line 155, in __call_api
    _request_timeout=_request_timeout)
  File "/usr/local/lib/python3.5/site-packages/kubernetes/client/api_client.py", line 387, in request
    body=body)
  File "/usr/local/lib/python3.5/site-packages/kubernetes/client/rest.py", line 256, in DELETE
    body=body)
  File "/usr/local/lib/python3.5/site-packages/kubernetes/client/rest.py", line 222, in request
    raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (403)
Reason: Forbidden
HTTP response headers: HTTPHeaderDict({'Date': 'Thu, 24 Jan 2019 20:06:30 GMT', 'Content-Length': '296', 'X-Content-Type-Options': 'nosniff', 'Audit-Id': '3a087c29-6862-4ad3-89bb-c69808b10b28', 'Content-Type': 'application/json'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"test\" is forbidden: User \"system:serviceaccount:default:default\" cannot delete pods in the namespace \"default\"","reason":"Forbidden","details":{"name":"test","kind":"pods"},"code":403}
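The 403 above means the pod's service account (system:serviceaccount:default:default) lacks RBAC permission to manage pods; it is an authorization problem, not a client bug. One common fix is to grant the service account a Role and RoleBinding. A minimal sketch (the names pod-manager and pod-manager-binding are placeholders; adjust the namespace and verbs to your needs):

```yaml
# Illustrative only: grants the default service account in the "default"
# namespace permission to get, list, create, and delete pods.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-manager
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "create", "delete"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-manager-binding
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: Role
  name: pod-manager
  apiGroup: rbac.authorization.k8s.io
```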

Strange error when using this under high load on PyPy

Heya!

I've been load testing our JupyterHub kubespawner with high loads, and have recently been testing on PyPy.

I've seen this error show up:

[E 2017-07-23 04:16:26.938 JupyterHub base:344] Failed to add user <User([email protected]:8888)> to proxy!
    Traceback (most recent call last):
      File "/srv/venv/site-packages/jupyterhub/handlers/base.py", line 342, in finish_user_spawn
        yield self.proxy.add_user(user)
      File "/srv/venv/site-packages/jupyterhub/proxy.py", line 242, in add_user
        {'user': user.name}
      File "/srv/venv/site-packages/kubespawner/proxy.py", line 150, in add_route
        kind='endpoints'
      File "/srv/venv/site-packages/kubespawner/proxy.py", line 118, in create_if_required
        body=body
      File "/usr/local/pypy/lib-python/3/concurrent/futures/_base.py", line 398, in result
        return self.__get_result()
      File "/usr/local/pypy/lib-python/3/concurrent/futures/_base.py", line 357, in __get_result
        raise self._exception
      File "/usr/local/pypy/lib-python/3/concurrent/futures/thread.py", line 55, in run
        result = self.fn(*self.args, **self.kwargs)
      File "/srv/venv/site-packages/kubespawner/proxy.py", line 96, in asynchronize
        return method(*args, **kwargs)
      File "/srv/venv/site-packages/kubernetes/client/apis/core_v1_api.py", line 5517, in create_namespaced_endpoints
        (data) = self.create_namespaced_endpoints_with_http_info(namespace, body, **kwargs)
      File "/srv/venv/site-packages/kubernetes/client/apis/core_v1_api.py", line 5607, in create_namespaced_endpoints_with_http_info
        collection_formats=collection_formats)
      File "/srv/venv/site-packages/kubernetes/client/api_client.py", line 329, in call_api
        _return_http_data_only, collection_formats, _preload_content, _request_timeout)
      File "/srv/venv/site-packages/kubernetes/client/api_client.py", line 153, in __call_api
        _request_timeout=_request_timeout)
      File "/srv/venv/site-packages/kubernetes/client/api_client.py", line 383, in request
        body=body)
      File "/srv/venv/site-packages/kubernetes/client/rest.py", line 275, in POST
        body=body)
      File "/srv/venv/site-packages/kubernetes/client/rest.py", line 175, in request
        headers=headers)
      File "/srv/venv/site-packages/urllib3/request.py", line 70, in request
        **urlopen_kw)
      File "/srv/venv/site-packages/urllib3/request.py", line 148, in request_encode_body
        return self.urlopen(method, url, **extra_kw)
      File "/srv/venv/site-packages/urllib3/poolmanager.py", line 321, in urlopen
        response = conn.urlopen(method, u.request_uri, **kw)
      File "/srv/venv/site-packages/urllib3/connectionpool.py", line 600, in urlopen
        chunked=chunked)
      File "/srv/venv/site-packages/urllib3/connectionpool.py", line 345, in _make_request
        self._validate_conn(conn)
      File "/srv/venv/site-packages/urllib3/connectionpool.py", line 844, in _validate_conn
        conn.connect()
      File "/srv/venv/site-packages/urllib3/connection.py", line 346, in connect
        _match_hostname(cert, self.assert_hostname or hostname)
      File "/srv/venv/site-packages/urllib3/connection.py", line 356, in _match_hostname
        match_hostname(cert, asserted_hostname)
      File "/usr/local/pypy/lib-python/3/ssl.py", line 288, in match_hostname
        if host_ip is not None and _ipaddress_match(value, host_ip):
      File "/usr/local/pypy/lib-python/3/ssl.py", line 259, in _ipaddress_match
        ip = ipaddress.ip_address(ipname.rstrip())
      File "/usr/local/pypy/lib-python/3/ipaddress.py", line 54, in ip_address
        address)
    ValueError: '10.0.0.1\x005.1' does not appear to be an IPv4 or IPv6 address

Not sure what that is and why it'd think the IP is that. I assume it's trying to talk to the kubernetes API, which is at 10.0.0.1 - not sure where the \x005.1 comes from.

This could be a bug in PyPy - unsure! I'll try to repro this in cpython.

Azure Refresh Token Unauthorised Error

I am using the Python Kubernetes client to connect to my cluster, which is hosted in Azure. I am able to connect to my Kubernetes environment using the token from my kubeconfig file. Whenever my token expires, I get a refreshed token, but that token does not seem to be valid: if I use the refreshed token through the Python client, or manually from the Kubernetes UI, I get an unauthorized error.

Digging a little into the _refresh_azure_token method in kube_config.py,
I see a client id mentioned as "00000002-0000-0000-c000-000000000000" - I assume this might be replaced with my client id - please confirm.

Either way, I am still getting that unauthorized error. Can someone help, or point me in the right direction? Appreciated

Authentication Issue for AWS EKS Cluster

We are attempting to connect to our AWS EKS cluster (via Apache Airflow) and are getting an authentication error for jobs running longer than 15 minutes. We are using the aws-iam-authenticator for authentication. The issue is that it provides an auth token that expires every 15 minutes, so the client is not able to update the token for the currently running job while monitoring its status; after 15 minutes it tries to get a status update with the old token and fails with the unauthorized error.

We tried attaching an IAM role in our .kube_config to increase the token expiration to 2 hours, but this isn't changing anything. Looking into it, it seems there was a similar issue with the Kubernetes Python client for Google Cloud Platform that was fixed last year.

stream/ws_client.py doesn't close the connection if _preload_content is True

Using the kubernetes-4.0.0 Python client and an exec call as below:

stream(api.connect_get_namespaced_pod_exec,
       name=pod_name, namespace=namespace, command=cmd,
       stderr=True, stdin=False, stdout=True, tty=False)

After this call it returns the output as a string but does not close the connection, so if I use it in a scaled environment I see the error below:
filedescriptor out of range in select

So IMO ws_client.py should be fixed to close the connection before returning from the code snippet below:
client.run_forever(timeout=_request_timeout)
return WSResponse('%s' % ''.join(client.read_all()))

Please fix this if my observation is correct.
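The proposed change can be sketched with a stand-in client object (FakeWSClient and run_and_preload are hypothetical names for illustration, not the library's API); the point is that close() runs in a finally block before the preloaded content is returned, so the file descriptor is released even on the success path:

```python
class FakeWSClient:
    """Stand-in for the websocket client, for illustration only."""
    def __init__(self, frames):
        self.frames = frames
        self.closed = False

    def run_forever(self, timeout=None):
        pass  # pretend the read loop ran to completion

    def read_all(self):
        return self.frames

    def close(self):
        self.closed = True


def run_and_preload(client, _request_timeout=None):
    # Proposed shape of the fix: close the connection before returning
    # the preloaded content.
    client.run_forever(timeout=_request_timeout)
    try:
        return ''.join(client.read_all())
    finally:
        client.close()


ws = FakeWSClient(['hello', ' ', 'world'])
output = run_and_preload(ws)
print(output)      # hello world
print(ws.closed)   # True
```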

OIDC auth uses incorrect base64 decoding

We had a customer report the following error with this client:

Traceback (most recent call last):
  File ".../k8s_client.py", line 6, in <module>
    config.load_kube_config()
  File ".../venv/lib/python3.6/site-packages/kubernetes/config/kube_config.py", line 473, in load_kube_config
    loader.load_and_set(config)
  File ".../venv/lib/python3.6/site-packages/kubernetes/config/kube_config.py", line 354, in load_and_set
    self._load_authentication()
  File ".../venv/lib/python3.6/site-packages/kubernetes/config/kube_config.py", line 185, in _load_authentication
    if self._load_oid_token():
  File ".../venv/lib/python3.6/site-packages/kubernetes/config/kube_config.py", line 236, in _load_oid_token
    base64.b64decode(parts[1]).decode('utf-8')
  File ".../python/3.6.5/Frameworks/Python.framework/Versions/3.6/lib/python3.6/base64.py", line 87, in b64decode
    return binascii.a2b_base64(s)
binascii.Error: Incorrect padding

Which appears to originate from here:

base64.b64decode(parts[1]).decode('utf-8')

jwt_attributes = json.loads(
    base64.b64decode(parts[1]).decode('utf-8')
)

JWTs aren't encoded using standard base64 encoding; they use the URL-safe encoding with the final padding omitted:

   Base64url Encoding
      Base64 encoding using the URL- and filename-safe character set
      defined in Section 5 of RFC 4648 [RFC4648], with all trailing '='
      characters omitted (as permitted by Section 3.2) and without the
      inclusion of any line breaks, whitespace, or other additional
      characters.  Note that the base64url encoding of the empty octet
      sequence is the empty string.  (See Appendix C for notes on
      implementing base64url encoding without padding.)

https://tools.ietf.org/html/rfc7515#section-2

So "hello world" should become aGVsbG8gd29ybGQ, not aGVsbG8gd29ybGQ= https://play.golang.org/p/vFrVzr9uyAQ

Python's default base64 library doesn't handle this encoding and raises the same exception our customer is seeing:

$ python3 -c 'import base64; base64.b64decode("aGVsbG8gd29ybGQ")'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib64/python3.6/base64.py", line 87, in b64decode
    return binascii.a2b_base64(s)
binascii.Error: Incorrect padding
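A hedged sketch of the fix: restore the stripped '=' padding and use the URL-safe decoder (decode_jwt_segment is a hypothetical helper name, not the client's API):

```python
import base64

def decode_jwt_segment(segment):
    # JWT segments use the URL-safe base64 alphabet with the trailing
    # '=' padding stripped (RFC 7515); restore the padding before
    # decoding. -len(segment) % 4 is the number of '=' chars needed.
    padded = segment + '=' * (-len(segment) % 4)
    return base64.urlsafe_b64decode(padded)

print(decode_jwt_segment('aGVsbG8gd29ybGQ'))  # b'hello world'
```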

The REST client doesn't recognize IP addresses in subjectAltNames in the API server cert

On GKE, the master has a self-signed cert with its IP addresses in the subjectAltNames. The REST client seems to ignore those.

2017-06-08 21:57:56,534 ERROR Certificate did not match expected hostname: 146.xxx.yyy.144. Certificate: {'notAfter': 'May 24 23:19:20 2021 GMT', 'subjectAltName': (('DNS', 'kubernetes'), ('DNS', 'kubernetes.default'), ('DNS', 'kubernetes.default.svc'), ('DNS', 'kubernetes.default.svc.cluster.local'), ('IP Address', '146.xxx.yyy.144'), ('IP Address', '10.3.240.1')), 'subject': ((('commonName', u'146.xxx.yyy.144'),),)}
Traceback (most recent call last):
  File "build.py", line 45, in <module>
    resp = api_instance.list_namespaced_pod('jenkins')
  File "/home/shimin/env/local/lib/python2.7/site-packages/kubernetes/client/apis/core_v1_api.py", line 12382, in list_namespaced_pod
    (data) = self.list_namespaced_pod_with_http_info(namespace, **kwargs)
  File "/home/shimin/env/local/lib/python2.7/site-packages/kubernetes/client/apis/core_v1_api.py", line 12481, in list_namespaced_pod_with_http_info
    collection_formats=collection_formats)
  File "/home/shimin/env/local/lib/python2.7/site-packages/kubernetes/client/api_client.py", line 329, in call_api
    _return_http_data_only, collection_formats, _preload_content, _request_timeout)
  File "/home/shimin/env/local/lib/python2.7/site-packages/kubernetes/client/api_client.py", line 153, in __call_api
    _request_timeout=_request_timeout)
  File "/home/shimin/env/local/lib/python2.7/site-packages/kubernetes/client/api_client.py", line 361, in request
    headers=headers)
  File "/home/shimin/env/local/lib/python2.7/site-packages/kubernetes/client/rest.py", line 240, in GET
    query_params=query_params)
  File "/home/shimin/env/local/lib/python2.7/site-packages/kubernetes/client/rest.py", line 217, in request
    raise ApiException(status=0, reason=msg)
kubernetes.client.rest.ApiException: (0)
Reason: SSLError
hostname '146.xxx.yyy.144' doesn't match either of 'kubernetes', 'kubernetes.default', 'kubernetes.default.svc', 'kubernetes.default.svc.cluster.local'

OIDC auth behavior differs from kubectl

I am trying to use a kubeconfig that works with kubectl but causes problems in the Python client.

auth-provider:
  name: oidc
  config:
    client-id: <CENSORED>
    id-token: <CENSORED>
    idp-certificate-authority-data: <CENSORED>
    idp-issuer-url: <CENSORED>
    refresh-token: <CENSORED>

Code: config.load_kube_config('testkube.yml')
Error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/mogaika/.local/lib/python3.6/site-packages/kubernetes/config/kube_config.py", line 549, in load_kube_config
    loader.load_and_set(config)
  File "/home/mogaika/.local/lib/python3.6/site-packages/kubernetes/config/kube_config.py", line 430, in load_and_set
    self._load_authentication()
  File "/home/mogaika/.local/lib/python3.6/site-packages/kubernetes/config/kube_config.py", line 194, in _load_authentication
    if self._load_auth_provider_token():
  File "/home/mogaika/.local/lib/python3.6/site-packages/kubernetes/config/kube_config.py", line 213, in _load_auth_provider_token
    return self._load_oid_token(provider)
  File "/home/mogaika/.local/lib/python3.6/site-packages/kubernetes/config/kube_config.py", line 290, in _load_oid_token
    self._refresh_oidc(provider)
  File "/home/mogaika/.local/lib/python3.6/site-packages/kubernetes/config/kube_config.py", line 351, in _refresh_oidc
    verify=config.ssl_ca_cert if config.verify_ssl else None
  File "/home/mogaika/.local/lib/python3.6/site-packages/requests_oauthlib/oauth2_session.py", line 363, in refresh_token
    timeout=timeout, headers=headers, verify=verify, withhold_token=True, proxies=proxies)
  File "/home/mogaika/.local/lib/python3.6/site-packages/requests/sessions.py", line 581, in post
    return self.request('POST', url, data=data, json=json, **kwargs)
  File "/home/mogaika/.local/lib/python3.6/site-packages/requests_oauthlib/oauth2_session.py", line 425, in request
    headers=headers, data=data, **kwargs)
  File "/home/mogaika/.local/lib/python3.6/site-packages/requests/sessions.py", line 533, in request
    resp = self.send(prep, **send_kwargs)
  File "/home/mogaika/.local/lib/python3.6/site-packages/requests/sessions.py", line 646, in send
    r = adapter.send(request, **kwargs)
  File "/home/mogaika/.local/lib/python3.6/site-packages/requests/adapters.py", line 514, in send
    raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='172.16.244.35', port=443): Max retries exceeded with url: /auth/realms/iam/protocol/openid-connect/token (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)'),))

I couldn't determine the origin of this problem, since the temporary cert file is created and passed to urllib3 correctly (code).
A workaround is to remove the idp-certificate-authority-data field and disable cert checks.
But then I found two other problems:

  • the code expects that 'client-secret' is always provided, but this is not always the case (code). Workaround: provide any string as the client-secret, but AFAIK this depends on the OAuth server side.
  • the code passes a None object as the verify parameter to the refresh_token method instead of a boolean value (code), (method definition) (deeper method definition)

userinfo.email scope is needed to work with rbac

KubeConfigLoader._refresh_credentials sets the scope to cloud-platform.

I think userinfo.email scope is needed as well.

See kubernetes/kubernetes#58141 for a similar issue with kubectl and a good explanation.

I think we need the userinfo.email scope because RBAC rules can be expressed in terms of the emails of service accounts. If the userinfo.email scope isn't included, the API server ends up using the numeric id of service accounts, which won't work if RBAC rules are written in terms of the emails.

I haven't confirmed for myself this is an issue (I'm working through a variety of issues with kubectl/kubeconfig/client libs) so I could be wrong.
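If the diagnosis is right, the change would amount to adding the extra scope to the list requested when refreshing Google credentials; a sketch (the exact variable name used in kube_config.py may differ):

```python
# Sketch of the proposed change: request userinfo.email in addition to
# cloud-platform, so RBAC rules written against service-account emails
# can match.
scopes = [
    'https://www.googleapis.com/auth/cloud-platform',
    'https://www.googleapis.com/auth/userinfo.email',
]
print(scopes)
```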

load_incluster_config() Authorization API key should conform to IETF 6750 standard

While trying to reuse Configuration.api_key['authorization'] = 'bearer ...' for communicating with other services, I discovered that some (for example Jenkins) don't accept the incorrectly cased bearer scheme instead of the expected Bearer. I don't know what, if anything, depends on this being implemented this way; however, I believe it should follow the RFC 6750 standard so it can be reused with other services that strictly follow the standard.

Originally misfiled as kubernetes-client/python#633
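A sketch of what conforming output would look like (bearer_auth_header is a hypothetical helper, not part of the client):

```python
def bearer_auth_header(token):
    # RFC 6750 writes the scheme as "Bearer"; some servers (e.g. Jenkins)
    # reject the lowercase "bearer" form.
    return 'Bearer ' + token

print(bearer_auth_header('abc123'))  # Bearer abc123
```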

Python kubernetes client breaks when parsing the response from custom controller

Issue: Python kubernetes client breaks when parsing the response from custom controller

When running a watch primitive as:
resource_version = ''
while True:
    stream = watch.Watch().stream(crds.list_cluster_custom_object, DOMAIN, "v1", "guitars", resource_version=resource_version)

The python client breaks with:

Traceback (most recent call last):
  File "controller-pods.py", line 39, in <module>
    for event in stream:
  File "/usr/lib/python2.7/site-packages/kubernetes/watch/watch.py", line 154, in stream
    for line in iter_resp_lines(resp):
  File "/usr/lib/python2.7/site-packages/kubernetes/watch/watch.py", line 60, in iter_resp_lines
    resp = resp.read_chunked(decode_content=False)
AttributeError: 'HTTPResponse' object has no attribute 'read_chunked'

Packages versions:
rpm -qf /usr/lib/python2.7/site-packages/openshift/dynamic/client.py /usr/lib/python2.7/site-packages/kubernetes/watch/watch.py

python2-openshift-0.8.8-1.el7.noarch
python2-kubernetes-8.0.1-1.el7.noarch

Workaround: it seems that in this case the response cannot be chunked; use read() instead in [1]:

https://github.com/kubernetes-client/python-base/blob/master/watch/watch.py#L48

Before:
for seg in resp.read_chunked(decode_content=False):
After:
for seg in resp.read():

Seemed to work fine.
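The workaround can be generalized into a fallback that only calls read_chunked() when the response object supports it (iter_response_segments and PlainResponse are illustrative names for this sketch, not the library's API):

```python
def iter_response_segments(resp):
    # Hypothetical fallback: prefer read_chunked() when the urllib3
    # response supports it; otherwise fall back to a single read(),
    # as in the workaround above.
    if hasattr(resp, 'read_chunked'):
        return resp.read_chunked(decode_content=False)
    return iter([resp.read()])


class PlainResponse:
    """Mimics an HTTPResponse without read_chunked, for illustration."""
    def read(self):
        return b'{"type": "ADDED"}'


segments = list(iter_response_segments(PlainResponse()))
print(segments)  # [b'{"type": "ADDED"}']
```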

Watch stream duplicates events

I have been using a Watch stream to handle events for my custom resource. I have found that the same events are repeated indefinitely even though nothing has changed on the system.

Here is an example. I have a custom resource defined as follows:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: hobbitses.test.example
spec:
  group: test.example
  version: v1
  scope: Namespaced
  names:
    plural: hobbitses 
    singular: hobbit
    kind: Hobbit
    shortNames:
     - ht

After creating it, I start my python script. When I create the first resource, it tells me that it has been added. But, then a minute or so later, it tells me that it has been added again, despite there being no changes to it. In the following example, I have added a single resource. It has been repeated three times total. The last two times, the event has not differed from the first.

python3 -u test.py
Starting stream
Handling ADDED for bilbo-baggins:{'items': {'weapon': {'name': 'Sting', 'type': 'ShortSword'}}} version 455811
Event was new!
Handling ADDED for bilbo-baggins:{'items': {'weapon': {'name': 'Sting', 'type': 'ShortSword'}}} version 455811
Handling ADDED for bilbo-baggins:{'items': {'weapon': {'name': 'Sting', 'type': 'ShortSword'}}} version 455811

The loop that executes this is here:

running = True
resource_version = "" # Default to beginning of history
while running:
    old = None
    print("Starting stream")
    watcher = kubernetes.watch.Watch()
    stream = watcher.stream(crd_api.list_namespaced_custom_object,
                            "test.example",
                            "v1",
                            "", # Match all
                            "hobbitses",
                            resource_version = resource_version)
    for event in stream:
        obj = event["object"]
        op = event["type"]
        metadata = obj.get("metadata")
        name = metadata["name"]
        hobbit = obj.get("spec")

        resource_version = metadata["resourceVersion"]

        print("Handling %s for %s:%s version %s" % (op, name, hobbit, resource_version))

        if event != old:
            print("Event was new!")
        old = event

Digging into this a bit more, I found the following in Watch.Stream:

        while True:
            resp = func(*args, **kwargs)
            try:
                for line in iter_resp_lines(resp):
                    yield self.unmarshal_event(line, return_type)
                    if self._stop:
                        break
            finally:
                kwargs['resource_version'] = self.resource_version
                resp.close()
                resp.release_conn()

Link

Thinking about this a bit, it seems to me like the issue is that the resource_version being passed to func is incorrect.

The first time stream()'s for loop is called, the resource_version is the one passed in to stream(). That's fine. However, for subsequent iterations, the resource version is the value stored in the Watch object -- i.e. 0 if it's the default. This does not seem correct.

My understanding is that stream should be updating the resource version to the latest version handled in order to keep the streaming semantics.

FWIW, I was able to work around the issue by updating the Watch object's resource version prior to calling stream again.

E.g.

...
        print("Handling %s for %s:%s version %s" % (op, name, hobbit, resource_version))

        watcher.resource_version = resource_version

        if event != old:
            print("Event was new!")
            old = event

While this works, it doesn't seem right for two reasons.

  1. It is not documented
  2. It breaks encapsulation; we really shouldn't need to understand the internals of Watch or the stream() function in order to use it.

I'd be fine with a documentation update, but I think the proper fix is to have the stream() function use the most recent resource version.

Service token file does not exists when using workload identity

Hi,

We have a k8s service that connects to kubernetes to get some metrics. This has previously worked well using the load_incluster_config method.

We've recently switched to workload identity and this is no longer working, giving the error :

site-packages/kubernetes/config/incluster_config.py", line 64, in _load_config raise ConfigException("Service token file does not exists.")

This file does not exist; basically, we've done automountServiceAccountToken: true.

So, I realise this file won't exist. But I'm not sure of the recommended way to make incluster connections now.

Thanks in advance for any help / pointers.

Olly

Consider releasing python-base as a standalone package

python-base can be re-arranged and released as a standalone package. Benefits are:

  • Clear dependencies between main repo and base.
  • A fully functional Kubernetes client without the convenience API calls and models. One should be able to use it to talk JSON with the API server's endpoints.
  • Versioning, testing, etc. independent of main repo.

Add E2E test to python-base

Instead of a mocked API, to really test whether things work. There is now kind, which can help set up a Kubernetes cluster inside CI against which to test.

Losing the newline character when using a watch stream

Hi,

I lose the newline character when using a watch stream as below.
If I don't use the follow parameter, it's fine, but I want to use the stream with the newline characters preserved.

use case

api = client.CoreV1Api(api_client=self.client)
w = watch.Watch()
return w.stream(api.read_namespaced_pod_log, name=pod_name, namespace=namespace, follow=True)

How about a fix like this?

watch.py

        timeouts = ('timeout_seconds' in kwargs)
        while True:
            resp = func(*args, **kwargs)
            try:
                for line in iter_resp_lines(resp):
                    yield self.unmarshal_event(line, return_type) + '\n' # <- here!
                    if self._stop:
                        break

stream assumes all data is UTF-8. That's not always the case.

resp = kubernetes.stream.stream(api.connect_get_namespaced_pod_exec, 'sleeping-bash', namespace='default', command=['bash', '-c', 'echo -n -e \\\\xD4'], stderr=True, stdin=False, stdout=True, tty=False)
site-packages\kubernetes\stream\ws_client.py in update(self, timeout)
    176            elif op_code == ABNF.OPCODE_BINARY or op_code == ABNF.OPCODE_TEXT:
    177                data = frame.data
    178                 if six.PY3:
--> 179                     data = data.decode("utf-8")
    180                 if len(data) > 1:

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd4 in position 1: unexpected end of data
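One hedged option, assuming lossy decoding is acceptable for text channels, is to decode leniently instead of assuming valid UTF-8 (decode_frame_data is a hypothetical helper; returning raw bytes for binary frames would be an alternative design):

```python
def decode_frame_data(data):
    # Sketch: decode channel payloads leniently; errors='replace'
    # substitutes U+FFFD for undecodable bytes instead of raising
    # UnicodeDecodeError.
    if isinstance(data, bytes):
        return data.decode('utf-8', errors='replace')
    return data

# The failing payload from the report: a channel byte followed by 0xD4,
# which is an incomplete UTF-8 sequence.
print(decode_frame_data(b'\x01\xd4'))
```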

select should not be used in WSClient because python-websockets does its own buffering

I'm hitting an issue where some exec calls made via the python client hang.

For background, I'm in the process of migrating my cluster from docker to cri-o, and I have a suite of acceptance tests written in Python that I run against the cluster to verify functionality. A good deal of the tests exec commands inside pods, and upon switching to cri-o, a certain command in the tests hangs/times out every time. Initially I thought there was a bug in cri-o, but if the same command is executed via kubectl it does not hang, so I started diving into the Python client.

I believe I've tracked the problem down to the select call that is made in WSClient:

r, _, _ = select.select(
    (self.sock.sock, ), (), (), timeout)

If I remove that select call, commands never hang.

I think it's invalid to call select there, because select is a system call that checks whether a socket is ready to be read/written; however, with websockets we're several layers of abstraction away from the underlying system socket that select is checking, and buffering occurs at each of the layers. I suspect what is happening is:

  1. recv_data_frame is called. The underlying recv call on the socket uses a fixed buffer size and receives multiple frames and recv_data_frame returns only one of them.
  2. on the next iteration, we check if the underlying socket has any bytes available using select. It does not, but there is a frame waiting in the buffer in websocket-client that recv_data_frame would instantly return.

So I think the solution is to simply remove that select call, but I'm curious why it was added in the first place and whether removing it breaks some expectation. The only thing that I can think of is that it's being used to make read timeouts possible, however, if so, that is still buggy because the select could return if 1 byte is available to read, but then the recv calls in websocket-client would still block waiting on a complete frame.

@mbohlool what do you think?

Create an OWNERS file

ref: kubernetes/community#1721

This repo is listed in sigs.yaml as belonging to sig-api-machinery, but it lacks an OWNERS file. Can you please add one with the appropriate list of approvers and reviewers?

For more information see the OWNERS docs

/sig api-machinery

/assign @lavalamp @deads2k
sig-api-machinery chairs

/assign @roycaihw @yliaog @mbohlool
as folks who've been merging PR's recently

Valid configs fail if fields are missing

E.g., you get

kube_config.py", line 587, in load_config
    self._merge(item, config[item], path)
KeyError: 'clusters'

for the example config from the k8s documentation https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/

contexts:
- context:
    cluster: development
    namespace: frontend
    user: developer
  name: dev-frontend
- context:
    cluster: development
    namespace: ramp
    user: developer
  name: dev-ramp-up
- context:
    cluster: development
    namespace: storage
    user: developer
  name: dev-storage
- context:
    cluster: scratch
    namespace: default
    user: experimenter
  name: exp-scratch
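A sketch of a defensive merge, assuming the intent is that absent top-level keys behave like empty lists (safe_merge is a hypothetical helper for illustration, not the actual _merge implementation in kube_config.py):

```python
def safe_merge(existing, new_config, key):
    # A top-level key such as 'clusters' may legitimately be absent from
    # a kubeconfig file, so treat a missing key as an empty list instead
    # of raising KeyError.
    for item in new_config.get(key, []):
        existing.setdefault(key, []).append(item)
    return existing


# A config with only 'contexts' no longer crashes when merging 'clusters'.
merged = safe_merge({}, {'contexts': [{'name': 'dev-frontend'}]}, 'clusters')
print(merged)  # {} -- nothing to merge, and no KeyError
```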

Add CLA check

As in other kubernetes repos, the CLA check must be activated for this repo.

Create a SECURITY_CONTACTS file.

As per the email sent to kubernetes-dev[1], please create a SECURITY_CONTACTS
file.

The template for the file can be found in the kubernetes-template repository[2].
A description for the file is in the steering-committee docs[3], you might need
to search that page for "Security Contacts".

Please feel free to ping me on the PR when you make it, otherwise I will see when
you close this issue. :)

Thanks so much, let me know if you have any questions.

(This issue was generated from a tool, apologies for any weirdness.)

[1] https://groups.google.com/forum/#!topic/kubernetes-dev/codeiIoQ6QE
[2] https://github.com/kubernetes/kubernetes-template-project/blob/master/SECURITY_CONTACTS
[3] https://github.com/kubernetes/community/blob/master/committee-steering/governance/sig-governance-template-short.md

credential error in GCE/GKE

Using the pre-release 5.0 version of the kubernetes-client Python package. I run example1.py but still get the following credential errors.

config.load_kube_config()
  File "/Library/Python/2.7/site-packages/kubernetes/config/kube_config.py", line 363, in load_kube_config
    loader.load_and_set(config)
  File "/Library/Python/2.7/site-packages/kubernetes/config/kube_config.py", line 253, in load_and_set
    self._load_authentication()
  File "/Library/Python/2.7/site-packages/kubernetes/config/kube_config.py", line 176, in _load_authentication
    if self._load_gcp_token():
  File "/Library/Python/2.7/site-packages/kubernetes/config/kube_config.py", line 196, in _load_gcp_token
    self._refresh_gcp_token()
  File "/Library/Python/2.7/site-packages/kubernetes/config/kube_config.py", line 205, in _refresh_gcp_token
    credentials = self._get_google_credentials()
  File "/Library/Python/2.7/site-packages/kubernetes/config/kube_config.py", line 133, in _refresh_credentials
    scopes=['https://www.googleapis.com/auth/cloud-platform']
  File "/Library/Python/2.7/site-packages/google/auth/_default.py", line 283, in default
    raise exceptions.DefaultCredentialsError(_HELP_MESSAGE)
google.auth.exceptions.DefaultCredentialsError: Could not automatically determine credentials. Please set GOOGLE_APPLICATION_CREDENTIALS or explicitly create credential and re-run the application. For more information, please see
https://developers.google.com/accounts/docs/application-default-credentials.

It works after I rerun a kubectl command to refresh the token.

Boilerplate includes unnecessary shebang line

Currently the following modules contain this line:

watch/watch.py:#!/usr/bin/env python
watch/__init__.py:#!/usr/bin/env python
watch/watch_test.py:#!/usr/bin/env python
config/incluster_config_test.py:#!/usr/bin/env python
config/incluster_config.py:#!/usr/bin/env python
config/dateutil.py:#!/usr/bin/env python
config/kube_config_test.py:#!/usr/bin/env python
config/exec_provider.py:#!/usr/bin/env python
config/kube_config.py:#!/usr/bin/env python
config/__init__.py:#!/usr/bin/env python
config/config_exception.py:#!/usr/bin/env python
config/exec_provider_test.py:#!/usr/bin/env python
config/dateutil_test.py:#!/usr/bin/env python
stream/stream.py:#!/usr/bin/env python
stream/ws_client.py:#!/usr/bin/env python
stream/ws_client_test.py:#!/usr/bin/env python
stream/__init__.py:#!/usr/bin/env python

We should remove the shebang line from hack/boilerplate/boilerplate.py.txt and then remove it from the modules which aren't expected to run as scripts.
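The cleanup for modules that aren't meant to run as scripts is mechanical; a minimal sketch (strip_shebang is a made-up helper name, not part of the repo):

```python
def strip_shebang(text):
    """Remove a leading shebang line from module source, if present."""
    lines = text.splitlines(True)
    if lines and lines[0].startswith('#!'):
        return ''.join(lines[1:])
    return text
```

Applied to each of the files above (and to hack/boilerplate/boilerplate.py.txt), this leaves the license header and module body untouched.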

AttributeError: 'ConfigNode' object has no attribute 'get'

Using the latest pre-release version, kubernetes-11.0.0b2, with Python 3.6.5 on Windows 10 against an Azure Kubernetes Service cluster configured to use Azure AD for authentication:

I receive the error:

Traceback (most recent call last):
  File "check_neptune2.py", line 4, in <module>
    config.load_kube_config()
  File "C:\programs\Python36\lib\site-packages\kubernetes\config\kube_config.py", line 667, in load_kube_config
    loader.load_and_set(config)
  File "C:\programs\Python36\lib\site-packages\kubernetes\config\kube_config.py", line 474, in load_and_set
    self._load_authentication()
  File "C:\programs\Python36\lib\site-packages\kubernetes\config\kube_config.py", line 205, in _load_authentication
    if self._load_auth_provider_token():
  File "C:\programs\Python36\lib\site-packages\kubernetes\config\kube_config.py", line 222, in _load_auth_provider_token
    return self._load_azure_token(provider)
  File "C:\programs\Python36\lib\site-packages\kubernetes\config\kube_config.py", line 241, in _load_azure_token
    self._refresh_azure_token(provider['config'])
  File "C:\programs\Python36\lib\site-packages\kubernetes\config\kube_config.py", line 256, in _refresh_azure_token
    apiserver_id = config.get(
AttributeError: 'ConfigNode' object has no attribute 'get'
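For context, ConfigNode in kube_config.py wraps the parsed kubeconfig mapping and exposes safe_get rather than a dict-style get, which is why the call above fails. A minimal stand-in (not the real class) illustrating the behavior:

```python
class ConfigNodeSketch:
    """Minimal model of kube_config.ConfigNode: the parsed mapping lives
    on .value, and lookups go through safe_get instead of dict.get."""

    def __init__(self, name, value):
        self.name = name
        self.value = value

    def safe_get(self, key):
        # returns None for a missing key instead of raising
        if isinstance(self.value, dict) and key in self.value:
            return self.value[key]
        return None

node = ConfigNodeSketch('config', {'apiserver-id': 'abc123'})
```

So a likely fix is for _refresh_azure_token to use the node's safe_get accessor (or index into it) rather than calling a dict-style get on the ConfigNode.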

Try out Github Actions for Continuous Integration/Continuous Deployment

What is the feature and why do you need it:
Feature: Explore Github Actions to run tests.

Need: Dependence on a single vendor for source control as well as CI/CD.

Describe the solution you'd like to see:
Continuous Integration (CI)

Whenever a PR is filed, a GitHub Actions workflow starts executing which runs the tests for all the different environments, as happens now.

The CI workflow could also run every time a commit gets pushed to master but this case does not arise in this repo since pushing directly to master is disabled.

Continuous Deployment (CD)

Whenever we want to release a new version of the client, we can tag the version and push the tag. This would cause the CD GitHub workflow to start executing. In the future, we can use it to release the client. An added benefit is that the credentials to push packages to PyPI remain on GitHub and do not need to be managed elsewhere. Currently, there is no CD for this repo, so it would be a new feature.

References:

  1. kubernetes/org#1405

support for asyncio

Hi guys.

Would you consider adding asyncio support to this client library?

Regards,
Tomasz Prus

Bug in rfc3339 implementation

This issue can be easily reproduced like so (same on Python 2.7.15):

$ python
Python 3.7.2 (default, Feb 21 2019, 17:35:59) [MSC v.1915 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from datetime import datetime, timedelta
>>> from time import sleep
>>> from kubernetes.config.dateutil import UTC, format_rfc3339, parse_rfc3339
>>>
>>> for tz in (None, UTC):
...     dt = parse_rfc3339(datetime.now(tz) + timedelta(seconds=20))
...     sleep(0.5)
...     delta = dt - datetime.now(dt.tzinfo)
...     print('Delta of {} should be ~19.5, and was {}'.format(format_rfc3339(dt), delta.total_seconds()))
...
Delta of 2019-03-18T00:21:17Z should be ~19.5, and was -14380.501032
Delta of 2019-03-18T04:21:17Z should be ~19.5, and was 19.499945

Based on this comment, there is logic that is supposed to coerce naive datetimes to UTC, but it does not appear to work consistently cc @mbohlool @pokoli

I should also note, this exact example passes on Travis (Xenial):

Job: https://travis-ci.com/DataDog/integrations-core/jobs/185296386#L592
Test: https://github.com/DataDog/integrations-core/blob/e3b1d64dfb1f40ec8e0b311752960ebd488387dd/datadog_checks_base/tests/test_kube_leader.py#L162-L168
Code: https://github.com/DataDog/integrations-core/blob/e3b1d64dfb1f40ec8e0b311752960ebd488387dd/datadog_checks_base/datadog_checks/base/checks/kube_leader/record.py#L75-L83
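One way to see the inconsistency: coercing the naive value to an aware UTC datetime explicitly avoids the skew. A sketch (coerce_to_utc is a hypothetical helper, not part of kubernetes.config.dateutil):

```python
from datetime import datetime, timezone

def coerce_to_utc(dt):
    """Convert a datetime to aware UTC. On Python 3, .astimezone() on a
    naive value interprets it as local time; the dateutil helper instead
    appears to tag naive values as UTC directly, which skews them by the
    local UTC offset (the -14380.5s delta above)."""
    return dt.astimezone(timezone.utc)
```

With this, `coerce_to_utc(datetime.now())` and `datetime.now(timezone.utc)` agree regardless of the machine's timezone.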

Watch stream should handle HTTP error before unmarshaling event

I could be mistaken, but looking at the infinite loop, the watch stream doesn't seem to handle the case where you receive an event that is expired, i.e. an HTTP status code of 410.

        while True:
            resp = func(*args, **kwargs)
            try:
                for line in iter_resp_lines(resp):
                    yield self.unmarshal_event(line, return_type)
                    if self._stop:
                        break
            finally:
                kwargs['resource_version'] = self.resource_version
                resp.close()
                resp.release_conn()

Looking at the code it seems that if the event is expired then resp should return something along the lines of

{'raw_object': {u'status': u'Failure', u'kind': u'Status', u'code': 410, u'apiVersion': u'v1', u'reason': u'Gone', u'message': u'too old resource version: 2428 (88826)', u'metadata': {}}, u'object': {'api_version': 'v1',
 'kind': 'Status',

And unmarshal_event fails to deserialize the object and breaks, leaving self.resource_version stuck on the resource_version of the event that expired.

Am I missing something here?
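One way the loop could guard against this before updating resource_version — a sketch of the idea, not the actual Watch implementation (iter_watch_lines is a made-up name):

```python
import json

def iter_watch_lines(lines):
    """Yield parsed watch events, but surface an expired resourceVersion
    (a Status object with code 410) as an error instead of yielding it
    and silently getting stuck on a stale resource_version."""
    for line in lines:
        event = json.loads(line)
        obj = event.get('object', {})
        if obj.get('kind') == 'Status' and obj.get('code') == 410:
            raise RuntimeError('watch expired: %s' % obj.get('message'))
        yield event
```

A caller could catch the error and restart the watch from scratch (i.e. without a resource_version) rather than looping on the expired one.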

kube_config.py fails to load config if no context is set

I have an interactive application that lets a user select a context (from the available contexts of a configuration). That works great, but loading the config fails if there's no active context set and no context is passed to the loader. In my case I cannot pass a context to the loader, since I only know the contexts after loading.

Solution: if no context is passed and no current-context is set, start without an active context.

context_name = self._config['current-context']
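A sketch of the proposed behavior (plain dicts stand in for the real ConfigNode; pick_context is a made-up helper, not the loader's actual API):

```python
def pick_context(config, context_name=None):
    """Proposed fallback: use an explicit context if given, else
    current-context if present, else start with no active context
    instead of raising KeyError."""
    if context_name is None:
        context_name = config.get('current-context')
    return context_name
```

The loader would then treat a None result as "no active context yet" and still expose the list of contexts so the application can pick one.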

Open ssl certificate verify failed

I can't use the InClusterConfigLoader with a self-signed CA cert:
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain
I propose adding a parameter or an environment variable to allow verify=False.
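A sketch of what the environment-variable variant could look like. KUBERNETES_VERIFY_SSL is an assumed name for illustration only, not a variable the loader currently honors:

```python
import os

def verify_ssl_enabled():
    """Hypothetical opt-out flag the in-cluster loader could consult
    before setting verify_ssl on the client Configuration. Defaults to
    verification on; only an explicit 'false' disables it."""
    return os.environ.get('KUBERNETES_VERIFY_SSL', 'true').lower() != 'false'
```

Disabling verification is a security trade-off, so defaulting to on and requiring an explicit opt-out seems safer than a plain boolean parameter.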

Run hack/ as an individual CI test

Currently the hack/ scripts are run as part of all of CI tests. An example can be seen here.

Instead, we should move the hack/ scripts to an additional individual check. This saves us some build time for CI and also allows us to potentially expand the scope of hack/ checks we can do in the future. As a background, see kubernetes-client/python#22

Loading kube-config - issue with python base64.b64decode() function

This script uses base64.b64decode() function to decode 'id-token' from kube-config file.

Even though my base64 string is fine (the Golang API decodes it without any issue), the Python decoder always raises an 'Incorrect padding' error. I have tried Python 3.5, 3.7 and 3.8; none works.

Here is my base64 string:
eyJpYW1faWQiOiJJQk1pZC01NTAwMDE5IiwiaXNzIjoiaHR0cHM6Ly9pYW0uYmx1ZW1peC5uZXQvaWRlbnRpdHkiLCJzdWIiOiJ2aXNoYWwueWFkYXZAeHh4eC5jb20iLCJhdWQiOiJrdWJlIiwiZ2l2ZW5fbmFtZSI6IlZpc2hhbCIsImZhbWlseV9uYW1lIjoiWWFkYXYiLCJuYW1lIjoiVmlzaGFsIFlhZGF2IiwiZW1haWwiOiJWaXNoYWwuWWFkYXZAeHh4LmNvbSIsImV4cCI6eHh4eHgsInNjb3BlIjoiaWJtIG9wZW5pZCBjb250YWluZXJzLWt1YmVybmV0ZXMiLCJpYXQiOjE1NTQ0ODMyODQsInN1Yl8xNzU4NTg1ZTA3MjQ1NGU5NmEwNGQ2YmMxN2RkOWNmIjoidmlzaGFsLnlhZGF2QHh4eHguY29tIiwiZ3JvdXBzXzE3NTg1ODVlMDd4eHh4eDZhMDRkNmJjMTdkZDljZiI6WyJFZGl0IGNvbXBhbnkgcHJvZmlsZSIsIkFkbWluaXN0cmF0b3IiLCJWaWV3IGFjY291bnQgc3VtbWFyeSIsIlVwZGF0ZSBwYXltZW50IGRldGFpbHMiLCJSZXRyaWV2ZSB1c2VycyIsIkdldCBjb21wbGlhbmNlIHJlcG9ydCIsIk9uZS10aW1lIHBheW1lbnRzIiwiQWRkIGNhc2VzIGFuZCB2aWV3IG9yZGVycyIsIkxpbWl0IEVVIGNhc2UgcmVzdHJpY3Rpb24iLCJFZGl0IGNhc2VzIiwiVmlldyBjYXNlcyJdfQ

Python code:
import base64
base64.b64decode('eyJpYW1faWQiOiJJQk1pZC01NTAwMDE5IiwiaXNzIjoiaHR0cHM6Ly9pYW0uYmx1ZW1peC5uZXQvaWRlbnRpdHkiLCJzdWIiOiJ2aXNoYWwueWFkYXZAeHh4eC5jb20iLCJhdWQiOiJrdWJlIiwiZ2l2ZW5fbmFtZSI6IlZpc2hhbCIsImZhbWlseV9uYW1lIjoiWWFkYXYiLCJuYW1lIjoiVmlzaGFsIFlhZGF2IiwiZW1haWwiOiJWaXNoYWwuWWFkYXZAeHh4LmNvbSIsImV4cCI6eHh4eHgsInNjb3BlIjoiaWJtIG9wZW5pZCBjb250YWluZXJzLWt1YmVybmV0ZXMiLCJpYXQiOjE1NTQ0ODMyODQsInN1Yl8xNzU4NTg1ZTA3MjQ1NGU5NmEwNGQ2YmMxN2RkOWNmIjoidmlzaGFsLnlhZGF2QHh4eHguY29tIiwiZ3JvdXBzXzE3NTg1ODVlMDd4eHh4eDZhMDRkNmJjMTdkZDljZiI6WyJFZGl0IGNvbXBhbnkgcHJvZmlsZSIsIkFkbWluaXN0cmF0b3IiLCJWaWV3IGFjY291bnQgc3VtbWFyeSIsIlVwZGF0ZSBwYXltZW50IGRldGFpbHMiLCJSZXRyaWV2ZSB1c2VycyIsIkdldCBjb21wbGlhbmNlIHJlcG9ydCIsIk9uZS10aW1lIHBheW1lbnRzIiwiQWRkIGNhc2VzIGFuZCB2aWV3IG9yZGVycyIsIkxpbWl0IEVVIGNhc2UgcmVzdHJpY3Rpb24iLCJFZGl0IGNhc2VzIiwiVmlldyBjYXNlcyJdfQ').decode('utf-8')

Golang version:
https://play.golang.org/p/1QwgjkRrQUx
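The id-token is a JWT segment, which is emitted without trailing '=' padding (Go's RawURLEncoding tolerates that; Python's b64decode does not). A workaround sketch that restores the padding before decoding:

```python
import base64

def b64decode_unpadded(data):
    """Decode base64/base64url input that may lack trailing '='
    padding, as JWT segments do."""
    missing = len(data) % 4
    if missing:
        data += '=' * (4 - missing)
    # urlsafe variant also accepts the '-' and '_' characters JWTs use
    return base64.urlsafe_b64decode(data)
```

With this, the token above decodes without the 'Incorrect padding' error.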

kubernetes.stream with proxy

Even though I have set up a proxy for all of my k8s Python API calls, I can't use stream to call connect_get_namespaced_pod_exec:

c = client.Configuration()
c.proxy = proxy_url
client.Configuration.set_default(c)

stream(api.connect_get_namespaced_pod_exec, 'some_name', 'default', command="/bin/sh", stderr=True, stdin=False, stdout=True, tty=True)

The WSClient class doesn't seem to take this into account.
Example of the code I expected:

ws.connect('wss://blabla.com', http_proxy_host='ip', http_proxy_port=8080)
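If WSClient were to honor the configured proxy, it would need to translate Configuration.proxy into the http_proxy_* keyword arguments that websocket-client accepts. A sketch (proxy_kwargs is a made-up helper, not part of the library):

```python
from urllib.parse import urlparse

def proxy_kwargs(proxy_url):
    """Translate a proxy URL into websocket-client's http_proxy_host
    and http_proxy_port keyword arguments."""
    if not proxy_url:
        return {}
    parsed = urlparse(proxy_url)
    return {'http_proxy_host': parsed.hostname,
            'http_proxy_port': parsed.port}
```

WSClient could then pass `**proxy_kwargs(configuration.proxy)` through to websocket.create_connection.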

Python3 write_channel "Can't convert 'bytes' object to str implicitly"

I am using the rundeck kubernetes plugin to execute a script to a pod and I get the following error

15:53:46 | Traceback (most recent call last):
-- | --
15:53:46 | File "/home/rundeck/libext/cache/kubernetes-plugin-1.10.1-SNAPSHOT/pods-copy-file.py", line 69, in <module>
15:53:46 | main()
15:53:46 | File "/home/rundeck/libext/cache/kubernetes-plugin-1.10.1-SNAPSHOT/pods-copy-file.py", line 65, in main
15:53:46 | common.copy_file(name, namespace, container, source_file, destination_path, destination_file_name)
15:53:46 | File "/home/rundeck/libext/cache/kubernetes-plugin-1.10.1-SNAPSHOT/common.py", line 396, in copy_file
15:53:46 | resp.write_stdin(c)
15:53:46 | File "/usr/local/lib/python3.5/dist-packages/kubernetes/stream/ws_client.py", line 160, in write_stdin
15:53:46 | self.write_channel(STDIN_CHANNEL, data)
15:53:46 | File "/usr/local/lib/python3.5/dist-packages/kubernetes/stream/ws_client.py", line 114, in write_channel
15:53:46 | self.sock.send(chr(channel) + data)
15:53:46 | TypeError: Can't convert 'bytes' object to str implicitly

On this comment there is a workaround - a change to ws_client.py

def write_channel(self, channel, data):
    """Write data to a channel."""
    # on Python 3, chr(channel) is a str, so it must be encoded
    # before being concatenated with the bytes payload
    self.sock.send(bytes(chr(channel), 'utf-8') + data)

This worked for me, so maybe someone with more knowledge can check whether it should be merged as-is or whether there is a better fix.

I am using 'Python 3.5.2' and I tried kubernetes 9.0.0 and 10.0.0
This issue does not happen with Python2.7

Migrate to pytest?

Currently, the test suite uses nose while the python-client uses pytest.
I suggest migrating the tests of this project to pytest too. This will ease contributions
to both projects.

Buggy conditional in kube_config.py (TypeError: '<' not supported between instances of 'int' and 'time.struct_time')

if int(provider['config']['expires-on']) < time.gmtime():

The line above fails with the error:

 File "/Users/ogonna/virtualenvs/k8stlsingress-Q6OI3mKm/lib/python3.6/site-packages/kubernetes/config/kube_config.py", line 505, in load_kube_config
    loader.load_and_set(config)
  File "/Users/ogonna/virtualenvs/k8stlsingress-Q6OI3mKm/lib/python3.6/site-packages/kubernetes/config/kube_config.py", line 386, in load_and_set
    self._load_authentication()
  File "/Users/ogonna/virtualenvs/k8stlsingress-Q6OI3mKm/lib/python3.6/site-packages/kubernetes/config/kube_config.py", line 183, in _load_authentication
    if self._load_auth_provider_token():
  File "/Users/ogonna/virtualenvs/k8stlsingress-Q6OI3mKm/lib/python3.6/site-packages/kubernetes/config/kube_config.py", line 198, in _load_auth_provider_token
    return self._load_azure_token(provider)
  File "/Users/ogonna/virtualenvs/k8stlsingress-Q6OI3mKm/lib/python3.6/site-packages/kubernetes/config/kube_config.py", line 209, in _load_azure_token
    if int(provider['config']['expires-on']) < time.gmtime():
TypeError: '<' not supported between instances of 'int' and 'time.struct_time'

Suggested fix:

            if time.gmtime(int(provider['config']['expires-on'])) < time.gmtime():
                self._refresh_azure_token(provider['config'])
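An equivalent alternative that avoids struct_time entirely is to compare epoch seconds directly (token_expired is an illustrative helper, not code from kube_config.py):

```python
import time

def token_expired(expires_on):
    """True if an epoch-seconds expiry timestamp is in the past.
    Keeps the comparison between plain numbers instead of mixing
    int and time.struct_time."""
    return int(expires_on) < time.time()
```

Either form fixes the TypeError; comparing epoch seconds just sidesteps struct_time semantics altogether.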

Consider setting connection_pool_maxsize to a higher value by default

Currently, it is set to 1, which means the moment you start making concurrent requests to the same Kubernetes master, you're going to end up thrashing the connection pool and reducing performance.

Since the most likely case is many requests to the same IP rather than many requests to different IPs, setting this to a larger value by default will (and does!) improve performance.

Add tox and travis

We should add Travis and tox to ensure tests pass on all the supported platforms (py27, py34, py35).

I'm wondering if we should also add coverage and pep8 checks

I can work on it

Support for sending EOF to stdin via stream?

Is there a way to close stdin or send an EOF to stdin when using stream, without also closing stdout and stderr? I would like to be able to use stream to send input to a command via stdin, then send an EOF to signal that the command should stop processing input, then read all output from the command.

resp = stream(
    api.connect_get_namespaced_pod_exec,
    'podname',
    'default',
    command=["/bin/sh", "-c", "whatever"],
    stdin=True,
    _preload_content=False)

resp.write_stdin("""\
some text
some more text
etc.
""")

# Somehow send EOF to stdin here, i.e. resp.close_stdin()

resp.run_forever(timeout=60)
print(resp.read_all())
resp.close()

Is there any example of WSClient usage?

Hi,

Is there any example of WSClient usage to mimic kubectl exec -ti?

Not sure if I'm going in the right direction, but this is what I'm doing now:

ws = stream(self.kubectl.connect_get_namespaced_pod_exec, 'name', 'namespace', command=['/bin/sh'], stdout=True, stderr=True, stdin=True, tty=True, _preload_content=False)

for line in sys.stdin:
    ws.write_stdin(line)
    print(ws.read_all())

The problem is that when I run it and enter ls on stdin, it still returns an empty string.
