
openshift-client-python's Issues

image mirror

Please add the ability to execute 'image mirror'.
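
Until a dedicated helper exists, one hedged stop-gap (a sketch, not part of any documented image-mirroring API in this library; the pullspecs are placeholders) is to pass the verb straight through oc.invoke:

import openshift as oc

# Sketch: pass 'image mirror' straight through to the oc binary.
# The source and destination pullspecs below are placeholders.
# Depending on your active context settings, flags the library adds
# automatically (e.g. --namespace) may need adjusting.
result = oc.invoke('image', ['mirror',
                             'registry.example.com/src/app:latest',
                             'registry.example.com/dst/app:latest'])
print(result.out())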

Override context for current kubeconfig

The Context() class allows overriding many options that are passed as command-line arguments, such as the token or server URL, but I cannot find a way to override the --context command-line option so that a single kubeconfig can be used while referencing different contexts within it.

Is there currently a way to do this?
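
One possible workaround in the meantime (a sketch; 'my-context' is a placeholder, and this switches the active context for the whole kubeconfig rather than overriding it per call) is to change contexts with oc config before issuing further commands:

import openshift as oc

# Switch the kubeconfig's current context before running other commands.
# Note this mutates the kubeconfig globally; it is not a per-invocation override.
oc.invoke('config', ['use-context', 'my-context'])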

Change/update package namespace

It has come to our attention that the "openshift" namespace is being used by another OpenShift Python project:
openshift -> openshift-restclient-python

Each project provides its own version of __init__.py, each with conflicting setup/configuration. Because of this, these two projects cannot coexist in the same environment.

Opening this issue to re-home our package so that these packages can coexist.

Catch missing export options in collection

The following snippets have an issue because the export verb in the oc utility does not support the --ignore-not-found flag. A program will fail because objects(exportable=True) silently passes True for the ignore flag, which causes the CLI tool to error out.

if ignore_not_found:
    cmd_args.append("--ignore-not-found")

def objects(self, exportable=False, ignore_not_found=True):
    """
    Returns a python list of APIObject objects that represent the selected resources. An
    empty list is returned if nothing is selected.
    :param exportable: Whether export should be used instead of get.
    :param ignore_not_found: If true, missing named resources will not raise an exception.
    :return: A list of Model objects representing the receiver's selected resources.
    """
    obj = json.loads(self.object_json(exportable, ignore_not_found=ignore_not_found))

A catch as simple as adjusting L374 to if ignore_not_found and not exportable: should suffice (though it would probably be a good idea to log or print that the flag is being ignored). In the long run (if this tool sees further development), it would be good to see the export verb removed in favor of an --export argument passed to get.
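
A minimal sketch of the proposed guard (library internals; the names are taken from the snippet quoted above):

# Only pass --ignore-not-found when the underlying verb is get, which supports it.
if ignore_not_found and not exportable:
    cmd_args.append("--ignore-not-found")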

oc run

Is there any way to do 'oc run' with this project?
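
One hedged possibility (a sketch, assuming no dedicated helper exists; the pod name and image are placeholders) is to pass the verb through oc.invoke:

import openshift as oc

# Sketch: pass 'run' straight through to the oc binary.
# 'my-pod' and the image reference are placeholders.
result = oc.invoke('run', ['my-pod',
                           '--image=registry.example.com/myimage:latest',
                           '--restart=Never'])
print(result.out())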

Allow oc exec into statefulset

The 'oc exec' command supports passing a stateful set, and a ready pod from the stateful set will automatically be chosen.
E.g.

oc exec -it statefulset/my-stateful-set -c foo -- env - echo

openshift-client-python doesn't appear to support this. If you pass a statefulset, the execute command assumes it's a pod and tries to exec into a pod with the same name as the statefulset.

oc.selector('statefulset/my-stateful-set').object().execute(['ls'], container_name='foo')

...
  openshift.model.OpenShiftPythonException: [Error running statefulset.apps/my-stateful-set exec on ls [rc=1]: Error from server (NotFound): pods "my-stateful-set" not found]
{
    "operation": "exec",
    "status": 1,
    "actions": [
        {
            "timestamp": 1649435656752,
            "elapsed_time": 0.3165004253387451,
            "success": false,
            "status": 1,
            "verb": "exec",
            "cmd": [
                "oc",
                "exec",
                "--namespace=my-project",
                "--container=foo",
                "my-stateful-set",
                "--",
                "ls"
            ],
            "out": "",
            "err": "Error from server (NotFound): pods \"my-stateful-set\" not found\n",
            "in": null,
            "references": {
                ".stack": [
                  ...
                ]
            },
            "timeout": false,
            "last_attempt": true,
            "internal": false
        }
    ]
}                                                    
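
In the meantime, a hedged workaround (a sketch; it assumes the statefulset's pods carry an identifying label such as app=my-stateful-set, which may differ in your cluster) is to select a pod by label and exec into it directly:

import openshift as oc

# Workaround sketch: pick a pod belonging to the statefulset via a label
# and exec into it directly instead of passing the statefulset name.
# The 'app' label key/value is an assumption about the workload.
pods = oc.selector('pods', labels={'app': 'my-stateful-set'}).objects()
# Take the first matching pod; a real implementation should also check readiness.
result = pods[0].execute(['ls'], container_name='foo')
print(result.out())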

Allow custom subclasses of APIObject for Selector.object() and Selector.objects()

Overview

It would be great if Selector.objects() and Selector.object() allowed specifying which class they should create. This can be useful when you have a custom class on top of APIObject that helps with managing said resource.

Example

oc.selector("pods").objects(class=DeploymentConfig) ~> List[DeploymentConfig]
oc.selector("pods").object(class=DeploymentConfig) ~> DeploymentConfig
oc.selector("pods").object() ~> APIObject

I think I could implement this if you would like this to be added.
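
Until something like this is added, a hedged stop-gap (a sketch; DeploymentConfigView is a hypothetical helper, not part of the library) is to wrap the returned APIObject instances in your own class by composition:

import openshift as oc

# Hypothetical helper class wrapping a plain APIObject by composition.
class DeploymentConfigView:
    def __init__(self, apiobj):
        self.apiobj = apiobj

    def replica_count(self):
        # Model attributes can be read with dotted access via .model.
        return self.apiobj.model.spec.replicas

views = [DeploymentConfigView(obj) for obj in oc.selector('dc').objects()]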

Creation of a Route object raises a ResourceNotFoundError exception when the serving.knative.dev/v1 Route API resource is present on the cluster

Issue

When Red Hat OpenShift Serverless is installed and the serving.knative.dev/v1 Route API exists as a resource in the cluster, the v1_routes.create(..) method raises the following exception: openshift.dynamic.exceptions.ResourceNotFoundError: No matches found for {'api_version': 'v1', 'kind': 'Route'}.

How to reproduce the issue:

  1. Install Red Hat OpenShift Serverless Operator

  2. Create a KnativeServing object (as the following for example:)

    # cat << EOF  | oc apply -n knative-serving -f -
    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec: {}
    EOF
  3. Check if the KnativeServing Route api exists in the cluster:

    # oc api-resources --verbs=list | grep routes          
    routes                                             route.openshift.io/v1                       true         Route
    routes                            rt               serving.knative.dev/v1                      true         Route
  4. Try to create an OpenShift Route using the openshift-client-python library by executing the following script (adjusting the <variable> placeholders):

    import yaml
    from kubernetes import client, config
    from openshift.dynamic import DynamicClient
    
    k8s_client = config.new_client_from_config()
    dyn_client = DynamicClient(k8s_client)
    
    v1_routes = dyn_client.resources.get(api_version='v1', kind='Route')
    route = """
    kind: Route
    metadata:
      name: test-route
    spec:
      host: test-route.<host>
      port:
        targetPort: 8080-tcp
      to:
        kind: Service
        name:  <service>
        weight: 100
      wildcardPolicy: None
      tls:
        termination: edge
    """
    
    route_data = yaml.safe_load(route)
    resp = v1_routes.create(body=route_data, namespace='default')
    
    # resp is a ResourceInstance object
    print(resp.metadata)

    The execution of the script raises the following exception:

    Traceback (most recent call last):
      File "createroute.py", line 11, in <module>
        v1_routes = dyn_client.resources.get(api_version='v1', kind='Route')
      File "/Users/gmeghnag/Library/Python/3.8/lib/python/site-packages/openshift/dynamic/discovery.py", line 246, in get
        raise ResourceNotFoundError('No matches found for {}'.format(kwargs))
    openshift.dynamic.exceptions.ResourceNotFoundError: No matches found for {'api_version': 'v1', 'kind': 'Route'}

Potential cause

From openshift-restclient-python

def get(self, **kwargs):
    """ Same as search, but will throw an error if there are multiple or no
        results. If there are multiple results and only one is an exact match
        on api_version, that resource will be returned.
    """

Other Information

# oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.15    True        False         134m    Cluster version is 4.6.15

# oc get sub -A
NAMESPACE              NAME                  PACKAGE               SOURCE             CHANNEL
openshift-serverless   serverless-operator   serverless-operator   redhat-operators   stable

Each log has a head string and a tail string

When users use the print_log function, each file contains a
start string: [logs:begin]xxxt-xx-xxx-01:pod/xxx-xxxxx-xxxxxxxl-d997f557d-vpf8r->pod/xxx-xxxxxxx-xxxxxxxx- f557d-vpf8r(postgresql)========
and an end string: [logs:end]xxxxx-xx-xxxxx-01:pod/xxx-xxxxxx-xxxxxl-d997f557d-vpf8r->pod/xxx-xxxxxx-xxxxxxx-d997f557d-vpf8r(postgresql)========

This is not convenient. For example, if a log is in JSON format, I could load it into an editor and format it as a JSON file, but these strings confuse the formatter. Generally speaking, I don't see the real purpose of these strings; they are always added in the utils module. Can I remove them, or does anybody rely on these strings?

A way to stream stdout?

We would like to stream the output of the oc command as it completes work. So far, we haven't been able to find a way; we can only get the full log once the command completes. Is there something simple I'm missing? Thanks! If there is a way, could you add an example file and/or doc showing how?
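
Outside the library itself, a generic hedged workaround (a sketch; the pod name is a placeholder) is to run oc with subprocess and consume stdout line by line while the command is still running:

import subprocess

# Stream 'oc' output as it is produced instead of waiting for the full log.
# 'pod/my-pod' is a placeholder; -f follows the log until the process is stopped.
with subprocess.Popen(['oc', 'logs', '-f', 'pod/my-pod'],
                      stdout=subprocess.PIPE, text=True) as proc:
    for line in proc.stdout:
        print(line, end='')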

use selector to find objects without a specific label

Hi

So I would like to find all pods in a namespace that do not have a specific label set.
I've been looking at the following piece of documentation, but it doesn't seem to mention it.
https://github.com/openshift/openshift-client-python#selectors

Selectors can also select based on kind and labels.
sa_label_selector = oc.selector("sa", labels={"mylabel":"myvalue"})

When doing this using the oc command, I can do the following:
oc get pod --selector='!mylabel'

However, I can't seem to emulate this using this library.
Can you help me?
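
A hedged client-side workaround (a sketch; it fetches all pods first and filters in Python, so it is not equivalent to a server-side '!mylabel' selector on large namespaces):

import openshift as oc

# Select every pod, then keep only those that do not carry the 'mylabel' key.
# A missing metadata.labels field is treated as "no labels".
unlabeled = [pod for pod in oc.selector('pods').objects()
             if 'mylabel' not in (pod.model.metadata.labels or {})]
print([pod.name() for pod in unlabeled])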

missing positional argument in start_build()

import openshift as oc
bc = oc.selector('bc/web-devops-bc')
bc.start_build()

Error message:

  File ".\oc_test.py", line 25, in _build_oc
    bc.start_build()
  File "C:\dev\wm-cicd-builder\build\lib\site-packages\openshift\selector.py", line 425, in start_build
    r = Selector()
TypeError: __init__() missing 1 required positional argument: 'high_level_operation'
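
A hedged workaround until the Selector construction in start_build() is fixed (a sketch; the buildconfig name is taken from the snippet above) is to invoke the verb directly:

import openshift as oc

# Workaround sketch: run 'oc start-build' directly instead of Selector.start_build().
result = oc.invoke('start-build', ['web-devops-bc'])
print(result.out())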

[Question] sharing context with subprocess

Hello,

is there a way to share a context with subprocess library, for example:

import openshift as oc
import subprocess

HOST = 'https://localhost:6443'
USER = 'ocuser'
PASSWORD = 'password'
PROJECT = 'my-proj'

with oc.api_server(HOST):
    with oc.login(USER, PASSWORD):
        with oc.project(PROJECT):
            subprocess.run('oc get pods | grep ABCD', shell=True)

I would like to execute some commands directly from the subprocess library, making sure I'm in the correct server/project context
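
As far as I can tell, the library applies its context by adding explicit arguments (such as --namespace, --server, and --token) to the oc commands it runs itself, rather than by changing the environment, so a plain subprocess call will not inherit it automatically. A hedged sketch of one workaround (oc.get_project_name() is assumed to exist in your version; verify before relying on it) is to forward the pieces you need on the command line:

import openshift as oc
import subprocess

with oc.project('my-proj'):
    # Forward the namespace explicitly; the subprocess does not see the library's context.
    ns = oc.get_project_name()
    subprocess.run('oc get pods --namespace={} | grep ABCD'.format(ns), shell=True)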

oc cp: Success but no File

oc.invoke('cp', ('/Users/<user>/Desktop/upload.txt', '{}:/tmp'.format(pod)))

I'm using this Python OpenShift client to upload a file from my local machine to the remote client_host and everything is working as expected. However, when I attempt to invoke a cp from my local machine it shows that the invocation works:

{
    "operation": "tracking",
    "status": 0,
    "actions": [
        {
            "timestamp": 1624989569902,
            "elapsed_time": 1.038727045059204,
            "success": true,
            "status": 0,
            "verb": "cp",
            "cmd": [
                "oc",
                "cp",
                "/Users/<user>/Desktop/upload.txt",
                "<pod-name>:/tmp"
            ],
            "out": "",
            "err": "",
            "in": null,
            "references": {
                ".client_host": "root@<ip-address>"
            },
            "timeout": false,
            "last_attempt": true,
            "internal": false
        }
    ]
}

When I ls on the pod, the file isn't there. Any idea why the invoke command succeeds but there's no sign of the file on the pod?
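
One thing worth checking: the references block above shows a .client_host, and when a client host is configured the oc command runs on that host, so the source path is resolved there rather than on the local machine. A hedged diagnostic sketch (placeholders as in the snippet above) that lists the target directory right after the copy:

import openshift as oc

# Diagnostic sketch: run the copy, then list /tmp in the same pod to confirm
# where (or whether) the file landed. 'pod' is the same placeholder as above.
oc.invoke('cp', ['/Users/<user>/Desktop/upload.txt', '{}:/tmp'.format(pod)])
result = oc.selector('pod/{}'.format(pod)).object().execute(['ls', '-l', '/tmp'])
print(result.out())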


Error: invalid resource name "pod/xxxx": [may not contain '/']

Hello,
I found an error when using the following command:

  • oc.selector('pod/mypodname').object().execute(xxx)

I get the error:
"err": "error: invalid resource name "pod/xxxx": [may not contain '/']\n",

While looking for the reason, I found an error in openshift/apiobject.py, in the function def execute(...).

You have to change the call from qname() to name().
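
A minimal sketch of the suggested change inside APIObject.execute() (the surrounding argument-building code is paraphrased, not the exact library source):

# Before: the qualified name "pod/xxxx" is appended, which 'oc exec' rejects.
# cmd_args.append(self.qname())
# After: append only the bare resource name.
cmd_args.append(self.name())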

Have a nice day.

Selector#until_all or Selector#until_any methods do not have a timeout

The until_all method does not have a timeout. Consequently, if the caller does not define a failure function, the method waits indefinitely.

The failure function approach assumes that the caller knows all of the possible issues that could happen with the object. However, this is not always the case.

Would you be open to adding a timeout parameter to the functions?
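
In the meantime, one hedged workaround (a sketch; it assumes oc.timeout also bounds these polling loops in your version, and the label, readiness check, and until_all arguments should be checked against your release) is to wrap the wait in the oc.timeout context manager:

import openshift as oc

# Bound the wait with a deadline so it cannot block forever.
# The 5-minute value, label, and phase check are placeholders.
with oc.timeout(60 * 5):
    oc.selector('pods', labels={'app': 'myapp'}).until_all(
        1, success_func=lambda pod: pod.model.status.phase == 'Running')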
