
OpenShift Python Client

Overview

The openshift-client-python library aims to provide a readable, concise, comprehensive, and fluent API for rich interactions with an OpenShift cluster. Unlike other clients, this library exclusively uses the command line tool (oc) to achieve the interactions. This approach comes with important benefits and disadvantages when compared to other client libraries.

Pros:

  • No additional software needs to be installed on the cluster. If a system with python support can (1) invoke oc locally OR (2) ssh to a host and invoke oc, you can use the library.
  • Portable. If you have python and oc working, you don't need to worry about OpenShift versions or machine architectures.
  • Custom resources are supported and treated just like any other resource. There is no need to generate code to support them.
  • Quick to learn. If you understand the oc command line interface, you can use this library.

Cons:

  • This API is not intended to implement something as complex as a controller. For example, it does not implement watch functionality. If you can't imagine accomplishing your use case through CLI interactions, this API is probably not the right starting point for it.
  • If you care about whether a REST API returns a particular error code, this API is probably not for you. Since it is based on the CLI, high level return codes are used to determine success or failure.

Reader Prerequisites

  • Familiarity with the OpenShift command line interface is highly encouraged before exploring the API's features. The API leverages the oc binary and, in many cases, passes method arguments directly on to the command line. This document cannot, therefore, provide a complete description of all possible OpenShift interactions -- the user may need to reference the CLI documentation to find the pass-through arguments a given interaction requires.

  • A familiarity with Python is assumed.

Setup

Prerequisites

  1. Download and install the OpenShift command-line tools needed to access your OpenShift cluster.

Installation Instructions

Using PIP

  1. Install the openshift-client module from PyPI.
    sudo pip install openshift-client

For development

  1. Git clone https://github.com/openshift/openshift-client-python.git (or your fork).
  2. Install the required libraries:
    sudo pip install -r requirements.txt
  3. Append ./packages to your PYTHONPATH environment variable (e.g. export PYTHONPATH=$(pwd)/packages:$PYTHONPATH).
  4. Write and run your python script!

Usage

Quickstart

Any standard Python application should be able to use the API if it imports the openshift_client package. The simplest way to begin using the API is to log in to your target cluster before running your first application.

Can you run oc project successfully from the command line? Then write your app!

#!/usr/bin/python
import openshift_client as oc

print('OpenShift client version: {}'.format(oc.get_client_version()))
print('OpenShift server version: {}'.format(oc.get_server_version()))

# Set a project context for all inner `oc` invocations and limit execution to 10 minutes
with oc.project('openshift-infra'), oc.timeout(10*60):
    # Print the list of qualified pod names (e.g. ['pod/xyz', 'pod/abc', ...]) in the current project
    print('Found the following pods in {}: {}'.format(oc.get_project_name(), oc.selector('pods').qnames()))
    
    # Read in the current state of the pod resources and represent them as python objects
    for pod_obj in oc.selector('pods').objects():
        
        # The APIObject class exposes several convenience methods for interacting with objects
        print('Analyzing pod: {}'.format(pod_obj.name()))
        pod_obj.print_logs(timestamps=True, tail=15)
    
        # If you need access to the underlying resource definition, get a Model instance for the resource
        pod_model = pod_obj.model
        
        # Model objects enable dot notation and allow you to navigate through resources
        # to an arbitrary depth without checking if any ancestor elements exist.
        # In the following example, there is no need for boilerplate like:
        #    `if .... 'ownerReferences' in pod_model['metadata'] ....`
        # Fields that do not resolve will always return oc.Missing, which
        # is a singleton and can also be treated as an empty dict.
        for owner in pod_model.metadata.ownerReferences:  # ownerReferences == oc.Missing if not present in resource
            # elements of a Model are also instances of Model or ListModel
            if owner.kind is not oc.Missing:  # Compare as singleton
                print('  pod owned by a {}'.format(owner.kind))  # e.g. pod was created by a StatefulSet

Selectors

Selectors are a central concept used by the API to interact with collections of OpenShift resources. As the name implies, a "selector" selects zero or more resources on a server which satisfy user-specified criteria. An apt metaphor for a selector might be a prepared SQL statement which can be used again and again to select rows from a database.

# Create a selector which selects all projects.
project_selector = oc.selector("projects")

# Print the qualified name (i.e. "kind/name") of each resource selected.
print("Project names: " + project_selector.qnames())

# Count the number of projects on the server.
print("Number of projects: " + project_selector.count_existing())

# Selectors can also be created with a list of names.
sa_selector = oc.selector(["serviceaccount/deployer", "serviceaccount/builder"])

# Performing an operation will act on all selected resources. In this case,
# both serviceaccounts are labeled.
sa_selector.label({"mylabel" : "myvalue"})

# Selectors can also select based on kind and labels.
sa_label_selector = oc.selector("sa", labels={"mylabel":"myvalue"})

# We should find the service accounts we just labeled.
print("Found labeled serviceaccounts: " + sa_label_selector.names())

# Create a selector for a set of kinds.
print(oc.selector(['dc', 'daemonset']).describe())

The output should look something like this:

Project names: [u'projects/default', u'projects/kube-system', u'projects/myproject', u'projects/openshift', u'projects/openshift-infra', u'projects/temp-1495937701365', u'projects/temp-1495937860505', u'projects/temp-1495937908009']
Number of projects: 8
Found labeled serviceaccounts: [u'serviceaccounts/builder', u'serviceaccounts/deployer']

APIObjects

Selectors allow you to perform "verb" level operations on a set of objects, but what if you want to interact with objects at a schema level?

projects_sel = oc.selector("projects")

# .objects() will perform the selection and return a list of APIObjects
# which model the selected resources.
projects = projects_sel.objects()

print("Selected " + len(projects) + " projects")

# Let's store one of the project APIObjects for easy access.
project = projects[0]

# The APIObject exposes methods providing simple access to metadata and common operations.
print('The project is: {}/{}'.format(project.kind(), project.name()))
project.label({ 'mylabel': 'myvalue' })

# And the APIObject allows you to interact with an object's data via the 'model' attribute.
# The Model is similar to a standard dict, but also allows dot notation to access elements
# of the structured data.
print('Annotations:\n{}\n'.format(project.model.metadata.annotations))

# There is no need to perform the verbose 'in' checking you may be familiar with when
# exploring a Model object. Accessing Model attributes will always return a value. If
# any component of a path into the object does not exist in the underlying model, the
# singleton 'Missing' will be returned.

if project.model.metadata.annotations.myannotation is oc.Missing:
    print("This object has not been annotated yet")

# If a field in the model contains special characters, use standard Python notation
# to access the key instead of dot notation.
if project.model.metadata.annotations['my-annotation'] is oc.Missing:
    print("This object has not been annotated yet")

# For debugging, you can always see the state of the underlying model by printing the
# APIObject as JSON.
print('{}'.format(project.as_json()))

# Or get a deep copy as a dict. Changes made to this dict will not affect the APIObject.
d = project.as_dict()

# Model objects also simplify looking through kubernetes style lists. For example, can_match
# returns True if the modeled list contains an object with the subset of attributes specified.
# In this example, we are checking whether a node's kubelet is reporting Ready:
oc.selector('node/alpha').object().model.status.conditions.can_match(
    {
        'type': 'Ready',
        'status': "True",
    }
)

# can_match can also ensure nested objects and lists are present within a resource. Several
# of these types of checks are already implemented in the openshift.status module.
def is_route_admitted(apiobj):
    return apiobj.model.status.can_match({
        'ingress': [
            {
                'conditions': [
                    {
                        'type': 'Admitted',
                        'status': 'True',
                    }
                ]
            }
        ]
    })

Making changes to APIObjects

# APIObject exposes simple interfaces to delete and patch the resource it represents.
# But, more interestingly, you can make detailed changes to the model and apply those
# changes to the API.

project.model.metadata.labels['my_label'] = 'myvalue'
project.apply()

# If modifying the underlying API resources could be contentious, use the more robust
# modify_and_apply method which can retry the operation multiple times -- refreshing
# with the current object state between failures.

# First, define a function that will make changes to the model.
def make_model_change(apiobj):
    apiobj.model.data['somefile.yaml'] = 'wyxz'
    return True

# modify_and_apply will call the function and attempt to apply its changes to the model
# if it returns True. If the apply is rejected by the API, the function will pull
# the latest object content, call make_model_change again, and try the apply again
# up to the specified retry count.
configmap = oc.selector('configmap/my-config').object()  # e.g. an APIObject for a configmap (hypothetical name)
configmap.modify_and_apply(make_model_change, retries=5)


# For best results, ensure the function passed to modify_and_apply is idempotent:

def set_unmanaged_in_cvo(apiobj):
    desired_entry = {
        'group': 'config.openshift.io/v1',
        'kind': 'ClusterOperator',
        'name': 'openshift-samples',
        'unmanaged': True,
    }

    if apiobj.model.spec.overrides.can_match(desired_entry):
        # No change required
        return False

    if not apiobj.model.spec.overrides:
        apiobj.model.spec.overrides = []

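    # 'context' below is assumed to be an application-specific reporting/logging
    # object; it is not part of the openshift-client-python library.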
    context.progress('Attempting to disable CVO interest in openshift-samples operator')
    apiobj.model.spec.overrides.append(desired_entry)
    return True

result, changed = oc.selector('clusterversion.config.openshift.io/version').object().modify_and_apply(set_unmanaged_in_cvo)
if changed:
    context.report_change('Instructed CVO to ignore openshift-samples operator')

Running within a Pod

It is simple to use the API within a Pod. The oc binary automatically detects when it is running within a container and uses the Pod's serviceaccount token/cacert.
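
For example, a script packaged into the Pod's image needs no explicit login or kubeconfig handling. The following is a minimal sketch; it assumes the Pod's serviceaccount has RBAC permission to list pods in its namespace:

#!/usr/bin/python
import openshift_client as oc

# Inside a Pod, oc discovers the serviceaccount token/cacert on its own;
# no oc login or kubeconfig is required.
with oc.timeout(60):
    print('Running as: {}'.format(oc.whoami()))
    print('Pods in this namespace: {}'.format(oc.selector('pods').qnames()))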

Tracking oc invocations

It is good practice to set up at least one tracking context within your application so that you can easily analyze which oc invocations were made on your behalf and the result of those operations. Note that details about all oc invocations performed within the context will be stored within the tracker. Therefore, do not use a single tracker for a continuously running process -- it will consume memory for every oc invocation.

#!/usr/bin/python
import openshift_client as oc

with oc.tracking() as tracker:
    try:
        print('Current user: {}'.format(oc.whoami()))
    except Exception:
        print('Error acquiring current username')
    
    # Print out details about the invocations made within this context.
    print(tracker.get_result())

In this case, the tracking output would look something like:

{
    "status": 0, 
    "operation": "tracking", 
    "actions": [
        {
            "status": 0, 
            "verb": "project", 
            "references": {}, 
            "in": null, 
            "out": "aos-cd\n", 
            "err": "", 
            "cmd": [
                "oc", 
                "project", 
                "-q"
            ], 
            "elapsed_time": 0.15344810485839844, 
            "internal": false, 
            "timeout": false, 
            "last_attempt": true
        }, 
        {
            "status": 0, 
            "verb": "whoami", 
            "references": {}, 
            "in": null, 
            "out": "aos-ci-jenkins\n", 
            "err": "", 
            "cmd": [
                "oc", 
                "whoami"
            ], 
            "elapsed_time": 0.6328380107879639, 
            "internal": false, 
            "timeout": false, 
            "last_attempt": true
        }
    ]
}

Alternatively, you can record actions yourself by passing an action_handler to the tracking contextmanager. Your action handler will be invoked each time an oc invocation completes.

def print_action(action):
    print('Performed: {} - status={}'.format(action.cmd, action.status))

with oc.tracking(action_handler=print_action):
    try:
        print('Current project: {}'.format(oc.get_project_name()))
        print('Current user: {}'.format(oc.whoami()))
    except Exception:
        print('Error acquiring details about project/user')

Time limits

Have a script you want to ensure succeeds or fails within a specific period of time? Use a timeout context. Timeout contexts can be nested - if any timeout context expires, the current oc invocation will be killed.

#!/usr/bin/python
import openshift_client as oc

def node_is_ready(node):
    ready = node.model.status.conditions.can_match({
        'type': 'Ready',
        'status': 'True',
    })
    return ready


print("Waiting for up to 15 minutes for at least 6 nodes to be ready...")
with oc.timeout(15 * 60):
    oc.selector('nodes').until_all(6, success_func=node_is_ready)
    print("All detected nodes are reporting ready")

In the tracking context results, you will be able to see that a timeout occurred for an affected invocation: its timeout field will be set to True.

Advanced contexts

If you are unable to use a KUBECONFIG environment variable or need fine grained control over the server/credentials you communicate with for each invocation, use openshift-client-python contexts. Contexts can be nested and cause oc invocations within them to use the most recently established context information.

with oc.api_server('https:///....'):  # use the specified api server for nested oc invocations.
    
    with oc.token('abc..'):  # --server=... --token=abc... will be included in inner oc invocations.
        print("Current project: " + oc.get_project_name())
    
    with oc.token('def..'):  # --server=... --token=def... will be included in inner oc invocations.
        print("Current project: " + oc.get_project_name())

You can control the loglevel specified for oc invocations.

with oc.loglevel(6):
    # all oc invocations within this context will be invoked with --loglevel=6
    oc...

You can ask oc to skip TLS verification if necessary.

with oc.tls_verify(enable=False):
    # all oc invocations within this context will be invoked with --insecure-skip-tls-verify
    oc...

Something missing?

Most common API interactions have abstractions, but if there is no openshift-client-python API exposing the oc function you want to run, you can always use oc.invoke to pass arguments directly to an oc invocation on your host.

# oc adm policy add-scc-to-user privileged -z my-sa-name
oc.invoke('adm', ['policy', 'add-scc-to-user', 'privileged', '-z', 'my-sa-name'])
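
oc.invoke also returns the result of the invocation. As a minimal sketch (assuming the returned object exposes the same out() accessor shown in the exec example later in this document), you can capture the command's output:

# oc get nodes -o=name, capturing stdout from the invocation
result = oc.invoke('get', ['nodes', '-o=name'])
print(result.out())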

Running oc on a bastion host

Is your oc binary on a remote host? No problem. You can easily remote all CLI interactions over SSH using the client_host context. Before using this context, you will need to load your ssh agent with a key appropriate for the target client host.

with oc.client_host(hostname="my.cluster.com", username="root", auto_add_host=True):
    # oc invocations will take place on my.cluster.com host as the root user.
    print("Current project: " + oc.get_project_name())

Using this model, your Python script will run exactly where you launch it, but all oc invocations will occur on the remote host.

Gathering reports and logs with selectors

Various objects within OpenShift have logs associated with them:

  • pods
  • deployments
  • daemonsets
  • statefulsets
  • builds
  • etc.

A selector can gather the logs from pods associated with each of these resources (and from each container within those pods). Each log will be a unique value in the returned dictionary.

# Print logs for all pods associated with all daemonsets & deployments in openshift-monitoring namespace.
with oc.project('openshift-monitoring'):
    for k, v in oc.selector(['daemonset', 'deployment']).logs(tail=500).items():
        print('Container: {}\n{}\n\n'.format(k, v))

The above example would output something like:

Container: openshift-monitoring:pod/node-exporter-hw5r5(node-exporter)
time="2018-10-22T21:07:36Z" level=info msg="Starting node_exporter (version=0.16.0, branch=, revision=)" source="node_exporter.go:82"
time="2018-10-22T21:07:36Z" level=info msg="Enabled collectors:" source="node_exporter.go:90"
time="2018-10-22T21:07:36Z" level=info msg=" - arp" source="node_exporter.go:97"
...

Note that these logs are held in memory. Use tail or other available method parameters to ensure predictable and efficient results.

To simplify even further, you can ask the library to pretty-print the logs for you:

oc.selector(['daemonset', 'deployment']).print_logs()

And to quickly pull together significant diagnostic data on selected objects, use report() or print_report(). A report includes the following information for each selected object, if available:

  • object - The current state of the object.
  • describe - The output of describe on the object.
  • logs - If applicable, a map of logs -- one for each container associated with the object.

# Pretty-print a detailed set of data about all deploymentconfigs, builds, and configmaps in the
# current namespace context.
oc.selector(['dc', 'build', 'configmap']).print_report()

Advanced verbs:

Running oc exec on a pod:

    result = oc.selector('pod/alertmanager-main-0').object().execute(['cat'],
                                                                     container_name='alertmanager',
                                                                     stdin='stdin for cat')
    print(result.out())

Finding all pods running on a node:

with oc.client_host():
    for node_name in oc.selector('nodes').qnames():
        print('Pods running on node: {}'.format(node_name))
        for pod_obj in oc.get_pods_by_node(node_name):
            print('  {}'.format(pod_obj.fqname()))

Example output:

...
Pods running on node: node/ip-172-31-18-183.ca-central-1.compute.internal
  72-sus:pod/sus-1-vgnmx
  ameen-blog:pod/ameen-blog-2-t68qn
  appejemplo:pod/ejemplo-1-txdt7
  axxx:pod/mysql-5-lx2bc
...

Examples

Environment Variables

To allow openshift-client-python applications to be portable between environments without needing to be modified, you can specify many default contexts in the environment.

Defaults when invoking oc

Establishing explicit contexts within an application will override these environment defaults. A usage sketch follows the list below.

  • OPENSHIFT_CLIENT_PYTHON_DEFAULT_OC_PATH - default path to use when invoking oc
  • OPENSHIFT_CLIENT_PYTHON_DEFAULT_CONFIG_PATH - default --kubeconfig argument
  • OPENSHIFT_CLIENT_PYTHON_DEFAULT_API_SERVER - default --server argument
  • OPENSHIFT_CLIENT_PYTHON_DEFAULT_CA_CERT_PATH - default --cacert argument
  • OPENSHIFT_CLIENT_PYTHON_DEFAULT_PROJECT - default --namespace argument
  • OPENSHIFT_CLIENT_PYTHON_DEFAULT_OC_LOGLEVEL - default --loglevel argument
  • OPENSHIFT_CLIENT_PYTHON_DEFAULT_SKIP_TLS_VERIFY - default --insecure-skip-tls-verify
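
For example, a wrapper could point the library at a specific oc binary and kubeconfig. This is a minimal sketch; the paths are placeholders, and the variables are assumed to be read from the environment whenever oc is invoked:

import os

# Placeholder paths; set these before the library invokes oc.
os.environ['OPENSHIFT_CLIENT_PYTHON_DEFAULT_OC_PATH'] = '/usr/local/bin/oc'
os.environ['OPENSHIFT_CLIENT_PYTHON_DEFAULT_CONFIG_PATH'] = '/path/to/kubeconfig'

import openshift_client as oc

print('Client version: {}'.format(oc.get_client_version()))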

Master timeout

Defines an implicit outer timeout(..) context for the entire application. This allows you to ensure that an application terminates within a reasonable time, even if the author of the application has not included explicit timeout contexts. Like any timeout context, this value is not overridden by subsequent timeout contexts within the application; it provides an upper bound for the entire application's oc interactions. A usage sketch follows below.

  • OPENSHIFT_CLIENT_PYTHON_MASTER_TIMEOUT
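
For example, to bound an entire application's oc interactions to 30 minutes (a sketch; the value is in seconds, and 30 minutes is an arbitrary choice):

import os
os.environ['OPENSHIFT_CLIENT_PYTHON_MASTER_TIMEOUT'] = str(30 * 60)  # seconds

import openshift_client as oc
# All oc interactions now occur within an implicit timeout(30 * 60) context.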

SSH Client Host

In some cases, it is desirable to run an openshift-client-python application using a local oc binary; in other cases, the oc binary resides on a remote host. Encoding this decision in the application itself is unnecessary.

Simply wrap your application in a client_host context without arguments. This context will pull client host information from environment variables if they are present. If they are not present, oc will execute on the local host.

For example, the following application will ssh to OPENSHIFT_CLIENT_PYTHON_DEFAULT_SSH_HOSTNAME if it is defined in the environment. Otherwise, oc interactions will be executed on the host running the python application.

with oc.client_host():  # if OPENSHIFT_CLIENT_PYTHON_DEFAULT_SSH_HOSTNAME is not defined in the environment, this is a no-op
    print('Found nodes: {}'.format(oc.selector('nodes').qnames()))

  • OPENSHIFT_CLIENT_PYTHON_DEFAULT_SSH_HOSTNAME - The hostname on which the oc binary resides
  • OPENSHIFT_CLIENT_PYTHON_DEFAULT_SSH_USERNAME - Username to use for the ssh connection (optional)
  • OPENSHIFT_CLIENT_PYTHON_DEFAULT_SSH_PORT - SSH port to use (optional; defaults to 22)
  • OPENSHIFT_CLIENT_PYTHON_DEFAULT_SSH_AUTO_ADD - Defaults to false. If set to true, unknown hosts will automatically be trusted.
  • OPENSHIFT_CLIENT_PYTHON_DEFAULT_LOAD_SYSTEM_HOST_KEYS - Defaults to true. If true, the local known hosts information will be used.


openshift-client-python's Issues

Change/update package namespace

It has come to our attention that the "openshift" namespace is being used by another OpenShift Python project:
openshift -> openshift-restclient-python

Each project provides its own version of __init__.py, each with conflicting setup/configuration. Because of this, these two projects cannot coexist in the same environment.

Opening this issue to re-home our package so that these packages can co-exist.

oc cp: Success but no File

oc.invoke('cp', ('/Users/<user>/Desktop/upload.txt', '{}:/tmp'.format(pod)))

I'm using this Python OpenShift client to upload a file from my local machine to the remote client_host and everything is working as expected. However, when I attempt to invoke a cp from my local machine it shows that the invocation works:

{
    "operation": "tracking",
    "status": 0,
    "actions": [
        {
            "timestamp": 1624989569902,
            "elapsed_time": 1.038727045059204,
            "success": true,
            "status": 0,
            "verb": "cp",
            "cmd": [
                "oc",
                "cp",
                "/Users/<user>/Desktop/upload.txt",
                "<pod-name>:/tmp"
            ],
            "out": "",
            "err": "",
            "in": null,
            "references": {
                ".client_host": "root@<ip-address>"
            },
            "timeout": false,
            "last_attempt": true,
            "internal": false
        }
    ]
}

When I ls on the pod, the file is not there. Any ideas why the invoke command succeeds but there is no sign of the file on the pod?

Catch missing export options in collection

The following snippets have an issue because the export verb in the oc utility does not support the --ignore-not-found flag. A program will fail because objects(exportable=True) will silently pass True for the ignore flag, causing the CLI tool to error out.

if ignore_not_found:
    cmd_args.append("--ignore-not-found")

def objects(self, exportable=False, ignore_not_found=True):
    """
    Returns a python list of APIObject objects that represent the selected resources. An
    empty list is returned if nothing is selected.
    :param exportable: Whether export should be used instead of get.
    :param ignore_not_found: If true, missing named resources will not raise an exception.
    :return: A list of Model objects representing the receiver's selected resources.
    """
    obj = json.loads(self.object_json(exportable, ignore_not_found=ignore_not_found))

A catch as simple as adjusting L374 to if ignore_not_found and not exportable: should suffice (though it'd probably be a good idea to log or print that the flag is being ignored). In the long run (if this tool does see future development), it would be good to see the export verb removed in favor of an --export argument passed to get.

Allow oc exec into statefulset

The 'oc exec' command supports passing a statefulset, and a ready pod from the statefulset will automatically be chosen.
E.g.

oc exec -it statefulset/my-stateful-set -c foo -- env - echo

The openshift-client-python library doesn't appear to support this. If you pass a statefulset, the execute command assumes it's a pod and tries to exec into a pod with the same name as the statefulset.

oc.selector('statefulset/my-stateful-set').object().execute(['ls'], container_name='foo')

...
  openshift.model.OpenShiftPythonException: [Error running statefulset.apps/my-stateful-set exec on ls [rc=1]: Error from server (NotFound): pods "my-stateful-set" not found]
{
    "operation": "exec",
    "status": 1,
    "actions": [
        {
            "timestamp": 1649435656752,
            "elapsed_time": 0.3165004253387451,
            "success": false,
            "status": 1,
            "verb": "exec",
            "cmd": [
                "oc",
                "exec",
                "--namespace=my-project",
                "--container=foo",
                "my-stateful-set",
                "--",
                "ls"
            ],
            "out": "",
            "err": "Error from server (NotFound): pods \"my-stateful-set\" not found\n",
            "in": null,
            "references": {
                ".stack": [
                  ...
                ]
            },
            "timeout": false,
            "last_attempt": true,
            "internal": false
        }
    ]
}                                                    
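
Until this is supported, a possible workaround (a sketch; the label and container name are hypothetical and depend on the statefulset's pod template) is to select one of the statefulset's pods explicitly and exec into it:

# Select the statefulset's pods by label and exec into one of them.
pods = oc.selector('pods', labels={'app': 'my-stateful-set'}).objects()
result = pods[0].execute(['ls'], container_name='foo')
print(result.out())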

oc run

Is there any way to do 'oc run' with this project?

Each log has a head string and a tail string

When users use the print_logs function, each file contains a
start string: [logs:begin]xxxt-xx-xxx-01:pod/xxx-xxxxx-xxxxxxxl-d997f557d-vpf8r->pod/xxx-xxxxxxx-xxxxxxxx- f557d-vpf8r(postgresql)========
and an end string: [logs:end]xxxxx-xx-xxxxx-01:pod/xxx-xxxxxx-xxxxxl-d997f557d-vpf8r->pod/xxx-xxxxxx-xxxxxxx-d997f557d-vpf8r(postgresql)========

This is not convenient. For example, if a log is in JSON format, I can load it into an editor and format it as a JSON file, but these strings confuse the formatter. Generally speaking, I don't see the real role of these strings. They are unconditionally added in the utils module. Can I remove them, or does anybody use these strings?

missing positional argument in start_build()

import openshift as oc
bc = oc.selector('bc/web-devops-bc')
bc.start_build()

Error message:

  File ".\oc_test.py", line 25, in _build_oc
    bc.start_build()
  File "C:\dev\wm-cicd-builder\build\lib\site-packages\openshift\selector.py", line 425, in start_build
    r = Selector()
TypeError: __init__() missing 1 required positional argument: 'high_level_operation'

[Question] sharing context with subprocess

Hello,

is there a way to share a context with the subprocess library? For example:

import openshift as oc
import subprocess

HOST = 'https://localhost:6443'
USER = 'ocuser'
PASSWORD = 'password'
PROJECT = 'my-proj'

with oc.api_server(HOST):
    with oc.login(USER, PASSWORD):
        with oc.project(PROJECT):
            subprocess.run('oc get pods | grep ABCD')

I would like to execute some commands directly from the subprocess library, making sure I'm in the correct server/project context

A way to stream stdout?

We would like to stream the output of the oc command as it completes work. So far, we haven't been able to find a way; we can only get the full log once the command completes. Is there something simple I'm missing? Thanks! If there is a way, could you add an example file and/or doc showing how?

Selector#until_all or Selector#until_any methods do not have a timeout

The until_all method does not have a timeout. Consequently, if the caller does not define a failure function, the method waits indefinitely.

The failure function approach assumes that the caller knows all of the possible issues that could happen with the object. However, this is not always the case.

Would you be open to adding a timeout parameter to the functions?
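
Until such a parameter exists, a workaround (a sketch using the timeout context from the "Time limits" section above) is to bound the wait externally:

import openshift_client as oc

# The enclosing timeout kills the oc invocation if until_all has not
# succeeded within 10 minutes.
with oc.timeout(10 * 60):
    oc.selector('pods').until_all(1, success_func=lambda pod: pod.model.status.phase == 'Running')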

Error: invalid resource name \"pod/xxxx\": [may not contain '/']

Hello,
I found an error when using the following command:

  • oc.selector('pod/mypodname').object().execute(xxx)

I get the error:
"err": "error: invalid resource name "pod/xxxx": [may not contain '/']\n",

When looking for the reason, I found an error in openshift/apiobject.py,
in the function def execute(...):

You have to change the parameter qname() to name().

Have a nice day.

use selector to find objects without a specific label

Hi

So I would like to find all pods in a namespace that do not have a specific label set.
I've been looking at the following piece of documentation, but it doesn't seem to mention it:
https://github.com/openshift/openshift-client-python#selectors

Selectors can also select based on kind and labels.
sa_label_selector = oc.selector("sa", labels={"mylabel":"myvalue"})

When doing this using the oc command, I can do the following:
oc get pod --selector='!mylabel'

However, I can't seem to emulate this using this library.
Can you help me?
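
One possible workaround (a sketch using the pass-through oc.invoke described in the "Something missing?" section above) is to forward the raw CLI selector:

import openshift_client as oc

# Equivalent to: oc get pod --selector='!mylabel' -o name
result = oc.invoke('get', ['pod', '--selector=!mylabel', '-o=name'])
print(result.out())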


image mirror

Please add the ability to execute 'oc image mirror'.
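
In the meantime, the pass-through oc.invoke described in the "Something missing?" section can run it directly (a sketch; the image references are hypothetical placeholders):

# oc image mirror quay.io/myorg/myimage:latest registry.example.com/myorg/myimage:latest
oc.invoke('image', ['mirror',
                    'quay.io/myorg/myimage:latest',
                    'registry.example.com/myorg/myimage:latest'])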

Allow custom subclasses of APIObject for Selector.object() and Selector.objects()

Overview

It would be great if Selector.objects() and Selector.object() would allow specifying which class they should create. This can be useful when you have a custom class on top of APIObject that helps with managing said resource.

Example

oc.selector("pods").objects(class=DeploymentConfig) ~> List[DeploymentConfig]
oc.selector("pods").object(class=DeploymentConfig) ~> DeploymentConfig
oc.selector("pods").object() ~> APIObject

I think I could implement this if you would like this to be added.

Creation of a Route object raises a ResourceNotFoundError exception when the serving.knative.dev/v1 Route api resource is present on the cluster

Issue

When Red Hat OpenShift Serverless is installed and the serving.knative.dev/v1 Route api exists as a resource in the cluster, the v1_routes.create( .. ) method raises the following exception: openshift.dynamic.exceptions.ResourceNotFoundError: No matches found for {'api_version': 'v1', 'kind': 'Route'}.

How to reproduce the issue:

  1. Install Red Hat OpenShift Serverless Operator

  2. Create a KnativeServing object (for example, as follows):

    # cat << EOF  | oc apply -n knative-serving -f -
    apiVersion: operator.knative.dev/v1alpha1
    kind: KnativeServing
    metadata:
      name: knative-serving
      namespace: knative-serving
    spec: {}
    EOF
  3. Check if the KnativeServing Route api exists in the cluster:

    # oc api-resources --verbs=list | grep routes          
    routes                                             route.openshift.io/v1                       true         Route
    routes                            rt               serving.knative.dev/v1                      true         Route
  4. Try to create an OpenShift Route using the openshift-client-python library by executing the following script (adjusting the <variable> values):

    import yaml
    from kubernetes import client, config
    from openshift.dynamic import DynamicClient
    
    k8s_client = config.new_client_from_config()
    dyn_client = DynamicClient(k8s_client)
    
    v1_routes = dyn_client.resources.get(api_version='v1', kind='Route')
    route = """
    kind: Route
    metadata:
      name: test-route
    spec:
      host: test-route.<host>
      port:
        targetPort: 8080-tcp
      to:
        kind: Service
        name:  <service>
        weight: 100
      wildcardPolicy: None
      tls:
        termination: edge
    """
    
    route_data = yaml.load(route)
    resp = v1_routes.create(body=route_data, namespace='default')
    
    # resp is a ResourceInstance object
    print(resp.metadata)

    The execution of the script raises the following exception:

    Traceback (most recent call last):
      File "createroute.py", line 11, in <module>
        v1_routes = dyn_client.resources.get(api_version='v1', kind='Route')
      File "/Users/gmeghnag/Library/Python/3.8/lib/python/site-packages/openshift/dynamic/discovery.py", line 246, in get
        raise ResourceNotFoundError('No matches found for {}'.format(kwargs))
    openshift.dynamic.exceptions.ResourceNotFoundError: No matches found for {'api_version': 'v1', 'kind': 'Route'}

Potential cause

From openshift-restclient-python

def get(self, **kwargs):
    """ Same as search, but will throw an error if there are multiple or no
        results. If there are multiple results and only one is an exact match
        on api_version, that resource will be returned.
    """

Other Information

# oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.15    True        False         134m    Cluster version is 4.6.15

# oc get sub -A
NAMESPACE              NAME                  PACKAGE               SOURCE             CHANNEL
openshift-serverless   serverless-operator   serverless-operator   redhat-operators   stable
