openshift / openshift-client-python
A python library for interacting with OpenShift via the OpenShift client binary.
License: Apache License 2.0
Please add the ability to execute 'image mirror'.
The Context() class allows overriding many options that would otherwise be passed as command-line arguments, such as the token or the server URL, but I cannot find a way to override the --context command-line option, which would make it possible to keep one kubeconfig and reference different contexts within it.
Is there currently a way to do this?
It has come to our attention that the "openshift" namespace is being used by another OpenShift Python project:
openshift -> openshift-restclient-python
Each project provides its own version of __init__.py, each with conflicting setup/configuration. Because of this, these two projects cannot happily co-exist in the same environment.
Opening this issue to re-home our package so that these packages can co-exist.
The following snippets have an issue because the export verb in the oc utility does not include the --ignore-not-found flag. A program will fail because objects(exportable=True) will silently pass True for the ignore flag and cause the CLI tool to error out.
openshift-client-python/packages/openshift/selector.py
Lines 374 to 375 in 4c148e0
openshift-client-python/packages/openshift/selector.py
Lines 412 to 421 in 4c148e0
A fix as simple as adjusting L374 to if ignore_not_found and not exportable: should suffice (though it would probably be a good idea to log or print that the flag is being ignored). In the long run (if this tool sees further development), it would be good to remove the export verb in favor of an --export argument passed to get.
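The proposed guard can be sketched in isolation. `build_get_args` below is a hypothetical helper, not the library's actual code; it only illustrates how the flag would be suppressed whenever the export verb is used:

```python
def build_get_args(exportable=False, ignore_not_found=False):
    """Build an argument list for an `oc get`/`oc export` style call.

    Hypothetical helper illustrating the proposed fix: the export verb
    does not support --ignore-not-found, so the flag must be dropped
    whenever exportable=True.
    """
    verb = "export" if exportable else "get"
    args = [verb]
    # Proposed change: only pass the flag when the verb supports it.
    if ignore_not_found and not exportable:
        args.append("--ignore-not-found")
    return args

print(build_get_args(ignore_not_found=True))                   # ['get', '--ignore-not-found']
print(build_get_args(exportable=True, ignore_not_found=True))  # ['export']
```

A real fix would also log that the flag was requested but ignored, as suggested above.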
Is there any way to do 'oc run' with this project?
The 'oc exec' command supports passing a stateful set, and a ready pod from the stateful set will be chosen automatically.
E.g.
oc exec -it statefulset/my-stateful-set -c foo -- env - echo
openshift-client-python doesn't appear to support this. If you pass a statefulset, the execute command assumes it's a pod and tries to exec into a pod with the same name as the statefulset.
oc.selector('statefulset/my-stateful-set').object().execute(['ls'], container_name='foo')
...
openshift.model.OpenShiftPythonException: [Error running statefulset.apps/my-stateful-set exec on ls [rc=1]: Error from server (NotFound): pods "my-stateful-set" not found]
{
"operation": "exec",
"status": 1,
"actions": [
{
"timestamp": 1649435656752,
"elapsed_time": 0.3165004253387451,
"success": false,
"status": 1,
"verb": "exec",
"cmd": [
"oc",
"exec",
"--namespace=my-project",
"--container=foo",
"my-stateful-set",
"--",
"ls"
],
"out": "",
"err": "Error from server (NotFound): pods \"my-stateful-set\" not found\n",
"in": null,
"references": {
".stack": [
...
]
},
"timeout": false,
"last_attempt": true,
"internal": false
}
]
}
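Until the library resolves `statefulset/...` targets itself, one workaround is to pick a ready pod from the statefulset's label selector and exec into that pod instead. The pod-picking logic can be sketched with plain dicts; the simplified pod structures below are hypothetical stand-ins for the library's APIObject models:

```python
def pick_ready_pod(pods, match_labels):
    """Return the name of the first ready pod whose labels match match_labels.

    pods: simplified pod dicts (name, labels, ready) standing in for APIObjects;
    match_labels: the statefulset's spec.selector.matchLabels.
    """
    for pod in pods:
        labels_ok = all(pod["labels"].get(k) == v for k, v in match_labels.items())
        if labels_ok and pod["ready"]:
            return pod["name"]
    raise LookupError("no ready pod matches {}".format(match_labels))

pods = [
    {"name": "my-stateful-set-0", "labels": {"app": "db"}, "ready": False},
    {"name": "my-stateful-set-1", "labels": {"app": "db"}, "ready": True},
]
print(pick_ready_pod(pods, {"app": "db"}))  # my-stateful-set-1
```

Against a real cluster, the same idea would be selecting pods by the statefulset's labels and calling execute() on the chosen pod object rather than on the statefulset.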
It would be great if Selector.objects() and Selector.object() allowed specifying which class they should instantiate. This can be useful when you have a custom class on top of APIObject that helps with managing the resource in question (spelled cls rather than class below, since class is a reserved word in Python):
oc.selector("pods").objects(cls=DeploymentConfig) ~> List[DeploymentConfig]
oc.selector("pods").object(cls=DeploymentConfig) ~> DeploymentConfig
oc.selector("pods").object() ~> APIObject
I think I could implement this if you would like it to be added.
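The requested behaviour can be approximated today by re-wrapping the returned objects yourself. The sketch below uses stand-in classes (APIObject here is a minimal stub, not the library's class, and DeploymentConfig is a hypothetical user subclass) to show the idea:

```python
class APIObject:
    """Minimal stub standing in for openshift.apiobject.APIObject."""
    def __init__(self, model):
        self.model = model

class DeploymentConfig(APIObject):
    """Hypothetical user subclass adding resource-specific helpers."""
    def replica_count(self):
        return self.model.get("spec", {}).get("replicas", 0)

def objects_as(raw_objects, cls=APIObject):
    """Re-wrap objects in a user-supplied subclass, as objects(cls=...) would."""
    return [cls(obj.model) for obj in raw_objects]

raw = [APIObject({"spec": {"replicas": 3}})]
wrapped = objects_as(raw, cls=DeploymentConfig)
print(type(wrapped[0]).__name__, wrapped[0].replica_count())  # DeploymentConfig 3
```

Building this into Selector.objects()/object() would just move the re-wrapping step inside the library.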
When Red Hat OpenShift Serverless is installed and the serving.knative.dev/v1 Route API exists as a resource in the cluster, the v1_routes.create(..) method raises the following exception: openshift.dynamic.exceptions.ResourceNotFoundError: No matches found for {'api_version': 'v1', 'kind': 'Route'}.
Install the Red Hat OpenShift Serverless Operator.
Create a KnativeServing object (for example:)
# cat << EOF | oc apply -n knative-serving -f -
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
name: knative-serving
namespace: knative-serving
spec: {}
EOF
Check that the KnativeServing Route API exists in the cluster:
# oc api-resources --verbs=list | grep routes
routes route.openshift.io/v1 true Route
routes rt serving.knative.dev/v1 true Route
Try to create an OpenShift Route using the openshift-client-python library by executing the following script (adjusting the <variable> placeholders):
import yaml
from kubernetes import client, config
from openshift.dynamic import DynamicClient
k8s_client = config.new_client_from_config()
dyn_client = DynamicClient(k8s_client)
v1_routes = dyn_client.resources.get(api_version='v1', kind='Route')
route = """
kind: Route
metadata:
name: test-route
spec:
host: test-route.<host>
port:
targetPort: 8080-tcp
to:
kind: Service
name: <service>
weight: 100
wildcardPolicy: None
tls:
termination: edge
"""
route_data = yaml.safe_load(route)
resp = v1_routes.create(body=route_data, namespace='default')
# resp is a ResourceInstance object
print(resp.metadata)
Executing the script raises the following exception:
Traceback (most recent call last):
File "createroute.py", line 11, in <module>
v1_routes = dyn_client.resources.get(api_version='v1', kind='Route')
File "/Users/gmeghnag/Library/Python/3.8/lib/python/site-packages/openshift/dynamic/discovery.py", line 246, in get
raise ResourceNotFoundError('No matches found for {}'.format(kwargs))
openshift.dynamic.exceptions.ResourceNotFoundError: No matches found for {'api_version': 'v1', 'kind': 'Route'}
From openshift-restclient-python
def get(self, **kwargs):
""" Same as search, but will throw an error if there are multiple or no
results. If there are multiple results and only one is an exact match
on api_version, that resource will be returned.
"""
# oc get clusterversion
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version 4.6.15 True False 134m Cluster version is 4.6.15
# oc get sub -A
NAMESPACE NAME PACKAGE SOURCE CHANNEL
openshift-serverless serverless-operator serverless-operator redhat-operators stable
If you create an APIObject with a model that is missing namespace, it will default to the default namespace instead of whatever namespace the context has.
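The expected precedence can be sketched as a small resolution function. `resolve_namespace` is a hypothetical illustration of the behaviour being asked for (model value first, then context value, then "default"), not the library's code:

```python
def resolve_namespace(model_namespace, context_namespace):
    """Proposed precedence: explicit model value, then context, then 'default'."""
    if model_namespace:
        return model_namespace
    if context_namespace:
        return context_namespace
    return "default"

# The reported bug is that the context value is skipped:
print(resolve_namespace(None, "my-project"))           # my-project
print(resolve_namespace("explicit-ns", "my-project"))  # explicit-ns
```

The reported behaviour effectively jumps straight from the missing model value to "default".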
When users use the print_log function, each file contains
a start string [logs:begin]xxxt-xx-xxx-01:pod/xxx-xxxxx-xxxxxxxl-d997f557d-vpf8r->pod/xxx-xxxxxxx-xxxxxxxx- f557d-vpf8r(postgresql)========
and an end string [logs:end]xxxxx-xx-xxxxx-01:pod/xxx-xxxxxx-xxxxxl-d997f557d-vpf8r->pod/xxx-xxxxxx-xxxxxxx-d997f557d-vpf8r(postgresql)========
This is not convenient. For example, if a log is in JSON format, I can load it into an editor and format it as a JSON file, but these strings confuse the formatter. Frankly, I don't see the real role of these strings; they are unconditionally added in the utils module. Can I remove them, or does anybody rely on them?
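Until the markers are made optional, one workaround is to strip them after the fact. The sketch below filters out the `[logs:begin]`/`[logs:end]` delimiter lines so the remaining content can be fed to a JSON formatter (the sample log lines are illustrative):

```python
def strip_log_markers(lines):
    """Drop the [logs:begin]... and [logs:end]... delimiter lines print_log adds."""
    return [
        line for line in lines
        if not line.startswith("[logs:begin]") and not line.startswith("[logs:end]")
    ]

raw = [
    "[logs:begin]cluster:pod/db-0->pod/db-0(postgresql)========",
    '{"level": "info", "msg": "ready"}',
    "[logs:end]cluster:pod/db-0->pod/db-0(postgresql)========",
]
print(strip_log_markers(raw))  # ['{"level": "info", "msg": "ready"}']
```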
We would like to stream the output of the oc command as it does its work. So far, we haven't been able to find a way; we can only get the full log once the command completes. Is there something simple I'm missing? Thanks! If there is a way, could you add an example file and/or doc showing how?
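Assuming the library itself only returns output on completion, one workaround is to run oc directly with subprocess and consume stdout line by line. The command in the demo is a placeholder; with a cluster it might be something like ['oc', 'logs', '-f', 'pod/my-pod']:

```python
import subprocess
import sys

def stream_lines(cmd):
    """Yield each stdout line of cmd as the process produces it."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    try:
        for line in proc.stdout:
            yield line.rstrip("\n")
    finally:
        proc.stdout.close()
        proc.wait()

# Placeholder command that prints two lines; substitute a real oc invocation.
for line in stream_lines([sys.executable, "-c", "print('line-1'); print('line-2')"]):
    print(line)
```

Because the generator yields as soon as each line arrives, a follow command like `oc logs -f` would be displayed incrementally rather than all at once at exit.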
Hi,
I would like to find all pods in a namespace that do not have a specific label set.
I've been looking at the following piece of documentation, but it doesn't seem to mention it:
https://github.com/openshift/openshift-client-python#selectors
Selectors can also select based on kind and labels.
sa_label_selector = oc.selector("sa", labels={"mylabel":"myvalue"})
When doing this with the oc command, I can do the following:
oc get pod --selector='!mylabel'
However, I can't seem to emulate this using this library.
Can you help me?
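The oc-level syntax for "label not set" is the `!label` form shown above. If the library's labels= dict cannot express it, one workaround is to build the raw selector string yourself and pass it through as an explicit --selector argument. A hypothetical helper for building such strings (the None-means-absent convention is this sketch's own):

```python
def build_label_selector(labels):
    """Turn a dict into a kubectl/oc label selector string.

    A value of None means 'label must not be set' (the !label form);
    a string value means equality.
    """
    parts = []
    for key, value in labels.items():
        if value is None:
            parts.append("!" + key)
        else:
            parts.append("{}={}".format(key, value))
    return ",".join(parts)

print(build_label_selector({"mylabel": None}))             # !mylabel
print(build_label_selector({"app": "web", "tier": None}))  # app=web,!tier
```

The resulting string is exactly what `oc get pod --selector='...'` accepts on the command line.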
Please help; I don't see these common resources mentioned in your documentation.
Hi,
the base args should be --replicas, not --scale.
Getting the following error:
Error: unknown flag: --scale
import openshift as oc
bc = oc.selector('bc/web-devops-bc')
bc.start_build()
Error message:
File ".\oc_test.py", line 25, in _build_oc
bc.start_build()
File "C:\dev\wm-cicd-builder\build\lib\site-packages\openshift\selector.py", line 425, in start_build
r = Selector()
TypeError: __init__() missing 1 required positional argument: 'high_level_operation'
Hello,
is there a way to share a context with the subprocess library? For example:
import openshift as oc
import subprocess
HOST = 'https://localhost:6443'
USER = 'ocuser'
PASSWORD = 'password'
PROJECT = 'my-proj'
with oc.api_server(HOST):
    with oc.login(USER, PASSWORD):
        with oc.project(PROJECT):
            subprocess.run('oc get pods | grep ABCD', shell=True)
I would like to execute some commands directly via the subprocess library while making sure I'm in the correct server/project context.
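If the goal is for child processes to see the same cluster and project, one approach that is independent of the library's context managers is to point the subprocess at an explicit kubeconfig via the KUBECONFIG environment variable. The helper and the config path below are illustrative assumptions:

```python
import os
import subprocess
import sys

def run_with_kubeconfig(cmd, kubeconfig_path):
    """Run cmd (argument list or shell string) with KUBECONFIG injected."""
    env = dict(os.environ)
    env["KUBECONFIG"] = kubeconfig_path
    return subprocess.run(cmd, env=env, shell=isinstance(cmd, str),
                          capture_output=True, text=True)

# Demo: the child process sees the injected KUBECONFIG.
result = run_with_kubeconfig(
    [sys.executable, "-c", "import os; print(os.environ['KUBECONFIG'])"],
    "/tmp/my-proj.kubeconfig",
)
print(result.stdout.strip())  # /tmp/my-proj.kubeconfig
```

With a cluster available, the same helper could run the shell string 'oc get pods | grep ABCD', and oc would resolve the server and namespace from that kubeconfig.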
oc.invoke('cp', ('/Users/<user>/Desktop/upload.txt', '{}:/tmp'.format(pod)))
I'm using this Python OpenShift client to upload a file from my local machine to the remote client_host, and everything works as expected. However, when I attempt to invoke a cp from my local machine, the invocation appears to succeed:
{
"operation": "tracking",
"status": 0,
"actions": [
{
"timestamp": 1624989569902,
"elapsed_time": 1.038727045059204,
"success": true,
"status": 0,
"verb": "cp",
"cmd": [
"oc",
"cp",
"/Users/<user>/Desktop/upload.txt",
"<pod-name>:/tmp"
],
"out": "",
"err": "",
"in": null,
"references": {
".client_host": "root@<ip-address>"
},
"timeout": false,
"last_attempt": true,
"internal": false
}
]
}
When I ls on the pod, the file isn't there. Any ideas why the invoke command succeeds but there's no sign of the file on the pod?
For example, there is already https://github.com/openshift/openshift-restclient-python, and I can't imagine why there are two of them or which one I should use.
Hello,
I found an error when using the following command:
I get the error:
"err": "error: invalid resource name "pod/xxxx": [may not contain '/']\n",
When looking into the reason, I found an error in openshift/apiobject.py, in the function def execute(...): the parameter qname() has to be changed to name().
Have a nice day.
The until_all method does not have a timeout. Consequently, if the caller does not define a failure function, the method waits indefinitely.
The failure function approach assumes that the caller knows all of the possible issues that could happen with the object. However, this is not always the case.
Would you be open to adding a timeout parameter to the functions?
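A timeout can be layered on top today with a deadline-based poll loop. `wait_until` below is a hypothetical wrapper sketching the proposed parameter, not the library's API:

```python
import time

def wait_until(predicate, timeout_seconds, poll_interval=1.0):
    """Poll predicate until it returns True or the deadline passes.

    Returns True on success, False on timeout -- a sketch of the timeout
    parameter proposed for until_all-style helpers.
    """
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(poll_interval)
    return False

# Demo conditions; a real caller would inspect object status inside the lambda.
print(wait_until(lambda: True, timeout_seconds=5))   # True
print(wait_until(lambda: False, timeout_seconds=0))  # False
```

Returning False on timeout (rather than raising) lets the caller decide whether an incomplete wait is fatal, which fits cases where not every possible failure mode is known up front.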