
kube-rs

Crates.io Rust 1.75 Tested against Kubernetes v1_24 and above Best Practices Discord chat

A Rust client for Kubernetes in the style of a more generic client-go, a runtime abstraction inspired by controller-runtime, and a derive macro for CRDs inspired by kubebuilder. Hosted by CNCF as a Sandbox Project.

These crates build upon Kubernetes apimachinery + api concepts to enable generic abstractions. These abstractions allow Rust reinterpretations of reflectors, controllers, and custom resource interfaces, so that you can write applications easily.

Installation

Select a version of kube along with the generated k8s-openapi structs at your chosen Kubernetes version:

[dependencies]
kube = { version = "0.90.0", features = ["runtime", "derive"] }
k8s-openapi = { version = "0.21.1", features = ["latest"] }

A full list of optional features is available in the documentation.

Upgrading

Please check the CHANGELOG when upgrading. All crates herein are versioned and released together to guarantee compatibility before 1.0.

Usage

See the examples directory for how to use any of these crates.

For real-world projects, see ADOPTERS.

Api

The Api is what interacts with Kubernetes resources, and is generic over Resource:

use k8s_openapi::api::core::v1::Pod;
use kube::api::{Api, DeleteParams, Patch, PatchParams};
use serde_json::json;

let pods: Api<Pod> = Api::default_namespaced(client);

let p = pods.get("blog").await?;
println!("Got blog pod with containers: {:?}", p.spec.unwrap().containers);

let patch = json!({"spec": {
    "activeDeadlineSeconds": 5
}});
let pp = PatchParams::apply("kube");
let patched = pods.patch("blog", &pp, &Patch::Apply(patch)).await?;
assert_eq!(patched.spec.unwrap().active_deadline_seconds, Some(5));

pods.delete("blog", &DeleteParams::default()).await?;

See the examples ending in _api for more detail.

Custom Resource Definitions

Working with custom resources uses automatic code-generation via proc_macros in kube-derive.

You need to #[derive(CustomResource)] and some #[kube(attrs..)] on a spec struct:

use kube::CustomResource;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};

#[derive(CustomResource, Debug, Serialize, Deserialize, Default, Clone, JsonSchema)]
#[kube(group = "kube.rs", version = "v1", kind = "Document", namespaced)]
pub struct DocumentSpec {
    title: String,
    content: String,
}

Then you can use the generated wrapper struct Document as a kube::Resource:

let docs: Api<Document> = Api::default_namespaced(client);
let d = Document::new("guide", DocumentSpec::default());
println!("doc: {:?}", d);
println!("crd: {:?}", serde_yaml::to_string(&Document::crd()));

There are a ton of kubebuilder-like instructions that you can annotate with here. See the documentation or the crd_ prefixed examples for more.

NB: #[derive(CustomResource)] requires the derive feature enabled on kube.

Runtime

The runtime module exports the kube_runtime crate and contains higher level abstractions on top of the Api and Resource types so that you don't have to do all the watch/resourceVersion/storage book-keeping yourself.

Watchers

A low level streaming interface (similar to informers) that presents Applied, Deleted or Restarted events.

use kube::runtime::{watcher, watcher::Config, WatchStreamExt};

let api = Api::<Pod>::default_namespaced(client);
let stream = watcher(api, Config::default()).applied_objects();

This gives a continual stream of events, without you having to care about the watch restarting or connections dropping.

pin_mut!(stream); // requires futures::{pin_mut, TryStreamExt}; the stream must be pinned before polling
while let Some(event) = stream.try_next().await? {
    println!("Applied: {}", event.name_any()); // name_any comes from kube::ResourceExt
}

NB: the plain items in a watcher stream are different from WatchEvent. If you are following along to "see what changed", you should flatten it with one of the utilities from WatchStreamExt, such as applied_objects.
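
If you do want the raw events instead of a flattened stream, a minimal sketch (assuming the kube 0.90-era Event variant names used above):

use futures::{pin_mut, TryStreamExt};
use kube::runtime::watcher::Event;

let raw = watcher(api, Config::default());
pin_mut!(raw);
while let Some(ev) = raw.try_next().await? {
    match ev {
        Event::Applied(pod) => println!("applied: {}", pod.name_any()),
        Event::Deleted(pod) => println!("deleted: {}", pod.name_any()),
        Event::Restarted(pods) => println!("watch restarted with {} objects", pods.len()),
    }
}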

Reflectors

A reflector is a watcher with a Store on K. It acts on all the Event<K> exposed by watcher to keep the state in the Store as accurate as possible.

let nodes: Api<Node> = Api::all(client);
let lp = Config::default().labels("kubernetes.io/arch=amd64");
let (reader, writer) = reflector::store();
let rf = reflector(writer, watcher(nodes, lp));

At this point you can listen to the reflector as if it were a watcher, but you can also query the reader at any point.
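
For example, a minimal sketch of querying the reader via the Store's wait_until_ready and state methods:

// wait for the store to be populated by the reflector, then inspect it
reader.wait_until_ready().await?;
for node in reader.state() {
    println!("cached node: {}", node.name_any());
}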

Controllers

A Controller is a reflector along with an arbitrary number of watchers that schedule events internally, sending them through a reconciler:

Controller::new(root_kind_api, Config::default())
    .owns(child_kind_api, Config::default())
    .run(reconcile, error_policy, context)
    .for_each(|res| async move {
        match res {
            Ok(o) => info!("reconciled {:?}", o),
            Err(e) => warn!("reconcile failed: {}", Report::from(e)),
        }
    })
    .await;

Here reconcile and error_policy refer to functions you define. The first will be called when the root or child elements change, and the second when the reconciler returns an Err.
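
A minimal sketch of what those two functions look like (Document, Ctx, and Error here are placeholder types for your resource, shared context, and error enum):

use std::{sync::Arc, time::Duration};
use kube::runtime::controller::Action;

async fn reconcile(obj: Arc<Document>, ctx: Arc<Ctx>) -> Result<Action, Error> {
    // apply the desired state for obj here, then schedule a periodic re-check
    Ok(Action::requeue(Duration::from_secs(300)))
}

fn error_policy(obj: Arc<Document>, err: &Error, ctx: Arc<Ctx>) -> Action {
    // back off before retrying a failed reconcile
    Action::requeue(Duration::from_secs(60))
}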

TLS

By default rustls is used for TLS, but openssl is supported. To switch, turn off default-features, and enable the openssl-tls feature:

[dependencies]
kube = { version = "0.90.0", default-features = false, features = ["client", "openssl-tls"] }
k8s-openapi = { version = "0.21.0", features = ["latest"] }

This will pull in openssl and hyper-openssl. If default-features is left enabled, you will pull in two TLS stacks, and the default will remain as rustls.

musl-libc

Kube will work with distroless, scratch, and alpine (it's also possible to use alpine as a builder with some caveats).

License

Apache 2.0 licensed. See LICENSE for details.


kube's Issues

Make a typed Api struct

Currently, the user is expected to figure out the return types via type inference. Have a typed wrapper around Api (and probably change the names so that this is the default) so users don't have to:

let req = foos.patch("baz", &pp, serde_json::to_vec(&patch)?)?;
let o = client.request::<Object<FooSpec, FooStatus>>(req)?;

but can just

let o = foos.patch("baz", &pp, serde_json::to_vec(&patch)?)?;

That does involve the user having to create the Api with the desired output structs up front though, and passing the client into the Api:

let api : Api<FooSpec, FooStatus> = Api::customResource("foos")
    .version("v1")
    .group("clux.dev")
    .within("dev")
    .client(client);

And it might cause us to have to think about types even more. In the first case we probably have to always wrap .status in an Option, because creating an instance of a crd does not return a status object (at least if you didn't set one). Maybe this can be fixed with openapi defaults, but that doesn't sound like a nice solution.
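
For illustration, a hypothetical sketch of the typed shape with an optional status (field layout is an assumption, not the final design):

pub struct Object<P, U> {
    pub apiVersion: Option<String>,
    pub kind: Option<String>,
    pub metadata: Metadata,
    pub spec: P,
    pub status: Option<U>, // absent on freshly created objects
}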

`Error(BlockingClientInFutureContext,` when using api from within actix handler

Minimal code that causes this error is here, https://gist.github.com/vigneshsarma/426074f7e5bf76518be302bc4d8ff4ca

I am running on macos and kubernetes server is minikube and version:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T18:55:03Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:36:19Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

Sample run:

$ RUST_BACKTRACE=1 POD_NAMESPACE=test cargo run
    Finished dev [unoptimized + debuginfo] target(s) in 0.62s
     Running `target/debug/foo`
Jobs: "test-0-job"
Jobs: "test-1-job"
thread 'actix-rt:worker:0' panicked at 'called `Result::unwrap()` on an `Err` value: Error { inner: Error(BlockingClientInFutureContext, "https://192.168.198.128:8443/apis/batch/v1/namespaces/test/jobs?")

Error executing request }', src/libcore/result.rs:999:5
note: Run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
Panic in Arbiter thread, shutting down system.

Invoke the endpoint by calling curl http://localhost:6666/foo/.

Full stacktrace is in the gist.

As you can see, the ls_jobs call works outside the actix web server context, while the exact same call fails inside.

Any idea what might be wrong?

Export Config API Structs from Crate

Currently API types defined in config/apis.rs are not exported from the crate. Would like to see them exported.

I am currently writing a utility to manage some kubeconfig files. Instead of redefining the types myself, would be great if I can simply use the types exported by this library.

Optional rustls support

Some Rust users in the community are not interested in having OpenSSL as a dependency. Especially just for the support of loading certificate/key data from kubeconfig files. reqwest already supports rustls as an optional feature instead of using native-tls. You would simply need to switch the cert/key loading to only use reqwest Certificate/Identity APIs, and potentially rustls::internal::pemfile if necessary, rather than the openssl crate.

Better logging of api calls from client interface

Would like to have something akin to kubectl VERB RESOURCE -v=9 where it prints the corresponding curl command with all headers at trace level:

curl -k -v -XGET  -H "Accept: application/json" -H "User-Agent: kubectl/v1.13.3 (linux/amd64) kubernetes/721bfa7" -H "Authorization: Bearer TOKENHERE" 'https://MYCLUSTER/apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/foos.clux.dev'
GET https://MYCLUSTER/apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/foos.clux.dev 200 OK in 17 milliseconds
Response Headers:
    Date: Mon, 27 May 2019 00:14:51 GMT
    Audit-Id: 8cf2d917-b232-4ec6-9798-090c9c3b051c
    Content-Type: application/json
    Content-Length: 1445
Response Body: {"kind":"CustomResourceDefinition","apiVersion":"apiextensions.k8s.io/v1beta1","metadata":{"name":"foos.clux.dev","selfLink":"/apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/foos.clux.dev","uid":"700afb9e-8014-11e9-9779-02a8860543ec","resourceVersion":"75406382","generation":1,"creationTimestamp":"2019-05-27T00:14:47Z","annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apiextensions.k8s.io/v1beta1\",\"kind\":\"CustomResourceDefinition\",\"metadata\":{\"annotations\":{},\"name\":\"foos.clux.dev\"},\"spec\":{\"group\":\"clux.dev\",\"names\":{\"kind\":\"Foo\",\"listKind\":\"FooList\",\"plural\":\"foos\",\"singular\":\"foo\"},\"scope\":\"Namespaced\",\"version\":\"v1\",\"versions\":[{\"name\":\"v1\",\"served\":true,\"storage\":true}]}}\n"}},"spec":{"group":"clux.dev","version":"v1","names":{"plural":"foos","singular":"foo","kind":"Foo","listKind":"FooList"},"scope":"Namespaced","versions":[{"name":"v1","served":true,"storage":true}],"conversion":{"strategy":"None"}},"status":{"conditions":[{"type":"NamesAccepted","status":"True","lastTransitionTime":"2019-05-27T00:14:47Z","reason":"NoConflicts","message":"no conflicts found"},{"type":"Established","status":"True","lastTransitionTime":null,"reason":"InitialNamesAccepted","message":"the initial names have been accepted"}],"acceptedNames":{"plural":"foos","singular":"foo","kind":"Foo","listKind":"FooList"},"storedVersions":["v1"]}}

We're using client.execute now rather than send, so there SHOULD be more info, but the default headers set in the client module do not seem to be printed.

Canonical Reason is probably also useful here.

implement Copy for Client?

We always clone this for passing between threads/informers anyway, so we might as well hide the clone call.

Allow handling special verbs/resources

There's a couple of special resources that need to be handled in less generic ways. E.g. just on pods alone:

pods/attach
pods/portforward
pods/eviction
pods/exec
pods/log

and maybe some node specific stuff. Don't see drain + cordon in the openapi.
We could allow customising these, but maybe the list is short enough to just have it as a feature.

Third party provider support for out of cluster config

Certainly nice to have so we don't have to impersonate service accounts with their tokens directly outside the cluster.

I can see there's some partial support upstream, and will try to port it. That said:

It would be really awesome to have a dedicated crate for the kubeconfig. Preferably one that doesn't export a http client to use with api calls. That way this library can be purely an API abstraction and a completely disjoint approach to ynqa's version.

easy way of setting conditions

Conditions are kind of core to .status objects even if there was great uncertainty around them.

Need to investigate what the best way of doing this is. Atm it's probably just a patch_status call, but it's possible that the conditions struct we tell people to use needs a particular serde annotation to fit in correctly (the type keyword is used as a map key).
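
For illustration, a sketch of what such a conditions struct might need (the rename works around type being a Rust keyword; assumes serde derives in scope):

#[derive(Serialize, Deserialize, Clone, Debug)]
pub struct Condition {
    #[serde(rename = "type")]
    pub type_: String,
    pub status: String,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub reason: Option<String>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub message: Option<String>,
}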

version prefix ResourceTypes

Rather than exposing ResourceType::Node, we should expose ResourceType::v1Node, to have a sensible interface to apiVersions.

It would require us actually having done #8 first though.

ca load error

~/.kube/config content

apiVersion: v1
clusters:
- cluster:
    server: https://<api-server>:8443
  name: <cluster_name>
contexts:
- context:
    cluster: <cluster_name>
    namespace: default
    user: ops/<cluster_name>
  name: default/<cluster_name>/ops
current-context: default/<cluster_name>/ops
kind: Config
preferences: {}
users:
- name: ops/<cluster_name>
  user:
    token: <token>

backtrace information

thread 'main' panicked at 'failed to load kubernetes config: ErrorMessage { msg: "Failed to get data/file with base64 format" }

stack backtrace:
   0: backtrace::backtrace::libunwind::trace::h8de4fdf219a770b7 (0x107ad819d)
             at /Users/liber/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.32/src/backtrace/libunwind.rs:88
      backtrace::backtrace::trace_unsynchronized::ha6afc095d9332ab9
             at /Users/liber/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.32/src/backtrace/mod.rs:66
   1: backtrace::backtrace::trace::h4fac325b7e05066a (0x107ad8123)
             at /Users/liber/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.32/src/backtrace/mod.rs:53
   2: backtrace::capture::Backtrace::create::h7b587b6a1d993b62 (0x107acbc47)
             at /Users/liber/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.32/src/capture.rs:165
   3: backtrace::capture::Backtrace::new_unresolved::h34e2772506327d29 (0x107acbbc8)
             at /Users/liber/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.32/src/capture.rs:159
   4: failure::backtrace::internal::InternalBacktrace::new::hb9d17a2bf6aee8db (0x107ac9940)
             at /Users/liber/.cargo/registry/src/github.com-1ecc6299db9ec823/failure-0.1.5/src/backtrace/internal.rs:44
   5: failure::backtrace::Backtrace::new::h1031a9f1486e8c9a (0x107aca7ef)
             at /Users/liber/.cargo/registry/src/github.com-1ecc6299db9ec823/failure-0.1.5/src/backtrace/mod.rs:111
   6: <failure::error::error_impl::ErrorImpl as core::convert::From<F>>::from::h7e1a433a37ac03a7 (0x107503b6d)
             at /Users/liber/.cargo/registry/src/github.com-1ecc6299db9ec823/failure-0.1.5/src/error/error_impl.rs:19
   7: <failure::error::Error as core::convert::From<F>>::from::hcb6d1cf3345130d1 (0x1074d7f88)
             at /Users/liber/.cargo/registry/src/github.com-1ecc6299db9ec823/failure-0.1.5/src/error/mod.rs:36
   8: failure::error_message::err_msg::h19bb9fd7e57228fe (0x107502de0)
             at /Users/liber/.cargo/registry/src/github.com-1ecc6299db9ec823/failure-0.1.5/src/error_message.rs:12
   9: kube::config::utils::data_or_file_with_base64::hb84a4552025028d7 (0x107500f2f)
             at /Users/liber/.cargo/registry/src/github.com-1ecc6299db9ec823/kube-0.10.0/src/config/utils.rs:34
  10: kube::config::apis::Cluster::load_certificate_authority::h73c27af072a2c87f (0x1074bed0b)
             at /Users/liber/.cargo/registry/src/github.com-1ecc6299db9ec823/kube-0.10.0/src/config/apis.rs:118
  11: kube::config::kube_config::KubeConfigLoader::ca::h7e4f13e496e7b9e9 (0x1074fd83d)
             at /Users/liber/.cargo/registry/src/github.com-1ecc6299db9ec823/kube-0.10.0/src/config/kube_config.rs:59
  12: kube::config::load_kube_config::h076b913f2d127e18 (0x1074e2689)
             at /Users/liber/.cargo/registry/src/github.com-1ecc6299db9ec823/kube-0.10.0/src/config/mod.rs:47
  13: controller::main::hd20ffe3bd6b3e6cb (0x10749d396)
             at src/main.rs:22
  14: std::rt::lang_start::{{closure}}::ha8e06417b2f85ecb (0x10749d202)
             at /rustc/5d20ff4d2718c820632b38c1e49d4de648a9810b/src/libstd/rt.rs:64
  15: {{closure}} (0x107cec488)
             at src/libstd/rt.rs:49
      do_call<closure,i32>
             at src/libstd/panicking.rs:293
  16: __rust_maybe_catch_panic (0x107cf059f)
             at src/libpanic_unwind/lib.rs:85
  17: try<i32,closure> (0x107cecf6e)
             at src/libstd/panicking.rs:272
      catch_unwind<closure,i32>
             at src/libstd/panic.rs:388
      lang_start_internal
             at src/libstd/rt.rs:48
  18: std::rt::lang_start::hfac9a9dfbdf902c2 (0x10749d1e2)
             at /rustc/5d20ff4d2718c820632b38c1e49d4de648a9810b/src/libstd/rt.rs:64
  19: controller::main::hd20ffe3bd6b3e6cb (0x10749d482)', src/libcore/result.rs:999:5
stack backtrace:
   0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
             at src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:39
   1: std::sys_common::backtrace::_print
             at src/libstd/sys_common/backtrace.rs:71
   2: std::panicking::default_hook::{{closure}}
             at src/libstd/sys_common/backtrace.rs:59
             at src/libstd/panicking.rs:197
   3: std::panicking::default_hook
             at src/libstd/panicking.rs:211
   4: <std::panicking::begin_panic::PanicPayload<A> as core::panic::BoxMeUp>::get
             at src/libstd/panicking.rs:474
   5: std::panicking::continue_panic_fmt
             at src/libstd/panicking.rs:381
   6: std::panicking::try::do_call
             at src/libstd/panicking.rs:308
   7: <T as core::any::Any>::type_id
             at src/libcore/panicking.rs:85
   8: core::result::unwrap_failed
             at /rustc/5d20ff4d2718c820632b38c1e49d4de648a9810b/src/libcore/macros.rs:18
   9: core::result::Result<T,E>::expect
             at /rustc/5d20ff4d2718c820632b38c1e49d4de648a9810b/src/libcore/result.rs:827
  10: controller::main
             at src/main.rs:22
  11: std::rt::lang_start::{{closure}}
             at /rustc/5d20ff4d2718c820632b38c1e49d4de648a9810b/src/libstd/rt.rs:64
  12: std::panicking::try::do_call
             at src/libstd/rt.rs:49
             at src/libstd/panicking.rs:293
  13: panic_unwind::dwarf::eh::read_encoded_pointer
             at src/libpanic_unwind/lib.rs:85
  14: std::panicking::update_count_then_panic
             at src/libstd/panicking.rs:272
             at src/libstd/panic.rs:388
             at src/libstd/rt.rs:48
  15: std::rt::lang_start
             at /rustc/5d20ff4d2718c820632b38c1e49d4de648a9810b/src/libstd/rt.rs:64
  16: controller::main

my certificate authority is not private, so there is no CA info in ~/.kube/config

Merge ApiStatus + ErrorResponse

Follow up from #36
Currently Status and ErrorResponse are the same underlying object in kube AFAICT, but are returned in different contexts:

  • ErrorResponse when !request.is_success() (and you need to grab it out of the Error)
  • Status when request.is_success() and is_right()

We should standardise this to reduce duplication.

Reflector key conflict across namespaces

Hi!

I was using this library and I found that Reflector indexes the objects by name. If a reflector is used without the within method, then since keys are indexed by name only, objects whose names are shared among distinct namespaces get overwritten in the cache.

For example, with the given code:

    let config = kube_config::load_kube_config().expect("failed to load kubeconfig");
    let client = APIClient::new(config);
    let resource = Api::v1Pod(client);
    let rf = Reflector::new(resource).init().unwrap();

    let cache = rf.read().unwrap();
    for c in cache.iter() {
        println!("KEY: {} {:?}", c.0, c.1.metadata.namespace);
    } 

I get the following results:

KEY: hello-node Some("default")
KEY: hello-node-55b49fb9f8-dcfdp Some("default")
KEY: hello-node-never-restart Some("test")

With kubectl get pod --all-namespaces I get:

default       hello-node                         0/1     CrashLoopBackOff   236        10d
default       hello-node-55b49fb9f8-dcfdp        1/1     Running            0          11d
default       hello-node-never-restart           1/1     Running            0          10d
test          hello-node-never-restart           1/1     Running            0          33m

I was expecting this cache to be indexed by both name and namespace, which I think is a unique identifier for objects (but I'm not sure about this).

api.delete can return a status object

We had assumed that delete could be modelled as:

pub fn delete(&self, name: &str, dp: &DeleteParams) -> Result<Object<P, U>>;

but turns out kube sends:

{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"baz","group":"clux.dev","kind":"foos","uid":"XXX"}}

on a successful delete.

See #28 for more details. Fixing client.request will fix this.

Edit: same is true for delete_collection (when it's not backgrounded).
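
A sketch of one way to model this, using the either crate (roughly the shape kube later adopted):

use either::Either;

match pods.delete("blog", &DeleteParams::default()).await? {
    Either::Left(pod) => println!("deletion started for: {}", pod.name_any()),
    Either::Right(status) => println!("deleted, got status: {:?}", status),
}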

create object shorthand

It's currently easier to just json! an object than to construct an Object<P, U> atm:

let f = json!({
    "apiVersion": "clux.dev/v1",
    "kind": "Foo",
    "metadata": { "name": "baz" },
    "spec": { "name": "baz", "info": "old baz" },
});

vs

let f: Object<FooSpec, FooStatus> = Object {
    apiVersion: Some("clux.dev/v1".into()),
    kind: Some("Foo".into()),
    metadata: Metadata { name: Some("baz".into()), ..Default::default() },
    spec: FooSpec { name: "baz".into(), info: "old baz".into() },
    status: None,
};

It would be nice if Api structs had a way to at least provide the base. If you really needed to change labels or something in metadata you could do that later.

Move Informer-like logic out of Reflector

In go, the Informer is meant to be the thing to handle events, and if you're handling events yourself, you might not need the internal state - and maybe you want to aggregate it yourself.

An Informer could just follow the current Reflector logic, but defer entirely to the user to handle .events(). I.e. we'd split watch_for_resource_updates into two functions:

  • a default reconcile (to propagate the ResourceMap)
  • the two lines that actually fetch the WatchEvents

pod_openapi build error

I get this error when running cargo build --example pod_openapi after cloning this repo.

[screenshot of the compile error]

This is odd, because I can clearly see the implementation in my IDE.

Apply support

Server side?

FEATURE STATE: Kubernetes v1.14 alpha

Server Side Apply allows clients other than kubectl to perform the Apply operation, and will eventually fully replace the complicated Client Side Apply logic that only exists in kubectl. If the Server Side Apply feature is enabled, the PATCH endpoint accepts the additional application/apply-patch+yaml content type. Users of Server Side Apply can send partially specified objects to this endpoint. An applied config should always include every field that the applier has an opinion about.

Looks useful. Clearly related to #24 with an extra application/apply-patch+yaml content-type.

Perhaps an Api::apply(name: &str, data: Vec<u8>) would be a better alias for a patch with this.
Check what client-go does.

However, the field management notes are a scary read.
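
For reference, a sketch of how this maps onto the Patch::Apply API shown in the Api section above (server-side apply is a PATCH with the application/apply-patch+yaml content type):

// the applied object must be fully qualified with apiVersion and kind
let patch = serde_json::json!({
    "apiVersion": "v1",
    "kind": "Pod",
    "spec": { "activeDeadlineSeconds": 5 }
});
let pp = PatchParams::apply("my-manager"); // declares the field manager
let applied = pods.patch("blog", &pp, &Patch::Apply(patch)).await?;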

Client side?

We could also do the client-side approach which the ruby client did. See Building and Maintaining a Client Library - Stories From the Trenches - Jussi Nummelin, Kontena Inc.

Sounds even worse. I'd rather just wait with this support for now until I get to test server side.

Don't enable the v1_13 feature in the k8s-openapi direct dep

k8s-openapi only allows one feature to be enabled. If a crate's dep graph contains multiple crates like kube that decide to enable features in their direct deps, Cargo will union the features and enable multiple of them simultaneously, which will not compile.

(It's mentioned in a little detail at the top of https://docs.rs/k8s-openapi/0.4.0/k8s_openapi/ , but I'm planning to expand that and the build script panic messages in Arnavion/k8s-openapi#44 )

Basically I would like binary crates to enable a specific feature depending on what minimum version of API server they want to run against. So for something like your controller-rs crate it's okay to enable the v1_13 feature there.

But for library crates like kube, my ask is:

  • Don't enable any features in your k8s-openapi direct dep.

  • Do enable it in your k8s-openapi dev-dep for your crate's tests and examples.

  • Use the k8s_* version detection macros to limit what versions you want your crate to compile with. For example you can use compile_error like in the docs example.
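
A sketch of such a guard (macro names are assumptions based on the k8s-openapi docs of the time):

// hypothetical: fail the build when only an unsupported feature version is enabled
k8s_openapi::k8s_if_le_1_12! {
    compile_error!("kube requires the v1_13 feature (or newer) of k8s-openapi");
}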

It does mean that controller-rs would need to add a dependency on k8s-openapi just to enable the feature, even though all its interaction with Kubernetes is through kube.

Please tell me if you have any objections. (I have not received any feedback positive or negative to this design decision.)


I know this goes against Cargo's general idea that features should be additive. The original implementation of this in 0.3.0 and earlier did have additive features, by having the crate's API exposed under a module of the same name. E.g. the v1_13 feature enabled the k8s_openapi::v1_13 module.

But it meant you had to write imports like this:

https://github.com/Arnavion/k8s-openapi/blob/v0.3.0/k8s-openapi-tests/src/pod.rs#L3-L20

which was a PITA, and still wouldn't have worked if more than one feature was enabled. See Arnavion/k8s-openapi#18 for more details.

Expose WatchEvents on demand

A couple of options, not sure what's the nicest interface yet:

  • On demand events since last "reconcile"
  • Events as they occur through callbacks

The first is easy to do: just queue up, then pop and return on demand.
The latter would probably block the polling thread/owner (since that's the thing that runs the watcher).

expose a less copy-heavy interface from Reflector

Currently all Reflector::read calls do a full copy:

https://github.com/clux/kube-rs/blob/3b14161baefe64ba308344e886b186aa4fbb0c7f/src/api/reflector.rs#L133-L149

this is probably not what all users want (see #57).

Alternatives so far:

  • wrap self.cache in an Arc and hand out clones of that
  • make read be the equivalent of into_iter and expose an alternative iter equivalent

The first sounds tricky to get right depending on how people drive the reflector (borrow-checking might get harder to satisfy) when one thread polls and other threads want to read. Attempts at making this better are welcome.
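
A sketch of the first alternative (names here are illustrative, not the actual reflector internals):

use std::collections::HashMap;
use std::sync::{Arc, RwLock};

// the reflector owns the Arc and hands out cheap clones of the handle,
// so readers avoid deep-copying the whole map
#[derive(Clone, Default)]
struct SharedCache<K>(Arc<RwLock<HashMap<String, K>>>);

impl<K: Clone> SharedCache<K> {
    fn get(&self, key: &str) -> Option<K> {
        self.0.read().unwrap().get(key).cloned()
    }
}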

implement more bindings to the standard api-resources

We've got core::v1 covered AFAICT. But a quick peek at the openapi spec shows there's still tons of uncovered resources.

As a starting point, let's aim to cover the normal kubectl api-resources output:

NAME                              SHORTNAMES   APIGROUP                             NAMESPACED   KIND
bindings                                                                            true         Binding
componentstatuses                 cs                                                false        ComponentStatus
configmaps                        cm                                                true         ConfigMap
endpoints                         ep                                                true         Endpoints
events                            ev                                                true         Event
limitranges                       limits                                            true         LimitRange
namespaces                        ns                                                false        Namespace
nodes                             no                                                false        Node
persistentvolumeclaims            pvc                                               true         PersistentVolumeClaim
persistentvolumes                 pv                                                false        PersistentVolume
pods                              po                                                true         Pod
podtemplates                                                                        true         PodTemplate
replicationcontrollers            rc                                                true         ReplicationController
resourcequotas                    quota                                             true         ResourceQuota
secrets                                                                             true         Secret
serviceaccounts                   sa                                                true         ServiceAccount
services                          svc                                               true         Service
mutatingwebhookconfigurations                  admissionregistration.k8s.io         false        MutatingWebhookConfiguration
validatingwebhookconfigurations                admissionregistration.k8s.io         false        ValidatingWebhookConfiguration
customresourcedefinitions         crd,crds     apiextensions.k8s.io                 false        CustomResourceDefinition
apiservices                                    apiregistration.k8s.io               false        APIService
controllerrevisions                            apps                                 true         ControllerRevision
daemonsets                        ds           apps                                 true         DaemonSet
deployments                       deploy       apps                                 true         Deployment
replicasets                       rs           apps                                 true         ReplicaSet
statefulsets                      sts          apps                                 true         StatefulSet
tokenreviews                                   authentication.k8s.io                false        TokenReview
localsubjectaccessreviews                      authorization.k8s.io                 true         LocalSubjectAccessReview
selfsubjectaccessreviews                       authorization.k8s.io                 false        SelfSubjectAccessReview
selfsubjectrulesreviews                        authorization.k8s.io                 false        SelfSubjectRulesReview
subjectaccessreviews                           authorization.k8s.io                 false        SubjectAccessReview
horizontalpodautoscalers          hpa          autoscaling                          true         HorizontalPodAutoscaler
cronjobs                          cj           batch                                true         CronJob
jobs                                           batch                                true         Job
certificatesigningrequests        csr          certificates.k8s.io                  false        CertificateSigningRequest
leases                                         coordination.k8s.io                  true         Lease
events                            ev           events.k8s.io                        true         Event
ingresses                         ing          extensions                           true         Ingress
nodes                                          metrics.k8s.io                       false        NodeMetrics
pods                                           metrics.k8s.io                       true         PodMetrics
networkpolicies                   netpol       networking.k8s.io                    true         NetworkPolicy
poddisruptionbudgets              pdb          policy                               true         PodDisruptionBudget
podsecuritypolicies               psp          policy                               false        PodSecurityPolicy
clusterrolebindings                            rbac.authorization.k8s.io            false        ClusterRoleBinding
clusterroles                                   rbac.authorization.k8s.io            false        ClusterRole
rolebindings                                   rbac.authorization.k8s.io            true         RoleBinding
roles                                          rbac.authorization.k8s.io            true         Role
priorityclasses                   pc           scheduling.k8s.io                    false        PriorityClass
storageclasses                    sc           storage.k8s.io                       false        StorageClass
volumeattachments                              storage.k8s.io                       false        VolumeAttachment

That's what a 1.13 cluster supports (minus some extensions, i.e. v1beta versions of the workloads api).

There are many ways of finding the Api values for a resource. One way is to browse to the openapi spec for a resource - here Service as an example. Then view the src of the first create request, which should show a __url of the form:

let __url = format!("/api/v1/namespaces/{namespace}/services?", namespace = namespace);

In this case we can infer the resource is:

RawApi {
    resource: "services".into(),
    group: "".into(),
    ..Default::default()
}

(note the v1 version + apis prefix and unset namespace defaults).

It would need a v1Service constructor in RawApi.

It would also need the following entry for the "openapi" users:

use k8s_openapi::api::core::v1::{ServiceSpec, ServiceStatus};
impl Api<Object<ServiceSpec, ServiceStatus>> {
    pub fn v1Service(client: APIClient) -> Self {
        Api {
            api: RawApi::v1Service(),
            client,
            phantom: PhantomData,
        }
    }
}

k8s-openapi memory consumption blocks CI

Did a full cargo test --all-features after having bumped the k8s-openapi dependency, and memory use is almost 8GB with parallelism. Looks like it takes >4GB just compiling the dependency itself. This blocks CircleCI from building it (on the free account), and travis just seems to time out. Need a better CI solution.

client.request should also return the statuscode

We often need a way to distinguish from CREATED, ACCEPTED, and OK as these often map to kube notions such as "configured", "created", "unchanged".

Document the common results in .create(), .replace() etc.

convert to normal snake_case for fields

It's getting annoying in crates where you need to propagate the non_snake_case allow flag.

We should probably just be consistent with rust style even if it is inconsistent with the underlying API. Even go renames its fields anyway.

EDIT: updated with comment below:
Remove all allow(non_snake_case) and get library + examples to compile without warnings.

Remove allow(non_snake_case) from some files:

  • api/resources.rs
  • api/metadata.rs
  • client/mod.rs

Leave the constructor types alone for now.

Async Informer event handling interface

Excerpt from Best practices for building Kubernetes Operators and stateful app point 5:

  1. Use asynchronous sync loops
    If an operator detects an error (e.g., failed pod creation) when reconciling the current cluster state to the desired state, it should immediately terminate the current sync call and return the error. The work queue should then schedule a resync at a later time; the sync call should not block the application by continuing to poll the cluster state until the error is resolved. Similarly, controllers that initiate and monitor long-running operations should not synchronously wait for the operations. Instead, the controllers should go back to sleep and check again later.

That's something worth exposing better. Whether it's something we can hook into actix' actor system in a more native rust way, or whether we should have better queuing built in. Not sure. Leaving an issue here for now.

get Patch working

In my initial sketch of update_ actions that take a Patch object, I'm just throwing in the towel and asking for a u8 array. The openapi version of handling this is weird.

It would be better to have just something that implements Serialize.

Unfortunately, it's not merely json: Kubernetes accepts json-patch, merge-patch, and strategic-merge-patch content types, even if json is the most common form of a patch.

(and even if we were to only support that, you can't put Serialize as a trait object without some difficulty)
