
koordinator's People

Contributors

bowen-intel, buptcozy, cheimu, chzhj, dependabot[bot], eahydra, fillzpp, honpey, hormes, huiwq1990, j4ckstraw, jasonliu747, kangclzjc, kunwuluan, lambdahj, leoliuyan, lucming, re-grh, saintube, shaloulcy, songtao98, stormgbs, wangxiaoq, wenshiqi222, xigang, xulinfei1996, zimengsheng, zqzten, zwzhang0107, zyecho


koordinator's Issues

[proposal] remove dependency on the kubelet read-only port

What is your proposal:
Use ListWatch against the apiserver instead of querying Pods from the kubelet when port 10255 is closed.

Why is this needed:
The kubelet read-only port is disabled by default.

Is there a suggested solution, if so, please add it:
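A minimal sketch of the suggested direction (not koordlet's actual implementation; the NODE_NAME environment variable and the resync period are assumptions): list-watch only this node's Pods from the apiserver instead of the kubelet read-only port.

package main

import (
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
	"k8s.io/klog/v2"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Only watch Pods assigned to this node, mirroring what the kubelet endpoint would return.
	nodeName := os.Getenv("NODE_NAME")
	factory := informers.NewSharedInformerFactoryWithOptions(client, 30*time.Second,
		informers.WithTweakListOptions(func(opts *metav1.ListOptions) {
			opts.FieldSelector = fields.OneTermEqualSelector("spec.nodeName", nodeName).String()
		}))

	podInformer := factory.Core().V1().Pods().Informer()
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { klog.Infof("pod added: %s", obj.(*corev1.Pod).Name) },
		UpdateFunc: func(_, obj interface{}) { klog.Infof("pod updated: %s", obj.(*corev1.Pod).Name) },
	})

	stopCh := make(chan struct{})
	factory.Start(stopCh)
	factory.WaitForCacheSync(stopCh)
	select {} // keep running; real code would plug this into koordlet's lifecycle
}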

project roadmap

Hi.....

What is the roadmap of the project? When will the scheduler code be open-sourced?

[BUG] not syncing NodeSLO from configMap slo-controller-config

What happened:

  1. NodeSLO is not synced after setting memoryEvictThresholdPercent in resource-threshold-config of the ConfigMap slo-controller-config

What you expected to happen:
The memoryEvictThresholdPercent value from resource-threshold-config is synced into NodeSLO.

How to reproduce it (as minimally and precisely as possible):

  1. Update the ConfigMap slo-controller-config:
apiVersion: v1
data:
  colocation-config: |
    {
      "enable": true
    }
  resource-threshold-config: |
    {
      "clusterStrategy": {
        "enable": true,
        "memoryEvictThresholdPercent": 70
      }
    }
kind: ConfigMap
metadata:
  name: slo-controller-config
  namespace: koordinator-system
  2. Check the NodeSLO:
$ kubectl get nodeslo -o yaml | grep memoryEvictThresholdPercent # get nothing

Anything else we need to know?:

Environment:

  • App version:
  • Kubernetes version (use kubectl version):
  • Install details (e.g. helm install args):
  • Others:

[proposal] scheduler support node load balancing

What would you like to be added:
Support scheduling based on node load: nodes with low load are preferred, and once a node's load reaches a safe threshold, the scheduler should stop allocating to it.

We can also consider supporting resource profiles that describe the usage of different workloads, and use the profile data to guide the scheduler toward finer-grained load balancing.

Also, in a real production environment, a node's load does not always rise in a way that the scheduler can quickly perceive; when no profile is available, we should consider providing a mechanism to handle this scenario.

Why is this needed:

Koordinator allows resource overcommitment to achieve high resource utilization. We must reduce the impact of colocation on latency-sensitive applications and improve runtime reliability.
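
As a rough illustration of the scoring idea only (the threshold value, function names, and score range are assumptions, not the koord-scheduler design): nodes below the safe threshold score higher the lower their utilization is, and nodes at or above the threshold score zero.

package main

import "fmt"

// scoreNode maps a node's recent CPU utilization (0.0-1.0) to a 0-100 score.
// threshold is the "safe threshold" from the proposal, e.g. 0.65.
func scoreNode(utilization, threshold float64) int {
	if utilization >= threshold {
		return 0 // stop allocating onto nodes that already reached the threshold
	}
	return int((1 - utilization/threshold) * 100)
}

func main() {
	for _, u := range []float64{0.10, 0.40, 0.70} {
		fmt.Printf("utilization %.0f%% -> score %d\n", u*100, scoreNode(u, 0.65))
	}
}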

[proposal] CI should fail if diff coverage is below 70%

Why is this needed:
Diff coverage is the percentage of new or modified lines that are covered by tests. This provides a clear and achievable standard for code review: If you touch a line of code, that line should be covered.

Code coverage is every developer's responsibility!

So I think CI should fail if diff coverage is below 70%, or some other agreed threshold.

Is there a suggested solution, if so, please add it:
Codecov might support this. Need to do some research.
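
Codecov's patch status can express this. A hedged sketch of a codecov.yml (exact keys should be double-checked against the Codecov documentation):

coverage:
  status:
    patch:
      default:
        target: 70%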

[BUG] I installed Koordinator on Huawei Cloud CCE, but I can't get the right result for the spark-pi job.

What happened:
When I use Koordinator on Huawei Cloud CCE, the spark-pi job does not give the expected result. [screenshot omitted]

I describe the pod: [screenshots omitted]

I can't find batch-cpu and batch-memory information on the node: [screenshot omitted]

I fetch logs from the koordlet DaemonSet pod:
kubectl logs --tail=10 koordlet-cd9xg
[screenshot omitted]

So I modify the kubelet config for the node: [screenshot omitted]

I get an error when the koordlet sends requests to the kubelet.
These are the logs from the koordlet pod: [screenshot omitted]
This is the corresponding output on the node: [screenshots omitted]

kubelet config:
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
#address: 7.183.231.172
authentication:
  anonymous:
    enabled: false
  x509:
    clientCAFile: /opt/cloud/cce/srv/kubernetes/cluster-ca.crt
authorization:
  mode: Webhook
clusterDNS:
- 10.247.3.10
clusterDomain: cluster.local
enableControllerAttachDetach: true
evictionHard:
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
  imagefs.available: 10%
featureGates:
  DevicePlugins: true
  MultiGPUScheduling: true
  ExpandCSIVolumes: true
  CSIInlineVolume: true
  CSIMigrationFlexVolumeFuxi: true
  CSIMigrationFlexVolumeFuxiComplete: true
  CSIMigration: true
  IPv6DualStack: false
  ReserveMemoryCgroupForPageCache: false
  SizeMemoryBackedVolumes: true
  OversubscriptionResource: false
maxPods: 110
podPidsLimit: -1
port: 10250
readOnlyPort: 10255
staticPodPath: "/opt/cloud/cce/kubernetes/manifests"
serverTLSBootstrap: true
imageGCHighThresholdPercent: 80
imageGCLowThresholdPercent: 70
kubeAPIQPS: 100
kubeAPIBurst: 100
cpuManagerPolicy: none
tlsCipherSuites:
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
kubeletCgroups: "/kubelet"
eventRecordQPS: 5

Kubernetes version: [screenshot omitted]

What you expected to happen:
The kubelet responds to the koordlet with the pod information.

How to reproduce it (as minimally and precisely as possible):
Use Huawei Cloud CCE and install Koordinator with Helm.
Anything else we need to know?:

Environment:

  • App version:
  • Kubernetes version (use kubectl version):
  • Install details (e.g. helm install args):
  • Others:

[BUG] handle unexpected cpu info in case of koordlet panic

What happened:

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • App version:
  • Kubernetes version (use kubectl version):
  • Install details (e.g. helm install args):
  • Others:

[BUG] README.md LICENSE link error

What happened:
The LICENSE link in README.md is broken; LICENSE.md cannot be found.
What you expected to happen:
The link should point to LICENSE instead of LICENSE.md in README.md.
How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • App version:
  • Kubernetes version (use kubectl version):
  • Install details (e.g. helm install args):
  • Others:

[proposal] Utilize PMEM hardware to implement a new memory policy when system memory exceeds the safety threshold

What is your proposal:
On machines with PMEM hardware, we can utilize this capability by migrating BE pods' memory to PMEM instead of evicting them directly when the system reaches the memory safety threshold.

Why is this needed:
PMEM can be used as system memory, so we suggest implementing a differentiated memory policy for machines with PMEM hardware to make full use of the hardware capability.

Is there a suggested solution, if so, please add it:
When memory utilization exceeds the safety threshold, we can migrate BE pods' memory to PMEM; the BE pods can continue running on PMEM, or have their CPU temporarily frozen, which reduces the memory pressure on LSR & LS pods. When memory utilization drops far below the safety threshold, we can migrate BE pods' memory back to DDR.
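
As a rough illustration of the mechanism only: when PMEM is exposed as a separate NUMA node, the migratepages tool from numactl can move a process's resident pages between nodes. The PID selection and node numbers below are assumptions.

$ PID=$(pgrep -f be-workload)     # hypothetical BE pod process
$ migratepages $PID 0,1 2         # DDR nodes 0,1 -> assumed PMEM node 2
$ migratepages $PID 2 0,1         # migrate back to DDR when pressure drops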

[BUG] Memory QoS WmarkRatio description

What happened:

Memory QoS, apis/slo/v1alpha1/nodeslo_types.go, WmarkRatio docs:

// `memory.wmark_high` := min(memory.high, memory.limit_in_bytes) * memory.wmark_scale_factor

Shouldn't this be:
memory.wmark_high := min(memory.high, memory.limit_in_bytes) * memory.wmark_ratio

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • App version:
  • Kubernetes version (use kubectl version):
  • Install details (e.g. helm install args):
  • Others:

[proposal] remove vendor dependencies

What is your proposal:
Now that we have switched to Go modules, there's no need to commit dependencies in the vendor directory to the repository. I don't see much value in keeping it in the repo, and it just adds bloat.

For users in China, there are a lot of modules in the Go ecosystem that cannot easily be fetched with go install. In that case, they can follow the instructions below to work around the problem.

$ go env -w GO111MODULE=on
$ go env -w GOPROXY=https://goproxy.cn,direct
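
A possible cleanup sequence once agreed (a sketch, assuming the module graph is already tidy and CI does not force -mod=vendor):

$ go mod tidy
$ git rm -r vendor
$ go build ./...   # confirm the build no longer relies on the vendor directory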

[BUG] data race in states_informer_test.go

What happened:
I ran the tests with race detection enabled using go test ./... -race, and they failed with the following errors:

W0416 10:52:14.139957   51219 metrics.go:60] register nil node for metrics
E0416 10:52:14.140726   51219 pleg.go:82] failed to create pod watcherwatch not supported on darwin
E0416 10:52:14.140778   51219 pleg.go:86] failed to create container watcherwatch not supported on darwin
I0416 10:52:14.141045   51219 states_informer.go:111] starting statesInformer
I0416 10:52:14.141081   51219 states_informer.go:113] starting informer for Node
I0416 10:52:14.244944   51219 states_informer.go:135] start meta service successfully
I0416 10:52:14.245620   51219 states_informer.go:212] get pods from kubelet success, len 1
==================
WARNING: DATA RACE
Read at 0x00c0001d2628 by goroutine 44:
  runtime.racereadrange()
      <autogenerated>:1 +0x1b
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:1259 +0x22f
  testing.(*T).Run·dwrap·21()
      /usr/local/go/src/testing/testing.go:1306 +0x47

Previous write at 0x00c0001d2628 by goroutine 59:
  github.com/koordinator-sh/koordinator/pkg/koordlet/statesinformer.(*statesInformer).syncKubelet()
      /Users/jason/Projects/koordinator/pkg/koordlet/statesinformer/states_informer.go:211 +0x358
  github.com/koordinator-sh/koordinator/pkg/koordlet/statesinformer.(*statesInformer).syncKubeletLoop()
      /Users/jason/Projects/koordinator/pkg/koordlet/statesinformer/states_informer.go:226 +0x2f1
  github.com/koordinator-sh/koordinator/pkg/koordlet/statesinformer.(*statesInformer).Run·dwrap·4()
      /Users/jason/Projects/koordinator/pkg/koordlet/statesinformer/states_informer.go:130 +0x58

Goroutine 44 (running) created at:
  testing.(*T).Run()
      /usr/local/go/src/testing/testing.go:1306 +0x726
  testing.runTests.func1()
      /usr/local/go/src/testing/testing.go:1598 +0x99
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:1259 +0x22f
  testing.runTests()
      /usr/local/go/src/testing/testing.go:1596 +0x7ca
  testing.(*M).Run()
      /usr/local/go/src/testing/testing.go:1504 +0x9d1
  main.main()
      _testmain.go:103 +0x324

Goroutine 59 (running) created at:
  github.com/koordinator-sh/koordinator/pkg/koordlet/statesinformer.(*statesInformer).Run()
      /Users/jason/Projects/koordinator/pkg/koordlet/statesinformer/states_informer.go:130 +0x714
  github.com/koordinator-sh/koordinator/pkg/koordlet/statesinformer.Test_metaService_syncPods·dwrap·13()
      /Users/jason/Projects/koordinator/pkg/koordlet/statesinformer/states_informer_test.go:181 +0x59
==================

Environment:

  • Go version: go version go1.17.9 darwin/amd64
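
A minimal sketch of one possible fix (the type and field names below are hypothetical, not the actual statesInformer fields): guard the pod snapshot written by syncKubelet with a sync.RWMutex so concurrent readers such as the test no longer race.

package statesinformer

import (
	"sync"

	corev1 "k8s.io/api/core/v1"
)

// podSnapshot holds the pods last pulled from the kubelet.
type podSnapshot struct {
	mu   sync.RWMutex
	pods []*corev1.Pod
}

// set replaces the snapshot; called from syncKubelet after a successful pull.
func (s *podSnapshot) set(pods []*corev1.Pod) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.pods = pods
}

// get returns a copy of the snapshot for readers such as the test.
func (s *podSnapshot) get() []*corev1.Pod {
	s.mu.RLock()
	defer s.mu.RUnlock()
	out := make([]*corev1.Pod, len(s.pods))
	copy(out, s.pods)
	return out
}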

[feature request] koordlet support memoryEvictLowerPercent field in NodeSLO

What happened:

memoryEvictLowerPercent is defined in NodeSLO, but koordlet does not support it.

What you expected to happen:

Koordlet should honor the memoryEvictLowerPercent field in NodeSLO.
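
For illustration (the exact field placement is an assumption based on the existing resource-threshold-config format), a configuration like the following would exercise the field once koordlet supports it:

apiVersion: v1
kind: ConfigMap
metadata:
  name: slo-controller-config
  namespace: koordinator-system
data:
  resource-threshold-config: |
    {
      "clusterStrategy": {
        "enable": true,
        "memoryEvictThresholdPercent": 70,
        "memoryEvictLowerPercent": 65
      }
    }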

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • App version:
  • Kubernetes version (use kubectl version):
  • Install details (e.g. helm install args):
  • Others:

[BUG] koordlet pod always in CrashLoopBackOff with a panic error

What happened:
koordlet doesn't work when I install it with Helm.

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • App version:v0.2.0
  • Kubernetes version (use kubectl version):Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:34:20Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"darwin/arm64"}
    Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:35:25Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/arm64"}
  • Install details (e.g. helm install args):
  • Others: local install with minikube
    I got the lscpu output on my node as follows:

lscpu -e=CPU,NODE,SOCKET,CORE,CACHE,ONLINE

CPU NODE SOCKET CORE CACHE ONLINE
0 - 0 0 - yes
1 - 0 1 - yes
2 - 0 2 - yes
3 - 0 3 - yes

[proposal] Support fine-grained CPU binding mechanism to improve QoS model

What would you like to be added:
It is necessary to support fine-grained CPU binding mechanisms to improve the QoS model defined by Koordinator: LSE and LSR.

LSE and LSR both require that cores be bound; they differ in whether the bound resources may be shared with other workloads.
LSE requires that the allocated CPUs be used only by the Pod itself, not shared with other workloads.
LSR binds logical cores on demand but does not require strong isolation like LSE, allowing sharing with offline workloads.

CPU architecture also matters. For example, with Intel Hyper-Threading (HT), how different workloads bind logical cores leads to different runtime and resource-allocation effects. Under a NUMA architecture, unreasonable logical-core binding causes cross-die memory access, which can have a catastrophic impact on latency-sensitive applications.

The CPU-binding scenarios I can think of (a rough allocation sketch follows this list):

  • A Pod exclusively occupies N physical cores according to the amount of resource requests.
  • Pods of the same workload may require mutual exclusion in the physical core dimension, similar to the Pod AntiAffinity mechanism.
  • Pods of different workloads can share in the physical core dimension.
  • Requires logical cores with an optimal NUMA topology.
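
A purely illustrative sketch of the first scenario (not Koordinator's allocator; the topology below is made up): with HT enabled, a Pod that exclusively occupies N physical cores receives both sibling logical CPUs of each chosen core.

package main

import "fmt"

// physicalCore groups the sibling logical CPUs of one physical core.
type physicalCore struct {
	id       int
	logicals []int
}

// allocateFullCores picks n free physical cores and returns all of their
// logical CPUs, or false if not enough whole cores remain.
func allocateFullCores(free []physicalCore, n int) ([]int, bool) {
	if len(free) < n {
		return nil, false
	}
	var cpus []int
	for _, core := range free[:n] {
		cpus = append(cpus, core.logicals...)
	}
	return cpus, true
}

func main() {
	topology := []physicalCore{
		{id: 0, logicals: []int{0, 1}},
		{id: 1, logicals: []int{2, 3}},
		{id: 2, logicals: []int{4, 5}},
	}
	if cpus, ok := allocateFullCores(topology, 2); ok {
		fmt.Println("bind logical CPUs:", cpus) // prints [0 1 2 3]
	}
}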

Why is this needed:
Enhance CPU allocation capability and improve the QoS model.

  • v0.5 #226
  • v0.6 #227
  • v0.6 #229
    • #232
    • #233
      • #265
        • koordlet supports LSR
        • koordlet supports CPU Shared Pool for LS, K8s Burstable and Guaranteed Pods(newly created)
        • koordlet supports BE Shared Pool for BE, K8s BestEffort Pods
        • v0.6 #224
      • v0.6 #346
    • v0.6 #234
  • v0.6 support Node CPU orchestration API
  • v0.7 support kubelet reserved CPUs
  • v0.8 #228
    • koordlet report complete NUMA Node information
    • koord-scheduler support NUMA Topology Alignment scheduling
    • #1421
  • support mainstream architecture, like Intel/AMD/ARM
  • #230

[BUG] Koordinator's priority and QoS docs not found

What happened:

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • App version:
  • Kubernetes version (use kubectl version):
  • Install details (e.g. helm install args):
  • Others:

REQUEST: New membership for Mr-Linus

GitHub Username

@Mr-Linus

Requirements

  • I have reviewed the community membership guidelines
  • I have enabled 2FA on my GitHub account
  • I am actively contributing to 1 or more Koordinator subprojects
  • I have two sponsors that meet the sponsor requirements listed in the community membership guidelines
  • I have spoken to my sponsors ahead of this application, and they have agreed to sponsor my application
  • I have verified that my sponsors are a reviewer or an approver in at least one OWNERS file within one of the Koordinator GitHub organizations (excluding the contributor-playground)
  • OPTIONAL: I have taken the Inclusive Open Source Community Orientation course

Sponsor 1

@hormes

Sponsor 2

@jasonliu747

List of contributions to the Koordinator project

  • PRs Reviewed / authored
    PR38
    PR42
    PR51

  • Issues responded to
    Issues50

  • Subprojects I am involved with
    koordinator-sh/koordinator

[feature request] support running tensorflow

What would you like to be added:
Right now, there is an example for a Spark job. Could you also add an example of running TensorFlow in colocation mode? Many thanks.

Why is this needed:

[BUG] Typo in Nginx image tag

What happened:
There is a typo in the image tag:

image: docker.io/koordinatorsh/nginx:v1.18-koord-exmaple

What you expected to happen:
nginx:v1.18-koord-exmaple -> nginx:v1.18-koord-example

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • App version:
  • Kubernetes version (use kubectl version):
  • Install details (e.g. helm install args):
  • Others:

When deployed on ACK 1.18.1, the koord-manager pod always crashes

Logs of the koord-manager pod:

W0407 10:06:26.017519 1 noderesource.go:234] node koordinator.sh/batch-memory resource diff bigger than 0.1, need sync
W0407 10:06:26.032565 1 noderesource.go:234] node koordinator.sh/batch-memory resource diff bigger than 0.1, need sync
W0407 10:06:30.192036 1 noderesource.go:234] node koordinator.sh/batch-memory resource diff bigger than 0.1, need sync
W0407 10:06:30.212145 1 noderesource.go:234] node koordinator.sh/batch-memory resource diff bigger than 0.1, need sync
I0407 10:06:35.111049 1 request.go:597] Waited for 55.075105ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.124.1:443/apis/alert.alibabacloud.com/v1beta1?timeout=32s
I0407 10:06:35.145232 1 request.go:597] Waited for 89.253202ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.124.1:443/apis/autoscaling/v2beta2?timeout=32s
I0407 10:06:35.178362 1 request.go:597] Waited for 122.38075ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.124.1:443/apis/apiextensions.k8s.io/v1?timeout=32s
I0407 10:06:35.211422 1 request.go:597] Waited for 155.435651ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.124.1:443/apis/autoscaling/v2beta1?timeout=32s
I0407 10:06:35.244458 1 request.go:597] Waited for 188.457989ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.124.1:443/apis/storage.kubesphere.io/v1alpha1?timeout=32s
I0407 10:06:35.278526 1 request.go:597] Waited for 222.531793ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.124.1:443/apis/tenant.kubesphere.io/v1alpha1?timeout=32s
I0407 10:06:35.311777 1 request.go:597] Waited for 255.783658ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.124.1:443/apis/batch/v1?timeout=32s
I0407 10:06:35.344844 1 request.go:597] Waited for 288.851137ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.124.1:443/apis/authentication.k8s.io/v1?timeout=32s
I0407 10:06:35.377888 1 request.go:597] Waited for 321.886237ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.124.1:443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
I0407 10:06:35.410868 1 request.go:597] Waited for 354.862882ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.124.1:443/apis/authorization.k8s.io/v1?timeout=32s
I0407 10:06:35.445056 1 request.go:597] Waited for 389.044635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.124.1:443/apis/application.kubesphere.io/v1alpha1?timeout=32s
I0407 10:06:35.478284 1 request.go:597] Waited for 422.284091ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.124.1:443/apis/rbac.authorization.k8s.io/v1?timeout=32s
I0407 10:06:35.511365 1 request.go:597] Waited for 455.363074ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.124.1:443/apis/config.koordinator.sh/v1alpha1?timeout=32s
I0407 10:06:35.544428 1 request.go:597] Waited for 488.423354ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.124.1:443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s
I0407 10:06:35.577530 1 request.go:597] Waited for 521.517315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.124.1:443/apis/monitoring.coreos.com/v1?timeout=32s
I0407 10:06:35.611734 1 request.go:597] Waited for 555.722446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.124.1:443/apis/discovery.k8s.io/v1beta1?timeout=32s
I0407 10:06:35.645163 1 request.go:597] Waited for 589.153428ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.124.1:443/apis/node.k8s.io/v1beta1?timeout=32s
I0407 10:06:35.678261 1 request.go:597] Waited for 622.239304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.124.1:443/apis/scheduling.k8s.io/v1beta1?timeout=32s
I0407 10:06:35.710979 1 request.go:597] Waited for 654.969336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.124.1:443/apis/authentication.k8s.io/v1beta1?timeout=32s
I0407 10:06:35.745052 1 request.go:597] Waited for 689.04109ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.124.1:443/apis/networking.k8s.io/v1beta1?timeout=32s
I0407 10:06:35.778181 1 request.go:597] Waited for 722.153387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.124.1:443/apis/coordination.k8s.io/v1beta1?timeout=32s
I0407 10:06:35.811371 1 request.go:597] Waited for 755.351178ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.124.1:443/apis/policy/v1beta1?timeout=32s
I0407 10:06:35.844491 1 request.go:597] Waited for 788.468955ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.124.1:443/apis/apps.kruise.io/v1alpha1?timeout=32s
I0407 10:06:35.878183 1 request.go:597] Waited for 822.161498ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.124.1:443/apis/networking.k8s.io/v1?timeout=32s
I0407 10:06:35.911844 1 request.go:597] Waited for 855.812499ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.124.1:443/apis/coordination.k8s.io/v1?timeout=32s
I0407 10:06:35.944926 1 request.go:597] Waited for 888.890657ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.124.1:443/apis/cluster.kubesphere.io/v1alpha1?timeout=32s
I0407 10:06:35.978193 1 request.go:597] Waited for 922.169979ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.124.1:443/apis/apps.kruise.io/v1beta1?timeout=32s
I0407 10:06:36.011313 1 request.go:597] Waited for 955.28796ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.124.1:443/apis/scheduling.k8s.io/v1?timeout=32s
E0407 10:06:36.016423 1 deleg.go:144] controller-runtime/source "msg"="if kind is a CRD, it should be installed before calling Start" "error"="no matches for kind "NodeSLO" in version "slo.koordinator.sh/v1alpha1"" "kind"={"Group":"slo.koordinator.sh","Kind":"NodeSLO"}

PS: 192.168.124.1 is the "kubernetes" Service in the default namespace.
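
The last log line suggests the slo.koordinator.sh CRDs were not installed before koord-manager started. A quick check (the CRD name is inferred from the group/kind in the error):

$ kubectl get crd nodeslos.slo.koordinator.sh
$ kubectl api-resources --api-group=slo.koordinator.sh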

[feature request] add staticcheck in ci pipeline

Why is this needed:
staticcheck is a state-of-the-art linter for the Go programming language. Using static analysis, it finds bugs and performance issues, offers simplifications, and enforces style rules.
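
A possible starting point (a sketch; the exact CI wiring is up to the maintainers):

$ go install honnef.co/go/tools/cmd/staticcheck@latest
$ staticcheck ./...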

REQUEST: New membership for saintube

GitHub Username

saintube

Requirements

  • I have reviewed the community membership guidelines
  • I have enabled 2FA on my GitHub account
  • I am actively contributing to 1 or more Koordinator subprojects
  • I have two sponsors that meet the sponsor requirements listed in the community membership guidelines
  • I have spoken to my sponsors ahead of this application, and they have agreed to sponsor my application
  • I have verified that my sponsors are a reviewer or an approver in at least one OWNERS file within one of the Koordinator GitHub organizations (excluding the contributor-playground)
  • OPTIONAL: I have taken the Inclusive Open Source Community Orientation course

Sponsor 1

@hormes

Sponsor 2

@zwzhang0107

List of contributions to the Koordinator project

  • PRs Reviewed / authored

  • Issues responded to

  • Subprojects I am involved with

    • koordinator-sh/koordinator

[proposal] runtime-hooks should support multiple running modes

What is your proposal:
Injections in koordlet's runtime hooks, such as CPU QoS, memory QoS, and LLC/MBA, should be able to work with or without the runtime hook manager (runtime-proxy mode).

Why is this needed:
Some users would rather use a bypass-injection mode than change the kubelet CRI endpoint configuration.

Is there a suggested solution, if so, please add it:
Runtime hooks in koordlet should support two running modes: injection (with the CRI proxy) and bypass (without the CRI proxy). A rough sketch follows.
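
A purely hypothetical sketch of the two modes (type and function names are not from the koordlet code): the same hook logic, e.g. CPU QoS or LLC/MBA settings, is driven either by the CRI proxy on container requests (injection) or by a periodic reconciler that writes cgroup parameters directly (bypass).

package main

import "fmt"

type RunMode string

const (
	ModeInjection RunMode = "injection" // with runtime-proxy on the CRI path
	ModeBypass    RunMode = "bypass"    // without changing the kubelet CRI endpoint
)

// Hook is one injection, e.g. CPU QoS, memory QoS, or LLC/MBA.
type Hook interface {
	Name() string
	Apply(podCgroupDir string) error
}

type cpuQoSHook struct{}

func (cpuQoSHook) Name() string { return "cpu-qos" }
func (cpuQoSHook) Apply(podCgroupDir string) error {
	// a real hook would write cgroup files under podCgroupDir
	fmt.Println("apply cpu qos under", podCgroupDir)
	return nil
}

// run drives the same hooks from either mode.
func run(mode RunMode, hooks []Hook, podCgroupDir string) {
	for _, h := range hooks {
		switch mode {
		case ModeInjection:
			// invoked synchronously by the CRI proxy before forwarding the request
			_ = h.Apply(podCgroupDir)
		case ModeBypass:
			// invoked asynchronously by a reconcile loop after the pod is running
			_ = h.Apply(podCgroupDir)
		}
	}
}

func main() {
	run(ModeBypass, []Hook{cpuQoSHook{}}, "/sys/fs/cgroup/cpu/kubepods/example-pod")
}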
