
dubbo-kubernetes's Introduction

The Dubbo Kubernetes Integration

⚠️ This is still an experimental version. ⚠️


The universal Control Plane and Console for managing microservices in any environment - VM and Kubernetes.

(architecture overview diagram)

Quick Start (under development)

NOTICE: As the project has not been officially released yet, the following commands may not run properly. The best way for now is to refer to the Developer's Guide to learn how to download the source code and build it locally!

  1. Download the dubbo-control-plane binary package.

    curl -L https://raw.githubusercontent.com/apache/dubbo-kubernetes/master/release/downloadDubbo.sh | sh -
    
    cd dubbo-$version
    export PATH=$PWD/bin:$PATH
  2. Install control-plane on Kubernetes

    dubboctl install --profile=demo
  3. Check installation

    kubectl get services -n dubbo-system
  4. Next, deploy Dubbo applications to Kubernetes as shown below:

    apiVersion: v1
    kind: Service
    metadata:
      name: demo-service
      namespace: dubbo-demo
    spec:
      selector:
        app: dubbo-demo
      type: ClusterIP
      ports:
        - name: port1
          protocol: TCP
          port: 80
          targetPort: 8080
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-app
      namespace: dubbo-demo
    spec:
      ...
      template:
        metadata:
          ...
          labels:
            app: dubbo-demo
            dubbo.apache.org/service: dubbo-demo
        spec:
          containers:
            ...

    If you want to create your own Dubbo application from scratch and deploy it, please use the dubboctl tool provided below.

  5. Open the following page to check deployment status on control plane UI:

    kubectl port-forward svc/dubbo-control-plane \
      -n dubbo-system 5681:5681

    Then visit 127.0.0.1:5681/admin

    (console UI screenshot)

Architecture

(architecture diagram)

The microservice architecture built with Dubbo Control Plane consists of two main components:

  • The Dubbo Control Plane configures the data plane - the applications developed with the Dubbo SDK - for handling service traffic. Users create policies that the control plane processes to generate configurations for the data plane.
  • The data plane - the Dubbo SDK - connects directly to the control plane and receives configurations that serve as the source for service discovery, traffic routing, load balancing, etc.

Dubbo Control Plane supports two deployment modes: kubernetes and universal.

  • kubernetes mode resembles the classic Service Mesh architecture, with all microservice concepts bound to Kubernetes resources. Unlike classic service mesh solutions such as Istio, Dubbo favors a proxyless data plane deployment - with no Envoy sidecar.
  • universal mode is the traditional microservice architecture that all Dubbo users are already familiar with. Unlike the kubernetes mode, it usually needs a dedicated registry such as Nacos or Zookeeper for service discovery, etc.

Kubernetes

In kubernetes mode, the control plane interacts directly with the Kubernetes API server, watching Kubernetes resources and transforming them into xDS resources that drive service discovery and traffic management configuration.

(kubernetes-mode diagram)

The service definitions of Kubernetes and Dubbo differ: a Kubernetes Service is closer to an application-level concept running on a selected group of pods, while a Dubbo service can refer to a specific RPC service inside the application process. Check here for more details on how the Dubbo control plane bridges this interface-application gap; a rough sketch of the Service-watching side is shown below.
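
Below is a minimal, purely illustrative client-go sketch of the watching side: it subscribes to Kubernetes Services and marks the point where a control plane would translate them into xDS resources. It is not the actual dubbo-cp implementation.

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	// Assumes the process runs inside the cluster; dubbo-cp wires its client differently.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// Watch Services; this is where a control plane would translate them into xDS resources.
	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	svcInformer := factory.Core().V1().Services().Informer()
	svcInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			svc := obj.(*corev1.Service)
			fmt.Printf("translate Service %s/%s into xDS resources\n", svc.Namespace, svc.Name)
		},
		UpdateFunc: func(_, newObj interface{}) {
			svc := newObj.(*corev1.Service)
			fmt.Printf("re-translate Service %s/%s\n", svc.Namespace, svc.Name)
		},
	})

	stop := make(chan struct{})
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	<-stop // block forever; a real control plane would tie this to shutdown handling
}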

Universal

In Universal mode, Dubbo still uses Nacos or Zookeeper as the registry for service discovery; the control plane then interacts with the registry directly and serves as the console UI, the entry point for viewing and managing the cluster.

(universal-mode diagram)

Multiple clusters

Dubbo Control Plane supports running your services in multiple zones. It is even possible to run with a mix of Kubernetes and Universal zones. Your microservice environment can include multiple isolated services, and workloads running in different regions, on different clouds, or in different datacenters. A zone can be a Kubernetes cluster, a VPC, or any other deployment you need to include in the same distributed microservice environment. The only condition is that all the data planes running within the zone must be able to connect to the other data planes in this same zone.

Dubbo Control Plane also supports a global deployment mode that connects clusters across different zones and regions. The picture below shows how it works.

(multiple-cluster deployment diagram)

Roadmap

  • Security
  • Metrics
  • Cross-cluster communication
  • Console

References

  • Dubboctl
  • Console UI Design
  • Dubbo java xDS implementation
  • Dubbo go xDS implementation


dubbo-kubernetes's Issues

Automatically inject zookeeper/nacos address into admin.

Admin relies on the registry and config center as its original data sources, so it needs to know their addresses to connect to them. Currently, admin reads those addresses from admin.yml, but they are set to something like 127.0.0.1:2181 when the image is built, which is not a real address.

We need a way to pass the right Zookeeper or Nacos addresses into admin. The right time to do this is probably when these components are installed, so that by the time admin starts it already knows the right Zookeeper/Nacos addresses (typically zookeeper.dubbo-system.svc). A minimal sketch of how admin could consume such an injected address follows.
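
A minimal sketch assuming a hypothetical DUBBO_REGISTRY_ADDRESS environment variable injected at install time; this is not existing admin code, only an illustration of the fallback behavior described above.

package main

import (
	"fmt"
	"os"
)

// registryAddress prefers an address injected at install time (via the
// hypothetical DUBBO_REGISTRY_ADDRESS environment variable) and falls back
// to whatever was read from admin.yml.
func registryAddress(fromAdminYML string) string {
	if addr := os.Getenv("DUBBO_REGISTRY_ADDRESS"); addr != "" {
		return addr // e.g. zookeeper://zookeeper.dubbo-system.svc:2181
	}
	return fromAdminYML // e.g. the placeholder 127.0.0.1:2181 baked into the image
}

func main() {
	fmt.Println(registryAddress("zookeeper://127.0.0.1:2181"))
}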

dubboctl create

Current behavior:

dubboctl create -l java // the current directory must be empty, i.e. the user has to create an empty directory in advance

Expected:

  • Could it support creating a sub-directory directly, with all generated resources placed in that sub-directory?
  • dubboctl create -l java dubbo-demo

Admin Console

Improve the Admin console and fix its bugs. Verify Zookeeper and Nacos as the registration center.

dubboctl: The usage would be printed after the execution fails

The usage should only be printed when the user passes wrong flags (or some required flags are not set), but dubboctl also prints the usage when a command fails to execute.

Reproduction steps

# under the go template workspace
export DOCKER_HOST="unix:///path/to/no/exist.sock"
dubboctl build --image test:latest -d

The command would fail and print usage:

(screenshot of the printed usage text)

Since the flags are correct in this case, the program should only print the actual error rather than also printing the usage. A sketch of one common way to achieve this with cobra is shown below.
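
Assuming dubboctl's commands are built with cobra, a common fix is to set SilenceUsage so usage is only printed for flag-parsing errors; the command body below is only a stand-in for the real build command.

package main

import (
	"errors"
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

func main() {
	cmd := &cobra.Command{
		Use: "dubboctl",
		// SilenceUsage stops cobra from printing the usage text when RunE
		// returns an error; usage is still printed for flag-parsing errors.
		SilenceUsage: true,
		// SilenceErrors lets us print the error exactly once below.
		SilenceErrors: true,
		RunE: func(cmd *cobra.Command, args []string) error {
			// Stand-in for a failing build, e.g. the docker client error above.
			return errors.New("cannot create docker client: docker API not available")
		},
	}
	if err := cmd.Execute(); err != nil {
		fmt.Fprintln(os.Stderr, "Error:", err)
		os.Exit(1)
	}
}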

Console feature: add cluster overview page

description

Build the page out of a set of dashboard-style panels with intuitive cluster data; keeping it simple is enough:

  • number of applications, machines, and services, plus the distribution of data-plane versions
  • some documentation links can also be added at the bottom

(mockup screenshot)

components

  • backend
  • frontend

depends on

#78

Supports night mode

Supporting night mode can enhance the user experience. Consider implementing it.

Control Plane data plane adaptation -- Java

Dubbo Java's dubbo-cp data plane adaptation. You can refer to
apache/dubbo#11928 and
test/testclient/ddsc.go (the specific rules can be seen inside).
For certificate-related information, please refer to the PR above or consult 远云.

Dubbo has already integrated with xDS, i.e. using Istio as the mesh control plane, with Dubbo compatible with some of Istio's rules. Integrating Dubbo with dubbo-cp works in much the same way. Our protocol is called dds, a name adapted from xDS. Its purpose is to cooperate with the Istio mesh control plane to deliver Dubbo-flavored service governance (for example, Dubbo traffic management).

Workflow diagram:
(dubbo-cp workflow diagram)

The concrete workflow:

  • When a Dubbo service starts, it needs to send its interface-to-application mapping to dubbo-cp, then subscribe so it can dynamically receive changes of that mapping.
  • Subscribe to the traffic management, authentication, and authorization rules, dynamically fetch the configuration, and apply it on the data plane.

Use 远云's PR as the main reference, but note a few small changes: data used to be serialized as JSON and is now serialized with protobuf, the request Type has been changed to GVK, and so on; see test/testclient/ddsc.go and the rule definitions inside it for details.

That is the main flow.

Beyond that, a certificate mechanism also needs to be added; for the certificate mechanism, mainly refer to 远云's PR.

The detailed flow is shown below:
(certificate workflow diagram)

This part of the process can be fairly complex and needs a close reading of 远云's PR.

  • Control plane: implements a CA responsible for issuing certificates to every service in the mesh and distributing them. Concretely, the certificates are mounted into the Pod via a dynamic admission controller, and an environment variable is added so that reading it yields the certificate path.
  • Data plane: when services in the mesh initiate TCP communication with each other, the Dubbo SDK intercepts the request, performs mutual TLS authentication with the certificates, establishes a TLS connection, and uses that connection to transfer data within the mesh.

For details, refer to 远云's PR.

The rules and protocol follow test/testclient/ddsc.go; they do not differ much from the PR. A purely illustrative sketch of the two-step workflow above follows.
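
A purely illustrative sketch of that two-step workflow; every type and method here is hypothetical, and the real rule/protocol definitions live in test/testclient/ddsc.go and 远云's PR.

package main

import "fmt"

// ddsClient is a hypothetical stand-in for the dds client in
// test/testclient/ddsc.go; the real one talks to dubbo-cp over a stream.
type ddsClient struct{}

// PushMapping reports the interface -> application mapping when the service starts.
func (c *ddsClient) PushMapping(iface, app string) error {
	fmt.Printf("push mapping %s -> %s to dubbo-cp\n", iface, app)
	return nil
}

// WatchRules subscribes to traffic/authentication/authorization rules by GVK;
// the callback applies each received rule on the data plane.
func (c *ddsClient) WatchRules(gvk string, apply func(rule string)) {
	// In the real client this is a long-lived protobuf stream, not a single call.
	apply("example-rule-for-" + gvk)
}

func main() {
	c := &ddsClient{}
	_ = c.PushMapping("org.apache.dubbo.samples.GreetingService", "dubbo-demo")
	c.WatchRules("TrafficRoute", func(rule string) {
		fmt.Println("apply rule on the data plane:", rule)
	})
}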

Regarding ingress controller replacement

After the new version of the application stabilizes, the plan is to replace the Traefik Ingress controller with the community-native Ingress controller.

Reasons for the change:

  1. Seamless integration with the Kubernetes core API ecosystem.
  2. Dependency decoupling, reducing reliance on plugins, middleware, and extension mechanisms.
  3. Adherence to standardization, providing improved compatibility and stability.

Define the core caching or adapting layer for dubbo-cp to support both universal and kubernetes mode

We should define a caching or adapting layer for the dubbo-cp console to work with both Universal and Kubernetes modes (a rough interface sketch is given after the reference list below); it should be able to:

  1. Hide the implementation details of Zookeeper/Nacos or Kubernetes server.
  2. Provide a unified API for the different dubbo-cp console features.

Reference implementations we have now:

  1. Dubbo-cp, Universal cache implementation in https://github.com/apache/dubbo-kubernetes/tree/master/pkg/admin/cache
  2. Kiali, Kubernetes/Mesh cache implementation in https://github.com/kiali/kiali/tree/master/kubernetes
  3. Kuma, the Universal and Kubernetes modes in https://github.com/kumahq/kuma
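
A rough Go sketch of what such an adapting layer could look like; all names here are hypothetical illustrations, not the actual dubbo-cp API.

package cache

import "context"

// Application is a hypothetical, store-agnostic view of an application
// exposed to the console features.
type Application struct {
	Name      string
	Instances int
}

// Store hides whether the data comes from Zookeeper/Nacos (Universal mode)
// or from the Kubernetes API server (Kubernetes mode).
type Store interface {
	Applications(ctx context.Context) ([]Application, error)
	Services(ctx context.Context, application string) ([]string, error)
}

// NewStore picks the backing implementation based on the running mode.
func NewStore(mode string) Store {
	if mode == "kubernetes" {
		return &kubeStore{}
	}
	return &registryStore{}
}

// kubeStore and registryStore are placeholders for the two implementations.
type kubeStore struct{}

func (s *kubeStore) Applications(ctx context.Context) ([]Application, error)   { return nil, nil }
func (s *kubeStore) Services(ctx context.Context, app string) ([]string, error) { return nil, nil }

type registryStore struct{}

func (s *registryStore) Applications(ctx context.Context) ([]Application, error)   { return nil, nil }
func (s *registryStore) Services(ctx context.Context, app string) ([]string, error) { return nil, nil }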

dubboctl: consider extracting the template dockerfile contents into separate files under a dir

The template Dockerfile content is currently placed in a Go source file. Consider extracting it into separate Dockerfiles, like go.dockerfile and java.dockerfile, and using go:embed to embed the Dockerfile contents as string variables in an index Go source file.

This way, we get the Dockerfile highlighting provided by IDEs, and it becomes easier to use tools to check the syntax and format the files. It also makes it easier to support more runtimes, modify individual runtime configuration files, etc. A minimal go:embed sketch follows.
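
A minimal sketch of the go:embed approach described above; the go.dockerfile/java.dockerfile names follow the issue text, while the package and variable names are assumptions.

// Package dockerfiles indexes the per-runtime Dockerfile templates.
// The templates live next to this file as plain Dockerfiles, so IDEs can
// highlight and lint them.
package dockerfiles

import _ "embed"

//go:embed go.dockerfile
var GoDockerfile string

//go:embed java.dockerfile
var JavaDockerfile string

// ForRuntime returns the embedded Dockerfile template for a runtime,
// or an empty string if the runtime is unknown.
func ForRuntime(runtime string) string {
	switch runtime {
	case "go":
		return GoDockerfile
	case "java":
		return JavaDockerfile
	default:
		return ""
	}
}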

Code generation tools

dubbo-control-plane currently has a code generation tool whose purpose is to let developers quickly add CRD resources. However, the generator is not good enough yet and needs improvement. The task is to put forward your own improvement proposal for the tool and then participate in improving it as a whole. It lives under the tools directory. You can refer to the code generation tools of Kuma and Istio.

Exhaustive testing

At present, dubboctl and the dubbo-kubernetes control plane need detailed testing. I need someone to assist me with the testing.

Could not find or load main class com.alibaba.dubbo.demo.consumer.Provider

Hi, I generated the Docker image directly from the jar and tried to deploy it on Kubernetes, but the provider prints the following log:

Error: Could not find or load main class com.alibaba.dubbo.demo.consumer.Provider
Error: Could not find or load main class com.alibaba.dubbo.demo.consumer.Provider

Please help take a look.

Upgrade to Vue 3 in the UI project.

  • I have searched the issues of this repository and believe that this is not a duplicate.
  • I have checked the README of this repository and believe that this is not a duplicate.

Environment

  • dubbo-admin-ui: 0.3.0-SNAPSHOT

Steps to reproduce this issue

  1. upgrade vue2.6 to ^3.3.4
  2. use ui with antdv ^4.0.7

Expected Result

The Vue 3 framework offers improved performance compared to Vue 2.6, resulting in higher efficiency for our UI project.

Actual Result

Is the Maven package dubbo-registry-kubernetes part of this project?

I want to try out the Dubbo and Kubernetes related features and found that all the samples reference the dubbo-registry-kubernetes package, but that package has no packaging record on Dubbo's home page. None of the samples mentions the dubbo-registry-kubernetes project, yet all of them use it, and without it nothing starts at all.

Inconsistent versions:
The versions used by the samples also differ; some are 3.0.1-SNAPSHOT, some are 3.0.2.

On https://mvnrepository.com/artifact/org.apache.dubbo/dubbo-registry-kubernetes this package has already reached 3.0.5. How was it published? Why is there no release record in this project? And if the package belongs to this project, why are there no code commit records?

Support podman

We need dubboctl's image build capability to support Podman; currently only Docker is supported.

'Dubboctl build' failed with error 'docker API not available'

Error: cannot create docker client: docker API not available

Environment

MacBook 
Apple M1 Pro
Client:
 Cloud integration: v1.0.35-desktop+001
 Version:           24.0.5
 API version:       1.43
 Go version:        go1.20.6
 Git commit:        ced0996
 Built:             Fri Jul 21 20:32:30 2023
 OS/Arch:           darwin/arm64
 Context:           desktop-linux

Server: Docker Desktop 4.22.1 (118664)
 Engine:
  Version:          24.0.5
  API version:      1.43 (minimum version 1.12)
  Go version:       go1.20.6
  Git commit:       a61e2b4
  Built:            Fri Jul 21 20:35:38 2023
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.6.21
  GitCommit:        3dce8eb055cbb6872793272b4f20ed16117344f8
 runc:
  Version:          1.1.7
  GitCommit:        v1.1.7-0-g860f061
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

buildpack builder

Customize a builder so that users in China can pull it quickly, and tailor it to Dubbo's specific needs so that the program can run after it is built.

kubernetes console

We need to add Kubernetes-related data to the observability of dubbo-kubernetes. For details, please refer to https://github.com/kiali/kiali.
Code contributions go under pkg/admin (specific details can be discussed later).

Elastic Scaling Support

  • I have searched the issues of this repository and believe that this is not a duplicate.
  • I have checked the README of this repository and believe that this is not a duplicate.

Is your feature request related to a problem? Please describe.

Incorporate KEDA and HPA support into the project's charts to enhance the project's automatic scaling capabilities. This will help better manage container scaling, automatically adjusting the number of containers based on load requirements.

Why is this Feature Needed

  1. Enhanced Automatic Scaling Support

    • KEDA automatically scales applications based on event sources such as queues and message queues.
    • HPA automatically scales based on CPU and memory usage.
  2. Facilitate Ease of Use for Users. Users will find it easier to integrate these automatic scaling features with the project.

  3. KEDA and HPA are widely adopted in the Kubernetes community. By adding this support, it enhances the project's appeal.

Contribute Feature

I am willing to contribute to this feature. I have experience using KEDA and HPA in Kubernetes environments and am familiar with the project's chart structure. My planned initial steps include:

  • Research the current project's chart structure and dependencies.
  • Add configuration options for KEDA and HPA to the charts, making it easy for users to enable these features.
  • Update documentation to reflect the usage of these new features.

Additional context

If there is any additional information or supplementary content, I would be happy to hear and consider it. I hope this feature will bring greater value to the project and the community.

Remove gogo proto plan

This project uses gogo proto, but gogo proto stopped being maintained a few months ago, so it needs to be replaced. This may involve a small-scale refactor of the control plane.

Console feature: monitoring and details of dubbo-cp

description

Monitor the control-plane process itself:

  • basic dubbo-cp information: the connected registry, the Kubernetes cluster, the working mode, configuration files, etc.
  • show the metrics of the dubbo-cp process separately

components

  • backend
  • frontend

depends on

#78

dubbogo kubernetes

Improve dubbo-go's internal Kubernetes-related mechanisms to align them with the lifecycle of the templates generated by dubboctl deploy, and so on...

Console feature: add application detail page

description

Starting from a single application, show its details:

  • for Kubernetes, the Kubernetes Service details
  • deployment/pod/ip details
  • details of the services it publishes
  • details of the services it consumes
  • an aggregated traffic page (similar to the Grafana dashboards we have today)
  • the application topology/dependency graph generated from this application

components

  • backend
  • frontend

depends on

#78

Ambiguous flags in dubboctl deploy command

(screenshot of the current deploy flags)

Would the following be better?

dubboctl deploy
# deploy reads the dubbo.yml context; if a build has been done before, the command above passes directly, and push defaults to true
# if there has been no build, prompt the user for an image tag; push defaults to true

dubboctl deploy --image docker.io/testuser/testdubbo:latest
# skip the build and generate the k8s yaml directly from the specified image

Console feature: add workload/ip detail page

description

Per-machine / workload / IP details:

  • A lot could be done here, depending on the situation; there are many references, such as Kiali and Spring Boot Admin.

components

  • backend
  • frontend

depends on

#78

Control Plane data plane adaptation -- Go

dubbo-go's dubbo-cp data plane adaptation. You can refer to
apache/dubbo#11928 and
test/testclient/ddsc.go (the specific rules can be seen inside).
For certificate-related information, please refer to the PR above or consult 远云.

Dubbo has already integrated with xDS, i.e. using Istio as the mesh control plane, with Dubbo compatible with some of Istio's rules. Integrating Dubbo with dubbo-cp works in much the same way. Our protocol is called dds, a name adapted from xDS. Its purpose is to cooperate with the Istio mesh control plane to deliver Dubbo-flavored service governance (for example, Dubbo traffic management).

Workflow diagram:
(dubbo-cp workflow diagram)

The concrete workflow:
(workflow diagram)

When a Dubbo service starts, it needs to send its interface-to-application mapping to dubbo-cp, then subscribe so it can dynamically receive changes of that mapping.
Subscribe to the traffic management, authentication, and authorization rules, dynamically fetch the configuration, and apply it on the data plane.
Use 远云's PR as the main reference, but note a few small changes: data used to be serialized as JSON and is now serialized with protobuf, the request Type has been changed to GVK, and so on; see test/testclient/ddsc.go and the rule definitions inside it for details.

That is the main flow.

Beyond that, a certificate mechanism also needs to be added; for the certificate mechanism, mainly refer to 远云's PR.

The detailed flow is shown below:
(certificate workflow diagram)

This part of the process can be fairly complex and needs a close reading of 远云's PR.

Control plane: implements a CA responsible for issuing certificates to every service in the mesh and distributing them. Concretely, the certificates are mounted into the Pod via a dynamic admission controller, and an environment variable is added so that reading it yields the certificate path.
Data plane: when services in the mesh initiate TCP communication with each other, the Dubbo SDK intercepts the request, performs mutual TLS authentication with the certificates, establishes a TLS connection, and uses that connection to transfer data within the mesh.
For details, refer to 远云's PR.

The rules and protocol follow test/testclient/ddsc.go; they do not differ much from the PR.

Project release process

Write some scripts for project release. For details, you can refer to Istio’s approach.

Front-end packaging issues

There are problems with the paths in the index.html generated after the front-end UI is packaged, causing the resources not to be found.

<script defer="defer" src="/static/js/chunk-vendors.6ea1db8a.js"></script><script defer="defer" src="/static/js/app.13889972.js"></script>
/static here should be static (a relative path).

.
├── dubbo-admin-info.json
├── dubbo.ico
├── echarts-en.min.js
├── fonts
│   ├── flUhRq6tzZclQEJ-Vdg-IuiaDsNcIhQ8tQ.woff2
│   ├── KFOlCnqEu92Fr1MmEU9fABc4AMP6lbBP.woff2
│   ├── KFOlCnqEu92Fr1MmEU9fBBc4AMP6lQ.woff2
│   ├── KFOlCnqEu92Fr1MmEU9fBxc4AMP6lbBP.woff2
│   ├── KFOlCnqEu92Fr1MmEU9fCBc4AMP6lbBP.woff2
│   ├── KFOlCnqEu92Fr1MmEU9fChc4AMP6lbBP.woff2
│   ├── KFOlCnqEu92Fr1MmEU9fCRc4AMP6lbBP.woff2
│   ├── KFOlCnqEu92Fr1MmEU9fCxc4AMP6lbBP.woff2
│   ├── KFOlCnqEu92Fr1MmSU5fABc4AMP6lbBP.woff2
│   ├── KFOlCnqEu92Fr1MmSU5fBBc4AMP6lQ.woff2
│   ├── KFOlCnqEu92Fr1MmSU5fBxc4AMP6lbBP.woff2
│   ├── KFOlCnqEu92Fr1MmSU5fCBc4AMP6lbBP.woff2
│   ├── KFOlCnqEu92Fr1MmSU5fChc4AMP6lbBP.woff2
│   ├── KFOlCnqEu92Fr1MmSU5fCRc4AMP6lbBP.woff2
│   ├── KFOlCnqEu92Fr1MmSU5fCxc4AMP6lbBP.woff2
│   ├── KFOlCnqEu92Fr1MmWUlfABc4AMP6lbBP.woff2
│   ├── KFOlCnqEu92Fr1MmWUlfBBc4AMP6lQ.woff2
│   ├── KFOlCnqEu92Fr1MmWUlfBxc4AMP6lbBP.woff2
│   ├── KFOlCnqEu92Fr1MmWUlfCBc4AMP6lbBP.woff2
│   ├── KFOlCnqEu92Fr1MmWUlfChc4AMP6lbBP.woff2
│   ├── KFOlCnqEu92Fr1MmWUlfCRc4AMP6lbBP.woff2
│   ├── KFOlCnqEu92Fr1MmWUlfCxc4AMP6lbBP.woff2
│   ├── KFOmCnqEu92Fr1Mu4mxKKTU1Kg.woff2
│   ├── KFOmCnqEu92Fr1Mu4WxKKTU1Kvnz.woff2
│   ├── KFOmCnqEu92Fr1Mu5mxKKTU1Kvnz.woff2
│   ├── KFOmCnqEu92Fr1Mu72xKKTU1Kvnz.woff2
│   ├── KFOmCnqEu92Fr1Mu7GxKKTU1Kvnz.woff2
│   ├── KFOmCnqEu92Fr1Mu7mxKKTU1Kvnz.woff2
│   └── KFOmCnqEu92Fr1Mu7WxKKTU1Kvnz.woff2
├── html
│   ├── 50x.html
│   ├── dubbo-admin-info.json
│   ├── dubbo.ico
│   ├── echarts-en.min.js
│   ├── fonts
│   │   ├── flUhRq6tzZclQEJ-Vdg-IuiaDsNcIhQ8tQ.woff2
│   │   ├── KFOlCnqEu92Fr1MmEU9fABc4AMP6lbBP.woff2
│   │   ├── KFOlCnqEu92Fr1MmEU9fBBc4AMP6lQ.woff2
│   │   ├── KFOlCnqEu92Fr1MmEU9fBxc4AMP6lbBP.woff2
│   │   ├── KFOlCnqEu92Fr1MmEU9fCBc4AMP6lbBP.woff2
│   │   ├── KFOlCnqEu92Fr1MmEU9fChc4AMP6lbBP.woff2
│   │   ├── KFOlCnqEu92Fr1MmEU9fCRc4AMP6lbBP.woff2
│   │   ├── KFOlCnqEu92Fr1MmEU9fCxc4AMP6lbBP.woff2
│   │   ├── KFOlCnqEu92Fr1MmSU5fABc4AMP6lbBP.woff2
│   │   ├── KFOlCnqEu92Fr1MmSU5fBBc4AMP6lQ.woff2
│   │   ├── KFOlCnqEu92Fr1MmSU5fBxc4AMP6lbBP.woff2
│   │   ├── KFOlCnqEu92Fr1MmSU5fCBc4AMP6lbBP.woff2
│   │   ├── KFOlCnqEu92Fr1MmSU5fChc4AMP6lbBP.woff2
│   │   ├── KFOlCnqEu92Fr1MmSU5fCRc4AMP6lbBP.woff2
│   │   ├── KFOlCnqEu92Fr1MmSU5fCxc4AMP6lbBP.woff2
│   │   ├── KFOlCnqEu92Fr1MmWUlfABc4AMP6lbBP.woff2
│   │   ├── KFOlCnqEu92Fr1MmWUlfBBc4AMP6lQ.woff2
│   │   ├── KFOlCnqEu92Fr1MmWUlfBxc4AMP6lbBP.woff2
│   │   ├── KFOlCnqEu92Fr1MmWUlfCBc4AMP6lbBP.woff2
│   │   ├── KFOlCnqEu92Fr1MmWUlfChc4AMP6lbBP.woff2
│   │   ├── KFOlCnqEu92Fr1MmWUlfCRc4AMP6lbBP.woff2
│   │   ├── KFOlCnqEu92Fr1MmWUlfCxc4AMP6lbBP.woff2
│   │   ├── KFOmCnqEu92Fr1Mu4mxKKTU1Kg.woff2
│   │   ├── KFOmCnqEu92Fr1Mu4WxKKTU1Kvnz.woff2
│   │   ├── KFOmCnqEu92Fr1Mu5mxKKTU1Kvnz.woff2
│   │   ├── KFOmCnqEu92Fr1Mu72xKKTU1Kvnz.woff2
│   │   ├── KFOmCnqEu92Fr1Mu7GxKKTU1Kvnz.woff2
│   │   ├── KFOmCnqEu92Fr1Mu7mxKKTU1Kvnz.woff2
│   │   └── KFOmCnqEu92Fr1Mu7WxKKTU1Kvnz.woff2
│   ├── index.html
│   ├── OpenSans.css
│   └── static
│       ├── css
│       │   ├── app.4ab00ae8.css
│       │   ├── app.888953a3.css
│       │   ├── app.f8af26eb.css
│       │   ├── chunk-vendors.15048f58.css
│       │   ├── chunk-vendors.90cf4675.css
│       │   └── chunk-vendors.ad39ed5a.css
│       ├── img
│       │   ├── jsoneditor-icons.94cc3007.svg
│       │   └── max_btn.546521f3.svg
│       └── js
│           ├── app.13889972.js
│           ├── app.13889972.js.map
│           ├── braceBase.26ea46f8.js
│           ├── braceBase.26ea46f8.js.map
│           ├── chunk-vendors.6ea1db8a.js
│           └── chunk-vendors.6ea1db8a.js.map
├── index.html
├── OpenSans.css
└── static
    ├── css
    │   ├── app.4ab00ae8.css
    │   └── chunk-vendors.90cf4675.css
    ├── img
    │   ├── jsoneditor-icons.94cc3007.svg
    │   └── max_btn.546521f3.svg
    └── js
        ├── app.13889972.js
        ├── app.13889972.js.map
        ├── braceBase.26ea46f8.js
        ├── braceBase.26ea46f8.js.map
        ├── chunk-vendors.6ea1db8a.js
        └── chunk-vendors.6ea1db8a.js.map

dubboctl cannot run standalone

Steps to reproduce this issue

  1. git clone https://github.com/apache/dubbo-kubernetes.git
  2. cd app/dubboctl
  3. go build -o dubboctl main.go
  4. mv ./dubboctl ~/Downloads
  5. rm -f -R dubbo-kubernetes (project directory)
  6. ~/Downloads/dubboctl profile list
Error: open /Users/jun/java/dubbo/dubbo-kubernetes/deploy/profiles: no such file or directory

  1. app/dubboctl/identifier/env.go
var (
	_, b, _, _ = runtime.Caller(0)
	// Root folder of dubbo-admin
	// This relies on the fact this file is 3 levels up from the root; if this changes, adjust the path below
	Root            = filepath.Join(filepath.Dir(b), "../../..")
	Deploy          = filepath.Join(Root, "/deploy")
	Charts          = filepath.Join(Deploy, "/charts")
	Profiles        = filepath.Join(Deploy, "profiles")
	Addons          = filepath.Join(Deploy, "addons")
	AddonDashboards = filepath.Join(Addons, "dashboards")
	AddonManifests  = filepath.Join(Addons, "manifests")
)

The Root value is the project source directory at build time, so dubboctl cannot run standalone. One possible direction is sketched below.
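
One possible direction, sketched only as an illustration: resolve the deploy directory relative to the dubboctl executable, with an environment-variable override; DUBBO_DEPLOY_DIR is a hypothetical name, not an existing setting.

package identifier

import (
	"os"
	"path/filepath"
)

// deployRoot resolves where the bundled profiles/charts live without relying
// on runtime.Caller, so a standalone binary can still find them.
func deployRoot() (string, error) {
	// Explicit override, e.g. for a binary unpacked from a release tarball.
	if dir := os.Getenv("DUBBO_DEPLOY_DIR"); dir != "" {
		return dir, nil
	}
	// Otherwise look next to the executable: <bin-dir>/../deploy.
	exe, err := os.Executable()
	if err != nil {
		return "", err
	}
	return filepath.Join(filepath.Dir(exe), "..", "deploy"), nil
}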

Integration with CSI storage

  • I have searched the issues of this repository and believe that this is not a duplicate.
  • I have checked the README of this repository and believe that this is not a duplicate.

Is your feature request related to a problem? Please describe.

Incorporate distributed storage capabilities into the charts. Integrate the Rook distributed storage solution into the cluster to provide persistent storage support for containerized applications.

Why is this Feature Needed

Storage Management: Managing distributed storage systems is typically a complex task, involving node management, data distribution, backups, and fault handling, among others, which can be simplified.

Compliance with CSI Standard: The standard Container Storage Interface (CSI) enables containerized applications to seamlessly access and manage storage without needing to understand the underlying details of the storage system.

Data Persistence: Data can be retained even if containers are restarted or migrated.

Across Clouds and Hybrid Cloud: Deploying or migrating containerized applications between different cloud providers, making it well-suited for organizations with multi-cloud strategies or hybrid cloud deployment requirements.

RBD

rbd:
      monitors:
      - 10.100.51.74:6789
      pool: rook-rbd
      image: demo
      fsType: ext4
      user: csi-rbd-node
      secretRef:
        name: ceph-secret

CephFS

cephfs:
      monitors:
      - 10.100.51.74:6789
      path: /
      user: csi-cephfs-node
      secretRef:
        name: ceph-secret

S3

gateway:
    sslCertificateRef:
    port: 80
    instances: 2

SNAPSHOT

source:
    persistentVolumeClaimName:  <rbd-pvc>  # cephfs-pvc

Contribute Feature

I have experience using Rook in a large-scale production environment, enabling users to deploy and manage distributed storage systems more easily. Through Rook integration, I aim to provide users with a comprehensive containerized storage solution to meet the needs of their applications.

Additional context

If there is any additional information or supplementary content, I would be happy to hear and consider it. I hope this feature will bring greater value to the project and the community.

Add dynamic admission controller

Currently dubboctl automatically injects the address of the corresponding registry into the Kubernetes YAML based on which components the user installed, but this injection only happens for YAML generated by dubboctl deploy. We now need to give users another option: YAML files written by the users themselves, rather than generated by dubboctl deploy, should also be injectable dynamically.
Concretely, install a dynamic admission controller into the Kubernetes API server during dubboctl install, then perform injection for applications in namespaces labeled dubbo-injection=enabled. In namespaces labeled "dubbo-deploy"="enabled", look for Zookeeper or Nacos Pods carrying the auto-injection labels ("dubbo.apache.org/zookeeper"="true", "dubbo.apache.org/nacos"="true"); if both are present, Zookeeper takes priority. The dynamic admission controller then adds a DUBBO_REGISTRY_ADDRESS address to the user-written YAML, in a format like zookeeper://zookeeper.dubbo-system.svc. A simplified sketch of the patch such a webhook could return follows.
