
RocketMQ Operator

Overview

RocketMQ Operator manages RocketMQ service instances deployed on a Kubernetes cluster. It is built using the Operator SDK, which is part of the Operator Framework.

RocketMQ-Operator architecture

Quick Start

Deploy RocketMQ Operator

  1. Clone the project on your Kubernetes cluster master node:
$ git clone https://github.com/apache/rocketmq-operator.git
$ cd rocketmq-operator
  2. To deploy the RocketMQ Operator on your Kubernetes cluster, please run the following command:
$ make deploy

If you get the error rocketmq-operator/bin/controller-gen: No such file or directory, run go version to check your Golang version; the major version should be 1.16. Then run go mod tidy before running make deploy.

Alternatively, you can deploy the RocketMQ Operator with Helm:

$ helm install rocketmq-operator charts/rocketmq-operator
  3. Use the command kubectl get pods to check the RocketMQ Operator deployment status:
$ kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
rocketmq-operator-564b5d75d-jllzk         1/1     Running   0          108s

If the pod image cannot be found, run the following command to build one locally; the image tag is specified by the IMG parameter.

$ make docker-build IMG=apache/rocketmq-operator:0.4.0-snapshot

Now you can use the CRDs provided by RocketMQ Operator to deploy your RocketMQ cluster.

Prepare Volume Persistence

Before RocketMQ deployment, you may need to do some preparation steps for RocketMQ data persistence.

Currently we provide several options for your RocketMQ data persistence: EmptyDir, HostPath and StorageClass, which can be configured in CR files, for example in rocketmq_v1alpha1_nameservice_cr.yaml:

...
 # storageMode can be EmptyDir, HostPath, StorageClass
  storageMode: HostPath
...

EmptyDir

If you choose EmptyDir, you don't need to do extra preparation steps for data persistence. However, the data lifetime is the same as the pod's lifetime: if the pod is deleted, the data may be lost.

If you choose other storage modes, please refer to the following instructions to prepare the data persistence.

HostPath

This storage mode means the RocketMQ data (including all the logs and store files) is stored on the host where each pod is located. You need to create a directory on the host where you want the RocketMQ data to be stored. For example:

$ mkdir /data/rocketmq/broker

You can configure the host path via hostPath: /data/rocketmq/broker in the example/rocketmq_v1alpha1_rocketmq_cluster.yaml file, as in the fragment below.
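
For reference, a minimal sketch of the storage-related fields of a Broker CR using HostPath is shown below (the field names are taken from the full example later in this document; all other required Broker fields are omitted here):

apiVersion: rocketmq.apache.org/v1alpha1
kind: Broker
metadata:
  name: broker
spec:
  # storageMode can be EmptyDir, HostPath, StorageClass
  storageMode: HostPath
  # hostPath is the local path on each node where the broker data is stored
  hostPath: /data/rocketmq/broker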

StorageClass (Use NFS for Example)

If you choose StorageClass as the storage mode, you need to prepare the provisioner and other dependencies for the storage class. Using the NFS storage class as an example, the first step is to prepare an NFS-based storage class to create the PV and PVC where the RocketMQ data will be stored.

  1. Deploy NFS server and clients on your Kubernetes cluster. You can refer to the NFS deployment document for more details. Please make sure they are functional before you go to the next step. Here are instructions on how to verify the NFS service.

    1. On your NFS client node, check whether the NFS shared directory exists.
    $ showmount -e 192.168.130.32
    Export list for 192.168.130.32:
    /data/k8s * 
    
    2. On your NFS client node, create a test directory and mount it to the NFS shared directory (you may need sudo permission).
    $ mkdir -p ~/test-nfc
    $ mount -t nfs 192.168.130.32:/data/k8s ~/test-nfc
    
    3. On your NFS client node, create a test file in the mounted test directory.
    $ touch ~/test-nfc/test.txt
    
    4. On your NFS server node, check the shared directory. If the test file created on the client node is there, the NFS service is functional.
    $ ls -ls /data/k8s/
    total 4
    4 -rw-r--r--. 1 root root 4 Jul 10 21:50 test.txt
    
  2. Modify the following configurations of the deploy/storage/nfs-client.yaml file:

...
            - name: NFS_SERVER
              value: 192.168.130.32
            - name: NFS_PATH
              value: /data/k8s
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.130.32
            path: /data/k8s
...

Replace 192.168.130.32 and /data/k8s with your actual NFS server IP address and NFS server data volume path.

  3. Create an NFS storage class for RocketMQ by running:
$ cd deploy/storage
$ ./deploy-storage-class.sh
  4. If the storage class is successfully deployed, the pod status will look like this:
$ kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-7cf858f754-7vxmm   1/1     Running   0          136m
rocketmq-operator-564b5d75d-jllzk         1/1     Running   0          108s

Define Your RocketMQ Cluster

RocketMQ Operator provides several CRDs that allow users to define their RocketMQ service component clusters, including the Name Server cluster, Broker cluster, Console, etc.

  1. Check the file rocketmq_v1alpha1_rocketmq_cluster.yaml in the example directory, which puts these CRs together:
apiVersion: v1
kind: ConfigMap
metadata:
  name: broker-config
  namespace: default
data:
  # BROKER_MEM sets the broker JVM, if set to "" then Xms = Xmx = max(min(1/2 ram, 1024MB), min(1/4 ram, 8GB))
  BROKER_MEM: " -Xms2g -Xmx2g -Xmn1g "
  broker-common.conf: |
    # brokerClusterName, brokerName, brokerId are automatically generated by the operator and do not set it manually!!!
    deleteWhen=04
    fileReservedTime=48
    flushDiskType=ASYNC_FLUSH
    # set brokerRole to ASYNC_MASTER or SYNC_MASTER. DO NOT set to SLAVE because the replica instance will automatically be set!!!
    brokerRole=ASYNC_MASTER

---
apiVersion: rocketmq.apache.org/v1alpha1
kind: Broker
metadata:
  # name of broker cluster
  name: broker
  namespace: default
spec:
  # size is the number of the broker cluster, each broker cluster contains a master broker and [replicaPerGroup] replica brokers.
  size: 1
  # nameServers is the [ip:port] list of name service
  nameServers: ""
  # replicaPerGroup is the number of each broker cluster
  replicaPerGroup: 1
  # brokerImage is the customized docker image repo of the RocketMQ broker
  brokerImage: apacherocketmq/rocketmq-broker:4.5.0-alpine-operator-0.3.0
  # imagePullPolicy is the image pull policy
  imagePullPolicy: Always
  # resources describes the compute resource requirements and limits
  resources:
    requests:
      memory: "2048Mi"
      cpu: "250m"
    limits:
      memory: "12288Mi"
      cpu: "500m"
  # allowRestart defines whether allow pod restart
  allowRestart: true
  # storageMode can be EmptyDir, HostPath, StorageClass
  storageMode: EmptyDir
  # hostPath is the local path to store data
  hostPath: /data/rocketmq/broker
  # scalePodName is [Broker name]-[broker group number]-master-0
  scalePodName: broker-0-master-0
  # env defines custom env, e.g. BROKER_MEM
  env:
    - name: BROKER_MEM
      valueFrom:
        configMapKeyRef:
          name: broker-config
          key: BROKER_MEM
  # volumes defines the broker.conf
  volumes:
    - name: broker-config
      configMap:
        name: broker-config
        items:
          - key: broker-common.conf
            path: broker-common.conf
  # volumeClaimTemplates defines the storageClass
  volumeClaimTemplates:
    - metadata:
        name: broker-storage
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: rocketmq-storage
        resources:
          requests:
            storage: 8Gi
---
apiVersion: rocketmq.apache.org/v1alpha1
kind: NameService
metadata:
  name: name-service
  namespace: default
spec:
  # size is the name service instance number of the name service cluster
  size: 1
  # nameServiceImage is the customized docker image repo of the RocketMQ name service
  nameServiceImage: apacherocketmq/rocketmq-nameserver:4.5.0-alpine-operator-0.3.0
  # imagePullPolicy is the image pull policy
  imagePullPolicy: Always
  # hostNetwork can be true or false
  hostNetwork: true
  #  Set DNS policy for the pod.
  #  Defaults to "ClusterFirst".
  #  Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'.
  #  DNS parameters given in DNSConfig will be merged with the policy selected with DNSPolicy.
  #  To have DNS options set along with hostNetwork, you have to specify DNS policy
  #  explicitly to 'ClusterFirstWithHostNet'.
  dnsPolicy: ClusterFirstWithHostNet
  # resources describes the compute resource requirements and limits
  resources:
    requests:
      memory: "512Mi"
      cpu: "250m"
    limits:
      memory: "1024Mi"
      cpu: "500m"
  # storageMode can be EmptyDir, HostPath, StorageClass
  storageMode: EmptyDir
  # hostPath is the local path to store data
  hostPath: /data/rocketmq/nameserver
  # volumeClaimTemplates defines the storageClass
  volumeClaimTemplates:
    - metadata:
        name: namesrv-storage
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: rocketmq-storage
        resources:
          requests:
            storage: 1Gi

---
apiVersion: rocketmq.apache.org/v1alpha1
kind: Console
metadata:
  name: console
  namespace: default
spec:
  # nameServers is the [ip:port] list of name service
  nameServers: ""
  # consoleDeployment define the console deployment
  consoleDeployment:
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: rocketmq-console
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: rocketmq-console
      template:
        metadata:
          labels:
            app: rocketmq-console
        spec:
          containers:
            - name: console
              image: apacherocketmq/rocketmq-console:2.0.0
              ports:
                - containerPort: 8080

The YAML defines the RocketMQ name server and broker cluster scale, the [ip:port] list of the name service, and so on. By default, nameServers is an empty string, which means the list is obtained automatically by the operator.

Notice: currently the broker image uses the formula max(min(1/2 ram, 1024MB), min(1/4 ram, 8GB)) to calculate the JVM Xmx size, where ram is the host memory size. If the memory resource limit is lower than what the container requires, an OOMKilled error may occur.
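
One way to reduce the risk of such a mismatch (a sketch, not the only approach) is to pin the JVM heap explicitly via BROKER_MEM, since a non-empty value overrides the automatic Xmx calculation, and to keep the container memory limit comfortably above the configured -Xmx:

apiVersion: v1
kind: ConfigMap
metadata:
  name: broker-config
  namespace: default
data:
  # A non-empty BROKER_MEM overrides the automatic Xmx formula above
  BROKER_MEM: " -Xms2g -Xmx2g -Xmn1g "

With a 2g heap, a memory limit of, say, 4096Mi (an illustrative value) leaves headroom for the off-heap memory used by the broker.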

Create RocketMQ Cluster

  1. Deploy the RocketMQ cluster by running:
$ kubectl apply -f example/rocketmq_v1alpha1_rocketmq_cluster.yaml
broker.rocketmq.apache.org/broker created
nameservice.rocketmq.apache.org/name-service created
console.rocketmq.apache.org/console created

The name server cluster is created first; after all name server pods are in the running state, the operator creates the broker cluster.

Check the status:

$ kubectl get pods -owide
NAME                                 READY   STATUS    RESTARTS   AGE    IP             NODE             NOMINATED NODE   READINESS GATES
broker-0-master-0                    1/1     Running   0          71s    10.1.5.91      docker-desktop   <none>           <none>
broker-0-replica-1-0                 1/1     Running   0          71s    10.1.5.92      docker-desktop   <none>           <none>
console-5c4c9d5757-jnsbq             1/1     Running   0          71s    10.1.5.93      docker-desktop   <none>           <none>
name-service-0                       1/1     Running   0          78s    192.168.65.3   docker-desktop   <none>           <none>
rocketmq-operator-758bb9c774-jrfw4   1/1     Running   0          106s   10.1.5.90      docker-desktop   <none>           <none>

Using the default YAML, we can see that there is 1 name service pod, 1 master broker, and 1 replica (slave) broker running on the Kubernetes cluster.

  2. Apply the Service and visit the RocketMQ Console.

By default, we use a NodePort Service to expose the console outside the Kubernetes cluster:

$ kubectl apply -f example/rocketmq_v1alpha1_cluster_service.yaml

Then you can visit the RocketMQ Console at the URL any-k8s-node-IP:30000 (the default), or localhost:30000 if you are currently on a Kubernetes node.
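
The exact content of example/rocketmq_v1alpha1_cluster_service.yaml is not reproduced here, but a minimal sketch of such a NodePort Service, based on the console Deployment labels and port shown above, could look like this (the Service name is an assumption):

apiVersion: v1
kind: Service
metadata:
  # hypothetical name; the name used in the example file may differ
  name: console-service
  namespace: default
spec:
  type: NodePort
  selector:
    # matches the labels of the console Deployment defined earlier
    app: rocketmq-console
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30000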

  3. If you are using a storage class, check the PV and PVC status:
$ kubectl get pvc
NAME                                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
broker-storage-broker-0-master-0        Bound    pvc-7a74871b-c005-441a-bb15-8106566c9d19   8Gi        RWO            rocketmq-storage   78s
broker-storage-broker-0-replica-1-0     Bound    pvc-521e7e9a-3795-487a-9f76-22da74db74dd   8Gi        RWO            rocketmq-storage   78s
namesrv-storage-name-service-0          Bound    pvc-c708cb49-aa52-4992-8cac-f46a48e2cc2e   1Gi        RWO            rocketmq-storage   79s

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                       STORAGECLASS       REASON   AGE
pvc-521e7e9a-3795-487a-9f76-22da74db74dd   8Gi        RWO            Delete           Bound    default/broker-storage-broker-0-replica-1-0 rocketmq-storage            79s
pvc-7a74871b-c005-441a-bb15-8106566c9d19   8Gi        RWO            Delete           Bound    default/broker-storage-broker-0-master-0    rocketmq-storage            79s
pvc-d7b76efe-384c-4f8d-9e8a-ebe209ba826c   8Gi        RWO            Delete           Bound    default/broker-storage-broker-1-master-0    rocketmq-storage            78s

Notice: if you don't choose the StorageClass storage mode, then the above PV and PVC won't be created.

Congratulations! You have successfully deployed your RocketMQ cluster by RocketMQ Operator.

Verify the Data Storage

Verify HostPath Storage

Log in to any node that hosts a RocketMQ service pod and check the hostPath you configured, for example:

$ ls /data/rocketmq/broker
logs  store

$ cat /data/rocketmq/broker/logs/broker-1-replica-1/rocketmqlogs/broker.log
...
2019-09-12 13:12:24 INFO main - The broker[broker-1, 10.244.3.35:10911] boot success. serializeType=JSON and name server is 192.168.130.35:9876
...

Verify NFS storage

Access the NFS server node of your cluster and verify whether the RocketMQ data is stored in your NFS data volume path:

$ cd /data/k8s/

$ ls
default-broker-storage-broker-0-master-0-pvc-7a74871b-c005-441a-bb15-8106566c9d19   
default-broker-storage-broker-0-replica-1-0-pvc-521e7e9a-3795-487a-9f76-22da74db74dd  
default-namesrv-storage-name-service-0-pvc-c708cb49-aa52-4992-8cac-f46a48e2cc2e

$ ls default-broker-storage-broker-0-master-0-pvc-7a74871b-c005-441a-bb15-8106566c9d19/logs/rocketmqlogs/
broker_default.log  broker.log  commercial.log  filter.log  lock.log  protection.log  remoting.log  stats.log  storeerror.log  store.log  transaction.log  watermark.log

$ cat default-broker-storage-broker-0-master-0-pvc-7a74871b-c005-441a-bb15-8106566c9d19/logs/rocketmqlogs/broker.log 
...
2019-09-10 14:12:22 INFO main - The broker[broker-1-master-0, 10.244.2.117:10911] boot success. serializeType=JSON and name server is 192.168.130.33:9876
...

Horizontal Scale

Name Server Cluster Scale

If the current name service cluster scale does not fit your requirements, you can simply use RocketMQ-Operator to up-scale or down-scale your name service cluster.

To enlarge your name service cluster, modify your name service CR file rocketmq_v1alpha1_nameservice_cr.yaml and increase the size field to the number you want, for example, from size: 1 to size: 2, as in the fragment below.
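
A minimal sketch of the changed CR (only the size field differs from your existing CR; all other fields stay as they are):

apiVersion: rocketmq.apache.org/v1alpha1
kind: NameService
metadata:
  name: name-service
  namespace: default
spec:
  # increased from 1 to 2 to add one more name server instance
  size: 2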

Notice: if your broker image version is 4.5.0 or earlier, you need to make sure that allowRestart: true is set in the broker CR file to enable the rolling restart policy. If it is allowRestart: false, change it to allowRestart: true and run kubectl apply -f example/rocketmq_v1alpha1_broker_cr.yaml to apply the new config.

After configuring the size field, simply run:

kubectl apply -f example/rocketmq_v1alpha1_nameservice_cr.yaml 

Then a new name service pod will be deployed, and meanwhile the operator will inform all brokers to update their name service list parameters so they can register with the new name service.

Notice: with allowRestart: true, the brokers are updated gradually, so the update process is not perceptible to producer and consumer clients.

Broker Cluster Scale

Up-scale Broker in Out-of-order Message Scenario

It is often the case that with the development of your business, the old broker cluster scale no longer meets your needs. You can simply use RocketMQ-Operator to up-scale your broker cluster:

  1. Modify size in the broker CR file to the broker cluster scale that you want, for example, from size: 1 to size: 2.

  2. Choose the source broker pod, from which existing metadata such as topic and subscription information will be transferred to the newly created brokers. The source broker pod field is:

...
# scalePodName is broker-[broker group number]-master-0
  scalePodName: broker-0-master-0
...
  3. Apply the new configuration:
$ kubectl apply -f example/rocketmq_v1alpha1_broker_cr.yaml

Then a new group of broker pods will be deployed, and meanwhile the operator will copy the metadata from the source broker pod to the newly created broker pods before the new brokers are started, so the new brokers will reload the previous topic and subscription information.

Topic Transfer

Topic Transfer means that the user wants to migrate the work of serving a specific topic from a source (original) cluster to a target cluster without affecting the business. This may happen when the source cluster is about to shut down, or when the user wants to reduce the workload on the source cluster.

Usually the Topic Transfer process consists of 7 steps:

  • Add all consumer groups of the topic to the target cluster.

  • Add the topic to be transferred to the target cluster.

  • Forbid new message writing into the source cluster.

  • Check the consumer group consumption progress to make sure all messages in the source cluster have been consumed.

  • Delete the topic in the source cluster when all messages in the source cluster have been consumed.

  • Delete the consumer groups in the source cluster.

  • Add the retry-topic to the target cluster.

The TopicTransfer CRD can help you do that. Simply configure the CR file example/rocketmq_v1alpha1_topictransfer_cr.yaml:

apiVersion: rocketmq.apache.org/v1alpha1
kind: TopicTransfer
metadata:
  name: topictransfer
spec:
  # topic defines which topic to be transferred
  topic: TopicTest
  # sourceCluster defines the source cluster
  sourceCluster: broker-0
  # targetCluster defines the target cluster
  targetCluster: broker-1

Then apply the TopicTransfer resource:

$ kubectl apply -f example/rocketmq_v1alpha1_topictransfer_cr.yaml

The operator will automatically perform the topic transfer job.

If the transfer process fails, the operator will roll back the transfer operations to keep the TopicTransfer operation atomic.

You can check the operator logs or the consumption progress to monitor and verify the topic transfer process:

$ kubectl logs -f [operator-pod-name] 
$ sh bin/mqadmin consumerprogress -g [consumer-group] -n [name-server-ip]:9876

Clean the Environment

If you want to tear down the RocketMQ cluster, run the following to remove the name server and broker clusters:

$ kubectl delete -f example/rocketmq_v1alpha1_rocketmq_cluster.yaml
$ kubectl delete -f example/rocketmq_v1alpha1_cluster_service.yaml

To remove the RocketMQ Operator, run:

$ ./purge-operator.sh

To remove the storage class for RocketMQ, run:

$ cd deploy/storage
$ ./remove-storage-class.sh

Note: the StorageClass and HostPath persistence data will not be deleted by default.

Development

Prerequisites

Build

For developers who want to build and push the operator-related images to Docker Hub, please follow the instructions below.

Operator

RocketMQ-Operator uses operator-sdk to generate the scaffolding and build the operator image. You can refer to the operator-sdk user guide for more details.

If you want to build your own operator image and push it to your own docker hub, please specify IMG as your image url and run make docker-build and make docker-push. For example:

$ make docker-build IMG={YOUR_IMAGE_URL} && make docker-push IMG={YOUR_IMAGE_URL}

Broker and Name Server Images

RocketMQ-Operator is based on customized images of the Broker and Name Server, which are built by build-broker-image.sh and build-namesrv-image.sh respectively. Therefore, the images used in the Broker and NameService CR YAML files should be built by these scripts.

You can also modify the DOCKERHUB_REPO variable in the scripts to push the newly built images to your own repository:

$ cd images/alpine/broker
$ ./build-broker-image.sh
$ cd images/alpine/namesrv
$ ./build-namesrv-image.sh

Dashboard

The Console CR directly uses the RocketMQ Dashboard image from https://github.com/apache/rocketmq-docker/blob/master/image-build/Dockerfile-centos-dashboard, which has no customization for the operator.

Note: for users who just want to use the operator, there is no need to build the operator or the customized broker and name server images; simply use the default official images maintained by the RocketMQ community.

rocketmq-operator's People

Contributors

caigy, drivebyer, duhenglucky, koalawangyang, liuruiyiyang, lovelcp, ltamber, mrlyg, rongtongjin, shangjin92, shannonding, stevenleizhang, usernameisnull, vongosling, wlliqipeng, yiyiyimu, zhihui921016, zhouxinyu


rocketmq-operator's Issues

unable to recognize "deploy/crds/rocketmq_v1alpha1_broker_crd.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"

kubectl version v1.22.0

root@xxx:~/rocketmq-operator# bash -x install-operator.sh

  • kubectl create -f deploy/crds/rocketmq_v1alpha1_broker_crd.yaml
    error: unable to recognize "deploy/crds/rocketmq_v1alpha1_broker_crd.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
  • kubectl create -f deploy/crds/rocketmq_v1alpha1_nameservice_crd.yaml
    error: unable to recognize "deploy/crds/rocketmq_v1alpha1_nameservice_crd.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
  • kubectl create -f deploy/crds/rocketmq_v1alpha1_consoles_crd.yaml
    error: unable to recognize "deploy/crds/rocketmq_v1alpha1_consoles_crd.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
  • kubectl create -f deploy/crds/rocketmq_v1alpha1_topictransfer_crd.yaml
    error: unable to recognize "deploy/crds/rocketmq_v1alpha1_topictransfer_crd.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
  • kubectl create -f deploy/service_account.yaml
    serviceaccount/rocketmq-operator created
  • kubectl create -f deploy/role.yaml
    role.rbac.authorization.k8s.io/rocketmq-operator created
  • kubectl create -f deploy/role_binding.yaml
    rolebinding.rbac.authorization.k8s.io/rocketmq-operator created

pod resources control and args needed

The operator creates StatefulSets for namesrv and broker without resources settings, and the args field is also unsupported; the official RocketMQ image boots with over 8 GB of memory by default, which leads to serious operational problems.

doc: typo in README.md

kubectl apply -f example/rocketmq_cluster_service.yaml

should be

kubectl apply -f example/rocketmq_v1alpha1_cluster_service.yaml

Support for creating multiple brokerClusters with different names

BUG REPORT

  1. Please describe the issue you observed:
  • What did you do (The steps to reproduce)?
    After creating a broker cluster, create a second broker cluster with a different name

  • What did you expect to see?
    The second broker cluster can be created normally

  • What did you see instead?
    The second broker cluster cannot be created unless the operator is restarted

  2. Please tell us about your environment:
    operator:0.2.0
    k8s:1.17.2

  3. Other information (e.g. detailed explanation, logs, related issues, suggestions how to fix, etc):
    broker_controller uses "isInitial" to determine whether the cluster is new; this "isInitial" flag affects the creation of the second broker cluster.

Are you using RocketMQ Operator

Due Diligence

RocketMQ Operator manages RocketMQ service instances deployed on Kubernetes clusters, and it has attracted the attention of many developers and enterprises since it was open-sourced. In order to better understand how the RocketMQ Operator is used and to continue optimizing it, we sincerely invite you to take a minute to give feedback on your usage scenario.

What we expect from you

Please submit a comment in this issue that includes the following information:
• your company, school or organization
• your country and city
• your contact info, such as email, WeChat, or Twitter (optional).
• usage scenario

You can refer to the following sample answer for the format:

* Organization: XX Company
* Location: Seoul, South Korea
* Contact: [email protected]
* usage scenario: The RocketMQ Operator manages about 200+ brokers, 30,000+ queues, and 5,000+ 4-core 16 GB virtual machines in the RocketMQ cluster, which mainly support the core payment system and the real-time communication system. It has been online since August 2019.

Thanks again for your participation!
Apache RocketMQ Community

We Want to Hear Your Voice

RocketMQ Operator is a tool for deploying and managing RocketMQ clusters on top of the Kubernetes platform, and it has attracted the attention of many developers and enterprises since it was open-sourced. To better understand how the RocketMQ Operator project is used and to continue optimizing it, we sincerely invite you to spend one minute giving feedback on your usage scenario.

What we expect from you

A comment including, but not limited to:
• your company, school, or organization
• your country and city
• your contact info: email, WeChat, or Twitter account (to keep in touch, we recommend leaving contact info)
• which business scenarios you use RocketMQ Operator for, and at what scale
• your feedback

You can refer to the following sample to provide your information:

* Organization: XX Co., Ltd.
* Company profile (optional):
* Location: Seoul, South Korea
* Contact: [email protected]
* Scenario and scale: The RocketMQ Operator currently manages a RocketMQ cluster of about 200+ brokers and 30,000+ queues, with a client fleet of 5,000+ 4-core 16 GB virtual machines, mainly supporting the core payment system and the real-time communication back end; online since August 2019.
* Comments:

Thank you very much for your participation!
Apache RocketMQ Community

question about namesrv cluster up-scale

When the namesrv cluster is up-scaled, does the broker shut down, change the namesrv list, and then restart?
I ask because I noticed that the broker IP changed after the namesrv cluster up-scale and the log file contains restart logs. Can we up-scale the namesrv cluster without restarting the brokers?

Operator is not usable across namespaces

I followed the OperatorHub installation instructions (https://operatorhub.io/operator/rocketmq-operator):

1. curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.17.0/install.sh | bash -s v0.17.0
2. kubectl create -f https://operatorhub.io/install/rocketmq-operator.yaml

Below the instruction, it says: "This Operator will be installed in the "operators" namespace and will be usable from all namespaces in the cluster."

But when I create a NameService and a Broker, it only works in the operators namespace, where it creates deployments and services. In other namespaces, nothing happens.

ENV: Aliyun K8S 1.18

Why does the broker always get the former nameservice IP

Hi there,
I am new to RocketMQ and am trying to launch a RocketMQ cluster on k8s.
First, I rewrote the manifests from rocketmq_v1alpha1_rocketmq_cluster.yaml into Helm charts and everything is OK, but the Broker & Console services always get the wrong nameserver IP.

Then I found that this IP is always the IP of the last NameService. Is there something wrong?
By the way, I just set nameServers: "" so the operator sets it automatically.

Dockerfile issue

BUG REPORT

  1. Please describe the issue you observed:
  • What did you do (The steps to reproduce)?
    ./create-operator.sh
  • What did you expect to see?
    operator image built with success
  • What did you see instead?
    operator image built with error, as following:
INFO[0000] Building OCI image docker.io/library/rocketmq-operator:v0.0.1-snapshot 
Sending build context to Docker daemon  70.62MB
Step 1/15 : FROM openjdk:8-alpine
 ---> a3562aa0b991
Step 2/15 : RUN apk add --no-cache bash gettext nmap-ncat openssl busybox-extras
 ---> Using cache
 ---> f589b44cfa81
Step 3/15 : ENV OPERATOR=/usr/local/bin/rocketmq-operator     USER_UID=1001     USER_NAME=rocketmq-operator
 ---> Using cache
 ---> cc4ea4c26514
Step 4/15 : COPY build/_output/bin/rocketmq-operator ${OPERATOR}
 ---> Using cache
 ---> 4a1a344fd240
Step 5/15 : COPY build/bin /usr/local/bin
 ---> Using cache
 ---> 7579aed2506c
Step 6/15 : RUN  /usr/local/bin/user_setup
 ---> Using cache
 ---> 3d3e29904d3d
Step 7/15 : ENV ROCKETMQ_VERSION 4.5.0
 ---> Using cache
 ---> bb839b99a39c
Step 8/15 : ENV ROCKETMQ_HOME  /home/rocketmq/rocketmq-${ROCKETMQ_VERSION}
 ---> Using cache
 ---> 43ff35f63e5f
Step 9/15 : WORKDIR  ${ROCKETMQ_HOME}
 ---> Using cache
 ---> 9ef1be398427
Step 10/15 : COPY build/rocketmq.zip ${ROCKETMQ_HOME}/rocketmq.zip
COPY failed: stat /var/lib/docker/tmp/docker-builder520479874/build/rocketmq.zip: no such file or directory
Error: failed to output build image docker.io/library/rocketmq-operator:v0.0.1-snapshot: (failed to exec []string{"docker", "build", "-f", "build/Dockerfile", "-t", "docker.io/library/rocketmq-operator:v0.0.1-snapshot", "."}: exit status 1)
  2. Please tell us about your environment:
    linux
  3. Other information (e.g. detailed explanation, logs, related issues, suggestions how to fix, etc):
    Why do we install the RocketMQ code in the operator image? I suggest removing the unused parts.

Incompatible store path and auto set name server IP list failed

BUG REPORT

  1. Please describe the issue you observed:
  1. The current master branch has an incompatible store data path.
  2. The name server IP list auto-set function is not usable currently.
  3. The ROCKETMQ_HOME env of the image is incorrect.
  4. The README and the example YAML files are inconsistent.
  • What did you do (The steps to reproduce)?
    Try to use the master branch code.

  • What did you expect to see?
    The above questions to be solved.

Cannot access rocketmq from outside the k8s

Hi, I have deployed rocketmq-operator, but RocketMQ cannot be accessed from outside k8s. I know that the broker's pod IP is virtual and can only be accessed inside the cluster. If we want to access RocketMQ from outside the k8s cluster, is there a good way to solve this problem?
The error information is shown in the attached screenshot (omitted here).

What is the difference between brokerName and brokerClusterName

BUG REPORT

  1. Please describe the issue you observed:
  • What did you do (The steps to reproduce)?

Why does brokerClusterName need to be suffixed with "brokerGroupIndex"? As I understand it, one cluster contains multiple brokers

  • What did you expect to see?
    brokerClusterName removes the brokerGroupIndex suffix

  • What did you see instead?
    brokerClusterName is suffixed with "brokerGroupIndex"

  2. Please tell us about your environment:
    operator:0.2.0

  3. Other information (e.g. detailed explanation, logs, related issues, suggestions how to fix, etc):

Metrics and Log Printing

BUG REPORT

1) First deploy the service:
$ kubectl apply -f rocketmq_v1alpha1_rocketmq_cluster.yaml -n dev

2) After a while, check the pod status:
$ kubectl get pods -n dev -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
broker-0-master-0 1/1 Running 0 35s 10.244.14.94 azure-k8s-5
broker-0-replica-1-0 1/1 Running 0 35s 10.244.15.169 azure-k8s-6
name-service-0 1/1 Running 0 35s 10.0.0.9 azure-k8s-5

[bug1]
We cannot guarantee that the name-server cluster must be started before the broker clusters, which may cause the broker to fail to start or restart.

3) At this point, we modify the spec.size of the broker from 1 to 2, and then view the cluster status:
$ kubectl get pods -n dev -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
broker-0-master-0 1/1 Running 0 3m43s 10.244.14.94 azure-k8s-5
broker-0-replica-1-0 1/1 Running 0 3m43s 10.244.15.169 azure-k8s-6
broker-1-master-0 1/1 Running 0 15s 10.244.11.106 azure-k8s-1
broker-1-replica-1-0 1/1 Running 0 15s 10.244.13.192 azure-k8s-4
name-service-0 1/1 Running 0 3m43s 10.0.0.9 azure-k8s-5

4) Then we modify the spec.size of the broker from 2 to 1, and then check the cluster status

$ kubectl get pods -n dev -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
broker-0-master-0 1/1 Running 0 4m35s 10.244.14.94 azure-k8s-5
broker-0-replica-1-0 1/1 Running 0 4m35s 10.244.15.169 azure-k8s-6
broker-1-master-0 1/1 Running 0 67s 10.244.11.106 azure-k8s-1
broker-1-replica-1-0 1/1 Running 0 67s 10.244.13.192 azure-k8s-4
name-service-0 1/1 Running 0 4m35s 10.0.0.9 azure-k8s-5

[bug2]
We found that the cluster did not delete the broker-1 related resources.
At this point we check the status of the broker:
$ kubectl get broker -n dev broker -oyaml
apiVersion: rocketmq.apache.org/v1alpha1
kind: Broker
metadata:
...
spec:
  allowRestart: true
  brokerImage: apacherocketmq/rocketmq-broker:4.5.0-alpine-operator-0.3.0
  env:
    - name: BROKER_MEM
      valueFrom:
        configMapKeyRef:
          key: BROKER_MEM
          name: broker-config
  hostPath: /tmp/data/rocketmq/broker
  imagePullPolicy: Always
  nameServers: ""
  replicaPerGroup: 1
  resources:
    ...
  scalePodName: broker-0-master-0
  size: 1
  storageMode: EmptyDir
  ...
  volumes:
    - configMap:
        items:
          - key: broker-common.conf
            path: broker-common.conf
        name: broker-config
      name: broker-config
status:
  nodes:
    - broker-0-master-0
    - broker-0-replica-1-0
    - broker-1-master-0
    - broker-1-replica-1-0
  size: 1

[bug3]
At this point, we find that broker.spec.size has become 1, and status has also become 1.

  1. At this point, we manually fix the broker's bug, and we manually delete the related statefulset.

  2. When we need to upgrade the size of the cluster from 1 to 2, we check the cluster status again. At this time, there is a chance that the broker fails to start, and the operator reports an error:
    [bug4]
    {"level":"error","ts":1600223697.3956034,"logger":"controller_broker","msg":"Failed to update Broker Size status.","Request.Namespace":"dev","Request.Name":"broker","error":"Operation cannot be fulfilled on brokers.rocketmq.apache.org "broker": the object has been modified; please apply your changes to the latest version and try again","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\trocketmq-operator/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/apache/rocketmq-operator/pkg/controller/broker.(*ReconcileBroker).Reconcile\n\trocketmq-operator/pkg/controller/broker/broker_controller.go:286\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\trocketmq-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:215\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\trocketmq-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\trocketmq-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\trocketmq-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\trocketmq-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}

Then we can see that the content of RocketMQ's /root/store/config/subscriptionGroup.json and subscriptionGroup.json.bak is all . Then we can check the corresponding error log via the directory on the host machine:

com.alibaba.fastjson.JSONException: syntax error, expect {, actual error, pos 0, fastjson-version 1.2.51
at com.alibaba.fastjson.parser.deserializer.JavaBeanDeserializer.deserialze(JavaBeanDeserializer.java:474) ~[fastjson-1.2.51.jar:na]
at com.alibaba.fastjson.parser.deserializer.JavaBeanDeserializer.deserialze(JavaBeanDeserializer.java:273) ~[fastjson-1.2.51.jar:na]
at com.alibaba.fastjson.parser.DefaultJSONParser.parseObject(DefaultJSONParser.java:669) ~[fastjson-1.2.51.jar:na]
at com.alibaba.fastjson.JSON.parseObject(JSON.java:368) ~[fastjson-1.2.51.jar:na]
at com.alibaba.fastjson.JSON.parseObject(JSON.java:272) ~[fastjson-1.2.51.jar:na]
at com.alibaba.fastjson.JSON.parseObject(JSON.java:491) ~[fastjson-1.2.51.jar:na]
at org.apache.rocketmq.remoting.protocol.RemotingSerializable.fromJson(RemotingSerializable.java:43) ~[rocketmq-remoting-4.5.0.jar:4.5.0]
at org.apache.rocketmq.broker.subscription.SubscriptionGroupManager.decode(SubscriptionGroupManager.java:152) ~[rocketmq-broker-4.5.0.jar:4.5.0]
at org.apache.rocketmq.common.ConfigManager.load(ConfigManager.java:38) ~[rocketmq-common-4.5.0.jar:4.5.0]
at org.apache.rocketmq.broker.BrokerController.initialize(BrokerController.java:233) [rocketmq-broker-4.5.0.jar:4.5.0]
at org.apache.rocketmq.broker.BrokerStartup.createBrokerController(BrokerStartup.java:218) [rocketmq-broker-4.5.0.jar:4.5.0]
at org.apache.rocketmq.broker.BrokerStartup.main(BrokerStartup.java:58) [rocketmq-broker-4.5.0.jar:4.5.0]
2020-09-15 13:43:10 ERROR main - load /root/store/config/subscriptionGroup.json Failed
com.alibaba.fastjson.JSONException: syntax error, expect {, actual error, pos 0, fastjson-version 1.2.51
at com.alibaba.fastjson.parser.deserializer.JavaBeanDeserializer.deserialze(JavaBeanDeserializer.java:474) ~[fastjson-1.2.51.jar:na]
at com.alibaba.fastjson.parser.deserializer.JavaBeanDeserializer.deserialze(JavaBeanDeserializer.java:273) ~[fastjson-1.2.51.jar:na]
at com.alibaba.fastjson.parser.DefaultJSONParser.parseObject(DefaultJSONParser.java:669) ~[fastjson-1.2.51.jar:na]
at com.alibaba.fastjson.JSON.parseObject(JSON.java:368) ~[fastjson-1.2.51.jar:na]
at com.alibaba.fastjson.JSON.parseObject(JSON.java:272) ~[fastjson-1.2.51.jar:na]
at com.alibaba.fastjson.JSON.parseObject(JSON.java:491) ~[fastjson-1.2.51.jar:na]
at org.apache.rocketmq.remoting.protocol.RemotingSerializable.fromJson(RemotingSerializable.java:43) ~[rocketmq-remoting-4.5.0.jar:4.5.0]
at org.apache.rocketmq.broker.subscription.SubscriptionGroupManager.decode(SubscriptionGroupManager.java:152) ~[rocketmq-broker-4.5.0.jar:4.5.0]
at org.apache.rocketmq.common.ConfigManager.loadBak(ConfigManager.java:56) [rocketmq-common-4.5.0.jar:4.5.0]
at org.apache.rocketmq.common.ConfigManager.load(ConfigManager.java:44) [rocketmq-common-4.5.0.jar:4.5.0]
at org.apache.rocketmq.broker.BrokerController.initialize(BrokerController.java:233) [rocketmq-broker-4.5.0.jar:4.5.0]
at org.apache.rocketmq.broker.BrokerStartup.createBrokerController(BrokerStartup.java:218) [rocketmq-broker-4.5.0.jar:4.5.0]
at org.apache.rocketmq.broker.BrokerStartup.main(BrokerStartup.java:58) [rocketmq-broker-4.5.0.jar:4.5.0]
2020-09-15 13:43:10 INFO main - Try to shutdown service thread:PullRequestHoldService started:false lastThread:null

FEATURE REQUEST

  1. The RocketMQ logs need to be printed to both the console and the log directory, so that it is convenient to collect logs and locate error information. For example, when RocketMQ failed to start (e.g. when initialization failed), kubectl logs -f podname did not display any error message.

  2. RocketMQ metrics support.

  3. We are currently trying to use this operator to deploy RocketMQ. For the above issues and bugs, we can submit code.

  4. When will affinity/anti-affinity/taint support be released?

why use ip for nameserver

We can see that there are 1 name service Pods running on 1 nodes and their IP addresses. Modify the nameServers field in the rocketmq_v1alpha1_broker_cr.yaml file using the IP addresses.

DeepCopy objects missing

BUG REPORT

  1. Please describe the issue you observed:
    I was trying to run the operator locally using operator-sdk run --local; because the deepcopy objects are not present in master, I was not able to run it locally.
    Here is the error log:
operator-sdk run --local
INFO[0000] Running the operator locally in namespace default. 
# github.com/apache/rocketmq-operator/pkg/apis/rocketmq/v1alpha1
pkg/apis/rocketmq/v1alpha1/broker_types.go:90:25: cannot use &Broker literal (type *Broker) as type runtime.Object in argument to SchemeBuilder.Register:
        *Broker does not implement runtime.Object (missing DeepCopyObject method)
pkg/apis/rocketmq/v1alpha1/broker_types.go:90:36: cannot use &BrokerList literal (type *BrokerList) as type runtime.Object in argument to SchemeBuilder.Register:
        *BrokerList does not implement runtime.Object (missing DeepCopyObject method)
pkg/apis/rocketmq/v1alpha1/nameservice_types.go:81:25: cannot use &NameService literal (type *NameService) as type runtime.Object in argument to SchemeBuilder.Register:
        *NameService does not implement runtime.Object (missing DeepCopyObject method)
pkg/apis/rocketmq/v1alpha1/nameservice_types.go:81:41: cannot use &NameServiceList literal (type *NameServiceList) as type runtime.Object in argument to SchemeBuilder.Register:
        *NameServiceList does not implement runtime.Object (missing DeepCopyObject method)
pkg/apis/rocketmq/v1alpha1/topictransfer_types.go:73:25: cannot use &TopicTransfer literal (type *TopicTransfer) as type runtime.Object in argument to SchemeBuilder.Register:
        *TopicTransfer does not implement runtime.Object (missing DeepCopyObject method)
pkg/apis/rocketmq/v1alpha1/topictransfer_types.go:73:43: cannot use &TopicTransferList literal (type *TopicTransferList) as type runtime.Object in argument to SchemeBuilder.Register:
        *TopicTransferList does not implement runtime.Object (missing DeepCopyObject method)
  • What did you see instead?
  • After running operator-sdk generate k8s, I was able to generate the deepcopy objects and run it locally.
  2. Please tell us about your environment:
operator-sdk version: "v0.16.0", commit: "55f1446c5f472e7d8e308dcdf36d0d7fc44fc4fd", go version: "go1.13.8 linux/amd64"

FEATURE REQUEST

  1. Please describe the feature you are requesting.
  • Deepcopy objects should be present in the repo itself. Is there any specific reason for not including them in the repo?

Save the configuration of the broker cluster and jvm in the ConfigMap

FEATURE REQUEST

  1. Please describe the feature you are requesting.
    Save the configuration of the broker cluster and jvm in the ConfigMap

  2. Provide any additional detail on your proposed use case for this feature.

  3. Indicate the importance of this issue to you (blocker, must-have, should-have, nice-to-have). Are you currently using any workarounds to address this issue?
    The current configuration information is stored in the image, so we have to rebuild the image every time we modify the configuration. Saving the configuration in a ConfigMap is a better choice.

  4. If there are some sub-tasks using -[] for each subtask and create a corresponding issue to map to the sub task:

Broker does not support initiating registration to a newly created nameserver

Question

We use rocketmq-operator to deploy RocketMQ in k8s and found that when the name server cluster is up-scaled, the brokers restart one by one.
This puts extra pressure on the cluster, and a restart may fail.

I checked the broker code; it seems the broker gets the name server addresses at startup and does not support refreshing the name server address list automatically when a name server pod is added, so the operator has to restart all brokers. Is this right?

FEATURE REQUEST

The broker should support registering with a new name server pod without needing to restart.
For example:
1. The broker provides an API to refresh the name server address list.
2. rocketmq-operator pushes the new name server address to the broker API when a new nameserver pod is added.

NameService Env settings do not take effect

I want to limit the JVM of the NameService, but setting env does not work.
I added the following to the NameService YAML:

  env:
    - name: "Xms"
      value: "512m"
    - name: "Xmx"
      value: "512m"
    - name: "Xmn"
      value: "256m"
However, the name server process still starts with the default heap settings rather than the configured values:

root     24959 24945  0 15:40 ?        00:00:00 sh /root/rocketmq/nameserver/bin/runserver.sh org.apache.rocketmq.namesrv.NamesrvStartup
root     24980 24959  0 15:40 ?        00:00:04 /usr/lib/jvm/java-1.8-openjdk/bin/java -server -Xms6013M -Xmx6013M -Xmn1000M -XX:+UseConcMarkSweepGC -XX:+UseCMSCompactAtFullCollection -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:SoftRefLRUPolicyMSPerMB=0 -XX:+CMSClassUnloadingEnabled -XX:SurvivorRatio=8 -XX:-UseParNewGC -verbose:gc -Xloggc:/dev/shm/rmq_srv_gc.log -XX:+PrintGCDetails -XX:-OmitStackTraceInFastThrow -XX:-UseLargePages -Djava.ext.dirs=/usr/lib/jvm/java-1.8-openjdk/jre/lib/ext:/root/rocketmq/nameserver/bin/../lib -cp .:/root/rocketmq/nameserver/bin/../conf: org.apache.rocketmq.namesrv.NamesrvStartup

add rocketmq api resource

FEATURE REQUEST

  1. rocketmq-operator needs to add a rocketmq resource, which is the parent resource of the broker, nameserver, transfer, and other resources. If we do this, the operator structure will be clearer and more controllable.

We can contribute this code.

how to use k8s.io/code-generator for generating rocketmq-operator apis

Run the following command:

/src/k8s.io/code-generator/generate-groups.sh all github.com/apache/rocketmq-operator/client github.com/apache/rocketmq-operator/pkg/apis "rocketmq:v1alpha1"

The generated result is shown in the screenshot (omitted here).

Problem:
The rocketmq-operator related operations such as List, Create, Get, etc. were not generated.

deepcopy.go is not generated

BUG REPORT

  1. Please describe the issue you observed:
    The operator does not generate deepcopy.go. Now running main.go reports an error:

github.com/apache/rocketmq-operator/pkg/apis/rocketmq/v1alpha1

....\pkg\apis\rocketmq\v1alpha1\broker_types.go:94:25: cannot use &Broker{} (type *Broker) as type runtime.Object in argument to SchemeBuilder.Register:
*Broker does not implement runtime.Object (missing DeepCopyObject method)

  • What did you do (The steps to reproduce)?

  • What did you expect to see?

  • What did you see instead?

  1. Please tell us about your environment:

  2. Other information (e.g. detailed explanation, logs, related issues, suggestions how to fix, etc):


create broker cluster failed

Executing this command:
kubectl apply -f example/rocketmq_v1alpha1_nameservice_cr.yaml

gives the error:
Failed to update Broker Size status.

I am confused.

operator manages multiple clusters

FEATURE REQUEST

  1. rocketmq-operator should be able to watch multiple namespaces at the same time, and thus operate multiple RocketMQ clusters at the same time.

For the above issue, we can submit code.

How to customize the rocketmq-dashboard configuration

I followed the installation instructions (https://segmentfault.com/a/1190000023862693) to install RocketMQ in k8s.
Now I want to change RocketMQ's default configuration to enable the console's password.
The operation (https://github.com/apache/rocketmq-externals/blob/master/rocketmq-console/doc/1_0_0/UserGuide_CN.md) is:
1. In the Spring configuration file resources/application.properties, enable the login function:

# enable the login function
rocketmq.config.loginRequired=true

# Dashboard data directory, where the login user configuration file is located
rocketmq.config.dataPath=/tmp/rocketmq-console/data

2. Make sure the directory defined by ${rocketmq.config.dataPath} exists, and create the login configuration file "users.properties" in that directory. If the file does not exist there, the default resources/users.properties file is used. The users.properties file format is:

# This file supports hot reloading: adding or modifying users does not require restarting the console
# Format: one user per line, username=password[,N]  # N is optional, 0 (ordinary user) or 1 (administrator)

# define an administrator
admin=admin,1

# define ordinary users
user1=user1
user2=user2

But I don't know how to find these two files and edit them. Should I inject them by adding a ConfigMap?

Add Support for Automatically Obtaining NameServer IP List

FEATURE REQUEST

  1. Please describe the feature you are requesting.
    Add support for automatically obtaining the name server IP list when creating a broker cluster.

  2. Provide any additional detail on your proposed use case for this feature.
    Users do not need to care about the IP addresses of the name server instances, so the deployment process is simpler and more convenient.

  3. Indicate the importance of this issue to you (blocker, must-have, should-have, nice-to-have). Are you currently using any workarounds to address this issue?
    should-have

deploy the RocketMQ broker clusters error

When I use the commands:
kubectl apply -f example/rocketmq_v1alpha1_broker_cr.yaml
kubectl get pods -owide
my cluster status is:
broker-0-master-0 0/1 CrashLoopBackOff 6 11m 10.244.1.7 k8s-node1
broker-0-replica-1-0 0/1 CrashLoopBackOff 6 11m 10.244.1.6 k8s-node1
curl-69c656fd45-94qbj 1/1 Running 1 107m 10.244.0.4 k8s-master
name-service-0 1/1 Running 0 12m 30.26.223.45 k8s-node1
rocketmq-operator-648b54b558-tgb72 1/1 Running 1 81m 10.244.1.2 k8s-node1

replica cannot restart automatically

I use example/rocketmq_v1alpha1_rocketmq_cluster.yaml to start up a RocketMQ cluster, but the replica pod cannot restart automatically. The error in broker.log is shown in the attached screenshot (omitted here). The subscriptionGroup.json and subscriptionGroup.json.bak files both have the same content, "". I found that this is because the replica pod executes the command 'echo "" > subscriptionGroup.json' (screenshot omitted).

BUG REPORT

  1. Please describe the issue you observed:
  • What did you do (The steps to reproduce)?
    Use the operator to start up a RocketMQ cluster (with size 2), and kill one replica pod.

  • What did you expect to see?
    The replica pod can start up successfully.

  • What did you see instead?
    The replica pod keeps restarting again and again.

  2. Please tell us about your environment:
    operator version: the latest master branch.
    kubernetes version: v1.19.10

Sometimes we need the nodeAffinity/podAffinity feature to determine which nodes the pods are scheduled on.

FEATURE REQUEST

  1. Please describe the feature you are requesting.
    Sometimes users require that broker pods be scheduled to particular nodes,
    so it would be better to support the affinity feature.

  2. Provide any additional detail on your proposed use case for this feature.
    For example, we have 3 nodes (node1, node2, node3) and we want to constrain the broker pods to node1 and node2.

  3. Indicate the importance of this issue to you (blocker, must-have, should-have, nice-to-have). Are you currently using any workarounds to address this issue?
    Nice to have.

  4. If there are some sub-tasks using -[] for each subtask and create a corresponding issue to map to the sub task:

broker OOMKilled

First, here is my configuration (screenshots omitted).

The nameservice can start, but the broker cannot. The logs are shown in the screenshot (omitted here).

The output of kubectl describe is as follows (screenshots omitted).

I don't know how to handle this. Thanks.

Hope to support access from clients outside the K8S cluster

Sometimes the client is not deployed on K8S, or the client and RocketMQ are deployed in different K8S clusters.

Currently the address that RocketMQ registers with the NameServer is the Pod IP, so it cannot be accessed from outside the cluster. We hope this can be supported.

Add k8s Inner Cluster IP Option for NameServer

FEATURE REQUEST

  1. Please describe the feature you are requesting.
    Add a k8s in-cluster IP option for the name server.

  2. Provide any additional detail on your proposed use case for this feature.
    The name server currently uses the host IP, so the number of name server instances cannot exceed the number of nodes. In some cases the user does not need to access the RocketMQ cluster from outside the k8s cluster, or they can use an ingress, etc. to access the RocketMQ service, so it is unnecessary to use the host IP for the name server.

  3. Indicate the importance of this issue to you (blocker, must-have, should-have, nice-to-have). Are you currently using any workarounds to address this issue?
    should-have
